Configure an LLM Model as a Perceptor

You can use an LLM as a perceptor to add language and communication capabilities to your agent system.

This allows you to create human-like assistants or copilots that contribute natural language capabilities to your agent system. Composabl offers several personas for LLM assistants to help structure your agent design:

  • The analyst interprets sensor data and passes it to an interface that the user can access, allowing real-time monitoring of conditions and the agent system's responses.

  • The executive reads external data sources in text and reports information to the agent system, such as trends in the business press that would help to anticipate demand for a product.

  • The plant manager allows operators to communicate directly with the agent system and gives it instructions based on information that would not otherwise be available in its sensor space.

LLM perceptors can either:

  1. Output language to the operator about what the agent system is doing (e.g., the analyst)

  2. Take in natural language inputs and transform them into information that the decision-making layer of the agent can use (e.g., the executive and plant manager)

Create an LLM Perceptor

Step 1: Create the perceptor shell

From the CLI, while logged in to Composabl, run the command below. You will be prompted for a location to save your new perceptor, and a new directory containing your perceptor will be created.
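
composabl perceptor new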

The new directory will include a pyproject.toml file, which allows you to publish the perceptor to the UI once it is created.

Step 2: Define the perceptor class

Within the perceptor.py file, create the API call and prompt for the LLM.

Analyst Perceptor Code Sample

The analyst displays information to the human user, but doesn't send information to the decision-making layer of the agent, so the perceptor returns 0.

from fake_llm import llm_client
from fake_factory_console import factory_console_client

from composabl_core import PerceptorImpl

class AnalystPerceptor(PerceptorImpl):
    """
    The analyst persona: displays information to the human operators but doesn't send any information to the agent.
    """
    def __init__(self, *args, **kwargs):
        # Set up clients for the LLM and the operator-facing factory console
        self.llm_client = llm_client()
        self.factory_console_client = factory_console_client()

    async def compute(self, obs_spec, obs):
        # First, ask the LLM for its thoughts on the current state of the plant
        llm_response = self.llm_client.ask(f"You are controlling a CSTR plant, the current state of the plant is {obs}. What are your thoughts on the current state of the plant?")

        # Second, post the LLM's thoughts to the factory console for a human to read
        self.factory_console_client.post(f"The LLM thoughts on the current state of the plant are: {llm_response}")

        # The analyst doesn't feed the decision-making layer, so it returns 0
        return {"analyst_llm": 0}

Executive Code Sample

This executive sample, from the industrial mixer use case, automatically queries a chemical engineering LLM for advice about control actions to take.

The perceptor returns an action that it recommends to the decision-making layer of the skill agent. This becomes a new sensor variable that the skill agent teacher(s) will take into account when training the agent system in simulation.

from fake_llm import llm_client
from composabl_core import PerceptorImpl

class ChemicalEngineerPerceptor(PerceptorImpl):
    """
    The executive persona: queries an LLM for a recommended control action and passes it to the agent as a new sensor variable.
    """
    def __init__(self, *args, **kwargs):
        self.llm_client = llm_client()

    async def compute(self, obs_spec, obs):
        """
        Asks the LLM for its thoughts on the current state of the plant, and returns a recommended action.
        """
        llm_response = self.llm_client.ask(f"You are controlling a CSTR plant, the current state of the plant is {obs}. What action do you recommend?")

        # Extract the recommended action from the LLM's text response.
        # Placeholder parsing: this assumes the response ends with a numeric
        # value after "action:"; adapt it to your LLM's actual output format.
        llm_action = float(llm_response.split("action:")[-1].strip())

        return {"chemical_engineer_llm": llm_action}

Examples and Reference

See full code samples and more examples.

Step 3: Filter the Sensor Space

Composabl agent systems can include text fields in perceptors, but they must be transformed or filtered out in the teacher.py file before training with DRL. For any text variables that are not transformed into a different data type, use the filtered_sensor_space method of the teacher to remove them.
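
For example, here is a minimal sketch of a teacher that filters out raw LLM text fields. The base class import, the exact filtered_sensor_space signature, and the sensor names are illustrative assumptions; check the SDK Reference for the interface in your SDK version.

from composabl_core import SkillTeacher  # assumed import; see the SDK Reference

class MixerTeacher(SkillTeacher):
    async def filtered_sensor_space(self):
        # Return only the sensor names the skill agent should train on.
        # Raw text fields from LLM perceptors are omitted here, because DRL
        # can't train on untransformed text; numeric perceptor outputs such
        # as "chemical_engineer_llm" can stay.
        return ["T", "Tc", "Ca", "Cref", "Tref", "chemical_engineer_llm"]

    # Other required teacher methods (compute_reward, compute_success_criteria,
    # etc.) are omitted for brevity.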

Step 4: Publish the Perceptor

Publish the perceptor to the UI. First, log in to Composabl:

composabl login

Navigate to the folder above your perceptor. Then publish your perceptor:

composabl perceptor publish foldername

Select the organization and project to add your perceptor to. Then refresh your Agent Orchestration Studio to see the perceptor and add it to agents.
