Add a Perception Layer

Adding perception modules to your agent system provides richer, more condensed, and more nuanced information to its decision-making components. For example, you might include a computer vision model in your perception layer that takes in images or video from a camera and outputs classifications of the objects it identifies. You can also add large language models as perceptors to take in and interpret information in natural language.

Each module in the perception layer of a Composabl agent system takes in the sensor variables, processes them in some way, and outputs one or more new variables that the platform automatically adds to the list of sensors.
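
For instance, a simple perceptor might turn a raw temperature reading into a rate-of-change variable. The sketch below is purely illustrative: the class name, method signature, and return format are assumptions, and the actual perceptor interface is documented in the SDK Reference and on the following pages.

```python
# Illustrative sketch only -- the real perceptor interface is defined in the
# Composabl SDK Reference; the names and signatures here are assumptions.

class TemperatureTrendPerceptor:
    """Derives a rate-of-change variable from a raw temperature sensor."""

    def __init__(self):
        self.previous_temperature = None

    def compute(self, obs):
        # obs: dict of current sensor values, e.g. {"temperature": 72.5}
        current = obs["temperature"]
        if self.previous_temperature is None:
            delta = 0.0
        else:
            delta = current - self.previous_temperature
        self.previous_temperature = current
        # The returned dict represents the new variable(s) that the
        # platform would append to the sensor list.
        return {"temperature_delta": delta}
```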

Perceptors can use any supported Python function or library to calculate outputs. They can even call machine learning and large language models or their APIs.
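
As an example of that, a perceptor could wrap a pre-trained machine learning model and expose its prediction as a new sensor variable. The sketch below is again illustrative rather than the SDK's actual interface: the model file, the sensor names, and the compute method are assumptions.

```python
# Illustrative sketch only -- wraps a pre-trained model inside a perceptor.
# The model path, sensor names, and interface are assumptions.
import pickle

class AnomalyPerceptor:
    """Runs a pre-trained classifier over raw sensor values and exposes
    the prediction as a new sensor variable."""

    def __init__(self, model_path="anomaly_model.pkl"):  # hypothetical file
        # Load a previously trained model (e.g. scikit-learn) from disk.
        with open(model_path, "rb") as f:
            self.model = pickle.load(f)

    def compute(self, obs):
        # "pressure" and "flow_rate" are hypothetical sensor names.
        features = [[obs["pressure"], obs["flow_rate"]]]
        is_anomaly = int(self.model.predict(features)[0])
        return {"anomaly_detected": is_anomaly}
```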

The next three pages explain how to use the SDK and CLI workflow to create new perceptors or configure existing models as perceptors to use in Composabl agent systems.

Add Perceptors to Agent Systems

Just like skill agents, perceptors can be dragged and dropped into agent systems using the UI. Perceptors always sit in the perception layer, which comes before orchestrators and skill agents: perception is applied to the sensor inputs first, and the new variables it creates are then passed to the skills layer for the agent system to use in decision-making.