Add a Perception Layer
Adding perception modules to your agent can provide richer, more condensed, and more nuanced information to the decision-making parts of the agent system. For example, you might include a computer vision model in your perception layer that takes in images or video from a camera and outputs classifications of the objects it identifies. You can also add large language models as perceptors to take in and interpret information in natural language.
Each module in the perception layer of a Composabl agent system takes in the sensor variables, processes them in some way, and outputs one or more new variables that the platform automatically adds to the list of sensors.
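To illustrate the pattern, here is a minimal sketch of a perceptor that derives one new variable from two existing sensors. This is plain Python showing the concept only, not the exact Composabl SDK interface: the class shape, the `compute` method, and the sensor names (`temperature`, `pressure`) are all illustrative assumptions.

```python
# A minimal sketch of the perceptor pattern: read sensor variables,
# compute something new, and return it as one or more named variables.
# The class shape, method name, and sensor names are illustrative
# assumptions, not the exact Composabl SDK interface.

class ThermalStressPerceptor:
    """Derives a 'thermal_stress' variable from raw temperature and pressure sensors."""

    def compute(self, sensors: dict) -> dict:
        # 'sensors' maps sensor names to current values, e.g.
        # {"temperature": 310.2, "pressure": 1.4}
        temperature = sensors["temperature"]
        pressure = sensors["pressure"]

        # Any Python logic can go here; this toy formula is purely illustrative.
        thermal_stress = (temperature - 273.15) * pressure

        # The returned variables are added to the sensor list,
        # so the skills layer can use them in decision-making.
        return {"thermal_stress": thermal_stress}
```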
Perceptors can use any supported Python function or library to calculate outputs. They can even call machine learning and large language models or their APIs.
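For example, a perceptor can wrap a pre-trained machine learning model and publish its prediction as a new sensor variable. The sketch below assumes a scikit-learn-style model serialized with joblib; the model path, feature names, and output variable name are hypothetical, and the class shape is the same illustrative pattern as above rather than the SDK's actual interface.

```python
import joblib  # assumes a scikit-learn-style model serialized with joblib


class DemandForecastPerceptor:
    """Wraps a pre-trained regression model to emit a 'demand_forecast' variable.

    The model path, feature names, and output name below are illustrative
    assumptions, not part of the Composabl SDK.
    """

    def __init__(self, model_path: str = "demand_model.joblib"):
        # Load the trained model once, when the perceptor is created.
        self.model = joblib.load(model_path)

    def compute(self, sensors: dict) -> dict:
        # Assemble the model's feature vector from the incoming sensor variables.
        features = [[sensors["day_of_week"], sensors["hour"], sensors["inventory"]]]
        prediction = self.model.predict(features)[0]

        # The prediction becomes a new variable available to the skills layer.
        return {"demand_forecast": float(prediction)}
```

The same pattern applies to calling a large language model's API inside `compute`: the perceptor sends the relevant sensor values to the model and returns its interpretation as one or more new variables.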
The next three pages explain how to use the SDK and CLI workflow to create new perceptors or configure existing models as perceptors to use in Composabl agent systems.
Just like skill agents, perceptors can be dragged and dropped into agent systems using the UI. Perceptors always sit in the perception layer, which comes before the orchestrators and skill agents. That's because perception must be applied to the sensor inputs to create new variables, which are then passed to the skills layer for the agent system to use in decision-making.