Add a Perception Layer
Adding perception modules to your agent can provide richer, more complex, condensed, and nuanced information to the decision-making parts of the agent. For example, you might include a computer vision model in your perception layer that takes in images or video from a camera and outputs classifications of the objects it identifies. You can also add large language models as perceptors to take in and interpret information in natural language.
Each module in the perception layer of a Composabl agent takes the sensor variables as input, processes them in some way, and outputs one or more new variables that the platform automatically adds to the list of sensors.
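As an illustration (not the exact SDK interface), you can think of a perceptor as a function that receives the current sensor values and returns one or more derived variables. The sensor and variable names below are hypothetical; a real agent would use the sensors defined in its own configuration.

```python
def temperature_trend_perceptor(sensors: dict) -> dict:
    """Turn two raw temperature readings into a single derived variable.

    All sensor names here are hypothetical placeholders.
    """
    # Positive values mean the process is heating up; negative means it is cooling.
    trend = sensors["temperature_now"] - sensors["temperature_previous"]
    # The returned variable is added to the agent's sensor list for downstream skills.
    return {"temperature_trend": trend}
```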
Perceptors can use any supported Python function or library to calculate outputs. They can even call machine learning and large language models or their APIs.
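For instance, a perceptor might wrap a pre-trained machine learning model. The sketch below is a hedged example that assumes a scikit-learn-style classifier saved with joblib and a hypothetical `camera_image` sensor; the model path, sensor name, and output labels are placeholders, not part of the Composabl API.

```python
import joblib  # assumed available; any model-loading library would work

# Hypothetical pre-trained classifier, loaded once at startup.
_classifier = joblib.load("models/object_classifier.joblib")

def vision_perceptor(sensors: dict) -> dict:
    """Classify the latest camera frame and expose the label as a new variable.

    `camera_image` is a hypothetical sensor name; substitute the image
    sensor your agent actually defines.
    """
    image = sensors["camera_image"]
    # A classic classifier expects a flat feature vector per sample.
    label = _classifier.predict([image.flatten()])[0]
    return {"detected_object": label}
```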
The next three pages explain how to use the SDK and CLI workflow to create new perceptors or configure existing models as perceptors to use in Composabl agents.
Just like skills, perceptors can be dragged and dropped into agents using the UI. Perceptors always sit in the perception layer, which comes before selectors and skills. That's because perception must be applied to the sensor inputs first, creating the new variables that are then passed to the skills layer for the agent to use in decision-making.