Adding perception modules to your agent can provide richer, more complex, condensed, and nuanced information to the decision-making parts of the agent. For example, you might include a computer vision model in your perception layer that inputs images or video from a camera and outputs classifications of objects that it identifies. You can also add large language models as perceptors to take in and interpret information in natural language.
Each module in the perception layer for a Composabl agent inputs the sensor variables, processes those variables in some way, and outputs one or more new variables that the platform will automatically add to the list of sensors.
Perceptors can use any supported Python function or library to calculate outputs. They can even call machine learning and large language models or their APIs.
The next three pages explain how to use the SDK and CLI workflow to create new perceptors or configure existing models as perceptors to use in Composabl agents.
Just like skills, perceptors can be dragged and dropped into agents using the UI. Perceptors will always be situated in the Perception layer that comes before selectors and skills. That’s because perception needs to be applied to the sensor inputs to create new variables that are then passed to the skills layer for the agent to use in decision-making.
Perceptors use the SDK and CLI workflow.
To access a template for a perceptor, type `composabl perceptor new` into the CLI. Composabl will then generate a perceptor template that you can populate with your information.
In this simple perceptor example we calculate the perceptor outputs that will be added as new sensor variables and we create a list of perceptors that comprise the perception layer.
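As an illustration, a minimal perceptor of this shape might look like the following sketch. The `compute()` signature, sensor names, and variable names here are assumptions for illustration; adapt them to the generated template and your agent's actual sensor space.

```python
class HeatIndexPerceptor:
    """Derives one condensed variable from two raw sensor readings."""

    def compute(self, obs_spec, obs):
        # Combine raw sensors into a new variable; Composabl adds the
        # returned keys to the agent's sensor list automatically.
        # "temperature" and "humidity" are illustrative sensor names.
        heat_index = 0.5 * obs["temperature"] + 0.5 * obs["humidity"]
        return {"heat_index": heat_index}

# The perception layer is the list of perceptors the agent will use.
perceptors = [HeatIndexPerceptor()]
```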
In this tutorial, we will walk through how to integrate a trained machine learning (ML) model into your Composabl agent as a Perceptor. A perceptor allows your agent to interpret data from sensors, process it using a machine learning model, and output new variables that will help the agent make better decisions.
The goal is to publish a pre-trained ML model as a perceptor that adds a new layer of perception to your agent, enabling it to process sensor data in a more advanced way. This could be useful in a variety of scenarios, such as predictive maintenance, anomaly detection, or autonomous decision-making.
A Perceptor in Composabl is a module in the perception layer that inputs sensor data, processes it (potentially using an ML model), and outputs new variables that are automatically added to the list of available sensors.
For this example, let’s assume we are building a perceptor that uses a trained machine learning model to predict thermal runaway in a system.
We will use a pre-trained ML model stored as a pickle file to predict thermal runaway based on certain temperature and chemical sensor readings. Here’s how to set up the trained ML model for use as a perceptor.
Store the ML Model: Assume the ML model has been trained and saved as a `.pkl` file. For this example, the model is stored at the path `ml_models/ml_predict_temperature.pkl`.
Load the ML Model in the Perceptor: In the perceptor class, we will load the model and define how it processes the sensor data.
Now, we’ll create the perceptor using the trained ML model to process the sensor data and predict thermal runaway events. The perceptor will be responsible for calling the model and returning the prediction as a new sensor variable.
We can start by creating the perceptor using the Composabl CLI with the following command:
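```
composabl perceptor new
```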
The new perceptor will have a file structure like the following (exact contents may vary by SDK version, but it includes the `pyproject.toml` and `perceptor.py` files used later in this tutorial):
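```
your-perceptor/
├── pyproject.toml
└── perceptor.py
```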
Here’s a sketch of the Python code for the perceptor (the base class and the exact `compute()` signature should follow your generated template; the sensor names below are illustrative):
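```python
import pickle

class ThermalRunawayPerceptor:
    """Uses a pre-trained ML model to predict thermal runaway events."""

    def __init__(self):
        # Load the trained model from the pickle file.
        with open("ml_models/ml_predict_temperature.pkl", "rb") as f:
            self.model = pickle.load(f)

    def compute(self, obs_spec, obs):
        # Build the feature vector the model was trained on. The sensor
        # names here (T, Tc, Ca) are illustrative; use the names defined
        # in your agent's sensor space.
        features = [[obs["T"], obs["Tc"], obs["Ca"]]]
        prediction = self.model.predict(features)[0]
        # Return the prediction as a new sensor variable.
        return {"thermal_runaway_predict": float(prediction)}
```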
In this perceptor:
We load the trained machine learning model from a pickle file.
The `compute()` method takes in sensor data (e.g., temperature, chemical concentrations), processes it, and uses the ML model to predict whether a thermal runaway event will occur.
The perceptor outputs the prediction as a new sensor variable, `thermal_runaway_predict`.
Once the perceptor is defined, you can log in to the Composabl editor and add it to your agent.
In this tutorial, we covered how to publish a trained ML model as a perceptor in Composabl. This allows the agent to integrate more advanced decision-making by processing raw sensor data through a machine learning model and outputting predictions as new sensor variables. This method can be applied in various domains, such as predictive maintenance, anomaly detection, and control systems.
You can use an LLM as a perceptor to add language and communication capabilities to your agent.
This allows you to create human-like assistants or copilots who can contribute natural language capabilities to your agent. Composabl has several personas for LLM assistants to help structure your agent design.
The analyst interprets sensor data and passes it to an interface that the user can access, allowing real-time monitoring of conditions and the agent's responses.
The executive reads external data sources in text and reports information to the agent, such as trends in the business press that would help to anticipate demand for a product.
The plant manager allows operators to communicate directly with the agent and give it instructions based on information that would not otherwise be available in its sensor space.
LLM perceptors can either:
Output language to the operator about what the agent is doing (e.g., the analyst)
Take in inputs in natural language and then transform them into information that the decision-making layer of the agent can use (e.g., the executive and plant manager)
From the CLI, when logged into Composabl, type `composabl perceptor new`. You will be prompted for a location to save your new perceptor, and then a new directory with your perceptor will be created.
This will include the `pyproject.toml` file that will allow you to publish the perceptor to the UI once it is created.
Within the `perceptor.py` file, create the API call and prompt for the LLM.
The analyst displays information to the human user, but doesn't send information to the decision-making layer of the agent, so the perceptor returns 0.
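As a sketch, an analyst-style perceptor might look like the following. The OpenAI client is used here only as an example provider, and the `compute()` signature, model choice, and sensor handling are assumptions; adapt them to your generated template and your LLM of choice.

```python
import os
from openai import OpenAI

# Assumes OPENAI_API_KEY is set; any LLM provider works the same way.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

class AnalystPerceptor:
    """Analyst persona: explains current conditions to the operator."""

    def compute(self, obs_spec, obs):
        prompt = (
            "You are an analyst monitoring an industrial process. "
            f"Briefly summarize these sensor readings for an operator: {obs}"
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        # Surface the summary to the human user (log, dashboard, etc.).
        print(response.choices[0].message.content)
        # The analyst informs the operator only, so it passes nothing to
        # the decision-making layer and returns 0.
        return {"analyst": 0}
```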
In this sample, based on the industrial mixer use case, the executive automatically queries a chemical engineering LLM for advice about which control actions to take.
The perceptor returns an action that it recommends the decision-making layer of the skill to take. This becomes a new sensor variable that the skill teacher(s) will take into account when training the agent in simulation.
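A hedged sketch of that pattern, reusing the `client` from the analyst sketch above (the prompt, sensor names, and the deliberately simplified response parsing are all illustrative assumptions):

```python
class ExecutivePerceptor:
    """Executive persona: asks an LLM for a recommended control action."""

    def compute(self, obs_spec, obs):
        prompt = (
            "You are a chemical engineering advisor for an industrial "
            f"mixer. Given these sensor readings: {obs}, reply with only "
            "a number: the recommended coolant temperature adjustment."
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        # Unlike the analyst, the executive returns its recommendation as
        # a new sensor variable for the skills to use during training.
        recommendation = float(response.choices[0].message.content.strip())
        return {"recommended_action": recommendation}
```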
See full code samples and more examples.
Composabl agents can include text fields in perceptors, but they must be transformed or filtered out in the `teacher.py` file before training with DRL. For any text variables that are not transformed into a different data type, use the `filtered_sensor_space` method of the teacher to remove them.
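For example, a teacher might keep only the numeric sensors. This is a minimal sketch; check your generated `teacher.py` for the exact `filtered_sensor_space` signature, and note that the sensor names are illustrative:

```python
class MixerTeacher:
    # ...reward, success criteria, and other teacher methods omitted...

    def filtered_sensor_space(self):
        # Keep only numeric variables for DRL training; drop any raw
        # text output produced by LLM perceptors.
        return ["T", "Tc", "Ca", "thermal_runaway_predict"]
```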
Publish the perceptor to the UI.
Navigate to the folder above your perceptor. Then publish your perceptor.
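Assuming the publish subcommand mirrors `composabl perceptor new`, this step looks something like the following (the exact command is an assumption; run `composabl --help` to confirm the syntax for your CLI version):

```
# Assumed publish command; verify against your CLI version.
cd path/to/folder-above-your-perceptor
composabl perceptor publish
```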
Select the organization and project where you want to add your perceptor. Then refresh your Agent Builder Studio to see the perceptor and add it to agents.