Deploy an Agent System in a Container
Once you have packaged and deployed your agent system inside a Docker container (https://docs.composabl.com/deploy-agents/deploy-an-agent-in-a-container), the next step is accessing its runtime for operations like model inference. This tutorial will guide you through the process of building and running the Docker container and then connecting to the agent system's runtime for further interactions.
To deploy the agent system to Docker, we need to first create an image from the Dockerfile (https://docs.composabl.com/deploy-agents/deploy-an-agent-in-a-container). The Dockerfile will package the necessary runtime, model, and environment for the agent system.
Building the Image: You can build the Docker image by running the following command in the terminal. This will take the Dockerfile and the associated files (like the pre-trained model) and create an image.
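The exact command depends on your project layout; assuming the Dockerfile sits in the current directory, a typical invocation looks like this:

```shell
# Build the image from the Dockerfile in the current directory (".").
# Run this from the directory that contains the Dockerfile and model files.
docker build -t composabl_agent_api .
```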
The -t flag tags the image (composabl_agent_api), which makes it easier to reference later.
Make sure that the model file (agent.json) and all relevant scripts are reachable within the Docker build context (i.e., the directory from which you are building).
Checking the Image: Once the build is complete, you can verify that the image was created successfully by running:
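Listing your local images will show whether the build succeeded:

```shell
# List local images; composabl_agent_api should appear in the output
# with a recent CREATED timestamp.
docker images
```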
Now that the image is built, the next step is to run it in a container. You will run the Docker container in an interactive mode to access the runtime.
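A run command using the flags explained below might look like this (replace <your_license> with your actual Composabl license key):

```shell
# Start the container interactively, expose the runtime's HTTP server
# on port 8000, and pass in your Composabl license.
docker run -it -p 8000:8000 -e COMPOSABL_LICENSE="<your_license>" composabl_agent_api
```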
-it: Runs the container interactively.
-p 8000:8000: Maps port 8000 in the container to port 8000 on your local machine so that you can access the HTTP server for the agent system runtime.
-e COMPOSABL_LICENSE="<your_license>": Sets the environment variable that holds your Composabl license key.
The HTTP server should now be up and running within the container, ready to handle model inference or other tasks.
With the Docker container running, you can now connect to the agent system's runtime, which is exposed as an HTTP server. You can access it through a POST request for model inference or other operations.
Sending Requests to the Agent System: You can send a POST request to the running server using a tool like curl, Postman, or any Python HTTP library (such as requests).
Here’s an example using curl:
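The endpoint and port below come from this tutorial's setup; the field names in the JSON payload are illustrative placeholders, so substitute the sensor values your agent system actually expects:

```shell
# POST an observation to the agent system's /predict endpoint.
# "sensor_1" and "sensor_2" are example field names only.
curl -X POST http://localhost:8000/predict \
  -H "Content-Type: application/json" \
  -d '{"observation": {"sensor_1": 0.5, "sensor_2": 1.2}}'
```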
This request will POST data to the /predict endpoint on localhost:8000, which is forwarded from the Docker container.
The agent system will handle the request, run model inference, and return the resulting action.
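The same request can be sent from Python. This sketch uses only the standard library (the requests package works the same way); the /predict endpoint matches the curl example above, while the observation field names are placeholders for your agent system's actual sensors:

```python
import json
import urllib.request


def predict(observation, url="http://localhost:8000/predict"):
    """Send an observation to the agent system runtime and return the action.

    The payload shape is illustrative -- replace the observation dict with
    the sensor values your agent system actually expects.
    """
    data = json.dumps({"observation": observation}).encode("utf-8")
    request = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))


# Example call (requires the container from the previous step to be running):
# action = predict({"sensor_1": 0.5, "sensor_2": 1.2})
```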
In this tutorial, we walked through the process of:
Building a Docker image with your agent system and its runtime.
Running the Docker container interactively to expose the agent system’s HTTP server.
Accessing the agent system runtime by sending HTTP requests for inference or other tasks.
By following these steps, you can deploy and interact with your Composabl agent system in a Dockerized environment.