Deploy an Agent in a Container
Once you have packaged and deployed your agent inside a Docker container (https://docs.composabl.com/deploy-agents/deploy-an-agent-in-a-container), the next step is accessing its runtime for operations like model inference. This tutorial walks through building and running the Docker container, then connecting to the agent's runtime for further interaction.
To deploy the agent to Docker, we need to first create an image from the Dockerfile (https://docs.composabl.com/deploy-agents/deploy-an-agent-in-a-container). The Dockerfile will package the necessary runtime, model, and environment for the agent.
Building the Image: Build the Docker image by running the following command in the terminal. This takes the Dockerfile and the associated files (like the pre-trained model) and creates an image.
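A minimal build command, assuming the Dockerfile sits in the current directory and using the image tag described below:

```shell
# Build the image from the Dockerfile in the current directory (.)
# and tag it as composabl_agent_api for easy reference later.
docker build -t composabl_agent_api .
```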
The `-t` flag allows you to tag the image (`composabl_agent_api`), which makes it easier to reference later.
Make sure that the model file (`agent.json`) and all relevant scripts are reachable within the Docker build context (i.e., the directory from which you are building).
Checking the Image: Once the build is complete, you can verify that the image was created successfully by running:
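One way to verify the build, filtering the image list by the tag used above:

```shell
# List local images matching the tag; the new image should appear
# with its IMAGE ID, size, and creation time.
docker images composabl_agent_api
```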
Now that the image is built, the next step is to run it in a container. You will run the Docker container in an interactive mode to access the runtime.
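Combining the flags explained below, the run command might look like this (replace `<your_license>` with your actual license key):

```shell
# Start the container interactively, publish the runtime's HTTP port,
# and pass the Composabl license in as an environment variable.
docker run -it -p 8000:8000 -e COMPOSABL_LICENSE="<your_license>" composabl_agent_api
```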
`-it`: Runs the container interactively.
`-p 8000:8000`: Maps port 8000 in the container to port 8000 on your local machine so that you can access the HTTP server for the agent runtime.
`-e COMPOSABL_LICENSE="<your_license>"`: Sets the environment variable that holds your Composabl license key.
The HTTP server should now be up and running within the container, ready to handle model inference or other tasks.
With the Docker container running, you can now connect to the agent's runtime. The runtime exposes an HTTP server, which you can reach with a POST request for model inference or other operations.
Sending Requests to the Agent: You can send a POST request to the running server using a tool like `curl`, Postman, or any Python HTTP library (such as `requests`).
Here’s an example using `curl`:
This request will POST data to the `/predict` endpoint on `localhost:8000`, which is forwarded into the Docker container.
The agent will handle the request, run model inference, and return the resulting action.
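The same request can be made from Python. A minimal sketch using only the standard library (the `/predict` endpoint and port come from the steps above; the payload shape is again an assumption):

```python
import json
import urllib.request


def predict(observation, url="http://localhost:8000/predict"):
    """POST an observation to the agent runtime and return the decoded JSON response."""
    payload = json.dumps({"observation": observation}).encode("utf-8")
    request = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())


# Usage (requires the container from the previous step to be running):
#   action = predict([0.0, 1.0])
```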
In this tutorial, we walked through the process of:
Building a Docker image with your agent and its runtime.
Running the Docker container interactively to expose the agent’s HTTP server.
Accessing the agent runtime by sending HTTP requests for inference or other tasks.
By following these steps, you can deploy and interact with your Composabl agent in a Dockerized environment.