Connect Runtime Container to Your Operation



Tutorial: Accessing the Agent System Runtime After Deploying to Docker

Once you have packaged and deployed your agent system inside a Docker container, the next step is accessing its runtime for operations like model inference. This tutorial will guide you through the process of building and running the Docker container and then connecting to the agent system's runtime for further interactions.


Step 1: Preparing the Dockerfile and Environment

To deploy the agent system to Docker, you first need to create a Dockerfile. The Dockerfile packages the runtime, model files, and environment that the agent system needs.

  1. Dockerfile Setup: Your Dockerfile should contain the following key components:

    • Base Image: Use a Python base image (or any base that supports the necessary libraries).

    • Copy Model Files: Copy the pre-trained model (e.g., .gz file) to the container.

    • Install Dependencies: Install any required Python libraries (such as an HTTP server library like aiohttp, plus any other packages the agent system needs).

Here’s an example Dockerfile:

# Use an official Python runtime as the base image
FROM python:3.10-slim

# Set the working directory
WORKDIR /usr/src/app

# Copy the necessary files into the Docker image
COPY . .

# Install any dependencies specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Expose port 8000 for the HTTP server
EXPOSE 8000

# Command to run the server when the container starts
CMD ["python", "agent_inference.py"]

Step 2: Building the Docker Image

  1. Building the Image: You can build the Docker image by running the following command in the terminal. This will take the Dockerfile and the associated files (like the pre-trained model) and create an image.

docker build -t my-agent-runtime .
  • The -t flag allows you to tag the image (my-agent-runtime), which makes it easier to reference later.

  • Make sure that the model file (agent.gz) and all relevant scripts are included in the Docker build context (i.e., the directory from which you are building).

  2. Checking the Image: Once the build is complete, you can verify that the image was created successfully by running:

docker images

Step 3: Running the Docker Container

Now that the image is built, the next step is to run it as a container. Running the container in interactive mode lets you watch the runtime's output directly.

docker run -it -p 8000:8000 my-agent-runtime
  • -it: Runs the container interactively.

  • -p 8000:8000: Maps port 8000 from the container to port 8000 on your local machine so that you can access the HTTP server for the agent system runtime.

The HTTP server should now be up and running within the container, ready to handle model inference or other tasks.
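
If you don't need to stay attached to the container, you can instead run it detached and give it a name, which makes it easier to reference in later commands (for example, when checking logs in Step 6). This optional variant assumes the same image name as above:

docker run -d --name my-agent -p 8000:8000 my-agent-runtime

The -d flag runs the container in the background, and --name my-agent lets you use the name my-agent instead of a container ID with commands like docker logs and docker exec.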


Step 4: Accessing the Agent System Runtime

With the Docker container running, you can now connect to the agent system's runtime. The runtime is exposed as an HTTP server, which you can access through POST requests for model inference or other operations.

  1. Sending Requests to the Agent System: You can send a POST request to the running server using a tool like curl, Postman, or any Python HTTP library such as requests (a requests-based sketch appears at the end of this step).

Here’s an example using curl:

curl -X POST http://localhost:8000/infer -d '{"input_data": "your_input_here"}'

This request will:

  • POST data to the /infer endpoint on localhost:8000, which is being forwarded from the Docker container.

  • The agent system will handle the request, run inference with the model, and return the result.

  2. Interacting with the Agent System: If you prefer to interact with the agent system directly, you can also enter the running container and run commands.

docker exec -it <container_id> bash

This will open a shell inside the running Docker container, allowing you to execute any runtime commands manually.
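
If you'd rather call the endpoint from Python than with curl, here is a minimal sketch using the requests library. It assumes the same hypothetical /infer endpoint and JSON payload shown in the curl example above; adjust the URL and payload to match your agent system's actual interface.

# Minimal sketch: send an inference request to the containerized agent system.
# The /infer endpoint and payload shape are assumptions carried over from the
# curl example above; adjust them to your agent system's actual interface.
import requests

payload = {"input_data": "your_input_here"}
response = requests.post("http://localhost:8000/infer", json=payload, timeout=30)
response.raise_for_status()  # fail loudly on HTTP errors
print(response.json())       # print the agent system's result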


Step 5: Automating the Process

For convenience, you can automate building the Docker image and running the container by creating a short script.

Here’s a basic example of an automation script:

#!/bin/bash

# Build the Docker image
docker build -t my-agent-runtime .

# Run the Docker container
docker run -it -p 8000:8000 my-agent-runtime

Save this as run_agent.sh, and then execute it:

bash run_agent.sh

This script will:

  • Build the Docker image.

  • Run the container, mapping the necessary port and exposing the HTTP server for inference.
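
As a variation, you could also have the script run the container in the background and send a quick smoke-test request once the server is up. The sketch below reuses the detached --name pattern from Step 3 and the hypothetical /infer endpoint from Step 4; adjust the endpoint, payload, and wait time to your agent system.

#!/bin/bash
# Sketch: build the image, run the container detached, and smoke-test the endpoint.
set -e

docker build -t my-agent-runtime .
docker run -d --name my-agent -p 8000:8000 my-agent-runtime

# Give the HTTP server a moment to start, then send a test request.
# The /infer endpoint and payload are assumptions; adjust to your setup.
sleep 5
curl -X POST http://localhost:8000/infer -d '{"input_data": "your_input_here"}'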


Step 6: Troubleshooting and Debugging

If the container fails to start, or if the server doesn't respond, you can debug the container by checking the logs:

docker logs <container_id>

This command displays the output of the running container, which can help diagnose issues such as missing dependencies or server errors.
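
If you don't know the container ID, list the running containers first and then follow the log stream in real time:

# List running containers to find the container ID or name
docker ps

# Follow the logs as they are written (use the ID from docker ps, or the --name you assigned)
docker logs -f <container_id>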


Conclusion

In this tutorial, we walked through the process of:

  • Building a Docker image with your agent system and its runtime.

  • Running the Docker container interactively to expose the agent’s HTTP server.

  • Accessing the agent system runtime by sending HTTP requests for inference or other tasks.

By following these steps, you can deploy and interact with your Composabl agent system in a Dockerized environment.
