Connecting to Agent System Runtime and Plotting Results of Agent System Operations

In this tutorial, we will cover how to connect to the agent system runtime, load a pre-trained agent system, run inference, and visualize the results in a production-like environment. The provided script, agent_inference.py, demonstrates connecting to the Composabl agent system runtime, initializing the environment, and plotting the results of agent system operations.


Step 1: Understanding agent_inference.py

The script agent_inference.py connects to the runtime, loads a pre-trained agent system, connects to a local simulation, collects sensor data from the sim, and plots the results. Here is an outline of the core steps in the process:

  1. Start Runtime and Load Agent System: The script initializes the trainer and loads a pre-trained agent system from a model folder.

  2. Set Up the Simulation Environment: It connects to a simulation environment.

  3. Run Inference: The pre-trained agent system interacts with the simulation to perform inference (decisions), collecting observations and giving actions at each step.

  4. Collect Data and Plot Results: Sensor data and actions are collected in a Pandas DataFrame, and the results are plotted using Matplotlib to visualize how the agent system performs over time in a production-like environment.


Step 2: Connecting to the Runtime and Loading the Agent System

The first task is to connect to the Composabl runtime and load the pre-trained agent system. This is accomplished using the Trainer and Agent classes. The agent system's model is loaded from the directory where the model was saved during training.

# Imports assumed by this script (the exact import path for the
# SDK's make() helper may vary with your Composabl SDK version)
import pandas as pd
import matplotlib.pyplot as plt
from composabl import Agent, Scenario, Trainer

async def run_agent():
    # Start the runtime
    trainer = Trainer(config)

    # Load the pre-trained agent system from its checkpoint directory
    agent = Agent.load(PATH_CHECKPOINTS)

    # Package the agent system for inference
    trained_agent = await trainer._package(agent)

Here:

  • Trainer(config) initializes the runtime with a configuration file.

  • Agent.load(PATH_CHECKPOINTS) loads the saved agent from the specified checkpoint directory.

  • trainer._package(agent) prepares the agent for inference by packaging it.


Step 3: Connecting to the Simulation Environment

Next, we connect the agent system to the simulation environment. The make() function creates a connection to the local simulator, and the environment is initialized.

    # Inference
    print("Creating Environment")
    sim = make(
        run_id="run-benchmark",
        sim_id="sim-benchmark",
        env_id="sim",
        address="localhost:1337",
        env_init={},
        init_client=False
    )

    print("Initializing Environment")
    await sim.init()
    print("Initialized")

Here:

  • The simulator is configured to run locally at localhost:1337; you must start it manually before running this script.

  • The environment is initialized with sim.init(), and the agent system is connected to it.
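Because the simulator must already be listening on the address, the connection can fail if the script starts first. A small retry wrapper can make startup more robust. This is an illustrative sketch, not part of the Composabl API; `connect` here stands in for your own coroutine that calls make() and sim.init():

```python
import asyncio

async def connect_with_retry(connect, attempts=5, delay=2.0):
    """Call an async connect() coroutine, retrying on failure.

    `connect` is any zero-argument coroutine function, e.g. a wrapper
    around make(...) followed by sim.init(). Hypothetical helper, not
    part of the Composabl SDK.
    """
    for attempt in range(1, attempts + 1):
        try:
            return await connect()
        except (ConnectionError, OSError) as exc:
            if attempt == attempts:
                raise
            print(f"Connect attempt {attempt} failed ({exc}); retrying in {delay}s")
            await asyncio.sleep(delay)

# Example with a stub that fails twice before succeeding
calls = {"n": 0}

async def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("simulator not ready")
    return "connected"

result = asyncio.run(connect_with_retry(flaky_connect, attempts=5, delay=0.01))
print(result)  # connected
```

In the tutorial script the equivalent would be wrapping the make()/sim.init() calls, so a slow-starting simulator does not abort the run immediately.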


Step 4: Setting the Scenario and Running Inference

After connecting to the simulator, you need to set up the specific scenario that the agent system will operate in. This scenario determines the environment's initial state.

    # Set scenario
    noise = 0.0
    await sim.set_scenario(Scenario({
        "Cref_signal": "complete",
        "noise_percentage": noise
    }))

With the environment set, the agent can now run inference for a set number of iterations. At each iteration, the agent observes the environment, takes an action, and collects the results (observations and rewards). This is done in a loop.

    obs_history = []
    df = pd.DataFrame()
    print("Resetting Environment")
    obs, info = await sim.reset()
    obs_history.append(obs)
    action_history = []

    for i in range(90):
        action = await trained_agent._execute(obs)  # Get action from agent
        obs, reward, done, truncated, info = await sim.step(action)  # Step the environment

        # Create a one-row DataFrame for the current observation
        # (`sensors` is the agent system's sensor list, defined alongside the agent)
        df_temp = pd.DataFrame(columns=[s.name for s in sensors] + ['time'], data=[list(obs) + [i]])
        # Concatenate the new data to the existing DataFrame
        df = pd.concat([df, df_temp])

        obs_history.append(obs)
        action_history.append(action)

        if done:
            break

In each iteration:

  • The agent system performs an action based on the current observations.

  • The environment advances one step with sim.step(action), and the agent receives a new observation and reward.

  • Sensor data and actions are logged into a Pandas DataFrame for later analysis.
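Concatenating one row per step works, but pd.concat copies the whole DataFrame on every call. A common alternative is to accumulate rows in a plain list and build the DataFrame once after the loop. A minimal sketch, using made-up sensor names and stand-in observations in place of real simulator output:

```python
import pandas as pd

sensor_names = ["T", "Tc", "Ca", "Cref", "Tref"]  # illustrative sensor names

rows = []
for i in range(3):  # stand-in for the inference loop
    obs = [float(i)] * len(sensor_names)  # stand-in for the simulator observation
    rows.append(dict(zip(sensor_names, obs), time=i))

df = pd.DataFrame(rows)  # one allocation instead of one concat per step
print(df.shape)  # (3, 6)
```

For a 90-step loop the difference is negligible, but the list-of-rows pattern scales better to long inference runs.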


Step 5: Saving Data and Plotting Results

Once the inference loop is complete, the collected data is saved, and the results are visualized. The results are plotted using Matplotlib.

    # Save the DataFrame to a pickle file for later use
    df.to_pickle(f"{PATH_HISTORY}/inference_data.pkl")

    # Plot results
    plt.figure(figsize=(10, 5))

    # Plot Temperature Controller Data (Tc)
    plt.subplot(3, 1, 1)
    plt.plot(df.reset_index()['time'], df.reset_index()['Tc'])
    plt.ylabel('Tc')
    plt.legend(['Tc'], loc='best')
    plt.title(f'Agent Inference DRL - Noise: {noise}')

    # Plot Temperature and Reference Temperature (T, Tref)
    plt.subplot(3, 1, 2)
    plt.plot(df.reset_index()['time'], df.reset_index()['T'])
    plt.plot(df.reset_index()['time'], df.reset_index()['Tref'], 'r--')
    plt.ylabel('Temp')
    plt.legend(['T', 'Tref'], loc='best')

    # Plot Concentration and Reference Concentration (Ca, Cref)
    plt.subplot(3, 1, 3)
    plt.plot(df.reset_index()['time'], df.reset_index()['Ca'])
    plt.plot(df.reset_index()['time'], df.reset_index()['Cref'], 'r--')
    plt.legend(['Ca', 'Cref'], loc='best')
    plt.ylabel('Concentration')
    plt.xlabel('Iteration')

    # Save plot
    plt.savefig(f"{PATH_BENCHMARKS}/inference_figure.png")

This code generates three subplots:

  1. Temperature Controller (Tc) over time.

  2. Temperature (T) and Reference Temperature (Tref) over time.

  3. Concentration (Ca) and Reference Concentration (Cref) over time.

The plots provide a visual representation of the agent system's performance during the simulation. Finally, the figure is saved as inference_figure.png in the benchmarks directory.
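Because the DataFrame was pickled, a later analysis session can reload it with pd.read_pickle without re-running inference. A self-contained round-trip sketch, using sample data and a temporary directory in place of PATH_HISTORY:

```python
import os
import tempfile

import pandas as pd

# Sample data standing in for the collected sensor history
df = pd.DataFrame({"time": [0, 1, 2], "T": [300.0, 305.0, 310.0]})

with tempfile.TemporaryDirectory() as path_history:  # stands in for PATH_HISTORY
    pkl = os.path.join(path_history, "inference_data.pkl")
    df.to_pickle(pkl)

    restored = pd.read_pickle(pkl)
    print(restored.equals(df))  # True
```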


Step 6: Running the Script

To run the script, execute agent_inference.py from your terminal:

python agent_inference.py
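Since run_agent is a coroutine, the script needs an asyncio entry point at module level for this command to work. A typical pattern (the body of run_agent is elided here; the return value is only for illustration):

```python
import asyncio

async def run_agent():
    # ... connect to the runtime, run inference, and plot,
    # as shown in the steps above ...
    return "done"

if __name__ == "__main__":
    asyncio.run(run_agent())
```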

Conclusion

In this tutorial, we demonstrated how to:

  • Connect a pre-trained Composabl agent system to a runtime and simulation environment.

  • Set up a scenario and run inference.

  • Collect observations and actions, and plot the results using Matplotlib.

By following these steps, you can visualize the performance of your agent system and gain insights into how it interacts with the environment over time.
