Analyze Data
Benchmark Testing and Data Generation
After a multi-agent system is trained in Composabl, the platform automatically runs a series of standardized tests to evaluate its performance. This benchmarking process:
Places the system in controlled testing environments
Records detailed metrics at each step of operation
Aggregates results to provide comprehensive performance statistics
The output of this testing process is compiled into a structured benchmark.json file, which contains rich performance data that can be analyzed to assess effectiveness, identify improvement opportunities, and compare different design approaches. This file is a performance record and a valuable analytics resource for optimizing your agentic systems.
Downloading Benchmark Artifacts
To download benchmark data for further analysis:
Navigate to the "Training Sessions" page
Click the artifacts dropdown at the top right of the page for a trained system
Select "Benchmark"
The benchmark.json file will be saved to your local machine

Understanding the benchmark.json File
The benchmark.json file contains structured data about the performance of a trained agent system. Here's how to interpret this file:
File Structure
{
  "skill-name": {
    "scenario-0": {
      "scenario_data": { ... },
      "episode-0": [ ... ],
      "aggregate": { ... }
    }
  }
}
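For exploratory analysis, the file can be loaded with standard JSON tooling. The sketch below assumes the nesting shown above (skill → scenario → numbered episodes plus an aggregate block); the file path and the exact key names are placeholders that will match whatever your trained system produced.

import json

# Load the downloaded benchmark file (path is a placeholder).
with open("benchmark.json") as f:
    benchmark = json.load(f)

# Walk the skill -> scenario hierarchy shown above.
for skill_name, scenarios in benchmark.items():
    for scenario_name, scenario in scenarios.items():
        # Episode entries are the keys that start with "episode-".
        episode_keys = [k for k in scenario if k.startswith("episode-")]
        print(f"{skill_name} / {scenario_name}: {len(episode_keys)} episode(s) recorded")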
Key Components
Scenario Data: Contains reference values for the scenario:
"scenario_data": {
"sensor_one": {"data": 8.57, "type": "is_equal"},
"sensor_two": {"data": 373, "type": "is_equal"}
}
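As a rough illustration, assuming benchmark was loaded as in the previous sketch and using the placeholder keys from the file structure above, the reference values and their comparison types can be pulled out like this:

# Extract the reference values a scenario was benchmarked against.
scenario = benchmark["skill-name"]["scenario-0"]
for sensor, spec in scenario["scenario_data"].items():
    print(f"{sensor}: target {spec['data']} (comparison: {spec['type']})")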
Episode Data: Array of state-action pairs showing how the agent performed in each step:
[
  {
    "state": "{'sensor_one': array([311.2639], dtype=float32), ...}",
    "action": "[-1.253192]",
    "teacher_reward": 1.0,
    "teacher_success": false,
    "teacher_terminal": null
  },
  ...
]
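Note that the state and action fields are stored as strings: action is a Python-style list literal, while state embeds numpy array reprs and needs custom parsing. The sketch below, again assuming benchmark and the placeholder keys from the earlier examples, totals the teacher reward per episode, parses actions with ast.literal_eval, and uses a regex as one possible way to recover sensor values from the state string.

import ast
import re

scenario = benchmark["skill-name"]["scenario-0"]
episode_keys = [k for k in scenario if k.startswith("episode-")]

for ep_key in episode_keys:
    steps = scenario[ep_key]
    total_reward = sum(step["teacher_reward"] for step in steps)
    reached_success = any(step["teacher_success"] for step in steps)
    # Actions are stored as list literals, e.g. "[-1.253192]".
    first_action = ast.literal_eval(steps[0]["action"])
    # Sensor values sit inside numpy reprs in the state string; pull the
    # first number after each sensor name as a rough parse.
    first_state = {name: float(value)
                   for name, value in re.findall(r"'(\w+)': array\(\[([-\d.eE]+)",
                                                 steps[0]["state"])}
    print(f"{ep_key}: steps={len(steps)}, total_reward={total_reward:.2f}, "
          f"success={reached_success}, first_action={first_action}, "
          f"first_state={first_state}")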
Aggregate Statistics: Summary statistics computed across the episodes in the scenario:
"aggregate": {
"mean": { ... },
"medians": { ... },
"std_dev": { ... },
"max": { ... },
"min": { ... }
}