Analyze Data
After you train a multi-agent system in Composabl, the platform automatically runs a series of standardized tests to evaluate its performance. This benchmarking process:
Places the system in controlled testing environments
Records detailed metrics at each step of operation
Aggregates results to provide comprehensive performance statistics
The output of this testing process is compiled into a structured benchmark.json file containing rich performance data that you can analyze to assess effectiveness, identify improvement opportunities, and compare different design approaches. The file serves as both a performance record and an analytics resource for optimizing your agentic systems.
To download benchmark data for further analysis:
Navigate to the "Training Sessions" page
Click the artifacts dropdown at the top right of the page for a trained system
Select "Benchmark"
The benchmark.json file will be saved to your local machine
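Once downloaded, the file can be loaded with any JSON tooling. Here is a minimal sketch in Python; the file path is an assumption, so adjust it to wherever your browser saved the download:

```python
import json

# Load the downloaded benchmark file (path is an assumption; point it
# at wherever benchmark.json was saved on your machine).
with open("benchmark.json") as f:
    benchmark = json.load(f)

# Inspect the top-level structure to see which sections are present.
print(list(benchmark.keys()))
```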
The benchmark.json file contains structured data about the performance of a trained agent system. Here's how to interpret this file:
Scenario Data: reference values that define the scenario used during testing
Episode Data: an array of state-action pairs showing how the system performed at each step
Aggregate Statistics: summary statistics for the entire benchmark run
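As an illustration of working with the episode data, the sketch below totals the reward earned in each episode and summarizes the results. The key names ("episodes", "steps", "reward") are hypothetical placeholders, not a documented schema; substitute the actual field names you find in your benchmark.json:

```python
import json
import statistics

with open("benchmark.json") as f:
    benchmark = json.load(f)

# "episodes", "steps", and "reward" are hypothetical key names used for
# illustration; check your file's actual structure before running.
episode_rewards = [
    sum(step["reward"] for step in episode["steps"])
    for episode in benchmark["episodes"]
]

print(f"Episodes:    {len(episode_rewards)}")
print(f"Mean reward: {statistics.mean(episode_rewards):.2f}")
if len(episode_rewards) > 1:
    print(f"Std dev:     {statistics.stdev(episode_rewards):.2f}")
```

Per-episode totals like these make it straightforward to compare runs: compute the same summary for two trained systems and the difference in mean reward gives a first-pass performance comparison.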