Welcome to Chapter 5!
In the previous chapter, LLM Client Interface, we gave our agents brains by connecting them to AI models. Before that, in Platform Infrastructure (Launcher & Server), we built the server where they live.
Now we have a problem of scale.
If you want to test a hypothesis—like "Do customers buy more if the price is lower?"—you can't just run one agent manually. You need to simulate 50 businesses and 100 customers interacting simultaneously. You need to run this simulation over and over again to get good data.
Manually starting 150 terminal windows is impossible.
We need a Director for our movie. We need Experiment Orchestration.
Orchestration is the layer that automates the entire lifecycle of a simulation. It connects Static Data (files on your disk) to Dynamic Runtime (active agents).
Instead of hard-coding "Bob the Builder" inside Python code, we define our agents in YAML files. This separates the code (logic) from the content (scenarios).
Example Business (YAML):

```yaml
id: "business_01"
name: "Luigi's Pizza"
menu_features:
  "Pepperoni Pizza": 15.00
  "Cheese Pizza": 12.00
amenity_features:
  "Outdoor Seating": true
```
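Customer profiles live in their own directory and follow the same pattern. The exact schema depends on the project, so the field names below are illustrative, not taken from the repository:

```yaml
# Hypothetical customer profile (field names are illustrative)
id: "customer_01"
name: "Alice"
budget: 30.00
preferences:
  - "Outdoor Seating"
```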
The "Experiment Runner" is a Python script that reads these files and brings them to life. Its job is to:

1. Load the YAML profiles from disk into Python objects.
2. Launch the marketplace server with a fresh, uniquely named database for this run.
3. Wrap each profile in an agent class and point it at the server.
4. Start the Business agents first, then release the Customer agents.
5. Let the simulation run to completion, leaving the results in the database.
Let's see how we actually run an experiment using the Command Line Interface (CLI).
We use a tool called magentic-marketplace to kick off the process.
```shell
magentic-marketplace run ./data/my_experiment \
  --experiment-name "pizza_price_test_v1" \
  --customer-max-steps 20 \
  --search-algorithm simple
```
When you run this, the computer takes over. You will see logs scrolling by as the database is created, agents log in, and transactions occur. When it stops, you have a database full of results.
How does the code turn text files into a living economy? Let's trace the flow.
The heart of this logic lives in magentic_marketplace/experiments/run_experiment.py.
Let's break down the code into understandable chunks.
First, we need to turn those YAML files into Python objects. We use helper functions to scan the directories.
```python
# magentic_marketplace/experiments/run_experiment.py

async def run_marketplace_experiment(data_dir, ...):
    # Define paths to your data folders
    data_path = Path(data_dir)

    # Load the static profiles from YAML into lists of objects
    businesses = load_businesses_from_yaml(data_path / "businesses")
    customers = load_customers_from_yaml(data_path / "customers")
    print(f"Loaded {len(customers)} customers and {len(businesses)} businesses")
```
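The loader helpers themselves aren't shown in this chapter, but conceptually each one parses every file in a directory into a profile object. Here is a minimal stdlib-only sketch of that idea, using plain dicts in place of parsed YAML and a hypothetical `BusinessProfile` dataclass (the real project defines its own classes):

```python
from dataclasses import dataclass, field

# Hypothetical profile type; the real project defines its own classes.
@dataclass
class BusinessProfile:
    id: str
    name: str
    menu_features: dict = field(default_factory=dict)
    amenity_features: dict = field(default_factory=dict)

def load_businesses(parsed_docs):
    """Turn a list of parsed-YAML dicts into profile objects."""
    return [BusinessProfile(**doc) for doc in parsed_docs]

# One dict per YAML file, as a YAML parser would produce it.
docs = [
    {"id": "business_01", "name": "Luigi's Pizza",
     "menu_features": {"Pepperoni Pizza": 15.00, "Cheese Pizza": 12.00},
     "amenity_features": {"Outdoor Seating": True}},
]
profiles = load_businesses(docs)
print(profiles[0].name)  # Luigi's Pizza
```

The payoff of this design is that adding a new business to the simulation means adding one file to a folder, with no Python changes at all.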
Next, we initialize the MarketplaceLauncher. As we learned in Chapter 3, this manages the server. We also create a unique database for this specific run so we don't mix up data from different experiments.
```python
    # Create a unique name for this run's database schema
    if experiment_name is None:
        experiment_name = f"run_{int(datetime.now().timestamp())}"

    # Initialize the Launcher (The Server Wrapper)
    marketplace_launcher = MarketplaceLauncher(
        protocol=SimpleMarketplaceProtocol(),
        database_factory=database_factory,  # Connects to Postgres
        experiment_name=experiment_name,
    )
```
Now we take the loaded data (customers list) and wrap them in the actual Agent classes (CustomerAgent). We also tell them where the server is (server_url).
```python
    async with marketplace_launcher:
        # Create Business Agents (The Shopkeepers)
        business_agents = [
            BusinessAgent(profile, marketplace_launcher.server_url)
            for profile in businesses
        ]

        # Create Customer Agents (The Shoppers)
        customer_agents = [
            CustomerAgent(profile, marketplace_launcher.server_url, ...)
            for profile in customers
        ]
```
This is a critical detail. In the real world, you can't buy pizza if the pizza shop isn't open yet.
The orchestration layer ensures Businesses start first. Only after they are ready do the Customers enter the world.
```python
        # We use the AgentLauncher helper
        async with AgentLauncher(marketplace_launcher.server_url) as runner:
            # This function starts 'dependent_agents' (Businesses) first,
            # waits for them, and then starts 'primary_agents' (Customers).
            await runner.run_agents_with_dependencies(
                primary_agents=customer_agents,
                dependent_agents=business_agents,
            )
```
You might be wondering: "Do I have to write 100 YAML files by hand?"
No! We have a script for that too.
In data/data_generation_scripts/generate_customers_and_businesses.py, we use LLMs to generate synthetic personalities.
This allows us to generate massive, diverse datasets in minutes.
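The real script calls an LLM to invent each personality, but the scaffolding around that call is plain Python: pick attributes, fill a template, write one YAML file per agent. Here is a simplified stand-in that uses random sampling instead of an LLM (all names and fields below are hypothetical):

```python
import random
import tempfile
from pathlib import Path

CUISINES = ["Pizza", "Sushi", "Tacos", "Ramen"]

def generate_business(i: int) -> str:
    """Emit one YAML profile as a string (random stand-in for an LLM call)."""
    cuisine = random.choice(CUISINES)
    return (
        f'id: "business_{i:02d}"\n'
        f'name: "{cuisine} Place {i}"\n'
        f"menu_features:\n"
        f'  "{cuisine} Special": {random.randint(8, 20)}.00\n'
    )

def generate_dataset(out_dir: Path, n: int) -> None:
    """Write n profile files into out_dir, one agent per file."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(n):
        (out_dir / f"business_{i:02d}.yaml").write_text(generate_business(i))

# Write 50 synthetic businesses into a temporary directory.
out = Path(tempfile.mkdtemp()) / "businesses"
generate_dataset(out, 50)
print(len(list(out.glob("*.yaml"))))  # 50
```

Swapping the random choices for an LLM prompt is what turns this from a toy into a generator of rich, varied personalities.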
In this chapter, we learned how to be the Director of our simulation:

- Defining agents as YAML profiles instead of hard-coding them in Python.
- Launching an experiment with a single magentic-marketplace run command.
- Tracing run_experiment.py as it loads profiles, spins up the server with a unique database, and wraps the profiles in agent classes.
- Starting Businesses before Customers, so every shop is open when shoppers arrive.
- Generating large synthetic datasets with LLMs instead of writing YAML by hand.
Now, you have run your experiment. The agents have talked, traded, and finished their shopping. You have a database full of actions.
But... what actually happened? Did the agents make good deals? Did the price bias work?
To answer these questions, we need to Visualize the data.
Next Chapter: Simulation Visualization
Generated by Code IQ