Welcome to the final chapter of the Multi-Agent Marketplace tutorial!
In the previous chapter, Experiment Orchestration, we learned how to run large-scale simulations. We launched dozens of agents, and they interacted, traded, and generated a massive amount of data.
But now we have a new problem.
Imagine you ran a simulation for an hour. You have a database filled with thousands of rows of text. How do you know whether the agents actually behaved intelligently? Did they haggle? Did they get ripped off? Reading raw database logs (`SELECT * FROM messages`) is like trying to read the Matrix code: it's exhausting and unintuitive.
We need a way to watch the simulation. We need Simulation Visualization.
The Visualizer is a web dashboard built with React. It acts like a security camera room for your marketplace.
It connects to the same database your agents are writing to, but it reads the data in "read-only" mode. It transforms raw text logs into a beautiful, chat-like interface (similar to WhatsApp or Slack) so you can observe the social dynamics of your AI agents.
The Main Component (App.tsx)
The heart of the visualizer is the main App component. Its primary job is Polling.
Since the simulation is live, the database changes every second. The React app needs to ask the server, "Anything new?" repeatedly.
Here is how the app keeps the data fresh:
```tsx
// packages/marketplace-visualizer/src/App.tsx
useEffect(() => {
  const initializeApp = async () => {
    // 1. Load the static list of agents once
    await loadInitialData();
    // 2. Load messages immediately
    loadMessages();
  };
  initializeApp();

  // 3. Set up a timer to refresh messages every 5 seconds,
  //    and clear it when the component unmounts
  const interval = setInterval(loadMessages, 5000);
  return () => clearInterval(interval);
}, [loadMessages]);
```
Explanation:
- `loadInitialData`: Fetches the list of Shops and Customers (which usually doesn't change).
- `setInterval`: Every 5,000 milliseconds (5 seconds), it runs `loadMessages` to fetch new chat bubbles.
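One detail matters for any polling loop like this: consecutive polls can return overlapping windows of data, so new results must be merged without duplicating bubbles already on screen. Here is a minimal sketch of such a merge; the `Message` shape and its `id` field are assumptions for illustration, not the project's actual types:

```typescript
interface Message {
  id: string;
  threadId: string;
  content: string;
  timestamp: number;
}

// Merge freshly polled messages into the current list, skipping
// anything we have already rendered (polls may overlap).
function mergeMessages(existing: Message[], polled: Message[]): Message[] {
  const seen = new Set(existing.map((m) => m.id));
  const fresh = polled.filter((m) => !seen.has(m.id));
  return [...existing, ...fresh].sort((a, b) => a.timestamp - b.timestamp);
}
```

Deduplicating by a stable `id` (rather than by content) keeps the UI correct even if two agents send identical text.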
In the middle of the screen, we have the feed of all active conversations. This is handled by MarketplaceCenter.tsx.
A key feature here is Filtering. If you click on "Customer Alice" in the sidebar, the center panel updates to show only Alice's chats.
```tsx
// packages/marketplace-visualizer/src/App.tsx (Logic view)
const filteredMessageThreads = useMemo(() => {
  let filtered = data?.messageThreads ?? [];
  // If a customer is clicked, keep only their threads
  if (selectedCustomer) {
    filtered = filtered.filter(
      (thread) => thread.participants.customer.id === selectedCustomer.id,
    );
  }
  return filtered;
}, [data?.messageThreads, selectedCustomer]);
```
What happens here?
The useMemo hook is a performance optimization. It says: "Only recalculate the list if the data changes or the user clicks a new customer." This keeps the interface snappy even if there are thousands of messages.
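The filtering itself is a plain function of its inputs, which makes it easy to unit-test outside React. The sketch below mirrors the logic above; the `Customer` and `Thread` shapes are simplified for illustration:

```typescript
interface Customer {
  id: string;
}

interface Thread {
  participants: { customer: Customer };
}

// Pure equivalent of the memoized filter: no selection means "show all".
function filterThreads(threads: Thread[], selected: Customer | null): Thread[] {
  if (!selected) return threads;
  return threads.filter((t) => t.participants.customer.id === selected.id);
}
```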
The most important part of the visualizer is the Conversation component.
In Chapter 2: Marketplace Protocol & Actions, we learned that agents send different types of messages: TextMessage, OrderProposal, and Payment.
The visualizer needs to render these differently so humans can scan them quickly. A generic text bubble isn't enough for a financial transaction.
We use a helper function to choose an icon based on the message type.
```tsx
// packages/marketplace-visualizer/src/components/Conversation.tsx
const getMessageIcon = (type: string) => {
  switch (type.toLowerCase()) {
    case "payment":
      return <CreditCard className="h-4 w-4" />; // 💳 Credit card icon
    case "order_proposal":
      return <MessageSquare className="h-4 w-4" />; // 📋 Proposal icon
    case "search":
      return <Search className="h-4 w-4" />; // 🔍 Search icon
    default:
      return <Send className="h-4 w-4" />; // ✉️ Plain text icon
  }
};
```
This simple visual cue allows a researcher to scroll through a long thread and immediately spot where the money changed hands (the Credit Card icon).
The visualizer also calculates "Utility" (how happy the customer was) and counts payments in real-time.
```tsx
// packages/marketplace-visualizer/src/components/Conversation.tsx
const conversationStats = useMemo(() => {
  // Count how many times money was sent
  const payments = thread.messages.filter(
    (m) => m.type === "payment",
  ).length;
  // Get the utility score (calculated by the backend)
  const utility = thread.utility;
  return { payments, utility };
}, [thread.messages, thread.utility]);
```
By calculating this in the browser, we can display stats like "Payments: 1 | Utility: $5.00" right on the conversation card.
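Turning those stats into the card label is a one-liner. A minimal sketch of such a formatter, assuming the `"Payments: 1 | Utility: $5.00"` layout quoted above (the function name and thread shape are illustrative, not from the project):

```typescript
interface ThreadMessage {
  type: string;
}

interface StatsThread {
  messages: ThreadMessage[];
  utility: number;
}

// Hypothetical formatter producing the "Payments: 1 | Utility: $5.00" label.
function formatStats(thread: StatsThread): string {
  const payments = thread.messages.filter((m) => m.type === "payment").length;
  return `Payments: ${payments} | Utility: $${thread.utility.toFixed(2)}`;
}
```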
How does a Python backend talk to a React frontend?
The visualizer doesn't connect directly to the database (that is insecure for web browsers). Instead, it talks to the same API Server we built in Chapter 3: Platform Infrastructure (Launcher & Server).
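Concretely, the frontend just issues HTTP GET requests against that server. Here is a small sketch of how a polling URL might be built; the `/api/messages` route and `since` parameter are assumptions for illustration, not the project's documented API:

```typescript
// Build the URL for one polling request. Passing `since` lets the
// server return only messages newer than the last poll.
function messagesUrl(base: string, since?: number): string {
  const url = new URL("/api/messages", base);
  if (since !== undefined) url.searchParams.set("since", String(since));
  return url.toString();
}
```

Keeping the browser behind an HTTP API (rather than handing it database credentials) is what makes the "read-only security camera" model safe.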
Using this tool, we can spot interesting behaviors.
Example Scenario: The Stubborn Shopkeeper
A customer asks for a discount, the shopkeeper refuses over several turns, and the customer eventually gives in and pays full price.
In the visualizer, you would see this as a thread with 4 text bubbles followed by a Payment Card. You can immediately see that the negotiation failed (the customer paid full price), but the transaction succeeded.
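In data terms, that thread is just a sequence of typed messages, and the pattern can even be detected automatically. The sketch below is illustrative only; the message shape and helper are hypothetical, not part of the visualizer:

```typescript
interface SimMessage {
  type: "text" | "payment";
  from: "customer" | "shop";
}

// The Stubborn Shopkeeper thread: four text bubbles, then a payment.
const stubbornThread: SimMessage[] = [
  { type: "text", from: "customer" }, // asks for a discount
  { type: "text", from: "shop" },     // refuses
  { type: "text", from: "customer" }, // tries again
  { type: "text", from: "shop" },     // refuses again
  { type: "payment", from: "customer" },
];

// Detect the "failed negotiation, successful transaction" shape:
// back-and-forth text that still ends in a payment.
function negotiationFailedButPaid(messages: SimMessage[]): boolean {
  const last = messages[messages.length - 1];
  const texts = messages.filter((m) => m.type === "text").length;
  return last?.type === "payment" && texts >= 2;
}
```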
Congratulations! You have completed the Multi-Agent Marketplace Tutorial.
Let's recap your journey: you defined the Marketplace Protocol and its actions, built the Platform Infrastructure (Launcher & Server), scaled up with Experiment Orchestration, and finally added Simulation Visualization to watch it all happen.
You now have a fully functional, observable, and scalable simulation environment. You can use it to test economic theories, train better sales bots, or simply watch AI agents try to sell pizza to each other.
Where to go from here?
- Try changing the `system_prompt` of the agents to make them aggressive negotiators.
- Add a new message type (`Refund`) to the Protocol.

Thank you for following along. Happy coding!