Chapter 6: Multi-Agent Architecture

In the previous chapter, Structured Memory Systems, we gave our agent a long-term memory so it wouldn't forget important facts.

But now we face a different problem: Complexity.

What if you want an AI that can write code, book travel, analyze stocks, and provide medical advice? If you try to stuff the instructions for all those jobs into one single System Prompt, you create a "God Object." The instructions contradict each other. The tool definitions overflow the context window. The agent gets confused.

The solution is Multi-Agent Architecture.

The Problem: The Overworked Employee

Imagine a small startup with only one employee. This employee is the CEO, the Receptionist, the Coder, and the Janitor.

In AI terms, this is a Single Agent. It has one context window filled with instructions for every possible task. It suffers from Context Pollution.

The Solution: The Organizational Chart

We solve this by mimicking a human company. We don't hire one person to do everything; we hire specialists.

  1. The Supervisor (The Boss): Their only job is to understand the user's request and decide who should handle it. They have no tools other than "Delegate."
  2. The Workers (The Specialists): They are experts in one tiny domain (e.g., "Flight Booker"). They have a very clean context window containing only flight-related tools.

This approach is called Context Isolation.
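
Before we write any prompts, it helps to see the skeleton of the pattern: a registry that maps each agent name to its own isolated prompt and tool list. This is a minimal sketch; the registry structure and helper names are illustrative, not a fixed API.

# Hypothetical registry: each agent owns its prompt and tools.
# Nothing is shared between agents except the conversation
# history we explicitly pass along during a handoff.
AGENT_REGISTRY = {
    "Supervisor":   {"prompt": "You are a Travel Manager...",    "tools": []},
    "Flight_Agent": {"prompt": "You are a Flight Specialist...", "tools": ["search_flights", "book_flight"]},
    "Hotel_Agent":  {"prompt": "You are a Hotel Specialist...",  "tools": ["find_hotel", "reserve_room"]},
}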

Use Case: The Travel Agency

Let's build a system that can book a complete vacation.

Defining the Workers

First, we define our specialists. Notice how short and focused their instructions are.

Worker 1: The Flight Agent

This agent thinks it is the only AI in existence. Its universe is just airports.

flight_agent_prompt = {
    "role": "system",
    "content": """
    You are a Flight Specialist. 
    You can ONLY search for flights.
    Tools available: [search_flights, book_flight].
    """
}
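
The prompt names two tools. How those are actually declared depends on your LLM provider; as one illustration, here is roughly what search_flights could look like as a JSON function schema (the parameter names are assumptions for this example):

flight_tools = [
    {
        "name": "search_flights",
        "description": "Search for flights between two airports.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin":      {"type": "string", "description": "Departure airport code"},
                "destination": {"type": "string", "description": "Arrival airport code"},
                "date":        {"type": "string", "description": "Departure date, YYYY-MM-DD"},
            },
            "required": ["origin", "destination", "date"],
        },
    },
    # book_flight would follow the same pattern.
]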

Worker 2: The Hotel Agent

This agent has a totally different set of tools.

hotel_agent_prompt = {
    "role": "system",
    "content": """
    You are a Hotel Specialist.
    You can ONLY search for hotels.
    Tools available: [find_hotel, reserve_room].
    """
}

Defining the Supervisor

The Supervisor is the "Router." It doesn't know how to book a flight. It only knows who can.

Its "Tools" are actually just instructions to hand off the conversation to someone else.

supervisor_prompt = {
    "role": "system",
    "content": """
    You are a Travel Manager. You receive requests from users.
    
    Decide which worker should handle the next step:
    1. 'Flight_Agent' for air travel.
    2. 'Hotel_Agent' for accommodation.
    
    Output the name of the agent to call.
    """
}

Explanation: The Supervisor's output isn't a final answer to the user. It is an internal command to switch the active agent.
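
One simple way to interpret that output is a helper that checks whether the model produced a worker's name. This is a sketch; the router loop below inlines the same check:

WORKER_NAMES = {"Flight_Agent", "Hotel_Agent"}

def parse_supervisor_output(text):
    """Return a worker name if the Supervisor issued a handoff,
    or None if the text is a normal reply meant for the user."""
    candidate = text.strip()
    return candidate if candidate in WORKER_NAMES else None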

Internal Implementation: The Handoff Loop

How does the code actually switch brains? It works like a relay race: we hold a variable called current_agent and reassign it whenever a handoff occurs.

The Flow Diagram

sequenceDiagram
    participant U as User
    participant S as Supervisor
    participant F as Flight Agent
    participant H as Hotel Agent
    U->>S: "I need a flight to Tokyo."
    Note over S: Thinks: "This is a flight task."
    S->>F: Handoff Control
    Note over F: Context Loaded: Flight Tools
    F->>F: search_flights("Tokyo")
    F->>S: "Found flight UA803."
    S->>U: "I found flight UA803 for you."

The Code: The Router Loop

We use a simple loop to manage the state.

def run_multi_agent_system(user_message):
    # Start with the Supervisor in charge. Keeping this inside the
    # function avoids Python's local/global shadowing pitfall when
    # we reassign it during a handoff.
    current_agent = "Supervisor"
    messages = [{"role": "user", "content": user_message}]
    
    while True:
        # 1. Ask the CURRENT agent for a response
        response = get_llm_response(current_agent, messages)
        
        # 2. Check if the response is a "Handoff Command"
        if response in ("Flight_Agent", "Hotel_Agent"):
            current_agent = response  # Switch brains!
            print(f"🔄 Switching to {current_agent}...")
            continue  # Loop again with the new agent
            
        # 3. If it's normal text, return it to the user
        return response

Explanation:

  1. Step 1: The LLM generates a response based on who current_agent is.
  2. Step 2: If the Supervisor says the name of a worker, we update the current_agent variable.
  3. Step 3: The loop restarts. Now, the get_llm_response function uses the Flight Agent's system prompt and tools instead of the Supervisor's.
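
To see the loop end to end, here is a stubbed run. The canned responses stand in for real LLM calls; in practice get_llm_response would load the current agent's prompt and tools and call your model provider:

# Stubbed LLM for demonstration only: the Supervisor always routes
# to the Flight Agent, and the Flight Agent returns a final answer.
def get_llm_response(agent_name, messages):
    if agent_name == "Supervisor":
        return "Flight_Agent"
    return "I found flight UA803 for you."

print(run_multi_agent_system("I need a flight to Tokyo."))
# 🔄 Switching to Flight_Agent...
# I found flight UA803 for you.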

Context Isolation in Action

Why is this better than one big agent?

Scenario: The user asks about a hotel.

In a single monolithic agent, the model must pick the right tool out of every tool the system owns: flights, hotels, stocks, code execution. Each irrelevant definition is noise in the context window, and noise is what produces wrong tool calls. In the multi-agent design, the Supervisor routes the request to the Hotel Agent, whose context contains exactly two tools: find_hotel and reserve_room. There is simply nothing there to confuse it.
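
A rough sketch of the difference in what the model actually receives (the tool lists are illustrative):

# Monolithic agent: every tool rides along on every request.
monolithic_context = {
    "system": "You handle flights, hotels, stocks, code, ...",
    "tools": ["search_flights", "book_flight", "find_hotel",
              "reserve_room", "analyze_stock", "run_code"],  # noise
}

# Isolated Hotel Agent: only what this task needs.
hotel_context = {
    "system": "You are a Hotel Specialist.",
    "tools": ["find_hotel", "reserve_room"],
}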

Sharing History

There is one tricky part: Shared History. Usually, when the Supervisor hands off to the Flight Agent, we pass the conversation history along.

def handoff(new_agent_name, history):
    """
    Switches the active system prompt but keeps the 
    conversation history so the worker knows what to do.
    """
    new_system_prompt = load_prompt(new_agent_name)
    
    # The worker sees: [New Rules] + [Old Conversation]
    return [new_system_prompt] + history
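
Used inside the router loop, the handoff would look something like this (load_prompt is the same hypothetical helper from the function above):

# On handoff, rebuild the message list the worker will see.
history = [{"role": "user", "content": "I need a flight to Tokyo."}]
worker_messages = handoff("Flight_Agent", history)
# worker_messages[0] is the Flight Agent's system prompt;
# everything after it is the original conversation, so the
# worker knows what the user already asked for.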

Summary

In this chapter, you learned:

  1. The "God Object" Problem: Too many tools confuse the AI.
  2. The Supervisor Pattern: Separate "Management" (Routing) from "Labor" (Execution).
  3. Context Isolation: Specialized agents perform better because they are focused.

We have built a powerful system! It remembers facts, it thinks before it acts, and it delegates tasks to specialists.

But there is one final question. As we build these complex systems, how do we know if they are actually working correctly? How do we grade them?

We need a Judge.

Next Chapter: LLM-as-a-Judge

