Chapter 7: Gateway API

Welcome to the final chapter of the deer-flow tutorial!

In Chapter 6: Long-Term Memory Updater, we gave our AI a permanent memory so it remembers you. Before that, we built the Brain (Lead Agent), the Hands (Skills), the Workers (Sub-Agents), and the Safety Lab (Sandbox).

We have built a powerful engine. But currently, all these parts are hidden deep inside Python scripts and configuration files.

If the Frontend (built in Chapter 1) wants to know: "Which AI models are available?" or "What skills can I use?", it cannot read your backend's config.yaml file directly. That would be insecure and messy.

In this chapter, we build the Gateway API. This is the "Receptionist" of our system—a clean, unified interface that connects the visual Frontend to the complex Backend.

The Motivation: The Hotel Receptionist

Imagine a large hotel.

If a guest wants a towel, they don't wander into the basement to find the laundry room. They call the Receptionist. The Receptionist knows where the laundry room is, gets the towel, and hands it to the guest.

In deer-flow:

  1. LangGraph handles the conversation (The actual stay).
  2. The Gateway handles the administration (Check-in, Room Service menu, Bill).

It aggregates everything—Models, Memory, Skills, and Files—into a simple set of web commands.

Central Use Case: "The Settings Panel"

Let's imagine the user opens the Settings menu in the Frontend. They want to see:

  1. Model: A dropdown list to choose between "GPT-4" or "DeepSeek".
  2. Skills: A list of enabled tools (e.g., "Data Analysis").
  3. Memory: A view of what the AI knows about them.

The Frontend will make a request to the Gateway to get this data.


Key Concept: The API Router

We build the Gateway using FastAPI, a modern Python framework. The core concept here is the Router.

Think of Routers as different "Department Desks" at the reception.

Instead of one giant file with 100 functions, we split them into organized folders.


Implementation: The Gateway Application

Let's look at backend/src/gateway/app.py. This is the entry point. It sets up the server and connects the departments.

1. The Setup

We create the application and give it a title.

# src/gateway/app.py

from fastapi import FastAPI

def create_app() -> FastAPI:
    app = FastAPI(
        title="DeerFlow API Gateway",
        description="API Gateway for DeerFlow...",
        version="0.1.0",
    )
    
    # ... code continues ...
    return app

Explanation: This initializes the web server. It creates the "Building" where our receptionist works.

2. Plugging in the Departments (Routers)

This is the most important part. We tell the application which specific APIs we want to expose to the world.

    # Inside create_app() function
    
    # 1. Plug in the Models Department
    app.include_router(models.router)

    # 2. Plug in the Memory Department
    app.include_router(memory.router)

    # 3. Plug in the Skills Department
    app.include_router(skills.router)

    # 4. Plug in the Artifacts (Files) Department
    app.include_router(artifacts.router)

Explanation: app.include_router is like opening a service window. Now, if someone visits /api/models, the models.router handles it. If they visit /api/memory, the memory.router handles it.

3. The Health Check

How do we know if the server is alive? We add a simple "heartbeat" endpoint.

    @app.get("/health", tags=["health"])
    async def health_check() -> dict:
        return {
            "status": "healthy", 
            "service": "deer-flow-gateway"
        }

Explanation: When a monitoring system (or the Frontend) pings /health, it gets a JSON response saying "I'm OK!"


How It Works: The Data Flow

Let's trace what happens when the Frontend asks for the list of available AI models.

Sequence Diagram

    sequenceDiagram
        participant F as Frontend
        participant G as Gateway (FastAPI)
        participant C as Config Loader
        participant Y as config.yaml
        F->>G: GET /api/models
        G->>C: get_app_config()
        Note right of G: Gateway asks for internal settings
        C->>Y: Read YAML file
        Y-->>C: Return raw data
        C-->>G: Return AppConfig Object
        G->>G: Filter and Format as JSON
        G-->>F: { "models": ["gpt-4", "deepseek"] }

Deep Dive: Configuration Management

The Gateway doesn't "guess" the configuration. It uses the AppConfig system defined in backend/src/config/app_config.py.

This system is the single source of truth. It reads config.yaml and environment variables (like API keys) and provides them to the Gateway securely.

Reading the Config

The Gateway uses a helper function to load settings safely.

# src/config/app_config.py (Simplified)

def get_app_config() -> AppConfig:
    global _app_config
    # Singleton pattern: Load once, reuse everywhere
    if _app_config is None:
        _app_config = AppConfig.from_file()
    return _app_config

Explanation: This ensures we don't re-read the file from the hard drive 100 times a second. We load it once into memory and serve it fast.
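The pattern itself is plain Python. Here is a self-contained sketch with a stand-in AppConfig (the real class lives in backend/src/config/app_config.py and parses config.yaml):

```python
# Stand-in AppConfig, just to illustrate the singleton pattern.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AppConfig:
    title: str = "DeerFlow"

    @classmethod
    def from_file(cls) -> "AppConfig":
        # The real implementation reads config.yaml and env vars here.
        return cls()

_app_config: Optional[AppConfig] = None

def get_app_config() -> AppConfig:
    global _app_config
    if _app_config is None:  # hit the disk only on the first call
        _app_config = AppConfig.from_file()
    return _app_config

# Every caller gets the exact same object, so the file is read once.
assert get_app_config() is get_app_config()
```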

MCP Configuration (Extensions)

One specific thing the Gateway handles is the MCP (Model Context Protocol) configuration. As seen in backend/src/mcp/client.py, this allows us to connect external tools (like a Google Drive connector or a Slack connector).

# src/mcp/client.py (Simplified)

def build_servers_config(extensions_config):
    # Get list of enabled servers from config
    enabled_servers = extensions_config.get_enabled_mcp_servers()

    # Format them for the client
    results = {}
    for name, config in enabled_servers.items():
        results[name] = build_server_params(name, config)
        
    return results

Explanation: The Gateway converts the raw MCP settings into a format that the LangGraph agents can actually use to connect to external tools.
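Stripped of the MCP specifics, this conversion is a straightforward dictionary rebuild. The sketch below uses invented server parameters (`command`, `args`) purely for illustration — the real field names and schema live in backend/src/mcp/client.py:

```python
# Illustrative only: turn "enabled server" configs into
# client-ready parameter dicts, mirroring build_servers_config().
def build_server_params(name: str, config: dict) -> dict:
    # Hypothetical fields; the real MCP client expects its own schema.
    return {"command": config["command"], "args": config.get("args", [])}

def build_servers_config(enabled_servers: dict) -> dict:
    return {
        name: build_server_params(name, config)
        for name, config in enabled_servers.items()
    }

servers = build_servers_config({
    "google-drive": {"command": "npx", "args": ["gdrive-mcp"]},
})
print(servers["google-drive"]["command"])  # npx
```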


Putting it All Together

With the Gateway in place, our architecture is complete.

  1. User types a message in the Frontend (Chapter 1).
  2. Frontend sends the message to the Lead Agent (Chapter 2).
  3. Lead Agent checks Memory (Chapter 6).
  4. Lead Agent decides to use a Skill (Chapter 3).
  5. If complex, it delegates to a Sub-Agent (Chapter 4).
  6. The code runs in a Sandbox (Chapter 5).
  7. The Gateway (Chapter 7) oversees the configuration, serves the files (Artifacts), and lets the Frontend know what is happening.

Conclusion

Congratulations! You have toured the entire deer-flow system.

We moved from a simple "chat bubble" interface to a fully-fledged AI Operating System. By breaking the system down into these 7 distinct chapters, we ensured that every part has a single responsibility.

You are now ready to start building your own agents, adding new skills, or customizing the frontend to your liking.

Happy Coding!


Generated by Code IQ