Chapter 1: Local Proxy Gateway

Welcome to the first chapter of the cc-switch tutorial!

In this project, we are building a tool that gives you superpowers over your AI coding assistants (like Claude Code or Codex). To do this, we need a central "Traffic Controller" that sits between your tools and the internet. We call this the Local Proxy Gateway.

The Problem: Direct Connections Are Rigid

Imagine you are using a command-line tool like Claude Code. By default, it connects directly to Anthropic's servers.

```mermaid
flowchart LR
    A[Claude Code CLI] -->|Direct Internet Connection| B[Anthropic API]
```

The issue: If you want to switch to a different provider (like OpenRouter) to save money, or if you want to log how much you're spending, you can't. The tool is a "black box" talking directly to the internet.

The Solution: A Private Switchboard

The Local Proxy Gateway acts like a private switchboard operator. Instead of your CLI tool calling the API directly, it calls your local computer (localhost).

Your application (cc-switch) answers the call, looks at the request, and decides what to do with it—like logging it or forwarding it to a different provider—without the CLI tool ever knowing the difference.

```mermaid
flowchart LR
    A[Claude Code CLI] -->|Local Request| B(Local Proxy Gateway\nlocalhost:15721)
    B -->|Forward Request| C[Anthropic API]
    B -.->|Or Route to| D[OpenRouter]
    B -.->|Or Route to| E[Other Provider]
```

Key Concepts

1. The Listener

The gateway listens on a specific port (default 15721) on your computer. It waits for incoming HTTP requests, just like a web server waits for browser visits.
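To make "listening on a port" concrete, here is a minimal sketch using only Rust's standard library (the real gateway uses tokio and Axum). The function name `bind_gateway` is illustrative, and we bind to port 0 here so the example runs anywhere without port conflicts; the actual default is 15721.

```rust
use std::net::TcpListener;

// Bind to a local address. Port 0 asks the OS for any free port,
// which keeps this example runnable on any machine.
fn bind_gateway(addr: &str) -> std::io::Result<TcpListener> {
    TcpListener::bind(addr)
}

fn main() -> std::io::Result<()> {
    // The real gateway would bind "127.0.0.1:15721".
    let listener = bind_gateway("127.0.0.1:0")?;
    println!("gateway listening on {}", listener.local_addr()?);
    Ok(())
}
```

Once bound, the listener sits idle until a client (like the Claude Code CLI) opens a connection, exactly like a web server waiting for a browser.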

2. The Shared State

Since this is a desktop app, the server needs to know what the user wants. Is the "Interceptor" turned on? Which provider did the user select in the UI? We store this in a thread-safe ProxyState.
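The thread-safety pattern behind `ProxyState` can be sketched with the standard library alone: wrap shared data in `Arc<RwLock<...>>` so the UI thread can write while request handlers read. The struct and function names below are illustrative stand-ins, not the project's real API.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Illustrative stand-in for ProxyState: maps an app id
// (e.g. "claude") to its currently selected provider.
#[derive(Clone, Default)]
struct SharedState {
    providers: Arc<RwLock<HashMap<String, String>>>,
}

// Called from the UI side when the user picks a provider.
fn set_provider(state: &SharedState, app: &str, provider: &str) {
    state.providers.write().unwrap().insert(app.into(), provider.into());
}

// Called from a request handler, possibly on another thread.
fn get_provider(state: &SharedState, app: &str) -> Option<String> {
    state.providers.read().unwrap().get(app).cloned()
}

fn main() {
    let state = SharedState::default();
    set_provider(&state, "claude", "openrouter");
    println!("{:?}", get_provider(&state, "claude"));
}
```

Cloning `SharedState` only clones the `Arc` pointer, so every handler sees the same underlying map.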

3. The Router

Different tools speak different "languages", meaning they call different API endpoints. Claude Code sends its requests to /v1/messages, while OpenAI-style tools use other paths.

The router inspects the URL of each incoming request and directs these different conversation types to the right handler logic.
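At its core, routing is just matching a URL path to a handler. Here is a stripped-down sketch of that idea (the real project uses Axum's `Router`; the /v1/chat/completions path for OpenAI-style clients is an assumption for illustration):

```rust
// Illustrative routing table: map a request path to a handler name.
fn route(path: &str) -> &'static str {
    match path {
        "/health" => "health_check",
        "/v1/messages" => "handle_messages",     // Anthropic-style clients
        "/v1/chat/completions" => "handle_chat", // OpenAI-style clients (assumed path)
        _ => "not_found",
    }
}

fn main() {
    println!("{}", route("/v1/messages")); // handle_messages
    println!("{}", route("/unknown"));     // not_found
}
```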


Usage: How it Works

From the user's perspective, enabling the proxy is as simple as flipping a switch in the UI.

In the React Frontend (ProxyPanel.tsx)

When you click "Start Proxy" in the dashboard, the frontend calls a Rust command.

```tsx
// src/components/proxy/ProxyPanel.tsx (Simplified)

// When the user toggles the switch
const handleTakeoverChange = async (appType: string, enabled: boolean) => {
    // Call the Rust backend to enable interception
    await setTakeoverForApp.mutateAsync({ appType, enabled });
    toast.success(`${appType} takeover enabled`);
};
```

In the Rust Backend (server.rs)

The Rust backend launches a lightweight HTTP server using a library called Axum.

Here is how we define the server structure. It holds the database connection and configuration.

```rust
// src-tauri/src/proxy/server.rs

pub struct ProxyServer {
    config: ProxyConfig,
    state: ProxyState, // Holds database & status
    // A channel to send a "stop" signal to the server later
    shutdown_tx: Arc<RwLock<Option<oneshot::Sender<()>>>>,
}
```

When you start the server, it binds to your local address (e.g., 127.0.0.1:15721).

```rust
// src-tauri/src/proxy/server.rs

pub async fn start(&self) -> Result<ProxyServerInfo, ProxyError> {
    // 1. Parse the address (e.g., 127.0.0.1:15721)
    let addr: SocketAddr = format!("{}:{}", self.config.listen_address, self.config.listen_port)
        .parse()
        .map_err(|e| ProxyError::BindFailed(format!("Invalid address: {e}")))?;

    // 2. Build the router (the switchboard logic)
    let app = self.build_router();

    // ... (Log startup and set status to running)
}
```
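Step 1 above relies entirely on the standard library, so we can try it in isolation. This sketch mirrors the parse-and-map-error pattern from `start()`; the `parse_addr` helper is invented for the example:

```rust
use std::net::SocketAddr;

// Mirror of step 1 in start(): turn host + port into a SocketAddr,
// converting the parse error into a plain String.
fn parse_addr(host: &str, port: u16) -> Result<SocketAddr, String> {
    format!("{host}:{port}")
        .parse()
        .map_err(|e| format!("Invalid address: {e}"))
}

fn main() {
    println!("{:?}", parse_addr("127.0.0.1", 15721)); // Ok(127.0.0.1:15721)
    println!("{:?}", parse_addr("not an ip", 80));    // Err("Invalid address: ...")
}
```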

The actual listening happens in a background task so it doesn't freeze your UI:

```rust
// src-tauri/src/proxy/server.rs

// 3. Spawn the server in the background
tokio::spawn(async move {
    // axum::serve runs the server until it receives a shutdown signal
    axum::serve(listener, app)
        .with_graceful_shutdown(async {
            shutdown_rx.await.ok();
        })
        .await
        .ok();
});
```

Beginner Note: tokio::spawn creates a lightweight asynchronous task (similar in spirit to starting a new thread, but much cheaper). It lets the server run in the background while your main application keeps responding to mouse clicks.
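The shutdown-channel pattern used above can be demonstrated with plain std threads and channels instead of tokio. In this sketch (function names invented for illustration), the background "server" simply blocks until it receives a signal, just like `shutdown_rx.await` in the real code:

```rust
use std::sync::mpsc;
use std::thread;

// Spawn a fake "server" thread, then signal it to stop.
fn run_and_stop() -> &'static str {
    let (shutdown_tx, shutdown_rx) = mpsc::channel::<()>();

    // Stand-in for the spawned server task: it waits for the signal.
    let server = thread::spawn(move || {
        shutdown_rx.recv().ok(); // like `shutdown_rx.await.ok()` in the real code
        "stopped"
    });

    // Later, e.g. when the user clicks "Stop Proxy":
    shutdown_tx.send(()).unwrap();
    server.join().unwrap()
}

fn main() {
    println!("{}", run_and_stop()); // stopped
}
```

This is why `ProxyServer` stores a `shutdown_tx`: holding the sender half is all it needs to stop the server later.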


Internal Implementation: The Request Lifecycle

What happens when a CLI tool sends a request? Let's trace the path.

Sequence Diagram

```mermaid
sequenceDiagram
    participant CLI as Claude Code CLI
    participant GW as Local Gateway
    participant RTR as Router
    participant DB as SQLite DB
    CLI->>GW: POST http://127.0.0.1:15721/v1/messages
    GW->>RTR: Match URL path "/v1/messages"
    RTR->>DB: Check active provider config
    RTR->>GW: Logic: Forward to OpenRouter
    GW-->>CLI: Response (Stream)
```

Defining the Routes

The "Router" is the map that tells the server which function handles which URL.

```rust
// src-tauri/src/proxy/server.rs

fn build_router(&self) -> Router {
    Router::new()
        // If a request comes to /health, run health_check
        .route("/health", get(handlers::health_check))

        // If a request comes to /v1/messages (Claude), run handle_messages
        .route("/v1/messages", post(handlers::handle_messages))

        // Pass the shared state to all handlers
        .with_state(self.state.clone())
}
```

Sharing State

The handlers (the functions that actually process the request) need access to the database and configuration. We achieve this with ProxyState.

```rust
// src-tauri/src/proxy/server.rs

#[derive(Clone)]
pub struct ProxyState {
    pub db: Arc<Database>, // Access to settings
    pub status: Arc<RwLock<ProxyStatus>>, // Is it running?
    // Tracks which provider is currently active (e.g., "openai" vs "openrouter")
    pub current_providers: Arc<RwLock<HashMap<String, (String, String)>>>,
}
```

By passing this state to the router, every handler knows, for each incoming request, exactly which provider the user wants to use.

Summary

In this chapter, we built the foundation of cc-switch:

  1. We created a Local Proxy Gateway that acts as a middleman.
  2. We configured it to listen on localhost:15721.
  3. We set up a Router to distinguish between different API calls (like Claude vs. OpenAI).

Now that we have the request trapped in our local server, the real magic begins. How do we decide where to send it? And what happens if that provider is down?

In the next chapter, we will learn how the gateway makes smart decisions.

Next Chapter: Intelligent Routing & Failover


Generated by Code IQ