In the previous chapter, Parsing & Symbol Resolution, we built a "Universal Translator" that turns raw code text into a structured, meaningful map. We stored this map in our database.
But there is a problem: Who is going to read this map?
You (the human) could write complex database queries, but the goal of GitNexus is to let AI Agents (like Cursor, Claude, or Windsurf) understand your code.
This chapter introduces the Model Context Protocol (MCP) Server. It is the bridge that allows an AI to "talk" to your database.
Imagine you are working with an AI assistant on a massive project (100,000 lines of code). You ask: "How does the payment system work?"
You cannot copy-paste all 100,000 lines into the chat. The AI has a limit (a "Context Window").
Instead, the AI needs to be able to ask the database questions like:
- "Who calls the processPayment function?"
- "Show me the code for processPayment."

The MCP Server acts like a librarian. The AI asks the librarian for specific information, and the librarian fetches only what is needed from the shelves (KuzuDB).
MCP is an open standard. It works like a USB port for AI. Because GitNexus implements this standard, any AI tool that supports MCP can instantly plug into your knowledge graph without custom code.
We give the AI a specific set of "superpowers" (functions) it can call. These are:
- query: "Search for a concept."
- context: "Give me details about a specific function."
- impact: "What breaks if I change this?"
Alongside tools, the server exposes Resources. These are like read-only files that the AI can open to see summaries, like gitnexus://repo/stats.
Let's look at a concrete example. You act as the developer.
Developer: "I want to rename the User class to Customer. Is that safe?"
Without MCP: The AI guesses based on the few files you have open. It might miss a file in a different folder that uses User.
With MCP:
1. The AI calls impact({ target: "User", direction: "downstream" }).
2. The server replies: "User is extended by Admin in auth.ts and imported by 15 other files."

The MCP Server runs in the background. You don't "call" it yourself; your AI editor does. However, it helps to understand the tools we are exposing to the AI.
Tool 1: query

This is the starting point. It combines text search with graph relevance.
AI Input:
{
  "query": "authentication flow",
  "limit": 3
}
GitNexus Output: Returns a ranked list of "Processes" (execution flows) related to authentication, identifying the exact files and functions involved.
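The actual ranking lives in the GitNexus backend, but the idea of "text search plus graph relevance" can be sketched in a few lines. Everything below — the field names, the 0.6/0.4 weights, and the sample symbols — is invented for illustration, not GitNexus's real formula:

```typescript
// Hypothetical sketch: blend a text-match score with a graph score
// (e.g., how well-connected a symbol is). Weights are illustrative.
interface Candidate {
  name: string;
  textScore: number;  // 0..1, from full-text search
  graphScore: number; // 0..1, e.g., normalized connection count
}

function rank(candidates: Candidate[], limit: number): string[] {
  return candidates
    .map(c => ({ name: c.name, score: 0.6 * c.textScore + 0.4 * c.graphScore }))
    .sort((a, b) => b.score - a.score)  // highest combined score first
    .slice(0, limit)
    .map(c => c.name);
}

const results = rank(
  [
    { name: "loginFlow", textScore: 0.9, graphScore: 0.8 },
    { name: "logCleanup", textScore: 0.7, graphScore: 0.1 },
    { name: "authMiddleware", textScore: 0.8, graphScore: 0.9 },
  ],
  3
);
```

The point of the blend: "logCleanup" matches the text "auth" query weakly and sits at the edge of the graph, so it sinks below the structurally central symbols.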
Tool 2: context

Once the AI finds a function name, it needs the details.
AI Input:
{
  "name": "login",
  "include_content": true
}
GitNexus Output: Returns the source code of login, PLUS a list of every function that calls it and every function it calls.
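A sketch of what such a response could contain. The interface and field names here are assumptions for illustration, not GitNexus's exact schema:

```typescript
// Illustrative shape of a context() result. Field names are assumed.
interface SymbolContext {
  name: string;
  content?: string;   // present when include_content: true
  callers: string[];  // every function that calls it
  callees: string[];  // every function it calls
}

const loginContext: SymbolContext = {
  name: "login",
  content: "function login(user, pass) { /* ... */ }",
  callers: ["handleLoginRoute", "retryLogin"],
  callees: ["validateToken", "createSession"],
};
```

Returning callers and callees alongside the source is what saves the AI from opening a dozen files to reconstruct the same picture.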
Tool 3: impact

This is for safety checks before editing code.
AI Input:
{
  "target": "validateToken",
  "direction": "upstream" // Who depends on me?
}
GitNexus Output: A "Blast Radius" report showing items categorized by depth (Direct impact vs. Indirect impact).
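Under the hood this is a graph traversal. A minimal sketch — assuming a simple in-memory adjacency map instead of the real KuzuDB query — buckets callers by their distance from the target:

```typescript
// Sketch of a "blast radius" walk: breadth-first search over caller
// edges, bucketing results by depth. The real version runs in KuzuDB.
type Edges = Record<string, string[]>; // symbol -> list of its callers

function blastRadius(edges: Edges, target: string) {
  const depth = new Map<string, number>([[target, 0]]);
  const queue = [target];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const caller of edges[current] ?? []) {
      if (!depth.has(caller)) {
        depth.set(caller, depth.get(current)! + 1);
        queue.push(caller);
      }
    }
  }
  const direct: string[] = [];   // depth 1: calls the target itself
  const indirect: string[] = []; // depth 2+: reaches it transitively
  for (const [name, d] of depth) {
    if (d === 1) direct.push(name);
    if (d > 1) indirect.push(name);
  }
  return { direct, indirect };
}

// validateToken is called by login; login is called by handleLoginRoute.
const report = blastRadius(
  { validateToken: ["login"], login: ["handleLoginRoute"] },
  "validateToken"
);
```

Here login lands in the direct bucket and handleLoginRoute in the indirect one — exactly the depth categorization the report describes.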
How does this work under the hood? The MCP server doesn't use HTTP (like a web server); it uses stdio (Standard Input/Output). It listens for JSON messages from the AI editor directly.
Here is the flow of conversation:
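In sketch form, a tool call is a pair of JSON-RPC 2.0 messages, one per line of stdio. The method name tools/call follows the MCP spec; the payload details below are illustrative:

```typescript
// Sketch of the two JSON-RPC 2.0 messages behind one tool call.
// Field values are illustrative; the framing follows the MCP spec.

// 1. The AI editor writes a request to GitNexus's stdin:
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "impact",
    arguments: { target: "validateToken", direction: "upstream" },
  },
};

// 2. GitNexus writes the result to stdout, keyed by the same id:
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: '{"direct":["login"],"indirect":[]}' }],
  },
};

// Each message travels as a single serialized line of JSON:
console.log(JSON.stringify(request));
console.log(JSON.stringify(response));
```

The matching id is what lets the editor pair each answer with the question it asked, even when several calls are in flight.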
Let's look at how we build this in TypeScript. We use the official @modelcontextprotocol/sdk.
In gitnexus/src/mcp/tools.ts, we define the "Menu" of options available to the AI. We must provide a schema so the AI knows what arguments to pass.
// gitnexus/src/mcp/tools.ts
export const GITNEXUS_TOOLS = [
  {
    name: 'impact',
    description: 'Analyze the blast radius of changing a code symbol.',
    inputSchema: {
      type: 'object',
      properties: {
        target: { type: 'string', description: 'Name of function/class' },
        direction: { type: 'string', enum: ['upstream', 'downstream'] }
      },
      required: ['target', 'direction'],
    },
  },
  // ... other tools (query, context, etc.)
];
Explanation:
- The AI reads the description to decide when to use the tool.
- The inputSchema validates the data the AI sends us.
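To see what the schema buys us, here is a hand-rolled sketch of the checks it encodes. In practice the SDK and the AI host do real JSON Schema validation; this standalone version is only for illustration:

```typescript
// Hand-rolled sketch of the checks the inputSchema above encodes:
// required fields must be present, and direction must be in the enum.
const impactSchema = {
  properties: {
    target: { type: "string" },
    direction: { enum: ["upstream", "downstream"] },
  },
  required: ["target", "direction"],
};

function validate(args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const field of impactSchema.required) {
    if (!(field in args)) errors.push(`missing required field: ${field}`);
  }
  const dir = args["direction"];
  if (dir !== undefined &&
      !impactSchema.properties.direction.enum.includes(dir as string)) {
    errors.push('direction must be "upstream" or "downstream"');
  }
  return errors;
}
```

Because the AI generates its own arguments, this kind of validation is the only thing standing between a hallucinated parameter and a broken Cypher query.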
In gitnexus/src/mcp/server.ts, we initialize the server and tell it how to handle requests.
// gitnexus/src/mcp/server.ts
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { ListToolsRequestSchema, CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';
import { GITNEXUS_TOOLS } from './tools.js';
export async function startMCPServer(backend) {
  // 1. Create the server instance
  const server = new Server(
    { name: 'gitnexus', version: '1.0.0' },
    { capabilities: { tools: {} } }
  );

  // 2. Tell the AI what tools we have
  server.setRequestHandler(ListToolsRequestSchema, async () => ({
    tools: GITNEXUS_TOOLS
  }));

  // ... continued below
Explanation:
- We create a Server object, declaring its name, version, and capabilities.
- When the AI asks for the list of tools (ListToolsRequestSchema), we return the list we defined above.

When the AI actually uses a tool, we execute the logic.
// 3. Handle the actual execution
  server.setRequestHandler(CallToolRequestSchema, async (request) => {
    const { name, arguments: args } = request.params;

    // "backend" is our link to KuzuDB (Chapter 2)
    const result = await backend.callTool(name, args);

    // Add a "Next Step Hint" to guide the AI
    const hint = getNextStepHint(name, args);

    return {
      content: [{ type: 'text', text: JSON.stringify(result) + hint }]
    };
  });
}
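The getNextStepHint helper isn't shown in this chapter; a minimal version might look like the sketch below. The hint strings are invented for illustration:

```typescript
// Hypothetical sketch of getNextStepHint: map each tool to a suggested
// follow-up call. The actual hint wording is an assumption.
function getNextStepHint(toolName: string, args: Record<string, unknown>): string {
  switch (toolName) {
    case "query":
      return "\nNext: Use context() to see details about a result.";
    case "context":
      return `\nNext: Use impact() to check the blast radius of ${args["name"]}.`;
    case "impact":
      return "\nNext: Review the direct impacts before editing.";
    default:
      return ""; // unknown tools get no hint
  }
}
```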
Explanation:
- We check request.params.name to see which tool was called (e.g., "impact").
- We delegate to the backend, which runs the Cypher queries we learned about in Chapter 2: Graph Persistence.
- After a query, we hint: "Next: Use context() to see details." This helps "chain of thought" reasoning.

Finally, we connect the server to the terminal input/output.
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
// ... inside startMCPServer
const transport = new StdioServerTransport();
await server.connect(transport);
Explanation:
- The AI editor writes questions to stdin, and GitNexus writes answers to stdout.

We have successfully built the "API Gateway" for our AI agents.
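For your editor to launch this gateway, it needs a registration entry in its MCP configuration. As an illustration (not GitNexus's documented setup — the command and path below are placeholders), a Claude Desktop entry in claude_desktop_config.json might look like:

```json
{
  "mcpServers": {
    "gitnexus": {
      "command": "node",
      "args": ["/path/to/gitnexus/dist/mcp/server.js"]
    }
  }
}
```

The editor spawns that command as a child process and speaks JSON-RPC to it over stdio, exactly as described above.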
- We exposed a focused set of tools, including query and impact.

Now, your AI editor isn't just guessing based on open files. It has deep, structural knowledge of the entire repository.
But sometimes, humans need to see the big picture too. Lines of text and JSON are hard for people to visualize.
In the next chapter, we will build a visual dashboard to let you see the graph that the AI sees.
Next Chapter: Web Graph Visualization
Generated by Code IQ