Welcome to the fourth chapter of the Services project tutorial!
In the previous chapter, Context Compaction, we learned how to keep our conversation history light so the AI doesn't get overwhelmed.
At this point, we have a smart conversationalist. It can talk, remember, and summarize. But it is still just a "brain in a jar." It cannot touch anything. It cannot run a terminal command, edit a file, or browse the web.
The Tool Execution Pipeline acts as the Mechanical Hands of the AI. It translates the AI's textual desire ("I want to read file.txt") into actual operating system actions.
Imagine you are a Chef (the AI). You have a recipe and the knowledge, but you are stuck in a control room. You can only shout orders through a microphone.
You need a Sous-Chef (The Tool Pipeline) in the kitchen.
Goal: The AI wants to fix a bug. It decides to read:
- src/index.ts (to see the code).
- src/utils.ts (to see helper functions).

Action: The Pipeline notices both are "read-only" actions. It runs them simultaneously to save time, gets the file contents, and feeds them back to the AI.
The AI doesn't run code directly. It outputs a special JSON block called tool_use. It looks like this:
{ "tool": "readFile", "path": "src/index.ts" }
Our pipeline constantly scans the AI's response for these blocks.
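As a minimal sketch of that scanning step, we can filter a list of response blocks down to just the tool_use ones. The ContentBlock shape here is an assumption for illustration, not the project's exact type:

```typescript
// Hypothetical content-block shape; real responses carry more fields.
interface ContentBlock {
  type: string
  tool?: string
  path?: string
  text?: string
}

// Keep only the blocks that ask for a tool; ignore plain text.
function extractToolCalls(blocks: ContentBlock[]): ContentBlock[] {
  return blocks.filter((block) => block.type === "tool_use")
}

const response: ContentBlock[] = [
  { type: "text", text: "Let me look at the code first." },
  { type: "tool_use", tool: "readFile", path: "src/index.ts" },
]

const calls = extractToolCalls(response)
// calls holds the single readFile request; the text block is ignored
```

The text blocks still get shown to the user; only the tool_use blocks continue down the pipeline.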
Before executing any tool, the pipeline checks if the user allows it.
A dangerous command like rm -rf / will trigger a popup asking you to click "Approve."

When the AI decides to act, the pipeline takes over.
Let's break down the actual TypeScript code that manages this flow.
Step 1: Partitioning the Tool Calls (toolOrchestration.ts)
The AI might ask for 10 things at once. We need to group them by checking whether each tool is isConcurrencySafe (read-only).
// services/tools/toolOrchestration.ts (Simplified)
function partitionToolCalls(toolMessages) {
  const batches = []
  // Group consecutive safe tools (like "Read File") together,
  // but isolate unsafe tools (like "Write File") into their own batch
  for (const tool of toolMessages) {
    if (isConcurrencySafe(tool) && currentBatchIsSafe()) {
      addToCurrentBatch(tool)
    } else {
      startNewBatch(tool)
    }
  }
  return batches
}
Explanation: This organizes the "orders." If we have [Read, Read, Write, Read], it becomes Batch 1: [Read, Read] (Parallel), Batch 2: [Write] (Wait for finish), Batch 3: [Read] (Parallel).
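To see the rule in action, here is a runnable version of the same idea, under the simplifying assumption that a tool call is just a name string and only "Read" is concurrency-safe:

```typescript
type Batch = { isSafe: boolean; tools: string[] }

// Assumption for illustration: only "Read" is safe to run in parallel.
const isConcurrencySafe = (tool: string): boolean => tool === "Read"

function partitionToolCalls(tools: string[]): Batch[] {
  const batches: Batch[] = []
  for (const tool of tools) {
    const safe = isConcurrencySafe(tool)
    const current = batches[batches.length - 1]
    if (safe && current?.isSafe) {
      current.tools.push(tool) // extend the current run of safe tools
    } else {
      batches.push({ isSafe: safe, tools: [tool] }) // unsafe tools get their own batch
    }
  }
  return batches
}

const batches = partitionToolCalls(["Read", "Read", "Write", "Read"])
// → three batches: [Read, Read] (parallel), [Write] (alone), [Read] (parallel)
```

Note that a safe tool arriving after an unsafe one starts a fresh batch rather than joining the unsafe batch; that is what keeps the Write fully isolated.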
Step 2: Running the Batches (toolOrchestration.ts)

Now we iterate through our sorted batches.
// services/tools/toolOrchestration.ts (Simplified)
export async function* runTools(batches) {
  for (const batch of batches) {
    if (batch.isSafe) {
      // Run all read requests at the exact same time
      yield* runToolsConcurrently(batch.tools)
    } else {
      // Run sensitive requests one by one
      yield* runToolsSerially(batch.tools)
    }
  }
}
Explanation: runToolsConcurrently uses Promise.all to be fast. runToolsSerially uses a simple for loop to be safe.
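A sketch of what those two runners might look like, assuming a tool is just a name plus an async call() function (a simplification of the real types):

```typescript
type Tool = { name: string; call: () => Promise<string> }

// Safe batch: start every call at once and await them all together.
async function* runToolsConcurrently(tools: Tool[]): AsyncGenerator<string> {
  const results = await Promise.all(tools.map((tool) => tool.call()))
  for (const result of results) yield result
}

// Unsafe batch: strictly one at a time, in order.
async function* runToolsSerially(tools: Tool[]): AsyncGenerator<string> {
  for (const tool of tools) {
    yield await tool.call()
  }
}

// Small helper to drain a generator into an array (for testing/demo).
async function collect(gen: AsyncGenerator<string>): Promise<string[]> {
  const out: string[] = []
  for await (const result of gen) out.push(result)
  return out
}
```

Both are async generators, which is why runTools can simply yield* from whichever one the batch needs.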
Step 3: The Permission Checkpoint (toolExecution.ts)

Before we actually touch the system, we stop at the checkpoint.
// services/tools/toolExecution.ts (Simplified)
async function checkPermissionsAndCallTool(tool, input) {
  // 1. Ask the Permission System (Hooks/User Settings)
  const decision = await resolveHookPermissionDecision(tool, input)
  // 2. If the user said "No", stop immediately
  if (decision.behavior !== 'allow') {
    return "Error: User denied this action."
  }
  // 3. User said "Yes", proceed to execution
  return await executeTool(tool, input)
}
Explanation: This is where the popup "Allow Command?" happens. If the user rejects it, the function returns an error string to the AI, so the AI knows it wasn't allowed to do that.
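Here is a runnable sketch of that gate, with a hard-coded deny list standing in for the real hooks and user settings (both the deny list and the helper names are assumptions for illustration):

```typescript
type Decision = { behavior: "allow" | "deny" }

// Assumption: pretend the user has blocked shell commands.
const blockedTools = new Set(["Bash"])

// Stand-in for the real permission resolver.
async function resolvePermission(toolName: string): Promise<Decision> {
  return blockedTools.has(toolName)
    ? { behavior: "deny" }
    : { behavior: "allow" }
}

// The gate: only run the tool if the decision is "allow".
async function gate(
  toolName: string,
  run: () => Promise<string>
): Promise<string> {
  const decision = await resolvePermission(toolName)
  if (decision.behavior !== "allow") {
    return "Error: User denied this action." // the AI sees this as the tool result
  }
  return run()
}
```

The key design choice is that a denial is not an exception: it is an ordinary result string, so the conversation keeps flowing and the AI can plan around the refusal.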
Step 4: Execution and Error Capture (toolExecution.ts)

Finally, we run the tool logic and format the output.
// services/tools/toolExecution.ts (Simplified)
async function executeTool(tool, input) {
  try {
    // Run the actual function (e.g., fs.readFile)
    const result = await tool.call(input)
    // Log success for telemetry
    logEvent('tool_success', { toolName: tool.name })
    // Return the data
    return result
  } catch (error) {
    // If the file doesn't exist, tell the AI
    return `Error: ${error.message}`
  }
}
Explanation: This calls the specific tool class (like FileReadTool). It captures the output (or error) and sends it back up the chain to be given to the AI.
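The error-to-string contract can be demonstrated in isolation. Here safeCall is a hypothetical stand-in for executeTool's try/catch, and the failing tool simulates a read of a missing file:

```typescript
type Tool = { name: string; call: (input: unknown) => Promise<string> }

// Stand-in for executeTool's error handling.
async function safeCall(tool: Tool, input: unknown): Promise<string> {
  try {
    return await tool.call(input)
  } catch (error) {
    // Failures become plain strings the AI can read and react to
    return `Error: ${(error as Error).message}`
  }
}

// A tool whose underlying call always fails, like reading a missing file
const missingFile: Tool = {
  name: "readFile",
  call: async () => {
    throw new Error("ENOENT: no such file or directory")
  },
}
```

Because the error comes back as a normal result, the AI can respond sensibly, for example by creating the file or asking the user where it lives, instead of the whole pipeline crashing.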
We have given our AI hands!
Now the AI can converse, remember, manage its context, and execute standard tools. But what if we want to connect to external tools provided by other applications, like a database client or a Slack integration?
We need a standard protocol for connecting to outside tools.
Next Chapter: Model Context Protocol (MCP)
Generated by Code IQ