In the previous chapter, Output Styling and Persona, we gave the AI a voice and a personality. We decided whether it should act like a strict engineer or a patient teacher.
Now that the AI has a Brain (Chapter 1) and a Voice (Chapter 2), we need to give it Hands.
However, giving an AI "hands" (the ability to run shell commands, edit files, or search the web) is risky. What if it tries to read a 10GB movie file as text? What if it tries to spawn a copy of itself infinitely until your computer crashes?
This chapter introduces Tool Governance: the safety protocols, weight limits, and security badges that keep the AI helpful but harmless.
Imagine the AI is a worker in a digital workshop. Some materials are safe to handle and some are not: a plain text file can be read directly, but a .png image is not. Trying to "read" an image as text results in gibberish.
We don't give the AI access to every tool all the time. We use Sets (lists of unique items) to define boundaries.
In tools.ts, we explicitly define what is allowed for different modes.
If an AI agent creates another agent, and that agent creates another agent, we get an infinite loop. We prevent this using a "Denylist."
// From tools.ts
import { AGENT_TOOL_NAME } from '../tools/AgentTool/constants.js'
// A list of tools that are FORBIDDEN for sub-agents
export const ALL_AGENT_DISALLOWED_TOOLS = new Set([
// Don't let an agent hire another agent!
AGENT_TOOL_NAME,
// Don't let a sub-agent stop the main task!
'task_stop_tool'
])
Explanation: If a sub-agent tries to use the agent_tool, the system checks this list, sees it is forbidden, and blocks the request.
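A minimal sketch of how such a check might be wired up. The `isAllowedForSubAgent` helper is hypothetical (the chapter does not show the real dispatch code), and we assume `AGENT_TOOL_NAME` resolves to the string `'agent_tool'`:

```typescript
// Condensed copy of the denylist from tools.ts (values assumed for illustration).
const ALL_AGENT_DISALLOWED_TOOLS = new Set(['agent_tool', 'task_stop_tool'])

// Denylist semantics: everything is allowed unless explicitly forbidden.
function isAllowedForSubAgent(toolName: string): boolean {
  return !ALL_AGENT_DISALLOWED_TOOLS.has(toolName)
}

console.log(isAllowedForSubAgent('file_read_tool')) // true
console.log(isAllowedForSubAgent('agent_tool'))     // false: no recursive spawning
```

Because this is a denylist, any tool added to the system in the future is automatically available to sub-agents until someone explicitly forbids it.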
Large Language Models (LLMs) have a "Context Window": a limit on how much text they can remember. If a tool returns 500 pages of text, the AI forgets the original instruction.
We define these limits in toolLimits.ts.
// From toolLimits.ts
// Max characters allowed before we cut the output off
export const DEFAULT_MAX_RESULT_SIZE_CHARS = 50_000
// Max tokens (roughly 4 characters each) allowed
export const MAX_TOOL_RESULT_TOKENS = 100_000
Explanation: If the AI runs a command that outputs 1,000,000 characters, the system intercepts it, saves it to a file, and only shows the AI a preview. This protects the AI's "Brain."
The AI works with text. If it tries to read a binary file (like an image or executable), the output looks like a stream of garbled symbols. This confuses the AI.
We use files.ts to detect these files before reading them.
// From files.ts
export const BINARY_EXTENSIONS = new Set([
'.png', '.jpg', '.gif', // Images
'.exe', '.bin', // Executables
'.zip', '.pdf' // Archives & documents
])
export function hasBinaryExtension(filePath: string): boolean {
const ext = filePath.slice(filePath.lastIndexOf('.')).toLowerCase()
return BINARY_EXTENSIONS.has(ext)
}
Explanation: Before the fs.readFile tool runs, we check the extension. If it's on this list, we tell the AI: "I cannot read this file as text."
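As a usage sketch, the guard sits in front of the actual read. The `guardedRead` wrapper and its refusal message are illustrative, not from the source:

```typescript
// Condensed copy of the files.ts logic plus a hypothetical wrapper.
const BINARY_EXTENSIONS = new Set([
  '.png', '.jpg', '.gif', '.exe', '.bin', '.zip', '.pdf'
])

function hasBinaryExtension(filePath: string): boolean {
  const dot = filePath.lastIndexOf('.')
  if (dot === -1) return false // no extension: treat as text
  return BINARY_EXTENSIONS.has(filePath.slice(dot).toLowerCase())
}

// Hypothetical guard in front of the real file read.
function guardedRead(filePath: string, read: (p: string) => string): string {
  if (hasBinaryExtension(filePath)) {
    return `I cannot read ${filePath} as text.`
  }
  return read(filePath)
}

console.log(guardedRead('photo.png', () => 'raw bytes')) // refusal message
console.log(guardedRead('notes.txt', () => 'hello'))     // "hello"
```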
Here is what happens when the AI tries to use a tool. Let's look at the code that enforces each of these rules in turn.
Async agents are background workers. We want them to code, but not manage the project lifecycle.
// From tools.ts
export const ASYNC_AGENT_ALLOWED_TOOLS = new Set([
'file_read_tool', // Yes: Read code
'file_edit_tool', // Yes: Fix bugs
'web_search_tool', // Yes: Look up docs
'grep_tool' // Yes: Search text
])
Explanation: This is an Allowlist. If a tool is not on this list, the background agent cannot use it. Notice that task_stop_tool is missing: the background worker cannot decide to quit the job.
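The allowlist check is the mirror image of the denylist check. A minimal sketch (the `canAsyncAgentUse` helper name is an assumption, not from the source):

```typescript
// Condensed copy of the allowlist from tools.ts.
const ASYNC_AGENT_ALLOWED_TOOLS = new Set([
  'file_read_tool', 'file_edit_tool', 'web_search_tool', 'grep_tool'
])

// Allowlist semantics: nothing is permitted unless explicitly listed.
function canAsyncAgentUse(toolName: string): boolean {
  return ASYNC_AGENT_ALLOWED_TOOLS.has(toolName)
}

console.log(canAsyncAgentUse('grep_tool'))      // true
console.log(canAsyncAgentUse('task_stop_tool')) // false
```

The safety default differs from the denylist: a denylist fails open (new tools are allowed until someone forbids them), while an allowlist fails closed (new tools are blocked until someone approves them), which is why the more restricted background agents get an allowlist.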
Sometimes a file ends in .txt but contains binary garbage. We inspect the actual bytes (buffer) to be sure.
// From files.ts
export function isBinaryContent(buffer: Buffer): boolean {
// Check the first few bytes
const checkSize = Math.min(buffer.length, 8192)
for (let i = 0; i < checkSize; i++) {
const byte = buffer[i]
// If we find a "Null Byte" (0), it's definitely binary
if (byte === 0) return true
}
return false
}
Explanation: Text files rarely contain a "Null Byte" (a byte with value 0). If we see one, we assume the file is binary (like an image or compiled code) and stop the read operation.
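In practice the two checks complement each other: the extension check is cheap but trusts the filename, while the byte check is thorough but requires reading data. A sketch combining them (`isProbablyBinary` is an illustrative name, not from the source):

```typescript
// Condensed copies of the files.ts checks.
const BINARY_EXTENSIONS = new Set(['.png', '.jpg', '.exe', '.zip'])

function isBinaryContent(buffer: Buffer): boolean {
  const checkSize = Math.min(buffer.length, 8192)
  for (let i = 0; i < checkSize; i++) {
    if (buffer[i] === 0) return true // null byte => almost certainly binary
  }
  return false
}

// Hypothetical combined check: filename first (cheap), then bytes (thorough).
function isProbablyBinary(filePath: string, buffer: Buffer): boolean {
  const dot = filePath.lastIndexOf('.')
  const ext = dot === -1 ? '' : filePath.slice(dot).toLowerCase()
  return BINARY_EXTENSIONS.has(ext) || isBinaryContent(buffer)
}

console.log(isProbablyBinary('notes.txt', Buffer.from('hello')))      // false
console.log(isProbablyBinary('notes.txt', Buffer.from([0x68, 0x00]))) // true
```

This catches the mislabeled case the chapter mentions: a file named notes.txt whose bytes contain a null byte is still treated as binary.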
When a tool returns a massive result, we need to make a decision based on the constants in toolLimits.ts.
// Pseudo-code implementation logic
function handleToolResult(output: string) {
// Check against our constant
if (output.length > DEFAULT_MAX_RESULT_SIZE_CHARS) {
const path = saveToTempFile(output)
return `Output too large (${output.length} chars). ` +
`Content saved to ${path} for you to read via grep.`
}
return output
}
Explanation: This logic ensures the DEFAULT_MAX_RESULT_SIZE_CHARS limit we defined earlier is actually enforced. The AI gets a pointer to the data instead of the raw data itself.
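The character limit has a token-based counterpart in MAX_TOOL_RESULT_TOKENS. A rough sketch using the common ~4 characters-per-token heuristic (the helper names are illustrative; a real system would use a proper tokenizer):

```typescript
const MAX_TOOL_RESULT_TOKENS = 100_000

// Crude estimate: roughly 4 characters per token for typical English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}

function exceedsTokenBudget(text: string): boolean {
  return estimateTokens(text) > MAX_TOOL_RESULT_TOKENS
}

console.log(exceedsTokenBudget('x'.repeat(1_000)))   // false
console.log(exceedsTokenBudget('x'.repeat(500_000))) // true
```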
In this chapter, we learned:
- Denylists (ALL_AGENT_DISALLOWED_TOOLS) to prevent dangerous behaviors like recursive agent spawning.
- BINARY_EXTENSIONS and content inspection to prevent the AI from choking on non-text files.
- DEFAULT_MAX_RESULT_SIZE_CHARS to prevent data floods from overwhelming the AI's context window.

This system ensures that the AI's "hands" are strong but safe.
Now that the AI has a Brain, a Voice, and Safe Hands, it needs a way to communicate its thoughts and tool requests to the system. We don't just paste raw text back and forth; we use a structured protocol.
Next Chapter: XML Messaging Protocol
Generated by Code IQ