In the previous chapter, Channel Gateway, we gave our bot ears to hear messages and a mouth to speak them. But right now, if you say "Hello", the bot just holds that message. It has no brain to process it.
In this chapter, we will build the Brain, technically known as the Agent Loop.
An AI agent isn't just a simple script that says if input == "hi" then print("hello"). It is a dynamic system that needs to coordinate several moving parts.
Think of the Agent Loop as the Conductor of an orchestra.
The Conductor's job is to take a request from the audience (the User), look at the sheet music (Context), wave the baton to let the LLM compose a plan, and signal the Tools to play their part.
If a user asks: "What is today's date?"
The LLM cannot know the date on its own, so instead of answering, it asks for the get_date tool. The loop executes the tool, which returns "2023-10-27", and feeds that result back to the LLM, which then writes the final answer. This back-and-forth process is the Loop.
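That exchange can be hand-traced as a list of chat messages. This is a sketch, assuming an OpenAI-style message format; the exact shapes nanobot uses may differ:

```python
# A hand-traced transcript of one loop cycle for "What is today's date?".
# Message shapes follow the common OpenAI-style chat format (an assumption).
conversation = [
    {"role": "user", "content": "What is today's date?"},
    # Cycle 1: the LLM replies with a tool call instead of text.
    {"role": "assistant", "tool_calls": [{"name": "get_date", "args": {}}]},
    # The loop executes the tool and feeds the result back.
    {"role": "tool", "name": "get_date", "content": "2023-10-27"},
    # Cycle 2: with the result in context, the LLM answers in plain text.
    {"role": "assistant", "content": "Today's date is October 27, 2023."},
]

# The final answer is the last plain-text assistant message.
final = [m for m in conversation if m["role"] == "assistant" and "content" in m][-1]
print(final["content"])
```

Notice that the tool result enters the conversation as just another message: that is the whole trick behind the loop.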
Before the Conductor can make a decision, it needs to understand the situation. Large Language Models (LLMs) are stateless: they don't remember what you said 5 seconds ago unless you send that text back to them every time.
We use a ContextBuilder to gather everything the bot needs to know.
In nanobot/agent/context.py, the ContextBuilder stitches these pieces together.
```python
# nanobot/agent/context.py
def build_messages(self, history, current_message, ...):
    messages = []

    # 1. System Prompt (Identity & Instructions)
    system_prompt = self.build_system_prompt()
    messages.append({"role": "system", "content": system_prompt})

    # 2. Conversation History (Short-term memory)
    messages.extend(history)

    # 3. The New Message
    messages.append({"role": "user", "content": current_message})

    return messages
```
Explanation:
- The System Prompt defines the bot's identity and standing instructions.
- The history gives the stateless LLM its short-term memory of the conversation.
- The new message goes last, so the LLM sees it with full context.
Because the LLM is stateless, this list is rebuilt from scratch on every single request.
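Here is a minimal, standalone sketch of the same idea. The `ContextBuilder` below is a simplified stand-in for illustration, not nanobot's full class:

```python
class ContextBuilder:
    """Simplified stand-in for nanobot's ContextBuilder (illustrative only)."""

    def build_system_prompt(self):
        return "You are nanobot, a helpful assistant."

    def build_messages(self, history, current_message):
        # System prompt first, then history, then the new user message.
        messages = [{"role": "system", "content": self.build_system_prompt()}]
        messages.extend(history)  # short-term memory
        messages.append({"role": "user", "content": current_message})
        return messages


ctx = ContextBuilder()
history = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
]
msgs = ctx.build_messages(history, "What is today's date?")
print([m["role"] for m in msgs])  # → ['system', 'user', 'assistant', 'user']
```

The ordering matters: most chat APIs expect exactly this system-first, newest-last layout.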
Now let's look at the heart of the engine: nanobot/agent/loop.py.
The loop is designed to run in cycles. Why? Because sometimes one tool isn't enough. The bot might need to:
- Call a tool (e.g., a web search),
- Read the result,
- Call another tool based on that result,
- And only then write the final answer.
Here is what happens inside the AgentLoop class.
This is the most critical logic in the bot. It keeps asking the LLM "What's next?" until the LLM says "I'm done."
```python
# nanobot/agent/loop.py
async def _run_agent_loop(self, messages):
    iteration = 0

    # Keep going until we hit a limit (e.g., 20 steps)
    while iteration < self.max_iterations:
        iteration += 1

        # 1. Ask the LLM what to do
        response = await self.provider.chat(messages, tools=self.tools)

        # 2. Did the LLM ask to use a tool?
        if response.has_tool_calls:
            # Execute the tool (e.g., read_file, web_search)
            for tool_call in response.tool_calls:
                result = await self.tools.execute(tool_call.name, tool_call.args)

                # 3. Add the result to the message list for the next loop
                messages = self.context.add_tool_result(messages, result)
        else:
            # 4. No tool needed? Then this is the final answer.
            return response.content
```
Explanation:
- `provider.chat`: This sends the data to the AI (covered in LLM Provider Abstraction).
- `has_tool_calls`: The AI didn't reply with text; it replied with a JSON command like `{"name": "web_search", "args": "weather"}`.
- `tools.execute`: We run the Python function requested (covered in Tooling System).
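To see the loop "think" for two cycles, we can run the same logic against a fake provider. Everything here (the `FakeProvider`, the `Response` shape, the canned tool result) is a stand-in for demonstration, not nanobot's real API:

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class Response:
    """Stand-in for an LLM reply: either text or tool calls."""
    content: str = ""
    tool_calls: list = field(default_factory=list)

    @property
    def has_tool_calls(self):
        return bool(self.tool_calls)


@dataclass
class ToolCall:
    name: str
    args: dict


class FakeProvider:
    """Replies with a tool call on turn 1, then a final text answer."""

    def __init__(self):
        self.turn = 0

    async def chat(self, messages, tools=None):
        self.turn += 1
        if self.turn == 1:
            return Response(tool_calls=[ToolCall("get_date", {})])
        return Response(content="Today is 2023-10-27.")


async def run_agent_loop(provider, messages, max_iterations=20):
    for _ in range(max_iterations):
        response = await provider.chat(messages)
        if response.has_tool_calls:
            for call in response.tool_calls:
                result = "2023-10-27"  # pretend we executed the tool
                messages.append({"role": "tool", "name": call.name, "content": result})
        else:
            # No tool requested: this is the final answer.
            return response.content


answer = asyncio.run(run_agent_loop(FakeProvider(), [{"role": "user", "content": "Date?"}]))
print(answer)  # → Today is 2023-10-27.
```

Cycle 1 produces a tool call, cycle 2 produces text, and the loop exits: the same shape as the real `_run_agent_loop`.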
In Chapter 1, we learned that the Channel Gateway puts messages onto a MessageBus. The Agent Loop needs to take them off that bus.
This happens in the run() method of the Agent Loop. It acts as a permanent listener.
```python
# nanobot/agent/loop.py
async def run(self):
    self._running = True
    while self._running:
        # 1. Wait for a message from the Gateway
        msg = await self.bus.consume_inbound()

        # 2. Process it (Run the loop we saw above)
        response = await self._process_message(msg)

        # 3. Send the reply back to the Gateway
        if response:
            await self.bus.publish_outbound(response)
```
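This consume-process-publish cycle can be exercised with a toy bus built on `asyncio.Queue`. This `MessageBus` is a sketch for illustration; nanobot's real bus may look different:

```python
import asyncio


class MessageBus:
    """Toy bus with inbound/outbound queues (a sketch, not nanobot's real class)."""

    def __init__(self):
        self.inbound = asyncio.Queue()
        self.outbound = asyncio.Queue()

    async def consume_inbound(self):
        # Blocks until the Gateway puts a message on the queue.
        return await self.inbound.get()

    async def publish_outbound(self, msg):
        await self.outbound.put(msg)


async def echo_agent(bus):
    # Simplified run(): take one message, "process" it, reply.
    msg = await bus.consume_inbound()
    await bus.publish_outbound(f"You said: {msg}")


async def main():
    bus = MessageBus()
    await bus.inbound.put("Hello")  # the Gateway would do this
    await echo_agent(bus)
    return await bus.outbound.get()


print(asyncio.run(main()))  # → You said: Hello
```

The queues decouple the Gateway from the Agent Loop: neither side ever calls the other directly.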
Explanation:
- `consume_inbound()` gives it work: the loop waits here, idle, until the Gateway delivers a message.

Sometimes a task is too big for one loop. For example: "Research the history of Rome and write a report." This might take 50 steps. We don't want to block the user from saying "Stop!" while that happens.
nanobot supports Subagents. These are mini-loops that run in the background.
```python
# nanobot/agent/subagent.py
async def spawn(self, task):
    # Create a background task (Fire and Forget)
    asyncio.create_task(self._run_subagent(task))
    return "I started a subagent to handle that for you."
```
When the Subagent finishes, it sends a special "System Message" back into the main loop saying: "Hey, I finished the report, here it is."
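The fire-and-forget pattern plus the "System Message" hand-off can be sketched end to end. The names here (`run_subagent`, `announce`, the message text) are hypothetical, chosen for the example:

```python
import asyncio


async def run_subagent(task, announce):
    # Stand-in for the long-running work (research, writing, etc.).
    await asyncio.sleep(0)
    # Report back to the main loop via a "System Message".
    await announce(f"[system] Subagent finished: {task}")


async def main():
    inbox = asyncio.Queue()  # stand-in for the main loop's message bus

    async def announce(text):
        await inbox.put(text)

    # Fire and forget: the main loop stays responsive while the subagent works.
    job = asyncio.create_task(run_subagent("write the Rome report", announce))
    print("I started a subagent to handle that for you.")

    await job
    return await inbox.get()


result = asyncio.run(main())
print(result)  # → [system] Subagent finished: write the Rome report
```

Because `create_task` returns immediately, the user can keep chatting (or say "Stop!") while the subagent grinds through its 50 steps.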
The Agent Loop is the bridge between raw text and intelligent action.
Currently, our loop calls self.provider.chat. But what exactly is provider? How do we switch between OpenAI, Anthropic, or a local Llama model without rewriting our loop?
We will discover that in the next chapter: LLM Provider Abstraction.
Generated by Code IQ