Welcome to the first chapter of the AionUi developer guide!
In this series, we will explore how AionUi builds powerful AI interfaces. We start with the most critical component: the Agent Manager.
Imagine you are a Client (the User Interface) who wants to build a website. You could try to talk directly to the raw AI models (like Gemini or Codex), acting as the Specialists. However, raw AI models are like brilliant but disorganized geniuses: left alone, they forget context between steps and will happily take risky actions without asking.
You need a middleman. You need a Project Manager.
In AionUi, this Project Manager is the Agent Task Orchestration layer. It sits between the UI and the raw AI. It hires the right specialist, manages the project files (Context), remembers the history (State), and asks you for permission before doing anything risky (Approvals).
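To make those responsibilities concrete, here is a hedged sketch of that contract as a TypeScript interface with a trivial in-memory implementation. The names (`AgentManager`, `TaskStatus`, `EchoManager`) are illustrative, not AionUi's actual API:

```typescript
// Hypothetical sketch of the Project Manager's responsibilities.
// These names are assumptions for illustration, not AionUi's real types.
type TaskStatus = 'pending' | 'running' | 'finished' | 'error';

interface AgentManager {
  status: TaskStatus;                                // state the UI can observe
  sendMessage(input: string): Promise<void>;         // route work to the specialist
  injectHistory(messages: string[]): void;           // restore prior context
  requestApproval(action: string): Promise<boolean>; // gate risky operations
}

// A trivial in-memory implementation, just to show the shape of the contract.
class EchoManager implements AgentManager {
  status: TaskStatus = 'pending';
  history: string[] = [];

  async sendMessage(input: string): Promise<void> {
    this.status = 'running';
    this.history.push(input); // a real manager would forward this to the AI
    this.status = 'finished';
  }

  injectHistory(messages: string[]): void {
    this.history.push(...messages);
  }

  async requestApproval(_action: string): Promise<boolean> {
    return true; // a real manager would ask the user (or auto-approve in YOLO mode)
  }
}
```

The point of the interface is separation of concerns: the UI only ever talks to this contract, never to a raw model.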
Let's look at a simple scenario we want to solve in this chapter:
The user asks the AI to create a simple file named `hello.py`. The request travels from the UI to the AI and back, and the Agent Manager handles the orchestration steps in between: receiving the message, confirming the tool call, and streaming the result.
The Agent Manager is a class (like GeminiAgentManager or CodexAgentManager) that wraps the raw AI. It ensures the AI behaves like a reliable employee.
When the UI sends a message, it doesn't go to the cloud immediately. It goes to the Manager, which ensures the "office is open" (the agent is bootstrapped) before passing the note along.
```typescript
// Conceptual usage inside the Manager
async sendMessage(data) {
  // 1. Tell the UI we are busy working
  this.status = 'running';
  // 2. Ensure the agent is fully loaded (bootstrapped)
  await this.bootstrap;
  // 3. Actually send the data to the AI logic
  await super.sendMessage(data);
}
```
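The `await this.bootstrap` line relies on a common pattern: store the initialization promise once, and have every caller await the same promise so setup runs exactly once. A minimal sketch of that pattern, with assumed names (`LazyAgent`, `init`):

```typescript
// Sketch of the "store the init promise" pattern behind `await this.bootstrap`.
// LazyAgent and its internals are assumptions for illustration.
class LazyAgent {
  readonly bootstrap: Promise<void>;
  private initCount = 0;

  constructor() {
    // Kick off initialization once; every caller awaits this same promise.
    this.bootstrap = this.init();
  }

  private async init(): Promise<void> {
    this.initCount += 1; // stand-in for loading models, auth, config, etc.
  }

  async sendMessage(_data: string): Promise<number> {
    await this.bootstrap; // safe to call many times; init still runs only once
    return this.initCount;
  }
}
```

Even if the UI fires several messages before setup finishes, they all wait on the same one-time initialization.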
The Manager sets its status to `running`, which starts a spinner in the UI, and eventually streams the response back.

Sometimes, you trust the AI completely. We call this YOLO Mode (You Only Live Once). The Manager checks this setting before bothering the user.
```typescript
// Inside GeminiAgentManager.ts
private tryAutoApprove(content) {
  // Check if we are in "YOLO" mode
  if (this.currentMode === 'yolo') {
    console.log("Auto-approving operation!");
    // Automatically say "Yes" to the tool execution
    this.postMessagePromise(content.callId, 'ProceedOnce');
    return true;
  }
  return false; // Otherwise, we must ask the User
}
```
If `currentMode` is `'yolo'`, the Manager acts on your behalf and instantly approves the tool. If not, it halts and waits for the UI.

How does a click in the UI travel all the way to the Agent Manager? Let's visualize the flow.
The entry point for all orchestration is conversationBridge.ts. This acts as the receptionist. It receives requests from the frontend and routes them to the correct Manager.
We use the IPC Bridge here, which is covered in detail in Chapter 5: IPC Bridge (Inter-Process Communication).
```typescript
// src/process/bridge/conversationBridge.ts
// The bridge listens for the 'sendMessage' event from the UI
ipcBridge.conversation.sendMessage.provider(async ({ conversation_id, input }) => {
  // 1. Find the specific Project Manager (Task) for this conversation
  const task = WorkerManage.getTaskById(conversation_id);
  if (!task) return { success: false, msg: 'Manager not found' };
  // 2. Hand over the message to the Manager
  // The bridge doesn't care HOW the task is done, just that it gets assigned.
  await task.sendMessage({ input });
  return { success: true };
});
```
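Conceptually, `WorkerManage.getTaskById` can be pictured as a simple registry keyed by conversation id. This is a sketch of the lookup idea only, not AionUi's actual `WorkerManage` implementation, which also manages task lifecycle:

```typescript
// Minimal registry sketch: look up a running Manager by conversation id.
// TaskRegistry is a hypothetical name used for illustration.
interface Task {
  sendMessage(data: { input: string }): Promise<void>;
}

class TaskRegistry {
  private tasks = new Map<string, Task>();

  register(conversationId: string, task: Task): void {
    this.tasks.set(conversationId, task);
  }

  getTaskById(conversationId: string): Task | undefined {
    return this.tasks.get(conversationId);
  }
}
```

A `Map` keyed by `conversation_id` is what lets one bridge serve many parallel conversations, each with its own Manager.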
The Specialist Manager (GeminiAgentManager)
Once the bridge hands off the task, the specific Manager takes over. In GeminiAgentManager.ts, we handle the specific quirks of the Gemini AI.
State & History Injection: Before the AI answers, the Manager checks if there is old history (chat logs) that needs to be loaded so the AI remembers context.
```typescript
// src/process/task/GeminiAgentManager.ts
private async injectHistoryFromDatabase(): Promise<void> {
  // 1. Get recent messages from the database
  const messages = await db.getMessages(this.conversation_id);
  // 2. Format them for the AI
  const formattedHistory = this.formatForGemini(messages);
  // 3. Load them into the AI's memory
  await this.model.injectHistory(formattedHistory);
}
```
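What might that formatting step look like? The Gemini API expects chat history as entries with a `role` (`'user'` or `'model'`) and a list of `parts`. Here is a hedged sketch of such a conversion; the `DbMessage` shape is an assumption for illustration, not AionUi's actual database schema:

```typescript
// Hypothetical sketch of "format them for the AI": converting stored rows
// into Gemini-style { role, parts } history entries.
interface DbMessage {
  sender: 'user' | 'assistant'; // assumed database schema
  text: string;
}

interface GeminiContent {
  role: 'user' | 'model';
  parts: { text: string }[];
}

function formatForGemini(messages: DbMessage[]): GeminiContent[] {
  return messages.map((m) => ({
    // Gemini's chat format uses 'model' for the assistant's turns
    role: m.sender === 'user' ? 'user' : 'model',
    parts: [{ text: m.text }],
  }));
}
```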
Tool Confirmation: When the AI wants to use a tool (covered in Chapter 3: Tools & Skills Framework), the Manager intercepts the request.
```typescript
// src/process/task/GeminiAgentManager.ts
// Called when the AI initiates a tool call
private handleConformationMessage(message) {
  // 1. Check if we can auto-approve (YOLO mode)
  if (this.tryAutoApprove(message)) return;
  // 2. If not, prepare a confirmation request for the UI
  this.addConfirmation({
    id: message.callId,
    title: "Allow Execution?",
    description: message.command,
    options: ["Yes", "Always Allow", "No"]
  });
}
```
This pauses the AI's execution flow until the user responds via the UI.
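One common way to implement such a pause is to park each pending tool call behind a promise keyed by its `callId`, and resolve that promise when the UI answers. This is a sketch of the general technique under assumed names (`ConfirmationGate`, `Verdict`), not AionUi's actual code:

```typescript
// Sketch: pausing a tool call until the user responds.
// `pending` maps a callId to the resolver of the promise the agent is awaiting.
type Verdict = 'ProceedOnce' | 'ProceedAlways' | 'Cancel';

class ConfirmationGate {
  private pending = new Map<string, (v: Verdict) => void>();

  // Called when the AI requests a tool: returns a promise that stays
  // unresolved until the user clicks a button in the UI.
  waitForUser(callId: string): Promise<Verdict> {
    return new Promise((resolve) => this.pending.set(callId, resolve));
  }

  // Called (e.g. via the IPC bridge) when the user responds.
  respond(callId: string, verdict: Verdict): void {
    const resolve = this.pending.get(callId);
    if (resolve) {
      resolve(verdict);
      this.pending.delete(callId);
    }
  }
}
```

Because the agent simply `await`s the returned promise, no polling is needed: execution resumes the instant the user's click arrives over the bridge.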
In this chapter, we learned how the Agent Manager acts as the Project Manager between the UI and the raw AI: it bootstraps the agent before work begins, tracks status and history, injects past context, and intercepts tool calls for user approval (or auto-approves them in YOLO mode).
Now that we know who manages the agents, we need to understand how we talk to different types of agents (like Gemini vs. Codex).
Next, we will look at Agent Protocol Adapters to see how AionUi translates a common language into specific AI protocols.
Next Chapter: Agent Protocol Adapters
Generated by Code IQ