Welcome to deer-flow! We are about to build an AI system that doesn't just "chat"; it works.
In a standard chatbot, you type text, and the AI sends text back. But when an AI needs to write complex software, run terminal commands, or coordinate with other agents, a simple text bubble isn't enough. We need a cockpit.
This chapter introduces the Frontend Workspace, the user's interface. It translates complex backend events into a human-readable dashboard using specialized AI Elements.
Imagine you ask an AI: "Build a Snake game in Python."
A standard chatbot would dump 200 lines of code into the chat window. It's messy, hard to copy, and you don't know if the code actually works.
deer-flow solves this by separating the "Talk" from the "Work."
Throughout this chapter, we will visualize a single request:
User: "Research the history of coffee and save it as a text file."
To handle this, our Workspace needs to render three distinct things dynamically:
1. The conversation itself (the user's request and the assistant's replies).
2. The subtasks the agent runs along the way (for example, the research step).
3. The generated coffee_history.txt file.
Let's look at the React components that make this possible.
This is the skeleton of our application. It holds the header (navigation) and the body (where the conversation happens).
Simplified Code (workspace-container.tsx):
import type React from "react";

// The main wrapper for the page
export function WorkspaceContainer({ children }: { children: React.ReactNode }) {
  return (
    // A full-screen flex column layout
    <div className="flex h-screen w-full flex-col">
      {children}
    </div>
  );
}
Explanation: This is simply the "frame" of the application. It ensures the workspace takes up the full screen height.
The MessageList is the most critical component. It doesn't just map through messages and print text. It inspects the type of message and decides which "AI Element" to render.
Simplified Logic (message-list.tsx):
// Inside MessageList component
{groupMessages(messages, (group) => {
  // 1. If it's a standard chat message
  if (group.type === "human" || group.type === "assistant") {
    return <MessageListItem message={group.messages[0]} />;
  }
  // 2. If the AI is performing a subtask (calling a tool)
  else if (group.type === "assistant:subagent") {
    // Render specific cards for tasks
    return <SubtaskCardWrapper messages={group.messages} />;
  }
  // ... handling other types below
})}
Explanation: We group messages logically. If the backend sends a "tool call" (like a search request), the Frontend sees assistant:subagent and knows not to render a text bubble, but a Subtask Card instead.
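The grouping step itself can be sketched as a pure function: consecutive messages that share a type collapse into one group, so the renderer picks one component per group. This is an assumed, simplified version of groupMessages, not deer-flow's actual implementation:

```typescript
// Illustrative message and group shapes (names are assumptions).
interface Msg {
  type: string; // e.g. "human", "assistant", "assistant:subagent"
  content: string;
}

interface Group {
  type: string;
  messages: Msg[];
}

// Collapse consecutive messages of the same type into one group.
function groupMessages(messages: Msg[]): Group[] {
  const groups: Group[] = [];
  for (const msg of messages) {
    const last = groups[groups.length - 1];
    if (last && last.type === msg.type) {
      last.messages.push(msg); // extend the current group
    } else {
      groups.push({ type: msg.type, messages: [msg] }); // start a new one
    }
  }
  return groups;
}
```

With this shape, two back-to-back tool calls render as a single Subtask Card instead of two separate bubbles.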
Modern AI models "think" before they answer. We want users to see this process to build trust, but we don't want it cluttering the screen. We use a collapsible component.
Simplified Code (chain-of-thought.tsx):
import type React from "react";
import { useState } from "react";

export const ChainOfThought = ({ children }: { children: React.ReactNode }) => {
  // State to toggle visibility
  const [isOpen, setIsOpen] = useState(false);
  return (
    <div className="border rounded-md p-2">
      <button onClick={() => setIsOpen(!isOpen)}>
        {isOpen ? "Hide Thoughts" : "Show Chain of Thought"}
      </button>
      {isOpen && <div className="text-gray-500">{children}</div>}
    </div>
  );
};
Explanation: In the real app, this is styled beautifully with animations. Functionally, it allows the user to peek into the Agent's "Internal Monologue" (which we will cover in Lead Agent & Orchestration).
When our Lead Agent delegates work to a specialist (like a "Coder" or "Researcher"), the workspace renders a card to show progress.
Simplified Code (message-list.tsx logic):
// Inside the assistant:subagent logic block
if (toolCall.name === "task") {
  // Update the UI to show a task is running
  return (
    <div className="card bg-gray-100 p-4 rounded">
      <p>Executing Subtask: {toolCall.args.description}</p>
      <Badge>In Progress</Badge>
    </div>
  );
}
Explanation: This gives the user immediate feedback. Instead of staring at a blinking cursor, they see "Searching Google..." or "Writing File...".
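Those friendly status strings can come from a simple lookup on the tool name. A minimal sketch, where the tool names and labels are illustrative assumptions rather than deer-flow's real tool registry:

```typescript
// Map internal tool names to user-facing status labels.
// These tool names are assumptions for illustration.
const TOOL_LABELS: Record<string, string> = {
  web_search: "Searching the web...",
  write_file: "Writing file...",
  task: "Executing subtask...",
};

// Fall back to a generic label for tools we haven't mapped.
function statusLabel(toolName: string): string {
  return TOOL_LABELS[toolName] ?? `Running ${toolName}...`;
}
```

Keeping this mapping in one place means new tools automatically get a readable (if generic) label until someone writes a friendlier one.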
When the AI creates a file (like coffee_history.txt), we don't dump the text in the chat. We create an Artifact.
Simplified Code (artifact.tsx):
export const Artifact = ({ title, content }: { title: string, content: string }) => (
  <div className="shadow-lg border rounded-lg overflow-hidden">
    <div className="bg-gray-200 p-2 font-bold">{title}</div>
    <div className="p-4 bg-white">
      <pre>{content}</pre>
    </div>
  </div>
);
Explanation: The Artifact component treats generated content as a distinct object. Users can often download, copy, or preview these files separately from the conversation flow.
How does the Frontend know which element to show? It relies on the data stream coming from the Backend.
The Backend sends a stream of events. The Frontend accumulates these events into a "Thread State." The MessageList component reads this state and acts as a Dispatcher.
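The accumulation step can be pictured as a reducer folding each streamed event into the Thread State the UI reads from. A minimal sketch, assuming two illustrative event kinds (a complete message and a streamed token delta); the real event schema may differ:

```typescript
// Assumed event shapes for illustration.
type StreamEvent =
  | { kind: "message"; type: string; content: string } // a new message
  | { kind: "token"; content: string };                // delta for the latest message

interface ThreadState {
  messages: { type: string; content: string }[];
}

// Fold one backend event into the thread state.
function reduceEvent(state: ThreadState, event: StreamEvent): ThreadState {
  if (event.kind === "message") {
    // Append a brand-new message to the thread.
    return { messages: [...state.messages, { type: event.type, content: event.content }] };
  }
  // "token": append streamed text to the most recent message.
  const messages = [...state.messages];
  const last = messages[messages.length - 1];
  if (last) {
    messages[messages.length - 1] = { ...last, content: last.content + event.content };
  }
  return { messages };
}
```

Because the reducer is pure, React can re-render the MessageList from the new state after every event, which is what makes the UI feel "live" while the agent works.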
Let's look deeper into message-list.tsx to see how it handles the "Artifacts" specifically.
Code Deep Dive (message-list.tsx):
} else if (group.type === "assistant:present-files") {
  // The backend sent a specific message type indicating files were created
  const files: string[] = [];
  // Extract file paths from the message metadata
  for (const message of group.messages) {
    if (hasPresentFiles(message)) {
      files.push(...extractPresentFilesFromMessage(message));
    }
  }
  // Render the Artifact component
  return <ArtifactFileList files={files} />;
}
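The two helpers above are imported in the full file. A plausible sketch of their shape, assuming the file paths travel in message metadata (the real metadata format may differ):

```typescript
// Assumed message shape: file paths carried in metadata.
interface ChatMessage {
  type: string;
  metadata?: { presentFiles?: string[] };
}

// Guard: does this message carry file paths to present?
function hasPresentFiles(message: ChatMessage): boolean {
  return (message.metadata?.presentFiles?.length ?? 0) > 0;
}

// Pull the file paths out so ArtifactFileList can render them.
function extractPresentFilesFromMessage(message: ChatMessage): string[] {
  return message.metadata?.presentFiles ?? [];
}
```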
group.type: The system groups sequential messages. If the AI sends a specialized "I made a file" signal, the group type becomes assistant:present-files.
ArtifactFileList: This component (imported in the full file) takes the file paths and renders the UI box we defined in artifact.tsx.
In this chapter, we built the stage for our AI actors.
We introduced specialized AI Elements (ChainOfThought, SubtaskCard, Artifact) that make complex AI actions human-readable. By strictly separating these visual elements, deer-flow allows users to follow the AI's complex reasoning and file generation without getting lost in a wall of text.
Now that we have a visual interface, we need an intelligent "brain" to control it. Who decides when to show a subtask or create a file?
Next Chapter: Lead Agent & Orchestration
Generated by Code IQ