In the previous chapter, Skills (Knowledge Modules), we taught our individual agents how to do specific jobs—like using a specific testing framework or following a coding style.
Now we have a team of skilled workers. But if you have a team of 10 workers, you don't want to tap each one on the shoulder individually to give them a task. You want a Manager.
This brings us to Command Orchestration.
Imagine you want to build a new feature. If you are chatting with a standard AI, the conversation usually looks like this: you ask for a plan, review it, ask for the code, paste back the errors, ask for tests, then ask for a write-up. Every single step waits on you.

This is tedious. You are the bottleneck.
Command Orchestration allows you to wrap all those steps into a single "Macro." You wave the baton once, and the entire orchestra plays the symphony.
The `/lfg` Command
Let's look at the ultimate lazy developer command included in this plugin: /lfg (short for "Let's F*ing Go").
The goal of this command is to take a single sentence from you and perform the entire engineering lifecycle automatically.
Instead of baby-sitting the AI, you type this:
/lfg "Add a dark mode toggle to the settings page"
And then you walk away. When you come back, the plan is written, the code is written, the tests are passed, and the video demo is ready.
A Command in this system is just a Markdown file in the commands/ folder. It acts like a script for the AI.
Let's look at how /lfg is actually defined in the source code.
First, we give the command a name and a hint for the user.
```yaml
---
name: lfg
description: Full autonomous engineering workflow
argument-hint: "[feature description]"
disable-model-invocation: true
---
```
`disable-model-invocation: true` is a special flag. It tells the AI: "Don't think about this file. Just do exactly what it says." This makes the command faster and cheaper.

Below the metadata, we list the steps. Notice that this command is just a list of other commands!
```markdown
Run these slash commands in order:

1. `/workflows:plan $ARGUMENTS`
2. `/workflows:work`
3. `/workflows:review`
4. `/compound-engineering:test-browser`
5. `/compound-engineering:feature-video`
```
When you run `/lfg`, the Orchestrator reads this list. It passes your input (`$ARGUMENTS`) to the first step, then waits for that step to finish before starting step 2.
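That sequential dispatch can be sketched as a tiny runner. This is a hypothetical illustration, not the plugin's real API: `runMacro` and the injected `run` function stand in for however the host actually executes a slash command.

```typescript
// Sketch of sequential macro dispatch (hypothetical names).
// `run` stands in for however the host executes one slash command.
type RunFn = (command: string) => Promise<void>;

async function runMacro(steps: string[], args: string, run: RunFn): Promise<void> {
  for (const [index, step] of steps.entries()) {
    // Only the first step receives the user's raw input ($ARGUMENTS).
    const command = index === 0 && args ? `${step} ${args}` : step;
    await run(command); // wait for this step to finish before starting the next
  }
}
```

The key design point is the `await` inside the loop: step 2 cannot begin until step 1 has fully resolved, which is exactly the "one after another" behavior `/lfg` relies on.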
The /lfg command runs steps Sequentially (one after another). But sometimes, you want to go fast. You want Parallelism.
Let's look at the /workflows:compound command we introduced in Chapter 1. It uses a more complex structure to spin up multiple agents at the exact same time.
We use XML-like tags to tell the Orchestrator to split the work.
```xml
<parallel_tasks>
Launch these subagents IN PARALLEL:

1. **Context Analyzer**: Extract history.
2. **Solution Extractor**: Find the code fix.
3. **Prevention Strategist**: Write test ideas.
</parallel_tasks>
```
When the LLM sees `<parallel_tasks>`, it doesn't generate one response. It generates three separate streams of thought, acting as three different experts simultaneously.
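In orchestration terms, "three streams at once" is just concurrent task dispatch. Here is a sketch with hypothetical names; the real host spawns actual subagents rather than promises, but the fan-out/fan-in shape is the same:

```typescript
// Sketch: parallel subagent dispatch modeled as concurrent async tasks.
// All names here are illustrative, not the plugin's real API.
type Subagent = { name: string; run: () => Promise<string> };

async function runParallel(subagents: Subagent[]): Promise<Record<string, string>> {
  // Launch every subagent immediately, then wait for all of them together.
  const entries = await Promise.all(
    subagents.map(async (agent) => [agent.name, await agent.run()] as const),
  );
  return Object.fromEntries(entries);
}
```

Contrast this with the sequential loop used by `/lfg`: here nothing waits on anything else, so the slowest subagent determines the total time instead of the sum of all of them.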
That is all the Orchestrator needs to handle a complex command like /workflows:compound: fan the work out to the subagents, then gather their outputs into a single result. But how does the system actually find and load these commands in the first place? It happens in src/parsers/claude.ts.
The system scans the commands/ directory for any Markdown files.
```typescript
// src/parsers/claude.ts (Simplified)
async function loadCommands(commandsDirs: string[]) {
  // 1. Find all markdown files in commands/
  const files = await collectMarkdownFiles(commandsDirs);

  const commands = [];
  // 2. Loop through each file
  for (const file of files) {
    // ... processing logic (see next step)
  }
  return commands;
}
```
It separates the YAML frontmatter from the instructions.
```typescript
// Inside the loop...
const raw = await readText(file);
const { data, body } = parseFrontmatter(raw);

commands.push({
  name: data.name, // e.g., "lfg"
  argumentHint: data["argument-hint"],
  disableModelInvocation: data["disable-model-invocation"],
  body: body.trim(), // The instructions
});
```
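The tutorial doesn't show `parseFrontmatter` itself. A minimal version can be sketched like this, assuming only simple `key: value` pairs between the `---` fences; the real parser likely hands this off to a YAML library:

```typescript
// Minimal frontmatter splitter (sketch; real code may use a YAML library).
// Handles only flat `key: value` pairs between `---` fences.
function parseFrontmatter(raw: string): { data: Record<string, string>; body: string } {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!match) return { data: {}, body: raw }; // no frontmatter: whole file is the body

  const data: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx === -1) continue;
    // Trim whitespace and strip optional surrounding quotes from the value.
    data[line.slice(0, idx).trim()] = line.slice(idx + 1).trim().replace(/^"|"$/g, "");
  }
  return { data, body: raw.slice(match[0].length) };
}
```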
The plugin sends this list to the AI interface. Now, when you type /, the autocomplete menu shows /lfg and /workflows:compound because they were loaded into this array.
You might notice `disable-model-invocation: true` in the `/lfg` command. This flag is useful for "Macros" where you don't need the AI to be creative; you just need it to execute a list of actions rigorously.
Command Orchestration turns your AI from a Chatbot into a Workflow Engine.
We used:

- `/lfg` to chain multiple steps (Plan -> Code -> Test).
- `<parallel_tasks>` to run multiple agents at once.

You now have a system that can run complex engineering tasks. But there is one problem remaining: the AI works in text and code, but sometimes we need it to output to different formats, like a PDF report, a Jira ticket, or a structured JSON file.
In the next chapter, we will look at how to handle Multi-Target Conversion.
Generated by Code IQ