In the previous LLM Provider Abstraction chapter, we learned how to create "blueprints" (configurations) for our models. We have the settings, but we don't have a running application yet.
Now, we need to turn those blueprints into a real, functioning structure. This is the job of the Workflow Builder.
Imagine you just bought a complex piece of furniture from IKEA.
The Problem: The manual doesn't assemble the furniture for you; someone still has to do the wiring. If you were building an AI agent manually in Python, you would have to write code like this:
```python
# The "Manual" Way - Messy and Hard to Change
auth = AuthProvider(api_key="...")
llm = OpenAIClient(auth=auth, model="gpt-4")
database = PostgresDB(host="localhost")

# Manually passing dependencies everywhere...
agent = Agent(brain=llm, memory=database)
```
If you change the database or the LLM, you have to rewrite all this wiring code.
The Solution: The Workflow Builder. It acts like an automatic assembly robot. You feed it the instruction manual (Config), and it automatically creates the LLM, connects the database, and hands you a finished Workflow object.
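The "assembly robot" idea can be sketched in a few lines. The `SimpleBuilder` and `FakeLLM` classes below are invented for illustration, not the real toolkit API; they only show the pattern of turning a config blueprint into a live object:

```python
# Toy illustration of config-driven assembly (not the real NAT API).
# The "instruction manual" is a plain dict; the builder turns it into objects.

class FakeLLM:
    def __init__(self, model_name: str):
        self.model_name = model_name

class SimpleBuilder:
    # Maps a _type string to the function that builds that component.
    PROVIDERS = {"openai": lambda cfg: FakeLLM(cfg["model_name"])}

    def __init__(self, config: dict):
        self.config = config

    def get_llm(self, name: str):
        cfg = self.config["llms"][name]           # look up the blueprint
        return self.PROVIDERS[cfg["_type"]](cfg)  # call the registered provider

config = {"llms": {"my_gpt_brain": {"_type": "openai", "model_name": "gpt-4o"}}}
builder = SimpleBuilder(config)
llm = builder.get_llm("my_gpt_brain")
print(llm.model_name)  # gpt-4o
```

Note that the calling code never mentions API keys or client classes; swapping the database or LLM only means editing the config dict.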
To understand the Builder, we need to understand three simple ideas: the Config (the instruction manual), the Builder (the assembly robot that reads it), and the Workflow (the finished, runnable product).
Let's see how we use the Builder to assemble an agent without manually wiring everything.
First, recall that we have a YAML configuration (or Python object) defining our parts.
```yaml
# config.yaml
llms:
  my_gpt_brain:
    _type: openai
    model_name: "gpt-4o"

workflow:
  llm_name: my_gpt_brain
```
When writing your agent's logic, you don't import OpenAI. Instead, you ask the builder for the component by name.
```python
# This function defines your agent's logic
async def my_agent_logic(user_input: str, builder: Builder):
    # ASK the builder for the LLM defined in config
    llm = await builder.get_llm("my_gpt_brain")

    # Now use it!
    response = await llm.generate(user_input)
    return response
```
Explanation:
Notice builder.get_llm("my_gpt_brain"). We didn't specify API keys or URLs here. The Builder looks at the config, sees "my_gpt_brain", initializes the OpenAI client, and gives it to us.
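A common pattern behind this kind of name-based lookup is to build each component once and reuse it on later requests. The `CachingBuilder` below is a sketch of that pattern under that assumption; the real NAT Builder's internals may differ:

```python
# Sketch of name-based resolution with caching (a common builder pattern;
# invented here for illustration, not the real NAT Builder internals).
class CachingBuilder:
    def __init__(self, config: dict):
        self._config = config
        self._instances: dict[str, object] = {}

    def get_llm(self, name: str):
        if name not in self._instances:
            cfg = self._config["llms"][name]
            # A real builder would dispatch on cfg["_type"] to a provider here;
            # we store a plain record to keep the sketch self-contained.
            self._instances[name] = {"type": cfg["_type"], "model": cfg["model_name"]}
        return self._instances[name]

builder = CachingBuilder(
    {"llms": {"my_gpt_brain": {"_type": "openai", "model_name": "gpt-4o"}}}
)
a = builder.get_llm("my_gpt_brain")
b = builder.get_llm("my_gpt_brain")
print(a is b)  # True: the component is built once and reused
```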
Finally, the system wraps your logic into a Workflow object. This object holds the instructions (my_agent_logic) and all the live tools needed to run it.
```python
# Abstract representation of what happens internally
workflow = await builder.set_workflow(
    entry_point=my_agent_logic,
    config=my_config
)
```
Now, workflow is a complete, executable package.
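Conceptually, that package is just "logic plus live components". The `MiniWorkflow` class and its `run` method below are simplified stand-ins (not the toolkit's real signatures) to show what "complete and executable" means:

```python
import asyncio

# Minimal stand-in for an assembled workflow: the entry function plus
# the already-built components it needs (not the real NAT Workflow class).
class MiniWorkflow:
    def __init__(self, entry_fn, llms: dict):
        self._entry_fn = entry_fn
        self.llms = llms

    async def run(self, user_input: str):
        # Hand the components to the logic; the caller never wires anything.
        return await self._entry_fn(user_input, self.llms)

async def my_agent_logic(user_input, llms):
    llm = llms["my_gpt_brain"]  # look up by name, as in the config
    return f"[{llm['model']}] echoing: {user_input}"

workflow = MiniWorkflow(my_agent_logic, {"my_gpt_brain": {"model": "gpt-4o"}})
result = asyncio.run(workflow.run("hello"))
print(result)  # [gpt-4o] echoing: hello
```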
How does the Builder know how to create these objects?
When it reads the config and sees a _type (e.g., openai), it calls the specific provider function registered for that type (which we registered in LLM Provider Abstraction).
The Builder class is an abstract base class that defines the "contracts" for fetching components.
Here is a simplified look at the Builder class definition.
```python
# packages/nvidia_nat_core/src/nat/builder/builder.py
class Builder(ABC):

    @abstractmethod
    async def get_llm(self, llm_name: str, wrapper_type: str) -> Any:
        """
        Finds the config for 'llm_name', instantiates it,
        and returns the object.
        """
        pass
```
Explanation:
The Builder defines standard methods like get_llm, get_tool, and get_memory_client. This ensures that no matter how complex your agent is, the way you retrieve components is always the same.
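To make the contract concrete, here is a hedged sketch of a subclass satisfying that abstract method. The `ToyWorkflowBuilder` class, its dispatch logic, and the returned record are all invented for illustration; the real concrete builder is far richer:

```python
import asyncio
from abc import ABC, abstractmethod
from typing import Any

class Builder(ABC):
    @abstractmethod
    async def get_llm(self, llm_name: str, wrapper_type: str) -> Any:
        ...

# Illustrative concrete implementation of the contract (invented names).
class ToyWorkflowBuilder(Builder):
    def __init__(self, config: dict):
        self._config = config

    async def get_llm(self, llm_name: str, wrapper_type: str = "native") -> Any:
        cfg = self._config["llms"][llm_name]
        # A real builder would dispatch on cfg["_type"] and wrapper_type;
        # here we return a plain record just to show the flow.
        return {"name": llm_name, "type": cfg["_type"], "wrapper": wrapper_type}

builder = ToyWorkflowBuilder({"llms": {"my_gpt_brain": {"_type": "openai"}}})
llm = asyncio.run(builder.get_llm("my_gpt_brain"))
print(llm["type"])  # openai
```

Because every builder honors the same method names, agent logic written against the abstract Builder works with any concrete implementation.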
The result of the build process is a Workflow.
```python
# packages/nvidia_nat_core/src/nat/builder/workflow.py
class Workflow:
    def __init__(self, entry_fn, llms, tools, ...):
        self._entry_fn = entry_fn   # Your main logic function
        self.llms = llms            # The created LLM objects
        self.functions = tools      # The created Tool objects
```
Explanation:
The Workflow acts as a container. It holds:
- _entry_fn: The code logic you wrote (what the agent does).
- llms / functions: The dictionaries of instantiated objects (what the agent uses).
This separation is crucial. It means your logic (_entry_fn) is pure code, while the heavy components (llms) are managed and stored separately by the container.
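One payoff of this separation is that the same logic can run against different configs unchanged. The `build_llm` helper and the config dicts below are illustrative stand-ins, but the point holds for the real system: only the blueprint changes, never the code:

```python
# The same agent logic reused with two different configs; the logic is
# pure code, the components come from the blueprint (names are illustrative).
def build_llm(config: dict) -> dict:
    cfg = config["llms"]["brain"]
    return {"type": cfg["_type"], "model": cfg["model_name"]}

def agent_logic(user_input: str, llm: dict) -> str:
    return f"{llm['model']} says: {user_input}"

openai_cfg = {"llms": {"brain": {"_type": "openai", "model_name": "gpt-4o"}}}
local_cfg = {"llms": {"brain": {"_type": "nim", "model_name": "llama-3.1-8b"}}}

out_a = agent_logic("hi", build_llm(openai_cfg))
out_b = agent_logic("hi", build_llm(local_cfg))
print(out_a)  # gpt-4o says: hi
print(out_b)  # llama-3.1-8b says: hi
```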
In this chapter, we learned:
- The Builder reads the Config and instantiates the components (LLMs, tools, memory) it defines.
- Your logic calls builder.get_llm("name") to ask for components, allowing the Builder to handle the initialization details.
- The result is a Workflow object, which contains everything needed to run the application.
We now have a Workflow object: a fully assembled car with an engine and wheels. But a car doesn't move unless someone gets in the driver's seat and turns the key.
In the next chapter, we will learn how to start the engine and process user requests.
Next Chapter: Runtime Session & Runner
Generated by Code IQ