Welcome back! In Chapter 1: Language Models (Chat Models & LLMs), we learned how to send a simple string to a Chat Model and get a response.
In the previous chapter, we did this:
```python
model.invoke("Translate 'Hello' to French.")
```
The Problem: In a real application, you don't want to hardcode the input. You want to accept user input. You might be tempted to do this using Python f-strings:
```python
user_input = "Hello"

# Don't do this! f-strings get messy and are hard to manage for complex prompts.
response = model.invoke(f"Translate '{user_input}' to French.")
```
The Solution: LangChain provides Prompts and Messages.
Think of a PromptTemplate as a recipe. The recipe stays the same, but the ingredients (variables) can change.
Let's create a template for our translation app.
We use {curly_brackets} to mark where variable data goes.
```python
from langchain_core.prompts import PromptTemplate

# Define the "recipe"
template = PromptTemplate.from_template(
    "Translate the following word to {language}: {word}"
)
```
Explanation: We created a reusable template. It knows it needs two variables: language and word.
To use it, we "format" it by providing the missing ingredients.
```python
# Fill in the blanks
formatted_prompt = template.format(language="French", word="Hello")
print(formatted_prompt)
# Output: Translate the following word to French: Hello
```
Explanation: The .format() method replaces the {variables} with the actual values. The result is a plain string ready for an LLM.
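Under the hood, this behaves like Python's built-in `str.format`. A minimal plain-Python sketch (not LangChain code) of the same idea:

```python
# Plain Python, not LangChain: the template is just a string with placeholders.
recipe = "Translate the following word to {language}: {word}"
formatted = recipe.format(language="French", word="Hello")
print(formatted)  # Translate the following word to French: Hello

# str.format raises KeyError when a required variable is missing:
try:
    recipe.format(language="French")
except KeyError as exc:
    print(f"Missing variable: {exc}")
```

This is why forgetting a variable fails loudly rather than silently producing a half-filled prompt.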
Modern Chat Models (like GPT-4 or Claude) don't just see a single string of text. They see a list of messages, similar to a chat history on your phone.
LangChain provides three main message types (all subclasses of BaseMessage):

- SystemMessage — sets the behavior or persona of the model.
- HumanMessage — represents input from the user.
- AIMessage — represents a response from the model.
Instead of sending a raw string, we can send a structured list.
```python
from langchain_core.messages import SystemMessage, HumanMessage

messages = [
    SystemMessage(content="You are a sarcastic translator."),
    HumanMessage(content="Translate 'I love programming' to Spanish.")
]
```
Explanation: By adding a SystemMessage, we set the behavior of the AI. If we sent this list to a model, it would respond sarcastically in Spanish.
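When this list eventually reaches a provider, it is typically serialized into the role/content dictionaries that chat APIs expect. A rough illustrative sketch of that conversion (this is not the actual LangChain serializer; the function and role map below are hypothetical):

```python
# Illustrative only: map message types to the role/content dict shape
# most chat APIs (e.g. OpenAI-compatible endpoints) expect.
ROLE_MAP = {"system": "system", "human": "user", "ai": "assistant"}

def to_wire_format(messages):
    """Convert (type, content) pairs into role/content dicts."""
    return [{"role": ROLE_MAP[m_type], "content": content}
            for m_type, content in messages]

wire = to_wire_format([
    ("system", "You are a sarcastic translator."),
    ("human", "Translate 'I love programming' to Spanish."),
])
print(wire)
```

Note how the "human" type becomes the "user" role: LangChain's message types are provider-neutral names for the same roles.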
In most LangChain apps, you will combine the two concepts above using a ChatPromptTemplate. This allows you to create a dynamic template that produces a list of messages.
```python
from langchain_core.prompts import ChatPromptTemplate

# Create a template consisting of roles and variables
chat_template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that translates to {language}."),
    ("human", "{text}")
])
```
Explanation: We define a System message that takes a variable {language} and a Human message that takes a variable {text}.
Now we can generate the final prompt value.
```python
# Fill in the variables
messages = chat_template.invoke({"language": "French", "text": "I love AI"})
print(messages)
```
Explanation: The output is not a string, but a list of message objects (SystemMessage and HumanMessage) with the variables filled in, ready to be sent to the Chat Model.
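Conceptually, `.invoke()` just runs string formatting on each (role, template) pair. A simplified plain-Python sketch of that idea (illustrative, not the real implementation):

```python
# Simplified sketch of what ChatPromptTemplate.invoke does conceptually.
def format_chat_template(role_templates, variables):
    """Fill each (role, template_string) pair with the given variables."""
    return [(role, tmpl.format(**variables)) for role, tmpl in role_templates]

msgs = format_chat_template(
    [("system", "You are a helpful assistant that translates to {language}."),
     ("human", "{text}")],
    {"language": "French", "text": "I love AI"},
)
print(msgs[0])  # ('system', 'You are a helpful assistant that translates to French.')
print(msgs[1])  # ('human', 'I love AI')
```

The real ChatPromptTemplate wraps the result in message objects rather than tuples, but the formatting step is the same.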
What happens when you call .format() or .invoke() on a prompt template?
PromptTemplate
At its core, PromptTemplate is a wrapper around Python's string formatting.
File Reference: libs/core/langchain_core/prompts/prompt.py
```python
class PromptTemplate(StringPromptTemplate):
    template: str  # The string with {curly_braces}

    def format(self, **kwargs):
        # Simplified logic:
        # 1. Merge user variables (kwargs)
        # 2. Use Python f-string style formatting
        return self.template.format(**kwargs)
```
It validates that you provided all the required variables (like language and word) before attempting to format, preventing errors later in the pipeline.
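That validation step can be sketched in plain Python using the standard library's `string.Formatter`, which can list the placeholders a template contains. This mirrors the idea, not the actual LangChain code; `required_variables` and `validate` below are hypothetical helpers:

```python
from string import Formatter

def required_variables(template: str) -> set[str]:
    """Extract the {placeholder} names from a format string."""
    return {field for _, field, _, _ in Formatter().parse(template) if field}

def validate_and_format(template: str, **kwargs) -> str:
    """Fail early with a clear error if any variable is missing."""
    missing = required_variables(template) - set(kwargs)
    if missing:
        raise KeyError(f"Missing variables: {sorted(missing)}")
    return template.format(**kwargs)

print(required_variables("Translate the following word to {language}: {word}"))
# A set containing 'language' and 'word' (order may vary)
```

Checking variables up front gives a much clearer error than letting a half-formatted string reach the model.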
BaseMessage
When you use ChatPromptTemplate, it constructs BaseMessage objects. These objects are simple data containers holding the content (text) and the type (role).
File Reference: libs/core/langchain_core/messages/base.py
```python
class BaseMessage(Serializable):
    content: str  # The actual text
    type: str     # 'human', 'ai', 'system', etc.

    def __init__(self, content, **kwargs):
        self.content = content
        # ... sets up internal IDs and metadata
```
This standardization is crucial. Whether you are using OpenAI, Anthropic, or Ollama, LangChain ensures the messages are stored in this exact format internally.
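To make the "simple data container" idea concrete, here is a minimal stand-in built with a plain dataclass (illustrative only; the real BaseMessage also carries IDs and metadata, and `SimpleMessage` is a hypothetical name):

```python
from dataclasses import dataclass

@dataclass
class SimpleMessage:
    """A role-tagged piece of text, the essence of a chat message."""
    content: str
    type: str  # 'human', 'ai', 'system', etc.

msg = SimpleMessage(content="Translate 'Hello' to French.", type="human")
print(msg.type)     # human
print(msg.content)  # Translate 'Hello' to French.
```

Because every provider integration reads the same two fields, swapping models never requires rewriting your prompts.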
In this chapter, we learned:

- PromptTemplate lets you define a reusable "recipe" with {variables} instead of hardcoding strings.
- ChatPromptTemplate combines templates with roles to produce a list of messages.
- Messages (System, Human, AI) structure the inputs so the model understands roles.

Why this matters: We have the Model (Chapter 1) and the Instructions (Chapter 2). Now we need a way to glue them together so the output of the Prompt automatically goes into the Model.
In the next chapter, we will learn how to connect these pieces using Runnables, the piping system of LangChain.
Next Chapter: Runnables & Chains
Generated by Code IQ