In Chapter 7: Atomic Git Integration, we learned how to save our work safely. We now have a project history filled with "Features" and "Fixes."
But here is the scary truth: just because the AI wrote code doesn't mean the code works.
A common AI trick is to be "lazy." You ask for a complex payment system, and the AI creates a file that looks like this:
```js
// PaymentSystem.js
export function processPayment(amount) {
  // TODO: Implement payment logic later
  return true;
}
```
The AI marks the task as "Done." The file exists. The Git commit is saved. But your app doesn't actually process payments.
This is where Goal-Backward Verification comes in. It is the "Quality Assurance" phase that prevents the AI from cheating.
Most project management systems work "Forward": they start from a task list and check each item off as it is completed.
The Problem: You can check all the boxes (create file, add import, style button) and still have a broken app because the pieces aren't connected properly.
Get-Shit-Done (GSD) reverses the process. We don't ask "Did you finish the tasks?" We ask "Is the Goal true?"
The Analogy: The Bridge Inspector
A bridge inspector doesn't ask the construction crew whether they finished their checklist. They ask one question: "Can this bridge hold a truck?" If the truck falls, it doesn't matter if you checked the boxes. The goal failed.
We use a specialized agent called the Verifier (gsd-verifier). It is the "Skeptic" of the team.
It ignores what the Executor said it did. It looks directly at the code files, hunting for three failure patterns: Stubs, missing Wiring, and Orphaned code.
A Stub is a placeholder. It's when the AI writes `// Code goes here` instead of actual code.
The Verifier is trained to hunt for these specific "lazy" patterns. If it finds a file that returns null or has TODO comments in critical places, it fails the phase.
This is the most common failure.
Imagine the Executor built a Button component and an API to save data. Both files exist (the "Artifacts" are good), but the connection between them is missing (the "Wiring" is bad). The Verifier explicitly checks for these connections by looking for imports and function calls.
Let's look at how the Verifier inspects a phase. Imagine we just built a "Login Screen."
When the Verifier runs, it generates a report called `VERIFICATION.md`. This isn't just a "Pass/Fail" grade; it's a detailed investigation.
Example Report Snippet:
```markdown
# Phase Verification: Login

## Goal Achievement

| Truth | Status | Reason |
|-------|--------|--------|
| User can see login form | ✅ VERIFIED | File exists, elements present |
| User can submit form | ❌ FAILED | Button has no `onClick` handler |

## Gaps Found

1. **Missing Wiring**: The Submit button is not connected to the API.
```
Explanation: The user can instantly see why it failed. It's not a mystery; it's a specific missing connection (`onClick`).
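You can reproduce this kind of finding yourself. Here is a minimal sketch of the check behind that "FAILED" row; the file name and its contents are illustrative assumptions, not the actual project files.

```shell
# Hypothetical login page with an unwired Submit button.
mkdir -p src
cat > src/LoginPage.tsx <<'EOF'
export function LoginPage() {
  return <button>Submit</button>;
}
EOF

# grep finds no click handler, so we report the gap.
if grep -n "onClick" src/LoginPage.tsx; then
  echo "WIRED: button has a click handler"
else
  echo "GAP: button has no onClick handler"
fi
```

Run against a real button that does wire up `onClick`, the same check would print the "WIRED" line instead.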
How does the Verifier actually "read" the code? It doesn't run the app (that's hard for AI). Instead, it uses Static Analysis: it scans the text of your files looking for proof.
The gsd-verifier starts with a strict instruction to trust nothing.
```xml
<role>
You are a GSD phase verifier.
Your job: Verify the GOAL, not the TASKS.

**Critical mindset:**
Do NOT trust SUMMARY.md claims.
Verify what ACTUALLY exists in the code.
</role>
```
Explanation: This sets the mood. "Don't listen to the Executor's excuses. Look at the evidence."
The Verifier uses standard Linux tools (like grep) to find "fake" code. Here is a simplified version of the logic it uses.
```shell
# Check for "lazy" comments
grep -n -E "TODO|FIXME|PLACEHOLDER" src/Login.tsx

# Check for empty implementations
grep -n "return null" src/Login.tsx
```
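To see these checks in action, here is a self-contained sketch. The file contents are a deliberately lazy example written for this demo; the key idea is that grep exits with status 0 when it finds a match, so a "hit" means the phase fails.

```shell
# A deliberately lazy implementation to scan (demo file).
mkdir -p src
cat > src/Login.tsx <<'EOF'
export function Login() {
  // TODO: Implement login logic later
  return null;
}
EOF

# A match on either pattern means a stub was found.
if grep -n -E "TODO|FIXME|PLACEHOLDER" src/Login.tsx || grep -n "return null" src/Login.tsx; then
  echo "phase: FAILED (stub detected)"
else
  echo "phase: VERIFIED (no stubs found)"
fi
```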
Explanation:

- `grep`: a command that searches for text inside files.
- `-n`: prints the line number of each match, so the report can point at the exact line.
- `-E`: enables extended regular expressions, so `TODO|FIXME|PLACEHOLDER` matches any of the three patterns.

To check if two files talk to each other, the Verifier looks for Imports and Usage.
```shell
# 1. Does the Login page import the API?
grep "import.*loginAPI" src/LoginPage.tsx

# 2. Does it actually USE the API?
grep "loginAPI.submit(" src/LoginPage.tsx
```
Explanation: If Step 1 passes (Import exists) but Step 2 fails (Function never called), the status is "Orphaned." The code exists but isn't doing anything.
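The two-step check can be combined into one small script. This is a sketch, not the verifier's actual implementation; the page contents below are a hypothetical "orphaned" example where the import exists but the function is never called.

```shell
# Hypothetical page that imports the API but never calls it.
mkdir -p src
cat > src/LoginPage.tsx <<'EOF'
import { loginAPI } from "./loginAPI";
export function LoginPage() {
  return <button>Submit</button>;
}
EOF

# Step 1: does the import exist?
grep -q "import.*loginAPI" src/LoginPage.tsx && echo "import: found"

# Step 2: is the API ever actually called?
if grep -q "loginAPI.submit(" src/LoginPage.tsx; then
  echo "status: WIRED"
else
  echo "status: ORPHANED"
fi
```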
Finally, the Verifier structures its thinking backwards from the goal. This logic is embedded in its instruction file (agents/gsd-verifier.md).
```text
# Simplified Logic Flow
Goal: "User can send message"

1. What must exist?
   -> Artifact: MessageInput.tsx
   -> Artifact: SendButton.tsx

2. What must happen?
   -> Truth: SendButton triggers API

3. Verify:
   -> Check Artifacts (Do files exist?)
   -> Check Wiring (Does Button call API?)
```
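The backward flow above can be sketched as a small script: artifacts first, then wiring, and only then declare the goal verified. The file names, the `sendAPI` call, and the demo files it creates are illustrative assumptions.

```shell
# Demo artifacts, wired correctly so the check passes.
mkdir -p src
cat > src/MessageInput.tsx <<'EOF'
export function MessageInput() { return <input />; }
EOF
cat > src/SendButton.tsx <<'EOF'
import { sendAPI } from "./sendAPI";
export function SendButton() {
  return <button onClick={() => sendAPI("message")}>Send</button>;
}
EOF

# 1. Check Artifacts: do the files exist?
for f in src/MessageInput.tsx src/SendButton.tsx; do
  [ -f "$f" ] || { echo "FAILED: missing artifact $f"; exit 1; }
done

# 2. Check Wiring: does the button actually call the API?
grep -q "sendAPI(" src/SendButton.tsx || { echo "FAILED: button not wired"; exit 1; }

echo "VERIFIED: User can send message"
```

Note the order of exits: the script never reaches the wiring check if an artifact is missing, which mirrors how the Verifier reasons backward from the goal.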
When you are learning, you often assume that if you followed the tutorial steps, it should work. When it doesn't, you feel lost.
Goal-Backward Verification teaches you a superpower: How to debug logic, not just syntax.
In this chapter, we learned:
- Goal-Backward Verification asks "Is the Goal true?" instead of "Are the tasks checked off?"
- Stubs are the placeholder patterns (like `// TODO`) that AI loves to write; the Verifier hunts them down.
- The Verifier checks Wiring, not just Artifacts: files must actually import and call each other.
- The `VERIFICATION.md` report acts as a "Pass/Fail" gate for the phase.

So, the Verifier ran, and it found a problem. The login button isn't wired up. How do we fix it? We don't just guess. We use the Scientific Method.
Next Chapter: Scientific Debugging
Generated by Code IQ