Mastering this loop is what separates engineers who use AI as a glorified autocomplete from those who use it to ship production features in a fraction of the time.
Understanding the Loop as a Workflow, Not a Magic Button
The agentic loop is a disciplined, repeatable workflow. It is not a single prompt that produces working software. Think of it the same way you think about a pull request lifecycle — there are distinct phases, each with a clear purpose and clear exit criteria. Skipping phases or conflating them is the primary reason engineers get stuck in frustrating cycles where the AI produces plausible-looking but wrong output.
The five phases are: Spec (define what you want with enough precision that ambiguity cannot derail execution), Plan (have the AI produce a structured, reviewable task breakdown before touching code), Implement (execute the plan in checkpointed steps, not one giant leap), Review (systematically verify the output against the spec and the plan), and Iterate (refine with targeted prompts, not full restarts). Each phase feeds the next. Skipping Plan and jumping straight to Implement is the single most common mistake — you get output that feels productive but drifts away from what you actually needed.
As a mid or senior engineer, you already understand that a vague ticket produces a vague PR. The same law applies here. The quality of your spec directly bounds the quality of everything downstream. We covered spec-writing in Module 3; this module is about what happens after you have a good spec in hand.
Learning tip: Treat the agentic loop like a CI pipeline. If a phase fails its quality gate, you stop and fix it before moving forward — you do not paper over a bad plan with more implementation.
Phase 1 — Spec: Anchoring the Agent's Context
By the time you reach Module 4, you have a spec document. It should answer four questions: What is being built? What are the acceptance criteria? What are the constraints (tech stack, existing interfaces, performance requirements)? What is explicitly out of scope?
A spec does not need to be long. For a single-feature task, one page is often enough. What it must be is unambiguous enough that two engineers reading it independently would build the same thing. Before handing it to the agent, read it through that lens. If you find yourself thinking "well, it's obvious that we'd use the existing AuthService" — write that down. The agent has no implicit knowledge of your team conventions unless you put them in the spec or the project context.
Keep the spec as the single source of truth throughout the loop. In every subsequent phase, you will reference it explicitly in your prompts. This prevents context drift, which is when the agent gradually optimizes for the most recent instructions rather than the original requirements.
Learning tip: Add a one-line "non-goals" section to every spec. Explicitly stating what you are NOT building is as valuable as stating what you are — it prevents the agent from gold-plating or wandering into adjacent scope.
Phase 2 — Plan: Let the Agent Think Before It Acts
Planning is where you extract the most leverage from an AI agent. A well-prompted planning step produces a numbered task list with dependencies, file-level scope annotations, and a clear definition of done for each task. This is not a gift to the AI — it is a gift to you. You can read it in two minutes and catch misunderstandings before any code is written.
The planning prompt should include the full spec and ask for output in a specific structured format. Request that the agent identify risks and flag any ambiguities it found in the spec. Agents are good at surfacing gaps when explicitly asked — they will not volunteer this information if you just ask them to "start coding."
A plan should be reviewed like a design doc. Ask yourself: Does the sequence make sense? Are there missing steps (migrations, feature flags, test data setup)? Does anything touch surfaces that are not in the spec? If the plan looks wrong, fix it now with a follow-up prompt. A two-minute plan revision saves twenty minutes of unwinding bad implementation.
Learning tip: Ask the agent to estimate the complexity of each task on a simple scale (small / medium / large). Tasks marked "large" are candidates for further decomposition before implementation begins.
Phase 3 — Implement: Execute in Checkpointed Steps
Do not ask the agent to implement everything in a single prompt. Treat each task in the plan as a unit of work with its own prompt. This keeps each execution context small and focused, makes failures cheap to diagnose, and gives you natural checkpoints to verify correctness before proceeding.
At each checkpoint, compile and run the code (or at minimum do a syntax check), run any relevant existing tests, and do a quick read of the diff. You are not doing a full code review at this stage — you are doing a sanity check. Did the agent modify files it was not supposed to? Did it introduce dependencies that are not in your stack? Did it break existing interfaces? These are cheap catches when you are looking at a single task's output; they are expensive to untangle when you have ten tasks stacked on top of each other.
For tasks that involve external APIs, database schemas, or shared interfaces, write the interface or type definition first and get explicit confirmation that the agent understands it before asking it to implement against it. This pattern — define the contract, then implement — maps directly to how good engineers work and produces dramatically better results.
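For illustration, the contract you hand over can be as small as the interface below (the names are hypothetical, not from any real codebase). Paste it to the agent and ask it to restate the contract in its own words before implementing:

```typescript
// Hypothetical contract for the "define first, implement second" pattern.
export interface InvoiceExporter {
  /** Resolves with the storage URL of the generated PDF. */
  exportToPdf(invoiceId: string): Promise<string>;

  /** Returns IDs of export jobs still in progress, newest first. */
  listPendingExports(): Promise<string[]>;
}
```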
Learning tip: Keep a running scratch file called AGENT_SESSION.md during long implementations. Paste the current task, any constraints discovered mid-session, and the last verified state. This becomes your recovery document if the session drifts or you need to resume later.
Phase 4 — Review: You Are the Editor, Not the Approver
Review is not a rubber stamp. The agent's output should be treated like a PR from a capable junior engineer: competent, well-intentioned, but requiring your domain knowledge to catch subtle issues. Your job in the review phase is to verify correctness against the spec, correctness of implementation details (error handling, edge cases, security assumptions), and coherence with the existing codebase.
Use the AI to assist the review itself. Ask it to self-review against specific criteria, generate test cases for the implemented logic, or explain its own implementation choices. These prompts often surface issues the agent did not flag during implementation. Be specific — "does this handle the case where the user has no active session?" produces better output than "does this look correct?"
If the review surfaces issues, categorize them before iterating: Is this a spec misunderstanding (fix the spec reference in the next prompt)? Is this an implementation bug (targeted fix prompt)? Is this a design issue (may require partial replan)? Knowing the category prevents you from applying the wrong kind of fix.
Learning tip: Before submitting the final output for human review, run one prompt asking the agent: "List every assumption you made that is not explicitly stated in the spec." The answers are frequently surprising and always worth knowing.
Phase 5 — Iterate: Targeted Prompts, Not Full Restarts
Iteration is the phase most engineers mishandle. When output is wrong, the instinct is to rewrite the prompt from scratch and try again. Most of the time, that is the wrong move. Targeted iteration — a precise prompt that addresses one specific gap — is faster and introduces fewer regressions.
Structure iteration prompts as surgical corrections: reference the specific file and function, describe exactly what is wrong and what correct behavior looks like, and state explicitly what should NOT change. The last part is critical. Without it, the agent may "fix" the targeted issue while inadvertently changing adjacent code.
Know when to restart. A restart is warranted when: the plan was fundamentally wrong and the implementation has baked in that wrong model, the accumulated context has caused the agent to lose track of the spec, or more than 40% of the generated code needs to change. In those cases, close the session, consolidate what you learned into an updated spec, and run a fresh plan phase. A clean restart from a better spec is almost always faster than iterating on a broken foundation.
Learning tip: Keep your iteration prompts under 150 words. If you find yourself writing a long correction prompt, that is a signal you may be trying to fix too many things at once — split it into sequential targeted prompts instead.
Hands-On: Complete Agentic Loop for a "User Preferences" Feature
This exercise walks through all five phases for a concrete feature: adding a user preferences endpoint to an existing REST API. The stack is Node.js/Express with a PostgreSQL database. Assume you already have an authenticated user system in place.
Step 1 — Write the Spec
Write a one-page spec document. The critical elements: the endpoint signatures (GET /users/:id/preferences, PATCH /users/:id/preferences), the data shape for preferences (notification_email: boolean, theme: "light" | "dark", timezone: string), the constraint that only the authenticated user can read or write their own preferences, and the non-goal that bulk preference updates and admin overrides are out of scope.
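One low-effort way to make the data shape unambiguous is to encode it as a type and paste it directly into the spec. A minimal sketch (the type name is illustrative):

```typescript
// The spec's data shape as a type: exact field names, types, and defaults.
export interface UserPreferences {
  notification_email: boolean; // default: true
  theme: "light" | "dark";     // default: "light"
  timezone: string;            // max 50 chars, default: "UTC"
}
```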
Step 2 — Run the Planning Prompt
Paste your spec into the agent and use the following prompt:
Here is the spec for the user preferences feature:
[paste spec here]
Before writing any code, produce a numbered implementation plan. For each task, include:
- The specific files to be created or modified
- The definition of done for that task
- Any dependencies on other tasks in the list
- Complexity estimate: small, medium, or large
Also list any ambiguities or assumptions you had to make while reading the spec.
Expected output: a numbered list of 5–8 tasks. A good plan will include: adding a database migration for a user_preferences table, adding a repository function, adding service-layer logic with an authorization check, adding route handlers, adding input validation, and adding unit tests for the service layer. Review this plan carefully before proceeding.
Step 3 — Implement the Migration (Task 1)
Pick the first task and prompt for implementation only:
Implement only Task 1 from the plan: the database migration for the user_preferences table.
The table should have: user_id (foreign key to users.id), notification_email (boolean, default true), theme (varchar(10), default 'light'), timezone (varchar(50), default 'UTC'), created_at, updated_at.
Do not modify any other files. Output the migration file content only.
Expected output: a single migration file. Check it against the spec's data shape before moving to Task 2.
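For reference, a correct migration might look roughly like the sketch below, assuming a Knex-based setup (the exercise does not prescribe a migration runner):

```typescript
// Sketch of the Task 1 migration. Making user_id the primary key encodes the
// one-row-per-user assumption implied by the spec.
import type { Knex } from "knex";

export async function up(knex: Knex): Promise<void> {
  await knex.schema.createTable("user_preferences", (table) => {
    table.integer("user_id").primary().references("id").inTable("users");
    table.boolean("notification_email").notNullable().defaultTo(true);
    table.string("theme", 10).notNullable().defaultTo("light");
    table.string("timezone", 50).notNullable().defaultTo("UTC");
    table.timestamps(true, true); // created_at and updated_at, defaulting to now
  });
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.dropTable("user_preferences");
}
```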
Step 4 — Implement the Repository and Service Layers
After verifying the migration, move to the next tasks:
The migration from Task 1 is verified. Now implement Tasks 2 and 3:
Task 2: Add getUserPreferences(userId) and updateUserPreferences(userId, patch) to the preferences repository. Use the existing db client pattern from src/repositories/users.repository.ts.
Task 3: Add a PreferencesService that calls the repository. The service must enforce that the requesting user's ID matches the target userId — throw a 403 ForbiddenError if not. Use the existing ForbiddenError class from src/errors.ts.
Output both files. Do not modify any existing files.
Expected output: two new files. Verify that the authorization logic is present in the service, not leaked into the repository or route layer.
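A rough sketch of the service shape to expect is below; the exact signatures are assumptions based on the prompt, so adapt the names to what the agent actually produced:

```typescript
// Sketch of the Task 3 service. The ownership check lives here, exactly once,
// so neither the repository nor the route layer needs to know about it.
import { ForbiddenError } from "../errors";

export interface UserPreferences {
  notification_email: boolean;
  theme: "light" | "dark";
  timezone: string;
}

export interface PreferencesRepository {
  getUserPreferences(userId: string): Promise<UserPreferences | null>;
  updateUserPreferences(
    userId: string,
    patch: Partial<UserPreferences>
  ): Promise<UserPreferences>;
}

export class PreferencesService {
  constructor(private readonly repo: PreferencesRepository) {}

  private assertOwnership(requestingUserId: string, targetUserId: string): void {
    if (requestingUserId !== targetUserId) {
      throw new ForbiddenError("Users may only access their own preferences");
    }
  }

  async getUserPreferences(requestingUserId: string, targetUserId: string) {
    this.assertOwnership(requestingUserId, targetUserId);
    return this.repo.getUserPreferences(targetUserId);
  }

  async updateUserPreferences(
    requestingUserId: string,
    targetUserId: string,
    patch: Partial<UserPreferences>
  ) {
    this.assertOwnership(requestingUserId, targetUserId);
    return this.repo.updateUserPreferences(targetUserId, patch);
  }
}
```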
Step 5 — Implement Routes and Validation
Tasks 2 and 3 are verified. Implement Task 4 and Task 5:
Task 4: Add GET /users/:id/preferences and PATCH /users/:id/preferences route handlers in a new file src/routes/preferences.routes.ts. Wire them to PreferencesService. Use the existing authenticateRequest middleware from src/middleware/auth.ts.
Task 5: Add Zod validation for the PATCH request body. Valid fields: notification_email (boolean, optional), theme (enum: 'light' | 'dark', optional), timezone (string, max 50 chars, optional). Reject requests with no recognized fields.
Output the new routes file and the validation schema. Show the change needed in src/app.ts to register the new router.
Expected output: the routes file, the schema, and the one-line addition to app.ts. Verify the middleware is applied correctly on both routes.
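As a reference point, the combined output of Tasks 4 and 5 might resemble the sketch below. The authenticateRequest middleware is assumed to attach the authenticated user to the request, and preferencesService is an assumed export; match both to your codebase.

```typescript
// Sketch of the Task 4 routes and Task 5 validation schema.
import { Router, type Request } from "express";
import { z } from "zod";
import { authenticateRequest } from "../middleware/auth";
import { preferencesService } from "../services/preferences.service"; // assumed export

// authenticateRequest is assumed to populate req.user.
type AuthedRequest = Request & { user: { id: string } };

// Task 5: strict schema. Unknown fields are rejected, and at least one
// recognized field must be present.
export const patchPreferencesSchema = z
  .object({
    notification_email: z.boolean().optional(),
    theme: z.enum(["light", "dark"]).optional(),
    timezone: z.string().max(50).optional(),
  })
  .strict()
  .refine((body) => Object.keys(body).length > 0, {
    message: "At least one recognized preference field is required",
  });

// Task 4: both routes pass through the auth middleware; the ownership check
// itself stays in the service layer.
export const preferencesRouter = Router();

preferencesRouter.get("/users/:id/preferences", authenticateRequest, async (req, res, next) => {
  try {
    const { user, params } = req as AuthedRequest;
    res.json(await preferencesService.getUserPreferences(user.id, params.id));
  } catch (err) {
    next(err);
  }
});

preferencesRouter.patch("/users/:id/preferences", authenticateRequest, async (req, res, next) => {
  try {
    const { user, params } = req as AuthedRequest;
    const patch = patchPreferencesSchema.parse(req.body);
    res.json(await preferencesService.updateUserPreferences(user.id, params.id, patch));
  } catch (err) {
    next(err);
  }
});
```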
Step 6 — Review Against the Spec
With every planned task implemented, run a review pass with a prompt like this:
Here is the original spec:
[paste spec again]
Review the implementation we have produced so far against this spec. For each acceptance criterion, confirm whether it is met or identify the gap. Then list every assumption you made that is not explicitly covered by the spec.
Expected output: a criterion-by-criterion assessment. Common gaps found here include missing handling for non-existent user IDs (404 vs 500) and missing test coverage for the authorization check.
Step 7 — Iterate on a Specific Gap
If the review surfaces a gap — for example, the service throws an unhandled error when the user has no existing preferences row rather than returning defaults — use a targeted iteration prompt:
In PreferencesService.getUserPreferences(), when the repository returns null (no row exists for this user_id), the function should return the default preferences object rather than throwing. The defaults are: notification_email: true, theme: 'light', timezone: 'UTC'.
Modify only the getUserPreferences() method in src/services/preferences.service.ts. Do not change any other code.
Expected output: a precise diff of one method. No other files should be touched.
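The corrected method might look like the excerpt below; names follow the Step 4 sketch, which is itself an assumption about the generated code:

```typescript
// Excerpt sketch: only getUserPreferences changes, everything else is untouched.
const DEFAULT_PREFERENCES: UserPreferences = {
  notification_email: true,
  theme: "light",
  timezone: "UTC",
};

export class PreferencesService {
  // ...constructor, assertOwnership, and updateUserPreferences unchanged...

  async getUserPreferences(
    requestingUserId: string,
    targetUserId: string
  ): Promise<UserPreferences> {
    this.assertOwnership(requestingUserId, targetUserId);
    const stored = await this.repo.getUserPreferences(targetUserId);
    // Fall back to the spec's defaults instead of throwing when no row exists.
    return stored ?? DEFAULT_PREFERENCES;
  }
}
```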
Step 8 — Generate Tests
Close the loop by prompting for the unit tests:
Write unit tests for PreferencesService using Jest and the existing mock pattern in src/services/__tests__/users.service.test.ts.
Cover: (1) returns defaults when no preferences row exists, (2) returns stored preferences when row exists, (3) throws ForbiddenError when requesting user's ID does not match target userId, (4) successfully patches a subset of fields without overwriting others.
Expected output: a test file with four test cases. Run the tests and verify they pass before closing the session.
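For comparison, the four cases might look like this sketch, assuming a hand-rolled repository mock (your existing mock pattern may differ) and assuming the Step 7 fix is in place:

```typescript
// Sketch of the Step 8 test file. Service and error names follow the earlier
// sketches; adapt them to the code the agent actually generated.
import { PreferencesService } from "../preferences.service";
import { ForbiddenError } from "../../errors";

describe("PreferencesService", () => {
  const repo = {
    getUserPreferences: jest.fn(),
    updateUserPreferences: jest.fn(),
  };
  const service = new PreferencesService(repo);

  beforeEach(() => jest.clearAllMocks());

  it("returns defaults when no preferences row exists", async () => {
    repo.getUserPreferences.mockResolvedValue(null);
    await expect(service.getUserPreferences("u1", "u1")).resolves.toEqual({
      notification_email: true,
      theme: "light",
      timezone: "UTC",
    });
  });

  it("returns stored preferences when a row exists", async () => {
    const stored = { notification_email: false, theme: "dark", timezone: "Europe/Paris" };
    repo.getUserPreferences.mockResolvedValue(stored);
    await expect(service.getUserPreferences("u1", "u1")).resolves.toEqual(stored);
  });

  it("throws ForbiddenError when the requesting user does not match the target", async () => {
    await expect(service.getUserPreferences("u1", "u2")).rejects.toThrow(ForbiddenError);
  });

  it("patches a subset of fields without overwriting others", async () => {
    await service.updateUserPreferences("u1", "u1", { theme: "dark" });
    // Only the patched field reaches the repository; others are never touched.
    expect(repo.updateUserPreferences).toHaveBeenCalledWith("u1", { theme: "dark" });
  });
});
```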
Key Takeaways
- The loop has five distinct phases — Spec, Plan, Implement, Review, Iterate — and each has a quality gate. Move forward only when the current phase passes its gate.
- The Plan phase is your highest-leverage investment. A ten-minute plan review eliminates hours of implementation rework.
- Implement in checkpointed steps, one task at a time. Never ask the agent to do everything in a single prompt.
- Iteration prompts should be surgical — reference specific files and functions, state what must not change, and keep prompts under 150 words.
- Know the restart signal: if the foundational plan was wrong, the implementation has baked in wrong assumptions, or more than 40% of the code needs to change, a clean restart from an improved spec is faster than iterating on a broken foundation.