This is where the entire agentic development workflow clicks — not as separate techniques, but as a unified system for shipping production-quality features faster.
What This Capstone Covers
The previous topics in Module 4 covered the individual components of the agentic workflow: specs, planning, staged implementation, code review, and git hygiene. This capstone puts them together in a single, continuous exercise. You will build a real feature — paginated search on a Node.js/Express REST API backed by PostgreSQL — from the first prompt to a merge-ready PR, using AI at every stage.
Everything in this capstone is copy-paste ready. Every prompt is real. Every expected output is described in concrete terms. You will not be asked to imagine what you could do with AI — you will do it.
The feature: Add a GET /users/search endpoint to an existing Express API. The endpoint accepts a query string and returns paginated results using cursor-based pagination, backed by a PostgreSQL database.
This is a complete but bounded feature: it touches the route layer, service layer, repository layer, database query construction, input validation, error handling, and tests. It's representative of the kind of work mid and senior engineers do every week.
Learning tip: Don't just read this capstone — do it. Open your editor, create a throwaway Express project, and run every prompt. The difference between understanding this workflow and being able to execute it under deadline pressure is repetition.
Stage 1: Spec Generation
Before writing a line of implementation code, you need a spec. Not a 20-page PRD — a structured, unambiguous description of what the feature does, what its inputs and outputs are, and what the edge cases are. AI is excellent at generating a draft spec if you give it the feature description and let it enumerate the details you haven't thought of.
Step 1: Generate the initial spec.
I'm adding a paginated search endpoint to a Node.js/Express REST API backed by PostgreSQL.
Feature description:
- Endpoint: GET /users/search
- Accepts a query string parameter `q` (the search term) and optional pagination parameters
- Searches users by name and email (partial match, case-insensitive)
- Returns paginated results using cursor-based pagination (not offset-based)
- The cursor should be opaque to the client (base64-encoded)
- Response includes: results array, next_cursor (null if no more pages), total_count
Generate a complete feature spec that includes:
1. Full request/response schema (with field names, types, and validation rules)
2. Cursor-based pagination design (how the cursor encodes position, how it's decoded on the server)
3. Error cases and their HTTP status codes (invalid cursor, missing q param, q too short/long, etc.)
4. Database query design (what indexes are needed, how the cursor translates to a WHERE clause)
5. Performance considerations for large datasets
Be specific and opinionated. Make choices where I haven't specified them.
Expected output: A structured spec document covering the full API contract, pagination design, error cases, and database strategy. Read it carefully — AI will make reasonable choices, but you need to verify they match your system's conventions. In particular, check the cursor format, the index strategy, and the error response shape.
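As a sanity check on the pagination design, the opaque base64-JSON cursor the spec calls for can be round-tripped in a few lines of Node. This is a sketch, not the project's actual code; the field names are illustrative, chosen to match the keyset columns used in later stages:

```javascript
// Sketch of the opaque cursor: a base64-encoded JSON object capturing the
// last row the client has seen. Field names are illustrative.
function encodeCursor(lastRow) {
  const payload = { id: lastRow.id, created_at: lastRow.created_at };
  return Buffer.from(JSON.stringify(payload)).toString('base64');
}

// Returns null on any malformed input rather than throwing, so callers can
// map a bad cursor to a clean 400 response instead of a 500.
function decodeCursor(cursor) {
  try {
    const payload = JSON.parse(Buffer.from(cursor, 'base64').toString('utf8'));
    if (typeof payload.id !== 'string' || typeof payload.created_at !== 'string') {
      return null;
    }
    return { id: payload.id, created_at: payload.created_at };
  } catch {
    return null;
  }
}
```

Whether decoding is total (never throws) is exactly the kind of decision worth pinning down in the spec: it determines whether a tampered cursor surfaces as a clean 400 or an unhandled exception.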
Learning tip: The spec generation prompt is worth spending time on. Every ambiguity you resolve in the spec is a bug you don't write during implementation.
Stage 2: Implementation Planning
With the spec in hand, the next step is generating an implementation plan — the sequence of discrete, testable tasks that turns the spec into working code. A good plan makes the implementation deterministic and reviewable before any code is written.
Step 2: Generate the implementation plan.
Here is the spec for the paginated search endpoint:
[paste spec from Step 1]
My project has the following structure:
- src/routes/ — Express route handlers (HTTP layer only, no business logic)
- src/services/ — Business logic
- src/repositories/ — Database queries (uses pg library, raw SQL, no ORM)
- src/validators/ — Input validation (uses Joi)
- tests/ — Jest tests, organized to mirror src/
Generate a detailed implementation plan as a numbered task list. Each task should:
- Be small enough to implement and commit in one session (ideally under 2 hours)
- Have a clear, testable completion criterion
- Specify which layer it touches (route / service / repository / validator / test)
- Be ordered so that each task can be tested independently before moving to the next
Include a task for the database migration (adding the index), for the validator, for the repository method, for the service method, for the route handler, and for the integration test.
Expected output: A numbered list of 7–10 tasks, each with a layer label and completion criterion. Example output might look like:
1. [Migration] Add GIN trigram indexes on users.name and users.email for ILIKE search — verified when migration runs without error and EXPLAIN ANALYZE shows index usage
2. [Validator] Create searchUsers validator using Joi — validates q (string, 1–100 chars, required), limit (integer, 1–50, default 20), cursor (string, optional, base64 format)
3. [Repository] Implement searchUsers(q, limit, cursor) method — returns { rows, nextCursor } using cursor-decoded WHERE clause
...
Learning tip: If the plan has tasks that are too large (e.g., "implement the whole feature"), push back: "Break task 3 into smaller steps, each independently testable." Plans with small, focused tasks produce better code and better commit history.
Stage 3: Staged Implementation with Commits
With the plan in hand, implement each task as a focused session: prompt for the code, review it, test it, commit it. Do not batch multiple tasks into one session — this is how you lose track of what's working and what isn't.
Step 3: Implement the database migration.
Implement task 1 from the plan: add a PostgreSQL index to support case-insensitive partial-match search on users.name and users.email.
Context:
- Database: PostgreSQL 14+
- Table: users, columns: id (uuid), name (varchar), email (varchar), created_at (timestamp)
- We are using pg-migrate for migrations
- The search will use ILIKE for case-insensitive matching
Generate:
1. A pg-migrate migration file that adds trigram indexes (via the pg_trgm extension) on both columns
2. A comment in the migration explaining why trigram indexes are used here instead of a standard B-tree index
3. The rollback SQL
Output only the migration file content. No explanation outside the code.
After implementing and verifying: commit with git commit -m "chore(db): add trigram indexes on users.name and email for search".
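A plausible shape for that migration file, shown as a sketch: it assumes node-pg-migrate's `pgm.sql` escape hatch for raw SQL, and the index names are made up for illustration.

```javascript
// Hypothetical shape of the node-pg-migrate file (index names are made up).
// In the real migration this pair would be exported as exports.up / exports.down.
function up(pgm) {
  // Trigram (pg_trgm) GIN indexes are used instead of B-tree because B-tree
  // indexes cannot accelerate unanchored patterns like ILIKE '%term%'.
  pgm.sql('CREATE EXTENSION IF NOT EXISTS pg_trgm');
  pgm.sql('CREATE INDEX idx_users_name_trgm ON users USING gin (name gin_trgm_ops)');
  pgm.sql('CREATE INDEX idx_users_email_trgm ON users USING gin (email gin_trgm_ops)');
}

function down(pgm) {
  // Rollback drops the indexes but leaves the extension installed, since
  // other database objects may come to depend on it.
  pgm.sql('DROP INDEX idx_users_email_trgm');
  pgm.sql('DROP INDEX idx_users_name_trgm');
}
```

Verify the generated migration against this mental model before committing: extension first, one index per column, and a rollback that undoes only what this migration created.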
Step 4: Implement the validator.
Implement task 2: a Joi validator for the GET /users/search request query parameters.
Validation rules from the spec:
- q: string, required, min length 1, max length 100, trimmed
- limit: integer, optional, min 1, max 50, default 20
- cursor: string, optional, must be a valid base64 string if provided
File location: src/validators/searchUsers.validator.js
Output only the validator file. Include a brief inline comment explaining any non-obvious validation choice.
Step 5: Implement the repository method.
Implement the repository method for paginated cursor-based search.
Spec details:
- Function signature: searchUsers(q, limit, cursor)
- cursor is null for the first page, or a base64-encoded JSON object: { id: string, created_at: string } representing the last item seen
- Use ILIKE for case-insensitive matching on name and email
- WHERE clause when cursor is provided: (created_at, id) < (cursor.created_at, cursor.id) — keyset pagination
- ORDER BY created_at DESC, id DESC
- Fetch limit + 1 rows to determine if there is a next page
- Return: { rows: User[], nextCursor: string | null }
File location: src/repositories/user.repository.js (add to existing file)
Database client: pg Pool instance imported from src/db.js
Output only the new function to add to the repository file. Include inline comments for the cursor decoding and the keyset pagination WHERE clause.
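Separately from the SQL, the limit + 1 bookkeeping the prompt describes is pure logic and worth seeing on its own. This is a sketch assuming the base64-JSON cursor format from the spec; `paginate` is an illustrative name, not a function from the project:

```javascript
// Sketch of the limit+1 pattern: the query fetches one extra row; if it comes
// back, there is a next page, and the last *returned* row seeds the cursor.
function paginate(fetchedRows, limit) {
  const hasNextPage = fetchedRows.length > limit;
  const rows = hasNextPage ? fetchedRows.slice(0, limit) : fetchedRows;
  const last = rows[rows.length - 1];
  const nextCursor = hasNextPage
    ? Buffer.from(JSON.stringify({ id: last.id, created_at: last.created_at })).toString('base64')
    : null;
  return { rows, nextCursor };
}
```

Note that the extra row itself is never returned to the client; it exists only to answer "is there a next page?" without a second query.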
Step 6: Implement the service method and route handler.
Implement the service method and route handler for the search feature.
Service method (src/services/user.service.js):
- Function: searchUsers(q, limit, cursor)
- Calls userRepository.searchUsers
- No additional business logic needed for this feature
Route handler (src/routes/users.route.js):
- GET /users/search
- Uses the searchUsers validator from src/validators/searchUsers.validator.js
- Returns 200 with { results, next_cursor, total_count } on success
- Returns 400 with { error: string } on validation failure
- Returns 500 with { error: "Internal server error" } on unexpected errors
- total_count should be returned as null in this implementation (not computed); we'll add it in a future iteration to avoid a COUNT(*) query on every request
Output both files. Keep the route handler thin — HTTP concerns only.
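For reference, the thin-handler shape being asked for looks roughly like this. It is written here as a plain async function with injected dependencies so the HTTP mapping is visible without Express itself; `validate` and `searchUsers` are stand-ins for the real validator and service, not the project's actual files:

```javascript
// Sketch of a thin route handler: validate, delegate, map to HTTP.
// Only the status-code mapping is the point; deps are illustrative stubs.
async function searchHandler(req, res, deps) {
  const { error, value } = deps.validate(req.query);
  if (error) {
    return res.status(400).json({ error: error.message });
  }
  try {
    const { rows, nextCursor } = await deps.searchUsers(value.q, value.limit, value.cursor);
    return res.status(200).json({ results: rows, next_cursor: nextCursor, total_count: null });
  } catch (err) {
    // Never leak internal error details to the client.
    return res.status(500).json({ error: 'Internal server error' });
  }
}
```

Everything else (cursor decoding, SQL, result shaping) lives below this function; if the generated handler is much longer than this, something has leaked up a layer.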
Learning tip: Ask AI to implement one layer at a time and commit between layers. If you implement everything at once and something breaks, you won't know which layer introduced the bug.
Stage 4: AI Code Review
With the implementation complete, run the four-dimension AI review before the PR goes to human review.
Step 7: Run the structured review.
Review the following implementation of a paginated search endpoint for a Node.js/Express + PostgreSQL API.
The feature adds GET /users/search with cursor-based pagination.
[paste all implementation files: validator, repository method, service method, route handler]
Run four reviews in sequence:
**Correctness review:**
- List all assumptions the code makes about inputs, callers, and database state
- Identify unhandled edge cases (empty results, invalid cursor, expired cursor, concurrent deletions between pages)
- Check error path consistency
**Security review:**
- Attack surface: public REST endpoint (assume auth middleware runs before this route; the handler itself performs no authentication)
- User-controlled inputs: q, limit, cursor query parameters
- Output: JSON response
- Check for: SQL injection (even with parameterized queries — verify), cursor tampering, information leakage in error messages, input size attacks
**Performance review:**
- This endpoint will be called approximately 50 times/second at peak
- User table has approximately 500,000 rows
- Check: index usage, query plan, cursor decoding overhead, missing LIMIT enforcement, result set size
**Architectural review:**
- Rules: routes = HTTP only, services = business logic, repositories = DB queries only, no ORM
- Check: layering violations, responsibilities in wrong layers, tight coupling
For each finding: severity (critical / should-fix / nice-to-have), location, description, and recommended fix.
Expected output: A structured list of findings across all four dimensions. Common real findings on this type of code: cursor not validated as valid JSON before parsing (correctness/security), error message exposing internal query structure (security), missing trigram index verification in the query (performance), response formatting logic in the repository layer (architectural).
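One finding of this kind is concrete enough to sketch a fix for: a raw q containing % or _ acts as a wildcard inside ILIKE, so a user can turn a literal search into a pattern match. A small escaping helper keeps the search literal; this sketch assumes PostgreSQL's default backslash escape character for LIKE/ILIKE:

```javascript
// Escape LIKE/ILIKE metacharacters in user input so q is matched literally.
// Assumes PostgreSQL's default backslash escape character.
function toIlikePattern(q) {
  const escaped = q.replace(/[\\%_]/g, (ch) => '\\' + ch);
  // The result is still passed as a bound parameter, e.g. name ILIKE $1.
  return `%${escaped}%`;
}
```

Note the distinction: escaping here is about wildcard semantics, not SQL injection; injection is already handled by the parameterized query that the pattern is bound into.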
Learning tip: Run this review before any human review. Fix the critical and should-fix items, then pass the cleaned-up code to your team. Human reviewers will have faster, higher-quality reviews because the structural issues are already resolved.
Stage 5: PR Description Generation
Step 8: Generate the PR description.
Generate a pull request description for the following change.
**Ticket summary:** Add paginated search to the users REST API endpoint. Product requires search by name or email with stable pagination for large result sets.
**Commits on this branch:**
- chore(db): add trigram indexes on users.name and email for search
- feat(validator): add searchUsers Joi validator
- feat(repository): add searchUsers cursor-based pagination method
- feat(service): add searchUsers service method
- feat(routes): add GET /users/search endpoint
- test(search): add integration tests for paginated user search
**Key technical decisions:**
- Used cursor-based (keyset) pagination instead of offset-based to maintain stable results on large tables and avoid expensive OFFSET scans
- Cursor is opaque (base64-encoded JSON) to allow future changes to pagination implementation without breaking clients
- Used pg_trgm trigram indexes to support ILIKE pattern matching efficiently — standard B-tree indexes can't accelerate unanchored ILIKE patterns like '%term%'
- Omitted total_count in this iteration to avoid COUNT(*) on every request; can be added with caching later
**Testing performed:**
- Integration tests with real PostgreSQL instance: first page, subsequent pages, last page, empty results, invalid cursor returns 400, search by name, search by email
- Manual testing with 10k row seed data — query time under 5ms with index
Generate a PR description using this structure:
## Summary
## Changes
## Motivation
## Technical Decisions
## Testing
## Checklist
Expected output: A complete, well-structured PR description that a reviewer can read in 3 minutes and understand the full context of the change. The "Technical Decisions" section — populated from your input — is what transforms a generic PR description into a genuinely useful engineering document.
Learning tip: The quality of the PR description is directly proportional to the quality of your input. The more specific your "key technical decisions" input, the more useful the output. Don't skip that section — it's the most valuable part of any PR description.
Stage 6: Final Review and Commit
Step 9: Final self-review before marking PR ready.
Before opening the PR for human review, run a final self-check. This is not an AI step — this is the judgment call that only you can make:
- Does the implementation match the spec? Check each spec requirement against the code.
- Are the findings from the four-dimension review addressed? Go through the AI review output and confirm every critical and should-fix item is resolved.
- Is the commit history clean? Each commit should be atomic and meaningful. Squash or rebase if needed.
- Does the PR description accurately reflect what was built? Read it again — add anything the AI missed.
- Are there any business logic implications the AI wouldn't know about? (e.g., does this search expose data that should be filtered by the caller's permissions?)
Step 10: Generate the commit message for any final cleanup.
Here is the diff of my cleanup changes after the code review:
[paste git diff --staged]
Write a Conventional Commits message. This is a cleanup commit addressing review feedback, not a new feature. Use the fix or refactor type as appropriate.
Learning tip: The final self-review step is non-negotiable. AI handles the systematic checks; you handle the semantic and business-logic checks. Never skip this step just because the AI review came back clean.
Complete Workflow Summary
Here is the full agentic workflow for this feature, condensed:
| Stage | AI Role | Your Role |
|---|---|---|
| Spec | Generate full spec from feature description | Verify decisions match your system conventions |
| Plan | Generate task list with layers and criteria | Verify scope and task granularity |
| Implementation | Generate code one layer at a time | Review, test, and commit each layer |
| Code Review | Four-dimension structured review | Fix findings, add human context |
| PR Description | Draft from your inputs | Enrich with technical decisions |
| Final Review | None | Semantic and business logic check |
Total wall-clock time for an experienced engineer using this workflow: approximately 2–3 hours for this feature. Without AI assistance: 4–6 hours. The time savings come from eliminating blank-page friction at every stage — spec writing, code drafting, review, and PR description — not from eliminating judgment.
Key Takeaways
- The agentic workflow is not a sequence of AI-generated outputs you paste together — it is a disciplined loop of AI draft, human review, refinement, and commit, repeated for each stage.
- Spec generation before implementation prevents ambiguity from becoming bugs. Spend time on the spec prompt and verify the output carefully.
- Implement and commit one layer at a time. This produces a clean git history and isolates bugs to the layer that introduced them.
- The four-dimension code review (correctness, security, performance, architectural fit) run before human review makes human review faster and higher-signal.
- The PR description is only as good as the "key technical decisions" input you provide. Document decisions and tradeoffs explicitly — AI can draft the words, but only you know the reasoning.