
This hands-on topic takes everything from Module 6 and applies it to a single realistic scenario: a 50-turn conversation that has grown bloated, drifted off scope, and is delivering diminishing returns. Your job is to analyze it, diagnose its problems, and restructure it into an optimized multi-session workflow that achieves the same outcomes at dramatically lower token cost.

This is not a theoretical exercise. You will work through a complete example — with real conversation excerpts, real diagnoses, and real restructured alternatives — and apply the same process to your own work.


The Scenario: The Checkout Feature Session

A senior engineer at an e-commerce company is implementing a new "express checkout" feature. She has been working in a single AI coding session (Claude via the API, with a 200K context window) for approximately 3 hours. The session is now at 50 turns.

The original goal: implement express checkout — a one-click purchase flow for returning users, using saved payment methods and addresses.

Session summary at turn 50:
- Total tokens consumed: ~85,000 (estimated from turn lengths)
- Current quality: degraded — the model is giving inconsistent advice and occasionally contradicting earlier decisions
- Feature completion: approximately 60% — core logic done, but payment flow, error handling, and tests are incomplete
- The engineer is frustrated and has spent the last 6 turns correcting the model's misunderstandings


Step 1: Diagnose the Bloated Session

Before restructuring, you need to understand specifically what went wrong. Apply the drift and lifecycle analysis tools from earlier topics.

Turn-by-turn drift audit (condensed)

Turns 1–10: Initialization and core design (High value, low waste)
The session starts well. Clear goal, defined constraints, good structural design work.

Sample turn 3 (user): Design the ExpressCheckout service class. It needs to: fetch the user's default payment method and address, validate they are still valid, and initiate a payment intent. Use our existing PaymentGatewayAdapter interface.

Sample turn 3 (model): [280 tokens — clean design with appropriate methods]

Drift level: None. Efficiency: High.

Turns 11–20: Implementation begins (Moderate value, growing noise)
The model starts implementing. A misunderstanding about address validation at turn 14 creates repair drift. Two turns spent correcting it. The correction remains in context.

Turn 14 (user): The address validation isn't right — we don't validate address existence against a third-party service, we just validate it's still in the user's saved addresses list.

Turn 15 (model): [Re-explains address validation, but now the context contains BOTH the wrong explanation and the right one]

Turn 16 (user): Right, use that approach.

Drift introduced: Two conflicting framings of address validation are now permanently in context.

Turns 21–30: Scope creep begins (Lower value, significant noise)
The engineer notices that the PaymentGatewayAdapter is clunky and asks a side question about whether it should be refactored. The model provides a detailed refactoring proposal. The engineer finds it interesting but out of scope for this sprint.

Turn 23 (user): While we're here — do you think the PaymentGatewayAdapter interface is well designed? It feels like the method naming is inconsistent.

Turn 23 (model): [450 tokens analyzing the adapter interface and proposing a refactoring]

Turn 24 (user): Good points, but let's not refactor it now. Back to express checkout.

Drift introduced: 450 tokens of PaymentGatewayAdapter analysis in context, plus the implicit suggestion that the adapter is problematic — which starts influencing the model's subsequent express checkout code.

Turns 31–40: Context contamination visible (Low value, high noise)
The model begins adding unnecessary caveats about the PaymentGatewayAdapter in responses that have nothing to do with it. An unrelated question about session security leads to 3 turns of general OAuth discussion. A debugging detour adds 4 turns of context about a problem that was resolved by restarting the dev server.

Turn 33 (model): [In a response about implementing the payment intent, the model adds a paragraph cautioning about the "inconsistent naming in PaymentGatewayAdapter noted earlier" — this was not asked and is not relevant]

Turn 36 (user): By the way, should express checkout use the same session tokens as regular checkout?

Turns 36–38: [3-turn OAuth discussion that concludes with "yes, same session tokens" — the conclusion fits in one sentence but took 3 turns to reach]

Turn 39 (user): The tests aren't running. Getting a "Cannot find module" error.

Turns 39–42: [4 turns debugging a module resolution issue caused by a missing tsconfig path alias — resolved by a simple config fix. The debugging history remains in context.]

Drift introduced: ~3,000 tokens of noise — PaymentGatewayAdapter contamination, OAuth tangent, module resolution debugging.

Turns 41–50: Quality degradation (Very low value, maximum noise)
The model contradicts the address validation decision from turn 16. It also suggests using a pattern for the payment intent that conflicts with the PaymentGatewayAdapter interface it has already implemented. The engineer spends 4 turns correcting. Responses are growing longer and less specific.

Turn 44 (model): [Implements payment validation using third-party address lookup — directly contradicting the decision at turn 16]

Turn 45 (user): No — we decided at turn 16 we're NOT using third-party address lookup.

Turn 46 (model): [Corrects the implementation but adds a 300-token caveat explaining why third-party lookup would be "better" — relitigating a settled decision]

Diagnosis Summary

| Category | Token estimate | % of total |
| --- | --- | --- |
| Active productive work | 25,000 | 29% |
| Load-bearing decision context | 8,000 | 9% |
| Historical context (no longer needed) | 18,000 | 21% |
| Repair drift (corrections + re-explanations) | 14,000 | 16% |
| Scope creep (adapter discussion, OAuth tangent) | 12,000 | 14% |
| Debugging tangent (module resolution) | 5,000 | 6% |
| Contradiction and re-litigation | 3,000 | 4% |
| **Total** | **85,000** | **100%** |

Active useful content: 38% of total tokens consumed. The session is 62% waste.

Tip: Run this exact diagnosis on your own most recent long session. Categorize each turn cluster, estimate tokens, and calculate your "active useful content" percentage. Most practitioners find it in the 30–50% range for sessions over 30 turns. This is your optimization baseline.
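The tally behind the diagnosis summary is simple enough to script. A minimal sketch in TypeScript, using the example session's figures (the category keys are ad hoc names, not part of any tool; note that rounding the combined figure here lands at 39% rather than the 38% you get by summing the table's per-category percentages):

```typescript
// Per-category token estimates from the turn-cluster audit.
const audit: Record<string, number> = {
  activeProductiveWork: 25_000,
  loadBearingDecisions: 8_000,
  historicalContext: 18_000,
  repairDrift: 14_000,
  scopeCreep: 12_000,
  debuggingTangent: 5_000,
  relitigation: 3_000,
};

// Only these two categories count as "active useful content".
const USEFUL = ["activeProductiveWork", "loadBearingDecisions"];

const total = Object.values(audit).reduce((a, b) => a + b, 0);
const useful = USEFUL.reduce((sum, k) => sum + audit[k], 0);
const usefulPct = Math.round((useful / total) * 100);

console.log(`Total tokens: ${total}`);
console.log(`Active useful content: ${usefulPct}% (waste: ${100 - usefulPct}%)`);
```

Swap in your own per-cluster estimates to get your optimization baseline.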


Step 2: Extract the Checkpoint — Capturing What Matters

Before ending the bloated session, extract a checkpoint that captures the 38% of value from the 85,000 tokens. Use the following prompt in the current session:

Generate a session checkpoint for this conversation. Focus on decisions, not discussion. Include:

1. Feature goal and current completion status (2–3 sentences max)
2. Key technical decisions — each as a bullet: [decision]: [exact implementation detail]
3. Current state of each file modified or created (name + brief status)
4. What is NOT in scope (important negative constraints)
5. Blockers preventing completion
6. Remaining work items in priority order
7. A validation question for opening the next session

Be ruthlessly concise. Target 300 words maximum.

Expected checkpoint output (generated from the session):

## Express Checkout Checkpoint — Session 1

**Goal**: Implement one-click express checkout for returning users.
**Status**: ~60% complete. Core service design and basic payment intent flow done. Payment completion, error handling, and test suite not yet started.

**Technical Decisions**:
- Address validation: check against user's saved addresses list ONLY — no third-party address lookup service
- Payment method: use user's default saved card via existing PaymentGatewayAdapter interface (no adapter changes this sprint)
- Session handling: express checkout uses same session tokens as regular checkout — no change required
- Key schema: `users/{userId}/savedAddresses` (Firestore path, established in turn 8)

**File State**:
- `/src/services/ExpressCheckoutService.ts` — new file, ~80% complete. Methods: `validateUserPrerequisites()`, `buildPaymentIntent()` done. `completeCheckout()` not started.
- `/src/types/ExpressCheckout.ts` — complete, all types defined
- `/src/controllers/CheckoutController.ts` — route handler stub added, not yet wired to service

**Not In Scope (this sprint)**:
- PaymentGatewayAdapter refactoring
- Third-party address validation
- OAuth/session token changes

**Blockers**: None currently. Next work is straightforward implementation.

**Remaining Work (priority order)**:
1. Implement `completeCheckout()` in ExpressCheckoutService
2. Wire CheckoutController route to ExpressCheckoutService
3. Implement error handling (payment failure, expired payment method, address no longer valid)
4. Write unit tests for ExpressCheckoutService (target: 15 tests)
5. Write integration test for the full express checkout flow

**Validation question**: What address validation approach did we decide on, and why did we rule out the alternative?

This checkpoint is approximately 310 tokens. It replaces 85,000 tokens of conversation history.

Tip: After generating an AI-produced checkpoint, always manually review it for completeness. AI-generated checkpoints are approximately 85–90% accurate. The most common gap is missing "not in scope" constraints — add these manually if they were established but not captured.


Step 3: Design the Optimized Multi-Session Workflow

Now restructure the remaining work into an optimized multi-session plan. Apply the parallel vs. sequential framework from Topic 5.

Dependency analysis of remaining work

Looking at the 5 remaining items from the checkpoint:

1. completeCheckout() implementation — depends on: PaymentGatewayAdapter interface (known), address validation decision (known). Independent of: error handling, tests.
2. Controller wiring — depends on: completeCheckout() being defined (its signature needed). Can proceed in parallel with internal implementation.
3. Error handling — depends on: knowing what errors completeCheckout() can throw. Must come after item 1.
4. Unit tests — depends on: all service methods being defined. Must come after items 1 and 3.
5. Integration test — depends on: full service + controller wired. Must come after items 1, 2, and 3.
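This dependency analysis can be written down as a small graph and ordered mechanically. A sketch (the session IDs are the ones used in the plan below; the graph data and the sort are illustrative, and the sort assumes the graph is acyclic):

```typescript
// Remaining-work items and their prerequisites.
const deps: Record<string, string[]> = {
  "2A-completeCheckout": [],
  "2B-controllerWiring": [], // needs only the method signature, known upfront
  "3-errorHandling": ["2A-completeCheckout"],
  "4-unitTests": ["2A-completeCheckout", "3-errorHandling"],
  "5-integrationTest": ["2A-completeCheckout", "2B-controllerWiring", "3-errorHandling"],
};

// Depth-first topological sort: every item appears after all of its prerequisites.
function topoSort(graph: Record<string, string[]>): string[] {
  const order: string[] = [];
  const seen = new Set<string>();
  const visit = (node: string) => {
    if (seen.has(node)) return;
    seen.add(node);
    for (const dep of graph[node]) visit(dep);
    order.push(node);
  };
  Object.keys(graph).forEach(visit);
  return order;
}

console.log(topoSort(deps));
```

Items with empty dependency lists and no path between them (2A and 2B here) are the parallel candidates; everything else must run in the sorted order.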

Optimal session structure

Session 2A (Independent): completeCheckout() implementation
Session 2B (Independent, parallel with 2A): Controller wiring stub + route definition

↓ (both complete)

Session 3: Error handling (depends on 2A output)

↓

Session 4: Unit tests (depends on 2A + 3 output)

↓

Session 5: Integration test (depends on all prior sessions)

Sessions 2A and 2B can run in parallel (or near-parallel — start 2B while 2A is in progress). Sessions 3, 4, and 5 are sequential.

Session blueprints

Session 2A: completeCheckout() Implementation

Opening prompt:

You are resuming implementation of an express checkout feature. Checkpoint below.

[paste checkpoint]

Validation check first: What address validation approach did we decide on, and why did we rule out the alternative?

If correct, proceed to:
Task: Implement the `completeCheckout(paymentIntentId: string, userId: string): Promise<CheckoutResult>` method in ExpressCheckoutService.ts.

The method must:
1. Confirm the payment intent status via PaymentGatewayAdapter.confirmPayment()
2. Update the order status in the database
3. Clear the user's cart
4. Return a CheckoutResult with order ID and confirmation number

Do not implement error handling in this session — we will add that in Session 3.
Output: the complete method implementation as a TypeScript code block.

Expected session length: 8–12 turns. Estimated tokens: 6,000–9,000.
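For reference, the Session 2A deliverable might look like the sketch below. The adapter, order-store, and cart-store interfaces are stand-ins invented for illustration (the real project's types are not shown in this scenario); only the four-step flow follows the spec above. Error handling is deliberately absent, per the blueprint.

```typescript
interface CheckoutResult { orderId: string; confirmationNumber: string; }

// Stand-in dependency interfaces; the real project's shapes will differ.
interface PaymentGatewayAdapter {
  confirmPayment(paymentIntentId: string): Promise<{ confirmationNumber: string }>;
}
interface OrderStore {
  markPaid(userId: string, paymentIntentId: string): Promise<{ orderId: string }>;
}
interface CartStore { clear(userId: string): Promise<void>; }

class ExpressCheckoutService {
  constructor(
    private gateway: PaymentGatewayAdapter,
    private orders: OrderStore,
    private carts: CartStore,
  ) {}

  async completeCheckout(paymentIntentId: string, userId: string): Promise<CheckoutResult> {
    // 1. Confirm the payment intent via the gateway adapter
    const { confirmationNumber } = await this.gateway.confirmPayment(paymentIntentId);
    // 2. Update the order status in the database
    const { orderId } = await this.orders.markPaid(userId, paymentIntentId);
    // 3. Clear the user's cart
    await this.carts.clear(userId);
    // 4. Return the result (error handling comes in Session 3)
    return { orderId, confirmationNumber };
  }
}
```

Keeping the dependencies behind narrow interfaces like this is also what makes the Session 4 unit tests cheap to mock.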

Session 2B: Controller Wiring (can start in parallel)

Opening prompt:

Express checkout context (minimal — only what this task needs):

Project: e-commerce platform, Node.js/TypeScript
File: /src/controllers/CheckoutController.ts
Task: Wire the express checkout route to the ExpressCheckoutService.

Route spec:
- POST /checkout/express
- Auth: required (user must be logged in)
- Request body: { confirmCheckout: boolean }
- Response: { orderId: string, confirmationNumber: string } on success

The ExpressCheckoutService class exists at /src/services/ExpressCheckoutService.ts.
It has a method `completeCheckout(paymentIntentId: string, userId: string): Promise<CheckoutResult>`.
The paymentIntentId is stored in the user's session under session.expressPaymentIntentId.

Output: the updated CheckoutController.ts with the new route handler wired in.
Do not modify any existing routes.

Note: This session does NOT need the full checkpoint because it has a clear, narrow scope. A minimal context is sufficient — and leaner.

Expected session length: 6–8 turns. Estimated tokens: 3,000–5,000.
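A sketch of the Session 2B deliverable, written framework-agnostic so it stands alone (an Express handler would have the same shape). The request and response shapes are stand-ins; the route behavior, session field name, and service signature come from the blueprint above:

```typescript
interface CheckoutResult { orderId: string; confirmationNumber: string; }
interface ExpressCheckoutService {
  completeCheckout(paymentIntentId: string, userId: string): Promise<CheckoutResult>;
}

// Minimal request/response stand-ins for illustration.
interface RequestLike {
  body: { confirmCheckout: boolean };
  session: { userId?: string; expressPaymentIntentId?: string };
}
interface ResponseLike { status: number; body: unknown; }

// Handler for POST /checkout/express
async function handleExpressCheckout(
  req: RequestLike,
  service: ExpressCheckoutService,
): Promise<ResponseLike> {
  const { userId, expressPaymentIntentId } = req.session;
  // Auth required: reject anonymous requests
  if (!userId) return { status: 401, body: { error: "Authentication required" } };
  // Must have an explicit confirmation and a stored payment intent
  if (!req.body.confirmCheckout || !expressPaymentIntentId) {
    return { status: 400, body: { error: "No checkout to confirm" } };
  }
  const result = await service.completeCheckout(expressPaymentIntentId, userId);
  return { status: 200, body: result };
}
```

Note that the handler needs nothing from the main session except the method signature and the session key, which is exactly why the minimal context works.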

Session 3: Error Handling

Opening prompt after receiving Session 2A output:

Continuing express checkout implementation. Current state:

- ExpressCheckoutService.completeCheckout() is complete [paste implementation from 2A]
- Controller is wired [paste from 2B]

Task: Add error handling to ExpressCheckoutService. Handle these specific cases:
1. Payment confirmation fails (PaymentGatewayAdapter throws PaymentFailedError)
2. Payment method expired (PaymentGatewayAdapter throws ExpiredPaymentMethodError)
3. User's default address no longer in saved addresses
4. Database update fails after payment confirmation (must handle payment already charged)

For each error:
- Define the error type (TypeScript class extending Error)
- Add the throw in completeCheckout()
- Add appropriate HTTP status mapping in a new ErrorMapper utility

Output format: separate code blocks for each new/modified file.

Expected session length: 10–14 turns. Estimated tokens: 7,000–11,000.
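The error-type and status-mapping portion of this deliverable might look like the sketch below. The error class names follow the spec; the specific HTTP status codes are illustrative choices, not decisions established in the session:

```typescript
// Error types from the Session 3 spec.
class PaymentFailedError extends Error {}
class ExpiredPaymentMethodError extends Error {}
class AddressNoLongerValidError extends Error {}
// Payment charged but order update failed: needs reconciliation, not a retry.
class PostPaymentPersistenceError extends Error {}

// ErrorMapper utility: error class to HTTP status. Statuses are assumptions.
const STATUS_MAP: Array<[new (...args: any[]) => Error, number]> = [
  [PaymentFailedError, 402],
  [ExpiredPaymentMethodError, 402],
  [AddressNoLongerValidError, 409],
  [PostPaymentPersistenceError, 500],
];

function mapErrorToStatus(err: Error): number {
  for (const [type, status] of STATUS_MAP) {
    if (err instanceof type) return status;
  }
  return 500; // unknown errors fall through to a generic server error
}
```

Defining each case as a distinct class is what lets the Session 4 unit tests assert on error type rather than on message strings.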

Session 4: Unit Tests

Opening prompt:

Writing unit tests for ExpressCheckoutService. Here is the complete current implementation:

[paste ExpressCheckoutService.ts — including completeCheckout() and error types from Session 3]

Testing standards: Jest, unit tests only (mock all external dependencies), target 15 tests.

Test cases to cover:
1. Happy path: valid user, valid payment, completes successfully
2. Payment failure: PaymentGatewayAdapter throws PaymentFailedError
3. Expired payment method: throws ExpiredPaymentMethodError
4. Address validation: user's default address removed from saved list
5. Database failure post-payment: handles the "already charged" edge case
6–15: Additional edge cases — generate based on the implementation

Format: complete Jest test file, one describe block, clear test names.

Expected session length: 8–12 turns. Estimated tokens: 6,000–9,000.
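To make the mocking approach concrete, here is test case 2 (payment failure) sketched framework-free. In the real project this would be a Jest test with jest.fn() mocks, but the structure is the same; all types below are minimal stand-ins for the project's actual classes:

```typescript
class PaymentFailedError extends Error {}

interface PaymentGatewayAdapter {
  confirmPayment(id: string): Promise<{ confirmationNumber: string }>;
}

// Simplified service stand-in; the real one takes more dependencies.
class ExpressCheckoutService {
  constructor(private gateway: PaymentGatewayAdapter) {}
  async completeCheckout(paymentIntentId: string, _userId: string) {
    const { confirmationNumber } = await this.gateway.confirmPayment(paymentIntentId);
    return { confirmationNumber };
  }
}

// Mock adapter that always fails, standing in for a Jest rejected-value mock.
const failingGateway: PaymentGatewayAdapter = {
  confirmPayment: async () => { throw new PaymentFailedError("card declined"); },
};

// Test: a gateway failure surfaces as PaymentFailedError, not a silent success.
async function testPaymentFailureSurfaces(): Promise<boolean> {
  const svc = new ExpressCheckoutService(failingGateway);
  try {
    await svc.completeCheckout("pi_1", "u_1");
    return false; // should not succeed
  } catch (e) {
    return e instanceof PaymentFailedError;
  }
}

testPaymentFailureSurfaces().then((ok) => console.log(ok ? "PASS" : "FAIL"));
```

Because the session prompt includes the full implementation, the model can generate cases 6–15 from the actual branches in the code rather than guessing at them.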

Tip: Write the session blueprints before starting any of the sessions. This forces you to think through dependencies and context requirements upfront. Practitioners who plan their session structure before starting report 2–3x better outcomes than those who decide session boundaries reactively.


Step 4: Token Cost Comparison

Original approach (continuing the bloated session to completion):
- Current: 85,000 tokens consumed
- Estimated to complete: 20–25 more turns × ~2,000 tokens/turn (inflated by context bloat) = 40,000–50,000 more tokens
- Total: ~125,000–135,000 tokens

Optimized multi-session approach:
- Session 2A (completeCheckout): ~7,000 tokens
- Session 2B (controller wiring): ~4,000 tokens
- Session 3 (error handling): ~9,000 tokens
- Session 4 (unit tests): ~7,500 tokens
- Session 5 (integration test): ~6,000 tokens
- Total remaining: ~33,500 tokens

Savings on remaining work: 40,000–50,000 vs. 33,500, a direct token saving of roughly 16–33%.
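The remaining-work comparison is simple arithmetic. All figures are the estimates from this section:

```typescript
// Optimized path: sessions 2A through 5.
const optimizedRemaining = 7_000 + 4_000 + 9_000 + 7_500 + 6_000;

// Bloated path: low and high estimates for finishing in the original session.
const bloatedRemaining = [40_000, 50_000];

// Percent saved at each end of the estimate range.
const savingsPct = bloatedRemaining.map(
  (b) => Math.round(((b - optimizedRemaining) / b) * 100),
);

console.log(`Optimized remaining: ${optimizedRemaining} tokens`);
console.log(`Savings range: ${savingsPct[0]}%–${savingsPct[1]}%`);
```

The spread is wide because the savings depend heavily on how inflated the bloated session's per-turn cost has become.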

Add the 85,000 tokens already consumed: if you had structured this way from the start, the complete feature would have cost approximately:
- Optimized structure for all sessions: ~55,000–65,000 tokens total
- Actual bloated single session: ~125,000–135,000 tokens total
- Full restructuring savings: 50–60% token reduction for equivalent output quality


Step 5: Apply This to Your Own Work — The Restructuring Exercise

Now apply the same process to a recent session from your own workflow. Follow these 5 steps:

Step 1: Select a target session

Pick a session that:
- Was 20+ turns long
- Felt like it drifted at some point
- Produced work that took longer than expected

Step 2: Audit the session

Go back through the conversation history and categorize each turn cluster (5-turn blocks). Estimate the token percentage in each category:
- Active productive work
- Load-bearing decisions
- Historical context
- Repair drift
- Scope creep
- Debugging tangents
- Contradiction/re-litigation

Step 3: Extract the checkpoint

Identify what a complete checkpoint of that session would have looked like at its midpoint. Write it out — what decisions, what file states, what constraints, what remaining work.

Step 4: Design the optimized structure

Map the remaining work from your checkpoint. Identify which tasks are independent (parallel candidates) and which are dependent (sequential required). Draw the dependency graph.

Step 5: Write 2–3 session blueprints

For the most complex remaining tasks, write a complete session-opening prompt. Include the minimal context needed, the validation question, and the specific output format required.

Tip: Share your restructured session plan with a colleague and ask them to review it for gaps — specifically, "does the context in each session blueprint contain everything the model needs to do this task correctly?" Peer review of session blueprints catches missing constraints that you, as the author, take for granted.


Common Restructuring Mistakes and Corrections

Mistake: Over-splitting independent tasks
Breaking a task into 8 sessions when 3 would do. Session management overhead — writing opening prompts, running validation turns, synthesizing outputs — has a cost. Sessions shorter than 5–6 turns often do not justify the setup overhead.

Correction: Combine tasks that are independent but small. Group them into "batches" that fit naturally into a single focused session.

Mistake: Under-specifying the context in parallel sessions
Starting a parallel session with "the interface spec" but not including the relevant constraint decisions — resulting in an implementation that is technically correct but violates an architectural constraint established in an earlier session.

Correction: Add an "Active constraints" section to every parallel session blueprint, listing constraints from the main session that apply even though the full session history is not included.

Mistake: Skipping the synthesis session
Running 4 parallel implementation sessions and then trying to integrate the outputs in the main codebase without a dedicated synthesis session. Inconsistencies emerge during integration and require costly ad-hoc debugging.

Correction: Always budget a synthesis session. Even for simple tasks, 5–8 turns to review and integrate parallel outputs is cheaper than discovering integration issues post-deploy.

Tip: The first time you restructure a bloated session into an optimized multi-session workflow will feel like it takes longer than just pushing through the bloated session. By the third time, the discipline will feel natural and the time investment will feel obviously worthwhile. The habit compounds — each well-structured session produces better checkpoints, which make the next session cheaper to set up.