
Translating Business Requirements Into Engineering Specs

The gap between "the business wants X" and "the agent builds the right X" is always a specification problem — and it is always the engineer's responsibility to close it.


The Translation Chain: From Business Ask to Agent Task

Business requirements and engineering tasks are written for fundamentally different audiences. A product manager describes what a user should experience. An agent needs to know exactly what code to write, what to validate, what edge cases to handle, and what "done" means in testable terms. The distance between those two descriptions is where most agentic engineering failures originate.

The translation chain has four links: business ask → user story → engineering spec → agent task. Each link serves a different purpose, and collapsing them — skipping directly from a Slack message to an agent prompt — is the single most common mistake engineers make when adopting AI-assisted development.

The business ask is typically outcome-oriented and technology-agnostic. "We need to let users pay with Apple Pay" is a business ask. It says nothing about the payment provider SDK, the existing checkout state machine, the session handling requirements, or the error recovery behavior. It is a correct statement of intent, but an insufficient instruction for an agent.

The user story captures the actor, action, and motivation in a structured form. It surfaces implicit stakeholders (guest users vs. logged-in users, mobile-only vs. all platforms) and brings acceptance criteria into view. The engineering spec translates those acceptance criteria into system behavior: API contracts, data model changes, validation rules, error codes, edge case handling, performance bounds, and security requirements. The agent task is a scoped, self-contained instruction derived from the engineering spec — small enough that the agent can execute it, verify it, and return a reviewable result.

Learning tip: When you receive a business ask, do not write a prompt immediately. Write the spec first, even a rough one. The act of writing the spec will surface every question you need answered before the agent touches code. The time you spend on the spec is the time you save in review cycles.


Common Gaps: Ambiguity, Contradiction, and Assumed Context

Requirements handed to engineers — and subsequently to agents — almost always contain three categories of problems: ambiguity, contradiction, and assumed context. Each produces a different failure mode.

Ambiguity is the most common. "Users should be able to save their cart" does not specify whether this means persisting to the server, to localStorage, or both. It does not say whether anonymous users can save carts, whether the cart expires, or what happens when a saved cart contains an item that has since gone out of stock. An agent given an ambiguous requirement will make choices — often plausible ones — that turn out to be wrong for your system.

Contradiction is more dangerous because it is less visible. A requirement might state "the checkout flow must work for guest users" in one section and "all orders require a user account for order history" in another. Both are reasonable requirements; together they conflict. An agent resolves contradictions by defaulting to whatever pattern is most common in its training data — which may not match your business rules. You will not catch this until a PM reviews the implementation and asks why guest orders disappear from the order history page.

Assumed context is the hardest to catch because it is, by definition, invisible until something breaks. Requirements assume the engineer knows that your currency handling uses integer cents, not floats. That your inventory system is eventually consistent and stock counts lag by up to 30 seconds. That your payment provider returns error codes differently in test mode versus production. None of this appears in the requirement. The agent does not know it unless you put it there.
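The integer-cents assumption is a concrete case worth seeing in code. This is a minimal sketch, assuming a hypothetical percentDiscountCents helper; the round-half-up policy (Math.round) is an illustrative choice, not a prescribed one.

```typescript
// Sketch of the integer-cents assumption. An agent that does not know this
// context will happily compute discounts on fractional-dollar floats.
function percentDiscountCents(subtotalCents: number, percent: number): number {
  if (!Number.isInteger(subtotalCents)) {
    throw new Error("prices in this system are integer cents");
  }
  // All arithmetic stays in integers; round exactly once, at the end.
  return Math.round((subtotalCents * percent) / 100);
}

// 15% of $19.99: 1999 cents * 15 / 100 = 299.85, rounded to 300 cents.
const discountCents = percentDiscountCents(1999, 15); // 300
// A float-based version (19.99 * 0.15) can drift from the exact decimal
// result, because 19.99 has no exact binary representation.
```

Nothing in the requirement tells the agent which of these two worlds it is in; only the context you supply does.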

Using AI to surface these gaps before implementation starts is one of the highest-leverage applications of agentic tooling. The cost of finding an ambiguity in a spec is near zero. The cost of finding it in a code review — or worse, in production — is not.

Learning tip: Before handing any requirement to an agent for implementation, first hand it to an agent for interrogation. Interrogation and implementation are different tasks and should be two separate sessions.


Using AI to Interrogate Requirements

The goal of interrogating a requirement is to produce a complete list of questions, assumptions, and edge cases before a single line of code is written. AI tools are exceptionally good at this task because they can pattern-match against thousands of similar requirement scenarios and surface the long tail of cases that experienced engineers usually catch only through hard experience.

The interrogation prompt pattern has three components: the requirement text, a framing that activates adversarial or exhaustive thinking, and explicit output formatting so the results are actionable.

Step 1: Extract edge cases from a requirement

You are a senior software engineer with experience in e-commerce systems. I am going to give you a business requirement. Your task is NOT to implement it — your task is to interrogate it.

Produce a structured list of:
1. Ambiguous terms or phrases that require clarification before implementation
2. Edge cases that the requirement does not address
3. Implicit assumptions the requirement makes about the system
4. Potential conflicts with common e-commerce system behaviors (e.g., inventory, pricing, session management)

For each item, write one sentence explaining why it matters to the implementation.

Requirement:
"Users should be able to apply a discount code during checkout. If the code is valid, the discount should be applied to the order total before tax."

Run this prompt against any requirement before you write the spec. The output will not be exhaustive — no tool is — but it will catch 70–80% of the gaps that would otherwise surface during implementation or review.

Step 2: Surface hidden assumptions with a context probe

You are a senior backend engineer. I will give you a requirement and a description of our system. Identify every assumption the requirement makes that our system may or may not satisfy. For each assumption, tell me what would break if the assumption is false.

System context:
- E-commerce platform, Node.js/TypeScript backend
- PostgreSQL database, prices stored as integers (cents)
- Orders can be created by guest users (no account required)
- Tax calculation is handled by a third-party service called at checkout finalization
- Inventory is managed by an external warehouse system with eventual consistency (30-second lag)

Requirement:
"Users should be able to apply a discount code during checkout. If the code is valid, the discount should be applied to the order total before tax."

List each assumption as: [Assumption] → [What breaks if false]

The output of this prompt becomes the "assumptions" section of your engineering spec — a section that most specs omit and almost every implementation relies on.

Learning tip: Save these interrogation prompts as reusable templates. The same structure works for any requirement domain. The only thing that changes is the system context block. Build a prompt library, not a one-off habit.


Writing Specs That Survive PM Review and Drive Accurate Agent Output

A spec has two audiences with different needs: stakeholders who need to confirm the spec reflects their intent, and agents (and engineers) who need to implement it unambiguously. Writing a spec that serves both audiences is a discipline worth developing deliberately.

For stakeholder review, the spec needs to be readable in plain language, organized around user-facing behavior, and explicit about scope (what is included and what is explicitly out of scope for this work). Product managers and designers review specs for intent alignment, not technical accuracy. If your spec is impenetrable to a non-engineer, you will get blanket approval without genuine review — which means you will discover the misalignment later.

For agent execution, the spec needs to be precise about system behavior, not user experience. API contracts, field names, validation rules, error codes and messages, data model changes, and acceptance criteria expressed as input/output pairs are what the agent needs. Narrative descriptions of the user journey are useful context, but they do not replace behavioral specification.

The structure that works for both audiences:

1. Summary (2–3 sentences, plain language): What this feature does and why.

2. Scope (bullet list): What is in scope. What is explicitly out of scope.

3. User-facing behavior (numbered scenarios): Describe the flow from the user's perspective, including success paths and error states. Use concrete examples with specific values.

4. System behavior (technical spec): API endpoints, request/response schemas, validation rules, error codes, state transitions, database changes, integration points.

5. Acceptance criteria (testable statements): "Given X input, the system returns Y response with Z side effects." Each criterion maps directly to a test case.

6. Assumptions and dependencies (explicit list): What must be true for this spec to be correct. External systems, configuration values, data states.

7. Out-of-scope edge cases (explicit deferral list): Edge cases you have identified but are deliberately not handling in this iteration, with a brief rationale.
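The "each criterion maps directly to a test case" rule in item 5 becomes concrete if you treat criteria as data: each row is a Given/Expect pair that a PM can review for intent and a test runner can execute. The field names and the SAVE15/CODE_EXPIRED values below are hypothetical.

```typescript
// Acceptance criteria as a table of input/output pairs. A PM reviews the
// rows; a test runner executes them. All values here are illustrative.
type Criterion = {
  given: { subtotalCents: number; code: string };
  expect: { ok: boolean; totalCents?: number; errorCode?: string };
};

const criteria: Criterion[] = [
  // "Given a valid 15% code on a $40.00 subtotal, the total is $34.00."
  { given: { subtotalCents: 4000, code: "SAVE15" },
    expect: { ok: true, totalCents: 3400 } },
  // "Given an expired code, the response is CODE_EXPIRED, not a generic error."
  { given: { subtotalCents: 4000, code: "EXPIRED10" },
    expect: { ok: false, errorCode: "CODE_EXPIRED" } },
];
```

If a criterion cannot be written as a row in this table, it is not yet testable, and an agent cannot verifiably implement it.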

Learning tip: Section 7 — the out-of-scope deferral list — is the section most engineers skip. It is also the section that prevents the most scope creep. When a stakeholder asks "what about the case where a user applies two discount codes?" you can point to the deferral list instead of having an undocumented conversation.


Hands-On: Checkout Requirement to Complete Engineering Spec

This exercise walks through the full translation chain for a realistic e-commerce requirement. Follow the steps in order — each builds on the previous output.

Starting requirement (received from product):

"Allow users to apply a discount code at checkout. Valid codes reduce the order total. The discount should apply before tax is calculated."

Step 1: Run the edge case interrogation prompt

Use the prompt from the "Using AI to Interrogate Requirements" section. Paste the requirement as-is. Collect the output — you will use it in Step 3.

Expected output: A list of 8–15 questions and edge cases, including items like: What happens when a code is expired vs. never valid? Can a code be used more than once? Does the discount apply per item or to the cart total? What happens if the discounted total goes below zero? Can a code be stacked with a sale price?

Step 2: Answer the questions with your PM/domain knowledge

Go through the interrogation output and answer each item. You do not need to resolve all of them; for some, the answer will simply be to defer the item. Write your answers inline. This is the most important step and the one that requires the most human judgment.

Example answers:
- Codes are single-use per customer (not globally single-use)
- Codes cannot be combined with other codes (one active discount at a time)
- Discount is applied to cart subtotal, not per item
- Minimum order value may be required (configurable per code)
- Total cannot go below $0.00 — code reduces total to zero at minimum
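The answers above are already executable rules. Here is a minimal sketch of them as code, assuming a DiscountCode shape and a reason string that are illustrative rather than mandated by the spec:

```typescript
// The answered questions as code: one code per order, subtotal-based,
// configurable minimum, total clamped at zero. Names are assumptions.
type DiscountCode = {
  code: string;
  kind: "percentage" | "fixed";
  value: number;                // percent (0-100) for "percentage", cents for "fixed"
  minOrderCents: number | null; // configurable per code; null = no minimum
};

type ApplyResult =
  | { ok: true; totalCents: number }
  | { ok: false; reason: "BELOW_MINIMUM_ORDER" };

function applyCode(subtotalCents: number, dc: DiscountCode): ApplyResult {
  if (dc.minOrderCents !== null && subtotalCents < dc.minOrderCents) {
    return { ok: false, reason: "BELOW_MINIMUM_ORDER" };
  }
  const discountCents = dc.kind === "percentage"
    ? Math.round((subtotalCents * dc.value) / 100)
    : dc.value;
  // "Total cannot go below $0.00": clamp rather than reject.
  return { ok: true, totalCents: Math.max(0, subtotalCents - discountCents) };
}
```

Notice that every branch in this function traces back to one answered question. An unanswered question would have become an unreviewed branch the agent chose on its own.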

Step 3: Generate a draft spec from answered questions

You are a senior software engineer writing an engineering spec for an e-commerce checkout feature. I will give you:
1. The original business requirement
2. A list of answered questions and scope decisions

Your task is to produce a complete engineering spec using this structure:
- Summary (2–3 plain-language sentences)
- Scope (in-scope and explicitly out-of-scope bullet lists)
- User-facing behavior (numbered scenarios with concrete examples)
- System behavior (API endpoints, request/response schemas, validation rules, error codes)
- Acceptance criteria (testable input/output statements)
- Assumptions and dependencies
- Deferred edge cases

Business requirement:
"Allow users to apply a discount code at checkout. Valid codes reduce the order total. The discount should apply before tax is calculated."

Answered questions and scope decisions:
- Codes are single-use per customer (not globally single-use per code)
- A user can only apply one discount code per order
- Discount applies to cart subtotal before tax calculation
- Minimum order value is configurable per code (may be null = no minimum)
- Discounted total cannot go below $0.00
- Guest users can apply codes (no account required)
- Code validation happens server-side only
- Codes do not apply to shipping costs
- Expired codes return a specific error, not a generic "invalid code" error
- System stores: code string, discount type (percentage or fixed), discount value, expiration date, per-customer usage count

Generate the engineering spec now.

Expected output: A 400–600 word spec covering all sections. Review it for accuracy against your answers before moving forward.
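As a sanity check on the generated spec's system-behavior section, the stored fields listed in the scope decisions can be sketched as a type, with expired codes distinguished from unknown ones as the decisions require. Field and error names here are assumptions for illustration, not part of any generated spec.

```typescript
// The stored record from the scope decisions: code string, discount type,
// discount value, expiration date, per-customer usage count.
type StoredDiscountCode = {
  code: string;
  discountType: "percentage" | "fixed";
  discountValue: number;                 // percent or cents, per discountType
  expiresAt: Date | null;
  usageByCustomer: Map<string, number>;  // customerId -> times used
};

// Expired codes get their own error, not a generic "invalid code".
type CodeError = "CODE_NOT_FOUND" | "CODE_EXPIRED" | "CODE_ALREADY_USED";

function classifyCode(
  dc: StoredDiscountCode | undefined,
  customerId: string,
  now: Date,
): CodeError | "OK" {
  if (!dc) return "CODE_NOT_FOUND";
  if (dc.expiresAt !== null && dc.expiresAt < now) return "CODE_EXPIRED";
  // Single-use per customer, not globally single-use.
  if ((dc.usageByCustomer.get(customerId) ?? 0) > 0) return "CODE_ALREADY_USED";
  return "OK";
}
```

If the generated spec cannot be reduced to something this mechanical, it is still ambiguous; go back to Step 2.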

Step 4: Convert the acceptance criteria into agent tasks

Take the acceptance criteria section from the generated spec and convert each criterion into a discrete agent task. The pattern is: one acceptance criterion = one agent task with explicit inputs, expected outputs, and affected files.

You are a senior engineer breaking down an engineering spec into discrete agent implementation tasks.

I will give you the acceptance criteria section of a spec. For each criterion, produce a task card with:
- Task title
- Files to create or modify (based on a standard Node.js/Express/TypeScript e-commerce backend with a PostgreSQL database via Prisma)
- Precise implementation instruction (what behavior to add, not how to code it)
- Verifiable output (how to confirm the task is done correctly)

Acceptance criteria:
[paste the acceptance criteria section from Step 3's output]

Expected output: 6–10 task cards, each small enough for an agent to complete in a single session with a verifiable result.
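A task card is just structured text, but giving cards a fixed shape keeps them uniform across sessions. This sketch mirrors the four fields the prompt asks for; every value in the example card, including the file paths, is hypothetical.

```typescript
// One task card per acceptance criterion. The shape mirrors the prompt's
// four components; all field values below are invented for illustration.
type TaskCard = {
  title: string;
  files: string[];       // files to create or modify
  instruction: string;   // what behavior to add, not how to code it
  verify: string;        // how to confirm the task is done correctly
};

const exampleCard: TaskCard = {
  title: "Return CODE_EXPIRED for expired discount codes",
  files: ["src/checkout/discounts.ts", "src/checkout/discounts.test.ts"],
  instruction:
    "Applying a code past its expiration date returns the CODE_EXPIRED error, distinct from the generic invalid-code error.",
  verify:
    "New test passes: an expired code yields CODE_EXPIRED; an unknown code still yields the generic error.",
};
```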

Step 5: Run a spec conflict check before implementation begins

Before handing any task to an agent, validate the spec against your existing system context for contradictions.

You are a senior backend engineer reviewing a new feature spec for conflicts with an existing system.

Existing system constraints:
- All prices are stored and computed as integers (cents). There are no float values in the database or API responses.
- Tax calculation is performed by a third-party service (TaxJar) and called as the last step before order finalization.
- Guest sessions are stored in Redis with a 2-hour TTL.
- The orders table has a non-nullable `user_id` column with a foreign key to the users table.

New feature spec summary:
- Discount code applied to cart subtotal before tax
- Guest users can apply codes
- Discount value can be fixed (e.g., $10 off) or percentage (e.g., 15% off)
- Discounted total cannot go below $0.00

Identify any conflicts between the spec and the existing system constraints. For each conflict, suggest a resolution.

Expected output: At minimum, the agent should flag that "orders table has a non-nullable user_id" conflicts with "guest users can apply codes" — guest orders need either a nullable user_id or a guest user record strategy. This is exactly the kind of conflict that slips through spec review and surfaces as a runtime error.
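The flagged conflict admits more than one resolution: a nullable user_id, synthetic guest accounts, or a separate ownership field. One hedged sketch, making ownership explicit at the type level rather than overloading user_id (all names here are illustrative, not the spec's mandated fix):

```typescript
// Hypothetical resolution: represent order ownership explicitly instead of
// widening user_id to nullable and hoping every join remembers to handle it.
type OrderOwner =
  | { kind: "account"; userId: string }
  | { kind: "guest"; guestSessionId: string }; // the Redis session id

// With explicit ownership, "guest orders are absent from order history"
// becomes a visible, deliberate rule instead of an accidental join artifact.
function appearsInOrderHistory(owner: OrderOwner): boolean {
  return owner.kind === "account";
}
```

Whichever resolution you choose, it belongs in the spec's assumptions-and-dependencies section before any agent task is created.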

Step 6: Finalize and store the spec before running agents

Store the finalized spec in your repository as a versioned artifact — a Markdown file alongside the relevant feature branch, or in a /specs directory. This gives you a reference point for agent tasks, a document for stakeholder re-review, and a historical record if requirements change.

The spec file is not documentation written after the fact. It is the source of truth written before any code exists. Treat it with the same rigor as a migration file or an API contract.

Learning tip: If a stakeholder asks you to change a requirement after the spec is written, update the spec first, re-run the conflict check, and then update the agent tasks. Never update agent tasks directly from a verbal requirement change — the spec is the authoritative record of what was agreed, and it should reflect every change before any code changes follow it.


Key Takeaways

  • The translation chain is non-negotiable. Business ask → user story → engineering spec → agent task is not bureaucratic overhead — it is the structure that makes agent output verifiable and reversible. Collapsing these steps produces implementations that are locally plausible but globally wrong.
  • Ambiguity, contradiction, and assumed context are the three failure modes in requirements. AI interrogation prompts are your first line of defense — run them before writing the spec, not after the implementation surfaces the gaps.
  • A good spec serves two audiences. Stakeholders read it for intent alignment; agents need it for precise behavioral specification. Structure your spec so it can do both jobs without switching between two documents.
  • Acceptance criteria are the bridge between spec and agent task. Each criterion should be a testable input/output statement. If you cannot write a test for it, you cannot verify an agent implemented it correctly.
  • Run a conflict check between every new spec and your existing system constraints. Requirements that look internally consistent can still contradict your data model, external service contracts, or platform limitations. Catch conflicts in the spec; do not discover them in production.