Overview
This topic is a practical synthesis of everything covered in Module 4. Rather than introducing new concepts, it walks through the complete requirements engineering workflow — from feature brief to stakeholder-ready requirements package — using a single worked example. Every step in the workflow uses the skills, techniques, and prompts developed in the preceding topics, applied in sequence to a realistic product scenario.
The worked example is a realistic B2B SaaS feature: a Smart Invoice Matching Dashboard for a finance operations platform. This feature gives finance managers a real-time view of their invoice processing pipeline — auto-matched invoices, exceptions requiring review, and the key metrics that help them manage their reconciliation workload. It is complex enough to exercise all aspects of the requirements workflow but grounded enough to be immediately applicable to practitioners working in similar enterprise product domains.
The goal of this topic is not just to produce a requirements package for this specific feature. It is to demonstrate a repeatable workflow that you can apply to any feature in any product domain. By the end of this topic, you will have a concrete, end-to-end model for AI-assisted requirements engineering that you can adapt and deploy immediately in your own work.
Each section builds on the previous one. The feature brief from Section 1 feeds into the PRD in Section 2. The PRD feeds into the quality audits in Section 3. And the quality-audited requirements package is the input for the stakeholder-ready deliverable in Section 4. This is not a modular set of disconnected exercises — it is a single continuous workflow.
Start from a Feature Brief and Generate Epics, User Stories, and Acceptance Criteria
The starting point for almost all feature requirements work is a feature brief — a short document that summarizes the business opportunity, the user need, the proposed solution direction, and the success criteria. Feature briefs are typically written before discovery is complete enough to support a full PRD. They are the output of a scoping conversation, a design sprint, or a discovery synthesis session.
Here is the feature brief for our worked example:
Feature Brief: Smart Invoice Matching Dashboard
Product: FinOps Pro — A B2B invoicing and AP automation platform for mid-market finance teams.
Problem: Finance managers currently have no consolidated view of their invoice processing pipeline. They must manually check the auto-match queue, the exception queue, and the processed queue separately to understand their workload. This creates cognitive overhead, delays in catching critical exceptions, and difficulty in reporting processing performance to their CFO.
User: Finance Manager (primary), Controller/CFO (secondary consumer of reporting data).
Proposed Solution: A real-time dashboard that gives finance managers a single-screen view of: (1) total invoices in the pipeline by status (Auto-Approved, Pending Review, Exception, Processed), (2) exceptions sorted by age and amount, (3) key metrics (auto-match rate, average processing time, exception resolution rate), (4) quick-action buttons to jump into exception review or approve auto-matched batches.
Success Criteria: Reduce time to identify critical exceptions by 50%. Achieve 80%+ adoption among active finance manager users within 60 days of launch. Enable finance managers to generate a processing performance report for their CFO in under 2 minutes.
Constraints: Desktop web only (v1). Must use existing real-time data pipeline. No new ERP queries; all data sourced from existing matching engine outputs.
Hands-On Steps
- Read the feature brief carefully. Identify the core user need, the proposed solution components, and the constraints.
- Run the epic generation prompt using the feature brief as input.
- Review the generated epics. Confirm each epic represents a coherent body of work with standalone value.
- For each approved epic, run the story decomposition prompt.
- Review the stories. Identify any that are too large (need splitting) or too small (consider merging).
- For each story, run the acceptance criteria generation prompt — covering happy path, error states, edge cases, and boundary conditions.
- Organize the full story hierarchy in a structured document.
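The final hands-on step, organizing the full story hierarchy in a structured document, is easier when the hierarchy is kept as structured data rather than free text. A minimal Python sketch of one way to do this (the class names, IDs, and wording are illustrative assumptions, not part of the workflow's requirements):

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    ac_id: str      # e.g. "AC-1"
    category: str   # e.g. "Happy Path"
    text: str       # the Given/When/Then statement

@dataclass
class Story:
    story_id: str
    statement: str  # "As a [persona], I want [capability], so that [outcome]"
    criteria: list[Criterion] = field(default_factory=list)

@dataclass
class Epic:
    epic_id: str
    title: str
    value_statement: str
    stories: list[Story] = field(default_factory=list)

# Illustrative entry based on the worked example:
hierarchy = [
    Epic(
        epic_id="EPIC-1",
        title="Pipeline Status Overview",
        value_statement="At-a-glance view of the entire invoice pipeline.",
        stories=[
            Story(
                story_id="STORY-1.1",
                statement="As a finance manager assessing my backlog, I want "
                          "status counts on a single screen, so that I "
                          "understand my workload.",
            ),
        ],
    ),
]
```

From here the hierarchy can be serialized to JSON or rendered into whatever document format the team already uses, and later audits can walk the same structure programmatically.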
Prompt Examples
Prompt:
You are a senior business analyst generating requirements from a feature brief.
Here is the feature brief for a new product feature:
[Paste the Smart Invoice Matching Dashboard feature brief above]
Step 1: Generate 3-5 epics for this feature. Each epic should:
- Represent a distinct, independently valuable area of work
- Be named with a clear, noun-phrase title
- Have a one-sentence value statement describing what it delivers to the user
Step 2: For each epic, generate 3-5 user stories. Each story should:
- Be in "As a [behavioral persona], I want [specific capability], so that [measurable outcome]" format
- Be small enough to complete in one sprint
- Deliver standalone value
Do not write acceptance criteria yet. Focus on getting the story hierarchy right.
Expected output: An epic and story hierarchy such as:
EPIC 1: Pipeline Status Overview — Gives finance managers an at-a-glance view of their entire invoice processing pipeline without navigating multiple queues.
- STORY 1.1: As a finance manager who starts their day by assessing invoice backlog, I want to see the count of invoices in each status category (Auto-Approved Pending Payment, Exception Requiring Review, In-Progress, Processed Today) on a single screen, so that I understand my workload before opening any individual queue.
- STORY 1.2: As a finance manager monitoring my pipeline, I want the status counts to update automatically without requiring a page refresh, so that I can leave the dashboard open during the day and trust that what I see is current.
EPIC 2: Exception Prioritization — Enables finance managers to identify and triage the most critical exceptions first, reducing time lost on low-priority exceptions while high-value ones age.
And so on, for 3-5 epics with 3-5 stories each.
Prompt:
You are a senior business analyst generating acceptance criteria.
Here is a user story that needs acceptance criteria:
Story: "As a finance manager who starts their day by assessing invoice backlog, I want to see the count of invoices in each status category (Auto-Approved Pending Payment, Exception Requiring Review, In-Progress, Processed Today) on a single dashboard screen, so that I understand my full workload before opening any individual queue."
Generate acceptance criteria that cover:
1. Happy path — the dashboard loads with accurate counts
2. Data freshness — how current must the counts be?
3. Zero state — what happens when there are no invoices in a category?
4. Error state — what happens when the data pipeline is unavailable?
5. Permission boundary — what does a read-only user see vs. an approver?
6. Performance — how fast must the dashboard load?
Use Given/When/Then format. Make each criterion independently testable.
Expected output: Six to eight precisely written acceptance criteria — for example:
AC-1 (Happy Path): Given a finance manager navigates to the dashboard, when the page loads, then four status count tiles are displayed: "Auto-Approved Pending Payment," "Exception Requiring Review," "In-Progress," and "Processed Today," each showing an accurate count based on matching engine data no more than 5 minutes old.
AC-4 (Error State): Given the data pipeline has been unavailable for more than 5 minutes, when a finance manager accesses the dashboard, then each count tile displays a warning icon and the text "Data unavailable — last updated at [timestamp]." The dashboard remains navigable but all counts are greyed out.
AC-6 (Performance): Given a finance manager with up to 1,000 invoices in their pipeline, when the dashboard page loads, then all four status count tiles display their counts within 3 seconds at the 90th percentile, measured from navigation initiation to final tile render.
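Acceptance criteria written this precisely translate almost directly into automated checks. As one illustration, the 5-minute freshness rule shared by AC-1 and AC-4 can be expressed as a single testable function — a sketch only, with a hypothetical function name and return values that are not part of the requirements:

```python
from datetime import datetime, timedelta

STALENESS_LIMIT = timedelta(minutes=5)  # the 5-minute window from AC-1 / AC-4

def tile_state(last_updated: datetime, now: datetime) -> str:
    """Display state for a count tile: 'fresh' within the 5-minute window,
    otherwise 'stale' (warning icon, greyed-out count per AC-4)."""
    return "fresh" if now - last_updated <= STALENESS_LIMIT else "stale"

now = datetime(2025, 1, 15, 9, 0)
assert tile_state(datetime(2025, 1, 15, 8, 57), now) == "fresh"  # 3 min old
assert tile_state(datetime(2025, 1, 15, 8, 50), now) == "stale"  # 10 min old
```

The point is not that the PM writes this code — QA will — but that an independently testable criterion is one that reduces to a check this small.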
Learning Tip: Generate the epic and story hierarchy before writing a single line of acceptance criteria. PMs who jump straight to acceptance criteria without establishing the story hierarchy produce stories that are either too large (because the scope of the epic got compressed into one story) or too small (because they fragmented the epic without thinking about what units of value make sense). The hierarchy step takes 30 minutes and saves hours downstream.
Produce a PRD and Technical Specification Using AI
With the story hierarchy complete, the next step is generating the PRD — the document that places the feature in its full business and product context. The PRD for the Smart Invoice Matching Dashboard draws directly on the feature brief and the story hierarchy. Much of the sections' content is already implicit in the brief and the stories; the PRD makes it explicit and organized.
The technical specification is generated after the PRD, using the product requirements as input. The technical spec surfaces the data requirements, integration dependencies, performance requirements, and security considerations that engineering needs before they can begin design and estimation.
This two-document sequence (PRD → technical spec) is the standard product-to-engineering handoff workflow. The PRD is the product view: why we are building this, what it must do, how we will measure success. The technical spec is the engineering entry point: what data it needs, what the APIs must do, what performance it must meet, what the edge cases are from a system perspective.
Hands-On Steps
- Assemble the PRD input brief: feature brief + epic/story hierarchy + any additional constraints or decisions made during the story generation process.
- Run the PRD generation prompt with the full input brief.
- Review the PRD draft section by section. Apply the completeness, clarity, consistency, and measurability review framework.
- Run the gap check prompt to identify any sections that are incomplete or inconsistent.
- Generate the technical specification from the product requirements using the technical context generation prompt.
- Review the technical spec with your engineering lead before including it in the final document.
Prompt Examples
Prompt:
You are a senior product manager writing a complete Product Requirements Document.
Write a PRD using the following structure:
1. Background (why are we building this now?)
2. Problem Statement (what user problem are we solving, with evidence)
3. Goals (3-5 measurable goals with targets and timeframes)
4. Non-Goals (explicit exclusions)
5. Solution Overview (functional description, 3-4 paragraphs)
6. Key User Stories (reference the top 5-6 stories from the story hierarchy)
7. Success Metrics (KPIs, measurement methods, and targets)
8. Dependencies (data pipeline, matching engine, ERP connectors, design system)
9. Open Questions (unresolved decisions)
Here is the complete context:
Feature Brief: [Paste the Smart Invoice Matching Dashboard feature brief]
Epic and Story Hierarchy: [Paste the epic and story hierarchy generated in Section 1]
Additional decisions made during story generation:
- Data freshness SLA is 5 minutes (decided during AC writing)
- Read-only users see counts but cannot take actions (decided during AC writing)
- Dashboard will not support mobile in v1 (per brief constraints)
- The "Processed Today" count resets at midnight in the user's local timezone (decided during AC writing)
Expected output: A complete, well-structured PRD that is specific to the Smart Invoice Matching Dashboard — not a generic template document. The Problem Statement will cite the specific pains from the brief. The Goals will reference the 50% reduction in exception identification time and the 80% adoption target. The Solution Overview will describe the four status tiles, exception prioritization, and quick-action buttons. The Non-Goals section will explicitly exclude mobile, new ERP queries, and the detailed exception review screen (which is a different feature).
Prompt:
You are a senior business analyst writing the technical specification section of a PRD.
Here are the product requirements for the Smart Invoice Matching Dashboard:
[Paste the PRD generated above]
Generate a Technical Specification section with the following subsections:
1. DATA REQUIREMENTS
- What data must the dashboard display?
- Where does each data type currently live (source system)?
- What are the data freshness requirements (from ACs)?
- Are there any data aggregations needed (e.g., "count of invoices by status")?
2. API / INTEGRATION REQUIREMENTS
- What API calls does the dashboard need to make?
- What existing services or data feeds can it use?
- Are there new API endpoints needed, or can existing ones be extended?
3. PERFORMANCE REQUIREMENTS
- Load time requirements (from ACs)
- Real-time update requirements (from ACs)
- Maximum data volume the feature must handle
4. SECURITY AND ACCESS REQUIREMENTS
- What data is sensitive? What access controls apply?
- How are read-only vs. approver permissions enforced?
5. ERROR HANDLING REQUIREMENTS
- What must the system display when the data pipeline is unavailable?
- What are the graceful degradation requirements?
For each requirement, classify as: Hard Product Requirement | Engineering Decision | Open Question
Expected output: A detailed technical specification with 5-8 requirements per subsection, each clearly classified. For example, under Data Requirements: "The dashboard must display invoice counts aggregated by status category. [Source: Matching engine output database] [Hard Product Requirement] The aggregation must be pre-computed and cached — the dashboard should not run live queries on the full invoice database on each page load. [Engineering Decision — recommended approach, not mandated] [Open Question: What is the maximum staleness acceptable for cached counts? ACs specify 5 minutes, but is this a hard limit or a target?]"
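The caching recommendation in that example output can be sketched to show what the open question actually hinges on. This is an illustrative approach, not a mandated design; `fetch_counts_from_matching_engine` is a hypothetical stand-in for the real aggregation query against the matching engine output database:

```python
import time

# 5-minute freshness window from the ACs; whether it is a hard limit or
# a target is exactly the open question flagged above.
CACHE_TTL_SECONDS = 300

_cache = {"counts": None, "computed_at": 0.0}

def fetch_counts_from_matching_engine():
    # Hypothetical stand-in for the pre-computed aggregation.
    return {"auto_approved": 412, "exception": 37,
            "in_progress": 88, "processed_today": 251}

def get_status_counts(now=None):
    """Serve cached counts; recompute only when the cache exceeds the TTL,
    so page loads never run live queries against the full invoice database."""
    now = time.time() if now is None else now
    if _cache["counts"] is None or now - _cache["computed_at"] > CACHE_TTL_SECONDS:
        _cache["counts"] = fetch_counts_from_matching_engine()
        _cache["computed_at"] = now
    return _cache["counts"]
```

If 5 minutes is a hard limit, the TTL must be strictly below it to leave headroom for compute and render time; if it is a target, the TTL can equal it. That is why the classification matters.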
Prompt:
You are a senior product manager running a PRD pre-distribution quality review.
Review the following PRD and Technical Specification against the four-dimension review framework:
COMPLETENESS: Are all required sections present and substantive? Score each: Complete / Partial / Missing
CLARITY: List any section that would be unclear to a reader without product meeting context.
CONSISTENCY: Do the Goals, Solution, and Success Metrics logically align? Flag any misalignment.
MEASURABILITY: Do all success metrics have a specific target, a timeframe, and a measurement method?
[Paste PRD and Technical Specification here]
Output: Section | Rating | Issues | Recommended Improvement
Expected output: A structured review matrix with ratings for each PRD section. Expected findings: the Open Questions section is likely incomplete (the data caching decision is an open question that was not captured), the Success Metrics section may lack a measurement method (how will "50% reduction in exception identification time" be measured — is there a baseline?), and the Technical Specification's error handling section may be incomplete (what happens if only some status counts fail to load, not all?).
Learning Tip: The PRD and technical specification are complementary documents, not a single merged document. Keep them separate — the PRD is shared broadly with all stakeholders, including non-technical ones. The technical specification is primarily for engineering and QA. Keeping them separate means each document can be written at the right level of detail for its intended audience without one audience's needs compromising the other's.
Run AI Quality Audits on the Generated Requirements
With a complete requirements package (story hierarchy + PRD + technical spec), the next step is systematic quality auditing. Three audits should be run before the requirements are considered sprint-ready: an INVEST audit on the user stories, a gap analysis on the acceptance criteria, and a traceability check to confirm every story links to a PRD goal and every acceptance criterion links to a user story.
This section demonstrates running all three audits on the requirements package generated in Sections 1 and 2, interpreting the results, and making targeted improvements. The audits are not about achieving perfection; they are about identifying and fixing the issues that would cause problems in development before they reach the sprint.
Hands-On Steps
- Run the INVEST audit on the five most complex stories (the ones most likely to have quality issues).
- For any story with a Fail or multiple Partial ratings, run the targeted improvement prompt.
- Run the language clarity audit on the full acceptance criteria set.
- For each ambiguous criterion, apply the specificity test and rewrite.
- Run the edge case coverage audit for the two highest-complexity stories.
- Fill any identified gaps with new acceptance criteria.
- Run the traceability check: confirm every story has a PRD goal link and every acceptance criterion has a story link.
- Document the audit results and any changes made as a "Quality Audit Log" in the requirements document.
Prompt Examples
Prompt:
You are a senior business analyst running an INVEST audit on user stories for sprint planning.
Audit the following 5 stories against all INVEST criteria. For each story:
- Score each criterion: Pass / Partial / Fail
- Explain any Partial or Fail rating in one sentence
- Give an overall rating: Ready / Needs Work / Not Ready
After the individual audits, identify the top 3 quality issues across all five stories and recommend the most impactful fix for each.
Stories:
STORY 1.1: [As a finance manager who starts their day by assessing invoice backlog, I want to see the count of invoices in each status category on a single screen, so that I understand my workload before opening any individual queue.]
ACs: [Paste the 6 acceptance criteria from Section 1]
STORY 1.2: [As a finance manager monitoring my pipeline, I want the status counts to update automatically without requiring a page refresh, so that I can leave the dashboard open during the day and trust that what I see is current.]
ACs: [Paste ACs for Story 1.2]
STORY 2.1: [As a finance manager who needs to prioritize my exception review, I want to see my exception queue sorted by age (oldest first) with the invoice amount visible, so that I can address exceptions that are at risk of breaching payment terms first.]
ACs: [Paste ACs]
STORY 2.2: [As a finance manager who is responsible for high-value invoices, I want to filter the exception queue to show only invoices above a threshold I set, so that I can focus my limited review time on exceptions that have the most financial impact.]
ACs: [Paste ACs]
STORY 3.1: [As a finance manager who needs to report processing performance to my CFO, I want to see a processing metrics panel showing auto-match rate, average processing time, and exception resolution rate for the current month, so that I can generate a CFO-ready performance summary without manually compiling data from multiple reports.]
ACs: [Paste ACs]
Expected output: A structured audit table with Pass/Partial/Fail for each INVEST criterion for each story, with explanations for any failures, and an overall readiness rating. Common findings for this feature: Story 1.2 (real-time update) may fail "Estimable" because the technical approach (WebSocket vs. polling vs. server-sent events) is not specified; Story 2.2 (filter threshold) may fail "Small" because configurable filtering could be a large engineering effort if the filtering infrastructure is not already built; Story 3.1 (metrics panel) may fail "Independent" if it depends on a data aggregation service not yet built.
Prompt:
You are a senior QA engineer performing a gap analysis on acceptance criteria.
Review the following acceptance criteria set for the Smart Invoice Matching Dashboard. For each story, identify:
1. Missing coverage categories (from this checklist):
[ ] Happy path
[ ] Zero/empty state
[ ] Error/failure state
[ ] Permission boundary (read-only vs. approver)
[ ] Performance (load time, response time)
[ ] Data freshness / staleness behavior
[ ] Concurrent access (if relevant)
[ ] Boundary conditions (max items, min items, edge values)
2. Ambiguous or untestable criteria
3. Criteria that are missing specificity (thresholds, timeframes, exact conditions)
For each gap, write a new acceptance criterion to fill it.
Stories and ACs:
[Paste the full story hierarchy with all acceptance criteria]
Expected output: A comprehensive coverage analysis with specific gaps identified and new acceptance criteria written for each gap. Expected findings: The empty state for the exception queue (when there are zero exceptions) may not be covered. The concurrent access scenario (two finance managers viewing the dashboard simultaneously with one taking an action that affects the counts the other sees) may not be covered. The boundary condition for the status count tiles (what happens when the count exceeds a display limit, e.g., 9,999+) may not be covered.
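Gaps like the display-limit boundary condition are cheap to pin down once identified. Here is a sketch of the capped tile formatting implied by the "9,999+" example — the cap value, function name, and negative-count behavior are assumptions for illustration, to be confirmed in the new acceptance criterion:

```python
DISPLAY_CAP = 9_999  # boundary condition from the gap analysis example

def format_tile_count(count: int) -> str:
    """Format a status tile count, capping the display at '9,999+'."""
    if count < 0:
        raise ValueError("count cannot be negative")
    return f"{DISPLAY_CAP:,}+" if count > DISPLAY_CAP else f"{count:,}"

assert format_tile_count(0) == "0"
assert format_tile_count(9_999) == "9,999"
assert format_tile_count(10_000) == "9,999+"
```

Writing the behavior down at this level of precision — including the exact boundary value where the format changes — is what turns a coverage gap into a testable criterion.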
Prompt:
You are a senior business analyst performing a traceability check before sprint planning.
Here is the requirements package for the Smart Invoice Matching Dashboard:
- PRD Goals: [Paste the Goals section of the PRD]
- Epic and Story Hierarchy: [Paste the full hierarchy]
- Technical Specification: [Paste the tech spec]
Perform the following traceability checks:
1. STORY-TO-GOAL TRACEABILITY: For each user story, identify which PRD goal it directly supports. Flag any story with no clear link to a PRD goal.
2. AC-TO-STORY TRACEABILITY: Confirm that all acceptance criteria belong to their stated parent story. Flag any AC that is misclassified.
3. TECHNICAL SPEC TRACEABILITY: For each item in the technical specification, identify which functional requirement it supports. Flag any tech spec item with no functional requirement link.
4. GOAL COVERAGE: For each PRD goal, identify which stories contribute to achieving it. Flag any PRD goal that has no story coverage.
Output: Traceability Matrix (Story ID | PRD Goal | Coverage) and a list of all gaps found.
Expected output: A complete traceability matrix for the Smart Invoice Matching Dashboard, with any orphaned stories (no goal link), orphaned acceptance criteria (no story link), and uncovered goals flagged for resolution. This matrix confirms that the requirements package is internally consistent and that every piece of work can be justified against a business objective.
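Three of the four checks (story-to-goal, AC-to-story, and goal coverage) are mechanical enough to sketch in a few lines, which also makes the definition of an "orphan" precise. A minimal sketch with hypothetical IDs and a deliberately simplified link model:

```python
def traceability_gaps(goal_ids, stories, acs):
    """Flag orphaned stories (no valid goal link), orphaned ACs (no valid
    story link), and PRD goals with no story coverage.

    stories: {story_id: goal_id or None}; acs: {ac_id: story_id or None}."""
    orphan_stories = [s for s, g in stories.items() if g not in goal_ids]
    orphan_acs = [a for a, s in acs.items() if s not in stories]
    covered = {g for g in stories.values() if g in goal_ids}
    uncovered_goals = [g for g in goal_ids if g not in covered]
    return orphan_stories, orphan_acs, uncovered_goals

# Hypothetical package: G3 has no stories; STORY-3.1 and AC-9 are orphaned.
goals = ["G1", "G2", "G3"]
stories = {"STORY-1.1": "G1", "STORY-2.1": "G2", "STORY-3.1": None}
acs = {"AC-1": "STORY-1.1", "AC-9": "STORY-9.9"}
assert traceability_gaps(goals, stories, acs) == (["STORY-3.1"], ["AC-9"], ["G3"])
```

Whether you run a check like this yourself or have the AI produce the matrix, the output is the same: every flagged item either gets a link or gets cut.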
Learning Tip: Run the traceability check last — after the INVEST audit and gap analysis are complete and resolved. The traceability check is the most valuable when the requirements are already high quality. Running it on unaudited requirements produces a traceability matrix that will be out of date as soon as the quality issues are fixed. Sequence matters: quality first, traceability second.
Deliver a Stakeholder-Ready Requirements Package
A requirements package becomes "stakeholder-ready" when it has been quality-audited, is internally consistent, and is presented in a format appropriate for its intended audiences. The requirements engineering work — all the stories, acceptance criteria, PRD sections, and technical specifications — is the substance. The stakeholder-ready package is the substance packaged for communication.
Different stakeholders need different views of the same requirements package. The VP of Product or business sponsor needs the PRD's high-level sections: problem, goals, solution overview, and success metrics. Engineering needs the full story hierarchy with acceptance criteria and the technical specification. QA needs the acceptance criteria organized by test category. The design team needs the user stories with behavioral persona context and the UI-relevant acceptance criteria.
A well-structured requirements package does not force each stakeholder to read the full document and extract their relevant section. It presents each stakeholder with a view that starts with what they care most about and provides a clear path to deeper detail if needed.
The review meeting is the primary stakeholder engagement point for a new requirements package. A well-prepared review meeting has: a clear agenda (what decisions need to be made, not just what will be presented), pre-read materials distributed 48 hours in advance, a structured walk-through of the requirements that respects everyone's time, and a clear record of decisions and open questions at the end.
Hands-On Steps
- Assemble the final requirements package: quality-audited stories with ACs, finalized PRD, and reviewed technical specification.
- Run the package completeness checklist prompt to confirm nothing is missing.
- Generate stakeholder-specific summary documents for each primary audience.
- Write the review meeting agenda using AI: goals, pre-read materials, walk-through structure, decision points, and follow-up actions.
- Distribute the package with a cover note summarizing what is being shared and what input is needed from each stakeholder.
- After the review meeting, use AI to generate the meeting summary, decisions log, and action items.
- Incorporate meeting feedback into the requirements package and update the changelog.
Prompt Examples
Prompt:
You are a senior product manager preparing a stakeholder-ready requirements package.
Here is the complete requirements package for the Smart Invoice Matching Dashboard:
[Paste the full requirements package]
Generate the following stakeholder-specific summary documents:
1. EXECUTIVE SUMMARY (for VP Product / CFO): 1 page maximum.
- What is being built (2-3 sentences, business language)
- Why now (the business case and strategic rationale)
- What success looks like (the 2-3 most important metrics)
- What is explicitly out of scope (non-goals summary)
- What decisions are needed from this audience
2. ENGINEERING BRIEF (for engineering team lead): Structured, 1-2 pages.
- Feature scope (epics and estimated story count)
- Key technical requirements and constraints
- Open technical questions that need engineering input
- Dependencies and what must be resolved before development starts
- Proposed sprint sequence (which epics first)
3. QA TEST PLANNING BRIEF (for QA lead): Structured, 1 page.
- Feature scope for testing
- Key acceptance criteria categories (happy path, error states, edge cases)
- Test data requirements
- High-risk areas (what is most likely to break or be misunderstood)
- QA open questions
Format each document clearly, using headers and bullet points. Each should stand alone — the reader should not need to read the full requirements package to understand it.
Expected output: Three distinct, audience-optimized summary documents. The executive summary focuses entirely on business value, timing, and decision needs — no technical detail. The engineering brief is precise and technically grounded, surfacing the dependencies and open questions engineering cares about. The QA brief organizes the acceptance criteria from a test coverage perspective, flagging the high-risk areas that need the most careful testing.
Prompt:
You are a senior product manager preparing for a requirements review meeting.
Write a review meeting agenda for the Smart Invoice Matching Dashboard requirements review. The meeting is 60 minutes. Attendees: VP Product, Engineering Lead, QA Lead, UX Lead, and the Business Analyst.
The meeting goals are:
1. Confirm that the PRD goals and success metrics are accepted by the VP Product
2. Get engineering and QA to confirm there are no blockers to starting the work
3. Resolve the 3 open questions identified in the PRD
4. Agree on the sprint sequence for the epics
Write the agenda with:
- Time allocated to each section
- The specific question or decision to be made in each section
- Pre-read materials to be distributed before the meeting
- Facilitation notes for the PM (what to watch for, how to handle disagreements)
Expected output: A detailed meeting agenda structured for productive decision-making — not a passive presentation. For example:
- 0-5 min: Context set (PM) — "We're here to make four decisions. I'll present, and I need your input at specific decision points."
- 5-15 min: Problem and goals walk-through — Decision: VP Product confirms goals and success metrics are aligned with Q3 OKRs.
- 15-25 min: Solution overview and story hierarchy — Decision: Engineering confirms no architectural blockers; flags any high-effort stories.
- 25-35 min: Open questions resolution — Three specific open questions, 3-4 minutes each.
- 35-45 min: Sprint sequence discussion — Decision: Agree on which epic ships first and why.
- 45-55 min: QA and design readiness — Any additional inputs needed before design or QA starts?
- 55-60 min: Actions and owners recap.
Prompt:
You are a senior product manager running an end-of-meeting wrap-up.
Write a meeting summary for the Smart Invoice Matching Dashboard requirements review meeting. The meeting produced the following outcomes:
Decisions made:
1. VP Product confirmed PRD goals and success metrics are approved
2. Engineering flagged that Story 3.1 (Metrics Panel) has a dependency on a new aggregation service — this will add 2 sprints to delivery. Decision: metrics panel moves to v1.1; v1 scope is Epics 1 and 2 only.
3. Open Question 1 resolved: data freshness SLA confirmed as 5 minutes hard limit.
4. Open Question 2 resolved: mobile support deferred to v2, confirmed as non-goal.
5. Open Question 3 unresolved: caching architecture decision deferred to engineering design review.
Sprint sequence agreed: Epic 2 (Exception Prioritization) ships first in Sprint 21, Epic 1 (Pipeline Status Overview) ships in Sprint 22.
Action items:
- PM: Update PRD to reflect scope reduction (metrics panel to v1.1) by end of week
- Engineering Lead: Schedule technical design review for caching architecture in Sprint 20
- QA Lead: Begin test planning for Epic 2 stories based on distributed acceptance criteria
- UX Lead: Begin wireframes for Exception Prioritization view this week
Write a meeting summary that:
1. Summarizes the context and purpose of the meeting in one paragraph
2. Lists all decisions made with a brief rationale
3. Lists all action items with owner and due date
4. Notes the one unresolved open question and its next step
5. Is appropriate for distribution to all attendees and interested stakeholders
Expected output: A professional, complete meeting summary ready to distribute via email or Slack — serving as the official record of the meeting decisions and action items. Importantly, this summary doubles as a changelog entry for the requirements package: the scope reduction (metrics panel deferred) and the sprint sequence decision are now recorded as decisions made in a specific meeting on a specific date.
Package Completeness Checklist
Before distributing the requirements package for stakeholder review, confirm each item:
REQUIREMENTS PACKAGE COMPLETENESS CHECKLIST
STORY HIERARCHY
[ ] All epics named and described
[ ] All stories in "As a [behavioral persona], I want [specific capability], so that [outcome]" format
[ ] All stories sized for one sprint
[ ] All acceptance criteria written in Given/When/Then format
[ ] Happy path, error states, zero states, and edge cases covered
QUALITY AUDITS
[ ] INVEST audit run on all stories
[ ] All FAIL ratings resolved
[ ] Language clarity audit run — no subjective qualifiers or undefined references
[ ] Edge case coverage audit run for stories rated Complexity 3+
[ ] Traceability check run — all stories linked to PRD goals
PRODUCT REQUIREMENTS DOCUMENT
[ ] Background section: explains why now
[ ] Problem Statement: includes specific user evidence
[ ] Goals: 3-5 goals, each measurable with a target and timeframe
[ ] Non-Goals: explicit exclusions listed
[ ] Solution Overview: describes functional behavior, not implementation
[ ] Key User Stories: 5-6 representative stories referenced
[ ] Success Metrics: target, timeframe, and measurement method for each metric
[ ] Dependencies: all external dependencies listed
[ ] Open Questions: all unresolved decisions captured
TECHNICAL SPECIFICATION
[ ] Data Requirements: source systems identified for all data
[ ] Integration Requirements: all required APIs and services listed
[ ] Performance Requirements: load time and throughput requirements specified
[ ] Security Requirements: access controls and data sensitivity addressed
[ ] Error Handling Requirements: graceful degradation behavior specified
STAKEHOLDER DELIVERABLES
[ ] Executive Summary prepared (VP Product / business sponsor)
[ ] Engineering Brief prepared (engineering team lead)
[ ] QA Test Planning Brief prepared (QA lead)
[ ] Review meeting agenda prepared
[ ] Pre-read materials assembled and ready for distribution
Learning Tip: Use the package completeness checklist as your personal quality gate before every requirements review meeting. Run through it in 5-10 minutes before distributing the package. The checklist catches the items that are most likely to generate review meeting questions ("where are the success metrics?" "what about mobile?") — and handling them before the meeting keeps the meeting focused on decisions rather than gaps.
Key Takeaways
- A complete requirements engineering workflow has a clear sequence: feature brief → epic and story hierarchy → acceptance criteria → PRD → technical specification → quality audits → stakeholder package. Each stage builds on the previous one.
- The epic and story hierarchy is the foundation. Invest time here — the quality of everything that follows depends on having the right levels of decomposition and the right scope per story.
- The PRD and technical specification serve different audiences and should be written at different levels of detail. The PRD is for business and product audiences. The technical specification is the engineering entry point. Keep them structurally linked but readable independently.
- Three quality audits are the minimum before a requirements package is sprint-ready: INVEST audit, gap analysis on acceptance criteria, and traceability check. The audits are not overhead — they are insurance against the far more expensive cost of discovering gaps during development.
- Stakeholder-ready delivery requires audience-specific summary documents. The VP Product reads the executive summary; the engineering lead reads the engineering brief; the QA lead reads the test planning brief. One document does not serve all audiences.
- The review meeting produces decisions. Prepare an agenda structured around specific decision points, not a passive walk-through of the document. Record decisions and action items immediately after the meeting and distribute them the same day.
- The requirements package is a living document. When decisions change scope, update the package and log the change in the changelog. A requirements package that is not maintained becomes a liability; one that is maintained becomes the team's most reliable reference.