Overview
This topic is a complete, end-to-end practical exercise. You will work through the full prioritization-to-roadmap workflow that combines all four preceding topics into a single, integrated sequence — from a raw backlog to a board-ready roadmap presentation. Unlike theoretical walkthroughs, this exercise uses a realistic worked example: a B2B SaaS project management product called "TaskFlow" at a growth-stage company. The scenario is intentionally detailed and messy, reflecting the kind of imperfect, context-heavy inputs that real product teams work with every day.
The scenario: You are the lead PM for TaskFlow. The company has just closed a Series B and is under pressure to grow enterprise revenue by 3x in the next 12 months. You have a backlog of 22 items, a set of Q3–Q4 OKRs, discovery insights from recent enterprise customer interviews, and a planning session with the leadership team in 72 hours. Your task is to go from this raw material to a prioritized roadmap with stakeholder-ready narratives and a prepared defense of your decisions.
This exercise is structured as four sequential stages, each building on the previous. You will run real AI prompts at each stage, review and refine the outputs, and make active decisions about what to accept, adjust, or override. By the end, you will have produced: a WSJF-scored, ranked backlog; a three-quarter outcome-based roadmap; executive and engineering roadmap narratives; and a mock roadmap review session with AI-generated objections and prepared responses.
Each stage includes both the prompt sequence and the decision points where human judgment must be applied. This is not a "follow the prompts and get an output" exercise — it is a "use the prompts, make the calls, and defend your decisions" exercise.
Stage 1: Score and Prioritize a Real Backlog Using AI-Assisted Frameworks
The TaskFlow backlog contains 22 items accumulated over the past quarter from a mix of sources: enterprise customer feedback, internal product team ideas, sales team escalations, engineering technical debt flags, and a recent discovery sprint focused on enterprise onboarding. Before the planning session, you need to triage this backlog, score it using WSJF, and produce a ranked priority list with documented rationale.
The worked example backlog represents realistic complexity: some items are clearly enterprise-critical, some are popular but strategically misaligned, some are well-described and some are vague, and a few contain implicit conflicts or duplicates. Your job is to use AI to process this efficiently while applying your own strategic judgment at each decision point.
STRATEGIC CONTEXT FOR THE SCORING EXERCISE:
TaskFlow company context: 500 customers (450 SMB, 50 enterprise), ARR $8M, Series B closed last quarter. Enterprise customers represent 40% of ARR but only 10% of customer count. Series B growth target: 3x ARR in 18 months, primarily through enterprise expansion. Key enterprise gap identified in discovery: poor multi-team collaboration features and weak admin/permissions controls are the top reasons enterprise trials fail to convert.
Hands-On Steps
- Begin with bulk triage: categorize all 22 items into Now / Next / Later / Won't Do / Needs More Info before scoring. This eliminates items that should not consume scoring bandwidth.
- Run the triage prompt with the strategic context provided. Review results and apply your judgment: the AI may not know about an implicit executive commitment to a specific customer.
- For the "Now" and "Next" items identified by triage, run WSJF scoring with the full strategic context. Include the Series B growth mandate and enterprise focus as key strategic filters.
- Review WSJF scores for items where strategic context should override the algorithmic score. Log your overrides.
- Produce a final ranked list with WSJF scores and a brief rationale for any overrides.
Prompt Examples
Prompt 1 — Bulk Triage:
You are a product manager at TaskFlow, a B2B SaaS project management product.
Strategic context: We have just closed a Series B. Our primary growth mandate for the next 18 months is 3x enterprise ARR growth. Discovery research identified that multi-team collaboration gaps and weak admin/permissions controls are the #1 reason enterprise trials fail to convert. SMB growth is healthy but not the growth lever for this funding round.
Current sprint goals: We are mid-Q3 with 4 weeks remaining. Current sprint is focused on shipping a pending enterprise onboarding checklist feature.
Triage categories:
- NOW: Directly supports Q3 enterprise growth goals AND has enough context to begin refinement
- NEXT: Valid for Q4 enterprise roadmap, well-described, not Q3-ready
- LATER: Valid product investment but not aligned with current enterprise growth mandate
- WON'T DO: Duplicate, out of scope, technically infeasible, or contradicts product direction
- NEEDS MORE INFO: Cannot triage without additional context
Backlog items:
1. Advanced role-based permissions for enterprise admins | Allow enterprise admins to create custom roles with granular permissions per project, module, and data type | Submitted by: Sales (requested by 8 enterprise accounts)
2. Dark mode | Add a dark mode theme toggle to the application | Submitted by: Community forum votes (top requested cosmetic feature)
3. Bulk user import via CSV | Allow admins to import team members in bulk from a CSV file | Submitted by: Customer success (frequent onboarding pain point for >200-seat accounts)
4. AI-generated task summaries | Use AI to generate weekly summaries of task progress per project | Submitted by: Product team (exploratory)
5. Calendar integration (Google Calendar sync) | Two-way sync between TaskFlow and Google Calendar for deadline visibility | Submitted by: 3 enterprise customers directly
6. Guest access / external collaborator mode | Allow non-licensed external users to view (not edit) specific projects | Submitted by: Enterprise CS team
7. Jira migration import tool | Allow teams migrating from Jira to import their backlog and history | Submitted by: Sales (multiple enterprise prospects were on Jira)
8. Fix pagination bug on the reports page | Reports with >500 rows do not paginate correctly | Submitted by: Engineering
9. Multi-workspace support | Allow enterprise customers to create isolated workspaces per department with separate billing | Submitted by: 2 enterprise accounts (both Fortune 500)
10. API rate limit increase | Current API rate limits are blocking enterprise customer automation workflows | Submitted by: Enterprise technical contacts
11. Improved search — full-text and filtered | Current search only matches exact titles; users need to search task descriptions, comments, and custom fields | Submitted by: Multiple user segments
12. Custom fields per project type | Allow PMs to define custom data fields per project template | Submitted by: Power users
13. Mobile app — offline mode | Allow users to view and edit tasks when offline | Submitted by: Community forum
14. Audit log for admin actions | Enterprise admins need a full audit log of user actions for compliance purposes | Submitted by: 4 enterprise accounts (2 with compliance requirements)
15. In-app notifications redesign | Current notification system is noisy and causes notification fatigue | Submitted by: User research (30% of users have turned off notifications entirely)
16. Project duplication / template cloning | Allow users to clone an existing project as a template | Submitted by: Power users
17. SSO / SAML support | Enterprise customers require SSO integration for their identity providers | Submitted by: Sales (blocking several enterprise deals)
18. Advanced reporting — custom dashboards | Allow users to build custom metric dashboards combining data from multiple projects | Submitted by: Enterprise customers (frequently mentioned in QBRs)
19. Slack integration — bi-directional | Post task updates to Slack and create tasks from Slack messages | Submitted by: Multiple customer segments
20. Automated onboarding workflow trigger | When a new enterprise user joins a workspace, automatically trigger an onboarding checklist | Submitted by: CS team (related to current sprint feature)
21. Permissions inheritance model | Allow sub-projects to inherit permissions from parent projects automatically | Submitted by: Engineering (technical enabler for item #1)
22. Team capacity view | Visual capacity planning view showing team allocation across projects | Submitted by: Enterprise PMs in customer interviews
Produce a triage table with category, 1-sentence justification, and any NEEDS MORE INFO questions.
Expected output: A triage table sorting all 22 items into the five categories with rationale. You should expect: SSO, Advanced Permissions, Audit Log, Bulk Import, Multi-Workspace, API Rate Limits, Permissions Inheritance, and Automated Onboarding to land in Now or Next given the enterprise mandate. Dark mode and mobile offline to land in Later or Won't Do. Several items to need more information about timeline or scope.
Decision point after triage: Review the AI's categorization and ask yourself:
- Is SSO correctly identified as NOW? (Yes — it is explicitly blocking enterprise deals.)
- Is Permissions Inheritance in the right category? (The AI may put it in Now/Next but you need to verify it is a technical enabler for Advanced Permissions, making the sequencing explicit.)
- Did the AI flag the duplicate or related items? (Advanced Permissions and Permissions Inheritance are related; Automated Onboarding is related to the current sprint work.)
Prompt 2 — WSJF Scoring of Prioritized Items:
You are a senior product manager at TaskFlow applying WSJF prioritization to a filtered backlog.
Strategic context: TaskFlow is a B2B SaaS project management product pursuing 3x enterprise ARR growth over 18 months. Enterprise trial conversion is the primary metric. Discovery has confirmed that permissions gaps and collaboration limitations are the #1 conversion barrier.
WSJF scoring rubric (Fibonacci: 1, 2, 3, 5, 8, 13):
- User-Business Value: Direct value to enterprise users or business revenue. 13 = critical path to enterprise conversion. 8 = significant enterprise retention value. 5 = meaningful value to broad user base. 3 = moderate value to a segment. 1 = marginal or cosmetic value.
- Time Criticality: Cost of delay. 13 = blocking enterprise deals NOW / compliance deadline imminent. 8 = enterprise accounts escalating / competitive gap visible. 5 = growing demand, no acute urgency. 3 = low urgency. 1 = no time pressure.
- Risk Reduction / Opportunity Enablement: Does this reduce technical or business risk, or unlock other high-value items? 13 = enables multiple other high-value items (platform capability). 8 = eliminates a major enterprise trust or compliance risk. 5 = reduces a known operational risk. 3 = minor risk reduction. 1 = no significant risk dimension.
- Job Duration: Relative effort in story points using the Fibonacci scale. Score INVERSELY: 1 = very large (>20 points), 2 = large (13–20), 3 = medium-large (8–12), 5 = medium (5–7), 8 = small-medium (3–4), 13 = small (1–2).
WSJF = (User-Business Value + Time Criticality + Risk Reduction/OE) / Job Duration
For each item below, score each component with a 1-sentence rationale, calculate WSJF, and produce a ranked table.
Items to score (from triage output — Now and Next):
[Paste the Now and Next items from your triage output here]
At the end, flag:
1. Items where Time Criticality is 8 or above — these should be sequenced earliest regardless of other scores
2. Items where Risk Reduction/OE is 8 or above — these may need to move up to unblock other investments
3. Items where Job Duration score is 1 or 2 (large effort) — flag for possible epic splitting before committing to a sprint
Expected output: A complete WSJF scoring table with component scores, rationale per dimension, calculated WSJF totals, and a ranked priority list. Expect SSO, Advanced Permissions, and Audit Log to score highest due to high Time Criticality (blocking enterprise deals) combined with strong User-Business Value. Permissions Inheritance may score high on Risk Reduction/OE because it enables Advanced Permissions.
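If you want to double-check the AI's arithmetic before trusting the ranked table, the WSJF formula is easy to reproduce in a few lines. A minimal Python sketch; the item names and component scores below are illustrative placeholders, not the exercise's answers:

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    name: str
    value: int             # User-Business Value (Fibonacci: 1, 2, 3, 5, 8, 13)
    time_criticality: int  # Cost of delay
    risk_opportunity: int  # Risk Reduction / Opportunity Enablement
    duration: int          # Job Duration, scored INVERSELY (13 = small, 1 = large)

    @property
    def wsjf(self) -> float:
        # WSJF = (User-Business Value + Time Criticality + Risk/OE) / Job Duration
        return (self.value + self.time_criticality + self.risk_opportunity) / self.duration

# Illustrative scores only -- paste in your AI-generated rubric scores
items = [
    BacklogItem("SSO / SAML support", 13, 13, 8, 3),
    BacklogItem("Permissions inheritance", 3, 5, 13, 3),
    BacklogItem("Dark mode", 1, 1, 1, 5),
]

for item in sorted(items, key=lambda i: i.wsjf, reverse=True):
    print(f"{item.name}: WSJF = {item.wsjf:.2f}")
```

Recomputing the totals yourself is a quick way to catch arithmetic slips in the AI's table before they propagate into the ranked list.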
Decision point after scoring: Apply your strategic overrides:
- Permissions Inheritance should be placed immediately before Advanced Permissions in the sequence regardless of its standalone WSJF score, because one depends on the other.
- Automated Onboarding Workflow is related to current sprint work — its proximity to existing work may warrant moving it up even if the WSJF score alone does not justify it.
- Log each override with a reason in your override register.
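An override register can be as simple as a list of structured records written to a shared file. A hypothetical sketch; the field names and the CSV format are assumptions, so adapt them to whatever tooling your team already uses:

```python
import csv
from datetime import date

# Each override records what moved, why, and who decided -- so the
# rationale survives past the planning session.
overrides = [
    {
        "date": date.today().isoformat(),
        "item": "Permissions Inheritance",
        "wsjf_rank": 6,        # illustrative rank from the scoring table
        "adjusted_rank": 2,    # placed immediately before Advanced Permissions
        "reason": "Technical enabler: must ship before Advanced Permissions",
        "decided_by": "Lead PM",
    },
]

with open("override_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(overrides[0].keys()))
    writer.writeheader()
    writer.writerows(overrides)
```

The specific format matters less than the habit: every override gets a dated entry with a reason, so you can defend the ranking months later.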
Learning Tip: In this exercise, notice that several "big bang" items — Multi-Workspace Support, Advanced Reporting, Custom Dashboards — score well on User-Business Value but may score poorly on Job Duration due to their complexity. This is WSJF working correctly: it is specifically designed to surface that shipping a smaller, high-value item faster beats shipping a larger item later. When you see these patterns, the right response is not to override the score but to ask: "Can we scope a version of this that delivers the core value with significantly less effort?" Use the Job Duration flag as a prompt to explore MVP scoping, not as a reason to defer indefinitely.
Stage 2: Generate an Outcome-Based Roadmap from Prioritized Themes
With a ranked backlog in hand, the next step is to move from item-level priorities to theme-level strategy. A ranked backlog is a stack of work; a roadmap is a coherent narrative of investments organized around outcomes. The translation from backlog to roadmap requires clustering items into themes, mapping themes to OKRs, setting initiative-level milestones, and arranging everything across a realistic time horizon.
Hands-On Steps
- Review your WSJF-ranked backlog and identify natural thematic clusters. Do not force every item into a theme — some items are standalone; what you are looking for are clusters of 2–4 related items that together tell a coherent investment story.
- Run the theme clustering prompt to have AI propose theme groupings and OKR mappings. Review and adjust.
- Run the roadmap generation prompt to produce a three-quarter initiative roadmap from the themes.
- Review sequencing for logical and technical dependencies. Flag any initiative that is scheduled before its prerequisite.
- Validate against team capacity — this is the step AI cannot do accurately without your headcount and allocation data.
Prompt Examples
Prompt 3 — Theme Clustering:
You are a senior product manager grouping a ranked backlog into strategic roadmap themes.
Company context: TaskFlow — B2B SaaS project management. Series B company targeting 3x enterprise ARR. Enterprise trial conversion and expansion are the primary growth levers. OKRs for Q3–Q4: (1) Increase enterprise trial conversion rate from 22% to 40%; (2) Reduce time-to-first-value for new enterprise accounts from 14 days to 7 days; (3) Achieve 95% renewal rate on existing enterprise accounts.
Ranked backlog (top 15 items after triage and WSJF scoring):
[Paste your WSJF-ranked list here with scores]
Your task:
1. Group these items into 3–4 strategic roadmap themes, each connected to a specific OKR
2. For each theme, write: theme name, OKR it supports, outcome statement ("By shipping these initiatives, we expect [measurable change] for [user segment]")
3. Identify any items that are "technical enablers" — initiatives that have low standalone user value but are prerequisites for high-value items; recommend how to represent these on the roadmap
4. Flag any items that do not fit neatly into a theme — these may be standalone initiatives or candidates for deferral
Output format:
## Theme: [Name]
**OKR:** [Which OKR this theme addresses]
**Outcome:** [Measurable outcome statement]
**Initiatives in this theme:** [List items]
**Technical enablers:** [List any enabling items and what they unlock]
Expected output: Three to four proposed themes such as "Enterprise Trust & Compliance," "Onboarding Acceleration," and "Cross-Team Collaboration," each with an OKR mapping and outcome statement. The permissions-related items should cluster together; SSO, Audit Log, and Bulk Import likely form the Trust & Compliance theme.
Prompt 4 — Three-Quarter Roadmap Generation:
You are a product manager drafting a three-quarter outcome-based product roadmap for TaskFlow.
Themes and initiatives (from previous step):
[Paste your validated theme structure with initiatives]
Constraints:
- Team: 4 engineers (2 senior, 2 mid-level), 1 designer, 1 QA engineer
- Q3 (weeks remaining): 4 weeks — one sprint remaining; current sprint is active with Onboarding Checklist
- Q4: Full quarter available for new work (6 sprints at 2 weeks each)
- Q1 next year: Planning horizon, subject to mid-year review
- Fixed dates: Enterprise Compliance Review with 3 accounts scheduled for November 15 (mid-Q4)
- Technical prerequisite: Permissions Inheritance must ship before Advanced Permissions
For each theme, sequence the initiatives across the three quarters and generate:
## [Theme Name]
**Q3 (remaining 4 weeks):** [Initiatives completing this quarter + what will be in progress]
**Q4:** [Initiatives shipping in Q4, in order]
**Q1 next year:** [Initiatives planned for Q1]
**Key milestone:** [The measurable checkpoint demonstrating theme progress]
**Risk:** [Top risk to this theme's delivery]
Then produce a master roadmap table:
| Initiative | Theme | Quarter | Effort estimate | Key dependency | Success metric |
|---|---|---|---|---|---|
Flag any sequencing decisions that require engineering validation before committing.
Expected output: A three-quarter roadmap with initiatives sequenced across Q3–Q1, organized by theme, with effort estimates, dependency flags, and success metrics. The November 15 enterprise compliance review should drive SSO and Audit Log to ship by mid-Q4. Permissions Inheritance should precede Advanced Permissions in the sequence.
Decision point after roadmap generation: This is your most critical human judgment step. Review the generated roadmap against what you know:
- Does the effort distribution across quarters respect team capacity? (AI does not know your team's velocity; you do.)
- Are there dependencies the AI missed because they are in your head, not in the prompt? (For example, if the designer is a shared resource also working on another team, the design-dependent items may be overloaded in Q4.)
- Does the Q4 sequence make sense given the November 15 deadline?
Adjust the roadmap manually for any items where AI sequencing does not reflect reality, and log your adjustments.
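The capacity check in the decision point above is simple arithmetic worth making explicit. A rough sketch, assuming 2-week sprints; the velocity figure and the effort estimates are illustrative assumptions, so replace them with your team's real numbers:

```python
# Rough capacity sanity check: does the planned Q4 effort fit the team?
ENGINEERS = 4
SPRINTS_IN_Q4 = 6                  # ~13-week quarter at 2-week sprints
VELOCITY_PER_ENG_PER_SPRINT = 6    # story points -- an assumption; use your data

capacity = ENGINEERS * SPRINTS_IN_Q4 * VELOCITY_PER_ENG_PER_SPRINT

# Illustrative effort estimates in story points, not the exercise's answers
q4_estimates = {
    "SSO / SAML support": 34,
    "Audit Log": 21,
    "Permissions Inheritance": 13,
    "Advanced Permissions": 34,
    "Bulk user import": 21,
}

planned = sum(q4_estimates.values())
print(f"Capacity: {capacity} pts, planned: {planned} pts "
      f"({planned / capacity:.0%} loaded)")
if planned > capacity * 0.8:       # leave ~20% slack for bugs and unknowns
    print("WARNING: Q4 is overloaded -- revisit scope or sequencing")
```

This back-of-envelope check will not replace the walk-through with your engineering lead, but it catches gross overloads before that conversation happens.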
Learning Tip: The roadmap generation prompt is most useful when you treat its output as a first draft that is structurally sound but technically unvalidated. The AI gets the logic of sequencing right (enablers before dependents, high Time Criticality items earlier) but cannot know your team's actual velocity, your shared resource constraints, or the interpersonal dynamics around who can work on what. Print the AI draft and literally walk through it with your engineering lead before finalizing — ask them to mark every initiative where the timing or estimate seems off. The combination of AI structural logic and engineering velocity knowledge is significantly better than either alone.
Stage 3: Produce Stakeholder-Ready Roadmap Narratives for Exec and Engineering Audiences
With a validated roadmap structure, you now produce the communication artifacts. In this stage, you generate two complete narratives: the executive narrative for the leadership team presentation, and the engineering narrative that your development team will use for sprint planning and architectural decisions.
Hands-On Steps
- Run the executive narrative prompt with the finalized roadmap. Review for accuracy of outcome claims and commitment language.
- Run the engineering narrative prompt. Ask your engineering lead to review this draft before it is finalized — they will often identify missing technical context or incorrect dependency descriptions.
- Compare the two narratives side by side. Identify any commitment that appears in the executive narrative but is qualified or absent in the engineering narrative — these are your over-commitment risks.
- Resolve any over-commitments by either qualifying the executive narrative or confirming the engineering commitment.
- Generate a one-page visual roadmap summary using the roadmap table from Stage 2. This becomes the shared artifact referenced in both narratives.
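A crude first pass at the side-by-side comparison can even be scripted, if both narratives refer to initiatives by consistent names. A sketch under that assumption; the two commitment lists below are illustrative, not taken from the exercise:

```python
# Over-commitment check: anything the exec narrative promises that the
# engineering narrative does not commit to is a risk to resolve.
exec_commitments = {"SSO / SAML support", "Audit Log",
                    "Advanced Permissions", "Custom Dashboards"}
eng_commitments = {"SSO / SAML support", "Audit Log",
                   "Permissions Inheritance", "Advanced Permissions"}

over_committed = exec_commitments - eng_commitments
unannounced = eng_commitments - exec_commitments

for item in sorted(over_committed):
    print(f"OVER-COMMITMENT RISK: '{item}' is promised to execs "
          f"but absent from the engineering plan")
for item in sorted(unannounced):
    print(f"Note: '{item}' is engineering scope not surfaced to execs "
          f"(often fine for technical enablers)")
```

In practice the manual side-by-side read still matters, because over-commitments also hide in qualifiers ("mid-Q4" versus "Medium confidence"), which a name-matching script cannot see.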
Prompt Examples
Prompt 5 — Executive Narrative:
You are a product manager writing a quarterly roadmap narrative for the TaskFlow executive team.
Audience: CEO, CPO, VP Sales, CFO. Series B company context. Meeting is the Q3/Q4 planning review. They need to leave with conviction that our product roadmap directly addresses the enterprise growth mandate from our Series B.
Roadmap themes and initiatives:
[Paste finalized roadmap structure]
Company context: [Paste Series B context, OKRs, enterprise conversion gap findings]
Generate a 5-section executive narrative (500–600 words):
1. THE SITUATION (2–3 sentences): What is the enterprise growth challenge, and what does our product roadmap do to address it?
2. OUR THREE BETS (1 paragraph each — one per theme):
- What we are investing in
- Why this over alternatives
- What measurable outcome we expect, and by when
3. HOW WE WILL KNOW IT'S WORKING: 4–5 specific metrics with baseline and target (use: enterprise trial conversion rate, time-to-first-value, enterprise NPS, API adoption rate, or other appropriate metrics)
4. WHAT COULD DERAIL US: Top 2 risks and our mitigation approach
5. WHAT WE ARE CHOOSING NOT TO DO: 3 explicit deprioritizations with 1-sentence rationale each (demonstrate strategic discipline, not indecision)
Tone: Confident, outcome-focused, business language. No feature lists, no PM jargon. This is an investment narrative.
Expected output: A 500–600 word executive narrative organized as an investment case, with three clear bets connected to enterprise growth, specific metrics, honest risk acknowledgment, and disciplined deprioritization choices. Ready for use in the leadership planning session.
Prompt 6 — Engineering Narrative:
You are a product manager writing a quarterly roadmap narrative for the TaskFlow engineering team.
Audience: Engineering lead, senior engineers, architects, QA lead. They will use this document to begin sprint planning for Q4 and to make architectural decisions. They need scope clarity, dependency understanding, and honest acknowledgment of uncertainty.
Roadmap themes and initiatives:
[Paste finalized roadmap structure with dependencies]
Technical context you know: Permissions Inheritance requires a schema migration. SSO integration requires a dedicated security review. The Audit Log is a new database table that will need to be designed for query performance. Advanced Permissions depends on Permissions Inheritance completion.
Generate an engineering-facing narrative with these sections:
1. CONTEXT (2–3 sentences): Why are we making these investments?
2. INITIATIVE BREAKDOWN — one section per initiative, ordered by sequence:
### [Initiative Name]
**What we are building:** [Plain description of scope — what does "done" look like?]
**Why this sequence:** [What this depends on / what depends on it]
**Technical considerations:** [Known complexity, architectural decisions, integration points]
**Design input needed:** [Yes/No — and by when]
**Open questions before we start:** [What needs investigation or decision]
**Our confidence in the timeline:** High / Medium / Low + brief reason
3. WHAT WE ARE NOT BUILDING THIS QUARTER: Explicit list of deferred items, so engineering can shut down scope-creep questions with confidence
4. RISKS WE ARE WATCHING: Technical unknowns that could affect Q4 delivery
5. DEFINITION OF SUCCESS FOR Q4: What does a successful quarter look like from an engineering delivery standpoint?
Be specific and direct. Engineers will use this document daily — accuracy matters more than narrative elegance.
Expected output: A structured engineering roadmap narrative with per-initiative breakdowns covering scope, sequencing rationale, technical considerations, design dependencies, open questions, and confidence levels. The document serves as the engineering team's working reference for Q4 sprint planning.
Decision point after generating narratives: Compare both documents carefully. Check:
- Does the executive narrative say SSO will ship "mid-Q4" while the engineering narrative says it is "Medium confidence" due to the security review dependency? If so, qualify the executive version.
- Does the executive narrative mention "custom dashboards" while the engineering narrative does not include it? If so, remove it from the executive narrative or add it to engineering scope.
Learning Tip: Every mismatch between the executive narrative and the engineering narrative is a gap in your own alignment. You are the person responsible for ensuring that what you tell your leadership team your engineers will build is what your engineers have actually signed up to build. The discipline of reading both documents side by side — before distributing either — is how you catch and close these gaps before they become broken commitments. Build this comparison into your standard roadmap communication preparation ritual.
Stage 4: Run a Mock Roadmap Review with AI-Generated Objections and Responses
The final stage of this exercise uses AI to simulate a roadmap review session. You will receive objections from several stakeholder types based on your roadmap, prepare substantive responses, and rehearse them in a format you can reuse before your real planning session.
Hands-On Steps
- Run the mock review prompt — provide your executive narrative and a description of the attendees.
- Review the generated objections. Identify the three "hardest" ones — the objections you feel least confident about.
- For the hardest three objections, run the response preparation prompt to generate substantive responses.
- Review your responses for weak points: any response that relies on a promise you are not confident keeping, or that acknowledges a risk without a concrete mitigation.
- For any response you cannot make confidently, decide: is there additional analysis needed, or is this a genuine uncertainty that should be acknowledged honestly in the session?
- Practice delivering your hardest response aloud — the words need to be yours, not the AI's.
Prompt Examples
Prompt 7 — Mock Stakeholder Objections:
You are simulating a product roadmap review at TaskFlow.
The PM has just presented the Q4 roadmap (executive narrative version). The following stakeholders are in the room:
1. CEO: Focused on 3x ARR growth milestone. Concerned that the team is not moving fast enough on enterprise deals. Has previously said "we need to ship features that close deals, not nice-to-haves."
2. VP Sales: Managing a pipeline of 12 enterprise prospects, 4 of which have specifically asked for SSO. Concerned about timeline commitments. Has a history of overpromising product capabilities to prospects.
3. CFO: Focused on engineering efficiency and ROI. Skeptical of exploratory investments. Will ask about cost per feature and whether we are prioritizing correctly.
4. VP Customer Success: Responsible for renewal rate. Has 3 enterprise customers threatening to churn, two of which have mentioned the permissions limitations as a reason.
5. CTO: Concerned about technical debt accumulating from rushing enterprise features. Believes the team is taking on too much in Q4.
Generate the top 2 objections from each stakeholder — specific to this roadmap, not generic. Make the objections realistic and sharp. Then generate the "hardest question of the session" — the question that is most likely to put the PM on the defensive if unprepared.
Format:
## [Stakeholder Role]
**Objection 1:** [Specific objection in their language]
**Underlying concern:** [What are they really worried about?]
**Objection 2:** [Specific objection in their language]
**Underlying concern:** [What are they really worried about?]
## Hardest Question of the Session
[The one question that cuts across all concerns and has no easy answer]
Expected output: Ten objections (two per stakeholder) that are specific to the TaskFlow enterprise roadmap, with underlying concern analysis. Plus one "hardest question" that the PM should be most prepared for. Common hardest questions include: "What is our confidence level that these four items will actually close the four SSO-blocking deals?" or "If we have to choose between shipping SSO on time and maintaining quality, what do we do?"
Prompt 8 — Response Preparation:
You are a product management advisor helping a PM prepare responses to roadmap objections.
For each objection below, generate a response that:
1. Opens with acknowledgment of the concern's legitimacy (do not dismiss or minimize)
2. Provides specific context or evidence that addresses the concern
3. Frames the roadmap decision in terms of the stakeholder's underlying interest
4. Ends with either a clear commitment or an explicit statement of the conditions under which the decision would change
5. Is 3–5 sentences — substantive but conversational
Objections:
1. VP Sales: "You're telling me SSO won't ship until mid-Q4? I have a deal closing in October that is conditioned on SSO. Can't we just get that one thing done first and ship everything else later?"
Context: SSO is a complex SAML integration requiring a security review. Mid-Q4 is actually an aggressive estimate. There is a real risk it could slip to late Q4 if the security review surfaces issues.
2. CFO: "I see you're investing heavily in permissions and compliance features, but these don't seem to directly drive new revenue. Can you show me the ARR impact of each of these initiatives?"
Context: Enterprise accounts have directly cited permissions gaps as the reason their trials did not convert. However, you do not have a precise ARR model for each initiative — you have directional evidence from customer interviews.
3. CTO: "You have five major initiatives in Q4. That seems like too much scope for a team of four engineers. What happens if we hit the November compliance review deadline and two of these items are still in progress?"
Context: The November compliance review is for three specific enterprise accounts. The relevant initiatives for that review are SSO and Audit Log — not all five Q4 initiatives. If only SSO and Audit Log are complete by November 15, the compliance review can proceed.
4. CEO: "I look at this roadmap and I don't see anything that's going to win us the next 10 enterprise logos. Onboarding checklists and permission inheritance feel incremental. When are we building the features that make TaskFlow the obvious choice for enterprise?"
Context: The Series B thesis was specifically about solving enterprise-grade administration and compliance, not building net-new feature differentiation. The current roadmap directly executes on the thesis.
Generate a response for each objection.
Expected output: Four substantive, conversational responses to the hardest roadmap objections, each opening with acknowledgment, building to evidence, and closing with a commitment or reconsideration trigger. These responses are ready for live delivery with light personalization.
Final decision point: After reviewing the AI-generated responses, assess each one for honest confidence:
- Can you actually deliver what the response implies?
- Are there any responses that make implicit commitments you have not validated with engineering?
- Is there any objection where the honest answer is "you're right, and we need to revisit the sequencing"?
The goal of this mock review is not to prepare you to "win" the session — it is to prepare you to have a productive, evidence-based conversation that builds stakeholder trust rather than eroding it.
Learning Tip: The hardest part of the mock review exercise is identifying the objection you want to avoid. When you scan the AI-generated objections and your stomach tightens on one specific item, that is the one to spend the most preparation time on. Discomfort with an objection is not a sign that you need a better answer; it is often a sign that the concern itself is valid. Use the pre-mortem mindset: if that objection reflects a real risk, what would you do differently? Addressing it head-on in the session, with a proposed mitigation, builds far more trust than a polished deflection.
Bringing It All Together — The End-to-End Workflow Checklist
After completing this hands-on exercise, you have produced:
- A triaged and WSJF-scored backlog with documented overrides
- Three to four strategic roadmap themes mapped to OKRs
- A three-quarter outcome-based roadmap with sequenced initiatives, dependencies, and success metrics
- An executive narrative (investment-and-outcome framing, 500–600 words)
- An engineering narrative (scope-and-sequencing detail, per-initiative breakdown)
- A mock stakeholder review with 10 realistic objections and prepared responses
This is the complete Module 5 skill set applied to a single, realistic scenario. The workflow you executed — triage → WSJF scoring → theme clustering → roadmap generation → audience narratives → objection preparation — is the repeatable process you take into every planning cycle.
The essential workflow checklist for each planning cycle:
- Triage incoming backlog items before scoring — eliminate non-starters first
- Score with WSJF (or your chosen framework) using explicit rubrics and strategic context
- Review scores for strategic overrides — log every override with a reason
- Cluster scored items into themes mapped to OKRs
- Generate roadmap draft — validate sequencing against technical dependencies and team capacity
- Generate executive narrative — review for over-commitments
- Generate engineering narrative — have engineering lead review before distribution
- Compare narratives side by side — close any commitment gaps
- Generate FAQ and objection preparation — 48 hours before the review session
- Distribute pre-reads 24 hours before — meeting time is for discussion, not discovery
- Capture structured meeting notes during the session
- Generate and distribute decision summary within 24 hours
- Log all decisions in the decision register with ADRs for significant choices
- Set roadmap review triggers — specific conditions under which the roadmap should be revisited
Key Takeaways
- The full prioritization-to-roadmap workflow has five stages: triage, WSJF scoring, theme clustering, roadmap generation, and stakeholder communication — each stage builds on the previous and each requires a specific type of human judgment alongside AI assistance.
- Strategic context is the single most important input to every AI prompt in this workflow; a prompt with rich strategic context produces output you can use; a prompt without it produces output you have to rebuild.
- The most critical human judgment point is validating AI-generated roadmap sequencing against actual team capacity and technical dependencies — AI gets the logic right but cannot know your velocity, shared resource constraints, or architectural realities.
- Comparing executive and engineering narratives side by side before distribution is the practice that prevents over-commitments — every mismatch between what you told leadership and what engineering has agreed to is a gap you own.
- Mock roadmap review preparation with AI-generated objections is most valuable when it surfaces the objection that makes you uncomfortable — that discomfort is a signal about genuine risk, not a reason to prepare a better deflection.
- The end-to-end workflow is a repeatable process, not a one-time exercise; teams that run this process consistently every planning cycle report faster sessions, better alignment, and fewer mid-quarter surprises than teams that rely on informal prioritization and ad-hoc communication.