Agentic Planning

Overview

Planning is the stage of product management where strategic intent meets operational reality. It is where discovery outputs — prioritized opportunities, validated hypotheses, strategic directives — are translated into the specific commitments that a team will execute. Done well, planning produces clarity: every engineer knows what to build, every stakeholder knows what to expect, and every sprint starts with a coherent set of goals that connect daily work to quarterly outcomes. Done poorly, planning produces backlog debt: a growing list of half-defined stories, conflicting priorities, and sprint goals that dissolve on first contact with reality.

The planning challenge is not a lack of methodology — RICE, SAFe, story mapping, and OKR frameworks are well-understood. The challenge is that good planning is labor-intensive in proportion to how much context it needs to incorporate. A roadmap decision that properly weighs customer evidence, technical dependencies, team capacity, competitive timing, and strategic priorities requires synthesizing information from five or six different sources. A sprint plan that properly sequences stories, accounts for dependencies, and flags capacity risks requires reading every story in the queue with engineering-level attention to detail. Most PMs do not have the time to do this synthesis at the depth it requires — so they simplify, rely on intuition, and accept that their plans will need significant in-sprint adjustment.

AI changes the feasibility of deep, context-rich planning. The synthesis work that was the bottleneck — reading everything, cross-referencing sources, scoring alternatives — can be handled by AI agents in minutes. The PM's job shifts to structuring the inputs that AI needs, reviewing and adjusting the outputs it produces, and applying the political and relational context that no AI system can access. The result is plans that are more comprehensive, more accurately sequenced, and more explicitly connected to evidence than manual planning typically allows.

This topic covers four dimensions of agentic planning: roadmap updating from prioritized opportunities, sprint backlog generation from roadmap themes, the plan review and approval process, and real-time plan adaptation as context changes. By the end, you will have the knowledge and tools to implement an AI-assisted planning workflow that consistently produces sprint plans that are both strategically sound and operationally executable.


How to Chain Prioritized Opportunities Into Roadmap Updates with AI

A product roadmap is a living document — or should be. In practice, most roadmaps are quarterly or annual artifacts updated in big-batch planning cycles, because synthesizing new opportunity data into roadmap changes is laborious enough that teams only do it on a fixed schedule. The result is a systematic lag between what the discovery process is learning and what the roadmap is committing to.

In an agentic workflow, roadmap updates are triggered continuously as opportunities are approved through the discovery review gate. Rather than batching roadmap decisions into a quarterly cycle, new opportunities are evaluated against the existing roadmap in near real-time, producing recommended roadmap changes for PM review. This does not mean the roadmap changes constantly — the PM review gate controls the frequency of actual changes. But it means the PM is always working with current recommendations rather than lagging evidence.

The input to the roadmap chaining process is the Approved Opportunity Stack: a prioritized list of opportunities that have passed through the discovery review gate, each with a full Opportunity Statement (problem, evidence, scoring, strategic alignment). The context input is the Current Roadmap: the existing set of roadmap themes, features, and commitments, with their current rationale, sequencing, and dependencies. The AI's task is to compare the two and produce a Roadmap Delta Recommendation.

The Roadmap Delta Recommendation has three components:

New Additions. Opportunities that do not overlap with any existing roadmap item and score high enough to warrant a new roadmap entry. The AI generates a proposed roadmap item for each: theme label, one-paragraph description of the problem and intended outcome, proposed quarter, dependency flags, and the opportunity evidence that supports it.

Sequence Adjustments. Existing roadmap items whose priority score has changed relative to new opportunities. If a new opportunity scores significantly higher than an item currently in Q2, the AI flags this as a sequencing conflict and proposes a swap or re-sequencing, with reasoning based on comparative scores and strategic alignment.

Conflict and Redundancy Flags. Opportunities that overlap with, contradict, or render redundant an existing roadmap item. Examples include: a new opportunity that is essentially the same problem as an existing item (suggesting the two should be merged), a new opportunity whose proposed solution approach contradicts a technical decision embedded in an existing roadmap item, or a new opportunity that is in a segment the roadmap is explicitly deprioritizing.
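
To make the delta structure concrete, here is a minimal sketch of how the three components might be represented as structured data. The field names are illustrative assumptions, not a prescribed schema; your planning tool's own representation works equally well.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DeltaRecommendation:
    kind: str                                # "addition", "resequence", or "flag"
    item_name: str
    rationale: str                           # the evidence-backed reasoning
    confidence: str                          # "High" / "Medium" / "Low"
    proposed_quarter: Optional[str] = None   # for additions and sequence changes
    conflicts_with: Optional[str] = None     # existing roadmap item, for flags

@dataclass
class RoadmapDelta:
    new_additions: list[DeltaRecommendation] = field(default_factory=list)
    sequence_adjustments: list[DeltaRecommendation] = field(default_factory=list)
    conflict_flags: list[DeltaRecommendation] = field(default_factory=list)

Carrying a confidence level on every recommendation is what lets the PM review sort by scrutiny priority rather than reading linearly.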

Handling roadmap conflicts with AI assistance requires the PM to provide explicit conflict resolution criteria in the planning context: "When two opportunities compete for the same roadmap slot, prioritize the one with higher customer evidence confidence." "When a new opportunity contradicts an existing item's approach, flag for PM review rather than auto-resolving." Clear criteria prevent the AI from making implicit trade-off decisions that should remain with the PM.
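
To see how explicit criteria keep implicit trade-offs out of the AI's hands, here is a hedged sketch that encodes two of the sample rules from the prompt below as code. The 20% and 60% thresholds come from those sample criteria; the dictionary fields, including the scope_overlap estimate, are assumptions for illustration.

def resolve_conflict(new_opp, existing_item):
    """Apply explicit criteria; anything not covered escalates to the PM."""
    # Sample rule: a new opportunity scoring 20%+ higher on RICE proposes a swap
    if new_opp["rice"] >= existing_item["rice"] * 1.20:
        return ("propose_swap", f"{new_opp['name']} outscores {existing_item['name']}")
    # Sample rule: more than 60% overlap in problem scope proposes consolidation
    if new_opp["scope_overlap"] > 0.60:
        return ("propose_merge", f"overlaps heavily with {existing_item['name']}")
    # Default: never auto-resolve; the trade-off stays with the PM
    return ("flag_for_pm_review", "no explicit criterion applies")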

The PM reviews the Roadmap Delta Recommendation and makes the final call on each proposed change. The critical discipline is to separate two types of PM override: value-adding overrides (the PM has context the AI lacks that changes the decision) and comfort-zone overrides (the PM rejects a well-evidenced recommendation because it conflicts with existing commitments or organizational inertia). The first type is healthy and necessary; the second type is the source of roadmap staleness and should be documented as a conscious trade-off rather than silently suppressed.

Hands-On Steps

  1. Document your current roadmap update trigger: what event or schedule currently causes you to update the roadmap? Is it a quarterly planning cycle, a leadership request, a significant discovery finding, or something else? Identify the lag between "new evidence arrives" and "roadmap reflects it."
  2. Format your current roadmap for AI consumption: restructure it as a structured list of items, each with a label, one-paragraph rationale, current quarter assignment, and priority score if available. This is the baseline context the AI needs to generate meaningful delta recommendations. (See the example item after this list.)
  3. Write your conflict resolution criteria: a brief document (one page) that specifies how trade-offs should be resolved when new opportunities compete with existing roadmap items. Include at least five explicit criteria. This document becomes part of the planning prompt context for every roadmap update session.
  4. Run a delta recommendation exercise with a real approved opportunity: take one opportunity that has cleared your discovery review gate and run the roadmap chaining prompt (see below). Review the output against your current roadmap. How many of the AI's recommendations would you have reached independently? How long would it have taken you to get there manually?
  5. Design your roadmap update cadence for the agentic workflow: how frequently will you process the Approved Opportunity Stack against the roadmap? (Weekly is often appropriate — frequent enough to stay current, infrequent enough to avoid roadmap instability.) Set this as a recurring calendar block and define the maximum number of delta recommendations you will review per session.
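
For step 2, a single roadmap item restructured for AI consumption might look like the following. Every value is invented for illustration; the field set simply mirrors the format used in the prompt template below.

roadmap_item = {
    "name": "Self-serve onboarding revamp",          # hypothetical example item
    "quarter": "Q2",
    "rationale": "Trial users stall at the workspace-setup step; "
                 "evidence from support tickets and session replays.",
    "priority_score": 12.5,                          # RICE score, if available
    "dependencies": ["events pipeline upgrade", "design system v2"],
}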

Prompt Examples

Prompt:

You are a senior product strategist helping me update a product roadmap based on new discovery outputs. I will give you:
1. My current roadmap (existing committed items with rationale and quarter assignment)
2. A set of newly approved opportunities from recent discovery work
3. My conflict resolution criteria

Your task is to produce a Roadmap Delta Recommendation.

Current Roadmap:
[Paste your roadmap — list of items, each with: Item Name | Quarter | One-sentence rationale | Priority score if available | Key dependencies]

Newly Approved Opportunities:
[Paste 3-5 approved opportunity statements, each with: Problem | Evidence summary | RICE score | Strategic alignment]

My conflict resolution criteria:
- When a new opportunity scores 20% higher than an existing Q2 item on RICE, recommend a priority swap
- When opportunities overlap by more than 60% in problem scope, recommend consolidation
- When a new opportunity contradicts the technical approach of an existing item, flag for PM review rather than auto-resolve
- Items in [specific area] are frozen for this quarter — do not recommend changes to these
- Our highest-priority OKR this quarter is [OKR] — weight new opportunities accordingly

Produce a Roadmap Delta Recommendation with three sections:
1. Proposed New Additions (items that should be added, with proposed quarter and rationale)
2. Proposed Sequence Adjustments (existing items that should move, with before/after and rationale)
3. Conflict and Redundancy Flags (items that require PM judgment to resolve, with a clear description of the conflict and the options available)

For each recommendation, provide: the recommendation, the evidence that supports it, the confidence level (High/Medium/Low), and any risks or trade-offs the PM should consider.

Expected output: A structured three-section Roadmap Delta Recommendation with specific, evidence-backed proposals for additions, sequence changes, and flags. Each recommendation should be specific enough that the PM can make a binary approve/reject decision. Confidence levels help the PM quickly identify which recommendations to scrutinize most carefully.

Learning Tip: The most powerful use of AI in roadmap planning is not generating new roadmap items — it is surfacing conflicts and redundancies in your existing roadmap that you may not have noticed. Run your current roadmap through the conflict detection prompt with no new opportunities, just asking the AI to identify internal inconsistencies, overlapping scope, and sequencing anomalies. Most roadmaps contain several of these that have accumulated over time and are quietly undermining execution clarity.


How AI Generates Sprint-Ready Backlogs From Roadmap Themes

The transition from roadmap to sprint backlog is one of the most skill-intensive and time-consuming tasks in product management. A roadmap item like "Improve onboarding conversion for self-serve users" is a strategic intention — it expresses a desired outcome but says nothing specific about what to build. Converting it into a set of sprint-ready user stories requires: understanding the user experience in detail, identifying the specific friction points to address, designing the solution approach, decomposing it into buildable increments, writing stories at the right granularity, and crafting acceptance criteria that are genuinely testable. A thorough job takes two to four hours per roadmap theme.

AI can dramatically accelerate this process. Given a well-specified roadmap theme and sufficient product context, an AI agent can generate a first-draft epic structure with user stories and acceptance criteria in five to ten minutes. The PM's job shifts from generation to review: instead of writing stories from scratch, the PM reviews, adjusts, and approves a complete draft. For most themes, this review takes 30-60 minutes, a large saving over the two to four hours that manual drafting requires, even after including the time spent assembling context and prompting.

The theme-to-backlog generation pipeline works as follows:

Step 1: Theme Elaboration. The AI receives the roadmap theme and expands it into a detailed Epic Description: a more complete statement of the problem being solved, the target user persona, the intended behavioral outcome, the success metrics, and the key constraints. This step ensures the subsequent story generation is grounded in a fully articulated problem space rather than a vague theme label.

Step 2: User Journey Mapping. The AI maps the current user journey through the experience area the theme addresses, identifying each step, the primary friction points, and the moments where the proposed changes will have the most impact. This step produces a mini Journey Map that structures the story generation: stories are written to address specific journey friction points rather than being generated abstractly.

Step 3: Story Generation. The AI generates a full set of user stories covering the theme, organized as: Core Path Stories (the happy path of the intended experience), Edge Case Stories (error states, boundary conditions, unusual user behaviors), and Technical Infrastructure Stories (backend or integration work required to support the user-facing stories). Stories are written in standard format with acceptance criteria (3-5 criteria per story) and an initial complexity estimate (small/medium/large or story point estimate if team velocity data is provided).

Step 4: Backlog Readiness Check. Before the generated stories enter the sprint queue, they pass through an automated quality gate. The AI applies the INVEST criteria (Independent, Negotiable, Valuable, Estimable, Small, Testable) to each story and flags any that fail a criterion. Stories that fail the Small criterion are flagged for splitting. Stories that fail the Testable criterion are flagged for acceptance criteria revision. Stories that fail the Independent criterion (they have blocking dependencies on other stories) are flagged for sequencing.
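
A few of the INVEST checks are mechanical enough to sketch directly. Here is a minimal version, with assumed story fields and an assumed 8-point threshold for the Small criterion:

def invest_flags(story, max_points=8):
    """Return the INVEST criteria this story may fail, with suggested fixes."""
    flags = []
    if story["points"] > max_points:
        flags.append("Small: over threshold, flag for splitting")
    if not story.get("acceptance_criteria"):
        flags.append("Testable: acceptance criteria missing, flag for revision")
    if story.get("blocking_deps"):
        flags.append("Independent: blocking dependencies, flag for sequencing")
    return flags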

Step 5: Sprint Slot Proposal. The AI analyzes the full generated story set and the team's capacity context (velocity, available sprint points, known dependencies) to propose a sprint slot assignment: which stories go into the next sprint, which are held for the sprint after, and which should be pushed to future planning due to dependency constraints. The proposal is not a commitment — it is a sequencing recommendation for the PM and team to review in sprint planning.
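
As an orchestration sketch, the five steps chain naturally, with each step's output feeding the next. The call_ai function, the prompt names, and the context fields here are assumptions about how you might wire the pipeline together, not any specific tool's API:

def generate_backlog(theme, product_context, call_ai):
    # Step 1: expand the theme label into a full Epic Description
    epic = call_ai("elaborate_theme", theme=theme, context=product_context)
    # Step 2: map the current journey and its friction points
    journey = call_ai("map_journey", epic=epic, context=product_context)
    # Step 3: generate core-path, edge-case, and technical stories
    stories = call_ai("generate_stories", epic=epic, journey=journey)
    # Step 4: INVEST quality gate; flag failures rather than dropping stories
    for story in stories:                   # assumes stories come back as dicts
        story["invest_flags"] = invest_flags(story)   # gate sketched above
    # Step 5: propose sprint slots from capacity and dependency context
    return call_ai("propose_slots", stories=stories,
                   capacity=product_context["sprint_capacity"])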

Hands-On Steps

  1. Choose one roadmap theme from your current roadmap that is coming up in the next sprint or next planning cycle. Write a detailed brief for it: what problem does it solve, who for, what does success look like, what are the key constraints?
  2. Run the theme elaboration step using AI: provide the brief and ask the AI to generate a full Epic Description using the fields listed above. Review the output — does the AI's interpretation of the theme match your intent? If not, what context was missing from your brief?
  3. Review the Journey Map output from the AI: does it accurately represent the user experience as you understand it? Where has the AI made assumptions about the journey that need to be corrected? Make corrections and note what additional context would have prevented the error.
  4. Review the generated user stories against the INVEST criteria personally, before looking at the AI's quality gate output. Compare your assessment to the AI's. Where did you agree? Where did you disagree? This calibration exercise helps you understand where the AI's story quality assessment is reliable and where it needs human correction.
  5. Run the full pipeline on two different roadmap themes and compare the output quality. Often the AI performs very differently on different theme types — it may generate excellent stories for a well-understood user-facing feature and weaker stories for a complex technical capability. Understanding these patterns helps you decide when to use AI generation versus when to write stories manually.

Prompt Examples

Prompt:

You are a senior product manager and backlog specialist. I am going to give you a roadmap theme and ask you to generate a sprint-ready backlog for it.

Roadmap theme: [Theme name and one-paragraph description]

Product context:
- Product type: [SaaS / mobile / platform / etc.]
- Target user persona: [2-3 sentence persona description]
- Technical context: [existing architecture, relevant integrations, known constraints]
- Team velocity: [average story points per sprint]
- Current sprint capacity: [available points for this theme]
- Definition of Ready: [your team's DoR criteria]

Please generate the following:

1. Epic Description (problem statement, user persona, intended behavioral outcome, success metrics, key constraints)

2. User Journey Map (step-by-step current state journey through the relevant experience area, with friction points highlighted)

3. User Stories — organized into three groups:
   a. Core Path Stories (3-5 stories covering the primary happy path)
   b. Edge Case Stories (2-3 stories covering error states and boundary conditions)
   c. Technical Stories (1-2 stories for backend/infrastructure work if applicable)
   For each story:
   - Story title
   - Story format: As a [persona], I want [action], so that [benefit]
   - Acceptance criteria (3-5 criteria, each beginning with "Given/When/Then" or "The system shall")
   - Complexity estimate: S/M/L
   - INVEST flags: list any INVEST criteria this story may not fully meet, with explanation

4. Sprint Slot Recommendation: given my available capacity, which stories should go into the next sprint vs. the sprint after? Provide a recommended sprint breakdown with total estimated points.

5. Known Unknowns: list any questions that must be answered before development can start on these stories.

Expected output: A complete, sprint-planning-ready backlog package for the given theme, including an epic description, journey map, organized user stories with acceptance criteria, a capacity-aware sprint slot recommendation, and a known unknowns list. This output is designed to be the starting point for sprint planning, not a final plan — the team reviews and adjusts in sprint planning.

Learning Tip: When you first start using AI to generate user stories, the temptation is to use the output directly without deep review — it looks good at first glance. Resist this. For the first four to six weeks, compare the AI-generated stories to stories you have written manually for similar themes, and note the specific types of errors or gaps that appear. Common failure modes include: acceptance criteria that describe implementation rather than behavior, edge case stories that miss the most likely real-world failure modes, and stories that are too large to complete in a sprint but are not flagged as such. Knowing the failure patterns helps you review AI outputs in a targeted, efficient way.


How to Review and Approve AI-Generated Plans Before Committing to Delivery

The quality of an AI-generated plan depends entirely on the quality of the context provided and the rigor of the review that follows. Committing an AI-generated sprint plan to delivery without thorough review is one of the most common — and costly — failure modes in agentic product management. The plan may look complete, well-structured, and appropriately scoped, while containing errors that will only become apparent mid-sprint: a dependency that was not accounted for, an acceptance criterion that is technically impossible to implement as written, a story scope that is 3x larger than the estimate suggests.

A robust plan review uses a structured checklist that covers five dimensions:

Dimension 1: Feasibility Review. For each story in the plan, ask: can this be built as written, given the current technical context? This review requires PM-engineering collaboration. The PM cannot assess technical feasibility alone; a tech lead or senior engineer must review stories with non-obvious technical implications. Specific checks include: are there API integrations required that do not currently exist? Does any story require data that is not currently stored or accessible? Does any story imply a design pattern that conflicts with the existing architecture? Feasibility issues at the story level are far cheaper to resolve in planning than in sprint.

Dimension 2: Sequencing Review. The sprint plan must sequence stories in a logical execution order. Check: do any stories have dependencies on other stories in the plan that are not reflected in the proposed sequence? Are there stories that should logically precede others (e.g., a backend data model change should precede the frontend story that displays the data)? Are there cross-team or cross-sprint dependencies that the plan assumes will be resolved but may not be? A good sequencing review produces a dependency-aware ordered list rather than an unordered backlog.

Dimension 3: Dependency Coverage. Identify every external dependency that the sprint plan requires: design assets (are mockups ready?), third-party services (are API credentials and environments available?), data (is the data pipeline ready for this feature?), and cross-team work (does a platform team need to complete something before your team can start?). For each dependency, assess its readiness status. Any unresolved external dependency should be flagged as a sprint risk before commitment.

Dimension 4: Capacity Alignment. Compare the total estimated story points in the plan against the team's available capacity for the sprint, accounting for: holidays and planned time-off, ongoing support or bug-fix obligations, non-sprint ceremonies (planning, review, retro, refinement), and any known non-story work (technical debt paydown, environment work, documentation). A plan that is technically at velocity may still be over-capacity once these items are subtracted. The capacity alignment check prevents the common failure mode of a sprint that starts at 100% and ends at 70%.
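
The capacity arithmetic itself is simple; the failure mode is skipping the subtraction. A minimal sketch, with every figure invented for illustration:

velocity = 40            # average points per sprint (example value)
time_off = 6             # holidays and planned PTO this sprint
support_load = 4         # ongoing support and bug-fix obligations
ceremonies = 3           # planning, review, retro, refinement
non_story_work = 3       # tech debt paydown, environment work, docs

available = velocity - time_off - support_load - ceremonies - non_story_work
print(f"Plan against {available} points, not {velocity}.")  # 24, not 40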

Dimension 5: Human Judgment Inputs. This is the most important review dimension and the one AI cannot perform. Ask: what do I know about this sprint that the AI does not? Common human judgment inputs include: team morale and energy level (is this sprint immediately after a challenging one?), informal commitments to stakeholders (a customer success call last week that implied a feature would be ready), known technical risks that are not formally documented, and organizational context (a product review with leadership scheduled mid-sprint that will require preparation time). Document these judgment inputs explicitly and adjust the plan accordingly.

The plan review gate should produce one of four outcomes: Commit (the plan is ready for sprint planning as-is), Revise (specific changes are required before the plan is committed), Reduce scope (the plan is well-formed but over-capacity; specific stories should be moved to the next sprint), or Hold (there are unresolved dependencies or fundamental issues that require resolution before the plan can proceed). Never commit a plan with unresolved Dimension 3 (dependency) issues — missing dependencies cause more mid-sprint disruption than any other planning failure.
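
The verdict logic can be sketched as a hard-gated decision, with the dependency rule checked first. The plan fields here are assumptions; the point is the ordering, not the schema:

def review_verdict(plan):
    # Dimension 3 is a hard gate: unresolved dependencies always hold the plan
    if any(d["status"] in ("At Risk", "Blocked") for d in plan["dependencies"]):
        return "Hold"
    # Feasibility or sequencing flags require specific changes before commit
    if plan["feasibility_flags"] or plan["sequencing_flags"]:
        return "Revise"
    # A well-formed but over-capacity plan reduces scope rather than revises
    if plan["total_points"] > plan["capacity"]:
        return "Reduce scope"
    # Dimension 5, human judgment, deliberately stays outside the code
    return "Commit"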

Hands-On Steps

  1. Build your Plan Review Checklist by adapting the five-dimension framework to your team's specific context. For each dimension, write 3-5 specific questions that you will ask during every plan review. Format it as a checkbox list you can work through in 30-45 minutes.
  2. Identify your tech lead or senior engineer counterpart for the Feasibility Review dimension. Establish a regular 30-minute pre-sprint sync where you review AI-generated plans together before sprint planning. This is the most important investment in planning quality you can make — technical feasibility issues that are caught here are fixed in hours; the same issues caught in sprint take days.
  3. Create a Dependency Status tracker: a simple table with columns for Dependency Name, Owner, Expected Ready Date, and Current Status (Ready / At Risk / Blocked). Update this before every sprint planning session. Any dependency at "At Risk" or "Blocked" status must be resolved or the dependent stories must be removed from the sprint.
  4. Establish your capacity calculation method: write down exactly how you calculate available sprint capacity, accounting for all the overhead items listed above. Then compare your calculated capacity against what you have been committing in recent sprints. If your commitments have consistently exceeded calculated capacity, identify the overhead items you have been underestimating.
  5. Conduct a post-mortem on your last three sprints specifically through the lens of the plan review dimensions: for each sprint, identify which planning failures occurred and which dimension of the review protocol they belong to. This tells you which dimensions to emphasize most in your plan review practice.

Prompt Examples

Prompt:

You are a sprint planning reviewer. I am going to give you an AI-generated sprint plan and I need you to review it against five quality dimensions before I commit it to the team.

Sprint context:
- Team velocity: [X story points per sprint]
- Sprint capacity this sprint: [Y points, after accounting for holidays/overhead]
- Sprint goal: [one sentence]
- Key dependencies: [list any known external dependencies and their status]
- Technical constraints: [any known technical constraints relevant to this sprint]

AI-generated sprint plan:
[Paste the proposed sprint backlog — story titles, descriptions, acceptance criteria, and point estimates]

Please review the plan against these five dimensions:

1. Feasibility: Flag any stories where the acceptance criteria or scope description implies technical work that may be more complex than estimated, requires unavailable data, or contradicts known technical constraints. For each flag, describe the concern and suggest how to resolve it.

2. Sequencing: Identify any dependency relationships between stories that are not currently reflected in the ordering. Propose a revised execution sequence that accounts for these dependencies.

3. Dependency Coverage: Based on the stories in the plan, what external dependencies (design, data, platform, third-party services) are required? Which of these are not currently confirmed as ready?

4. Capacity Alignment: Calculate the total story points in this plan and compare against the stated capacity. Flag any stories that should be moved to the next sprint if the plan is over-capacity. Suggest which stories to defer based on priority and dependency order.

5. Risk Summary: After reviewing all four dimensions, produce a ranked risk list (High/Medium/Low) with a one-sentence mitigation recommendation for each risk.

At the end, give me a Plan Review Verdict: Commit / Revise (with specific changes) / Reduce scope (with specific deferral recommendations) / Hold (with the specific issue that must be resolved first).

Expected output: A structured five-dimension plan review with specific, actionable flags and a clear commit/revise/reduce/hold verdict. The output should be specific enough that the PM can act on each flag in the review meeting without further investigation.

Learning Tip: The Plan Review Verdict is not a formality — it is one of the most consequential product decisions you make every sprint. Treat it as such. Block 45-60 minutes in your calendar before every sprint planning session specifically for the plan review, with your tech lead present for the feasibility dimension. PMs who rush this step and treat sprint planning as the first real review of the plan consistently experience mid-sprint scope changes, deadline misses, and story deferral rates above 20%. The investment in a rigorous plan review reliably produces the largest sprint predictability improvements of any agentic workflow practice.


How to Adapt Plans in Real-Time as Context Changes Using AI

Even the best-reviewed sprint plan will encounter context changes during execution. A key dependency is delayed. An engineer is unexpectedly out sick for three days. A critical customer escalation requires immediate attention. A competitor ships a feature that changes the urgency of something in the backlog. A technical discovery during implementation reveals that a story is 5x more complex than estimated. The question is not whether context will change during a sprint — it will. The question is how quickly and accurately you can assess the impact and revise the plan.

AI significantly accelerates context change assessment and re-planning. The key is having a structured re-planning prompt that captures the nature of the change, the current state of the plan, and the decision criteria the PM wants to apply, and then produces a specific set of recommended adjustments rather than generic advice.

There are four categories of context change, each requiring a slightly different re-planning approach:

Type 1: Scope Change. A story is discovered mid-sprint to be significantly larger than estimated, or new requirements emerge that expand the scope of an in-progress story. Re-planning for scope change asks: can the story be split so that the core value is delivered this sprint and the expanded scope is deferred? If not, what is the cost of completing the full scope — which other stories must be deferred to compensate? The goal is to protect the sprint goal while minimizing total deferred work.

Type 2: Capacity Change. Team capacity drops due to illness, competing demands, or underestimated overhead. Re-planning for capacity change asks: given the reduced capacity, which stories must be completed (sprint goal stories), which are deferrable without impacting the sprint goal, and which were already at risk before the capacity change? The output is a revised sprint commitment that reflects actual capacity while protecting the minimum viable sprint outcome.

Type 3: Dependency Failure. An external dependency that stories in the sprint depend on is delayed or blocked. Re-planning for dependency failure asks: which stories in the sprint are blocked by this dependency? Are there parallel tracks of work that can proceed without the dependency? Can the dependency be escalated to unblock before too much sprint capacity is lost? The output is a temporary plan adjustment that maximizes productive work while the dependency is being resolved.

Type 4: Strategy Shift. A strategic decision at the leadership level changes the priority of work mid-sprint. Re-planning for a strategy shift asks: which stories in the current sprint are still aligned with the revised strategy? Which should be stopped immediately (even if partly complete)? Which new work needs to be started, and what is its estimated cost in points? The output is a revised sprint plan with a clear accounting of scope traded out and scope traded in.

The general re-planning prompt format is: "Context has changed: [describe the specific change and its impact]. Here is my current sprint plan: [paste the plan]. Here are my re-planning constraints: [minimum acceptable sprint outcome, any non-negotiable stories, capacity available]. Recommend the specific changes to the sprint plan that best address the context change while minimizing impact on the sprint goal."
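
That format is stable enough to keep as a small template builder, sketched here under the assumption that you maintain the plan and constraints as plain text; nothing below is specific to any AI tool:

def replan_prompt(change, current_plan, constraints):
    """Assemble the general re-planning prompt from its three inputs."""
    return (
        f"Context has changed: {change}\n\n"
        f"Here is my current sprint plan:\n{current_plan}\n\n"
        f"Here are my re-planning constraints:\n{constraints}\n\n"
        "Recommend the specific changes to the sprint plan that best address "
        "the context change while minimizing impact on the sprint goal."
    )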

Hands-On Steps

  1. Identify the three most common context change types that have affected your team's sprints in the past six months. For each, write a one-paragraph "scenario card" — a description of the change type, typical magnitude, and the re-planning questions it usually raises.
  2. Build a re-planning prompt template for each of your three most common change types. Each template should include: the context change description format, the current plan format, the decision criteria format, and the expected output format. Test each template against a real past scenario.
  3. Establish a re-planning trigger protocol: "I will initiate a re-planning prompt when [condition] changes by [threshold]." Examples: "Capacity drops by more than 20%," "A story is re-estimated above 13 points mid-sprint," "A dependency is confirmed as blocked with more than 3 sprint days remaining." Write your thresholds explicitly (see the sketch after this list).
  4. Design your re-planning stakeholder communication: when a plan changes mid-sprint, who needs to know, what do they need to know, and how quickly? Write a stakeholder communication template for a mid-sprint scope change that explains what changed, why, and what the impact is on delivery. Use AI to generate draft communications from your re-planning output.
  5. After your next mid-sprint context change event, run the re-planning prompt against it and compare the AI's recommendations to the decisions you actually made. Note where the AI's recommendation was useful, where it was wrong, and what context it lacked. Use these observations to refine your re-planning prompt template.
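
For step 3, the trigger thresholds are mechanical enough to write down as code. The threshold values below are the examples from that step; the event fields are assumptions:

def should_replan(event):
    """Return True when a context change crosses a re-planning threshold."""
    if event["type"] == "capacity_drop" and event["pct_lost"] > 0.20:
        return True
    if event["type"] == "re_estimate" and event["new_points"] > 13:
        return True
    if event["type"] == "dependency_blocked" and event["sprint_days_left"] > 3:
        return True
    return False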

Prompt Examples

Prompt:

You are a sprint planning advisor helping me adapt my sprint plan to a context change that has occurred mid-sprint.

Current sprint state:
- Sprint goal: [one sentence]
- Days remaining in sprint: [X]
- Stories completed: [list completed stories with points]
- Stories in progress: [list in-progress stories with remaining estimate]
- Stories not started: [list not-started stories with points]
- Original sprint capacity: [X points]

Context change:
[Describe the specific change — e.g., "Our senior engineer is out sick for 3 days, reducing our remaining sprint capacity by approximately 15 points. Additionally, Story #47 has been re-estimated at 13 points instead of the original 5, due to a discovered integration complexity."]

My re-planning constraints:
- Sprint goal: [restate the sprint goal and indicate whether it is non-negotiable or flexible]
- Non-negotiable stories: [any stories that cannot be deferred regardless of capacity]
- Deferral preference: [describe any preference for which types of stories to defer — e.g., "prefer to defer technical stories over user-facing stories"]
- Stakeholder commitments: [any external commitments that constrain re-planning choices]

Please provide:
1. An impact assessment: given the context change, what is the updated probability of achieving the sprint goal? What is the specific capacity gap?
2. A revised sprint plan: list which stories stay in this sprint (with justification), which should be deferred (with justification), and any story splitting recommendations
3. A scope trade-off summary: what value will NOT be delivered this sprint due to this change, and what is the impact of that deferral?
4. A stakeholder communication draft: a 3-5 sentence update I can send to relevant stakeholders explaining the plan change and its impact
5. A risk flag: are there any cascade effects of this change I should be monitoring (e.g., deferred stories that will create dependencies for the next sprint)?

Expected output: A specific, actionable re-planning recommendation with a revised sprint plan, impact assessment, scope trade-off summary, stakeholder communication draft, and cascade risk flag. The output is designed to give the PM all the information needed to make and communicate a re-planning decision in under 30 minutes.

Learning Tip: The real value of AI re-planning is not the speed — it is the explicitness. When a human PM re-plans mid-sprint under pressure, the reasoning is often implicit and undocumented: stories are quietly moved, scope is silently narrowed, and stakeholders may not hear about it until the sprint review. An AI-generated re-planning output forces every change to be explicitly documented with reasoning, which creates a natural prompt to communicate proactively. Make it a habit to always send the stakeholder communication draft before the end of the day that a re-planning event occurs.


Key Takeaways

  • Agentic planning converts roadmap updates from a quarterly batch process into a continuous, evidence-triggered flow. New approved opportunities are automatically compared against the roadmap and produce delta recommendations for PM review.
  • The Roadmap Delta Recommendation has three components: new additions, sequence adjustments, and conflict/redundancy flags. The PM reviews and makes final decisions; the AI handles the comparative analysis and reasoning generation.
  • The theme-to-backlog generation pipeline converts roadmap themes into sprint-ready stories through five steps: theme elaboration, journey mapping, story generation, backlog readiness check, and sprint slot proposal. AI generates the first draft; the PM reviews for feasibility, quality, and judgment inputs.
  • The Plan Review Checklist covers five dimensions: feasibility, sequencing, dependency coverage, capacity alignment, and human judgment inputs. The verdict options are commit, revise, reduce scope, or hold.
  • AI-assisted re-planning accelerates context change response by providing a structured impact assessment, revised plan, scope trade-off summary, stakeholder communication draft, and cascade risk identification — all from a single prompt.
  • The four context change types (scope, capacity, dependency, strategy) each require slightly different re-planning approaches. Build a prompt template for each type before you need it, not after.
  • The most common failure mode in agentic planning is insufficient context in the prompts. Every planning prompt should include: the current roadmap or plan, the capacity constraints, the decision criteria, and the strategic context. Without these, AI generates generic outputs that require as much revision as a human first draft.