Hands-On: Full Discovery Cycle

Overview

This topic is a complete worked example of an AI-assisted product discovery cycle, from initial market scan through validated opportunity to a stakeholder-ready discovery brief. Everything covered in the previous five topics of this module comes together here in a single, end-to-end workflow that you can use as a template for your own discovery work.

The worked example throughout this topic is a B2B SaaS product in the project management space: a hypothetical product team at a company called "FlowWork" that offers a project management tool for professional services firms. The discovery question they are investigating: should they build AI-powered resource planning features, and if so, what specific problem should they target? This is a realistic, complex discovery challenge that requires genuine synthesis across market, customer, and competitive inputs — making it a strong instructional vehicle for demonstrating the full workflow.

This is not a simplified demonstration. Every step shown here is the same step you would run on a real product. The prompts are production-quality prompts, the outputs are realistic examples of what AI will actually produce, and the judgment calls at each stage are the real judgment calls you will face in practice. The worked example is comprehensive enough that you could adapt it directly to your own product context by replacing the domain details.

By the end of this topic, you will have walked through a complete discovery cycle and produced all the artifacts that make up a stakeholder-ready discovery brief: market context summary, customer research synthesis, opportunity landscape with scores, problem framing with hypotheses, validation plan, and discovery recommendation. You will also have a complete prompt sequence that you can reuse as a discovery workflow template.


Run a Complete Discovery Cycle — From Market Scan to Validated Opportunity

A complete discovery cycle has five phases that build on each other: (1) market scanning — understanding the competitive and market context for the opportunity space; (2) customer research synthesis — understanding what your customers actually experience and need; (3) opportunity identification — translating research insights into structured product opportunities; (4) problem framing — selecting the highest-priority opportunity and framing it precisely enough to generate testable hypotheses; and (5) validation planning — designing the experiments needed to test the most critical assumptions before committing to build.

Each phase produces a structured output that becomes an input to the next phase. This sequential dependency is important: you cannot credibly prioritize opportunities without customer research grounding them, and you cannot design good experiments without a precisely framed problem and explicit hypothesis. Skipping or compressing any phase degrades the quality of all subsequent phases.
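To make the sequential dependency concrete, here is a minimal Python sketch, with illustrative field names rather than a prescribed schema, of how each phase's output becomes a required input to the next:

```python
from dataclasses import dataclass

# Illustrative structures only: each phase's output type is a required
# field of the next phase's type, so no phase can be skipped.

@dataclass
class MarketScan:
    trend_signals: list[str]
    risk_signals: list[str]        # carried forward as explicit assumptions

@dataclass
class ResearchSynthesis:
    market_context: MarketScan     # phase 1 output grounds phase 2
    insights: list[str]

@dataclass
class Opportunity:
    evidence: ResearchSynthesis    # phase 2 output grounds phase 3
    statement: str
    scores: dict[str, int]         # e.g. impact, effort, confidence, fit

@dataclass
class Hypothesis:
    opportunity: Opportunity       # phase 3 output constrains phase 4
    framing: str
    confidence: int                # 1-5 rating

@dataclass
class ValidationPlan:
    hypothesis: Hypothesis         # phase 4 output defines what phase 5 tests
    experiments: list[str]
    go_no_go_criteria: str
```

Each constructor refuses to exist without its upstream input, which is exactly the property the prose describes: skipping a phase leaves a gap that every later phase inherits.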

The worked example starts with the market scan. The FlowWork team has decided to investigate the AI-powered resource planning space. They have collected: excerpts from two industry analyst reports on AI in professional services software, a set of competitor changelog summaries covering the past 6 months, a collection of LinkedIn posts from professional services consultants discussing their resourcing pain, and notes from three sales calls where resource planning came up as a prospect concern.

Hands-On Steps: Phase 1 — Market Scan

  1. Gather market inputs from 4–5 distinct source types (analyst reports, competitor changelogs, practitioner commentary, sales call notes, support ticket themes).
  2. Label each input block with source type, date, and relevance context (see the structuring sketch after this list).
  3. Run the market synthesis prompt to extract patterns, trends, and competitive signals.
  4. Annotate the output with your team's prior knowledge and any contradictions you notice.
  5. Identify the 2–3 most significant market signals that should shape the discovery focus.
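
A minimal sketch of steps 1-2, assuming a simple Python structure for labeled inputs (the field names and the example entry are hypothetical, not a required schema):

```python
from dataclasses import dataclass

@dataclass
class MarketInput:
    source_type: str   # e.g. "ANALYST_REPORT", "COMPETITOR_CHANGELOG"
    source: str        # e.g. "Gartner", "ResourceHub"
    date: str          # e.g. "2024-Q4"
    relevance: str     # why this input matters to the discovery question
    content: str

def to_prompt_block(inp: MarketInput) -> str:
    """Render an input in the [SOURCE_TYPE | source | date] format used below."""
    return f"[{inp.source_type} | {inp.source} | {inp.date}]\n{inp.content}"

inputs = [
    MarketInput(
        source_type="ANALYST_REPORT",
        source="Gartner",
        date="2024-Q4",
        relevance="AI demand signal in the PSA market",
        content="PSA market is growing at 12% CAGR...",
    ),
]

print("\n\n".join(to_prompt_block(i) for i in inputs))
```

Rendering every input in the same `[SOURCE_TYPE | source | date]` format keeps the synthesis prompt consistent and reusable across discovery cycles.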

Prompt Example: Market Scan Synthesis

Prompt:

You are a senior product strategist synthesizing market research for a B2B SaaS product team.

FlowWork is a project management tool used by professional services firms (consulting, accounting, legal, marketing agencies) with 50-500 employees. We are investigating whether to build AI-powered resource planning features.

Below are market research inputs collected over the past 30 days, labeled by source type:

[ANALYST_REPORT | Gartner | 2024-Q4]
Professional services automation (PSA) market is growing at 12% CAGR. AI features in PSA tools are the #1 requested capability category in 2024 buyer surveys. Specifically: automated capacity forecasting (cited by 67% of buyers), AI-driven skills matching (cited by 58%), and real-time utilization alerts (cited by 52%). Vendors adding AI features are seeing 23% higher win rates in competitive evaluations.

[COMPETITOR_CHANGELOG | ResourceHub (direct competitor) | Q3-Q4 2024]
- Oct 2024: "Smart Scheduling" feature launched — AI suggests optimal project staffing based on historical performance data and team availability
- Nov 2024: Integration with LinkedIn Skills added to enhance skills matching
- Dec 2024: "Capacity Heatmap" released — visual real-time view of team utilization across projects

[COMPETITOR_CHANGELOG | PlanRight (adjacent competitor) | Q3-Q4 2024]
- Sep 2024: Acquired StaffingAI startup for $18M
- Nov 2024: Released "AI Project Brief" — auto-generates staffing recommendations from project scope
- Dec 2024: "Bench Time Optimizer" in beta — surfaces billable opportunities for underutilized team members

[PRACTITIONER_POSTS | LinkedIn | Nov-Dec 2024]
Post 1 (Senior Consultant, Big 4 adjacent firm): "Our biggest operational headache in 2024 was matching the right consultants to the right projects. Our utilization system is manual Excel. We're leaving revenue on the table every week."
Post 2 (Operations Director, 150-person agency): "We tried implementing resource planning software twice. Both times failed during rollout — the tools were too complex for project managers to actually use. We went back to spreadsheets."
Post 3 (COO, mid-size consulting firm): "The game changer will be when AI can look at a project brief and automatically draft the optimal team composition. We spend 4-6 hours per new project just on staffing decisions."

[SALES_CALL_NOTES | Nov-Dec 2024]
Call 1 (100-person consulting firm): "We're looking at 3 tools. The one that integrates with our existing HR system and doesn't require a long training period will win. Our project managers hate learning new software."
Call 2 (250-person agency): "Our current tool requires manual input every time someone's project status changes. We have 15 PMs updating it inconsistently. The data is always wrong so no one trusts it."
Call 3 (80-person consulting firm): "We had ResourceHub for 6 months. Cancelled. The AI recommendations were good in theory but the underlying data was so messy that the AI was making suggestions based on wrong information."

Synthesize these inputs to produce:
1. Three strongest market trend signals (with source citations)
2. The competitive landscape summary — what are competitors shipping and what strategic direction does it reveal?
3. Two critical market risk signals — things in the data that suggest this opportunity space is more difficult than it appears
4. One key strategic insight that should shape how FlowWork enters this space

Expected output: The AI will identify the three trend signals (AI demand is validated by analyst and practitioner data; competitors are actively shipping; buyer selection criteria are shifting toward AI-native). The competitive summary will reveal that the space is already being contested by funded, shipping competitors. The risk signals will center on data quality dependency (the sales call notes) and adoption friction (the practitioner posts about implementation failures). The strategic insight should synthesize these into a directional recommendation: FlowWork should enter this space with a data-quality-first approach, addressing the adoption failure mode before adding AI capability.

Learning Tip: The market scan is not just background information — it directly constrains your opportunity space. The two risk signals surfaced by AI in this example (data quality dependency and adoption friction) should become explicit assumptions in your assumption register. If your product strategy does not account for them, you are repeating the failure pattern that caused competitors' customers to churn. Always read the market scan output looking for these constraint signals, and carry them forward explicitly into every subsequent phase.
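
One lightweight way to carry those constraint signals forward is an assumption register. A minimal Python sketch, with entries paraphrasing the two FlowWork risk signals and illustrative field names:

```python
# A minimal assumption-register sketch. The entries paraphrase the two
# risk signals from the market scan; field names are illustrative.

assumption_register = [
    {
        "id": "A1",
        "assumption": "Customer resource data is clean enough for AI "
                      "recommendations to be trustworthy",
        "source": "Sales call 3 (ResourceHub churn caused by messy data)",
        "raised_in": "market_scan",
        "status": "untested",
    },
    {
        "id": "A2",
        "assumption": "Project managers will adopt a new planning workflow "
                      "without a long training period",
        "source": "Practitioner post 2; sales call 1",
        "raised_in": "market_scan",
        "status": "untested",
    },
]

def untested(register: list[dict]) -> list[dict]:
    """Assumptions that must be addressed before committing to build."""
    return [a for a in register if a["status"] == "untested"]

for a in untested(assumption_register):
    print(f"{a['id']}: {a['assumption']} (source: {a['source']})")
```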


Use AI to Synthesize Research, Frame Problems, and Generate Hypotheses

With the market context established, the discovery cycle moves to customer research synthesis. For the FlowWork example, the team has conducted 8 interviews with operations directors and project managers at professional services firms and has reviewed 60 support tickets from their existing customer base related to team management and scheduling.

The synthesis phase has three sub-steps: extract the key themes and pain points from the research data, frame those themes as structured product opportunities, and generate testable hypotheses for the highest-priority opportunities. This is the core intellectual work of discovery — and it is where AI acceleration has the highest impact, because this phase is both cognitively demanding and, without AI, very slow.

Hands-On Steps: Phase 2 — Customer Research Synthesis and Problem Framing

  1. Prepare interview and support ticket data with speaker labels, anonymization, and segment tags.
  2. Run the multi-source thematic synthesis prompt to extract cross-source insights.
  3. Frame the top 3 insights as JTBD-grounded opportunity statements.
  4. Select the highest-priority opportunity and run the problem precision drilling sequence.
  5. Generate testable hypotheses from the refined problem framing (see the rendering sketch after this list).
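
A small sketch of step 5, rendering a hypothesis from its parts in the template format used in the prompt below (the example values are illustrative, drawn from the FlowWork research):

```python
def render_hypothesis(action: str, outcome: str, user: str,
                      evidence: str, indicator: str) -> str:
    """Fill the well-formed hypothesis template from its five parts."""
    return (
        f"We believe {action} will result in {outcome} for {user} "
        f"because {evidence}. We will know this is validated when {indicator}."
    )

print(render_hypothesis(
    action="surfacing a real-time capacity overview on the home screen",
    outcome="faster, more confident staffing decisions",
    user="operations directors",
    evidence="interviewees report spending 20-30 minutes daily piecing "
             "together availability information",
    indicator="at least 60% of weekly active users view the overview "
              "within the first week",
))
```

Treating the template as a function makes it harder to ship a hypothesis with a missing part, such as an outcome with no measurable indicator.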

Prompt Example: Full Discovery Synthesis Sequence

Prompt 1: Thematic Synthesis

Synthesize customer research data from two sources for FlowWork, a project management tool for professional services firms.

SOURCE 1: Interview insights summary
[8 interviews with Operations Directors and Project Managers, professional services firms, 50-500 employees, conducted December 2024]

Key verbatims:
1. "I spend 30% of my Monday morning figuring out who is available for the projects starting this week. It's chaos." — Operations Director, consulting firm
2. "We have people sitting on the bench for days while other teams are overloaded. No one has visibility into each other's capacity." — Project Manager, agency
3. "When a project comes in, my first question is always 'do we have the right skills available?' And I have no good way to answer that quickly." — COO, consulting firm
4. "I tried to build a capacity dashboard in Excel. It works okay but it's always out of date because getting PMs to update it consistently is impossible." — Operations Manager
5. "The worst is when a client asks for a staffing plan on a new project and I have to make it up because I don't actually know who's available." — Client Services Director
6. "Our problem isn't the planning — we know how to plan. Our problem is that our data is always wrong. People update their project hours inconsistently." — Project Director
7. "A tool that would just automatically pull from our calendar data and show me team availability would save me hours every week." — Operations Director
8. "We'd pay for something that tells us, based on project history, which team compositions tend to perform best on which types of work." — Director of Delivery

SOURCE 2: Support tickets (60 tickets, FlowWork existing customers, Oct-Dec 2024)
Themes from ticket analysis:
- 18 tickets: "Can't see overall team availability at a glance" — users clicking through individual profiles to check availability
- 12 tickets: "Assigned project hours don't match actual logged hours" — data accuracy issues
- 11 tickets: "Can't filter resources by skill or certification" — inability to find right-fit people quickly
- 9 tickets: "No way to see who is underutilized or overloaded" — missing utilization visibility
- 6 tickets: "Integration with HR/time tracking system doesn't work" — data pipeline failures
- 4 tickets: "Historical project performance data not usable for planning" — analytics gaps

Weight the sources: interviews provide depth and context; support tickets provide frequency and scale.

Generate:
1. Top 5 cross-source insights with evidence from both sources, frequency estimate, and severity rating
2. One JTBD statement for each insight
3. Recommended top 2 opportunities for deeper investigation, with rationale

Expected output: The AI will identify insights including: inability to see real-time team availability quickly (high frequency from tickets, confirmed in interviews), data accuracy and consistency as a root cause failure (strong signal in both sources), skills-based filtering as a distinct need from availability filtering, and utilization visibility as a management decision need. The JTBD framing will translate these from feature requests into outcome statements. The recommended opportunities will likely be (1) the data accuracy/availability visibility problem (highest frequency and severity) and (2) skills-based staffing decisions (highest strategic value per interviews).


Prompt 2: Hypothesis Generation from Top Opportunity

I am now working with the highest-priority opportunity identified from the FlowWork research synthesis:

**Opportunity:** Operations directors and project managers at professional services firms cannot make accurate, timely staffing decisions because they lack a reliable, real-time view of team capacity and skills availability, which results in suboptimal project staffing, under/over-utilization of staff, and significant manual coordination time.

Generate:
1. Three precise problem framings that are all consistent with this opportunity statement but represent different possible root causes.

2. For each precise framing, generate a well-formed testable hypothesis in this format:
"We believe [action] will result in [outcome] for [user type] because [evidence]. We will know this is validated when [specific measurable indicator]."

3. For each hypothesis, assess:
   - What is the strongest evidence supporting it from the research data?
   - What would count as refuting evidence?
   - Confidence rating (1-5) based on available evidence

4. Identify which hypothesis you recommend prioritizing for validation first, and explain the reasoning.

Expected output: Three precise framings addressing: (1) the real-time data visibility gap (operations directors can't see team availability without clicking through individual profiles), (2) the data accuracy failure (existing data is too inconsistent to trust, making any planning tool unreliable), and (3) the skills matching gap (PMs have to rely on memory to know who has what skills). Three corresponding hypotheses will be generated with confidence ratings — the data accuracy hypothesis will likely be rated lower confidence (harder to test without a working solution) while the visibility hypothesis will rate highest (can be tested with a UI prototype or fake door).

Learning Tip: When the synthesis phase produces multiple strong hypotheses (as it typically will), resist the temptation to prioritize the most exciting one over the most testable one. In the FlowWork example, the "AI-driven skills matching" hypothesis is exciting but depends on solving the data accuracy problem first — testing it without clean data would produce a false negative. The hypothesis evaluation framework from Topic 4 applies here: always check for dependencies between hypotheses, and validate the upstream dependency (data quality) before testing the downstream capability (AI recommendations).
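
Dependency ordering can be made mechanical. A minimal sketch using Python's standard-library topological sorter, with hypothesis names invented for illustration:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical hypothesis names; each maps to the set of upstream
# hypotheses that must be validated first.
dependencies = {
    "ai_skills_matching": {"data_accuracy"},  # AI recs need trustworthy data
    "capacity_visibility": set(),             # independently testable
    "data_accuracy": set(),
}

# static_order() yields each node only after all of its predecessors,
# so upstream assumptions come out before the capabilities built on them.
print(list(TopologicalSorter(dependencies).static_order()))
# e.g. ['capacity_visibility', 'data_accuracy', 'ai_skills_matching']
```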


Design a Validation Experiment with AI Assistance

With hypotheses in hand, the next phase is validation planning. For the FlowWork example, the team has selected the highest-confidence, lowest-dependency hypothesis to validate first: that operations directors will actively use and find value in a real-time team capacity dashboard if it surfaces the information they currently piece together manually.

The validation approach for this hypothesis is a combination of two methods: a fake door test to validate demand (will users seek out this capability?) and a concierge MVP to validate value (if we deliver the capability manually, do users actually change their behavior and find it valuable?). Running both in parallel gives you a fast demand signal and a slower, deeper value signal; together they provide strong evidence before any engineering investment.

Hands-On Steps: Phase 3 — Validation Experiment Design

  1. Select the top hypothesis and the appropriate experiment type(s).
  2. Run the experiment design prompt for each method.
  3. Define primary and secondary metrics with measurability and sensitivity checks.
  4. Calculate sample sizes for the A/B or fake door components (see the sizing sketch after this list).
  5. Design the feedback collection instrument for the concierge component.
  6. Document the experiment design as a shareable spec with go/no-go criteria.
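
For step 4, sample sizes should come from a formula, not a guess. A minimal sketch using the standard two-proportion z-test sizing formula; the baseline (55%) and target (65%) view-rates are assumptions drawn from the "45-65% of WAU per widget" range in the product context:

```python
import math

def sample_size_per_arm(p_base: float, p_target: float) -> int:
    """Users needed per arm at alpha = 0.05 (two-sided), power = 0.80."""
    z_alpha, z_beta = 1.96, 0.84
    p_bar = (p_base + p_target) / 2
    numerator = (
        z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
        + z_beta * math.sqrt(p_base * (1 - p_base) + p_target * (1 - p_target))
    ) ** 2
    return math.ceil(numerator / (p_base - p_target) ** 2)

n = sample_size_per_arm(0.55, 0.65)
print(f"{n} users per arm, {2 * n} total")  # 376 per arm, 752 total
```

With roughly 2,400 weekly active users in the target segment, a requirement in the hundreds per arm is reachable within a one-to-two-week test window.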

Prompt Example: Complete Validation Experiment Design

Prompt:

I need to design a complete validation experiment for the following FlowWork hypothesis:

**Hypothesis:** "We believe that operations directors and project managers will actively use a real-time team capacity overview if it is surfaced at the top of the home screen, and will find enough value to rate it as one of the 3 most useful product features, because research shows they currently spend 20-30 minutes daily piecing together availability information from multiple places. We will know this is validated when: (a) at least 60% of weekly active users view the capacity overview within the first week of exposure, and (b) at least 70% of concierge participants report that the daily digest changed a specific staffing decision they made."

**Product context:**
- Current FlowWork home screen: project list view, recent activity feed, upcoming deadlines widget
- Current weekly active users in target segment (ops directors + PMs): ~2,400
- Current home screen view-rate of existing widgets: 45-65% of WAU per widget
- Engineering capacity for fake door: 2 days
- Concierge capacity: PM + 1 analyst, 3 hours/day for 2 weeks
- Customer relationship for concierge recruitment: direct access to 200 existing customers in target segment

Design a two-part validation:

**PART A — Fake Door Test**
Design a fake door test for the real-time capacity overview feature:
1. Entry point placement and design on the home screen
2. Behind-the-door experience (coming soon page with specific learning questions)
3. Primary metric and measurement approach
4. Duration and sample size requirement
5. Go/no-go threshold for this part of the validation

**PART B — Concierge MVP**
Design a concierge MVP that delivers the value proposition manually:
1. What the concierge will deliver daily (content, format, channel)
2. Recruitment approach (how to select and invite 8-12 concierge participants)
3. Daily delivery protocol
4. Feedback collection instrument (4-6 specific questions per week)
5. Go/no-go criteria for this part of the validation

**COMBINED DECISION FRAMEWORK**
Define how the results from both parts combine into a final go/no-go decision on building the feature.

Expected output: A fully specified two-part experiment design. Part A will specify a "Team Capacity" card on the home screen with a "See your team's availability" CTA leading to a "coming soon" page with three embedded questions (what would you use this for? how often would you check it? what decisions would it change?). Part B will specify a daily 8am Slack/email digest manually assembled from calendar data, showing each team member's project allocations and free hours for the current and next 3 days, with a 3-question weekly check-in. The combined decision framework will specify: if the fake door view rate meets the 60% threshold and concierge value confirmation meets the 70% threshold (the hypothesis's own validation criteria), proceed to build; if either fails, investigate the failure mode before proceeding.
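
The combined decision framework is simple enough to encode directly, which forces the thresholds to be explicit before any data arrives. A minimal sketch, with the thresholds taken from the hypothesis and illustrative function and message names:

```python
def combined_decision(fake_door_view_rate: float,
                      concierge_value_rate: float) -> str:
    demand_ok = fake_door_view_rate >= 0.60   # Part A: demand signal
    value_ok = concierge_value_rate >= 0.70   # Part B: value signal
    if demand_ok and value_ok:
        return "GO: proceed to build"
    if demand_ok or value_ok:
        return "INVESTIGATE: one signal failed; diagnose the failure mode"
    return "NO-GO: neither demand nor value validated"

print(combined_decision(0.63, 0.74))  # GO: proceed to build
```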

Learning Tip: Always design the concierge MVP to be slightly worse than you expect the real product to be — not in terms of the core value, but in terms of convenience and polish. If customers find the manually-assembled concierge valuable despite its rough edges (arriving slightly late, not perfectly formatted, requiring them to reply to collect it), you have strong evidence that the core value proposition is real. If they find it valuable only when it is delivered at exactly the right time with perfect formatting, you have learned something important about the execution requirements for the automated product. The gap between "works despite rough edges" and "only works with perfection" is a crucial product signal.


Produce a Discovery Brief Ready for Stakeholder Review

The final phase of the discovery cycle is producing a discovery brief — the document that synthesizes everything learned into a structured, evidence-backed recommendation that stakeholders can review, challenge, and use to make investment decisions. A well-constructed discovery brief is not a pitch deck. It is a decision document: it presents the evidence, the reasoning, and the recommendation transparently enough that a skeptical, intelligent stakeholder can understand the basis for the recommendation and make a confident judgment on whether to proceed.

The discovery brief template covers seven sections: executive summary (the recommendation in one page), market and competitive context (why this space matters now), customer research synthesis (what customers experience and need), opportunity assessment (the scored and ranked opportunity landscape), problem framing and hypotheses (the specific problem and validation hypotheses), validation plan with go/no-go criteria (what we will do next and how we will decide), and the recommendation (the proposed action and the decision required from stakeholders). The AI-generation workflow for each section draws on all the analysis completed in the previous phases — this is where the modular prompt sequence pays off.

Hands-On Steps: Phase 4 — Discovery Brief Production

  1. Compile all phase outputs: market scan summary, research synthesis, opportunity register with scores, hypothesis register, and experiment designs.
  2. Run the discovery brief template prompt for each section, feeding it the relevant phase outputs as context.
  3. Review each AI-generated section against the original analysis for accuracy and completeness.
  4. Add your own strategic interpretation to the executive summary and recommendation sections — these must reflect your judgment, not just AI synthesis.
  5. Run the stakeholder review preparation prompt to anticipate and prepare for likely questions and objections.
  6. Distribute the brief 48 hours before the stakeholder review meeting with a cover note explaining the decision required.

Prompt Example: Complete Discovery Brief Generation

Prompt: Discovery Brief — Full Generation Sequence

Generate a complete discovery brief for the FlowWork resource planning opportunity. Use the following inputs from our completed discovery cycle:

**Market Context Summary:**
[PASTE YOUR MARKET SCAN OUTPUT]

**Customer Research Synthesis:**
[PASTE YOUR RESEARCH SYNTHESIS OUTPUT]

**Opportunity Assessment:**
[PASTE YOUR SCORED OPPORTUNITY REGISTER]

**Selected Problem Framing:**
[PASTE YOUR REFINED PROBLEM STATEMENT AND TOP HYPOTHESES]

**Validation Plan:**
[PASTE YOUR EXPERIMENT DESIGN OUTPUTS]

Generate each section of the discovery brief using the following template:

---

## Executive Summary (max 300 words)
- The opportunity in one sentence
- The evidence basis in two sentences (what research supports this?)
- The strategic fit in one sentence (why is this right for FlowWork now?)
- The recommended action in one sentence
- The decision required from stakeholders in one sentence

## Market and Competitive Context
- The market trend creating urgency (1 paragraph)
- The competitive landscape summary — what are competitors doing and what does it mean for FlowWork? (1 paragraph)
- The window of opportunity — how long does FlowWork have before this becomes harder? (2-3 sentences)

## Customer Research Synthesis
- The core customer problem in 2-3 sentences
- The top 3 research findings with evidence citations
- The most compelling customer verbatim (direct quote that captures the problem viscerally)
- Customer segments affected and their relative priority

## Opportunity Assessment
- The opportunity statement (solution-neutral, customer-centered)
- Opportunity scores: Impact [X/5], Effort [X/5], Confidence [X/5], Strategic Fit [X/5]
- Composite score and ranking rationale
- Top risk factor and mitigation

## Problem Framing and Hypotheses
- The refined problem statement
- Top 2 testable hypotheses with confidence ratings
- Key assumption being tested (the most critical one to validate before building)

## Validation Plan
- Experiment 1: [Type, description, primary metric, go/no-go threshold, duration]
- Experiment 2: [Type, description, primary metric, go/no-go threshold, duration]
- Resource requirements: [Team, time, budget]
- Decision date: [When will we review results and make a proceed/kill decision?]

## Recommendation
- The recommended next action
- What success looks like in 90 days
- What failure looks like and what we would do differently
- The decision we need from stakeholders at this review

---

After generating the brief, produce a separate "Stakeholder Review Preparation" section:
- Top 3 questions stakeholders are most likely to ask, with prepared answers
- Top 2 objections to anticipate, with evidence-backed responses
- One alternative point of view that a reasonable stakeholder might hold, and how you would address it

Expected output: A complete, seven-section discovery brief with executive summary, all substantive sections populated from the research, and a stakeholder preparation section with likely questions and objections. The brief will be approximately 1,200–1,500 words in total — long enough to be substantive, short enough to be read before a meeting.
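
The Opportunity Assessment section asks for a composite score without prescribing a formula. A minimal weighted-average sketch, with illustrative weights and effort inverted so that lower effort raises the score:

```python
def composite_score(impact: int, effort: int, confidence: int,
                    strategic_fit: int) -> float:
    """All inputs on the 1-5 scale used in the Opportunity Assessment section."""
    weights = {"impact": 0.35, "effort": 0.20,
               "confidence": 0.25, "strategic_fit": 0.20}
    inverted_effort = 6 - effort  # 1 (high effort) scores 5; 5 scores 1
    return round(
        weights["impact"] * impact
        + weights["effort"] * inverted_effort
        + weights["confidence"] * confidence
        + weights["strategic_fit"] * strategic_fit,
        2,
    )

print(composite_score(impact=4, effort=3, confidence=4, strategic_fit=5))  # 4.0
```

Whatever weights your team chooses, writing them down makes the ranking rationale inspectable in the brief instead of implicit in someone's head.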

Full Discovery Brief Review Checklist

Before distributing the discovery brief for stakeholder review, run through this checklist:

Evidence quality:
- [ ] Every significant claim is supported by at least one named source
- [ ] The research sample is described (N, dates, participant criteria)
- [ ] High-confidence and directional claims are distinguished
- [ ] No claims are presented with more certainty than the evidence supports

Opportunity framing:
- [ ] The opportunity statement is solution-neutral
- [ ] The customer, situation, obstacle, and consequence are all specified
- [ ] The opportunity is linked to a specific product outcome or OKR

Hypothesis quality:
- [ ] Each hypothesis is falsifiable (there is possible evidence that would refute it)
- [ ] Each hypothesis specifies what user, what action, what outcome, and what measurement
- [ ] Dependencies between hypotheses are identified

Validation plan:
- [ ] Each experiment has a specific go/no-go threshold (not just "we'll see what the data shows")
- [ ] Sample sizes and durations are calculated, not estimated
- [ ] Resource requirements are realistic given current team capacity
- [ ] A decision date is specified

Recommendation:
- [ ] The recommendation is a clear action, not a recommendation to "investigate further"
- [ ] The recommendation is contingent on validation results, not pre-committed
- [ ] Success and failure criteria are specified for the 90-day horizon

Learning Tip: The most important part of the discovery brief is the executive summary, and it is also the part most likely to be generated poorly by AI without your input. AI will synthesize the content accurately, but it will not know which element of your analysis is the real insight that changes how stakeholders think about the opportunity. Before distributing the brief, rewrite the executive summary in your own words, making sure it leads with the most non-obvious, most important insight from the discovery cycle — the finding that you would not have had without doing the work. That insight is what makes the brief worth reading.


Key Takeaways

  • A complete discovery cycle has five sequential phases: market scan, customer research synthesis, opportunity identification, problem framing with hypothesis generation, and validation planning — each phase produces structured inputs to the next.
  • The market scan phase should be read not just for opportunities but for constraint signals — competitive patterns and customer failure modes that should become explicit assumptions in subsequent analysis.
  • The prompt sequence for a full discovery cycle is modular: each phase has 1–3 dedicated prompts that can be reused across different discovery contexts by changing the domain details.
  • Hypotheses should be tested in dependency order — validate the upstream assumption (data quality, basic adoption) before testing the downstream capability (AI recommendations, advanced analytics).
  • Two-method validation (fake door for demand + concierge for value) provides both a fast signal and a deep signal, giving stronger confidence than either method alone.
  • The concierge MVP should be slightly rough by design — if customers find value despite rough edges, the core value proposition is validated; if they only value it when it is polished, that is a product execution signal.
  • A discovery brief is a decision document, not a pitch deck. It must be evidence-transparent, recommendation-specific, and calibrated — distinguishing high-confidence findings from directional hypotheses.
  • The executive summary of a discovery brief should be rewritten by the PM in their own words, leading with the most non-obvious insight from the discovery cycle — this is the signal that the analysis was done well.
  • The full discovery cycle covered in this module, when supported by AI synthesis tools, can be compressed from 4–6 weeks of traditional research work to 1–2 weeks of focused AI-assisted work, enabling product teams to run more discovery cycles, test more hypotheses, and make better-informed product bets.