
Problem Framing & Hypothesis Generation

Overview

Problem framing is one of the most consequential and least systematically practiced skills in product management. The way a problem is framed determines what solutions get considered, what success looks like, what gets built, and — ultimately — whether the product creates the value it was intended to create. A poorly framed problem produces a well-executed solution to the wrong thing. A well-framed problem creates a shared understanding that aligns discovery, design, engineering, and business stakeholders from the beginning.

Most product teams have experienced the consequences of bad problem framing without labeling it that way: features built to spec that customers don't use, A/B tests that lift a metric but make the overall experience worse, and product bets that were technically successful but commercially disappointing. In nearly every case, the root cause was that the team solved the stated problem rather than the real problem, because no one invested in rigorous problem framing before the solution discussion began.

AI is a powerful partner for problem framing because it can rapidly generate and evaluate multiple framings from the same raw material, challenge the assumptions embedded in your current framing, and help you move from vague intuitions about customer problems to precisely structured, testable hypotheses. The goal is not to use AI to do the thinking — it is to use AI to accelerate the generative and evaluative phases of problem framing, so you can apply your own judgment to a richer set of options than you would have time to develop manually.

This topic covers four distinct activities in the problem framing and hypothesis generation workflow: refining vague problem statements into testable hypotheses, generating "How Might We" statements and reframes, mapping the problem space using Jobs-to-Be-Done (JTBD) and pain analysis tools, and evaluating and prioritizing hypotheses for validation. Each activity requires specific prompt techniques and produces specific deliverable formats that feed directly into your validation and experiment design work.


How to Use AI to Refine Vague Problem Statements into Testable Hypotheses

Vague problem statements are the enemy of effective product work. "Users are frustrated" is not a problem statement — it is a symptom observation. "Our onboarding isn't working" is not a problem statement — it is a performance judgment. Even more specific-sounding statements like "users find the dashboard confusing" are often too vague to drive aligned action: confusing for whom? In which context? What are they trying to do? What do they do when they are confused? A testable hypothesis requires enough precision that two people can independently agree on whether evidence supports or refutes it.

The journey from "users are frustrated" to a testable hypothesis is a sequence of precision-adding steps: identify the specific user type, identify the specific context or scenario, identify the specific behavior or decision that is failing, identify what information or capability they lack that causes the failure, and then frame the hypothesis with explicit measurability. AI can guide you through this sequence efficiently, but it needs the raw material you have: research observations, usage data, support ticket patterns, and your own domain knowledge.

The standard hypothesis structure for product work is: "We believe [specific action or change] will result in [specific measurable outcome] for [specific user type] because [evidence or reasoning]. We will know this is true when [specific measurable indicator changes]." This structure forces precision on five fronts: what you are doing, what you expect to happen, who you expect it to affect, why you believe it will work, and how you will measure it. AI can help you draft hypotheses in this structure from looser evidence, and then evaluate whether each component is sufficiently precise and falsifiable.
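
If your team keeps its hypothesis register in code or a spreadsheet export, the five-part structure maps naturally onto a small record type. The following Python sketch is illustrative only; the field names and example content are hypothetical, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One hypothesis register entry, mirroring the five-part template above."""
    action: str             # [specific action or change]
    expected_outcome: str   # [specific measurable outcome]
    user_type: str          # [specific user type]
    evidence: str           # [evidence or reasoning]
    success_indicator: str  # [specific measurable indicator changes]

    def statement(self) -> str:
        return (
            f"We believe {self.action} will result in {self.expected_outcome} "
            f"for {self.user_type} because {self.evidence}. "
            f"We will know this is true when {self.success_indicator}."
        )

# Hypothetical example content:
h = Hypothesis(
    action="adding an inline preview to the report builder",
    expected_outcome="a 20% drop in abandoned report configurations",
    user_type="first-time report builders",
    evidence="support tickets show users cannot predict output before running a report",
    success_indicator="configuration abandonment falls below 30% within 4 weeks",
)
print(h.statement())
```

A useful side effect of this shape: if any field is hard to fill in, that is exactly the component of the hypothesis that is still too vague.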

The gap between stated problems and underlying causes is where the most important precision work happens. "Users are frustrated with the reporting feature" is typically a reported symptom. The underlying cause might be any of: they cannot find the reporting feature, they find it and cannot understand the configuration options, they configure the report but cannot interpret the output, or they interpret the output but it does not answer the question they actually had. Each of these is a different problem requiring a different solution, and they all present as "frustrated with reporting." AI can help you generate the most likely root cause candidates from your evidence base and frame each as a distinct testable hypothesis.

Hands-On Steps

  1. Start with the vague or partially formed problem statement you are working with. Write it down exactly as it was stated — often from a stakeholder conversation, a support escalation, or a metric anomaly.
  2. Gather all the evidence you have about this problem: relevant research observations, support ticket patterns, usage data, customer quotes.
  3. Run the problem precision prompt to have AI generate the range of more precise problem framings that are consistent with your evidence.
  4. For each precise problem framing, run the hypothesis structuring prompt to convert it into a well-formed testable hypothesis.
  5. Review each hypothesis for falsifiability: can you imagine evidence that would prove it wrong? If not, the hypothesis is not testable and needs to be refined.
  6. Check each hypothesis for measurability: does the "we will know this is true when" clause specify something you can actually observe or measure with available tools?
  7. Select the 2–3 hypotheses that best represent the range of possible root causes and document them in your hypothesis register.

Prompt Examples

Prompt: Problem Precision Drilling

I have a vague problem statement that I need to make precise and actionable. Here is the statement:

**Vague Problem:** "[PASTE YOUR VAGUE PROBLEM STATEMENT]"

**Available Evidence:**
[PASTE RELEVANT RESEARCH OBSERVATIONS, SUPPORT TICKET PATTERNS, USAGE DATA, AND QUOTES]

Your task:
1. Identify the key vagueness or ambiguity dimensions in this problem statement: Who specifically? In what context specifically? At what moment specifically? Why specifically?

2. Generate 4-6 more precise problem framings that are all consistent with the available evidence. Each framing should name a specific user type, a specific scenario or context, a specific failing behavior or decision, and a specific underlying cause.

3. For each precise framing, assess: How well is this supported by the evidence I have provided? (Strong / Moderate / Weak support)

4. Identify which precise framing you believe is most likely to be the root cause based on the evidence pattern, and explain your reasoning.

Format: Numbered list of framings, each with a one-paragraph description and evidence support rating.

Expected output: 4–6 precise problem framings with evidence support ratings and a recommended most-likely root cause. This output transforms a vague complaint into a set of investigable hypotheses.


Prompt: Testable Hypothesis Structuring

Convert the following precise problem framing into a well-formed testable product hypothesis.

**Precise Problem Framing:**
[PASTE ONE OF THE PRECISE FRAMINGS FROM THE PREVIOUS PROMPT]

**Available Evidence:**
[PASTE RELEVANT EVIDENCE]

Structure the hypothesis using this format:

**Hypothesis Statement:**
"We believe [specific product action or change] will result in [specific measurable outcome] for [specific user type] because [evidence or reasoning that supports this belief]."

**Falsifiability Check:**
- What specific evidence would SUPPORT this hypothesis?
- What specific evidence would REFUTE this hypothesis?
- Is there any result that would be ambiguous — neither clearly supporting nor refuting it? If so, describe it and propose how to handle it.

**Measurability Check:**
"We will know this hypothesis is validated when [specific metric or observable behavior changes in this specific way, measurable using these specific tools or methods]."

**Confidence Rating:**
On a scale of 1-5, how confident are you in this hypothesis based solely on the available evidence? Explain your rating.

Generate the hypothesis for each of the top 3 precise problem framings I provided.

Expected output: Three fully structured testable hypotheses with falsifiability checks, measurability statements, and confidence ratings. These are ready to be entered into your hypothesis register and prioritized for validation.

Learning Tip: Run a "hypothesis timeline" check on your top candidates. For each hypothesis, ask: "How long would it realistically take to generate enough evidence to validate or refute this?" Hypotheses that take 6+ months to test are not hypotheses — they are bets. For long-cycle hypotheses, look for a leading indicator: a shorter-term observable signal that would give you confidence in the direction of the hypothesis before you have the full outcome data. AI can help you identify leading indicators for each hypothesis when you explicitly ask for them.


Generating "How Might We" Statements and Problem Reframes with AI

"How Might We" (HMW) statements are a widely used design thinking tool for reframing problems as opportunity spaces. A good HMW statement is broad enough to invite multiple possible solutions but narrow enough to be productively constrained. The challenge is that most practitioners generate HMW statements too quickly and too narrowly — they essentially restate the problem as a question without genuinely exploring the reframe space. AI can dramatically expand your HMW generation by applying systematic reframing lenses that human brainstorming sessions rarely cover.

The most valuable reframes are often the ones that challenge the assumptions baked into your current problem statement. Every problem statement contains implicit assumptions about the right level of analysis, the right customer, the right constraint, and the right scope. Surfacing these assumptions is the first step toward generating genuinely different HMW framings. AI is particularly good at this because it can apply a systematic set of reframing lenses — inversion, analogical reasoning, constraint removal, scope change, customer change — without the cognitive fatigue that makes human reframing sessions shallow after the first 20 minutes.

The constraint-first then expansive approach to HMW generation produces a better range of options than starting with the most expansive framings. If you start expansive ("How might we reinvent the entire onboarding experience?"), you generate solutions that are exciting but often impossible given real constraints. If you start constraint-aware ("How might we improve activation for enterprise users within the constraints of our existing authentication infrastructure and a 2-sprint delivery budget?"), you generate solutions that are more immediately actionable but may miss important innovation opportunities. The best approach is to run both and then select from the full range.

Identifying the framing assumptions in your current problem statement is an analytical skill that AI can teach you through repeated application. When you instruct AI to surface the assumptions embedded in a problem framing, it identifies things like: "This framing assumes the solution must live inside the product — but what if it could be partly in the sales or onboarding process?", "This framing assumes the user should do the work — but what if the product could do it for them?", and "This framing assumes the problem occurs during a specific workflow — but the data might show it actually begins before users even log in." These assumption challenges generate the most creative and commercially valuable reframes.

Hands-On Steps

  1. Start with your top 2–3 refined hypothesis statements from the previous section.
  2. Run the assumption surfacing prompt to identify the implicit assumptions in each problem framing.
  3. For each assumption, generate a reframed problem that removes or inverts that assumption.
  4. Run the systematic HMW generation prompt to produce a full set of HMW statements using multiple reframing lenses.
  5. Sort the HMW statements into three categories: Within current constraints (actionable now), Stretch (requires some expansion of current approach), and Blue sky (requires major assumptions to be violated).
  6. Select 1–2 HMW statements from each category to bring into your solution ideation session, ensuring you explore across the range rather than defaulting to the most comfortable framings (see the sketch after this list).
  7. Document which assumptions you chose to challenge and which you chose to accept as constraints, and explain the reasoning — this becomes part of your discovery documentation.
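
For teams that track HMW statements in a lightweight tool, the sorting and selection in steps 5 and 6 can be captured as a small data structure. This Python sketch is a minimal illustration; the category labels come from step 5, while the type names and selection helper are hypothetical:

```python
from collections import defaultdict
from dataclasses import dataclass
from enum import Enum

class HMWCategory(Enum):
    WITHIN_CONSTRAINTS = "Within current constraints"
    STRETCH = "Stretch"
    BLUE_SKY = "Blue sky"

@dataclass
class HMWStatement:
    text: str
    lens: str              # which reframing lens produced it
    category: HMWCategory  # assigned during the sorting step (step 5)

def select_for_ideation(statements: list[HMWStatement], per_category: int = 2) -> dict:
    """Group statements by category and take up to `per_category` from each,
    so the ideation set spans the full range (step 6). In practice the picks
    within each category are a judgment call, not a simple truncation."""
    grouped: dict[HMWCategory, list[HMWStatement]] = defaultdict(list)
    for s in statements:
        grouped[s.category].append(s)
    return {cat: grouped[cat][:per_category] for cat in HMWCategory}
```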

Prompt Examples

Prompt: Assumption Surfacing

Below is a problem framing I am working with. I want to identify the implicit assumptions embedded in this framing before generating "How Might We" statements.

**Problem Framing:**
[PASTE YOUR PROBLEM STATEMENT OR REFINED HYPOTHESIS]

Identify every assumption embedded in this framing. Apply these lenses:

1. **Who assumption:** Who is assumed to be the user or actor? What if it were a different person, role, or stakeholder?
2. **Where assumption:** Where in the workflow or journey is the problem assumed to occur? What if it occurs at a different point?
3. **What assumption:** What is assumed to be the core problem? What alternative explanations have been ruled out?
4. **Why assumption:** What causes are assumed? What alternative causes are not being considered?
5. **How assumption:** What solution approach is assumed by the framing? Is there an implicit solution already embedded in how the problem is stated?
6. **Constraint assumption:** What constraints are assumed to be fixed that might actually be changeable?
7. **Scope assumption:** What is assumed to be out of scope that might actually be relevant?

For each assumption you identify, state: the assumption, why it is embedded in the framing, and what an alternative version of the framing looks like if that assumption is challenged.

Expected output: A structured assumption audit of your problem framing, with each assumption identified, explained, and paired with an alternative framing. This is the most important upstream analysis for generating genuinely creative HMW statements.


Prompt: Systematic HMW Generation

Using the problem framing and the assumption analysis, generate a comprehensive set of "How Might We" statements.

**Problem Framing:** [PASTE YOUR PROBLEM STATEMENT]
**Assumption Analysis:** [PASTE THE OUTPUT FROM THE ASSUMPTION SURFACING PROMPT]

Generate 15-20 HMW statements covering the following reframing lenses. Label each with the lens used:

- **Narrow the challenge:** Focus on one specific moment, user, or failure mode
- **Broaden the challenge:** Zoom out to the larger job or context
- **Invert the challenge:** What if the opposite were true? How might we [make the problem worse deliberately to learn from it]?
- **Challenge the constraint:** Remove one assumed constraint and reframe
- **Change the actor:** What if a different person, tool, or system did the work?
- **Change the journey stage:** What if we intervened earlier (prevent) or later (recover)?
- **Analogous domain:** What would this HMW look like if the problem were in a completely different context (healthcare, aviation, retail)?
- **Business model lens:** How might we make the solution a revenue-generating capability rather than a cost?
- **Emotional lens:** How might we make the customer feel [specific emotion] instead of [current negative emotion]?

After generating all HMW statements:
- Mark the 3 you consider most likely to lead to conventional but validated solutions (safe bets)
- Mark the 3 you consider most likely to lead to genuinely differentiated solutions (bold bets)
- Mark the 3 that challenge assumptions most aggressively (assumption challengers)

Expected output: 15–20 HMW statements organized by reframing lens, with the safe bets, bold bets, and assumption challengers identified. This provides a rich ideation starting set that covers the full creative range from incremental to transformative.

Learning Tip: When you bring HMW statements into a team ideation session, resist the temptation to filter the list down to the "most realistic" statements before the session begins. The "assumption challenger" HMWs often produce the most creative ideas even when the ideas themselves are not immediately feasible — they shift the team's thinking in ways that improve even the conventional ideas. Present the full range, label the categories honestly, and let the team engage with the full creative space before applying feasibility constraints.


Using AI to Map Problem Spaces — Jobs-to-Be-Done, Pain Severity, and Frequency

Problem space mapping is the analytical activity that transforms a list of individual customer problems into a structured picture of where the most important opportunities lie and how they relate to each other. Without a map, product teams often tackle problems in isolation — fixing the onboarding flow here, improving the export function there — without a coherent view of which problems are most interconnected, which are root causes versus symptoms, or which represent the highest-leverage intervention points.

The JTBD mapping approach, combined with a pain frequency-severity matrix, gives you a two-dimensional view of the problem space: you understand what outcomes customers are trying to achieve (jobs) and you understand which obstacles are most critical to remove (high frequency + high severity problems). The intersection of these two views — which jobs are most important AND which associated problems are most painful — is where your prioritized problem space lives.

AI can help you build JTBD maps from customer research data efficiently, but the quality of the map depends on the richness of the research data you feed it. A JTBD map built from 8 well-conducted interviews will be more insightful than one built from 50 shallow survey responses, because the interview data contains the contextual richness that reveals the structure of the job — its stages, its dependencies, its success conditions, and its failure modes.

The pain matrix — plotting problems by frequency (how often they occur) and severity (how much they affect the customer when they do) — is a visualization tool that makes prioritization conversations concrete. Problems in the high-frequency/high-severity quadrant are your non-negotiables. Problems in the low-frequency/high-severity quadrant are important for specific segments but not universally. Problems in the high-frequency/low-severity quadrant are good candidates for quick-win improvements. Problems in the low-frequency/low-severity quadrant should be deprioritized unless they disproportionately affect a critical user segment.
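
The quadrant logic lends itself to a tiny decision function. Here is a minimal Python sketch, assuming the three-level rating labels used in the prompts later in this section; collapsing three levels into a 2x2 is a simplification, and borderline ratings still deserve human judgment:

```python
def pain_quadrant(frequency: str, severity: str) -> str:
    """Collapse the 3-level ratings into the 2x2 quadrants described above.
    Treating 'medium' frequency as not-high is a simplifying assumption."""
    high_freq = frequency == "high"
    high_sev = severity in ("critical", "significant")
    if high_freq and high_sev:
        return "must-fix"           # non-negotiable
    if high_sev:
        return "segment-specific"   # matters for specific segments
    if high_freq:
        return "quick-win"          # good candidate for fast improvement
    return "deprioritize"           # unless a critical segment is hit hard

print(pain_quadrant("high", "critical"))  # must-fix
print(pain_quadrant("low", "critical"))   # segment-specific
```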

Hands-On Steps

  1. Gather your qualitative research outputs: thematic analysis, pain point register, JTBD statements generated in earlier sessions.
  2. Run the JTBD hierarchy mapping prompt to organize individual jobs into a hierarchical structure: big jobs (life goals), main jobs (the primary use case), and micro jobs (the specific tasks within the main job).
  3. For each main job, run the job stages mapping prompt to break the job into chronological stages and identify what the customer is doing, thinking, and feeling at each stage.
  4. Run the pain matrix placement prompt to position each identified pain point on the frequency/severity grid.
  5. Overlay the JTBD map and pain matrix: for each job stage, identify which pain points fall in the high-frequency/high-severity quadrant.
  6. Identify "pain clusters" — job stages with multiple high-priority pain points — these are the highest-leverage intervention points.
  7. Document the complete problem space map as a single reference document that can anchor all subsequent opportunity framing and hypothesis generation.

Prompt Examples

Prompt: JTBD Hierarchy Mapping

Using the customer research data and JTBD statements from earlier analysis, help me build a JTBD hierarchy for [user type].

**Research inputs:**
[PASTE JTBD STATEMENTS AND RELEVANT RESEARCH OBSERVATIONS]

Build the hierarchy as follows:

**Level 1 — Big Job (Life/Career Goal):**
What is the ultimate outcome this user is trying to achieve in their professional life or career? (1-2 big jobs)

**Level 2 — Main Jobs (Primary Use Case Goals):**
What are the core functional jobs this user needs to complete to progress toward their big job? (3-6 main jobs)

**Level 3 — Micro Jobs (Specific Task-Level Activities):**
For each main job, what are the specific task-level activities the user must perform? (4-8 micro jobs per main job)

For each job at every level, include:
- Job statement: "When [situation], I want to [motivation], so I can [outcome]"
- Current satisfaction level: How well are existing solutions meeting this job? (Underserved / Adequately served / Overserved)
- Research evidence: Which specific research data points support the identification of this job?

Expected output: A three-level JTBD hierarchy covering big jobs, main jobs, and micro jobs, with job statements, current satisfaction assessments, and research evidence for each. This is the foundational structure for problem space mapping.
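
If you want to keep the resulting hierarchy in a machine-readable form alongside the reference document, a recursive record type is enough. This Python sketch is an illustration under stated assumptions (the field names and traversal helper are hypothetical, not a prescribed format):

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    """A node at any level of the hierarchy: big job, main job, or micro job."""
    statement: str     # "When [situation], I want to [motivation], so I can [outcome]"
    satisfaction: str  # "underserved" | "adequately served" | "overserved"
    evidence: list[str] = field(default_factory=list)    # supporting research data points
    children: list["Job"] = field(default_factory=list)  # jobs one level down

def underserved(job: Job) -> list[Job]:
    """Walk the hierarchy and collect every underserved job at any level --
    the problem spaces most likely to yield high-impact opportunities."""
    hits = [job] if job.satisfaction == "underserved" else []
    for child in job.children:
        hits.extend(underserved(child))
    return hits
```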


Prompt: Pain Matrix Placement

Using the following pain points from my research, place each on a frequency-severity matrix.

**Pain Points:**
[PASTE YOUR PAIN POINT LIST FROM THEMATIC ANALYSIS]

**Frequency Scale:**
- High: Occurs multiple times per week for most users in this segment
- Medium: Occurs weekly or multiple times per month
- Low: Occurs monthly or less, or only for a minority of users

**Severity Scale:**
- Critical: Causes task failure, significant time loss (>30 min), or active churn/escalation risk
- Significant: Causes meaningful workaround behavior, time loss (5-30 min), or visible frustration
- Moderate: Causes minor inconvenience or slight inefficiency (< 5 min impact)

For each pain point:
1. Assign a frequency and severity rating with a brief rationale
2. Cite the research evidence that supports your rating
3. Note if you have low confidence in the rating due to limited evidence

Then produce a 2x2 matrix (text format) with all pain points placed, and identify:
- Top 3 "must-fix" pain points (High Frequency + Critical/Significant Severity)
- Top 3 "high-value segment specific" pain points (Low Frequency + Critical Severity)
- Top 3 "quick win" candidates (High Frequency + Moderate Severity)

Expected output: A complete pain matrix with all pain points placed, evidence citations, confidence flags, and prioritized lists across the three action categories.

Learning Tip: When building a JTBD hierarchy with AI, pay special attention to the "underserved" main jobs — the ones where AI assesses that existing solutions (including your own product) are not adequately meeting the job. These underserved main jobs are the problem spaces most likely to yield high-impact opportunities. Cross-reference them with your opportunity scoring from the previous topic: if your highest-scored opportunities are all addressing adequately served jobs, you may be optimizing in a space where competition is fierce rather than innovating in a space where customers are genuinely underserved.


How to Evaluate and Prioritize Hypotheses for Validation with AI Assistance

At this point in the discovery process, you typically have more hypotheses than you have time to validate. The challenge is that not all hypotheses are equally worth validating. Some are high-stakes and require validation before you can proceed; others are low-stakes and can be assumed to be true for now. Some can be validated cheaply and quickly; others require expensive and time-consuming experiments. Some are critical path for your current product decision; others are interesting but not blocking. Hypothesis prioritization is the activity that allocates your limited validation resources efficiently.

The evaluation rubric for hypothesis prioritization covers four dimensions: testability (can this be tested at all, and within realistic constraints?), falsifiability (is there a possible result that would refute it?), time-to-validate (how long would it take to generate sufficient evidence?), and cost-to-test (how much resource investment does validation require?). These four dimensions are all relevant, but they are not equally weighted in all situations. For a hypothesis underlying a major roadmap investment, you would accept high cost and long time-to-validate. For a hypothesis underlying a tactical UX change, you want fast and cheap.
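
One way to make the weighting concrete is a simple weighted sum over the four rubric scores. The weights in this Python sketch are assumptions for illustration; neither the rubric nor the prompts below prescribe a specific formula:

```python
# Illustrative weights: for a strategic bet, falsifiability dominates and you
# tolerate slow, costly validation; for tactical work, all four weigh equally.
WEIGHTS = {
    "strategic": {"testability": 0.30, "falsifiability": 0.40,
                  "time_to_validate": 0.15, "cost_to_test": 0.15},
    "tactical":  {"testability": 0.25, "falsifiability": 0.25,
                  "time_to_validate": 0.25, "cost_to_test": 0.25},
}

def composite_score(scores: dict[str, int], kind: str) -> float:
    """Weighted sum of the four 1-5 rubric scores defined in the prompt below."""
    return round(sum(WEIGHTS[kind][dim] * scores[dim] for dim in WEIGHTS[kind]), 2)

scores = {"testability": 4, "falsifiability": 5,
          "time_to_validate": 3, "cost_to_test": 4}
print(composite_score(scores, "strategic"))  # 4.25
if scores["falsifiability"] <= 2:
    print("Flag: not falsifiable as stated -- restructure before validating")
```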

The strategic-versus-tactical framing is important for hypothesis prioritization. Strategic hypotheses — about market size, customer segment viability, fundamental value proposition, or major capability bets — require more rigorous and expensive validation approaches, but they are also the most important ones to get right. Tactical hypotheses — about specific UI choices, workflow optimizations, or feature details — can typically be validated faster and cheaper, through A/B tests, usability sessions, or quick prototype studies. A common mistake is applying expensive strategic validation methods to tactical hypotheses (wasting time and money) or applying quick tactical methods to strategic hypotheses (generating insufficient evidence for major decisions).

The hypothesis stack rank that AI generates should be reviewed with your discovery team before finalizing. AI can apply the rubric consistently, but it cannot know which hypotheses are blocking your most critical product decisions right now, which ones are politically sensitive, or which ones have dependencies that make them impractical to test in isolation. Your domain knowledge and organizational context must be layered over the AI-generated ranking to produce a final validation roadmap.

Hands-On Steps

  1. Compile your full hypothesis register — all the testable hypotheses you have developed through the discovery process so far.
  2. For each hypothesis, note: the product decision or investment it is blocking (why does it need to be validated?), and the evidence you currently have (how confident are you already?).
  3. Run the hypothesis evaluation rubric prompt to score each hypothesis on testability, falsifiability, time-to-validate, and cost-to-test.
  4. Run the stack ranking prompt to produce a prioritized validation queue, combining the rubric scores with your notes on blocking decisions and current evidence.
  5. Review the stack rank with your team and apply your organizational context: override rankings where AI's assessment misses context about current priorities, team capacity, or dependencies.
  6. Select the top 3–5 hypotheses for your current discovery sprint and assign validation approach and ownership for each.
  7. Document the remaining hypotheses in a "validation backlog" — they will be revisited in future discovery cycles.

Prompt Examples

Prompt: Hypothesis Evaluation Rubric

I have a set of product hypotheses that need to be prioritized for validation. Apply the following evaluation rubric to each hypothesis.

**Hypotheses:**
[PASTE YOUR HYPOTHESIS REGISTER]

**EVALUATION RUBRIC:**

TESTABILITY (1-5): Can this hypothesis be tested within realistic constraints?
- 5: Easily testable with current tools and data in under 2 weeks
- 4: Testable with moderate effort in 2-4 weeks
- 3: Testable but requires significant setup or new research instruments
- 2: Difficult to test — requires proxy measures or long observation periods
- 1: Not practically testable with available resources and timeframe

FALSIFIABILITY (1-5): Is there a possible result that would clearly refute this hypothesis?
- 5: Clear falsifying evidence is easy to define and would be unambiguous
- 4: Falsifying evidence is definable, though results might be somewhat ambiguous
- 3: Could be refuted but the refutation criteria are complex or contested
- 2: Hard to falsify — most evidence could be interpreted as supporting
- 1: Not falsifiable as currently stated — needs to be restructured

TIME-TO-VALIDATE (1-5): How long would rigorous validation take? (1 = longest, 5 = shortest)
- 5: Can be validated within 1 sprint (2 weeks)
- 4: Can be validated within 1 month
- 3: Requires 1-2 months
- 2: Requires a full quarter
- 1: Requires 6+ months

COST-TO-TEST (1-5): What resource investment is required? (1 = highest cost, 5 = lowest cost)
- 5: Can be tested using existing data or minimal new research (< 1 day effort)
- 4: Requires 1-5 days of PM/researcher time, no new infrastructure
- 3: Requires 1-2 weeks of team effort and possibly new tooling
- 2: Requires significant engineering work or customer recruitment effort
- 1: Requires major infrastructure build or extended controlled experiment

For each hypothesis:
- Score on all four dimensions with a one-sentence rationale for each score
- Flag any hypothesis that scores 1 or 2 on Falsifiability — these cannot be validated and must be restructured
- Calculate composite score

Then stack-rank the hypotheses by composite score and identify the top 5 for immediate validation.

Expected output: A fully scored hypothesis register with dimension scores, rationale, composite scores, falsifiability flags, and a prioritized validation queue. Any hypotheses that are not falsifiable will be flagged for restructuring.


Prompt: Validation Approach Recommendation

For the top 5 hypotheses in my validation queue, recommend the most appropriate validation approach for each.

**Top 5 Hypotheses:**
[PASTE TOP 5 FROM RANKED LIST]

**Available validation methods:** Customer interviews, survey study, usability testing, prototype study, A/B test, analytics review, fake door test, concierge MVP, expert review.

**Our constraints:** [PASTE YOUR TEAM SIZE, SPRINT LENGTH, RESEARCH BUDGET, AND ANY OTHER RELEVANT CONSTRAINTS]

For each hypothesis:
1. Recommend the most appropriate validation method and explain why it is the best fit for this specific hypothesis.
2. If multiple methods could work, rank your top 2 recommendations with tradeoffs.
3. Estimate the effort required for the recommended method (days of work, participants needed, tools required).
4. Specify the exact data or evidence that would count as validation: "We will consider this hypothesis validated when [specific measurable outcome]."
5. Identify any dependencies: must another hypothesis be validated first before this one can be tested meaningfully?

Output as a validation plan table.

Expected output: A validation plan table with recommended validation methods, effort estimates, specific success criteria, and dependency mapping for the top 5 hypotheses. This is the discovery sprint planning document.

Learning Tip: Before committing to a validation plan, run a "cheapest disconfirmation" check on your top hypotheses. Ask AI: "What is the cheapest, fastest piece of evidence that could potentially refute this hypothesis?" Then go and look for that evidence before investing in a longer validation study. If the cheap disconfirmation evidence is not there (i.e., the hypothesis survives the cheapest challenge), you can proceed to the full validation with more confidence. If the cheap evidence does challenge the hypothesis, you have saved weeks of effort and learned something important at minimal cost.


Key Takeaways

  • Problem framing is a design activity with strategic consequences — the framing you choose determines which solutions get considered and which get ruled out before ideation even begins.
  • Vague problem statements must be made precise before hypotheses can be formulated; precision requires naming the specific user, specific context, specific failing behavior, and specific underlying cause.
  • A testable hypothesis must specify what action, what expected outcome, for which user, based on what evidence, measured how — all five components must be present and precise.
  • Assumption surfacing is the most important upstream step in HMW generation; challenging embedded assumptions produces the most creative and commercially valuable reframes.
  • Generate HMW statements across the full range from within-constraint to assumption-challenging; premature filtering toward "realistic" options kills creative exploration.
  • JTBD hierarchy mapping reveals the structure of the job — big jobs, main jobs, micro jobs — and identifies where jobs are underserved, which is the most strategically valuable part of the map.
  • Pain matrix placement (frequency × severity) makes prioritization conversations concrete and evidence-grounded rather than opinion-driven.
  • Hypothesis prioritization requires four criteria: testability, falsifiability, time-to-validate, and cost-to-test — plus your team's context about which hypotheses are blocking critical product decisions.
  • The "cheapest disconfirmation" check is the most efficient use of discovery resources — always look for the fastest possible refutation before investing in a full validation study.