Overview
Every product team faces the same fundamental problem: more potential opportunities than capacity to pursue them, and limited time to rigorously evaluate each one before the roadmap has to be locked. The consequence is that opportunity prioritization defaults to loudness — whoever shouts the loudest in planning sessions, whichever metric recently dipped, whichever executive sponsor pushed hardest — rather than to systematic evaluation of potential value, strategic fit, and evidence quality. AI does not eliminate the need for judgment in opportunity selection, but it dramatically improves the inputs to that judgment.
This topic covers how to use AI across the full opportunity identification and sizing workflow: extracting opportunities from research synthesis, scoring them systematically against multiple dimensions, generating market sizing models, and then stress-testing your conclusions. The goal is not to have AI make opportunity decisions for you — it is to ensure that when you make those decisions, you are working with a more complete, more rigorous, and more honestly calibrated picture of the opportunity landscape than you would have had time to build manually.
A key conceptual shift for senior practitioners: opportunity identification is a design activity, not just an analysis activity. The way you frame an opportunity — the customer, their job, the problem, the scope, and the success condition — determines which solutions you will consider and which ones you will never even imagine. AI can help you generate multiple opportunity framings from the same research data, so you can choose the framing with the most strategic leverage before committing to a solution direction.
Opportunity sizing deserves its own emphasis: it is consistently the weakest part of most product organizations' discovery practices, because it requires combining market data, customer research, and business modeling in a way that few PMs have the time or financial modeling skills to do well. AI makes credible sizing analysis accessible to any product manager willing to provide good inputs and apply appropriate skepticism to the outputs.
How to Use AI to Identify Product Opportunities from Research Synthesis
Opportunity identification is the translation layer between customer research and product work. Customer research tells you what customers experience; opportunity identification tells you where a product intervention could improve that experience in a way that is valuable to both the customer and the business. This translation requires explicit analytical work — it does not happen automatically from a pile of interview transcripts. AI can perform this translation efficiently when prompted correctly.
The standard opportunity framing structure used in modern product discovery is the opportunity statement: a customer-centered description of an unmet need, expressed without implying a specific solution. A well-formed opportunity statement has four elements: the customer (who experiences this?), the situation (in what context does it arise?), the outcome they are trying to achieve (what progress are they trying to make?), and the obstacle (what prevents them from achieving it with current solutions?). AI can generate these structured statements from raw research synthesis, but you need to prompt for the structure explicitly or you will get vague, solution-leaning outputs.
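To keep generated statements consistent and easy to audit, it can help to treat them as structured data rather than free text. A minimal sketch in Python, with illustrative field names (nothing here is a standard schema):

```python
from dataclasses import dataclass, field

@dataclass
class OpportunityStatement:
    """One solution-neutral opportunity, mirroring the four elements above."""
    customer: str   # who experiences this?
    situation: str  # in what context does it arise?
    outcome: str    # what progress are they trying to make?
    obstacle: str   # what prevents them with current solutions?
    evidence: list[str] = field(default_factory=list)  # supporting research excerpts

    def render(self) -> str:
        # Solution-neutral sentence; the prompt below extends this format
        # with a consequence clause.
        return (f"{self.customer} struggle to {self.outcome} "
                f"when {self.situation} because {self.obstacle}.")
```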
The opportunity solution tree, popularized by Teresa Torres, is one of the most useful frameworks for translating research into structured exploration. The tree maps a product outcome (a specific business or user metric goal) to a set of opportunities (the customer needs that, if met, would contribute to that outcome) to a set of solution ideas (specific product interventions for each opportunity). AI can help you build the tree by generating candidate opportunities from research data, which you then evaluate and select before generating solutions. This keeps the discovery process outcome-anchored and prevents premature solution attachment.
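A sketch of the three tree levels as nested data, with illustrative contents (the outcome is borrowed from the example prompt later in this section):

```python
# Opportunity solution tree as nested data: outcome -> opportunities -> solutions.
# All contents are illustrative; solution ideas are generative starting points only.
tree = {
    "outcome": "Increase activation rate from 35% to 55% for new enterprise users",
    "opportunities": [
        {
            "statement": "New admins struggle to get their team set up during onboarding ...",
            "outcome_fit": "Strong",  # Strong / Moderate / Weak
            "evidence": ["interview-07", "support-theme-3"],
            "solutions": [
                "Guided team-invite step in the setup flow",
                "Bulk invite via CSV or directory sync",
            ],
        },
        # ... one entry per opportunity selected from the register
    ],
}
```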
The distinction between surface-level problems and root-cause opportunities is important for generating opportunities that lead to durable product value rather than band-aid fixes. When a customer says "the report takes too long to generate," the surface problem is report speed. But the opportunity might be much broader: "Customers cannot get timely data to make decisions during business-critical moments." That framing opens a much richer solution space than "make the report faster." AI can help you drill from surface to root cause when prompted with the right analytical questions.
Hands-On Steps
- Start with your synthesized research outputs from the previous topic: the thematic analysis, pain severity matrix, and JTBD register.
- Run the opportunity framing prompt to generate structured opportunity statements from your research themes.
- Generate an opportunity solution tree by mapping your product team's current outcome objective to the generated opportunities.
- Use the root cause drilling prompt on your top 3–5 opportunities to ensure you are working at the right level of abstraction.
- Consolidate the opportunity statements into an opportunity register — a structured list of all candidate opportunities with their framing and supporting evidence.
- De-duplicate and merge overlapping opportunities (AI will sometimes generate near-duplicate framings from similar research themes); a lightweight similarity check like the sketch after this list can flag merge candidates.
- Review the opportunity register with the product team and annotate each with team confidence level (how well does this match your lived understanding of the problem space?).
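For the de-duplication step, a crude string-similarity pass can surface merge candidates before the team review. A sketch, assuming the register is a list of statement strings; the threshold is an arbitrary starting point, and flagged pairs still need human judgment:

```python
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(statements: list[str], threshold: float = 0.7):
    """Yield statement pairs whose surface similarity exceeds the threshold."""
    for a, b in combinations(statements, 2):
        ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if ratio >= threshold:
            yield a, b, round(ratio, 2)

register = [
    "Admins struggle to invite their team when setting up a new workspace because invites require IT help",
    "Admins struggle to invite their team during workspace setup because invitations require IT support",
    "Analysts struggle to share reports with stakeholders who do not use the product",
]
for a, b, score in near_duplicates(register):
    print(f"possible duplicate ({score}):\n  - {a}\n  - {b}")
```

Surface similarity only catches near-identical wording; two framings of the same underlying opportunity can read very differently, so this pass supplements the team review rather than replacing it.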
Prompt Examples
Prompt: Opportunity Statement Generation
You are a product strategist generating opportunity statements from customer research synthesis.
Below is a thematic analysis from customer research, including pain point themes, JTBD framings, and pain severity ratings.
[PASTE THEMATIC ANALYSIS AND JTBD REGISTER]
For each major theme, generate 1-2 well-formed opportunity statements using this structure:
**Opportunity Statement Format:**
"[Customer type] struggle to [achieve outcome] when [situation/context] because [root obstacle], which results in [consequence for the customer]."
Requirements:
- The opportunity statement must be solution-neutral (do not imply a specific feature or product response)
- The obstacle should be specific enough to be actionable (not "the product is hard to use" but "they cannot configure the system without technical expertise they don't have")
- The consequence should connect to a business or emotional cost the customer cares about
- Each statement should represent a distinct opportunity space, not overlapping variations of the same problem
After generating all opportunity statements, identify:
1. Which 3 opportunities have the strongest supporting evidence in the research data?
2. Which opportunities, if addressed, would most contribute to [your stated product outcome goal]?
Expected output: A set of structured opportunity statements, one or two per research theme, all solution-neutral and evidence-grounded. The best-evidenced and most outcome-aligned opportunities will be highlighted for further evaluation.
Prompt: Opportunity Solution Tree Construction
I am building an opportunity solution tree for our product team. Our target product outcome for this quarter is:
**Outcome:** [PASTE YOUR SPECIFIC PRODUCT OUTCOME — e.g., "Increase activation rate from 35% to 55% for new enterprise users within 30 days of signup"]
I have generated the following opportunity statements from customer research:
[PASTE OPPORTUNITY REGISTER]
Your tasks:
1. Review each opportunity against the stated outcome. Rate each on outcome fit: Strong (directly contributes to the outcome), Moderate (indirectly contributes), Weak (unlikely to contribute to this specific outcome).
2. For the opportunities with Strong or Moderate outcome fit, build an opportunity solution tree:
- Level 1: The outcome
- Level 2: The 4-6 best-fit opportunities
- Level 3: For each opportunity, generate 3-4 solution ideas (brief, directional — not full specs)
3. Identify which branch of the tree (one opportunity and its solution ideas) represents the highest-confidence path to the outcome based on the research evidence.
Output as a structured tree. Note clearly that solution ideas at Level 3 are generative starting points for exploration, not recommendations.
Expected output: A structured opportunity solution tree with outcome → opportunities → solution ideas mapped. The highest-confidence branch will be identified as the recommended starting point for deeper investigation.
Learning Tip: When AI generates opportunity statements, watch for statements that accidentally embed assumptions about your current product or infrastructure. An opportunity statement like "users struggle to export data because our export function lacks format options" is not solution-neutral — it assumes the solution is more format options. A properly neutral framing would be "users struggle to share data with stakeholders who don't use the product, because there is no way to deliver product data in formats those stakeholders already use." Push AI to rewrite any statement that implies a specific solution by instructing: "Rewrite this statement to be fully solution-neutral — remove any reference to specific product capabilities or implementation approaches."
AI-Assisted Opportunity Scoring — Impact, Effort, Confidence, and Strategic Fit
Opportunity scoring is the mechanism that turns an unordered list of candidate opportunities into a prioritized working agenda. The challenge with scoring frameworks — RICE, ICE, effort-impact, and variants — is that they require consistent, justified scores across many opportunities, often scored by different people with different reference points. The result is usually scoring inflation (everyone scores their favorite opportunity high on reach and impact), inconsistent effort estimates, and missing confidence calibration. AI can help with all three problems when used carefully.
The four-dimension scoring model covered here — impact, effort, confidence, and strategic fit — is more robust than two-dimension models (effort vs. impact) because it explicitly separates what you know from what you assume. Confidence is arguably the most important and most neglected scoring dimension: an opportunity with high impact but low evidence confidence (you are guessing at the impact) is a very different investment bet from an opportunity with moderate impact and high evidence confidence (your research strongly supports this). Scoring without confidence calibration leads to over-investment in exciting speculations and under-investment in validated needs.
The rubric approach is what makes AI scoring consistent and defensible. Rather than asking AI to score an opportunity on a 1–10 scale with no definition of what 10 means, you provide detailed rubric definitions for each scale point on each dimension. AI applies the rubric consistently across all opportunities, which eliminates the anchor effects and comparison biases that plague human scoring sessions. After AI-generated scoring, your job is to review the scores critically and override any where your domain knowledge contradicts the AI's assessment — but you will find the AI's reasoning for each score surprisingly useful for that review.
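To make the rubric-to-ranking step concrete, here is a minimal composite-scoring sketch. The weights are placeholder assumptions (pick your own), and the tension flag mirrors the High Impact + Very Low Confidence check in the prompt below:

```python
# Weighted composite over the four dimensions. Weights are illustrative, not a
# recommendation. EFFORT already uses an inverse scale (5 = low effort), so a
# higher score is better on every dimension.
WEIGHTS = {"impact": 0.35, "effort": 0.20, "confidence": 0.25, "strategic_fit": 0.20}

def composite(scores: dict[str, int]) -> float:
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

def tension_flag(scores: dict[str, int]) -> bool:
    # Attractive on impact but weak on evidence: needs more research first.
    return scores["impact"] >= 4 and scores["confidence"] <= 2

opportunities = {  # hypothetical scored register entries
    "self-serve data export": {"impact": 5, "effort": 2, "confidence": 2, "strategic_fit": 4},
    "guided team onboarding": {"impact": 4, "effort": 4, "confidence": 5, "strategic_fit": 5},
}
for name, scores in sorted(opportunities.items(), key=lambda kv: -composite(kv[1])):
    flag = "  <- high impact, low confidence" if tension_flag(scores) else ""
    print(f"{name}: {composite(scores):.2f}{flag}")
```

Note how the high-impact but weakly evidenced opportunity ranks below the well-validated one once confidence carries real weight, which is exactly the calibration effect the four-dimension model is designed to produce.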
Normalizing scores across dissimilar opportunities is a subtle but important challenge. If your opportunity set includes a small usability improvement, a major new integration, and a fundamental redesign of a core workflow, straightforward comparison on impact and effort will be misleading unless you are comparing them at the same level of scope and specificity. AI can help you establish a normalization baseline and flag opportunities that are too dissimilar in scope to be directly compared without adjustment.
Hands-On Steps
- Finalize your opportunity register from the previous section — you should have 10–20 candidate opportunities with supporting evidence.
- Define your scoring rubrics for each dimension before running the scoring prompt. Be specific about what each score level means in the context of your product, team, and business.
- Run the opportunity scoring prompt for all opportunities in your register using your defined rubrics.
- Review the scores critically, paying particular attention to confidence scores — opportunities scored high impact but low confidence are the ones that need more research before investment.
- Run the normalization check prompt to identify any scope comparability issues across your opportunity set.
- Stack-rank the opportunities by composite score and review the resulting order for face validity.
- For any opportunity where the AI-generated score strongly contradicts your instinct, run the scoring challenge prompt to surface the reasoning on both sides and make an explicit, documented decision.
Prompt Examples
Prompt: Opportunity Scoring with Rubrics
You are scoring a set of product opportunities against a four-dimension framework. Apply the following rubrics consistently to each opportunity.
**SCORING RUBRICS:**
IMPACT (1-5): What is the potential positive effect on the customer and the business if this opportunity is successfully addressed?
- 5: Directly addresses a critical pain point for our primary user segment; high likelihood of measurable improvement in [your key metric]
- 4: Addresses a significant pain point; moderate-high likelihood of metric improvement
- 3: Addresses a real but moderate pain point; some metric improvement expected
- 2: Addresses a minor inconvenience; limited metric impact expected
- 1: Edge case or cosmetic issue; minimal customer and business impact
EFFORT (1-5): How much resource investment is required to address this opportunity? (1 = high effort, 5 = low effort — inverse scale)
- 5: Could be addressed with a focused sprint effort by a small team
- 4: Requires a quarter-long investment by a small team
- 3: Requires a quarter-long investment by a full squad
- 2: Requires multiple quarters and cross-team coordination
- 1: Requires major architectural changes or multi-year investment
CONFIDENCE (1-5): How strong is the evidence base supporting our assessment of this opportunity?
- 5: Validated by primary research (interviews + survey data), supported by usage analytics, and cross-confirmed across multiple research methods
- 4: Supported by primary research from at least one method; directionally consistent with other signals
- 3: Supported by secondary research or a small number of interviews; not yet cross-confirmed
- 2: Based on anecdotal evidence, one or two data points, or internal team belief
- 1: Based on assumption only; no research evidence
STRATEGIC FIT (1-5): How well does addressing this opportunity align with our stated product strategy and business objectives?
- 5: Directly advances our primary OKR or strategic pillar
- 4: Advances a secondary strategic objective
- 3: Consistent with strategy but not directly tied to a current objective
- 2: Tangentially related; uncertain strategic contribution
- 1: Unrelated or potentially contradictory to current strategy
**OPPORTUNITIES TO SCORE:**
[PASTE YOUR OPPORTUNITY REGISTER]
For each opportunity, provide:
- Score on each dimension with a 1-2 sentence rationale
- Composite score (sum or weighted sum — [SPECIFY YOUR PREFERRED WEIGHTING])
- Overall ranking
Flag any opportunity where scores on different dimensions are in strong tension (e.g., High Impact + Very Low Confidence) as requiring special attention before investment.
Expected output: A scored opportunity register with per-dimension scores, rationale, composite scores, ranking, and tension flags. The tension flags are particularly valuable — they identify opportunities that look attractive but carry significant evidence risk.
Prompt: Score Normalization Check
I have scored the following opportunities but am concerned that some are not directly comparable due to significant differences in scope:
[PASTE SCORED OPPORTUNITY REGISTER]
Please:
1. Identify any opportunities that appear to be significantly different in scope from the others (e.g., a minor UX tweak vs. a new product module).
2. For opportunities that are too broad in scope, suggest how they could be decomposed into more comparable sub-opportunities.
3. For opportunities that are very narrow in scope, suggest whether they should be bundled with related opportunities for more meaningful comparison.
4. After applying your scope normalization suggestions, what would change in the ranking?
Expected output: A scope comparability analysis with decomposition and bundling suggestions, plus the revised ranking after normalization. This ensures your priority list is comparing like with like.
Learning Tip: After AI-generated scoring, run a "red team" pass on the top 3 scored opportunities. Ask AI: "What are the 3 strongest arguments that this opportunity is scored too high on impact?" and "What are the 3 strongest arguments that this opportunity is scored too low on effort?" This adversarial review catches scoring optimism before it gets committed to a roadmap. The highest-impact opportunities almost always look less attractive after a rigorous red team, and the highest-confidence ones become more convincing — which is exactly the calibration effect you want.
Using AI to Generate TAM/SAM/SOM Estimates and Market Sizing Models
Market sizing — Total Addressable Market (TAM), Serviceable Addressable Market (SAM), and Serviceable Obtainable Market (SOM) — is one of those product deliverables that is either skipped entirely because it feels like guesswork, or inflated to ridiculous numbers because someone found an analyst report with a large total market figure and declared it their TAM. Neither is defensible. AI can help you build sizing models that are credible — not because AI has perfect market data, but because it can help you structure bottom-up models from first principles that are auditable and adjustable.
The two approaches to market sizing are top-down and bottom-up, and they serve different purposes. Top-down sizing starts with a large market figure (from an analyst report or industry data source) and works down through segmentation factors to your specific segment. It gives you a fast directional estimate but is easily gamed and hard to defend in detail. Bottom-up sizing starts from your unit economics and customer data — how many target customers exist, what is the average revenue per customer, how many can you plausibly reach — and builds up to a total. Bottom-up models are more work but are much more defensible and actionable.
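A top-down model is simple enough to express as a few multiplications. A sketch with placeholder figures (every number below is an assumption you would replace with sourced data):

```python
# Top-down directional check: start from a broad market figure and segment down.
# Easy to game, so treat the output as a sanity check, not a headline number.
industry_tam = 12_000_000_000  # analyst-reported category spend, USD (placeholder)
segment_share = 0.15           # share in our geography / company-size band (assumed)
use_case_share = 0.20          # share attributable to our specific use case (assumed)

top_down_sam = industry_tam * segment_share * use_case_share
print(f"top-down SAM: ${top_down_sam:,.0f}")  # $360,000,000 with these placeholders
```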
AI's role in market sizing is primarily structural: it helps you identify what variables you need, where to find them, how to connect them in a coherent model, and how to sense-check the outputs. AI is not a source of market data — you still need to find and input actual numbers from real sources. What AI does is help you build the model architecture quickly, so you spend your time finding and validating data inputs rather than debating how to structure the calculation.
The inputs that make or break a bottom-up sizing model are: number of target customers (often available from industry databases, LinkedIn, or your own CRM data), average contract value or revenue per user (from your own data or comparable public company disclosures), and realistic market penetration rate over your planning horizon (based on your current conversion rates, sales capacity, and competitive dynamics). AI can help you research each input and document the source and confidence level for each, so the model is auditable.
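The bottom-up counterpart builds the same figure from unit economics. A sketch with placeholder inputs; in a real model each input carries a documented source, date, and confidence level:

```python
# Bottom-up sizing: customer counts x qualification rate x revenue per customer.
total_universe = 40_000    # companies matching the target definition (e.g., LinkedIn counts)
qualification_rate = 0.30  # share that is actually serviceable (stack, team, budget)
acv = 24_000               # average contract value, USD, from CRM or comparables

sam = total_universe * qualification_rate * acv
print(f"bottom-up SAM: ${sam:,.0f}")  # $288M here: same order of magnitude as the top-down check

# SOM at the penetration scenarios used in the prompt below
for label, penetration in [("conservative", 0.02), ("base", 0.05), ("optimistic", 0.12)]:
    print(f"SOM ({label}, {penetration:.0%}): ${sam * penetration:,.0f}")

# 3-year SAM under an assumed annual market growth rate
growth = 0.08  # placeholder
print(f"SAM in year 3: ${sam * (1 + growth) ** 3:,.0f}")
```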
Hands-On Steps
- Define your market precisely before attempting any sizing. The "market" for your opportunity should be specific: which customer type, which use case, which geography, which company size band. Vague market definitions produce unreliable numbers.
- Gather input data for your bottom-up model: look for LinkedIn company and people counts for your target segment, industry association member counts, comparable company disclosures (S-1 filings, investor day presentations), and your own CRM data on conversion rates and ACV.
- Run the bottom-up sizing model prompt with your target market definition and available data.
- Run the top-down sizing prompt with industry report data as a directional check; the two estimates should land in the same order of magnitude.
- Build a sensitivity analysis by running the scenario modeling prompt to understand how much the estimate changes under different assumptions.
- Document every input with its source, date, and confidence level in a model assumption register.
Prompt Examples
Prompt: Bottom-Up Market Sizing Model
Help me build a bottom-up market sizing model for the following product opportunity.
**Opportunity Definition:**
[PASTE YOUR OPPORTUNITY STATEMENT]
**Target Customer:**
- Customer type: [e.g., B2B SaaS companies with 50-500 employees]
- Geography: [e.g., North America]
- Specific use case: [e.g., product analytics for PM and data teams]
**Available Input Data:**
[PASTE ANY DATA POINTS YOU HAVE — company counts, ACV data, conversion rates, etc.]
Build a bottom-up sizing model with the following structure:
1. **Total potential customer universe:** How many companies or individuals fit our target customer definition? Show the calculation chain (e.g., LinkedIn data, industry association numbers, or market research estimates) and cite the source for each figure.
2. **Qualified addressable market:** Apply filters for the characteristics that make a customer actually serviceable by our product (e.g., must have a dedicated product team, must use a compatible tech stack). Estimate the reduction from total universe to qualified universe.
3. **Revenue model:** Given our current or target pricing model, what is the expected average contract value or revenue per customer?
4. **Market sizing at different penetration rates:**
- Conservative (2% penetration): [calculate]
- Base case (5% penetration): [calculate]
- Optimistic (12% penetration): [calculate]
5. **Growth model:** Assuming the market grows at [X]% per year, what are the 3-year SAM figures under each scenario?
For every input variable, note: Source, Confidence (High/Medium/Low), and the biggest risk to this estimate.
Expected output: A fully structured bottom-up market sizing model with a calculation chain, three penetration scenarios, a 3-year growth model, and an assumption audit table. This is a presentation-ready model, not just a number.
Prompt: Market Sizing Stress Test
I have built the following market sizing model:
[PASTE YOUR SIZING MODEL]
Challenge my model with the following stress tests:
1. **Input sensitivity:** Which single input variable, if changed by 30% in either direction, would most dramatically change the outcome? Show the revised output.
2. **Assumption challenge:** What are the 3 most questionable assumptions in this model? For each, what is the realistic range, and what would the output look like at the low end of that range?
3. **Competitor adjustment:** If I assume that [number] direct competitors each capture an equal share of the market, what is our realistic SOM rather than SAM? Is my penetration rate assumption still defensible in that competitive context?
4. **Time-to-revenue reality check:** Given product development lead time, typical sales cycle length, and our current team capacity, what is the realistic revenue achievable from this opportunity in the first 12 months versus the full TAM/SOM model?
5. **What would need to be true:** What would need to be true about the market, our product, and our go-to-market for the optimistic scenario to materialize? Are those conditions realistic?
Expected output: A rigorous stress test of your sizing model that identifies the highest-risk assumptions, most sensitive input variables, competitive share reality, and conditions for the optimistic scenario. This is the analysis you present alongside your base case to demonstrate you have thought rigorously about the model.
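Stress test 1 (the input sensitivity sweep) is mechanical enough to run yourself rather than asking AI to do the arithmetic. A sketch reusing the placeholder inputs from the bottom-up example above:

```python
# One-at-a-time sensitivity sweep: vary each input +/-30% and compare the output.
base = {"total_universe": 40_000, "qualification_rate": 0.30,
        "acv": 24_000, "penetration": 0.05}

def som(inputs: dict) -> float:
    return (inputs["total_universe"] * inputs["qualification_rate"]
            * inputs["acv"] * inputs["penetration"])

baseline = som(base)
print(f"baseline SOM: ${baseline:,.0f}")
for var in base:
    for delta in (-0.30, 0.30):
        scenario = {**base, var: base[var] * (1 + delta)}
        change = som(scenario) / baseline - 1
        print(f"{var} {delta:+.0%} -> SOM {change:+.1%}")
```

In a purely multiplicative model every input shifts the output by the same proportion, so the sweep becomes genuinely informative once the model gains non-linear steps such as sales-capacity caps or tiered pricing.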
Learning Tip: Always run your AI-generated sizing model past someone with financial modeling experience — a finance business partner, an experienced product leader, or an investor contact — before presenting it to executive stakeholders. AI can build a structurally sound model with the inputs you provide, but it cannot catch cases where your inputs are systematically biased (e.g., using LinkedIn follower counts as a proxy for company counts when your actual ICP is much narrower). A 30-minute model review with a financially sophisticated colleague is worth more than any amount of additional prompting.
How to Challenge and Stress-Test AI-Generated Opportunity Assessments
The most dangerous failure mode in AI-assisted opportunity analysis is not that AI produces a wrong answer — it is that AI produces a coherent, well-structured, plausible-sounding answer that is wrong in ways you only discover after you have committed resources. AI is very good at building internally consistent narratives. It is much weaker at flagging when the narrative's foundation rests on questionable assumptions, biased data inputs, or reasoning gaps. Stress-testing is the countermeasure.
Stress-testing an opportunity assessment requires deliberately adopting an adversarial stance toward your own analysis. This is cognitively difficult for humans — we are naturally inclined to seek confirmation of the opportunities we want to pursue, and we unconsciously discount evidence that contradicts them. AI makes adversarial analysis easier because you can explicitly instruct it to challenge your conclusions without any of the social friction that comes from asking a colleague to tear apart your work in a meeting.
The devil's advocate prompt is the most powerful stress-testing tool in this context. Rather than asking "what are some risks?" — which produces a polite list of standard caveats — you instruct AI to make the strongest possible case against your opportunity assessment. This forces AI to construct a coherent counter-argument, which surfaces the most serious objections and forces you to either rebut them with evidence or update your assessment.
Downside scenario modeling is the quantitative complement to devil's advocate analysis. Where the devil's advocate identifies the qualitative weaknesses in your opportunity framing, downside modeling quantifies the consequences of the most pessimistic but realistic scenarios. A product team that has thought through what happens if its opportunity turns out to be half as large as estimated, takes twice as long to address, or fails to reach the target segment is in a much stronger position than one that has only modeled the base case.
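The arithmetic behind these scenarios is deliberately simple; the value is in writing the numbers down before committing. A sketch with placeholder figures, quantifying the downsides named in this paragraph plus the evidence-was-wrong case from the prompt below:

```python
# Downside scenarios as simple multipliers on base-case year-1 revenue.
# All figures are illustrative placeholders.
base_year1_revenue = 1_200_000  # base-case year-1 revenue from the opportunity
investment = 900_000            # planned team cost over the same period

scenarios = {
    "base case": 1.00,
    "market half as large": 0.50,
    "adoption twice as slow": 0.50,  # rough proxy: half the revenue lands in year 1
    "affects 30% of users, not 70%": 0.30 / 0.70,
}
for name, factor in scenarios.items():
    revenue = base_year1_revenue * factor
    roi = (revenue - investment) / investment
    print(f"{name}: revenue ${revenue:,.0f}, ROI {roi:+.0%}")
```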
Hands-On Steps
- After completing your opportunity scoring and sizing analysis, run the devil's advocate prompt on your top 2–3 ranked opportunities.
- For each objection the devil's advocate raises, document your rebuttal: is the objection wrong, and why? Or does it point to a genuine weakness you need to address?
- Update your opportunity assessment to acknowledge the strongest objections and document your reasoning for proceeding despite them.
- Run the downside scenario modeling prompt for the opportunity you are planning to invest in first.
- Compare the downside scenario to the cost of the initial investment: is the risk/reward ratio acceptable even under pessimistic assumptions?
- Use the assessment revision prompt to produce a final opportunity brief that honestly reflects both the opportunity's strengths and its risks.
Prompt Examples
Prompt: Devil's Advocate Challenge
I have completed an opportunity assessment with the following conclusions:
**Opportunity:** [PASTE YOUR OPPORTUNITY STATEMENT]
**Evidence basis:** [PASTE YOUR EVIDENCE SUMMARY]
**Score:** Impact [X], Effort [X], Confidence [X], Strategic Fit [X]
**Market size estimate:** [PASTE YOUR SIZING SUMMARY]
**Conclusion:** This is a high-priority opportunity that warrants investment in Q[X].
Now act as a rigorous devil's advocate. Make the 5 strongest possible arguments that this opportunity assessment is WRONG or OVERRATED.
Requirements:
- Each argument must be specific and grounded — not generic risk factors but specific challenges to THIS opportunity based on what I have provided.
- Prioritize arguments that would be most difficult to rebut.
- After each argument, rate how damaging it would be if true (Critical / Significant / Minor).
- After all 5 arguments, identify the 1-2 that you believe are most likely to be actually true given the evidence available.
Do not soften the arguments. I need to hear the strongest possible case against this opportunity before committing resources.
Expected output: Five specific, grounded counter-arguments to your opportunity assessment with damage ratings, plus a prioritization of which objections are most likely to be valid. This is the adversarial review that turns a potentially over-optimistic assessment into a defensible investment decision.
Prompt: Downside Scenario Modeling
I am considering investing in the following opportunity. I want to model the downside scenarios explicitly before committing.
**Opportunity and planned investment:**
[PASTE OPPORTUNITY STATEMENT AND PLANNED TEAM + TIME INVESTMENT]
**Base case assumptions:**
[PASTE YOUR BASE CASE SIZING AND IMPACT ESTIMATES]
Model the following downside scenarios:
**Scenario A — Market Smaller Than Estimated:** The actual addressable market is 40% smaller than our estimate. What is the revised revenue potential? Does the investment still make economic sense?
**Scenario B — Customer Adoption Slower Than Expected:** Adoption takes twice as long as modeled due to [pick the most realistic adoption barrier from the research data]. What is the revenue impact in year 1 and year 2? What does the payback period look like?
**Scenario C — Competitive Response:** Within 6 months of our launch, a well-resourced competitor releases a comparable solution. How does this change the realistic market penetration rate? What would our defensible share be in that scenario?
**Scenario D — Evidence Was Wrong:** The customer research that identified this opportunity turns out to be unrepresentative — the problem is real but affects 30% of the assumed population, not 70%. What does the opportunity look like with this corrected estimate?
For each scenario: revised revenue estimate, revised ROI, and a one-sentence recommendation on whether to proceed, proceed with a smaller initial investment, or deprioritize.
Expected output: Four quantified downside scenarios with revised revenue estimates, ROI impacts, and proceed/deprioritize recommendations. This analysis gives you the full risk-adjusted picture of the opportunity before any resources are committed.
Learning Tip: After completing a stress-tested opportunity assessment, write a one-paragraph "honest summary" that a skeptical executive could read and immediately understand both the opportunity's appeal and its risks. This paragraph should contain: the best case for why this opportunity is worth pursuing, the strongest argument against it, and the specific evidence or milestone that would resolve the uncertainty. If you cannot write this paragraph clearly, your analysis is not yet complete — go back and do more work on the weakest part. The ability to give an honest summary is the real test of whether you understand an opportunity or just believe in it.
Key Takeaways
- Opportunity identification is a design activity: how you frame an opportunity determines which solutions you will consider. Use AI to generate multiple framings from the same research data and choose the most strategically leveraged one.
- Opportunity statements should be solution-neutral, customer-centered, and specific enough to be actionable — AI will produce solution-leaning statements if not explicitly constrained.
- Four-dimension scoring (Impact, Effort, Confidence, Strategic Fit) is more robust than two-dimension models because it explicitly separates what you know from what you assume.
- Confidence is the most neglected scoring dimension — an opportunity with high impact but low evidence confidence is a very different investment bet from one with moderate impact and high evidence confidence.
- Bottom-up market sizing models (built from first principles with auditable inputs) are more defensible than top-down models derived from broad industry figures.
- Every sizing model input should have a documented source, date, and confidence rating — the model's credibility rests on input credibility, not the math.
- Devil's advocate prompting — explicitly instructing AI to make the strongest case against your conclusions — is more effective than asking for generic risks.
- Downside scenario modeling should be run before any resource commitment; understanding the pessimistic scenario is as important as understanding the base case.
- The real test of opportunity analysis completeness is whether you can write a clear, honest one-paragraph summary that captures both the case for and the strongest case against investment.