Prompt Architecture

Overview

Most product managers who feel frustrated with AI output quality have a prompt architecture problem, not an AI capability problem. The difference between a prompt that produces a vague, generic response and one that produces a precise, actionable product deliverable is almost always structural. Prompt architecture is the deliberate design of your instructions to the AI — the sequence, framing, specificity, and format of what you ask and how you ask it.

Prompt architecture matters because LLMs cannot read intent: they cannot infer what you actually need from a loosely worded request, only respond to what you explicitly state. When you write "help me with my roadmap," the model has to guess at the format, level of detail, audience, constraints, and decision framework you want applied. The output will be plausible but rarely useful. When you write a structurally complete prompt — with role frame, task definition, context, constraints, and output format — the model has everything it needs to produce precisely what you are looking for.

This topic covers four building blocks of high-quality product management prompts: role framing, structured instructions, output shaping, and domain-specific prompt patterns. Each section includes the conceptual foundation, practical construction steps, and ready-to-use prompt examples drawn from real product management work. By the end of this topic, you will have a mental framework for building prompts quickly and consistently — and a set of tested prompt patterns you can adapt immediately for your most frequent product tasks.

Prompt architecture is not about memorizing templates. It is about understanding why each structural element works, so you can adapt to new tasks on the fly rather than searching your prompt library every time you encounter a novel situation. The goal is a prompt-building intuition that becomes second nature within a few weeks of deliberate practice.


Role Framing — Why Telling AI "You Are a Senior PM" Changes Output Quality

Role framing is the practice of assigning the AI a specific professional identity before giving it a task. It is one of the most impactful and least-used techniques in the average PM's prompt toolkit. When you tell an LLM "You are a senior product manager at a B2B SaaS company with 10 years of experience in enterprise software," you are not engaging in a fictional exercise. You are activating a specific cluster of knowledge, vocabulary, reasoning patterns, and output conventions that the model has learned from its training data.

LLMs are trained on enormous amounts of text across every domain. That training data includes thousands of product management articles, PRDs, product strategy documents, feature announcements, PM course materials, and product-focused books. When you invoke a role frame, you are effectively telling the model: "Use the knowledge and patterns from that specific cluster of training data to generate your response." Without a role frame, the model responds from a generalist position — pulling from all available knowledge equally, which tends to produce generic, textbook-level output.

The specificity of the role frame matters significantly. "You are a product manager" activates a broad cluster of PM-related patterns. "You are a senior product manager at a SaaS company that sells to mid-market B2B companies in the financial services industry" activates a much more specific cluster — one that understands the buying dynamics of financial services firms, the compliance constraints in that sector, the typical sales-led-vs-product-led tension in mid-market SaaS, and the vocabulary those customers use. The more specific your role frame, the more targeted and contextually appropriate the output.

PM-specific role frames for different tasks:

  • For requirements and user story work: "You are a senior business analyst with 8 years of experience writing requirements for enterprise SaaS products. You use Given/When/Then acceptance criteria and are rigorous about testability and edge case coverage."
  • For discovery and research synthesis: "You are a principal product manager with deep expertise in qualitative research synthesis. You are skilled at identifying unmet user needs and ranking them by severity and frequency."
  • For roadmap and prioritization work: "You are a VP of Product at a growth-stage B2B SaaS company who reports directly to the CEO. You think in terms of business outcomes, not feature delivery, and you regularly present roadmaps to an executive board."
  • For stakeholder communication: "You are a product manager known for translating complex technical concepts into clear, jargon-free language for non-technical business stakeholders."
  • For retrospective and process work: "You are an experienced agile coach and product leader. You facilitate retrospectives with psychological safety and translate team observations into actionable process improvements."

Calibrating role specificity to task complexity is a skill. For a quick formatting task — "reformat this user story into Given/When/Then" — a brief role frame is sufficient. For a complex analysis — "evaluate this product strategy against competitive dynamics and customer research" — a rich, specific role frame meaningfully improves the output.
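
A role frame library can live in a notes app, but if you are comfortable with light scripting, a few lines of code make frames composable with any task. Below is a minimal Python sketch, not a prescribed tool; the dictionary keys and frame wording are illustrative, adapted from the list above.

# role_frames.py: a minimal role-frame library (illustrative content)
ROLE_FRAMES = {
    "requirements": (
        "You are a senior business analyst with 8 years of experience "
        "writing requirements for enterprise SaaS products. You use "
        "Given/When/Then acceptance criteria and are rigorous about "
        "testability and edge case coverage."
    ),
    "roadmap": (
        "You are a VP of Product at a growth-stage B2B SaaS company. "
        "You think in terms of business outcomes, not feature delivery."
    ),
}

def build_prompt(role_key: str, task: str) -> str:
    """Prepend the chosen role frame to the task description."""
    return f"{ROLE_FRAMES[role_key]}\n\n{task}"

print(build_prompt("requirements", "Rewrite this user story to production quality: ..."))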

Hands-On Steps

  1. Open your most recent AI session where you felt the output was generic or unhelpful. Read the original prompt. Notice whether it included any role frame.
  2. Write a role frame for that specific task. Include: job title, years of experience, domain specialty, company type, and any specific skill relevant to the task.
  3. Rerun the same task with only the role frame added — keep everything else the same. Compare the outputs.
  4. Build a "role frame library" — a list of 5–7 role frames you use most frequently in your product work. Store them where you can paste them quickly (notes app, Notion snippet, or browser bookmark).
  5. Practice varying the specificity of a role frame for the same task type. Run the task with (a) no role frame, (b) generic role frame ("senior PM"), (c) specific role frame (domain + company type + skill). Document which produces the best output for your use case.
  6. For your highest-frequency AI task (e.g., writing user stories, summarizing stakeholder meetings), establish a default role frame you always use.

Prompt Examples

Prompt:

You are a senior product manager at a B2B SaaS company serving mid-market logistics firms. You have 10 years of experience and are known for writing requirements that engineering teams can execute without back-and-forth clarification.

Task: Review the following user story and rewrite it to production quality. Add a clear rationale, improve the acceptance criteria for testability, and flag any ambiguities engineering would need resolved before development.

User story: As a dispatcher, I want to see all active drivers on a map so I can assign tasks quickly.

Expected output: A rewritten user story with a clear user role, specific goal, measurable benefit, 4–6 Given/When/Then acceptance criteria, and a bulleted list of clarification questions for engineering — all in the professional register of an experienced B2B SaaS PM.


Prompt:

You are a VP of Product at a growth-stage SaaS company preparing for a board meeting. You have been asked to defend the Q3 roadmap prioritization decisions against questions from board members who want to know why the enterprise reporting feature was deprioritized in favor of self-serve onboarding improvements.

Prepare a 3-paragraph talking points document that:
1. Acknowledges the enterprise reporting priority
2. Explains the strategic rationale for the onboarding investment using outcome-based language
3. Commits to a specific timeline and condition for when enterprise reporting will be addressed

Tone: Confident, data-informed, strategically grounded. No jargon.

Expected output: A polished three-paragraph executive communication that frames the deprioritization as a strategic choice, not an oversight — with concrete language about future commitment, suitable for delivery to a board-level audience.

Learning Tip: The single highest-ROI role framing habit is to always specify the audience for the output within the role frame. Instead of "you are a senior PM," try "you are a senior PM presenting to a non-technical executive audience." The intended audience changes vocabulary, level of detail, tone, and structure far more than most other framing choices.


Structured Instructions — Task, Context, Constraints, and Output Format

The most reliable framework for constructing high-quality product management prompts is what we will call the TCCO structure: Task, Context, Constraints, and Output format. Each element serves a specific function in directing the model's response, and omitting any of them introduces ambiguity that degrades output quality. Understanding what each element does — and what happens when it is missing — enables you to build better prompts faster.

Task: The task element is the explicit statement of what you want the model to do. It should use action verbs and be unambiguous about the deliverable. Weak task statements are open-ended questions ("What should we do about our onboarding?") or vague imperatives ("Help me think about the roadmap."). Strong task statements are specific imperatives with a defined deliverable: "Generate five hypothesis statements for why onboarding completion rates are below target," "Prioritize the following 8 feature requests using ICE scoring," "Rewrite this stakeholder update for a non-technical executive audience in under 150 words."

Context: The context element provides the situational information the model cannot know without you telling it — your product, your users, your current state, the relevant history or data, and the business goal the task is in service of. Context is not background — it is the minimum information required to make the task solvable. A context for a prioritization task should include current OKRs, user segment, and key constraints. A context for a user story task should include the product area, the user's current-state behavior, and the capability gap you are trying to address.

Constraints: Constraints tell the model what it must NOT do and which boundaries it must stay within. They are systematically under-used in product prompts and are one of the highest-leverage elements. Constraints include: word count limits, format restrictions, technological or capacity limitations that must be respected, decisions that are already made and should not be re-opened, and scope boundaries ("focus only on the mobile experience, not the desktop application"). Without explicit constraints, the model will make plausible assumptions — which may or may not match your actual constraints.

Output format: Telling the model exactly how to format its response is not micromanagement — it is a precision tool. An output format instruction saves you post-processing time, ensures the output is directly usable in your workflow, and prevents the model from choosing a format that buries the analysis you need. For PM work, common output formats include: markdown tables, numbered lists, user story format, Given/When/Then acceptance criteria, decision matrix with weighted columns, executive summary in bullet points, or RICE score table.

Before/after illustration: Here is the same underlying request written without structure and with full TCCO structure:

Without structure: "Can you help me prioritize the features on my roadmap?"

With TCCO structure: "You are a senior product manager. Task: Prioritize the following 6 feature requests for our Q3 roadmap using ICE scoring (Impact, Confidence, Ease, each scored 1–10). Context: Our product is a B2B project management SaaS for construction firms. Our Q3 OKR is to improve 30-day retention by 15 percentage points. Current 30-day retention is 42%. Constraints: Feature #4 (API integrations) is already committed to an enterprise client and cannot be removed regardless of score. Features #5 and #6 are dependencies — #6 cannot be scheduled before #5. Output format: Return a markdown table with columns: Feature | Impact | Confidence | Ease | ICE Score | Rationale. Sort by ICE score descending. Add a 'Dependencies' column flagging where #4, #5, #6 constraints apply."

The structured version produces a directly usable prioritization table in one pass. The unstructured version requires multiple follow-up exchanges.
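
For PMs comfortable with light scripting, the TCCO structure is simple enough to encode so that no element can be silently forgotten. Here is a minimal Python sketch under that assumption; the class and field names are our own shorthand, not a standard.

from dataclasses import dataclass, field

@dataclass
class TCCOPrompt:
    """Assemble a Task / Context / Constraints / Output-format prompt."""
    task: str
    context: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)
    output_format: str = ""

    def render(self) -> str:
        # Every element is emitted under an explicit label, so a missing
        # element shows up as a visible gap rather than a silent omission.
        parts = [f"Task: {self.task}"]
        parts.append("Context:\n" + "\n".join(f"- {c}" for c in self.context))
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in self.constraints))
        parts.append(f"Output format: {self.output_format}")
        return "\n\n".join(parts)

prompt = TCCOPrompt(
    task="Prioritize the following 6 feature requests using ICE scoring (1-10 each).",
    context=["Product: B2B project management SaaS for construction firms",
             "Q3 OKR: improve 30-day retention by 15 percentage points"],
    constraints=["Feature #4 (API integrations) is contractually committed"],
    output_format="Markdown table: Feature | Impact | Confidence | Ease | ICE Score | Rationale",
)
print(prompt.render())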

Hands-On Steps

  1. Take a prompt you have used in the last week that produced a mediocre output. Deconstruct it: Does it have a Task? Context? Constraints? Output format? Identify which elements are missing.
  2. Rewrite the prompt using the full TCCO structure. For each element, ask: "Am I being specific enough that there is only one reasonable interpretation?"
  3. For the Context element, ask: "What is the minimum information the AI needs to make this task solvable?" Resist adding more than that — context should be targeted, not comprehensive.
  4. For the Constraints element, list at least two constraints: one "must not" (something to exclude) and one boundary (a scope or format limit).
  5. For the Output format, specify the exact structure you want: table, numbered list, sentence format, word count limit. If you want a table, specify the column names.
  6. Run the original prompt and the TCCO-structured prompt side by side. Rate each output on: relevance, specificity, usability without editing, and accuracy to constraints.

Prompt Examples

Prompt (full TCCO structure):

You are a senior business analyst with expertise in enterprise SaaS.

Task: Write 3 user stories for the feature described below, each targeting a different user role.

Context:
- Product: B2B document management SaaS for legal firms
- Feature: Automated document version control with audit trail
- User roles: Legal assistant, Associate attorney, Partner
- Business goal: Reduce time spent on document version reconciliation (currently 45 min/matter on average)
- Current state: Users manually track versions by renaming files with date suffixes; no audit trail exists

Constraints:
- Each story must reference the user's specific workflow, not just the general feature
- Do not include implementation details (how it is built) — focus on user value
- Each story must be independently deliverable (not dependent on the others being done first)

Output format:
For each story:
- User story: As a [role], I want [capability] so that [outcome]
- Rationale: Why this matters specifically for this role (2 sentences)
- Acceptance criteria: 3 Given/When/Then criteria

Expected output: Three role-specific user stories, each with distinct workflow rationale, independently scoped, with testable acceptance criteria — ready for backlog grooming without further editing.


Prompt (TCCO for exec communication):

You are a product manager known for clear executive communication.

Task: Rewrite the technical status update below for a non-technical Chief Revenue Officer audience.

Context:
- The CRO cares about: deal velocity, revenue impact, customer commitments, and competitive positioning
- The update will be read in a 2-minute email scan before a leadership meeting
- Original update was written by an engineering lead for an internal technical audience

Constraints:
- Maximum 120 words
- No technical jargon (no: API, webhooks, latency, microservices, CI/CD, infrastructure)
- Do not change the facts — only change the language and framing
- Must include: current status, business impact, and one action item for the CRO if needed

Original technical update:
[Paste technical update here]

Output format: Single paragraph, plain language, ending with a bolded "Action required:" line (or "No action required" if no CRO action is needed).

Expected output: A clean 100–120 word executive update that frames the technical work in revenue and customer terms, with a clear action item call-out — ready to paste into the leadership email.

Learning Tip: When you are in a hurry and cannot write a full TCCO prompt, at minimum include your Constraints and Output format. These two elements are the most frequently missing and produce the most dramatic improvement when added. Task and Context are usually at least implicit in what you write; Constraints and Output format almost never are.


Output Shaping — Getting AI to Produce Tables, Prioritized Lists, User Stories, and Decision Matrices

Output shaping is the art of specifying exactly what the AI's response should look like before the model generates it. Most product managers under-invest in this element — they specify what they want to know but not what form they want the answer in. The result is output that may contain the right information but requires significant reformatting, editing, or extraction before it can be used. With output shaping, you get a directly usable deliverable in the first pass.

The foundational principle is: the more precisely you specify the output format, the less post-processing you need to do. This is not about controlling the AI's reasoning — it is about controlling the shape of the result. You are telling the model the output format so that it can structure its reasoning to produce that format, rather than choosing a format that may or may not match your workflow.

Tables are the most useful output format for prioritization, comparison, and requirements work. When requesting a table, always specify: the column names, what each column should contain, and the sort order. "Return a table" without this specification will produce a generic table that probably does not have the columns you need. "Return a markdown table with columns: Feature | User Segment | Business Value (1–5) | Effort (1–5) | Recommendation, sorted by Business Value descending" produces exactly what you need.
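
A side benefit of strict table specifications is that the output becomes machine-readable, which pays off once you start chaining prompts (covered below). Here is a minimal Python sketch, assuming the model returned a markdown table exactly as specified:

def parse_markdown_table(text: str) -> list[dict[str, str]]:
    """Parse a simple markdown table into a list of row dictionaries."""
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    header = [h.strip() for h in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[2:]:  # lines[1] is the |---|---| separator row
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

table = """
| Feature | ICE Score |
|---------|-----------|
| Self-serve onboarding | 432 |
| Notification center | 280 |
"""
print(parse_markdown_table(table))
# [{'Feature': 'Self-serve onboarding', 'ICE Score': '432'}, ...]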

Prioritized lists work best when you specify the ranking criteria and the format of each list item. "Return a numbered list of 5 opportunities, each formatted as: [Opportunity Title] — [one-sentence description] — [ranking rationale in 10–15 words]" produces a clean, scannable list ready for presentation.

User stories require you to specify the format explicitly: "Format each story as: User story (As a / I want / So that) + Rationale (2 sentences) + Acceptance Criteria (Given/When/Then, 3–5 criteria)." Without this, the model may write stories in prose narrative form or omit acceptance criteria.

Decision matrices are one of the most powerful but rarely-used AI output formats for product managers. A well-specified decision matrix prompt produces a structured evaluation that you can take directly into a stakeholder meeting. Specify: the decision options as rows, the evaluation criteria as columns, the scoring method, and whether you want a recommended option with rationale.

Chaining output formats across multiple prompts — using the output of one prompt as the structured input for the next — is a technique that enables you to run multi-step analysis without retyping. If you ask the AI to produce a table of feature opportunities in one prompt, you can reference that table in the next prompt: "Using the table from the previous message, generate a user story for the top-ranked opportunity." This chaining pattern turns AI into a workflow tool rather than a one-off question-answering service.
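
In a chat interface, chaining means referencing "the table from the previous message"; in a scripted workflow, it means passing the first response back in as literal text. The sketch below assumes a call_llm helper that you would wire to your provider's SDK; the stand-in body here just echoes the prompt so the example runs.

def call_llm(prompt: str) -> str:
    """Stand-in for your provider's SDK call; swap in the real client here."""
    return f"[model response to: {prompt[:40]}...]"

# Step 1: ask for a structured, parseable list of opportunities.
opportunities = call_llm(
    "Identify the top 5 product opportunities from the research themes below. "
    "Output only a numbered list, one opportunity per line.\n\n"
    "Research themes: ..."
)

# Step 2: feed the structured output back in as explicit context.
story = call_llm(
    "Here is a ranked list of product opportunities:\n\n"
    f"{opportunities}\n\n"
    "Write a user story (As a / I want / So that) for opportunity #1."
)
print(story)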

Hands-On Steps

  1. Review five recent AI tasks where you edited or reformatted the output before using it. For each, write what output format specification would have eliminated that editing.
  2. Build an "output format library" — a collection of 8–10 format specifications for your most common PM deliverables. Include: format name, column/section specification, sort order, and a one-line usage note.
  3. Practice decision matrix prompts: take a current product decision (build vs. buy, feature A vs. feature B, now vs. later) and specify a decision matrix with at least 4 weighted criteria columns.
  4. Practice chaining: run a two-step prompt sequence where step 2 explicitly references the output of step 1. Confirm the model picks up the structure correctly.
  5. Test table prompts with and without explicit column specifications. Compare the column choices the model makes when unconstrained vs. what it produces with your specifications.
  6. For your most frequent PM deliverable (user stories, sprint summaries, stakeholder updates), write a canonical output format specification. Save it as a reusable snippet.

Prompt Examples

Prompt (decision matrix):

You are a senior product manager preparing a build-vs-buy analysis.

Task: Evaluate these three options for adding a reporting module to our B2B SaaS product: (A) Build in-house, (B) License Looker and embed it, (C) License Metabase and embed it.

Context: We serve mid-market B2B clients in logistics. Reporting is a frequently requested feature but not our core differentiator. Engineering team is 8 FTEs; current sprint capacity is fully committed to roadmap items through Q3.

Constraints: Budget cap for external licensing is $3,000/month. Solution must be embeddable in our existing React frontend. Time to delivery must be under 90 days.

Output format: Return a markdown table with these exact columns:
| Criteria | Weight (1-3) | Option A: Build | Option B: Looker | Option C: Metabase |

Criteria rows to include: Time to delivery | Engineering cost | Licensing cost | Customizability | Maintenance burden | Strategic fit

After the table, add a "Recommendation:" section with your top choice and a 3-sentence rationale.

Expected output: A fully populated decision matrix with weighted scoring for all three options across six criteria, followed by a clear recommendation with business rationale — ready for a product review meeting.


Prompt (chaining output formats):

Step 1 of 2:
You are a senior PM. Using the user research themes below, identify the top 5 product opportunities. Format your output as a numbered list, each item structured as:

[#]. [Opportunity Title] | Affected segment: [segment] | Frequency: High/Med/Low | Severity: High/Med/Low | One-line description

Research themes:
- Construction project managers spend 30+ minutes daily reconciling task status across email, text, and Jira
- Field workers cannot access project documents on mobile without a laptop
- Subcontractors frequently miss task deadlines because they do not have visibility into the project schedule
- Foremen want to submit daily progress reports but find the current form too complex
- Project owners request live cost tracking but the current system updates only weekly

Output only the numbered list. No commentary.

Expected output (Step 1): A clean numbered list of 5 opportunities with consistent formatting — ready to be used as input for Step 2 (e.g., "Using the list above, generate a user story for opportunity #1").

Learning Tip: Save every output format specification that produces a directly usable deliverable in a "format library" note. Categorize by deliverable type: user stories, acceptance criteria, prioritization tables, stakeholder summaries, retrospective themes. After 30 days of deliberate practice, you will have 20–25 format specifications that cover 80% of your AI tasks — eliminating the need to re-specify format from scratch every time.


Prompt Patterns That Work Best for Product Discovery, Planning, and Communication

Beyond individual structural elements, there are recurring prompt patterns that work particularly well for specific product management use cases. These patterns are distilled from the types of tasks that product managers perform most frequently and the prompt structures that consistently produce the highest-quality outputs for those tasks. Learning to recognize which pattern applies to your current task — and being able to construct that pattern quickly — is the hallmark of a prompt-fluent product professional.

Discovery pattern: Discovery prompts work best when you front-load the research data and ask the model to extract and rank insights, not just summarize. The key structural element is an explicit ranking instruction with specified criteria. "Here is [research data]. Identify the top 5 [insight type] and rank by [criterion]" reliably produces a prioritized, actionable synthesis rather than a flat summary. The ranking instruction forces the model to apply analytical judgment, not just organize the content.

Planning pattern: Planning prompts work best when you provide the constraints before the creative generation task. "Given these OKRs, these capacity constraints, and these dependencies, propose a [time period] roadmap with rationale" is significantly more useful than "What should be on our roadmap?" because the stated constraints prevent the model from generating a theoretically ideal roadmap that ignores real-world limits. Always state what is fixed before asking for what should be generated.

Communication pattern: Communication prompts work best when you specify three things: the audience (with their primary concerns), the format and length, and the "what must be true" criteria for the output. "Rewrite this for a [audience who cares about X], in [format and length], ensuring it [specific quality criterion]" is a complete communication prompt. The "what must be true" element is often missing and is the most important — it is the equivalent of an acceptance criterion for the communication task.

Analysis pattern: Analysis prompts work best when you separate the "analyze this" instruction from the "recommend this" instruction, and ask for the analysis first. Getting the model to surface observations, contradictions, and patterns before jumping to recommendations produces deeper analysis. "First, list what the data shows. Then, identify any contradictions or anomalies. Then, recommend next steps." Asking for recommendations before analysis tends to produce recommendations that are rationalized after the fact rather than derived from the data.

Challenge/stress-test pattern: One of the most valuable and underused patterns in product work is asking the AI to argue against your position. "What are the three strongest objections to the approach described above? For each, write a one-paragraph counter-argument that a skeptical executive might make." This pattern surfaces blind spots, prepares you for stakeholder pushback, and often produces insights that improve the approach before you commit to it.
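
Each of the five patterns reduces to a fill-in template, which makes them easy to keep at hand. Here is a minimal Python sketch using str.format placeholders; the template wording paraphrases the patterns above and is illustrative, not canonical.

PATTERNS = {
    "discovery": ("Here is {data}. Identify the top {n} {insight_type} "
                  "and rank them by {criteria}."),
    "planning": ("Fixed constraints (do not change these): {constraints}\n"
                 "Propose a {period} roadmap with rationale for each item."),
    "communication": ("Rewrite this for {audience}, in {format_and_length}, "
                      "ensuring it {quality_criterion}."),
    "analysis": ("First, list what the data shows. Then, identify contradictions "
                 "or anomalies. Then, recommend next steps.\n\nData: {data}"),
    "challenge": ("What are the {n} strongest objections to the approach below? "
                  "For each, write the counter-argument a skeptical executive "
                  "might make.\n\nApproach: {approach}"),
}

prompt = PATTERNS["discovery"].format(
    data="a summary of 14 user interviews",
    n=5,
    insight_type="unmet needs",
    criteria="frequency and severity",
)
print(prompt)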

Hands-On Steps

  1. For each of the five patterns above (discovery, planning, communication, analysis, challenge), write a real prompt from a current work task using that pattern.
  2. Run all five prompts. For each, rate the output quality on specificity, actionability, and how much editing it needed before being usable.
  3. Identify which pattern is most underused in your current workflow. Commit to using it at least once per day for the next two weeks.
  4. Combine patterns: take a planning prompt output and run it through the challenge/stress-test pattern. "Here is the roadmap you just generated. Now argue against it — what are the top 3 risks or flaws in this plan?"
  5. For your next discovery session, use the discovery pattern with real user research data. Explicitly include a ranking instruction with two criteria (e.g., frequency AND severity).
  6. Write a communication prompt for the last stakeholder update you sent. Specify the exact audience role, their primary concerns, the format, and the "must be true" quality criteria.

Prompt Examples

Prompt (discovery pattern):

You are a senior product manager specializing in user research synthesis.

Here is a summary of 14 user interviews with construction project managers. The interviews focused on their current experience with document management and communication in multi-subcontractor projects.

Interview themes extracted:
- 11/14 mention difficulty knowing which document version is current
- 9/14 cite email as their primary coordination tool, with frustration about lost threads
- 8/14 describe a daily 20–30 minute "status update call" they find redundant but feel they cannot eliminate
- 6/14 mention that subcontractors frequently start work without reading the latest drawings
- 5/14 express desire for a mobile-first experience but worry about change management
- 4/14 flag compliance and audit trail requirements driven by client contracts

Identify the top 5 unmet needs from this data. Rank them by (1) frequency across respondents and (2) severity of business impact. For each need, write: Need statement | Frequency rank | Severity assessment | One sentence on why it matters to address this first.

Expected output: Five ranked, named unmet needs with explicit frequency scores, severity assessments, and strategic rationale — a directly usable discovery synthesis.


Prompt (challenge/stress-test pattern):

You are a skeptical, experienced Chief Product Officer reviewing a junior PM's roadmap proposal.

Below is the proposed Q3 roadmap for a B2B construction SaaS. Your job is to challenge it — not to destroy it, but to surface the assumptions, risks, and strategic gaps that need to be addressed before it is presented to the board.

For each of the 4 roadmap themes below, write:
1. The strongest objection an executive might raise
2. The assumption underlying the roadmap item that, if wrong, would undermine its value
3. One question you would require answered before approving this item

Roadmap themes:
1. Self-serve onboarding redesign (target: reduce time-to-first-value from 14 to 7 days)
2. Mobile document access for field workers
3. Automated daily progress report generation
4. Subcontractor portal with project schedule visibility

Do not soften the objections. Be direct and analytically rigorous.

Expected output: Four sets of executive-level objections, each with an identified assumption and a clarifying question — producing a pre-mortem of your roadmap that prepares you for the hardest stakeholder questions.


Prompt (planning pattern with constraints front-loaded):

You are a senior product manager building a 3-month roadmap.

Fixed constraints (do not change these):
- Q3 OKR: Increase 30-day user retention from 42% to 55%
- Engineering capacity: 6 FTEs at 70% availability (one engineer on parental leave July–August)
- Committed deliverables: Enterprise API integration (3-week effort, must ship by end of July per sales contract)
- Frozen: No changes to the mobile app; mobile team is allocated to a separate initiative
- Budget: No new tool purchases this quarter

Available opportunities (choose from these to build the roadmap):
A. Self-serve onboarding redesign — estimated 4 weeks, high retention impact
B. In-app notification center — estimated 3 weeks, medium retention impact
C. Reporting dashboard v2 — estimated 6 weeks, medium-high retention impact
D. Automated daily digest email — estimated 2 weeks, medium retention impact
E. Contextual help tooltips — estimated 2 weeks, medium retention impact
F. Onboarding email sequence redesign — estimated 1 week, medium retention impact

Given the OKR, constraints, and capacity, propose a sequenced 3-month roadmap. For each opportunity:
- If included: state the start week and the rationale
- If excluded: explain why

Format: Timeline table by month, followed by an "Excluded items" section with reasoning.

Expected output: A realistic, constraint-respecting 3-month roadmap with sequencing rationale, showing which opportunities were included and excluded and why — directly usable in a roadmap planning meeting.

Learning Tip: The most powerful prompt pattern combination in product management is discovery + challenge in sequence. First, run a discovery prompt to synthesize insights and generate opportunities. Then, immediately run a challenge prompt on the top recommendation: "What are the three strongest arguments against prioritizing this opportunity?" This two-step pattern surfaces blind spots before you present the recommendation to stakeholders — and it takes under 10 minutes.


Key Takeaways

  • Prompt architecture is the primary determinant of AI output quality for product managers — not the model's capability, but the structure of your instructions.
  • Role framing activates domain-specific knowledge and vocabulary; more specific role frames produce more targeted, contextually relevant outputs.
  • The TCCO structure (Task, Context, Constraints, Output format) is the core framework for building effective product prompts. Constraints and Output format are the most commonly missing elements.
  • Output shaping — specifying exact tables, lists, formats, and column names — eliminates post-processing time and produces directly usable deliverables in one pass.
  • Five domain-specific prompt patterns cover 80% of PM use cases: discovery (rank by criteria), planning (constraints first), communication (audience + format + quality criteria), analysis (observations before recommendations), and challenge (argue against your position).
  • Chaining prompt outputs — using the structured output of one prompt as the structured input of the next — enables multi-step product workflows without re-typing context.
  • A prompt library of tested, structured templates for your most frequent tasks is one of the highest-leverage investments you can make in your AI-assisted practice.