Research-Backed Curriculum

Master the Art of Prompt Engineering

Go from writing basic prompts to architecting production AI systems. A comprehensive guide built on academic research and real PM workflows.

5 Modules · Beginner to Expert
8 Techniques · Research-Validated
20+ Templates · Ready to Use
prompt.txt
Role: Senior Product Manager
Context: B2B SaaS, Series B
Task: Analyze feature requests
Format: RICE prioritization
Constraints: Q1 roadmap only
Examples: 3 past decisions
Output: Ranked list + rationale
🎯 Learning Objectives

1. Recall the core components of effective prompts and how LLMs process instructions
2. Explain when to use different prompting techniques (Zero-shot, Few-shot, CoT, ToT)
3. Apply research-backed techniques to common PM tasks like PRDs and research synthesis
4. Analyze prompt failures and select appropriate mitigation strategies
5. Evaluate prompt quality using systematic frameworks and metrics
6. Create production-ready prompt chains for complex multi-step workflows
Module 1 · Foundations: How LLMs Process Prompts (Remember & Understand)

Before mastering techniques, understand the underlying mechanics. This foundational knowledge enables you to debug failures and adapt to new models.

Core Mental Model
LLMs Predict, They Don't Understand
Large Language Models predict statistically likely continuations of your input. Your prompt shapes the probability distribution of possible outputs. Understanding this helps you write prompts that constrain the output space toward your desired result, rather than hoping for the right answer.
How Your Prompt Shapes Output Probability: a vague prompt leaves a wide output space (unpredictable results); a precise prompt narrows the output space (reliable results).
🧠

The Five Components of Effective Prompts

Research identifies five elements that consistently improve prompt performance. Remember these as the RCFCE framework:

  1. Role - Who is the AI? ("You are a senior product manager...")
  2. Context - What background information is needed?
  3. Format - How should the output be structured?
  4. Constraints - What boundaries or limits apply?
  5. Examples - What does good output look like?

Key Insight

Human working memory holds roughly 3-7 items. Structure prompts to respect this limit: group related instructions, use clear sections, and avoid information overload.

📍

Position Matters

Models exhibit primacy and recency bias: critical instructions belong at the START and END of a prompt. Middle content gets deprioritized in long prompts (the "lost in the middle" effect).

🌡️

Temperature Guide

0-0.3: Factual, deterministic (analysis)
0.4-0.6: Balanced (documentation)
0.7-1.0: Creative (brainstorming)
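As a sketch of how temperature is typically set per request: the example below assumes the OpenAI Python SDK, and the model name is illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str, temperature: float) -> str:
    """Send one prompt with an explicit temperature setting."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

analysis = ask("Summarize this churn data: ...", temperature=0.2)  # factual / deterministic
ideas = ask("Brainstorm 10 onboarding ideas", temperature=0.9)     # creative
```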

💰

Token Economics

Input tokens cost less than output. A well-structured 500-token prompt often outperforms a rambling 2000-token one. Quality over quantity.
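To compare prompt sizes before sending them, a token counter helps. A minimal sketch, assuming the tiktoken library; the file names are hypothetical.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by many recent models

def token_count(text: str) -> int:
    return len(enc.encode(text))

lean_prompt = open("lean_prompt.txt").read()          # hypothetical files
rambling_prompt = open("rambling_prompt.txt").read()
print(token_count(lean_prompt), "vs", token_count(rambling_prompt))
```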

❌ Ineffective (Missing Components)

Write about our new feature.

✓ Effective (All Five Components)

[Role] You are a product marketing manager.
[Context] We're launching a dashboard analytics feature for SMB users.
[Format] Write 3 bullet points, each under 20 words.
[Constraint] Focus on time-saving benefits, avoid technical jargon.
[Example] Like: "See your week's performance in 30 seconds, not 30 minutes."
Module 2 · Core Prompting Techniques (Understand & Apply)

Master the research-validated techniques that form the foundation of effective prompting. Learn when to use each approach and see worked examples.

Technique Selection Flow
Simple task? → Zero-Shot · Need format control? → Few-Shot · Complex reasoning? → Chain-of-Thought
0️⃣
Zero-Shot Prompting
Beginner · Low Token Cost

What it is: Asking the model to perform a task without providing examples. The model relies entirely on its pre-training knowledge.

When to use: Simple, well-defined tasks. Classification. When you want baseline performance before adding complexity.

Worked Example (Beginner)

Classify the following customer feedback as Positive, Negative, or Mixed:

"The new dashboard loads faster, but I can't find the export button anymore."

Classification:
Why This Works
Clear task definition + specific output options + single input to process. No examples needed because classification categories are unambiguous.
💡
Zero-Shot Enhancement

Adding "Let's think step by step" to zero-shot prompts (Zero-Shot CoT) significantly improves performance on reasoning tasks without requiring examples.

🔢
Few-Shot Prompting
Intermediate · Format Control

What it is: Providing 2-5 input-output examples before your actual request. Enables in-context learning where the model learns patterns from demonstrations.

Research findings: Example order matters significantly; place your most important example last. Quality beats quantity: 2-3 good examples often outperform 10 mediocre ones.

Worked Example (Intermediate)

Convert customer complaints into user stories.

EXAMPLE 1:
Complaint: "I can never remember my password and resetting takes forever"
User Story: As a user, I want a simpler password reset process so that I can regain access quickly.

EXAMPLE 2:
Complaint: "The mobile app crashes when uploading photos"
User Story: As a mobile user, I want stable photo uploads so that I can share content without interruption.

EXAMPLE 3:
Complaint: "Finding my past orders requires clicking through too many pages"
User Story: As a returning customer, I want quick access to my order history so that I can track and reorder efficiently.

NOW CONVERT:
Complaint: "The search never finds what I'm looking for even with exact product names"
User Story:
Example Selection Strategy
Three diverse examples covering different complaint types (authentication, stability, navigation). Consistent format throughout. Most similar example to target (search/finding) placed last to leverage recency.
⚠️
Common Mistake

Don't stuff edge cases into examples. Research shows that format consistency matters more than covering every scenario. Save edge cases for separate prompts.

🔗
Chain-of-Thought (CoT) Prompting
Intermediate · Reasoning Tasks

What it is: Prompting the model to show intermediate reasoning steps before reaching a conclusion. Transforms opaque predictions into interpretable reasoning traces.

Research impact: CoT achieved near 100% success on arithmetic tasks where standard prompting achieved only 7.3%. The explicit reasoning process catches errors and enables self-correction.

Worked Example (Intermediate)

Help me prioritize these three features for Q1. Think through this step-by-step.

FEATURES:
A) Dark mode (High user demand, Low effort)
B) API v2 (Medium demand, High effort, enables partnerships)
C) Mobile notifications (Medium demand, Medium effort)

CONTEXT:
- Company goal: Expand enterprise partnerships
- Engineering capacity: 2 major features per quarter
- Current enterprise NPS: 42 (target: 50)

ANALYSIS STEPS:
Step 1: Evaluate each feature against company goal
Step 2: Calculate effort vs impact ratio
Step 3: Consider second-order effects
Step 4: Identify dependencies and sequencing

Provide your step-by-step analysis, then final prioritization with reasoning.
Why CoT Works Here
Prioritization requires weighing multiple factors. Explicit steps prevent the model from jumping to conclusions. The reasoning trace lets you verify logic and catch errors before accepting recommendations.
🔄
Self-Consistency
Advanced · High-Stakes Decisions

What it is: Generate multiple independent reasoning chains for the same problem, then select the most consistent answer. Based on the principle that correct reasoning paths converge on the same conclusion.

When to use: High-stakes decisions, complex analysis where you'd normally want multiple human reviewers. The cost of 3x API calls is trivial compared to wrong strategic decisions.

Worked Example (Advanced)

Evaluate this acquisition opportunity using THREE independent analysis approaches.

TARGET: [Company X] - AI-powered customer support platform
PRICE: $15M, ARR: $2M, Growth: 80% YoY

APPROACH 1 - FINANCIAL LENS:
Analyze purely from financial metrics. Consider revenue multiple, burn rate, unit economics, integration costs. Reach a conclusion independently.

APPROACH 2 - STRATEGIC LENS:
Analyze from strategic fit. Consider market positioning, technology assets, team capabilities, competitive dynamics. Reach a conclusion independently.

APPROACH 3 - RISK LENS:
Analyze from risk perspective. Consider integration complexity, cultural fit, customer overlap, regulatory concerns. Reach a conclusion independently.

SYNTHESIS:
After completing all three analyses:
- If all three align → High confidence recommendation
- If two align → Moderate confidence, examine the dissenting view
- If all differ → Flag for deeper analysis, identify key uncertainties

Provide final recommendation with confidence level.
Module 3 · Advanced Reasoning Frameworks (Apply & Analyze)

Graduate to sophisticated techniques for complex reasoning, multi-step workflows, and agentic applications.

🌳
Tree of Thoughts (ToT)
Advanced · Exploration Required

What it is: Extends Chain-of-Thought by exploring multiple reasoning branches simultaneously. At each step, generate k candidate thoughts, evaluate them, and pursue promising paths. Enables backtracking when a path fails.

Research finding: ToT significantly outperforms CoT on creative problem-solving and puzzles. The branching structure mirrors how humans explore solution spaces.

Tree of Thoughts structure: Initial Problem → Branch A → Solution A1; Initial Problem → Branch B → Solution B1.
Worked Example (Advanced)

Design a pricing strategy for our new enterprise tier. Use Tree of Thoughts exploration.

CONTEXT:
- Current pricing: $49/user/month (Professional)
- Target: Enterprise companies (500+ employees)
- Competitors: $75-150/user/month for enterprise

EXPLORATION PROCESS:

BRANCH 1 - VALUE-BASED PRICING:
Generate 3 different value-based approaches. Evaluate each on: customer perception, revenue potential, competitive positioning. Score 1-10.

BRANCH 2 - COMPETITIVE PRICING:
Generate 3 competitive positioning strategies. Evaluate each on the same criteria. Score 1-10.

BRANCH 3 - HYBRID MODELS:
Generate 3 hybrid approaches combining elements. Evaluate each. Score 1-10.

PRUNING: Eliminate approaches scoring below 6 on any dimension.

DEEP DIVE: For top 2 surviving approaches, explore implementation details, risks, and customer reaction scenarios.

FINAL SYNTHESIS: Recommend primary strategy with fallback option.
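The same explore-score-prune loop can be orchestrated in code rather than inside a single prompt. A minimal sketch, assuming a hypothetical llm(prompt) helper that returns text and a model that replies to the scoring prompt with a bare number.

```python
def llm(prompt: str) -> str:
    """Hypothetical helper wrapping a single model call."""
    raise NotImplementedError

def score_thought(problem: str, thought: str) -> float:
    """Ask the model to rate a candidate approach; assumes it replies with a number only."""
    reply = llm(f"Problem: {problem}\nCandidate approach: {thought}\nRate 1-10. Reply with a number only.")
    return float(reply)

def tree_of_thoughts(problem: str, k: int = 3, keep: int = 2, depth: int = 2) -> list[str]:
    """Breadth-first ToT: branch k thoughts per node, score them, prune to the best paths."""
    frontier = [problem]
    for _ in range(depth):
        scored = []
        for path in frontier:
            for _ in range(k):  # branch: k candidate thoughts per node
                thought = llm(f"{path}\n\nPropose ONE distinct next approach:")
                scored.append((score_thought(problem, thought), f"{path}\n-> {thought}"))
        scored.sort(key=lambda pair: pair[0], reverse=True)  # evaluate
        frontier = [path for _, path in scored[:keep]]       # prune low-scoring branches
    return frontier
```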
ReAct (Reasoning + Acting)
Expert · Tool Use & Agents

What it is: Framework that interleaves reasoning traces with actions. The model thinks about what to do, executes an action (API call, search, calculation), observes the result, then reasons about next steps. Foundation of modern AI agents.

Research impact: ReAct overcomes hallucination in knowledge tasks by grounding responses in actual data. Achieves 34% improvement over imitation learning on decision-making benchmarks.

ReAct Pattern (Expert)

Answer this question using the ReAct framework with available tools.

QUESTION: What was the revenue growth rate of our top competitor last quarter?

AVAILABLE TOOLS:
- search(query): Search the web
- calculate(expression): Perform calculations
- lookup(company, metric): Access financial database

FORMAT: Alternate between Thought, Action, and Observation.

Thought 1: I need to identify who our top competitor is, then find their revenue data.
Action 1: lookup("competitor_list", "market_share")
Observation 1: [Results: CompanyX (32%), CompanyY (28%), Our Company (25%)]

Thought 2: CompanyX is our top competitor. Now I need their Q3 and Q4 revenue.
Action 2: lookup("CompanyX", "quarterly_revenue_2024")
Observation 2: [Q3: $45M, Q4: $52M]

Thought 3: I can now calculate the growth rate.
Action 3: calculate("(52-45)/45 * 100")
Observation 3: 15.56%

Final Answer: Our top competitor (CompanyX) had a revenue growth rate of 15.6% last quarter.
Why ReAct Matters for PMs
This pattern underpins modern AI assistants. Understanding it helps you design AI-powered features, debug agent failures, and communicate with engineering about AI system architecture.
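In production, this pattern becomes a loop: the model emits an Action, your code executes it and feeds back an Observation. A minimal sketch with hypothetical tool implementations, a hypothetical llm helper, and a deliberately naive action parser.

```python
import re

def search(query: str) -> str: ...                 # hypothetical tool implementations
def calculate(expression: str) -> str: ...
def lookup(company: str, metric: str) -> str: ...

TOOLS = {"search": search, "calculate": calculate, "lookup": lookup}

def react_loop(llm, question: str, max_steps: int = 8) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript + "\nContinue with the next Thought and Action, or a Final Answer.")
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:")[-1].strip()
        match = re.search(r"Action \d+: (\w+)\((.*)\)", step)  # naive parser for Action lines
        if match:
            name, raw_args = match.groups()
            args = [a.strip(' "') for a in raw_args.split(",")]
            observation = TOOLS[name](*args)                    # execute the tool
            transcript += f"Observation: {observation}\n"       # ground the next Thought
    return "No answer within step budget"
```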
🕸️
Graph of Thoughts (GoT)
Expert · Non-Linear Reasoning

What it is: Extends Tree of Thoughts by allowing non-linear dependencies between reasoning nodes. Thoughts can connect back to earlier nodes or merge paths, forming a graph structure. This models real-world reasoning where insights from one branch inform another.

Research finding: GoT excels at sorting, set operations, and tasks requiring aggregation of partial solutions. The graph structure enables operations impossible with linear or tree structures.

Graph of Thoughts: Non-Linear Dependencies
Start → A and B in parallel → Merge (A+B) → Result, with a feedback loop back to earlier nodes.
Worked Example (Expert)

Analyze our product-market fit using Graph of Thoughts to handle interdependencies.

PROBLEM: Determine if we should pivot our B2B SaaS to focus on SMB vs Enterprise.

GRAPH STRUCTURE:

NODE A - Market Analysis: Analyze SMB market size, growth, and accessibility.
NODE B - Market Analysis: Analyze Enterprise market size, growth, and accessibility.
NODE C - Capability Assessment (depends on A, B): Given market insights from A and B, assess our current capabilities and gaps for each segment.
NODE D - Competitive Position (depends on A, B): Analyze competitive density and our differentiation potential in each segment.
NODE E - Synthesis (merges C, D): Combine capability gaps with competitive positioning to identify the path of least resistance.
NODE F - Refinement (feedback from E to A, B): Based on synthesis, what additional market data would change our confidence? Re-examine critical assumptions.
NODE G - Final Recommendation (depends on E, F): Provide recommendation with confidence level and key dependencies.

Execute this graph, showing the reasoning at each node and how insights propagate.
When GoT Beats Linear Approaches
Strategic decisions have circular dependencies - market size affects strategy, but strategy also determines which market data matters. GoT captures this by allowing feedback loops and merging parallel analyses into unified insights.
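Executing such a graph in code is dependency-ordered traversal where each node's prompt receives its parents' outputs (feedback edges are unrolled into extra refinement nodes to keep the graph acyclic). A minimal sketch, assuming a hypothetical llm helper; node prompts are abbreviated.

```python
def llm(prompt: str) -> str:
    """Hypothetical helper wrapping a single model call."""
    raise NotImplementedError

# Each node: (prompt template, parent nodes whose output it consumes)
GRAPH = {
    "A": ("Analyze the SMB market: size, growth, accessibility.", []),
    "B": ("Analyze the Enterprise market: size, growth, accessibility.", []),
    "C": ("Given these analyses, assess our capability gaps per segment:\n{deps}", ["A", "B"]),
    "D": ("Given these analyses, assess competitive density per segment:\n{deps}", ["A", "B"]),
    "E": ("Merge capability gaps and competitive position into one recommendation:\n{deps}", ["C", "D"]),
}

def run_graph(graph: dict) -> dict[str, str]:
    results: dict[str, str] = {}
    pending = dict(graph)
    while pending:
        for node, (template, parents) in list(pending.items()):
            if all(p in results for p in parents):  # all parents resolved → node is ready
                deps = "\n\n".join(f"[{p}] {results[p]}" for p in parents)
                results[node] = llm(template.format(deps=deps))
                del pending[node]
    return results
```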
⛓️
Prompt Chaining
Advanced · Production Workflows

What it is: Breaking complex tasks into specialized prompts where each output feeds into the next. Each prompt does one thing well. Enables optimization, validation, and debugging at each step.

When to use: Complex multi-step workflows, production systems, when you need intermediate validation, or when different steps require different parameters (e.g., low temp for extraction, higher for synthesis).

Three-Stage Chain: Raw Data → Extract (temp 0.1) → Analyze (temp 0.3) → Synthesize (temp 0.5) → Output
Why Chaining Outperforms Single Prompts

Each step uses optimized parameters. Intermediate outputs can be validated. Errors are isolated and debuggable. You can swap individual steps without rebuilding the entire flow. This is how production AI systems are built.
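A minimal sketch of the three-stage chain above, assuming the OpenAI Python SDK; the model name and the validation gate are illustrative.

```python
from openai import OpenAI

client = OpenAI()

def run(prompt: str, temperature: float) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

def feedback_chain(raw_feedback: str) -> str:
    # Stage 1 - Extract: low temperature for faithful extraction
    facts = run(f"Extract every distinct pain point, one per line:\n{raw_feedback}", 0.1)
    assert facts.strip(), "validation gate: extraction must not be empty"  # illustrative check
    # Stage 2 - Analyze: cluster the extracted facts
    themes = run(f"Cluster these pain points into themes with counts:\n{facts}", 0.3)
    # Stage 3 - Synthesize: slightly higher temperature for a readable narrative
    return run(f"Write a 3-bullet executive summary of these themes:\n{themes}", 0.5)
```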

| Technique | Best For | Token Cost | Complexity |
|---|---|---|---|
| Zero-Shot | Simple tasks, classification, baselines | Low | Beginner |
| Few-Shot | Format consistency, style matching | Medium | Beginner |
| Chain-of-Thought | Math, logic, step-by-step reasoning | Medium | Intermediate |
| Self-Consistency | High-stakes decisions, verification | High (3x+) | Intermediate |
| Tree of Thoughts | Creative problems, exploration | High | Advanced |
| Graph of Thoughts | Interdependent analyses, feedback loops | High | Expert |
| ReAct | Tool use, grounded responses | Variable | Expert |
| Prompt Chaining | Production workflows, complex pipelines | Variable | Advanced |
Module 4 · PM-Specific Applications (Apply & Create)

Ready-to-use templates for everyday PM tasks. Each template applies the techniques you've learned to real workflows.

Strategy

Strategic PRD Generator

Executive-level PRD with business case, not just requirements

You are a Staff PM at a [stage] [industry] company with deep experience shipping products at scale.

Generate a strategic PRD for: [Feature Name]

CONTEXT:
- Company OKRs this quarter: [OKRs]
- Target user segment: [Persona + segment size]
- Current pain point severity: [Data if available]
- Competitive pressure: [What competitors are doing]
- Technical constraints: [Known limitations]

THINK STEP-BY-STEP before writing:
1. Why now? What's the strategic timing rationale?
2. What's the cost of NOT doing this?
3. How does this ladder to company strategy?

OUTPUT STRUCTURE:
1. Executive Summary (3 sentences: problem, solution, expected impact)
2. Strategic Context & Opportunity Sizing
3. User Problem Statement with evidence
4. Proposed Solution with scope boundaries
5. Success Metrics (leading and lagging indicators)
6. Key Risks and Mitigations
7. Dependencies and Stakeholders
8. Open Questions requiring resolution

Format each section with clear headers. Be specific about metrics and timelines.
Research

User Research Synthesis

Transform interview transcripts into actionable insights

You are a senior UX researcher with expertise in qualitative analysis. Synthesize these user interviews using a structured chain approach.

STEP 1 - EXTRACT (be exhaustive):
- Direct quotes that reveal pain points
- Emotional language and frustration indicators
- Workarounds users have created
- Unmet needs (stated and unstated)
- Moments of delight or satisfaction
- Frequency and severity indicators

STEP 2 - ANALYZE (find patterns):
- Cluster similar themes across participants
- Identify: Universal (80%+), Common (50-80%), Niche (<50%)
- Note contradictions between users
- Map to user journey stages
- Quantify where possible (X of Y users mentioned...)

STEP 3 - SYNTHESIZE (make actionable):
- Top 3 insights with supporting evidence
- Prioritized opportunity areas
- Recommended next steps
- Open questions for follow-up research
- Segments that emerged from data

TRANSCRIPTS:
[Paste transcripts here - include participant IDs if available]

Output format: Use headers, bullet points, and include direct quotes as evidence.
Analysis

Competitive Intelligence Deep Dive

Multi-perspective competitive analysis using ToT

Analyze the competitive landscape for [Your Product] using Tree of Thoughts with multiple strategic lenses.

COMPETITORS TO ANALYZE: [List 3-5 competitors]
YOUR CURRENT POSITIONING: [Brief description]

BRANCH 1 - FEATURE COMPARISON (Objective):
- Create feature matrix across all competitors
- Identify parity features vs differentiators
- Spot feature gaps in the market
- Note pricing/packaging differences

BRANCH 2 - MARKET POSITIONING (Strategic):
- How does each competitor position themselves?
- What customer segments do they target?
- What's their messaging and value prop?
- Where are they investing (job postings, announcements)?

BRANCH 3 - VULNERABILITY ASSESSMENT (Offensive):
- Where is each competitor weakest?
- What are their customers complaining about? (G2, Reddit, Twitter)
- What would it take to win their customers?
- What can we do that they structurally cannot?

BRANCH 4 - THREAT ASSESSMENT (Defensive):
- What could each competitor do to hurt us?
- What are they likely to do in the next 12 months?
- Where are we most vulnerable?
- What early warning signals should we monitor?

SYNTHESIS:
- Top 3 strategic opportunities
- Top 3 competitive threats
- Recommended competitive positioning
- Features to prioritize based on analysis
Prioritization

RICE Scoring with CoT

Rigorous prioritization with explicit reasoning

You are a data-driven PM. Prioritize these features using the RICE framework with explicit Chain-of-Thought reasoning.

CONTEXT:
- Total active users: [Number]
- Planning period: [Quarter/Half]
- Team capacity: [X engineers for Y weeks]

FEATURES TO PRIORITIZE:
[List each feature with brief description]

For EACH feature, think step-by-step:

REACH (users impacted per quarter):
- What % of users would encounter this feature?
- Show calculation: [total users] × [% encountering] = Reach
- Consider: new vs existing users, feature discoverability

IMPACT (0.25 = minimal, 0.5 = low, 1 = medium, 2 = high, 3 = massive):
- What behavior change do we expect?
- How does this affect our north star metric?
- Justify score with specific reasoning

CONFIDENCE (100% = high, 80% = medium, 50% = low):
- Do we have user research supporting this?
- Have competitors validated this works?
- What are the key assumptions?

EFFORT (person-weeks, engineering only):
- Break down: frontend, backend, design, QA
- Include technical debt or dependencies
- Factor in unknowns with buffer

CALCULATION: RICE Score = (Reach × Impact × Confidence) / Effort

OUTPUT FORMAT:
| Feature | Reach | Impact | Confidence | Effort | RICE Score |
Then provide ranked list with reasoning for top 3.
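Because models occasionally slip on arithmetic, it's worth recomputing RICE scores outside the model. A small sketch; the feature names and numbers are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: int         # users impacted per quarter
    impact: float      # 0.25 / 0.5 / 1 / 2 / 3
    confidence: float  # 1.0 / 0.8 / 0.5
    effort: float      # person-weeks

    @property
    def rice(self) -> float:
        # RICE Score = (Reach × Impact × Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

features = [
    Feature("Dark mode", reach=8000, impact=1, confidence=0.8, effort=2),  # 3200
    Feature("API v2", reach=1200, impact=3, confidence=0.5, effort=10),    # 180
]
for f in sorted(features, key=lambda f: f.rice, reverse=True):
    print(f"{f.name}: {f.rice:.0f}")
```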
Communication

Multi-Audience Update Generator

Calibrated communication for different stakeholders

Create three versions of this update optimized for different stakeholder audiences.

RAW UPDATE CONTENT:
[Paste your detailed update here - include all context, metrics, blockers, next steps]

VERSION 1 - EXECUTIVE BRIEFING (C-suite, Board):
Constraints:
- Maximum 3 bullet points
- Lead with business impact and metrics
- Red/Yellow/Green status with one-line explanation
- Only surface decisions that require their input
- No technical jargon whatsoever
- Time to read: under 30 seconds

VERSION 2 - TECHNICAL STAKEHOLDERS (Engineering leads, Architects):
Constraints:
- Include technical context and tradeoffs
- Highlight decisions that need their input
- Note dependencies on other teams
- Include timeline with technical milestones
- Flag technical risks clearly
- Time to read: 2-3 minutes

VERSION 3 - TEAM UPDATE (Product team, direct collaborators):
Constraints:
- Full context and nuance
- All blockers and mitigation plans
- Specific next steps with owners
- Links to relevant docs/tickets
- Open questions and discussion points
- Celebration of wins and acknowledgments

For each version, include:
- Subject line optimized for that audience
- The adapted content
- Call to action (if any)
Strategy

Red Team Analysis

Find fatal flaws before stakeholders do

You are a ruthlessly critical analyst hired to find fatal flaws. Your job is to prevent bad decisions by stress-testing this strategy. Do not be polite - be thorough.

STRATEGY/PROPOSAL TO STRESS-TEST:
[Paste your strategy, PRD, or proposal here]

ATTACK VECTORS - Analyze each systematically:

1. MARKET ASSUMPTIONS:
- Which assumptions about market size are weakest?
- What if customer behavior doesn't change as expected?
- Is the timing assumption valid? Why now?

2. COMPETITIVE RESPONSE:
- How would each major competitor respond?
- What could they do in 90 days to neutralize this?
- Are we underestimating anyone?

3. EXECUTION RISKS:
- Top 3 ways this fails during execution
- What dependencies could break?
- Where are the single points of failure?

4. CUSTOMER REALITY CHECK:
- Would customers actually pay for/use this?
- What's the honest customer reaction?
- Are we solving a vitamin or painkiller problem?

5. INTERNAL POLITICS:
- Who loses if this succeeds?
- What organizational resistance will emerge?
- Do we have the right people to execute?

6. SECOND-ORDER EFFECTS:
- What unintended consequences could occur?
- How might this cannibalize existing products?
- What precedents does this set?

OUTPUT:
1. Top 5 flaws ranked by (Likelihood × Impact)
2. For each flaw: specific mitigation recommendation
3. Kill criteria: What would make you abandon this strategy?
4. One-line verdict: Ship / Rework / Kill
Metrics

Success Metrics Framework

Define comprehensive success metrics for any feature

You are a product analytics expert. Help me define a comprehensive metrics framework for this feature.

FEATURE: [Feature name and brief description]
GOAL: [What success looks like]
LAUNCH DATE: [Planned date]

Create a metrics framework covering:

1. NORTH STAR METRIC:
- What single metric best captures value delivered?
- Why this metric over alternatives?
- Current baseline and target

2. LEADING INDICATORS (measure within 2 weeks):
- Adoption: How many users try the feature?
- Activation: How many complete the key action?
- Engagement: Frequency and depth of use?
- For each: define exactly how to measure, baseline, target

3. LAGGING INDICATORS (measure after 4-8 weeks):
- Retention: Do users come back?
- Business impact: Revenue, cost savings, etc.
- Satisfaction: NPS, CSAT changes
- For each: measurement method, baseline, target

4. GUARDRAIL METRICS (things that shouldn't get worse):
- What existing metrics could this hurt?
- At what threshold do we pause/rollback?
- How will we monitor these?

5. COUNTER METRICS (gaming prevention):
- How could the primary metric be gamed?
- What balancing metric prevents this?

6. SEGMENTATION PLAN:
- Which user segments to analyze separately?
- What cohorts matter for this feature?

OUTPUT FORMAT:
Create a metrics table with: Metric | Type | Measurement Method | Baseline | Target | Owner
Execution

Sprint Planning Assistant

Optimize sprint scope and identify risks

You are an experienced Agile coach. Help optimize our sprint planning.

SPRINT CONTEXT:
- Sprint length: [X weeks]
- Team capacity: [X story points or hours]
- Carry-over from last sprint: [Items if any]
- Sprint goal: [High-level objective]

CANDIDATE ITEMS FOR SPRINT:
[List items with estimates]

KNOWN CONSTRAINTS:
- Team availability: [PTO, meetings, etc.]
- Dependencies on other teams: [List]
- Technical debt budget: [% of capacity]

ANALYZE AND RECOMMEND:

1. SCOPE ANALYSIS:
- Total estimated effort vs capacity
- Buffer recommendation (typically 20-30%)
- Items that fit vs don't fit

2. RISK ASSESSMENT:
- Which items have high estimation uncertainty?
- Which have external dependencies?
- Which are on the critical path?

3. SPRINT COMPOSITION CHECK:
- Mix of feature work vs bugs vs debt
- Distribution across team members
- Parallelization opportunities

4. RECOMMENDED SPRINT SCOPE:
- Must-have items (committed)
- Stretch items (if ahead of schedule)
- Items to defer (with reasoning)

5. SPRINT RISKS TO MONITOR:
- Top 3 risks with mitigation plans
- Early warning signals
- Escalation triggers

6. DEFINITION OF DONE CHECKLIST:
- What must be true for sprint success?
Research

Customer Feedback Analyzer

Extract insights from support tickets, reviews, NPS

You are a customer insights analyst. Analyze this batch of customer feedback and extract actionable insights.

FEEDBACK TYPE: [Support tickets / App reviews / NPS responses / Social mentions]
TIME PERIOD: [Date range]
SAMPLE SIZE: [Number of items]

FEEDBACK DATA:
[Paste feedback items here]

ANALYSIS FRAMEWORK:

1. SENTIMENT DISTRIBUTION:
- Positive / Neutral / Negative breakdown
- Trend vs previous period if known

2. TOPIC CLUSTERING:
- Group feedback into major themes
- For each theme:
  - Frequency (% of total)
  - Average sentiment
  - Representative quotes (3 per theme)
  - Root cause hypothesis

3. URGENCY TRIAGE:
- Critical issues (churn risk, legal, safety)
- High-priority (many users, severe impact)
- Medium-priority (common but manageable)
- Low-priority (edge cases, nice-to-have)

4. FEATURE REQUESTS EXTRACTED:
- Explicit requests with frequency
- Implicit needs (reading between the lines)
- Jobs-to-be-done revealed

5. COMPETITOR MENTIONS:
- Which competitors are mentioned?
- In what context?
- Switching triggers identified

6. ACTIONABLE RECOMMENDATIONS:
- Quick wins (fix this week)
- Short-term improvements (this quarter)
- Strategic considerations (roadmap input)

OUTPUT: Provide executive summary first, then detailed analysis.
Launch

Go-to-Market Brief

Comprehensive launch planning document

You are a product marketing expert. Create a comprehensive GTM brief for this launch.

PRODUCT/FEATURE: [Name]
LAUNCH DATE: [Target date]
LAUNCH TYPE: [Major release / Minor feature / Beta]
PRODUCT DETAILS: [Describe what's launching]
TARGET AUDIENCE: [Primary and secondary segments]

GENERATE GTM BRIEF:

1. POSITIONING & MESSAGING:
- One-line description (tweet-length)
- Value proposition statement
- Key messages (3 max)
- Proof points for each message
- Competitive differentiation

2. LAUNCH TIER & TACTICS:
Based on impact, recommend launch tier:
- Tier 1 (Major): Press, event, full campaign
- Tier 2 (Medium): Blog, email, social push
- Tier 3 (Minor): In-app announcement, changelog

3. CHANNEL STRATEGY:
- Owned channels: [Blog, email, in-app, social]
- Earned channels: [PR, reviews, community]
- Paid channels: [If applicable]
- Timeline for each channel

4. INTERNAL ENABLEMENT:
- Sales team briefing points
- Support team FAQ
- Customer success talking points
- Internal announcement

5. SUCCESS METRICS:
- Launch day metrics
- Week 1 targets
- Month 1 targets

6. RISK MITIGATION:
- Potential negative reactions
- Prepared responses
- Rollback criteria

7. LAUNCH CHECKLIST:
- Pre-launch (T-2 weeks)
- Launch day
- Post-launch (T+1 week)
Technical

Technical Spec Reviewer

PM lens review of engineering specs

You are a senior PM reviewing a technical spec. Your job is to ensure the spec will deliver the intended user value and catch issues before development.

TECHNICAL SPEC:
[Paste the technical spec or design doc here]

ORIGINAL PRD/REQUIREMENTS:
[Paste or summarize the requirements]

REVIEW THROUGH PM LENS:

1. REQUIREMENTS COVERAGE:
- Are all PRD requirements addressed?
- Are there gaps or misunderstandings?
- Any requirements over-engineered?

2. USER EXPERIENCE IMPLICATIONS:
- How will this feel to users?
- Performance implications (load times, latency)?
- Error states and edge cases handled?
- Accessibility considerations?

3. SCOPE ASSESSMENT:
- Is scope appropriate for the timeline?
- What's the MVP vs nice-to-have?
- Are there simpler alternatives considered?

4. RISK IDENTIFICATION:
- Technical risks that could delay launch
- Dependencies on other teams/systems
- Security or privacy concerns
- Scalability for expected load

5. QUESTIONS FOR ENGINEERING:
- Clarifying questions on approach
- Alternative approaches to discuss
- Tradeoffs that need PM input

6. TESTING REQUIREMENTS:
- What should QA focus on?
- Edge cases to specifically test
- Performance benchmarks needed

7. LAUNCH CONSIDERATIONS:
- Feature flags needed?
- Rollout strategy implications
- Monitoring and alerting needs

OUTPUT: Provide categorized feedback with priority (Must-address / Should-discuss / Nice-to-have)
Experimentation

A/B Test Design

Rigorous experiment design and analysis plan

You are an experimentation expert. Design a rigorous A/B test for this feature.

FEATURE/CHANGE: [What we're testing]
HYPOTHESIS: [What we believe will happen and why]
CURRENT STATE: [Control experience]
PROPOSED CHANGE: [Treatment experience]

DESIGN THE EXPERIMENT:

1. HYPOTHESIS STATEMENT:
- Null hypothesis (H0)
- Alternative hypothesis (H1)
- One-tailed or two-tailed test?

2. PRIMARY METRIC:
- What single metric determines success?
- Current baseline value
- Minimum detectable effect (MDE) - what lift matters?

3. SECONDARY METRICS:
- Supporting metrics to monitor
- Expected directional impact

4. GUARDRAIL METRICS:
- Metrics that must not decrease
- Thresholds for stopping the test

5. SAMPLE SIZE CALCULATION:
- Based on baseline, MDE, and significance level
- Estimated traffic and duration needed
- Power analysis (typically 80%)

6. SEGMENTATION:
- User segments to analyze separately
- Stratification if needed

7. RANDOMIZATION:
- User-level vs session-level
- Holdout considerations
- Geographic or temporal factors

8. ANALYSIS PLAN:
- Statistical test to use
- Significance threshold (typically p < 0.05)
- How to handle multiple comparisons
- When to call the test

9. RISKS AND MITIGATIONS:
- Sample ratio mismatch
- Novelty effects
- Seasonal factors
- Interaction with other tests

10. DOCUMENTATION:
- Create experiment brief template
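Step 5's sample size math can be checked independently rather than trusted to the model. A sketch using statsmodels; the baseline and MDE values are illustrative.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.12  # illustrative: 12% conversion in control
mde = 0.01       # illustrative: detect an absolute lift of 1 percentage point

effect = proportion_effectsize(baseline + mde, baseline)
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,   # significance threshold from the template
    power=0.80,   # power target from the template
    ratio=1.0,    # equal-sized control and treatment arms
)
print(f"~{n_per_arm:,.0f} users per arm")
```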
📚

PM Frameworks with AI Prompts

Classic PM frameworks made more powerful with structured prompts. Each framework includes a ready-to-use prompt that applies the methodology systematically.

Discovery

Jobs-to-be-Done (JTBD) Analysis

Uncover the underlying jobs customers are hiring your product to do

You are a JTBD expert trained in Clayton Christensen's methodology. Analyze this product/feature using the Jobs-to-be-Done framework.

PRODUCT/FEATURE: [Name and description]
TARGET USER: [User segment]
CONTEXT: [Usage context and situation]

Apply the JTBD framework systematically:

1. FUNCTIONAL JOBS (What are they trying to accomplish?):
- Core functional job statement: "When [situation], I want to [motivation], so I can [outcome]"
- Related jobs in the job chain
- Job steps (beginning, middle, end)
- Identify underserved job steps

2. EMOTIONAL JOBS (How do they want to feel?):
- Personal dimension: How do they want to feel about themselves?
- Social dimension: How do they want to be perceived by others?
- Emotional jobs that current solutions fail to address

3. CONSUMPTION CHAIN JOBS:
- Purchase and onboarding jobs
- Usage and maintenance jobs
- Upgrade and switching jobs

4. COMPETING SOLUTIONS:
- What are they hiring today to do this job?
- Why would they "fire" the current solution?
- What workarounds have they created?

5. JOB METRICS:
- Speed: How quickly can they get the job done?
- Reliability: How predictably?
- Convenience: How easily?

6. OUTCOME STATEMENTS:
Create 5-10 outcome statements in the format:
"[Direction: Minimize/Maximize] + [Metric] + [Object of control] + [Contextual clarifier]"
Example: "Minimize the time it takes to find relevant information when making a decision"

7. INNOVATION OPPORTUNITIES:
- Underserved outcomes (high importance, low satisfaction)
- Overserved outcomes (low importance, high satisfaction)
- New market opportunities

OUTPUT: Prioritized list of job opportunities with strategic recommendations.
Prioritization

Kano Model Analysis

Categorize features by customer satisfaction impact

You are a product strategist expert in the Kano Model. Analyze these features using Kano classification.

FEATURES TO ANALYZE: [List features with brief descriptions]
USER SEGMENT: [Target users]
PRODUCT CONTEXT: [Product stage, market position]

Apply the Kano Model systematically:

1. MUST-HAVE (Basic Expectations):
- Features whose absence causes extreme dissatisfaction
- Features whose presence doesn't increase satisfaction
- "Table stakes" - customers assume these exist
- Identify which features are must-haves and why

2. PERFORMANCE (Linear Satisfiers):
- Features where more is better
- Direct correlation: better execution = higher satisfaction
- Often the basis for competitive differentiation
- Identify performance features and the dimension that matters

3. DELIGHTERS (Attractive):
- Features that surprise and exceed expectations
- Absence doesn't cause dissatisfaction
- Presence creates disproportionate satisfaction
- Often become tomorrow's must-haves
- Identify potential delighters and why they would delight

4. INDIFFERENT:
- Features customers don't care about either way
- Candidates for cutting or deprioritization
- Identify indifferent features and evidence

5. REVERSE:
- Features some customers actively dislike
- Segment-specific anti-features
- Identify potential reverse features

6. KANO DECAY ANALYSIS:
- Which delighters have become performance features?
- Which performance features are becoming must-haves?
- Time-based evolution of each feature category

7. PRIORITIZATION MATRIX:
| Feature | Category | Development Effort | Recommendation |

8. STRATEGIC RECOMMENDATIONS:
- Features to prioritize this quarter
- Features to cut or defer
- Opportunities for differentiation
- Risks if must-haves are neglected
Discovery

Opportunity Solution Tree

Teresa Torres' framework for continuous discovery

You are a discovery coach trained in Teresa Torres' Opportunity Solution Tree methodology. Help me build an OST.

DESIRED OUTCOME (Business/Product Metric):
[The outcome you're trying to drive]

CURRENT STATE:
- Current metric value: [X]
- Target: [Y]
- Timeline: [When]

USER RESEARCH INPUT:
[Paste interview notes, feedback, or observations]

Build the Opportunity Solution Tree:

1. OUTCOME (Root):
- Clearly defined, measurable outcome
- Explain why this outcome matters
- How does it connect to business goals?

2. OPPORTUNITY SPACE (Branches):
Identify 5-7 distinct opportunity areas from research. For each opportunity:
- Opportunity statement (user need, not solution)
- Evidence from research (quotes, observations)
- Size of opportunity (how many users affected?)
- Frequency (how often does this occur?)

3. OPPORTUNITY PRIORITIZATION:
Rank opportunities by:
- Opportunity sizing score
- Alignment with outcome
- Customer pain intensity
- Recommended focus areas

4. SOLUTIONS (Leaves):
For the top 2-3 opportunities, generate:
- 3-5 potential solutions per opportunity
- Range from quick fixes to bold bets
- Include non-obvious alternatives

5. EXPERIMENTS (Sub-leaves):
For each promising solution:
- Assumption to test
- Smallest experiment to learn
- Success criteria
- Timeline

6. OST VISUALIZATION:
Create a text-based tree structure:
OUTCOME
├── Opportunity 1
│   ├── Solution 1a → Experiment
│   ├── Solution 1b → Experiment
│   └── Solution 1c → Experiment
├── Opportunity 2
│   ├── Solution 2a → Experiment
│   └── Solution 2b → Experiment
└── Opportunity 3
    └── Solution 3a → Experiment

7. RECOMMENDED PATH:
- Highest-potential opportunity to pursue first
- Recommended starting experiment
- Learning goals for next 2 weeks
Strategy

North Star Framework

Amplitude's framework for defining your key product metric

You are a product strategy consultant expert in Amplitude's North Star Framework. Help define our North Star Metric.

COMPANY CONTEXT:
- Company: [Name and description]
- Business model: [How you make money]
- Stage: [Startup/Growth/Enterprise]
- Current focus: [What matters most right now]

USER CONTEXT:
- Primary user: [Main user persona]
- Core value delivered: [What value do users get?]

Apply the North Star Framework:

1. NORTH STAR METRIC CANDIDATES:
Generate 5 potential North Star Metrics; each must:
- Express value delivered to customers
- Be a leading indicator of revenue
- Be measurable
- Be actionable by the product team
For each candidate:
- Metric name and definition
- Why it represents value delivered
- How it connects to revenue
- Potential issues or limitations

2. EVALUATION CRITERIA:
Score each candidate (1-5) on:
- Breadth: Does it capture value for most users?
- Depth: Does it reflect meaningful engagement?
- Leading: Does it predict future business success?
- Game-proof: Is it hard to artificially inflate?
- Actionable: Can the product team influence it?

3. RECOMMENDED NORTH STAR METRIC:
- Selected metric with justification
- Precise definition (how exactly to measure)
- Current baseline
- Target and timeline

4. INPUT METRICS (3-5):
Input metrics are the factors that drive the North Star. For each input metric:
- Name and definition
- How it influences the North Star
- Which team owns it
- Target value

5. NORTH STAR CONSTELLATION:
Visualize as:
        [North Star Metric]
       /      |      |      \
[Input 1][Input 2][Input 3][Input 4]

6. ANTI-METRICS:
What could go wrong if we optimize only for the North Star?
- Potential negative side effects
- Guardrail metrics to monitor

7. COMMUNICATION PLAN:
- How to explain this to the company
- Dashboard requirements
- Review cadence
Strategy

Working Backwards (Amazon PR/FAQ)

Amazon's method for starting with the customer and working backwards

You are trained in Amazon's Working Backwards methodology. Help me create a PR/FAQ document for this product idea.

PRODUCT IDEA: [Brief description]
TARGET CUSTOMER: [Who is this for?]
LAUNCH TIMEFRAME: [Hypothetical launch date]

Create a Working Backwards document:

1. PRESS RELEASE (Write as if launching today):

[CITY, DATE] — [Company] today announced [Product Name], a new [category] that [main benefit]. [Product Name] enables [target customers] to [key capability], which [value delivered].

"[Quote from company leader about customer problem and how this solves it]," said [Name, Title]. "[Second sentence about vision or impact]."

[Product Name] includes:
• [Feature 1 and benefit]
• [Feature 2 and benefit]
• [Feature 3 and benefit]

"[Customer quote about the problem they had and how this helps]," said [Customer Name, Title, Company].

[Product Name] is available [availability details] for [pricing if applicable]. To learn more, visit [URL].

2. FREQUENTLY ASKED QUESTIONS:

CUSTOMER FAQ:
Q: Who is this product for?
Q: What problem does this solve?
Q: How is this different from [competitor/alternative]?
Q: How much does it cost?
Q: How do I get started?
Q: What if it doesn't work for me?

INTERNAL FAQ:
Q: Why should we build this now?
Q: What is the estimated market size?
Q: What are the biggest risks?
Q: What dependencies do we have?
Q: How will we measure success?
Q: What's the 3-year vision?

3. WORKING BACKWARDS VALIDATION:
- Is the customer problem clearly articulated?
- Would the press release excite customers?
- Are the benefits concrete and compelling?
- Does this feel like a press release customers would share?
- What's missing or unclear?

4. RISKS AND OPEN QUESTIONS:
- Technical feasibility concerns
- Go-to-market challenges
- Competitive responses
- Questions that need answers before building
Planning

Impact Mapping

Connect features to business goals through actors and impacts

You are an expert in Gojko Adzic's Impact Mapping methodology. Help create an impact map for this goal.

BUSINESS GOAL: [Specific, measurable goal]
TIMELINE: [When to achieve this]
CURRENT STATE: [Where we are now]

Build the Impact Map:

1. WHY (Goal - Center):
- Restate goal in SMART format
- Why this goal matters to the business
- How we'll measure success
- What happens if we don't achieve it?

2. WHO (Actors - First Ring):
Identify all actors who can influence this goal:
- Primary users who directly affect the goal
- Secondary users who indirectly affect it
- Internal stakeholders
- External parties (partners, competitors)
For each actor:
- Who are they specifically?
- Why do they matter for this goal?
- What's their current relationship with us?

3. HOW (Impacts - Second Ring):
For each key actor, identify behavior changes needed:
- What should they START doing?
- What should they STOP doing?
- What should they do MORE of?
- What should they do DIFFERENTLY?
For each impact:
- How does this behavior change drive the goal?
- What's blocking this behavior today?
- How will we measure behavior change?

4. WHAT (Deliverables - Outer Ring):
For each impact, identify possible deliverables:
- Features that could cause this impact
- Content or communication
- Process changes
- Partnerships or integrations
Rate each deliverable:
- Likelihood of causing the impact (H/M/L)
- Effort to build (H/M/L)
- Dependencies

5. IMPACT MAP VISUALIZATION:
                [GOAL]
                   |
     +-------------+-------------+
     |             |             |
 [Actor 1]     [Actor 2]     [Actor 3]
     |             |             |
 [Impact]      [Impact]      [Impact]
     |             |             |
[Deliverable] [Deliverable] [Deliverable]

6. PRIORITIZATION:
- Which actor-impact pairs are highest leverage?
- Which deliverables should we build first?
- What should we explicitly NOT do?

7. ASSUMPTIONS TO VALIDATE:
- What must be true for this map to work?
- How will we test these assumptions?
Discovery

Value Proposition Canvas

Strategyzer's framework for product-market fit

You are an expert in Strategyzer's Value Proposition Canvas. Help me achieve product-market fit for this product.

PRODUCT: [Name and brief description]
CUSTOMER SEGMENT: [Target customer]

Build the Value Proposition Canvas:

CUSTOMER PROFILE (Right Side):

1. CUSTOMER JOBS:
Functional jobs (tasks to complete):
- [Job 1]
- [Job 2]
Social jobs (how they want to be perceived):
- [Job 1]
- [Job 2]
Emotional jobs (feelings they seek):
- [Job 1]
- [Job 2]
Supporting jobs (buying, learning, etc.):
- [Job 1]
- [Job 2]

2. PAINS (What frustrates them):
- Undesired outcomes or problems
- Obstacles preventing job completion
- Risks they want to avoid
Rank pains: Extreme / Severe / Moderate

3. GAINS (What they want to achieve):
- Required gains (expected baseline)
- Expected gains (what they anticipate)
- Desired gains (beyond expectations)
- Unexpected gains (delighters)
Rank gains: Essential / Nice-to-have

VALUE MAP (Left Side):

4. PRODUCTS & SERVICES:
- Core product/feature
- Supporting services
- Enabling elements

5. PAIN RELIEVERS:
For each significant pain, how do we relieve it?
| Pain | How We Relieve It | Strength (1-10) |

6. GAIN CREATORS:
For each important gain, how do we create it?
| Gain | How We Create It | Strength (1-10) |

FIT ANALYSIS:

7. FIT ASSESSMENT:
- Which pains do we NOT address?
- Which gains do we NOT create?
- Where is our fit strongest?
- Where are the gaps?

8. PRIORITIZATION:
Based on fit analysis:
- Features to double down on
- Features to add
- Features to remove
- Messaging implications

9. COMPETITIVE POSITIONING:
How does our value map compare to alternatives?
Design

CIRCLES Method

Lewis Lin's structured approach to product design questions

You are a product design expert trained in Lewis Lin's CIRCLES Method. Walk through a structured product design for this scenario.

DESIGN CHALLENGE: [Product/feature to design]
COMPANY CONTEXT: [Company and its mission]

Apply the CIRCLES Method:

C - COMPREHEND THE SITUATION:
- What is the company and what do they do?
- What is the product/feature in question?
- Why is this important now?
- What constraints exist (time, resources, platform)?
- Clarifying questions I would ask:
  1. [Question]
  2. [Question]
  3. [Question]

I - IDENTIFY THE CUSTOMER:
- Who are the potential user segments?
- For each segment:
  - Demographics and characteristics
  - Needs and motivations
  - Current behavior
- Selected target segment and justification
- Persona summary for chosen segment

R - REPORT THE CUSTOMER'S NEEDS:
Using the chosen segment:
- List 5-7 customer needs/pain points
- Rank by frequency and severity
- Top 3 needs to focus on:
  1. [Need] - Why it matters
  2. [Need] - Why it matters
  3. [Need] - Why it matters

C - CUT THROUGH PRIORITIZATION:
Apply a prioritization framework:
| Need | Reach | Impact | Confidence | Effort | Score |
- Selected need to solve (with justification)
- What we're explicitly NOT solving (and why)

L - LIST SOLUTIONS:
For the prioritized need, brainstorm solutions:
1. [Solution 1] - Pros/Cons
2. [Solution 2] - Pros/Cons
3. [Solution 3] - Pros/Cons
4. [Solution 4] - Pros/Cons
5. [Solution 5] - Pros/Cons
- Selected solution and reasoning

E - EVALUATE TRADE-OFFS:
For the selected solution:
- Pros and why they matter
- Cons and mitigation strategies
- Risks and assumptions
- Dependencies and prerequisites

S - SUMMARIZE YOUR RECOMMENDATION:
- One-paragraph recommendation
- Key success metrics (2-3)
- MVP scope definition
- Future roadmap considerations
- Open questions and next steps
Module 5 · Evaluation & Quality Assurance (Evaluate & Create)

Systematic approaches to measuring prompt performance and catching failures before production.

📊

Evaluation Dimensions

Assess prompts across five dimensions:

  • Accuracy: Correct outputs vs ground truth
  • Consistency: Same input → similar outputs
  • Robustness: Handles edge cases gracefully
  • Efficiency: Token cost vs quality
  • Usefulness: Actually solves the user's problem
⚠️

Common Failure Modes

Watch for these production failures:

  • Context overflow: Prompt exceeds window
  • Format drift: Output structure degrades
  • Instruction conflict: Contradictory rules
  • Hallucination triggers: Asking for unknowable facts
  • Prompt injection: User input hijacks the prompt's instructions
Pre-Production Checklist
Tested with 20+ diverse inputs covering expected range
Edge cases and failure modes documented
Output parsing validated (no format breaks)
Token usage and cost within budget
Prompt injection vulnerabilities assessed
Fallback behavior defined for API failures
Version controlled with change documentation
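The first checklist item (20+ diverse inputs) is easy to automate as a lightweight eval harness. A minimal sketch, assuming an OpenAI-style client; the prompt, model name, and JSON-format check stand in for whatever output contract your own prompt promises.

```python
import json
from openai import OpenAI

client = OpenAI()
PROMPT_TEMPLATE = (
    'Classify this feedback as Positive, Negative, or Mixed. '
    'Reply as JSON: {{"label": "..."}}\n\nFeedback: {text}'
)

test_cases = [  # illustrative cases; aim for 20+ covering the expected range
    {"text": "Love the new dashboard!", "expected": "Positive"},
    {"text": "Faster, but the export button vanished.", "expected": "Mixed"},
]

passed = 0
for case in test_cases:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(text=case["text"])}],
        temperature=0,   # minimize variance while evaluating
    )
    output = response.choices[0].message.content
    try:
        label = json.loads(output)["label"]       # format check: output must parse
    except (json.JSONDecodeError, KeyError):
        continue                                  # format drift counts as a failure
    passed += label == case["expected"]           # accuracy check vs ground truth

print(f"{passed}/{len(test_cases)} passed")
```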
📋

Quick Reference Card

RCFCE Framework

Role → Who is the AI?
Context → Background info
Format → Output structure
Constraints → Boundaries
Examples → Show good output

Technique Selection

Simple task → Zero-Shot

Format matters → Few-Shot

Reasoning needed → CoT

High stakes → Self-Consistency

Exploration → Tree of Thoughts

Dependencies → Graph of Thoughts

Tools/data → ReAct

Common Mistakes

❌ Vague instructions

❌ No format specification

❌ Leading questions

❌ Too many examples

❌ Contradictory rules

❌ Single-prompt complexity

🎓

Final Assessment

Test your understanding of the concepts from all five modules with the ten questions below.

1. You're building an AI assistant to help your sales team draft personalized outreach emails. The emails need to follow a specific structure, but the tone should adapt to each prospect. What's the best approach?

2. Your prompt asks the AI to "analyze customer feedback and provide insights." The outputs are vague and unhelpful. What's the most likely issue?

3. In which scenario would Chain-of-Thought prompting likely NOT provide significant benefit?

4. You're designing an AI system that will automatically categorize and route incoming support tickets. Which architecture best balances reliability and cost?

5. Your prompt includes instructions at the beginning, 2000 words of context in the middle, and asks for output at the end. The AI frequently ignores some of your initial instructions. What's happening?

6. You need to make a critical go/no-go decision on launching a feature. You want to ensure the AI considers multiple perspectives and doesn't miss important risks. Which approach is best?

7. A stakeholder says "Users want a dark mode feature." Using Jobs-to-be-Done thinking, what question should you prompt the AI to explore first?

8. You're using AI to brainstorm potential names for a new product feature. Your first few attempts returned generic, predictable suggestions. What should you adjust?

9. You need to analyze whether to enter a new market segment. The analysis requires looking at market size, your capabilities, competitive dynamics, and how insights from each inform the others. Which technique handles this interdependency best?

10. Your PRD-writing prompt includes: "Write a comprehensive PRD. Be thorough but concise. Include all details but keep it brief. Make it complete yet succinct." What's wrong with this prompt?
