Strategy
Strategic PRD Generator
Executive-level PRD with business case, not just requirements
You are a Staff PM at a [stage] [industry] company with deep experience shipping products at scale.
Generate a strategic PRD for: [Feature Name]
CONTEXT:
- Company OKRs this quarter: [OKRs]
- Target user segment: [Persona + segment size]
- Current pain point severity: [Data if available]
- Competitive pressure: [What competitors are doing]
- Technical constraints: [Known limitations]
THINK STEP-BY-STEP before writing:
1. Why now? What's the strategic timing rationale?
2. What's the cost of NOT doing this?
3. How does this ladder to company strategy?
OUTPUT STRUCTURE:
1. Executive Summary (3 sentences: problem, solution, expected impact)
2. Strategic Context & Opportunity Sizing
3. User Problem Statement with evidence
4. Proposed Solution with scope boundaries
5. Success Metrics (leading and lagging indicators)
6. Key Risks and Mitigations
7. Dependencies and Stakeholders
8. Open Questions requiring resolution
Format each section with clear headers. Be specific about metrics and timelines.
Research
User Research Synthesis
Transform interview transcripts into actionable insights
You are a senior UX researcher with expertise in qualitative analysis. Synthesize these user interviews using a structured chain approach.
STEP 1 - EXTRACT (be exhaustive):
- Direct quotes that reveal pain points
- Emotional language and frustration indicators
- Workarounds users have created
- Unmet needs (stated and unstated)
- Moments of delight or satisfaction
- Frequency and severity indicators
STEP 2 - ANALYZE (find patterns):
- Cluster similar themes across participants
- Identify prevalence: Universal (80%+ of participants), Common (50-79%), Niche (<50%)
- Note contradictions between users
- Map to user journey stages
- Quantify where possible (X of Y users mentioned...)
STEP 3 - SYNTHESIZE (make actionable):
- Top 3 insights with supporting evidence
- Prioritized opportunity areas
- Recommended next steps
- Open questions for follow-up research
- Segments that emerged from data
TRANSCRIPTS:
[Paste transcripts here - include participant IDs if available]
Output format: Use headers, bullet points, and include direct quotes as evidence.
Analysis
Competitive Intelligence Deep Dive
Multi-perspective competitive analysis using ToT
Analyze the competitive landscape for [Your Product] using Tree of Thoughts with multiple strategic lenses.
COMPETITORS TO ANALYZE: [List 3-5 competitors]
YOUR CURRENT POSITIONING: [Brief description]
BRANCH 1 - FEATURE COMPARISON (Objective):
- Create feature matrix across all competitors
- Identify parity features vs differentiators
- Spot feature gaps in the market
- Note pricing/packaging differences
BRANCH 2 - MARKET POSITIONING (Strategic):
- How does each competitor position themselves?
- What customer segments do they target?
- What's their messaging and value prop?
- Where are they investing (job postings, announcements)?
BRANCH 3 - VULNERABILITY ASSESSMENT (Offensive):
- Where is each competitor weakest?
- What are their customers complaining about? (G2, Reddit, Twitter)
- What would it take to win their customers?
- What can we do that they structurally cannot?
BRANCH 4 - THREAT ASSESSMENT (Defensive):
- What could each competitor do to hurt us?
- What are they likely to do in the next 12 months?
- Where are we most vulnerable?
- What early warning signals should we monitor?
SYNTHESIS:
- Top 3 strategic opportunities
- Top 3 competitive threats
- Recommended competitive positioning
- Features to prioritize based on analysis
Prioritization
RICE Scoring with CoT
Rigorous prioritization with explicit reasoning
You are a data-driven PM. Prioritize these features using the RICE framework with explicit Chain-of-Thought reasoning.
CONTEXT:
- Total active users: [Number]
- Planning period: [Quarter/Half]
- Team capacity: [X engineers for Y weeks]
FEATURES TO PRIORITIZE:
[List each feature with brief description]
For EACH feature, think step-by-step:
REACH (users impacted per quarter):
- What % of users would encounter this feature?
- Show calculation: [total users] × [% encountering] = Reach
- Consider: new vs existing users, feature discoverability
IMPACT (0.25 = minimal, 0.5 = low, 1 = medium, 2 = high, 3 = massive):
- What behavior change do we expect?
- How does this affect our north star metric?
- Justify score with specific reasoning
CONFIDENCE (100% = high, 80% = medium, 50% = low):
- Do we have user research supporting this?
- Have competitors validated this works?
- What are the key assumptions?
EFFORT (person-weeks, engineering only):
- Break down: frontend, backend, design, QA
- Include technical debt or dependencies
- Factor in unknowns with buffer
CALCULATION:
RICE Score = (Reach × Impact × Confidence) / Effort
OUTPUT FORMAT:
| Feature | Reach | Impact | Confidence | Effort | RICE Score |
Then provide ranked list with reasoning for top 3.
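For reference, the RICE arithmetic above can be sketched in a few lines of code. The feature names and numbers below are purely hypothetical, chosen only to show the calculation and the resulting ranking:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort.
    Confidence is expressed as a decimal: 1.0 = high, 0.8 = medium, 0.5 = low."""
    return (reach * impact * confidence) / effort

# Hypothetical backlog: all figures are illustrative, not benchmarks.
features = {
    "Bulk export":     dict(reach=2000, impact=2, confidence=0.8, effort=4),
    "Onboarding tour": dict(reach=5000, impact=1, confidence=1.0, effort=10),
    "Dark mode":       dict(reach=500,  impact=3, confidence=0.5, effort=2),
}

# Rank features by descending RICE score.
ranked = sorted(features.items(),
                key=lambda kv: rice_score(**kv[1]), reverse=True)
for name, f in ranked:
    print(f"{name}: {rice_score(**f):.0f}")
```

Note how a large-reach feature (Onboarding tour) can still lose to a smaller one with higher impact and lower effort; the point of showing the calculation is to make that trade-off explicit.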
Communication
Multi-Audience Update Generator
Calibrated communication for different stakeholders
Create three versions of this update optimized for different stakeholder audiences.
RAW UPDATE CONTENT:
[Paste your detailed update here - include all context, metrics, blockers, next steps]
VERSION 1 - EXECUTIVE BRIEFING (C-suite, Board):
Constraints:
- Maximum 3 bullet points
- Lead with business impact and metrics
- Red/Yellow/Green status with one-line explanation
- Only surface decisions that require their input
- No technical jargon whatsoever
- Time to read: under 30 seconds
VERSION 2 - TECHNICAL STAKEHOLDERS (Engineering leads, Architects):
Constraints:
- Include technical context and tradeoffs
- Highlight decisions that need their input
- Note dependencies on other teams
- Include timeline with technical milestones
- Flag technical risks clearly
- Time to read: 2-3 minutes
VERSION 3 - TEAM UPDATE (Product team, direct collaborators):
Constraints:
- Full context and nuance
- All blockers and mitigation plans
- Specific next steps with owners
- Links to relevant docs/tickets
- Open questions and discussion points
- Celebration of wins and acknowledgments
For each version, include:
- Subject line optimized for that audience
- The adapted content
- Call to action (if any)
Strategy
Red Team Analysis
Find fatal flaws before stakeholders do
You are a ruthlessly critical analyst hired to find fatal flaws. Your job is to prevent bad decisions by stress-testing this strategy. Do not be polite - be thorough.
STRATEGY/PROPOSAL TO STRESS-TEST:
[Paste your strategy, PRD, or proposal here]
ATTACK VECTORS - Analyze each systematically:
1. MARKET ASSUMPTIONS:
- Which assumptions about market size are weakest?
- What if customer behavior doesn't change as expected?
- Is the timing assumption valid? Why now?
2. COMPETITIVE RESPONSE:
- How would each major competitor respond?
- What could they do in 90 days to neutralize this?
- Are we underestimating anyone?
3. EXECUTION RISKS:
- Top 3 ways this fails during execution
- What dependencies could break?
- Where are the single points of failure?
4. CUSTOMER REALITY CHECK:
- Would customers actually pay for/use this?
- What's the honest customer reaction?
- Is this a painkiller (must-have) or just a vitamin (nice-to-have)?
5. INTERNAL POLITICS:
- Who loses if this succeeds?
- What organizational resistance will emerge?
- Do we have the right people to execute?
6. SECOND-ORDER EFFECTS:
- What unintended consequences could occur?
- How might this cannibalize existing products?
- What precedents does this set?
OUTPUT:
1. Top 5 flaws ranked by (Likelihood × Impact)
2. For each flaw: specific mitigation recommendation
3. Kill criteria: What would make you abandon this strategy?
4. One-line verdict: Ship / Rework / Kill
Metrics
Success Metrics Framework
Define comprehensive success metrics for any feature
You are a product analytics expert. Help me define a comprehensive metrics framework for this feature.
FEATURE: [Feature name and brief description]
GOAL: [What success looks like]
LAUNCH DATE: [Planned date]
Create a metrics framework covering:
1. NORTH STAR METRIC:
- What single metric best captures value delivered?
- Why this metric over alternatives?
- Current baseline and target
2. LEADING INDICATORS (measure within 2 weeks):
- Adoption: How many users try the feature?
- Activation: How many complete the key action?
- Engagement: Frequency and depth of use?
- For each: define exactly how to measure, baseline, target
3. LAGGING INDICATORS (measure after 4-8 weeks):
- Retention: Do users come back?
- Business impact: Revenue, cost savings, etc.
- Satisfaction: NPS, CSAT changes
- For each: measurement method, baseline, target
4. GUARDRAIL METRICS (things that shouldn't get worse):
- What existing metrics could this hurt?
- At what threshold do we pause/rollback?
- How will we monitor these?
5. COUNTER METRICS (gaming prevention):
- How could the primary metric be gamed?
- What balancing metric prevents this?
6. SEGMENTATION PLAN:
- Which user segments to analyze separately?
- What cohorts matter for this feature?
OUTPUT FORMAT:
Create a metrics table with: Metric | Type | Measurement Method | Baseline | Target | Owner
Execution
Sprint Planning Assistant
Optimize sprint scope and identify risks
You are an experienced Agile coach. Help optimize our sprint planning.
SPRINT CONTEXT:
- Sprint length: [X weeks]
- Team capacity: [X story points or hours]
- Carry-over from last sprint: [Items if any]
- Sprint goal: [High-level objective]
CANDIDATE ITEMS FOR SPRINT:
[List items with estimates]
KNOWN CONSTRAINTS:
- Team availability: [PTO, meetings, etc.]
- Dependencies on other teams: [List]
- Technical debt budget: [% of capacity]
ANALYZE AND RECOMMEND:
1. SCOPE ANALYSIS:
- Total estimated effort vs capacity
- Buffer recommendation (typically 20-30%)
- Items that fit vs don't fit
2. RISK ASSESSMENT:
- Which items have high estimation uncertainty?
- Which have external dependencies?
- Which are on the critical path?
3. SPRINT COMPOSITION CHECK:
- Mix of feature work vs bugs vs debt
- Distribution across team members
- Parallelization opportunities
4. RECOMMENDED SPRINT SCOPE:
- Must-have items (committed)
- Stretch items (if ahead of schedule)
- Items to defer (with reasoning)
5. SPRINT RISKS TO MONITOR:
- Top 3 risks with mitigation plans
- Early warning signals
- Escalation triggers
6. DEFINITION OF DONE CHECKLIST:
- What must be true for sprint success?
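The scope analysis in step 1 is simple arithmetic: fill the sprint in priority order until you hit capacity minus the buffer. A minimal sketch, using a hypothetical backlog and point values:

```python
def sprint_scope(capacity_points, items, buffer=0.25):
    """Greedily commit items until capacity minus a risk buffer is reached.
    buffer=0.25 reserves 25% of capacity, per the 20-30% guidance above.
    items: list of (name, estimate) tuples, sorted by priority (highest first)."""
    budget = capacity_points * (1 - buffer)
    committed, deferred, used = [], [], 0
    for name, est in items:
        if used + est <= budget:
            committed.append(name)
            used += est
        else:
            deferred.append(name)
    return committed, deferred, budget - used

# Illustrative backlog (story points), already in priority order.
backlog = [("Checkout bugfix", 5), ("Search filters", 8),
           ("Perf audit", 13), ("Settings redesign", 8)]
committed, deferred, slack = sprint_scope(40, backlog)
```

With 40 points of capacity and a 25% buffer, the budget is 30 points, so the first three items fit and the last is deferred; the remaining slack is a natural candidate for stretch items.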
Research
Customer Feedback Analyzer
Extract insights from support tickets, reviews, NPS
You are a customer insights analyst. Analyze this batch of customer feedback and extract actionable insights.
FEEDBACK TYPE: [Support tickets / App reviews / NPS responses / Social mentions]
TIME PERIOD: [Date range]
SAMPLE SIZE: [Number of items]
FEEDBACK DATA:
[Paste feedback items here]
ANALYSIS FRAMEWORK:
1. SENTIMENT DISTRIBUTION:
- Positive / Neutral / Negative breakdown
- Trend vs previous period if known
2. TOPIC CLUSTERING:
- Group feedback into major themes
- For each theme:
- Frequency (% of total)
- Average sentiment
- Representative quotes (3 per theme)
- Root cause hypothesis
3. URGENCY TRIAGE:
- Critical issues (churn risk, legal, safety)
- High-priority (many users, severe impact)
- Medium-priority (common but manageable)
- Low-priority (edge cases, nice-to-have)
4. FEATURE REQUESTS EXTRACTED:
- Explicit requests with frequency
- Implicit needs (reading between the lines)
- Jobs-to-be-done revealed
5. COMPETITOR MENTIONS:
- Which competitors mentioned?
- In what context?
- Switching triggers identified
6. ACTIONABLE RECOMMENDATIONS:
- Quick wins (fix this week)
- Short-term improvements (this quarter)
- Strategic considerations (roadmap input)
OUTPUT: Provide executive summary first, then detailed analysis.
Launch
Go-to-Market Brief
Comprehensive launch planning document
You are a product marketing expert. Create a comprehensive GTM brief for this launch.
PRODUCT/FEATURE: [Name]
LAUNCH DATE: [Target date]
LAUNCH TYPE: [Major release / Minor feature / Beta]
PRODUCT DETAILS:
[Describe what's launching]
TARGET AUDIENCE:
[Primary and secondary segments]
GENERATE GTM BRIEF:
1. POSITIONING & MESSAGING:
- One-line description (tweet-length)
- Value proposition statement
- Key messages (3 max)
- Proof points for each message
- Competitive differentiation
2. LAUNCH TIER & TACTICS:
Based on impact, recommend launch tier:
- Tier 1 (Major): Press, event, full campaign
- Tier 2 (Medium): Blog, email, social push
- Tier 3 (Minor): In-app announcement, changelog
3. CHANNEL STRATEGY:
- Owned channels: [Blog, email, in-app, social]
- Earned channels: [PR, reviews, community]
- Paid channels: [If applicable]
- Timeline for each channel
4. INTERNAL ENABLEMENT:
- Sales team briefing points
- Support team FAQ
- Customer success talking points
- Internal announcement
5. SUCCESS METRICS:
- Launch day metrics
- Week 1 targets
- Month 1 targets
6. RISK MITIGATION:
- Potential negative reactions
- Prepared responses
- Rollback criteria
7. LAUNCH CHECKLIST:
- Pre-launch (T-2 weeks)
- Launch day
- Post-launch (T+1 week)
Technical
Technical Spec Reviewer
PM lens review of engineering specs
You are a senior PM reviewing a technical spec. Your job is to ensure the spec will deliver the intended user value and catch issues before development.
TECHNICAL SPEC:
[Paste the technical spec or design doc here]
ORIGINAL PRD/REQUIREMENTS:
[Paste or summarize the requirements]
REVIEW THROUGH PM LENS:
1. REQUIREMENTS COVERAGE:
- Are all PRD requirements addressed?
- Are there gaps or misunderstandings?
- Any requirements over-engineered?
2. USER EXPERIENCE IMPLICATIONS:
- How will this feel to users?
- Performance implications (load times, latency)?
- Error states and edge cases handled?
- Accessibility considerations?
3. SCOPE ASSESSMENT:
- Is scope appropriate for timeline?
- What's the MVP vs nice-to-have?
- Are there simpler alternatives considered?
4. RISK IDENTIFICATION:
- Technical risks that could delay launch
- Dependencies on other teams/systems
- Security or privacy concerns
- Scalability for expected load
5. QUESTIONS FOR ENGINEERING:
- Clarifying questions on approach
- Alternative approaches to discuss
- Tradeoffs that need PM input
6. TESTING REQUIREMENTS:
- What should QA focus on?
- Edge cases to specifically test
- Performance benchmarks needed
7. LAUNCH CONSIDERATIONS:
- Feature flags needed?
- Rollout strategy implications
- Monitoring and alerting needs
OUTPUT: Provide categorized feedback with priority (Must-address / Should-discuss / Nice-to-have)
Experimentation
A/B Test Design
Rigorous experiment design and analysis plan
You are an experimentation expert. Design a rigorous A/B test for this feature.
FEATURE/CHANGE: [What we're testing]
HYPOTHESIS: [What we believe will happen and why]
CURRENT STATE: [Control experience]
PROPOSED CHANGE: [Treatment experience]
DESIGN THE EXPERIMENT:
1. HYPOTHESIS STATEMENT:
- Null hypothesis (H0)
- Alternative hypothesis (H1)
- One-tailed or two-tailed test?
2. PRIMARY METRIC:
- What single metric determines success?
- Current baseline value
- Minimum detectable effect (MDE) - what lift matters?
3. SECONDARY METRICS:
- Supporting metrics to monitor
- Expected directional impact
4. GUARDRAIL METRICS:
- Metrics that must not decrease
- Thresholds for stopping the test
5. SAMPLE SIZE CALCULATION:
- Based on baseline, MDE, and significance level
- Estimated traffic and duration needed
- Statistical power (typically 80%)
6. SEGMENTATION:
- User segments to analyze separately
- Stratification if needed
7. RANDOMIZATION:
- User-level vs session-level
- Holdout considerations
- Geographic or temporal factors
8. ANALYSIS PLAN:
- Statistical test to use
- Significance threshold (typically p < 0.05)
- How to handle multiple comparisons
- When to call the test
9. RISKS AND MITIGATIONS:
- Sample ratio mismatch
- Novelty effects
- Seasonal factors
- Interaction with other tests
10. DOCUMENTATION:
- Create experiment brief template
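The sample size calculation in step 5 follows the standard two-proportion formula. A minimal sketch using only the Python standard library; the baseline conversion rate and MDE below are illustrative inputs, not recommendations:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p_baseline, mde_abs, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-tailed two-proportion z-test.
    mde_abs is the absolute lift we want to be able to detect."""
    p1, p2 = p_baseline, p_baseline + mde_abs
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / mde_abs ** 2)

# Example: 10% baseline conversion, detect a 2-point absolute lift.
n = sample_size_per_arm(0.10, 0.02)  # roughly 3.8k users per arm
```

Note how quickly the requirement grows as the MDE shrinks (it scales with 1/MDE²), which is why agreeing on the minimum lift that matters is the most consequential decision in the design.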
Discovery
Jobs-to-be-Done (JTBD) Analysis
Uncover the underlying jobs customers are hiring your product to do
You are a JTBD expert trained in Clayton Christensen's methodology. Analyze this product/feature using Jobs-to-be-Done framework.
PRODUCT/FEATURE: [Name and description]
TARGET USER: [User segment]
CONTEXT: [Usage context and situation]
Apply the JTBD framework systematically:
1. FUNCTIONAL JOBS (What are they trying to accomplish?):
- Core functional job statement: "When [situation], I want to [motivation], so I can [outcome]"
- Related jobs in the job chain
- Job steps (beginning, middle, end)
- Identify underserved job steps
2. EMOTIONAL JOBS (How do they want to feel?):
- Personal dimension: How do they want to feel about themselves?
- Social dimension: How do they want to be perceived by others?
- Emotional jobs that current solutions fail to address
3. CONSUMPTION CHAIN JOBS:
- Purchase and onboarding jobs
- Usage and maintenance jobs
- Upgrade and switching jobs
4. COMPETING SOLUTIONS:
- What are they hiring today to do this job?
- Why would they "fire" current solution?
- What workarounds have they created?
5. JOB METRICS:
- Speed: How quickly can they get the job done?
- Reliability: How predictably?
- Convenience: How easily?
6. OUTCOME STATEMENTS:
Create 5-10 outcome statements in format:
"[Direction: Minimize/Maximize] + [Metric] + [Object of control] + [Contextual clarifier]"
Example: "Minimize the time it takes to find relevant information when making a decision"
7. INNOVATION OPPORTUNITIES:
- Underserved outcomes (high importance, low satisfaction)
- Overserved outcomes (low importance, high satisfaction)
- New market opportunities
OUTPUT: Prioritized list of job opportunities with strategic recommendations.
Prioritization
Kano Model Analysis
Categorize features by customer satisfaction impact
You are a product strategist expert in the Kano Model. Analyze these features using Kano classification.
FEATURES TO ANALYZE:
[List features with brief descriptions]
USER SEGMENT: [Target users]
PRODUCT CONTEXT: [Product stage, market position]
Apply Kano Model systematically:
1. MUST-HAVE (Basic Expectations):
- Features whose absence causes extreme dissatisfaction
- Features whose presence doesn't increase satisfaction
- "Table stakes" - customers assume these exist
- Identify which features are must-haves and why
2. PERFORMANCE (Linear Satisfiers):
- Features where more is better
- Direct correlation: better execution = higher satisfaction
- Often the basis for competitive differentiation
- Identify performance features and the dimension that matters
3. DELIGHTERS (Attractive):
- Features that surprise and exceed expectations
- Absence doesn't cause dissatisfaction
- Presence creates disproportionate satisfaction
- Often become tomorrow's must-haves
- Identify potential delighters and why they would delight
4. INDIFFERENT:
- Features customers don't care about either way
- Candidates for cutting or deprioritization
- Identify indifferent features and evidence
5. REVERSE:
- Features some customers actively dislike
- Segment-specific anti-features
- Identify potential reverse features
6. KANO DECAY ANALYSIS:
- Which delighters have become performance features?
- Which performance features are becoming must-haves?
- Time-based evolution of each feature category
7. PRIORITIZATION MATRIX:
| Feature | Category | Development Effort | Recommendation |
8. STRATEGIC RECOMMENDATIONS:
- Features to prioritize this quarter
- Features to cut or defer
- Opportunities for differentiation
- Risks if must-haves are neglected
Discovery
Opportunity Solution Tree
Teresa Torres' framework for continuous discovery
You are a discovery coach trained in Teresa Torres' Opportunity Solution Tree methodology. Help me build an OST.
DESIRED OUTCOME (Business/Product Metric):
[The outcome you're trying to drive]
CURRENT STATE:
- Current metric value: [X]
- Target: [Y]
- Timeline: [When]
USER RESEARCH INPUT:
[Paste interview notes, feedback, or observations]
Build the Opportunity Solution Tree:
1. OUTCOME (Root):
- Clearly defined, measurable outcome
- Explain why this outcome matters
- How does it connect to business goals?
2. OPPORTUNITY SPACE (Branches):
Identify 5-7 distinct opportunity areas from research:
For each opportunity:
- Opportunity statement (user need, not solution)
- Evidence from research (quotes, observations)
- Size of opportunity (how many users affected?)
- Frequency (how often does this occur?)
3. OPPORTUNITY PRIORITIZATION:
Rank opportunities by:
- Opportunity sizing score
- Alignment with outcome
- Customer pain intensity
- Recommended focus areas
4. SOLUTIONS (Leaves):
For the top 2-3 opportunities, generate:
- 3-5 potential solutions per opportunity
- Range from quick fixes to bold bets
- Include non-obvious alternatives
5. EXPERIMENTS (Sub-leaves):
For each promising solution:
- Assumption to test
- Smallest experiment to learn
- Success criteria
- Timeline
6. OST VISUALIZATION:
Create text-based tree structure:
OUTCOME
├── Opportunity 1
│   ├── Solution 1a → Experiment
│   ├── Solution 1b → Experiment
│   └── Solution 1c → Experiment
├── Opportunity 2
│   ├── Solution 2a → Experiment
│   └── Solution 2b → Experiment
└── Opportunity 3
    └── Solution 3a → Experiment
7. RECOMMENDED PATH:
- Highest-potential opportunity to pursue first
- Recommended starting experiment
- Learning goals for next 2 weeks
Strategy
North Star Framework
Amplitude's framework for defining your key product metric
You are a product strategy consultant expert in Amplitude's North Star Framework. Help define our North Star Metric.
COMPANY CONTEXT:
- Company: [Name and description]
- Business model: [How you make money]
- Stage: [Startup/Growth/Enterprise]
- Current focus: [What matters most right now]
USER CONTEXT:
- Primary user: [Main user persona]
- Core value delivered: [What value do users get?]
Apply the North Star Framework:
1. NORTH STAR METRIC CANDIDATES:
Generate 5 potential North Star Metrics, each must:
- Express value delivered to customers
- Be a leading indicator of revenue
- Be measurable
- Be actionable by the product team
For each candidate:
- Metric name and definition
- Why it represents value delivered
- How it connects to revenue
- Potential issues or limitations
2. EVALUATION CRITERIA:
Score each candidate (1-5) on:
- Breadth: Does it capture value for most users?
- Depth: Does it reflect meaningful engagement?
- Leading: Does it predict future business success?
- Game-proof: Is it hard to artificially inflate?
- Actionable: Can the product team influence it?
3. RECOMMENDED NORTH STAR METRIC:
- Selected metric with justification
- Precise definition (how exactly to measure)
- Current baseline
- Target and timeline
4. INPUT METRICS (3-5):
Input metrics are the factors that drive the North Star.
For each input metric:
- Name and definition
- How it influences the North Star
- Which team owns it
- Target value
5. NORTH STAR CONSTELLATION:
Visualize as:
[North Star Metric]
├── [Input 1]
├── [Input 2]
├── [Input 3]
└── [Input 4]
6. ANTI-METRICS:
What could go wrong if we optimize only for the North Star?
- Potential negative side effects
- Guardrail metrics to monitor
7. COMMUNICATION PLAN:
- How to explain this to the company
- Dashboard requirements
- Review cadence
Strategy
Working Backwards (Amazon PR/FAQ)
Amazon's method for starting with the customer and working backwards
You are trained in Amazon's Working Backwards methodology. Help me create a PR/FAQ document for this product idea.
PRODUCT IDEA: [Brief description]
TARGET CUSTOMER: [Who is this for?]
LAUNCH TIMEFRAME: [Hypothetical launch date]
Create a Working Backwards document:
1. PRESS RELEASE (Write as if launching today):
[CITY, DATE] — [Company] today announced [Product Name], a new [category] that [main benefit].
[Product Name] enables [target customers] to [key capability], which [value delivered].
"[Quote from company leader about customer problem and how this solves it]," said [Name, Title]. "[Second sentence about vision or impact]."
[Product Name] includes:
• [Feature 1 and benefit]
• [Feature 2 and benefit]
• [Feature 3 and benefit]
"[Customer quote about the problem they had and how this helps]," said [Customer Name, Title, Company].
[Product Name] is available [availability details] for [pricing if applicable]. To learn more, visit [URL].
2. FREQUENTLY ASKED QUESTIONS:
CUSTOMER FAQ:
Q: Who is this product for?
Q: What problem does this solve?
Q: How is this different from [competitor/alternative]?
Q: How much does it cost?
Q: How do I get started?
Q: What if it doesn't work for me?
INTERNAL FAQ:
Q: Why should we build this now?
Q: What is the estimated market size?
Q: What are the biggest risks?
Q: What dependencies do we have?
Q: How will we measure success?
Q: What's the 3-year vision?
3. WORKING BACKWARDS VALIDATION:
- Is the customer problem clearly articulated?
- Would the press release excite customers?
- Are the benefits concrete and compelling?
- Does this feel like a press release customers would share?
- What's missing or unclear?
4. RISKS AND OPEN QUESTIONS:
- Technical feasibility concerns
- Go-to-market challenges
- Competitive responses
- Questions that need answers before building
Planning
Impact Mapping
Connect features to business goals through actors and impacts
You are an expert in Gojko Adzic's Impact Mapping methodology. Help create an impact map for this goal.
BUSINESS GOAL: [Specific, measurable goal]
TIMELINE: [When to achieve this]
CURRENT STATE: [Where we are now]
Build the Impact Map:
1. WHY (Goal - Center):
- Restate goal in SMART format
- Why this goal matters to the business
- How we'll measure success
- What happens if we don't achieve it?
2. WHO (Actors - First Ring):
Identify all actors who can influence this goal:
- Primary users who directly affect the goal
- Secondary users who indirectly affect it
- Internal stakeholders
- External parties (partners, competitors)
For each actor:
- Who are they specifically?
- Why do they matter for this goal?
- What's their current relationship with us?
3. HOW (Impacts - Second Ring):
For each key actor, identify behavior changes needed:
- What should they START doing?
- What should they STOP doing?
- What should they do MORE of?
- What should they do DIFFERENTLY?
For each impact:
- How does this behavior change drive the goal?
- What's blocking this behavior today?
- How will we measure behavior change?
4. WHAT (Deliverables - Outer Ring):
For each impact, identify possible deliverables:
- Features that could cause this impact
- Content or communication
- Process changes
- Partnerships or integrations
Rate each deliverable:
- Likelihood of causing the impact (H/M/L)
- Effort to build (H/M/L)
- Dependencies
5. IMPACT MAP VISUALIZATION:
[GOAL]
├── [Actor 1]
│   ├── [Impact] → [Deliverable]
│   └── [Impact]
├── [Actor 2]
│   ├── [Impact] → [Deliverable]
│   └── [Impact]
└── [Actor 3]
    └── [Impact] → [Deliverable]
6. PRIORITIZATION:
- Which actor-impact pairs are highest leverage?
- Which deliverables should we build first?
- What should we explicitly NOT do?
7. ASSUMPTIONS TO VALIDATE:
- What must be true for this map to work?
- How will we test these assumptions?
Discovery
Value Proposition Canvas
Strategyzer's framework for product-market fit
You are an expert in Strategyzer's Value Proposition Canvas. Help me achieve product-market fit for this product.
PRODUCT: [Name and brief description]
CUSTOMER SEGMENT: [Target customer]
Build the Value Proposition Canvas:
CUSTOMER PROFILE (Right Side):
1. CUSTOMER JOBS:
Functional jobs (tasks to complete):
- [Job 1]
- [Job 2]
Social jobs (how they want to be perceived):
- [Job 1]
- [Job 2]
Emotional jobs (feelings they seek):
- [Job 1]
- [Job 2]
Supporting jobs (buying, learning, etc.):
- [Job 1]
- [Job 2]
2. PAINS (What frustrates them):
- Undesired outcomes or problems
- Obstacles preventing job completion
- Risks they want to avoid
Rank pains: Extreme / Severe / Moderate
3. GAINS (What they want to achieve):
- Required gains (expected baseline)
- Expected gains (what they anticipate)
- Desired gains (beyond expectations)
- Unexpected gains (delighters)
Rank gains: Essential / Nice-to-have
VALUE MAP (Left Side):
4. PRODUCTS & SERVICES:
- Core product/feature
- Supporting services
- Enabling elements
5. PAIN RELIEVERS:
For each significant pain, how do we relieve it?
| Pain | How We Relieve It | Strength (1-10) |
6. GAIN CREATORS:
For each important gain, how do we create it?
| Gain | How We Create It | Strength (1-10) |
FIT ANALYSIS:
7. FIT ASSESSMENT:
- Which pains do we NOT address?
- Which gains do we NOT create?
- Where is our fit strongest?
- Where are the gaps?
8. PRIORITIZATION:
Based on fit analysis:
- Features to double down on
- Features to add
- Features to remove
- Messaging implications
9. COMPETITIVE POSITIONING:
How does our value map compare to alternatives?
Design
CIRCLES Method
Lewis Lin's structured approach to product design questions
You are a product design expert trained in Lewis Lin's CIRCLES Method. Walk through a structured product design for this scenario.
DESIGN CHALLENGE: [Product/feature to design]
COMPANY CONTEXT: [Company and its mission]
Apply the CIRCLES Method:
C - COMPREHEND THE SITUATION:
- What is the company and what do they do?
- What is the product/feature in question?
- Why is this important now?
- What constraints exist (time, resources, platform)?
- Clarifying questions I would ask:
1. [Question]
2. [Question]
3. [Question]
I - IDENTIFY THE CUSTOMER:
- Who are the potential user segments?
- For each segment:
- Demographics and characteristics
- Needs and motivations
- Current behavior
- Selected target segment and justification
- Persona summary for chosen segment
R - REPORT THE CUSTOMER'S NEEDS:
Using the chosen segment:
- List 5-7 customer needs/pain points
- Rank by frequency and severity
- Top 3 needs to focus on:
1. [Need] - Why it matters
2. [Need] - Why it matters
3. [Need] - Why it matters
C - CUT THROUGH PRIORITIZATION:
Apply prioritization framework:
| Need | Reach | Impact | Confidence | Effort | Score |
- Selected need to solve (with justification)
- What we're explicitly NOT solving (and why)
L - LIST SOLUTIONS:
For the prioritized need, brainstorm solutions:
1. [Solution 1] - Pros/Cons
2. [Solution 2] - Pros/Cons
3. [Solution 3] - Pros/Cons
4. [Solution 4] - Pros/Cons
5. [Solution 5] - Pros/Cons
- Selected solution and reasoning
E - EVALUATE TRADE-OFFS:
For the selected solution:
- Pros and why they matter
- Cons and mitigation strategies
- Risks and assumptions
- Dependencies and prerequisites
S - SUMMARIZE YOUR RECOMMENDATION:
- One-paragraph recommendation
- Key success metrics (2-3)
- MVP scope definition
- Future roadmap considerations
- Open questions and next steps