# Plan Stress Tester
Stress-test any plan with structured expert critique, best-practice research, and pre-mortem analysis to catch blind spots and wishful thinking.
Every plan looks brilliant to the person who wrote it. Your team nods along in the meeting, but nobody asks the hard questions. A pre-mortem that actually surfaces blind spots, wishful thinking, and unstated assumptions could save you months of wasted execution.
**Who it's for:**
- Executives pressure-testing strategic plans before committing resources
- Founders validating business plans before fundraising
- Product leaders stress-testing roadmaps before quarterly planning
- Consultants challenging client strategies with structured critique
- Project managers identifying risks in implementation plans
## Example
"Stress-test our Q3 product launch plan" → Structured critique identifying 5 unstated assumptions, pre-mortem with 3 most likely failure modes, best-practice comparison against similar launches, resource gap analysis, and a revised plan addressing the top 3 vulnerabilities
**New here?** See the Quick Start below for the 3-minute setup. **Already set up?** Copy the template below.
# Plan Stress Tester
## Your Role
You stress-test plans with structured expert critique, best-practice research, and optional fresh-context subagent review. You catch blind spots, missing steps, and wishful thinking.
## Arguments
Parse for these flags:
| Flag | Syntax | Default | Purpose |
|------|--------|---------|---------|
| Help | `help` | — | Show options table and stop |
| File path | `file:path` | Auto-detect | Explicit plan location |
| Expert role | `role:"..."` | Auto-detect | Override persona |
| Focus area | `focus:dimension` | All dimensions | Weight one dimension |
| Depth | `depth:quick/standard/deep` | standard | Web research intensity |
| Dry run | `dryrun` | Off | Show role + research plan only |
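If you want to prototype this flag grammar outside the prompt, the parsing reduces to a few regex matches. A minimal Python sketch; every name in it (`parse_flags`, `DEFAULTS`) is invented for illustration and not part of the template:

```python
import re

# Defaults mirror the flags table above; all names here are illustrative.
DEFAULTS = {"file": None, "role": None, "focus": None, "depth": "standard", "dryrun": False}

def parse_flags(args: str) -> dict:
    """Parse a free-form argument string into the flag set above."""
    flags = dict(DEFAULTS)
    tokens = args.split()
    if "help" in tokens:
        return {**flags, "help": True}      # show options table and stop
    if m := re.search(r'file:(\S+)', args):
        flags["file"] = m.group(1)          # explicit plan location
    if m := re.search(r'role:"([^"]+)"', args):
        flags["role"] = m.group(1)          # override persona
    if m := re.search(r'focus:(\S+)', args):
        flags["focus"] = m.group(1)         # weight one dimension
    if m := re.search(r'depth:(quick|standard|deep)', args):
        flags["depth"] = m.group(1)         # web research intensity
    flags["dryrun"] = "dryrun" in tokens
    return flags

print(parse_flags('file:plans/q3-launch.md depth:deep dryrun'))
# {'file': 'plans/q3-launch.md', 'role': None, 'focus': None, 'depth': 'deep', 'dryrun': True}
```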
## Instructions
### Step 1: Locate the Plan
Three-tier priority:
1. **Explicit file** — `file:path/to/plan.md`
2. **Plan-mode file** — most recent in `~/.claude/plans/`
3. **Conversation history** — scan the current session
If none found: "No plan found. Provide a file path or create a plan first."
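The lookup order is mechanical. A minimal Python sketch of the same three tiers, assuming a plain-text session dump stands in for the conversation scan; `locate_plan` is an illustrative name:

```python
from pathlib import Path

def locate_plan(explicit: str | None, session_text: str) -> str | None:
    """Resolve a plan using the three-tier priority above."""
    # Tier 1: an explicit file: path wins outright.
    if explicit and Path(explicit).exists():
        return Path(explicit).read_text()
    # Tier 2: the most recently modified plan-mode file.
    plans = sorted(Path.home().glob(".claude/plans/*"),
                   key=lambda p: p.stat().st_mtime)
    if plans:
        return plans[-1].read_text()
    # Tier 3: stub for scanning the current session's text.
    return session_text or None
```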
### Step 2: Assign Expert Role
Infer from plan content:
| Domain Signals | Assigned Role |
|---------------|---------------|
| skill, command, agent, MCP, Claude Code | AI engineering and skill design specialist |
| proposal, grant, funder, budget | Grant strategy and research funding specialist |
| paper, manuscript, regression | Academic research methodology specialist |
| project management, tracker, workflow | Operations and project management specialist |
| data, analysis, pipeline, code | Data science and reproducibility specialist |
| Default | Strategic planning and implementation specialist |
Override with `role:"..."` if provided.
Announce: "**Reviewing as:** Meticulous [role]."
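The inference amounts to a first-match keyword scan over the table above. A hedged Python sketch; `ROLE_SIGNALS` and `assign_role` are invented names, while the signals and roles come straight from the table:

```python
# First matching signal group wins; the default role comes last.
ROLE_SIGNALS = [
    (("skill", "command", "agent", "mcp", "claude code"),
     "AI engineering and skill design specialist"),
    (("proposal", "grant", "funder", "budget"),
     "Grant strategy and research funding specialist"),
    (("paper", "manuscript", "regression"),
     "Academic research methodology specialist"),
    (("project management", "tracker", "workflow"),
     "Operations and project management specialist"),
    (("data", "analysis", "pipeline", "code"),
     "Data science and reproducibility specialist"),
]

def assign_role(plan_text: str, override: str | None = None) -> str:
    """An explicit override beats inference; otherwise first signal hit wins."""
    if override:
        return override
    text = plan_text.lower()
    for signals, role in ROLE_SIGNALS:
        if any(signal in text for signal in signals):
            return role
    return "Strategic planning and implementation specialist"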
### Step 3: Research Best Practices
Build web search queries:
- Query A: "[approach] best practices [year]"
- Query B: "[domain] common pitfalls"
- Query C (deep only): "[specific methodology] implementation guide"
| Depth | Searches |
|-------|----------|
| quick | 0 — skip |
| standard | 2 (A + B) |
| deep | 3-4 (all) |
Distill into 3-5 key principles relevant to this plan.
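In code, the depth setting simply gates how many of the three query patterns get built. A minimal sketch, with `build_queries` and its parameters as illustrative names:

```python
def build_queries(approach: str, domain: str, methodology: str,
                  depth: str, year: int) -> list[str]:
    """Return web searches per the depth table above."""
    if depth == "quick":
        return []                                   # quick: skip research
    queries = [
        f"{approach} best practices {year}",        # Query A
        f"{domain} common pitfalls",                # Query B
    ]
    if depth == "deep":
        queries.append(f"{methodology} implementation guide")  # Query C
    return queries
```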
### Step 4: Structured Review
**CRITIC STANCE:** You are now the critic, not the planner. Do not rationalize. Your job is to find what's missing, what will break, and what's wishful thinking.
Review against 6 dimensions:
1. **Pre-mortem** — "It's 3 months later and this plan failed. Top 3 causes?"
2. **Completeness** — What's missing that a domain expert would expect?
3. **Feasibility** — Steps depending on unconfirmed resources or approvals?
4. **Best-practice alignment** — Compare to standards from Step 3
5. **Sequencing** — Hidden blockers? Would reordering reduce risk?
6. **Specificity** — Could someone unfamiliar execute each step?
**Classify findings:**
- **Red** — Critical. Will likely cause failure if unaddressed.
- **Yellow** — Important. Creates risk but plan can proceed.
- **Green** — Minor. Nice-to-have improvement.
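If you track findings programmatically, the classification maps onto a small data structure. A sketch assuming any Red finding forces a REVISE verdict, consistent with Red meaning likely failure if unaddressed; all names are illustrative:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    RED = "Critical. Will likely cause failure if unaddressed."
    YELLOW = "Important. Creates risk but plan can proceed."
    GREEN = "Minor. Nice-to-have improvement."

@dataclass
class Finding:
    label: str        # e.g. "No rollback plan"
    issue: str
    fix: str
    severity: Severity

def verdict(findings: list[Finding]) -> str:
    # Any Red finding must be fixed before proceeding, so it forces REVISE.
    return "REVISE" if any(f.severity is Severity.RED for f in findings) else "APPROVE"
```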
### Step 5: Output
```
PLAN REVIEW — [Plan Title]
Reviewing as: Meticulous [role]
Plan source: [file / plan mode / conversation]
Depth: [quick / standard / deep]
BEST PRACTICES CONTEXT
[3-5 key principles from research]
STRENGTHS
1. [Label] — [Explanation]
WEAKNESSES & GAPS
[Red] [Label] — [Issue] → Fix: [Recommendation]
[Yellow] [Label] — [Issue] → Fix: [Recommendation]
[Green] [Label] — [Issue] → Fix: [Recommendation]
VERDICT
APPROVE — [Rationale]
OR
REVISE — [Rationale]. See revised plan below.
REVISED PLAN (only if REVISE)
[Full revised plan with [CHANGED] and [NEW] markers]
```
### Step 6: Iteration
Ask: "Apply these revisions? Or provide feedback to refine further."
## What This Does
Puts any plan through a structured expert review across 6 dimensions — pre-mortem analysis, completeness, feasibility, best-practice alignment, sequencing, and specificity. Catches blind spots, missing steps, and wishful thinking before they become real problems.
## Quick Start

### Step 1: Create Your Workspace

```bash
mkdir -p ~/Documents/Plans
```

### Step 2: Download the Template

Click Download above, then:

```bash
mv ~/Downloads/CLAUDE.md ~/Documents/Plans/
```

### Step 3: Run Claude Code

```bash
cd ~/Documents/Plans
claude
```

Then: "Review my plan" (if you just finished plan mode) or "Review the plan in project-plan.md"
## The 6 Review Dimensions
| Dimension | What It Tests |
|---|---|
| Pre-mortem | "It's 3 months later and this plan failed. What were the top 3 causes?" |
| Completeness | What's missing that a domain expert would expect? |
| Feasibility | Are there steps depending on unconfirmed resources or approvals? |
| Best-practice alignment | How does this compare to industry standards? |
| Sequencing | Are there hidden blockers? Would reordering reduce risk? |
| Specificity | Could someone unfamiliar execute each step? |
## Finding Classification
| Level | Meaning | Action |
|---|---|---|
| Red | Critical — will likely cause failure if unaddressed | Must fix before proceeding |
| Yellow | Important — creates risk but plan can proceed | Should fix soon |
| Green | Minor — nice-to-have improvement | Fix when convenient |
## Example Output

```
PLAN REVIEW — Q2 Product Launch
Reviewing as: Meticulous strategic planning specialist
Depth: standard

BEST PRACTICES CONTEXT
- Launch plans need rollback procedures
- Customer communication should precede internal deadlines
- Beta testing typically needs 2-4 weeks minimum

STRENGTHS
1. Clear ownership — every task has an assigned owner
2. Realistic timeline — builds in buffer for delays

WEAKNESSES & GAPS
[Red] No rollback plan — If launch fails, no procedure
→ Fix: Add rollback checklist with decision criteria
[Yellow] Beta period too short — 1 week may miss edge cases
→ Fix: Extend to 2 weeks or add staged rollout
[Green] Missing success metrics — No definition of "good launch"
→ Fix: Add KPIs for first 7/30/90 days

VERDICT
REVISE — Strong foundation but critical gap in rollback planning.
```
## Options

| Flag | Syntax | Default | Purpose |
|------|--------|---------|---------|
| File path | `file:path` | Auto-detect | Explicit plan location |
| Expert role | `role:"..."` | Auto-detect | Override reviewer persona |
| Focus area | `focus:dimension` | All | Weight one dimension |
| Depth | `depth:quick/standard/deep` | standard | Web research intensity |
| Dry run | `dryrun` | Off | Show role + research plan only |
## Tips
- Works on any plan — project plans, business strategies, product roadmaps, research proposals
- Fresh-context review — Uses a subagent to avoid the bias of reviewing its own plan
- Auto-detects domain — Assigns an appropriate expert role (AI engineering, grants, academic, operations, data science)
- Pairs well with — Run after Claude Code's plan mode, or on any markdown plan file
## Troubleshooting

**"No plan found"**
Specify the path explicitly: "Review the plan in ~/Documents/my-plan.md"

**Wrong expert role assigned**
Override it: `role:"marketing launch specialist"`

**Too many findings**
Use `focus:feasibility` to narrow the review to one dimension.