What You Can Do with AI
Real tasks. Real prompts. Copy, paste, and get results in minutes.
Ship features 10x faster
How to use this
- Open Cursor (cursor.com). Open your project folder (or create a new one).
- Open the Composer (Cmd+I on Mac / Ctrl+I on Windows). Select "Agent" mode from the dropdown.
- Paste PROMPT 1 (the PRD-Lite) first — fill in every field. This becomes the spec that Cursor references throughout the build.
- Then paste PROMPT 2 (Plan-First). Cursor will output a file tree, checklist, and commands — review the plan BEFORE letting it write any code. Once approved, say: "Execute the plan."
--- PROMPT 1: The PRD-Lite (fill this in — 10 lines that replace a 10-page spec) ---

# App: [YOUR APP NAME]
## Goal: [ONE SENTENCE — what does this app do?]
## Users: [WHO uses it and WHY — e.g. "Freelancers who need to send invoices faster"]
## Core flows:
1) [FLOW 1 — e.g. "User creates an invoice with line items"]
2) [FLOW 2 — e.g. "User sends invoice via email with payment link"]
3) [FLOW 3 — e.g. "User sees payment status on dashboard"]
## Data model:
- [ENTITY 1]: [FIELDS — e.g. "Invoice: id, client_name, items[], total, status, created_at"]
- [ENTITY 2]: [FIELDS — e.g. "Client: id, name, email, company"]
## Non-goals: [WHAT NOT TO BUILD — e.g. "No recurring invoices, no multi-currency, no mobile app"]
## Constraints: [TECH STACK + RULES — e.g. "Next.js 15, Tailwind, Supabase, deploy to Vercel"]
## Acceptance tests:
- [ ] User can [CORE ACTION 1]
- [ ] User can [CORE ACTION 2]
- [ ] User can [CORE ACTION 3]

--- FILLED EXAMPLE ---

# App: QuickBill
## Goal: Simple invoicing for freelancers — create, send, and track invoices
## Users: Solo freelancers who send 5–20 invoices/month and hate complex tools
## Core flows:
1) User creates an invoice with client info and line items
2) User sends invoice via email with a Stripe payment link
3) User sees payment status (paid/unpaid/overdue) on a dashboard
## Data model:
- Invoice: id, client_id, items[], total, status, due_date, created_at
- Client: id, name, email, company
- LineItem: id, description, quantity, rate
## Non-goals: Recurring invoices, multi-currency, team features, mobile app
## Constraints: Next.js 15, Tailwind, Supabase for DB + auth, Stripe for payments, deploy to Vercel
## Acceptance tests:
- [ ] User can create and save a new invoice
- [ ] User can send invoice to client email with Stripe payment link
- [ ] Dashboard shows real-time payment status

--- PROMPT 2: The Plan-First Prompt (paste this AFTER the PRD-Lite) ---

You are building the app described above.
Before writing ANY code, produce a plan ONLY:
1) Complete file tree (every file you'll create or modify)
2) Step-by-step implementation checklist (8–12 steps, in dependency order)
3) Terminal commands I'll need to run (installs, migrations, env setup)
4) Risks or unknowns (things that might go wrong or need clarification)
Do not write code yet. I want to review and approve the plan first.
Why this works: Two prompts, two phases. The PRD-Lite forces you to clarify scope before writing a line of code. The Plan-First prompt makes Cursor map out the entire build — file tree, steps, commands, risks — so you review the plan before it touches your codebase. This is the difference between vibe coding and chaos. The filled example shows exactly what a good PRD-Lite looks like.
Get production-grade code reviews with one command
How to use this
- Create the commands folder in your project: mkdir -p .claude/commands
- Save the prompt below as a file at: .claude/commands/review-pr.md (copy the entire content including the YAML header between the --- markers).
- Open Claude Code in your terminal (run: claude). Navigate to your project directory.
- Type /review-pr and press Enter. Claude Code automatically reads your git diff and produces a structured code review. Run it before every PR.
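The first two setup steps can be scripted in one shot. A minimal sketch: the file body below is abbreviated to the frontmatter and opening line; paste the full /review-pr prompt from the next section into the file.

```shell
# Scaffold the custom command file for Claude Code.
# Body is abbreviated -- replace with the full /review-pr prompt.
mkdir -p .claude/commands
cat > .claude/commands/review-pr.md <<'EOF'
---
name: review-pr
description: Reviews code for bugs, security, and style.
---
# /review-pr
You are a senior code reviewer with 10+ years of experience.
EOF
ls .claude/commands   # → review-pr.md
```

Commands saved this way are project-scoped, so the whole team gets /review-pr once the file is committed.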
SAVE THIS FILE TO: .claude/commands/review-pr.md

---
name: review-pr
description: Reviews code for bugs, security, and style.
context: fork
agent: Explore
allowed-tools: Read, Grep, Glob, Bash(git *)
---

# /review-pr

You are a senior code reviewer with 10+ years of experience.

## Context (auto-populated — these commands run automatically)
- Status: !`git status -sb`
- Changed files: !`git diff --name-only`
- Diff: !`git diff`

## Review Checklist
1. **Bugs**: Edge cases, null/undefined checks, off-by-one errors, race conditions
2. **Security**: SQL injection, XSS, exposed secrets, auth bypasses, input validation
3. **Performance**: O(n²) traps in loops, unnecessary re-renders, missing indexes, N+1 queries
4. **Style**: Naming clarity, function length, code duplication, readability

## Output Format
1. **Summary** (3 bullets max — what changed and why)
2. **Major Issues** (blockers that must be fixed before merge)
3. **Improvements** (non-blocking suggestions ranked by impact)
4. **Test Plan** (specific things to test before merging)

## Rules
- Never say "looks good" without citing specific evidence
- If no issues found, explain exactly what you verified
- Flag any hardcoded values, magic numbers, or TODO comments
Why this works: Save this file once. Type /review-pr in Claude Code and it automatically reads your git status, diff, and changed files — then produces a structured review covering bugs, security, performance, and style. The YAML header (context: fork, allowed-tools) tells Claude Code to run in a safe sandbox with read-only git access. It's like having a senior engineer on call 24/7.
Run autonomous AI dev loops overnight
How to use this
- Install Ralph: pip install ralph-cli (or follow the setup at github.com/anthropics/ralph). Initialize in your project: ralph init
- Create a task list file at fix_plan.md in your project root. List every task as a checkbox: - [ ] Task description. Ralph works through these one by one.
- Run PROMPT 1 by starting Ralph: ralph run --goal "Build auth API" --plan fix_plan.md. Ralph picks up tasks, implements them, runs tests, and checks them off autonomously.
- When you wake up, run PROMPT 2 (the Verifier) in Claude Code to audit the overnight work before merging. Type: claude and paste the Verifier prompt with the git diff.
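Under the hood, a Ralph-style run is just a loop: invoke the agent, let it tick one checkbox in the plan, repeat until nothing is unchecked. The sketch below substitutes a local stub for the real agent call so the loop mechanics are visible on their own; the `sed` address form is GNU sed, and the `claude -p` invocation in the comment is illustrative, not a required flag set.

```shell
# fix_plan.md with two toy tasks (same checkbox format Ralph consumes)
printf -- '- [ ] task one\n- [ ] task two\n' > fix_plan.md

# Stub standing in for the real agent call, e.g.:
#   claude -p "Read fix_plan.md, do the next unchecked task, run tests, tick the box."
# Here it just ticks the first unchecked box (GNU sed).
agent_step() { sed -i '0,/- \[ \]/s//- [x]/' fix_plan.md; }

# The loop: keep invoking the agent until no unchecked tasks remain.
while grep -q '^- \[ \]' fix_plan.md; do
  agent_step
done

grep -c '^- \[x\]' fix_plan.md   # → 2
```

The `grep` guard doubles as the completion signal: the loop exits exactly when the plan has no `- [ ]` lines left, which is why the checkbox format matters.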
--- PROMPT 1: The Autonomous Task Runner (Ralph configuration) ---

## Goal
[YOUR GOAL — e.g. "Build a user authentication API with registration, login, and JWT sessions"]

## Context
- Source code: [YOUR SOURCE DIRECTORY — e.g. src/]
- Specs: [PATH TO REQUIREMENTS — e.g. .ralph/specs/requirements.md]
- Task list: @fix_plan.md

## Loop Protocol (Ralph follows this cycle automatically)
1. Read @fix_plan.md, pick the next unchecked task
2. Search the codebase before assuming anything exists
3. Implement the smallest possible change for that task
4. Run tests for the changed code (create tests if none exist)
5. Update @fix_plan.md (check off completed task)
6. Move to the next task. If blocked, add a note and skip.

## Completion Signal
When all @fix_plan.md tasks are complete and tests pass:
RALPH_STATUS:
  current_task: "All tasks complete"
  EXIT_SIGNAL: true

## EXAMPLE fix_plan.md:
- [ ] Set up Express server with basic health check endpoint
- [ ] Add user registration endpoint (POST /auth/register)
- [ ] Add password hashing with bcrypt
- [ ] Add login endpoint (POST /auth/login) returning JWT
- [ ] Add JWT middleware for protected routes
- [ ] Add GET /auth/me endpoint (protected)
- [ ] Write integration tests for all auth endpoints
- [ ] Add rate limiting to auth endpoints

--- PROMPT 2: The Verifier (run in Claude Code the next morning) ---

You are a senior reviewer. Review this git diff from an overnight autonomous AI coding session.

!`git diff main..HEAD`

Output:
1) **High-risk issues**: Security vulnerabilities, data loss risks, auth bypasses
2) **Correctness issues**: Edge cases, error handling gaps, missing validation
3) **Missing tests**: Only for NEW functionality the agent added
4) **Code quality**: Naming, structure, duplication, hardcoded values
5) **Merge recommendation**: YES / NO / NEEDS_CHANGES (with specific items to fix)

Be strict. Assume the AI agent may have taken shortcuts or hallucinated edge cases. Trust nothing without evidence in the diff.
Why this works: Ralph picks up a task list and works through it autonomously — reading the codebase, implementing changes, running tests, and checking off items. You wake up to a completed PR. The Verifier prompt is your safety net: a second AI audits everything with a "guilty until proven innocent" mindset. The example fix_plan.md shows exactly how to structure tasks for best results.
Convert specs into engineering tasks
How to use this
- Create the commands folder if it doesn't exist: mkdir -p .claude/commands
- Save the prompt below as a file at: .claude/commands/spec-to-tasks.md
- Open Claude Code in your terminal: claude. Navigate to your project.
- Type: /spec-to-tasks add user onboarding flow (replace with your feature description). Claude Code will break it into milestones, tasks, and a starter plan.
SAVE THIS FILE TO: .claude/commands/spec-to-tasks.md

---
name: spec-to-tasks
description: Converts feature specs into engineering task breakdowns with milestones and starter plan.
---

# /spec-to-tasks

Convert a feature idea into an actionable engineering plan.

## Input
Feature description: $ARGUMENTS

## Instructions
1. Restate the goal in one clear sentence
2. Ask at most ONE clarifying question (if critical info is missing)
3. Break down into 3–7 milestones, each with specific tasks
4. For each task: estimate complexity (S/M/L) and list likely files to touch
5. Include a "First 30 minutes" starter plan — the exact steps to begin

## Output Format

### Goal
[One sentence restatement]

### Milestones

#### Milestone 1: [Name]
- [ ] Task 1 (S) — likely: src/components/...
- [ ] Task 2 (M) — likely: src/api/...

#### Milestone 2: [Name]
- [ ] Task 3 (M) — likely: src/...

...

### First 30 Minutes
1. [Exact first step — e.g. "Create src/features/onboarding/ directory"]
2. [Second step — e.g. "Define the OnboardingStep type in types.ts"]
3. [Third step — e.g. "Build the OnboardingWizard shell component"]

### Risks & Open Questions
- [Anything that could block or needs a decision]

## Rules
- 3–7 milestones per feature (not more)
- Checkboxes for all tasks (so they can be tracked)
- Label file paths as "likely" since codebase may differ
- If the project repo is available, scan for relevant patterns, naming conventions, and existing utilities before suggesting new ones
Why this works: Type /spec-to-tasks followed by any feature description and Claude Code restates the goal, breaks it into milestones with checkboxes, identifies likely files to touch, and gives you a "First 30 minutes" starter plan. It turns vague product ideas into actionable engineering work you can start immediately.
Connect AI to any data source
How to use this
- Set up MCPs in Cursor: Go to Settings > MCP Servers. Add the servers you need (Brave Search, GitHub, database, etc.). Each MCP is a JSON config entry.
- Or set up in Claude Code: add MCP configs to a .mcp.json file in your project root.
- Copy the prompt below. Replace the example with your actual research task and codebase paths.
- Run the prompt in Cursor's Composer (Agent mode) or Claude Code. AI will use the MCP tools to search the web, read your code, analyze the gap, and write deliverables — all in one shot.
--- EXAMPLE MCP CONFIG (add to Cursor Settings > MCP or a project-root .mcp.json) ---
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "[YOUR_KEY from brave.com/search/api]" }
    }
  }
}
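For the Claude Code route, the same config can be written to a project-scoped file and sanity-checked before launch. A sketch: the reference Brave Search server package is assumed here, and python3 is used only as a convenient JSON validator; swap in whichever server and key you actually use.

```shell
# Write the MCP config to a project-scoped file.
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "YOUR_KEY" }
    }
  }
}
EOF

# Fail fast on malformed JSON instead of debugging a silent MCP startup error.
python3 -c 'import json; json.load(open(".mcp.json")); print("config ok")'   # → config ok
```

A stray comma or unquoted key in this file typically surfaces only as a server that never connects, so validating the JSON up front saves a confusing debugging session.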
--- PROMPT: Multi-Source Research + Action (run in Cursor Agent mode or Claude Code) ---
I need to evaluate whether we should adopt [TECHNOLOGY/FRAMEWORK — e.g. "Next.js 15 App Router"] in our project.
Use your available tools to:
1. RESEARCH: Search the web for "[TECHNOLOGY] new features [YEAR]" — summarize the top 5 features relevant to our stack
2. AUDIT: Read our current codebase in [YOUR SOURCE PATH — e.g. /src/app] and identify:
- Current patterns we use (routing, data fetching, state management)
- Which new features would improve our codebase
- Which features conflict with our current approach
3. ANALYZE: Compare our current setup vs. the new features. For each:
- Effort to migrate (S/M/L)
- Impact on DX and performance (1–5)
- Breaking changes to watch for
4. DELIVER: Write a migration plan to [OUTPUT PATH — e.g. /docs/migration-plan.md] with:
- Recommended features to adopt (ranked by impact/effort)
- Step-by-step migration checklist
- Files that need to change
- Estimated timeline
Be specific. Reference actual file paths and code patterns from our codebase.
Why this works: With MCPs, AI doesn't just generate text — it acts. It searches the web for real-time docs, reads your actual codebase, analyzes the gap, and writes a migration plan referencing your real files. One prompt replaces an afternoon of research, reading docs, and writing specs. MCPs are what turn AI from a chatbot into a co-worker.
Frequently Asked Questions
How does Cursor help developers ship faster?
Cursor is an AI-powered code editor that understands your entire codebase. With a 10-line PRD-Lite spec and a plan-first prompt, Cursor generates file trees, step-by-step checklists, and implementation code — letting you build features in minutes instead of hours.
What is Claude Code and how do developers use it?
Claude Code is a terminal-based AI assistant that can read your codebase, run commands, and make changes. Developers build custom skills (like /review-pr and /spec-to-tasks) that automate production-grade code reviews, task breakdowns, and feature planning — all from the command line.
What are autonomous AI dev loops?
Tools like Ralph let you define a goal, a task list, and a loop protocol — then AI works through the tasks autonomously, running tests and checking off items. You review the diff in the morning. A second AI (the Verifier) audits the work for security, correctness, and missing tests.
What are MCPs in AI coding?
MCPs (Model Context Protocol servers) let AI access external data sources — Brave Search, databases, APIs, file systems. This means AI can research documentation, read your codebase, and write migration plans all in one prompt, without you copying and pasting context.
What is vibe coding?
Vibe coding is a development approach where you describe what you want in natural language and AI writes the code. Tools like Cursor, Claude Code, and Lovable make this possible. Developers who adopt vibe coding report shipping 10x faster while maintaining code quality through AI-assisted reviews.






