Ready-to-run playbooks that sequence tested prompts into complete business outcomes. Cold outreach campaigns, SEO content pipelines, data extraction, product launches — already built, already benchmarked, pick one and execute.
Every playbook is a step-by-step run order: run step 1, plug its output into step 2, and keep going until you hit the business outcome.
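A minimal sketch of that chaining, assuming a hypothetical `runStep` helper that sends one playbook prompt to a model and returns text. The endpoint URL, the `Step` shape, and the response field are illustrative, not the actual playbook format:

```ts
// Hypothetical step runner. The endpoint, request body, and
// response shape are placeholders for whatever model API you use.
type Step = { prompt: (input: string) => string; model: string };

async function runStep(step: Step, input: string): Promise<string> {
  const res = await fetch("https://api.example.com/v1/complete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: step.model, prompt: step.prompt(input) }),
  });
  const { text } = await res.json();
  return text;
}

// Step 1's output becomes step 2's input, and so on down the playbook.
async function runPlaybook(steps: Step[], seed: string): Promise<string> {
  let output = seed;
  for (const step of steps) {
    output = await runStep(step, output);
  }
  return output;
}
```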
For every prompt, we tested five frontier models (Claude Opus, GPT, Gemini, Grok, and Claude Haiku) and tell you which one delivers the best quality per dollar for that specific step.
Every playbook ships as a Claude Skill, an MCP-ready config, and structured JSON. Import it into your agent or team workflow, or trigger it manually.
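As a rough sketch, the structured-JSON export could look like the shape below. The field names are assumptions for illustration; the actual schema ships with each playbook:

```ts
// Illustrative shape for a playbook's structured-JSON export.
// Every field name here is an assumption, not the shipped schema.
interface PlaybookStep {
  id: string;               // e.g. "step-1-prospect-research"
  prompt: string;           // the tested prompt template
  recommendedModel: string; // best quality-per-dollar pick for this step
  inputsFrom: string[];     // ids of prior steps whose output feeds this one
}

interface Playbook {
  name: string;             // e.g. "Cold Outreach Campaign"
  outcome: string;          // the business outcome the sequence targets
  steps: PlaybookStep[];
}
```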
Buy with card via Stripe, or pay in USDC on Base or Solana. AI agents can discover the catalog via /catalog.json and purchase autonomously via x402.
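A rough sketch of that agent flow, assuming the standard x402 pattern (the server answers 402 with payment requirements, and the agent retries with a signed payment header). The base URL, catalog fields, and `signPayment` wallet helper are placeholders:

```ts
// Hypothetical autonomous-purchase flow. Catalog fields, the base URL,
// and signPayment() are assumptions; only the 402-then-retry pattern
// follows the x402 protocol.
const BASE = "https://example.com";

async function buyPlaybook(slug: string): Promise<unknown> {
  // 1. Discover available playbooks.
  const catalog = await (await fetch(`${BASE}/catalog.json`)).json();
  const item = catalog.items.find((i: { slug: string }) => i.slug === slug);

  // 2. The first request comes back 402 with payment requirements.
  const first = await fetch(item.url);
  if (first.status !== 402) return first.json(); // already free or paid

  const requirements = await first.json();

  // 3. Sign a USDC payment for the quoted amount (stubbed here) and
  //    retry with a payment header per x402.
  const payment = await signPayment(requirements); // assumed wallet helper
  const paid = await fetch(item.url, {
    headers: { "X-PAYMENT": payment },
  });
  return paid.json();
}

declare function signPayment(req: unknown): Promise<string>;
```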
Behind each playbook step is a full model comparison. Here's what one looks like: the same data we use to pick the right model per task inside every workflow.
| Model | Quality (0–10) | Latency | Cost / run | Output tokens |
|---|---|---|---|---|
| Claude Opus 4.7 | 9.2 | 3.1s | $0.018 | 142 |
| GPT-5 | 8.9 | 2.4s | $0.014 | 156 |
| Gemini 2.5 Pro | 7.6 | 1.8s | $0.009 | 171 |
| Claude Haiku 4.5 | 7.8 | 0.9s | $0.002 | 138 |
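Quality per dollar falls straight out of those columns. A quick sketch over the rows above, with the numbers copied from the table:

```ts
// Quality-per-dollar from the comparison table above.
const rows = [
  { model: "Claude Opus 4.7",  quality: 9.2, cost: 0.018 },
  { model: "GPT-5",            quality: 8.9, cost: 0.014 },
  { model: "Gemini 2.5 Pro",   quality: 7.6, cost: 0.009 },
  { model: "Claude Haiku 4.5", quality: 7.8, cost: 0.002 },
];

for (const r of rows) {
  console.log(r.model, (r.quality / r.cost).toFixed(0));
}
// Opus wins on raw quality, but Haiku delivers ~3900 quality points
// per dollar versus ~511 for Opus.
```

That ratio is why the recommended model changes step by step: raw quality and quality per dollar rarely point at the same model.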
Every workflow step has a dedicated page with the raw prompt, all five model outputs, and their quality scores, and each page is free to read on its own.