Tags: AI Product Building · Coding Tools · AI Agents

Treat AI like a distributed team, not a single assistant

Running 15 parallel Claude streams with specialized roles (writer, reviewer, architect) produces better results than one perfect conversation

Boris Cherny — How I Use Claude Code

Cherny runs 5 Claude instances in the terminal and another 5-10 on claude.ai in the browser — up to 15 parallel streams running simultaneously, each on a different task. This isn't multitasking for efficiency; it's a mental-model shift. (Note: the "distributed team" framing is editorial — Cherny describes the workflow patterns; this insight synthesizes them into a team metaphor.)

The writer/reviewer pattern makes this concrete: Session A implements, Session B reviews the implementation, and feedback flows back to Session A. A test-first variant: Session A writes tests, Session B writes code to pass them. Fan-out handles batch work: a shell loop spawns one stream per file and processes dozens of files in parallel. Each stream has an isolated context, which addresses "The context window is the fundamental constraint — everything else follows": instead of cramming everything into one conversation, you distribute work across separate context windows. The intelligence/judgement split in "The intelligence-to-judgement ratio determines which professions AI automates first" suggests which streams can run fully autonomously (high-intelligence tasks like code generation) and which need human checkpoints (high-judgement tasks like architecture decisions).
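The fan-out pattern can be sketched as a plain shell loop. Here `echo` stands in for the real per-file call — something like `claude -p "..."` — since the exact CLI invocation is an assumption, not Cherny's verbatim command:

```shell
#!/bin/sh
# Fan-out sketch: one background job per input file.
# `echo` is a stand-in for the real agent call, e.g. `claude -p "review $f"`
# (assumed invocation, not the author's exact command).
mkdir -p /tmp/fanout && cd /tmp/fanout
touch a.txt b.txt c.txt
for f in *.txt; do
  ( echo "review $f" > "$f.out" ) &   # each stream gets its own isolated context
done
wait   # block until every parallel stream finishes
ls *.out   # one output per input file
```

Because each loop iteration is an independent process, the streams never share context — the shell's `&`/`wait` is the whole coordination layer.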

Cherny's CLAUDE.md insight reinforces "Compound engineering makes each unit of work improve all future work": the file is checked into git, the whole team contributes to it, and every mistake becomes institutional knowledge, so the AI gets smarter every sprint. Combined with PostToolUse hooks for auto-formatting and smart permissions shared via settings.json, the system becomes a self-improving engineering organization, not just a tool you prompt.

The natural extension — "Parallel agents create a management problem, not a coding problem" — reveals that as you scale to many simultaneous agents, the bottleneck shifts from coding speed to coordination overhead. Shipper's "Deputies and Sheriffs — distributed agent teams with hierarchical authority replace centralized software" formalizes this into an organizational model: Deputies are personal agents trained by individuals, Sheriffs manage permissions across the team — essentially an org chart of agents mirroring the org chart of humans.

Elvis Sun's OpenClaw pushes the specialization further: "An orchestrator agent that manages other agents solves the parallel coordination problem without human bottleneck" assigns different models to different task types (Codex for backend, Claude for frontend, Gemini for UI), while "Multi-model code review creates adversarial robustness — each model catches what others miss" uses three models to review each PR — not just parallel work, but parallel judgment. Perplexity demonstrates this at platform scale: "AI is the computer — orchestration across 19 models is the product, not any single model" — orchestrating 19 backend models as a unified agent system is the product, and the differentiation is the orchestration layer, not any individual model.
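The hooks-plus-permissions setup can be sketched as a shared `.claude/settings.json` along these lines. This is a minimal illustration under assumptions: the `Edit|Write` matcher, the Prettier command, and the `jq` extraction of the edited file path from the hook's stdin JSON are hypothetical choices, not Cherny's actual config:

```json
{
  "permissions": {
    "allow": ["Bash(npm run lint)", "Bash(npm run test:*)"]
  },
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs -r npx prettier --write"
          }
        ]
      }
    ]
  }
}
```

Because the file lives in the repo rather than in any individual's local settings, every permission grant and formatting hook one person adds immediately applies to the whole team's agents — the same compounding dynamic as CLAUDE.md.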
