All insights

Compound engineering makes each unit of work improve all future work

The 80/20 ratio (80% plan+review, 20% work+compound) ensures learning compounds across iterations, not just code

Dan Shipper & Kieran Klaassen (Every) — Compound Engineering · 28 connections

Most engineering treats work as linear — you finish one thing, start the next. Compound engineering inverts this: each unit of work makes the next one easier. The four-step loop is Plan → Work → Review → Compound, with 80% of time spent in Plan and Review. The Review phase is the compounding mechanism — it’s where patterns get extracted, mistakes get documented, and Markdown skill files grow (cheap files that may replace expensive fine-tuning). The Compound step explicitly records learnings for future use — this is what makes the loop compound, not merely iterative.
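A minimal sketch of what the Compound step could look like in practice: after each unit of work, append the extracted learnings to a per-topic Markdown skill file that the agent loads as context on the next iteration. The `skills/` directory, file layout, and `compound` function here are illustrative assumptions, not anything described in the talk.

```python
from datetime import date
from pathlib import Path

SKILLS_DIR = Path("skills")  # hypothetical home for Markdown skill files


def compound(topic: str, learnings: list[str]) -> Path:
    """Append this iteration's learnings to a per-topic skill file.

    Entries are dated so later reviews can see how the skill evolved.
    Because the agent on iteration N+1 reads these files as context,
    every recorded mistake or pattern improves all future work.
    """
    SKILLS_DIR.mkdir(exist_ok=True)
    skill_file = SKILLS_DIR / f"{topic}.md"
    if not skill_file.exists():
        skill_file.write_text(f"# Skill: {topic}\n")
    with skill_file.open("a") as f:
        f.write(f"\n## {date.today().isoformat()}\n")
        for item in learnings:
            f.write(f"- {item}\n")
    return skill_file


# After a Review phase surfaces a pattern, record it:
compound("database-migrations", [
    "Always wrap schema changes in a reversible migration",
    "Reviewer caught missing index on foreign key — check by default",
])
```

The design choice that matters is the append-only, dated format: it keeps the Compound step cheap enough to run after every unit of work, which is what sustains the loop.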

The 12-agent parallel review (architecture, security, performance, simplicity reviewers running simultaneously) is an implementation of Declarative beats imperative when working with agents — each reviewer has success criteria, not step-by-step instructions. The result compounds because updated CLAUDE.md files, pattern docs, and skill files mean the agent on iteration N+1 is fundamentally better than on iteration N.
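The declarative-reviewer idea can be sketched in a few lines: each reviewer is a success criterion (a predicate over the diff plus the finding it reports on failure), not a step-by-step procedure, and all reviewers run concurrently. The reviewer names and checks below are toy assumptions standing in for the real architecture, security, performance, and simplicity reviewers.

```python
from concurrent.futures import ThreadPoolExecutor

# Each reviewer is a success criterion: a predicate that returns True when
# the criterion is met, or a finding string when it is not. No imperative
# instructions — just the condition that must hold.
REVIEWERS = {
    "security":    lambda diff: "password" not in diff.lower() or "possible hardcoded secret",
    "simplicity":  lambda diff: len(diff.splitlines()) < 400 or "diff too large to review well",
    "performance": lambda diff: "SELECT *" not in diff or "unbounded query in diff",
}


def run_reviews(diff: str) -> dict[str, str]:
    """Run all reviewers in parallel; collect only failing criteria."""
    with ThreadPoolExecutor(max_workers=len(REVIEWERS)) as pool:
        futures = {name: pool.submit(check, diff) for name, check in REVIEWERS.items()}
    findings = {}
    for name, fut in futures.items():
        verdict = fut.result()
        if verdict is not True:  # predicate returned a finding string
            findings[name] = verdict
    return findings


print(run_reviews("SELECT * FROM users WHERE password = 'hunter2'"))
```

Because each reviewer is just a criterion, adding a thirteenth reviewer is one dictionary entry — the declarative framing is what makes the panel cheap to extend as new patterns get extracted during the Compound step.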

This is the engineering equivalent of Persistent agent memory preserves institutional knowledge that walks out the door with employees: documentation isn’t a chore at the end — it’s the mechanism that makes compound growth possible. Compilation scales but curation compounds — two camps for knowledge graph construction makes the same point about knowledge systems: compiled wikis grow fast but hit a quality ceiling, while curated knowledge (like this graph) compounds through validated connections — the review phase IS the curation mechanism. Skip the review phase and you get linear productivity; do it consistently and you get exponential improvement.

shadcn’s /done skill demonstrates this practically — Session capture turns ephemeral AI conversations into a compounding knowledge base turns every Claude session into searchable development memory, making the review-and-extract pattern lightweight enough to sustain daily. OpenAI’s Codex team took this to the extreme: Harness engineering — humans steer, agents execute, documentation is the system of record describes how they built a million-line codebase with zero manually written code by making documentation the primary engineering artifact — the compound loop operating at organizational scale.

When applied to products rather than engineering process, Proprietary feedback loops create moats that widen with every interaction shows the same compounding dynamic: each user interaction improves the system for all future interactions. And Revealed preferences trump stated preferences — track what users do, not what they say sharpens what to compound on — track actual behavior, not survey responses. Clay calls the human-organizational equivalent Negative maintenance teammates reduce future work for everyone around them — teammates who proactively reduce future work for others are applying compound engineering to organizational friction.

Connected Insights

Referenced by (19)

Verification is the single highest-leverage practice for agent-assisted coding
Autonomous coding loops need small stories and fast feedback to work
Don't be the discriminator — be the patron, not the judge
Treat AI like a distributed team, not a single assistant
Treat an agent as an operating system, not a stateless function
Session capture turns ephemeral AI conversations into a compounding knowledge base
Building real projects teaches AI skills faster than following structured curricula
Tools are a new kind of software — contracts between deterministic systems and non-deterministic agents
Evaluate agent tools with real multi-step tasks, not toy single-call examples
Harness engineering — humans steer, agents execute, documentation is the system of record
CLAUDE.md should be a routing table, not a knowledge base
Context layers must be living systems, not static artifacts
Proprietary feedback loops create moats that widen with every interaction
Revealed preferences trump stated preferences — track what users do, not what they say
A mediocre agent inside a strong harness outperforms a stronger agent inside a messy one
Accumulated agent traces produce emergent world models — discovered, not designed
Negative maintenance teammates reduce future work for everyone around them
Latent demand is the strongest product signal — make the thing people already do easier
Compilation scales but curation compounds — two camps for knowledge graph construction