Any system that increases output speed without proportional error-correction feedback will compound errors at scale. Humans serve as natural bottlenecks: they make mistakes, feel consequences, and modify behavior. This negative feedback loop limits the daily rate of errors to something sustainable. Agents lack this mechanism entirely — they perpetuate identical errors indefinitely, and orchestrated agent armies generate mistakes at rates where consequences surface long after the damage is done.
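The human-versus-agent difference described above can be made concrete with a toy simulation (an illustrative model, not a measurement: the error rates, step counts, and `learning_rate` parameter are assumptions chosen for clarity). A worker with negative feedback lowers its error rate each time it makes a mistake; a worker without feedback repeats the same mistake forever.

```python
import random

def cumulative_errors(steps, error_rate, learning_rate=0.0, seed=0):
    """Simulate a worker producing `steps` units of output.

    Each step has probability `error_rate` of a mistake. With feedback
    (learning_rate > 0), every mistake reduces the future error rate,
    modelling a worker who feels consequences and adapts. With
    learning_rate == 0 the rate never changes, modelling an agent that
    perpetuates identical errors indefinitely.
    """
    rng = random.Random(seed)
    errors = 0
    for _ in range(steps):
        if rng.random() < error_rate:
            errors += 1
            error_rate *= 1 - learning_rate  # negative feedback: adapt after each mistake
    return errors

# Same raw error rate; only the feedback loop differs.
human = cumulative_errors(10_000, 0.1, learning_rate=0.05)
agent = cumulative_errors(10_000, 0.1, learning_rate=0.0)
```

In this model the no-feedback worker's mistakes grow linearly with output, while the adaptive worker's total converges: the feedback loop, not the initial error rate, determines the long-run damage.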
Worse, agents operating with purely local visibility become “merchants of complexity.” Without system-wide understanding, they introduce duplication, unnecessary abstractions, and poor architectural decisions drawn from patterns in their training data rather than from the system at hand. Enterprise codebases typically degrade slowly over years; agent-driven systems can reach equivalent chaos within weeks. The root cause is not context window size but genuine search limitations: agent recall degrades as the codebase grows.
The general principle extends beyond AI: any speed multiplier (automation, parallelization, delegation) applied to a process without adequate feedback becomes a proportional error multiplier. The remedy is not to slow down universally but to ensure that feedback loops scale with speed. Storing error patterns, as described in “Agents that store error patterns learn continuously without fine-tuning or retraining,” provides one mechanism; making agents verify their own work, as described in “Verification is the single highest-leverage practice for agent-assisted coding,” provides another. The critical design question, from “Every optimization has a shadow regression — guard commands make the shadow visible”: when you accelerate a system, always ask which error-correction mechanism you just outran.
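The two mechanisms above can be combined in one gate that every agent change passes through, so the feedback loop runs at the same rate as the output. A minimal sketch, assuming a hypothetical project-specific `verify` callable (tests, linters, invariants) and a simple set-based error memory; real systems would match patterns rather than exact changes:

```python
def guarded_apply(changes, verify, error_memory):
    """Apply agent-proposed changes only when they pass verification.

    Failed changes are recorded in `error_memory`, so the same mistake
    is rejected cheaply on every later run: the feedback loop scales
    with speed because each change, however fast it was produced,
    passes through the same gate.
    """
    applied = []
    for change in changes:
        if change in error_memory:      # known-bad pattern: reject without re-verifying
            continue
        if verify(change):
            applied.append(change)      # verified: safe to merge
        else:
            error_memory.add(change)    # store the error pattern for next time
    return applied

# Toy usage: "changes" are ints, verification rejects odd ones.
memory: set = set()
first = guarded_apply([1, 2, 3, 4], verify=lambda c: c % 2 == 0, error_memory=memory)
second = guarded_apply([1, 2, 3, 6], verify=lambda c: c % 2 == 0, error_memory=memory)
```

On the second run, the previously failed changes are filtered out by the stored error patterns before verification even runs: the system gets faster at rejecting known mistakes as throughput grows, rather than slower at catching them.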