Economists have known since Samuelson (1938) that what people do diverges from what they say. In AI products, this gap is amplified: users say they want “more control” but skip configuration screens; they say accuracy matters most but tolerate errors when speed improves; they claim to read documentation but engagement data shows they don’t. Building on stated preferences produces features nobody uses.
Revealed preference data (what users actually click, edit, redo, skip, and abandon) is the ground truth for product decisions. This extends the idea that decision traces are the missing data layer, a trillion-dollar gap between agent decisions and user decisions: capture not just what the system decided but what the user did in response. The combination creates a closed loop in which compound engineering (each unit of work improving all future work) applies to the product itself: each interaction reveals a preference that improves the next interaction. Boris Cherny’s product-building version of this is that latent demand is the strongest product signal: make the thing people already do easier. He literally walks around the Anthropic office standing behind engineers to observe how they use Claude Code, and every major feature (CLAUDE.md, plan mode, co-work) emerged from what people were already trying to do.

The practical implication is to instrument user behavior from day one, not to wait for enough scale to “justify” analytics. The cost of building without behavioral data isn’t visible in the short term; it shows up as features that demo well but don’t retain users. In enterprise workflows, agent edits scale this principle as automatic decision instrumentation: when an agent proposes and a human corrects, the delta is a revealed preference about what actually matters, and every correction arrives as a structured signal with no survey needed.
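To make the loop concrete, here is a minimal sketch of what capturing that correction delta might look like, assuming a hypothetical `DecisionTrace` record and a plain text diff as the delta; the names and fields are illustrative, not any particular product’s schema.

```python
import difflib
import json
import time
import uuid
from dataclasses import asdict, dataclass, field


@dataclass
class DecisionTrace:
    """One agent decision paired with the user's observed response.

    Hypothetical schema: the point is to log the pair, not just the output.
    """
    trace_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: float = field(default_factory=time.time)
    agent_output: str = ""          # what the system decided or proposed
    user_action: str = ""           # e.g. accept | edit | redo | skip | abandon
    user_output: str | None = None  # final text after the user's correction, if any
    delta: list[str] = field(default_factory=list)  # the revealed-preference signal


def record_correction(agent_output: str, user_output: str) -> DecisionTrace:
    """Capture the agent proposal, the human correction, and their diff.

    The diff is the revealed preference: it shows exactly which parts of
    the proposal the user found wrong, without asking them anything.
    """
    delta = list(difflib.unified_diff(
        agent_output.splitlines(),
        user_output.splitlines(),
        fromfile="agent", tofile="user", lineterm="",
    ))
    action = "accept" if not delta else "edit"
    return DecisionTrace(
        agent_output=agent_output,
        user_action=action,
        user_output=user_output,
        delta=delta,
    )


if __name__ == "__main__":
    trace = record_correction(
        agent_output="Ship the report to all stakeholders by Friday.",
        user_output="Ship the report to the finance team by Wednesday.",
    )
    # In practice this would go to an analytics sink; printing stands in here.
    print(json.dumps(asdict(trace), indent=2))
```

Aggregated over time, these deltas surface the dimensions users consistently correct (scope, recipients, deadlines, tone), which is exactly the behavioral data the paragraph above argues should be flowing from day one.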