AI Product Building · Architecture · Future of AI

Decision traces are the missing data layer — a trillion-dollar gap

Systems store what happened but not why; capturing the reasoning behind decisions creates searchable precedent and a new system of record

Jaya Gupta & Ashu Garg — Foundation Capital, Context Graphs

Current systems store what happened but not why. They don’t store who approved the deviation, under what policy, what precedent justified the exception, or what context from multiple systems led to that decision. That “why” lives in Slack threads, deal desk conversations, and people’s heads.

When you capture decision traces over time, they form a context graph — entities connected by decision events with “why” links explaining the reasoning. This becomes searchable precedent: “how did we handle this situation before?” instead of re-learning the same edge case in Slack every quarter.
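A minimal sketch of what such a graph could look like in code. All names here (`DecisionTrace`, `ContextGraph`, the `deal-1042` example) are illustrative, not any vendor's actual schema: each decision event carries its own “why” plus links to precedent traces, and the graph answers “how did we handle this before?” by searching over rationale rather than final state.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionTrace:
    """One decision event: what was decided, and the 'why' behind it."""
    entity: str        # e.g. a deal, ticket, or customer ID
    decision: str      # the final outcome the system of record will see
    approver: str      # who made the exception binding
    policy: str        # policy in force at decision time
    why: str           # free-text rationale, normally lost to Slack
    precedents: list = field(default_factory=list)  # links to earlier traces

class ContextGraph:
    """Entities connected by decision events; searchable precedent."""

    def __init__(self):
        self.traces = []

    def record(self, trace):
        self.traces.append(trace)

    def precedent(self, keyword):
        # "How did we handle this situation before?" — search the
        # reasoning, not the resulting state.
        return [t for t in self.traces if keyword.lower() in t.why.lower()]

graph = ContextGraph()
graph.record(DecisionTrace(
    entity="deal-1042",
    decision="approved 25% discount",
    approver="deal-desk",
    policy="Q3 discount policy v2",
    why="strategic logo in a new vertical; net-30 terms unchanged"))
matches = graph.precedent("strategic logo")
```

The point of the structure is that the rationale is a first-class, queryable field: the next person facing a similar exception retrieves the precedent instead of re-litigating it in Slack.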

The key architectural question is: are you in the write path or the read path? By the time a decision lands as final state in a system of record, the “why” is gone. Salesforce stores current state, not state at decision time. Snowflake and Databricks receive data via ETL after decisions are made — they get the output of decisions, not the reasoning. The strategic surface is the point where decisions become binding: the approval step, the redline, the escalation, the agent proposal, the human override. Systems-of-agents startups sit in the write path by default, capturing rationale at the moment decisions become binding. And “Agent edits are automatic decision instrumentation — every human correction is a structured signal” shows the concrete mechanism: every time a human corrects an agent’s proposal, tacit judgment is captured as a structured signal.
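Write-path capture can be sketched as a thin wrapper around the binding step. This is a hypothetical interposition point, not a real product API: the wrapper records the agent’s proposal, the human’s final decision, and the rationale before only the final state flows on to the system of record. A human override is detected automatically, turning every correction into a structured signal.

```python
import json
import time

def capture_at_write_path(entity, agent_proposal, human_decision,
                          approver, why, log):
    """Record rationale at the moment a decision becomes binding,
    before only the final state lands in the system of record."""
    event = {
        "ts": time.time(),
        "entity": entity,
        "agent_proposal": agent_proposal,
        "final_decision": human_decision,
        "approver": approver,
        "why": why,
        # Every human override of an agent proposal is itself a signal.
        "human_override": agent_proposal != human_decision,
    }
    log.append(json.dumps(event))
    return human_decision  # pass through to the system of record

log = []
final = capture_at_write_path(
    entity="contract-77",
    agent_proposal="standard indemnification clause",
    human_decision="capped indemnification at 12 months fees",
    approver="legal-escalation",
    why="customer in regulated industry; precedent: contract-31",
    log=log,
)
```

The design choice worth noticing is that capture happens inline, not via ETL: a read-path system would only ever see `final_decision`, while the write-path wrapper sees the proposal, the override, and the “why” in the same event.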

This connects to “Context is the product, not the model” and explains why “Persistent agent memory preserves institutional knowledge that walks out the door with employees” — memory files are a lightweight version of decision traces. At enterprise scale, “Tribal knowledge is the irreducible human input that enables agent automation” captures the same gap: the most critical context is implicit, conditional, and historically contingent — exactly the kind of “why” that decision traces aim to formalize. Without “Observability is the missing discipline for agent systems — you can’t improve what you can’t measure” to track how agents actually behave, even well-designed decision traces miss the performance context. And “Revealed preferences trump stated preferences — track what users do, not what they say” extends the principle further: capture not just agent reasoning but user reactions to that reasoning. At scale, these context graphs require “Permissioned inference is harder than permissioned retrieval — enterprise context graphs need reasoning-level access control”: controlling not just who sees data but whose history shapes reasoning. The value of traces extends beyond human audit: “Traces not scores enable agent improvement — without trajectories, improvement rate drops hard” demonstrates that agents improving other agents need full reasoning trajectories, not scores alone.
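The reasoning-level access control point can be made concrete with a small sketch. The function and ACL shape below are assumptions for illustration: before a trace is allowed to shape an agent’s reasoning, it is filtered by whose decision history the requester is permitted to draw on — a stricter question than whether the requester can see the underlying record.

```python
def permissioned_context(traces, requester, acl):
    """Reasoning-level access control: filter not just what the
    requester can see, but whose decision history may shape the
    agent's reasoning on the requester's behalf."""
    allowed_authors = acl.get(requester, set())
    return [t for t in traces if t["approver"] in allowed_authors]

traces = [
    {"approver": "deal-desk", "why": "discount precedent for new verticals"},
    {"approver": "exec-team", "why": "confidential pricing exception"},
]
# The sales agent may reason over deal-desk history, but exec-team
# decisions must not influence its outputs even indirectly.
acl = {"sales-agent": {"deal-desk"}}
context = permissioned_context(traces, "sales-agent", acl)
```

Retrieval-level permissioning would stop at hiding documents; inference-level permissioning has to keep the excluded history out of the context that conditions the model’s reasoning.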
