Glean’s trace learning replays actions that would write to apps like Google Drive, Salesforce, Jira, Asana, or Slack along a shadow path, so they never reach production systems. This lets the system learn from realistic end-to-end flows — including the write operations that are most informative about workflow patterns — without impacting customer data.
This is a critical enabler for agents that store error patterns and learn continuously without fine-tuning or retraining: you can’t learn from write-path failures if you never execute the write path. Shadow execution resolves the tension between learning and safety, playing a similar architectural role to rollback safety nets, which enable autonomous iteration through recoverability rather than model intelligence. Both make aggressive exploration safe, but shadow execution does it by never touching production at all, rather than by reverting changes after the fact. The implication: any agent system serious about trace-based improvement (without trajectories, improvement rate drops hard) needs a shadow execution layer for the subset of traces that involve external writes.
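To make the idea concrete, here is a minimal sketch of such a shadow execution layer. Everything in it is hypothetical — the connector names, trace format, and `ShadowExecutor` class are illustrative assumptions, not Glean’s actual implementation: the essential move is that write calls are intercepted and recorded as trace entries instead of being executed against production.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List


@dataclass
class ShadowExecutor:
    """Routes write operations to a shadow trace instead of production.

    Hypothetical sketch: connector names (e.g. "jira") and the trace
    record schema are assumptions for illustration only.
    """
    shadow_mode: bool = True
    trace: List[Dict[str, Any]] = field(default_factory=list)

    def execute_write(self, connector: str, action: str, payload: dict,
                      real_write: Callable[[dict], Any]) -> dict:
        if self.shadow_mode:
            # Record the would-be write so the agent can learn from the
            # full write path; the production client is never called.
            record = {"connector": connector, "action": action,
                      "payload": payload, "executed": False}
            self.trace.append(record)
            return record
        # Outside shadow mode, perform the real write and log the result.
        return {"connector": connector, "action": action,
                "result": real_write(payload), "executed": True}


# Usage: the agent exercises its write path end to end, but the Jira
# call below is only logged, never sent.
executor = ShadowExecutor(shadow_mode=True)
executor.execute_write("jira", "create_issue",
                       {"title": "Renew SSL cert"},
                       real_write=lambda p: None)  # real client stubbed out
```

The design choice worth noting is that the shadow path returns a structured record rather than nothing: downstream learning needs the full trajectory of attempted writes, not just a success/failure flag.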