LLMs can automate much of initial context gathering — scanning query history to find the most referenced tables, extracting definitions from dbt or LookML models. But the most important context is implicit, conditional, and historically contingent, and only exists as tribal knowledge inside teams. A concrete example: “for CRM data, look at Affinity for all new USCAN deals from 2025 onwards but Salesforce for all global leads before that.” No automated scan discovers that rule.
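The automatable part of that context gathering can be sketched concretely. Below is a minimal, hypothetical example of one piece of it: scanning a query log to rank the most-referenced tables with a regex heuristic. The table names and queries are invented for illustration, and a real pipeline would use a proper SQL parser to handle CTEs, subqueries, and quoting.

```python
import re
from collections import Counter

def rank_referenced_tables(queries):
    """Rank tables by how often they appear in FROM/JOIN clauses.

    A deliberate simplification: a regex scan, not a full SQL parse.
    """
    pattern = re.compile(r"\b(?:FROM|JOIN)\s+([\w.]+)", re.IGNORECASE)
    counts = Counter()
    for query in queries:
        counts.update(name.lower() for name in pattern.findall(query))
    return counts.most_common()

# Hypothetical query history
history = [
    "SELECT * FROM analytics.deals d JOIN analytics.accounts a ON d.account_id = a.id",
    "SELECT count(*) FROM analytics.deals WHERE region = 'USCAN'",
    "SELECT * FROM crm.leads",
]
print(rank_referenced_tables(history))
# analytics.deals is referenced twice, the other tables once
```

This surfaces *which* tables matter, but notice what it cannot surface: nothing in the query log says when to prefer one CRM source over another, which is exactly the tribal-knowledge gap described above.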
This human refinement step supplies the final, crucial links that enable true agent automation. It connects to “Persistent agent memory preserves institutional knowledge that walks out the door with employees”: tribal knowledge is exactly that institutional knowledge. It also means “Context is the product, not the model” needs a qualifier: the context itself requires human curation to be valuable, and purely automated knowledge capture, however sophisticated, can’t close the last mile. That said, trace learning can capture a significant portion of this tribal knowledge indirectly. “Agents need workflow-level tool strategies, not individual tool instructions — the hard part is how tools combine” shows how Glean distills workflow patterns from execution traces, encoding the implicit conventions and sequencing that constitute tribal knowledge without requiring explicit documentation.
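What does the human-curated output of that refinement step look like? One plausible form is an explicit, machine-checkable routing rule. The sketch below encodes the CRM example from above as a function an agent could consult; the function name, field names, and the exact interpretation of the rule are assumptions made for illustration.

```python
from datetime import date

def crm_source(record_type: str, region: str, created: date) -> str:
    """Hypothetical encoding of a curated tribal-knowledge rule:
    new USCAN deals from 2025 onward live in Affinity; everything
    else (e.g. earlier global leads) lives in Salesforce."""
    if record_type == "deal" and region == "USCAN" and created >= date(2025, 1, 1):
        return "affinity"
    return "salesforce"

print(crm_source("deal", "USCAN", date(2025, 3, 1)))  # affinity
print(crm_source("lead", "EMEA", date(2024, 6, 1)))   # salesforce
```

The point of the encoding is not the ten lines of Python but their provenance: no scan of schemas or query logs produces this branch; a person who lived through the CRM migration has to write it down (or an agent has to distill it from their traces) before it can persist.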