Context is the product, not the model

Anyone can call the API — differentiation comes from the data you access, skills you build, UX you design, and domain knowledge you encode

@nicbstme (Nicolas Bustamante) — Lessons from Building AI Agents for Financial Services · 27 connections

The model is not your product. The experience around the model is your product. Anyone can call Claude or GPT — the API is the same for everyone. Your differentiation is the data you have access to, the skills you’ve built, the UX you’ve designed, the reliability you’ve engineered, and how well you know the industry.

The real work isn’t prompting — it’s turning messy data into clean, structured context the model can use. Everything becomes markdown (narrative), CSV (structured data), or JSON metadata (searchable). This connects directly to why Files are the universal interface between humans and agents — when context is your moat, you need formats that both agents and humans can work with.
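The normalization step described above can be sketched as a small function. This is a minimal illustration, not a production pipeline; the field names (`title`, `summary`, `ticker`, and so on) are hypothetical stand-ins for whatever a messy source record actually contains.

```python
import csv
import io
import json

def normalize_record(raw: dict) -> dict:
    """Split one messy record into the three formats the note names:
    markdown for narrative, CSV for structured data, JSON for
    searchable metadata. Field names here are hypothetical."""
    # Narrative prose becomes markdown the model can read directly.
    markdown = f"# {raw['title']}\n\n{raw['summary'].strip()}\n"

    # Tabular values become a CSV row under an explicit header.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["ticker", "fiscal_year", "revenue_usd"])
    writer.writerow([raw["ticker"], raw["fiscal_year"], raw["revenue_usd"]])

    # Anything an agent might filter or search on goes into JSON metadata.
    metadata = json.dumps(
        {"source": raw["source"], "tags": sorted(raw["tags"])},
        sort_keys=True,
    )
    return {"markdown": markdown, "csv": buf.getvalue(), "metadata": metadata}

record = normalize_record({
    "title": "Q4 filing notes",
    "summary": "Revenue grew on services.",
    "ticker": "ACME",
    "fiscal_year": 2024,
    "revenue_usd": 1_200_000,
    "source": "10-K",
    "tags": ["earnings", "filing"],
})
```

Because all three outputs are plain text, the same artifacts can be read, diffed, and edited by humans and agents alike, which is the point of the "files as universal interface" connection.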

This is also why Decision traces are the missing data layer — a trillion-dollar gap represents a trillion-dollar opportunity: the “why” behind decisions is the highest-value context, and no one captures it systematically yet. As Markdown skill files may replace expensive fine-tuning argues, whoever owns those skill files also owns the context layer. But context engineering only matters once the underlying model clears the capability bar. Model-market fit comes before product-market fit — without it, no amount of product excellence drives adoption shows that legal AI succeeded at 87% accuracy while finance AI failed at 56%, regardless of context quality. The enterprise data agent space validates this thesis dramatically: Data agent failures stem from missing business context, not SQL generation gaps shows that even well-connected agents fail without proper business definitions and source-of-truth resolution.

Connected Insights

Referenced by (22)

Files are the universal interface between humans and agents
Similarity is not relevance — relevance requires reasoning
Structure plus reasoning beats flat similarity for complex domains
Boring tech wins for AI-native startups — simpler stack means faster AI-assisted shipping
SaaS survives as the governance and coordination layer — determinism still rules
Open source captures value through services, not software
Prompt caching makes long context economically viable
The context window is the fundamental constraint — everything else follows
The three-layer AI stack: Memory, Search, Reasoning
Production agents route routine cases through decision trees, reserving humans for complexity
Personal software grows through relationship, not configuration
Agents eat your system of record — the rigid app was the constraint, not the schema
Model-market fit comes before product-market fit — without it, no amount of product excellence drives adoption
Harness engineering — humans steer, agents execute, documentation is the system of record
Decision traces are the missing data layer — a trillion-dollar gap
Markdown skill files may replace expensive fine-tuning
Domain-specific skill libraries are the real agent moat, not core infrastructure
Autopilots capture the work budget — six dollars in services for every one in software
Data agent failures stem from missing business context, not SQL generation gaps
Tribal knowledge is the irreducible human input that enables agent automation
Metadata consumed by LLMs needs trigger specifications, not human summaries
Vertical models beat frontier models in their domain — specialization wins on every metric