AI Product · Building AI Agents · Architecture

Production agents route routine cases through decision trees, reserving humans for complexity

Handle exact matches and known patterns without AI; invoke the model for ambiguity, and route genuinely complex cases to human judgment

@vasuman — AI Agents 101 · 11 connections

Most agent tutorials show every user input flowing through an LLM. Production agents do the opposite: structured decision logic handles the routine 80% of cases, the model handles genuinely ambiguous situations, and the complex 20% routes to human judgment. This addresses both cost and latency, the two things that kill agents in production.
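The tiered routing above can be sketched as a small dispatcher. This is a minimal illustration, not a prescribed implementation: the `KNOWN_PATTERNS` table, the `ambiguity_score` input, and the 0.8 threshold are all hypothetical stand-ins for whatever classifier or rules engine your system actually uses.

```python
from enum import Enum, auto

class Route(Enum):
    RULES = auto()   # exact match / known pattern: no model call
    MODEL = auto()   # ambiguous middle ground: invoke the LLM
    HUMAN = auto()   # genuinely complex: escalate to a person

# Hypothetical declarative table of known paths. In practice this is
# your decision tree: intent tables, regexes, config-driven rules.
KNOWN_PATTERNS = {
    "reset password": "send_reset_link",
    "order status": "lookup_order",
}

def route(request: str, ambiguity_score: float) -> Route:
    """Tiered routing: rules handle the routine cases, the model
    handles the ambiguous ones, humans get the complex tail."""
    if request.lower().strip() in KNOWN_PATTERNS:
        return Route.RULES
    if ambiguity_score < 0.8:  # hypothetical escalation threshold
        return Route.MODEL
    return Route.HUMAN
```

The key property is that the cheap, deterministic check runs first, so the LLM (and the human queue) only ever see traffic the earlier tiers could not handle.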

The pattern is a direct extension of "Declarative beats imperative when working with agents": the decision logic encodes known paths declaratively, while the LLM handles the ambiguous middle ground that can't be pre-specified. It is also why "Verification is the single highest-leverage practice for agent-assisted coding" matters even more in production: when the LLM is called only for the harder cases, each call is higher-stakes and needs verification. The orchestration layer, not the agent, enforces guardrails and permissions: validating requests, checking access, and feeding results back. This is where "Context is the product, not the model" applies at the system level, not just the prompt level.
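A sketch of that orchestration layer, under stated assumptions: `validate`, `has_access`, and `call_model` are hypothetical hooks injected by the caller, and the dict-of-status return shape is an illustrative convention, not a real API. The point is structural: every guardrail check lives outside the agent, wrapping the model call on both sides.

```python
from typing import Callable

def orchestrate(
    request: str,
    user: str,
    call_model: Callable[[str], str],
    has_access: Callable[[str, str], bool],
    validate: Callable[[str], bool],
) -> dict:
    """The orchestration layer, not the agent, enforces guardrails:
    validate the request, check permissions, invoke the model, then
    verify the result before feeding it back."""
    if not validate(request):
        return {"status": "rejected", "reason": "invalid request"}
    if not has_access(user, request):
        return {"status": "denied", "reason": "insufficient permissions"}
    result = call_model(request)  # higher-stakes call: verify its output
    if not validate(result):
        return {"status": "escalated", "reason": "unverified model output"}
    return {"status": "ok", "result": result}
```

Because the agent never sees a request that failed validation or an unauthorized user, the model's blast radius is bounded by the layer around it rather than by prompt instructions.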