When Claude Code starts a session, it builds a listing of every available skill with its description, then scans that listing to decide “is there a skill for this request?” This means the description field is not a summary; it is a trigger specification. Writing “Process content into graph nodes” tells the model what the skill does; writing “Use when you have a URL or article to extract insights from” tells it when to activate. The difference is the core of In agent-native architecture, features are prompts — not code: you are programming the model’s routing behavior through natural language.
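As a sketch, the contrast might look like this in a skill’s frontmatter (the skill name and exact wording here are illustrative, not taken from a real project):

```yaml
# Weak: describes what the skill does. The model gets no activation cue.
---
name: graph-extractor
description: Process content into graph nodes
---

# Stronger: describes when to reach for it, i.e. a trigger specification.
---
name: graph-extractor
description: >
  Use when the user shares a URL or article and wants insights
  extracted into graph nodes, e.g. "save this", "extract from
  this link", or a bare pasted URL.
---
```

The second version spends its tokens on conditions the model can pattern-match against the current request, which is the only thing the routing scan actually consumes.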
This principle generalizes beyond skills to any metadata consumed by LLMs: API descriptions, tool schemas, system prompt routing tables. The insight from CLAUDE.md should be a routing table, not a knowledge base — that CLAUDE.md should be a minimal set of IF-ELSE conditionals — is the same pattern applied to project configuration. Designing metadata for its actual consumer (an LLM doing pattern-matching) rather than its historical consumer (a human reading documentation) is a quiet but high-leverage instance of the shift described in Context is the product, not the model. The flip side of this consumer shift: when the consumer can infer, Inference capability lowers input fidelity requirements — smart listeners make imprecise input work — and that holds not just for metadata format, but for every input surface the LLM touches.
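A hypothetical CLAUDE.md fragment in that routing-table style: conditionals that point elsewhere instead of inlining knowledge (all file paths and rules here are invented for illustration):

```markdown
# CLAUDE.md

- IF the task touches the API layer: read docs/api-conventions.md first
- IF adding a skill: write the description as a trigger ("use when..."),
  not a summary of what the skill does
- IF a test fails in CI but not locally: check docs/ci-quirks.md before
  changing the test
- ELSE: proceed with defaults; do not load project history unless asked
```

Each line costs a few tokens and routes the model to the right context on demand, instead of front-loading a knowledge base it must wade through on every request.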