Traditional dictation demanded perfect transcription because the consumer was a dumb text field. Voice-to-LLM works because the consumer infers intent from imperfect signal — you can mumble, trail off, restart a sentence, and the system reconstructs what you meant. The bottleneck was never the microphone; it was the listener.
This principle extends beyond voice. Typo-ridden prompts (“csn you push”, “Pairl April 16th”) work because declarative beats imperative when working with agents: when you specify outcomes rather than step-by-step instructions, the consumer’s inference fills the gap between intent and expression. The same dynamic explains why technical knowledge can become a liability when working with AI: experts over-specify with precise implementation details, while novices state loose intent that inference-capable systems handle better. The design implication for AI-native product architecture is that interfaces consumed by LLMs can tolerate far lower input fidelity than those consumed by deterministic parsers, a fundamental shift in how we think about input validation and error handling.