Proprietary feedback loops create moats that widen with every interaction

When usage generates data that competitors cannot replicate — correction patterns, preference signals, domain-specific edge cases — the product improves faster than any new entrant can catch up

Nikunj Kothari — Revealed Preferences · 6 connections

The strongest AI moat isn’t proprietary data at rest — it’s proprietary data in motion. Every user interaction generates correction patterns, preference signals, and edge-case resolutions that feed back into the system. A competitor starting from scratch faces not just a data gap but an accumulating gap: the incumbent’s system improves with every interaction while the challenger has zero signal.

This goes beyond "Domain-specific skill libraries are the real agent moat, not core infrastructure" (which captures static expertise) to dynamic expertise that self-updates. Skills encode what you knew yesterday; feedback loops encode what you're learning today. The mechanism is "Compound engineering makes each unit of work improve all future work" applied to the product itself: each user session doesn't just complete a task, it improves future task completion. And "Cross-user knowledge transfer works without fine-tuning — just a database and prompt engineering" shows the infrastructure is surprisingly simple: a database and prompt engineering, not expensive training pipelines. The key is designing the capture mechanism from day one; retrofitting feedback loops onto a product that wasn't built for them is orders of magnitude harder.
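To make the "database plus prompt engineering" point concrete, here is a minimal sketch of such a capture mechanism. All names and the schema (`FeedbackStore`, a `corrections` table) are hypothetical illustrations, not any product's actual implementation: corrections are written back on every session, then retrieved and folded into future prompts, with no fine-tuning anywhere in the loop.

```python
import sqlite3

class FeedbackStore:
    """Hypothetical sketch: capture user corrections in a plain
    database and inject them into future prompts."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS corrections ("
            "  topic TEXT, model_output TEXT, user_fix TEXT)"
        )

    def record(self, topic, model_output, user_fix):
        # Every session writes back a signal: the "data in motion"
        # that accumulates with each interaction.
        self.db.execute(
            "INSERT INTO corrections VALUES (?, ?, ?)",
            (topic, model_output, user_fix),
        )
        self.db.commit()

    def prompt_prefix(self, topic, limit=5):
        # Retrieve prior corrections for this topic and fold them
        # into the prompt, so every future session benefits.
        rows = self.db.execute(
            "SELECT model_output, user_fix FROM corrections "
            "WHERE topic = ? LIMIT ?",
            (topic, limit),
        ).fetchall()
        if not rows:
            return ""
        lines = [f"- Instead of {bad!r}, prefer {good!r}" for bad, good in rows]
        return "Known corrections:\n" + "\n".join(lines)

store = FeedbackStore()
store.record("dates", "03/04/2025", "2025-04-03 (ISO 8601)")
print(store.prompt_prefix("dates"))
```

The design choice the insight emphasizes is visible here: `record` is called inside the normal usage path, not bolted on later, which is why retrofitting is so much harder than capturing from day one.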