Intercom's Apex, described as the world's first specialized customer service LLM, beat every frontier model in six months of production tests on resolution rate, latency, hallucination rate, and cost. Fin already resolves over 2M customer issues per week across ~8k companies. This suggests that the claim "model-market fit comes before product-market fit; without it, no amount of product excellence drives adoption" has a corollary: once you cross the capability threshold, vertical training data creates a compounding advantage that generalist models cannot match.
The implication for "domain-specific skill libraries are the real agent moat, not core infrastructure" is that the moat extends below the skill layer into the model itself. When the model is trained on millions of real customer service conversations, prompt engineering and skill libraries become less critical; the domain knowledge lives in the weights. This challenges the assumption that "context is the product, not the model," at least in domains with enough training data to support specialization.