Intercom’s Bold AI Gambit: Fin Apex 1.0 Challenges OpenAI and Anthropic in Customer Service
In a move that’s sending shockwaves through the tech industry, customer service giant Intercom has unveiled Fin Apex 1.0, a purpose-built AI model that claims to outperform leading frontier models from OpenAI and Anthropic on customer support metrics. This announcement marks a significant shift in the AI landscape, as a legacy software company takes the unusual step of developing its own proprietary model.
The Numbers That Matter
Fin Apex 1.0 boasts impressive performance metrics that have caught the attention of industry insiders:
- 73.1% resolution rate: ahead of GPT-5.4 and Claude Opus 4.5 (both at 71.1%) and Claude Sonnet 4.6 (69.6%)
- 3.7-second response time: 0.6 seconds faster than the next-fastest competitor
- 65% reduction in hallucinations: Compared to Claude Sonnet 4.6
- One-fifth the cost: Of using frontier models directly
These numbers aren’t just impressive on paper; they translate to real-world impact for businesses. As Intercom CEO Eoghan McCabe explains, “If you’re running large service operations at scale and you’ve got 10 million customers or a billion dollars in revenue, a delta of 2% or 3% is a really large amount of customers and interactions and revenue.”
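McCabe's point about small deltas at scale is easy to verify with back-of-the-envelope arithmetic. The sketch below uses the resolution rates quoted above; the annual conversation volume is a hypothetical assumption, not a figure from Intercom:

```python
# Illustrative math for the resolution-rate delta McCabe describes.
# Rates are from the article; the conversation volume is hypothetical.

conversations_per_year = 10_000_000  # assumed volume for a large operation

fin_apex_rate = 0.731    # Fin Apex 1.0 resolution rate (per the article)
frontier_rate = 0.711    # GPT-5.4 / Claude Opus 4.5 rate (per the article)

extra_resolved = conversations_per_year * (fin_apex_rate - frontier_rate)
print(f"Extra auto-resolved conversations per year: {extra_resolved:,.0f}")
```

At 10 million conversations, a 2-point delta is roughly 200,000 interactions a year that no longer need a human agent, which is the scale effect the quote is pointing at.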
The Catch: What’s Under the Hood?
Here’s where things get interesting – and a bit controversial. When pressed about the base model Fin Apex was built on, Intercom declined to provide specifics, only confirming it’s “in the size of hundreds of billions of parameters.” This secrecy has raised eyebrows in the AI community, especially given the company’s claims of transparency.
This approach stands in stark contrast to the backlash faced by AI coding startup Cursor when it was accused of burying the fact that its Composer 2 model was built on fine-tuned open-weights models. Intercom’s stance – transparent about using open-weights but secretive about which ones – is a delicate balancing act that may not satisfy skeptics.
Post-Training: The New Frontier
Intercom’s argument is that the base model simply doesn’t matter anymore. “Pre-training is kind of a commodity now,” McCabe says. “The frontier, if you will, is actually in post-training. Post-training is the hard part. You need proprietary data. You need proprietary sources of truth.”
This philosophy is backed by Intercom’s investment in its AI team, which has grown from 6 to 60 researchers over the past three years. The company post-trained its chosen foundation model on years of proprietary customer service data, building reinforcement learning systems grounded in real resolution outcomes.
The Numbers Behind the Numbers
The financial impact of this AI-first pivot is already evident:
- Fin is approaching $100 million in annual recurring revenue
- Growing at 3.5x, making it the fastest-growing segment of Intercom’s $400 million ARR business
- Projected to represent half of Intercom’s total revenue early next year
These figures represent a remarkable turnaround for a company that McCabe admits was “in a really bad place” before its AI pivot.
The Broader Implications
Intercom’s move aligns with a broader trend described by Andrej Karpathy as the “speciation” of AI models – a proliferation of specialized systems optimized for narrow tasks rather than general intelligence. This approach is particularly suited to customer service, one of only a few enterprise AI use cases that have found genuine economic traction.
However, this raises questions about the durability of domain-specific models. Will frontier labs eventually close this gap? McCabe believes the labs face structural limitations, arguing that the generic models aren’t going to be able to keep up with the domain-specific models right now.
Beyond Efficiency to Experience
Early enterprise AI adoption focused heavily on cost reduction, but McCabe sees the conversation shifting towards experience quality. “Originally it was like, ‘Holy shit, we can actually do this for so much cheaper.’ And now they’re thinking, ‘Wait, no, we can give customers a far better experience,’” he said.
The vision extends beyond simple query resolution. McCabe imagines AI agents that function as consultants – a shoe retailer’s bot that doesn’t just answer shipping questions but offers styling advice and shows customers how different options might look on them.
Pricing and Availability
For existing Fin customers, the upgrade to Apex comes at no additional cost. Intercom confirmed that customer pricing remains unchanged – users continue to pay per outcome as before, at $0.99 per resolved interaction, and automatically benefit from the new model.
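Under this outcome-based model, a customer’s bill depends only on resolved interactions, not total conversation volume. A minimal sketch of how that pricing works out, assuming a hypothetical monthly volume (the $0.99 price and 73.1% resolution rate are from the article; everything else is illustrative):

```python
# Outcome-based pricing sketch: customers pay only per resolved interaction.
PRICE_PER_RESOLUTION = 0.99  # USD per resolved interaction (per the article)
RESOLUTION_RATE = 0.731      # Fin Apex 1.0 headline resolution rate

def monthly_bill(conversations: int) -> float:
    """Estimated monthly cost: unresolved conversations are not billed."""
    resolved = conversations * RESOLUTION_RATE
    return resolved * PRICE_PER_RESOLUTION

# Hypothetical volume: 50,000 support conversations in a month.
print(f"Estimated monthly bill: ${monthly_bill(50_000):,.2f}")
```

The design choice worth noting is that the vendor, not the customer, carries the risk of unresolved conversations, which is why the resolution-rate delta discussed earlier feeds directly into Intercom’s own margins.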
However, Apex is not available as a standalone model or through an external API. It is accessible only through Fin, meaning businesses cannot license the model independently or integrate it into their own products.
What’s Next
Intercom plans to expand Fin beyond customer service into sales and marketing – positioning it as a direct competitor to Salesforce’s Agentforce vision, which aims to provide AI agents across the customer lifecycle.
For the broader SaaS industry, Intercom’s move raises uncomfortable questions. If a 15-year-old customer service company can build a model that outperforms OpenAI and Anthropic in its domain, what does that mean for vendors still relying on generic API calls?
McCabe’s answer is stark: “If you can’t become an agent company, your CRUD app business has a diminishing future.”
Tags
AI model, customer service, Fin Apex, Intercom, OpenAI, Anthropic, GPT-5.4, Claude Opus 4.5, post-training, proprietary data, SaaS, enterprise AI, resolution rate, hallucinations, cost efficiency, domain-specific AI, speciation of AI models
Viral Sentences
- “If you can’t become an agent company, your CRUD app business has a diminishing future.”
- “Pre-training is kind of a commodity now. The frontier, if you will, is actually in post-training.”
- “Customer service has always been pretty shit. Even the very best brands, you’re left waiting on a call, you’re bounced around different departments.”
- “We’re by far the first in the category to train our own model. There’s no one else that’s going to have this for a year or more.”
- “Maybe the future is that Anthropic has a big offering of many different specialized models. Maybe that’s what it looks like.”
- “Fin is approaching $100 million in annual recurring revenue and growing at 3.5x, making it the fastest-growing segment of the company’s $400 million ARR business.”
- “The generic models are trained on generic data on the internet. The specific models are trained on hyper-specific domain data.”
- “Originally it was like, ‘Holy shit, we can actually do this for so much cheaper.’ And now they’re thinking, ‘Wait, no, we can give customers a far better experience.’”
- “If you’re running large service operations at scale and you’ve got 10 million customers or a billion dollars in revenue, a delta of 2% or 3% is a really large amount of customers and interactions and revenue.”
- “We are very transparent that we have used an open-weights model, just not which one.”