Attio raises $52M Series B led by Google Ventures to build an AI-native CRM challenging Salesforce
Aug 26, 2025 with Nicolas Sharp
Key Points
- Attio closes $52 million Series B led by Google Ventures, reaching 5,000 customers by solving the hard engineering problem of building CRM-scale data infrastructure to feed LLMs useful context.
- The startup is shifting pricing from a pure seat-based model to a hybrid model that pairs seats with AI credits, and expects outcome-based pricing to grow as the value delivered by LLMs becomes measurable to end users.
- Attio executes LLM-generated code natively, making customization accessible to go-to-market operators rather than requiring Salesforce-style specialist engineers fluent in proprietary languages.
Summary
Attio, an AI-native CRM, has closed a $52 million Series B led by Google Ventures. The company has 5,000 customers, roughly half of them tech companies, and spent three years building before launching publicly about two years ago.
The core pitch is that legacy CRMs force businesses to adapt their go-to-market motion to fit the software. Attio inverts that: a flexible data model ingests emails, calendar data, and product signals, then uses that context to automate action. CEO Nicolas Sharp argues LLMs are only as useful as the context you feed them, and that building the data infrastructure to do that at CRM scale was the hard engineering problem that took years to solve.
The second structural bet is what Sharp calls "code as the new no-code." Salesforce's flexibility runs through Apex, a proprietary language few people know. Attio builds native code execution into the product from the start, then uses LLMs to generate that code. Because LLMs are particularly good at code generation in constrained environments, customisation that previously required specialist engineers becomes accessible to the go-to-market operators Attio is targeting.
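The "code as the new no-code" idea can be sketched in miniature. This is not Attio's implementation; it is a hypothetical illustration of the general pattern of executing a short LLM-generated snippet inside a constrained environment, where the snippet can only see a record and a result dictionary. All names (`run_generated_code`, `ALLOWED_BUILTINS`, the record fields) are assumptions for illustration.

```python
# Illustrative sketch only: executing an LLM-generated snippet in a
# constrained Python namespace. The snippet can read `record`, write to
# `output`, and call a small allowlist of builtins -- nothing else.

ALLOWED_BUILTINS = {"len": len, "sum": sum, "min": min, "max": max}

def run_generated_code(snippet: str, record: dict) -> dict:
    """Execute a generated snippet against one CRM record.

    The execution scope exposes only the allowlisted builtins, the
    record, and an empty `output` dict for results; imports and all
    other builtins are unavailable.
    """
    scope = {"__builtins__": ALLOWED_BUILTINS, "record": record, "output": {}}
    exec(snippet, scope)  # constrained: snippet sees only the names above
    return scope["output"]

# The kind of snippet an LLM might generate from an operator's request
# like "flag accounts with more than three open deals":
generated = 'output["flag"] = len(record["open_deals"]) > 3'

result = run_generated_code(generated, {"open_deals": [101, 102, 103, 104]})
print(result)  # {'flag': True}
```

The constrained scope is what makes this tractable for LLMs: with a small, fixed vocabulary of names, generated code has far fewer ways to go wrong than open-ended scripting. (A production system would add real sandboxing, timeouts, and resource limits on top.)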
Gross margins and pricing
Early on, inference costs weren't a focus. Over the past six to twelve months, Sharp says Attio has reached a "very good" margin position through two levers: models have gotten cheaper, and the team built runtime model routing that matches task complexity to model cost. Summarising a call transcript, for example, runs through a cheaper model first for compaction, with fewer tokens passed to a higher-quality reasoning model.
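The two-stage routing Sharp describes can be sketched with back-of-the-envelope numbers. This is a hedged illustration, not Attio's router: the model tiers, per-token prices, and compaction ratio below are invented for the example, but the shape matches the description, a cheap model compacts the transcript first so the expensive reasoning model sees fewer tokens.

```python
# Illustrative cost model for runtime model routing. Prices and the
# compaction ratio are made-up placeholders, not real vendor figures.

MODELS = {
    "cheap":  {"cost_per_1k_tokens": 0.0002},  # fast model used for compaction
    "strong": {"cost_per_1k_tokens": 0.0150},  # reasoning model for the final pass
}

def route(task_complexity: str) -> str:
    """Trivial router: match task complexity to a model tier."""
    return "strong" if task_complexity == "high" else "cheap"

def summarise_call_cost(transcript_tokens: int, compaction_ratio: float = 0.1) -> float:
    """Estimated cost of the two-stage pipeline: the cheap model compacts
    the full transcript, then the strong model summarises the compacted text."""
    compact_cost = transcript_tokens / 1000 * MODELS["cheap"]["cost_per_1k_tokens"]
    compacted_tokens = transcript_tokens * compaction_ratio
    summary_cost = compacted_tokens / 1000 * MODELS["strong"]["cost_per_1k_tokens"]
    return compact_cost + summary_cost

tokens = 20_000  # a long call transcript
naive = tokens / 1000 * MODELS["strong"]["cost_per_1k_tokens"]  # strong model alone
routed = summarise_call_cost(tokens)
print(f"strong-only ${naive:.4f} vs routed ${routed:.4f}")  # routed is ~9x cheaper here
```

With these placeholder numbers the routed path costs $0.034 against $0.30 for sending the raw transcript straight to the strong model; the saving comes entirely from the strong model seeing a tenth of the tokens.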
Pricing is shifting from pure seat-based to a hybrid model. Seats come with generous automation and AI credits; customers who want to automate heavily can buy additional credits. Sharp expects outcome-based pricing to become more prominent as the value delivered by the LLM layer becomes more visible to end users.