Turbopuffer's Simon Eskildsen on adding Thrive Capital, reaching trillions of objects, and building a concentrated cap table like a team
Dec 18, 2025 with Simon Hørup Eskildsen
Key Points
- Turbopuffer added Thrive Capital as an investor, with existing backer Lux Capital doubling down, maintaining a deliberately concentrated cap table of fewer than ten backers selected on demonstrated operational value.
- The vector search startup now stores trillions of objects for customers including Anthropic, Notion, and Cursor, and grew from 5 to 22 employees in 2024, with revenue outpacing headcount growth.
- Turbopuffer built on CPUs rather than GPUs to avoid GPU capacity constraints and regional latency mismatches, a hardware strategy that lets it serve vector search across 100 billion documents at low latency.
Summary
Turbopuffer founder Simon Eskildsen announced the addition of Thrive Capital as an investor, with existing backer Lux Capital also doubling down. No round size or valuation was disclosed, consistent with Eskildsen's deliberate posture of minimal fundraising transparency. The cap table philosophy is explicit: few investors, highly concentrated, selected on demonstrated value-add tracked in an internal tier ranking system. Thrive earned its spot after two years of what Eskildsen describes as material operational support.
The company now stores trillions of objects in its system and has grown the team from 5 to 22 people in 2024, with revenue growing faster than headcount. Customers include Anthropic, Cursor, Notion, Linear, and Atlassian. A public endorsement from Ivan at Notion — praising response times in a shared Slack channel — was cited as the metric Eskildsen cares about most, explicitly over fundraising figures.
Product and Market Position
Turbopuffer is positioning itself around two converging trends. First, AI-native SaaS products are connecting larger volumes of data to LLMs, requiring vector search infrastructure with significantly better unit economics than incumbents offer. Second, customers increasingly want to run search across tens to hundreds of billions of documents simultaneously — a scale previously achievable only with custom-built indexes at Meta or Google. Turbopuffer claims it can now deliver vector search across 100 billion documents at low latency with minimal compute.
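At its core, vector search means finding the stored embeddings nearest to a query embedding. A minimal brute-force sketch in plain Python makes the operation concrete (the names and tiny corpus here are invented for illustration — this is not Turbopuffer's API, and at billions of documents real systems use approximate indexes and object storage rather than a linear scan):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query, corpus, top_k=2):
    """Brute-force nearest-neighbor search: score every document, keep the best."""
    scored = [(cosine(query, vec), doc_id) for doc_id, vec in corpus.items()]
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:top_k]]

# Toy corpus of 3-dimensional embeddings keyed by document id.
corpus = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}

print(search([1.0, 0.0, 0.0], corpus))  # doc_a and doc_b are nearest
```

The gap between this O(n) scan and serving the same query over 100 billion documents is exactly where the unit-economics claim lives: the index structure and storage layout, not the similarity function, are the hard part.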
On full-text search, Eskildsen says Turbopuffer is reaching parity with, and in some cases exceeding, Lucene, the library underpinning Elasticsearch and OpenSearch. A key architectural advantage is optimization for LLM-generated queries, which are substantially longer and more precise than human search queries and create a different performance trade-off profile that Turbopuffer can exploit.
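A toy example shows why query length changes the trade-off profile: a long, precise query carries many discriminating terms, so even naive term-overlap scoring separates relevant from irrelevant documents far more sharply than a terse human query does. (The scoring function and two-document corpus below are invented for illustration; production engines use ranking functions like BM25, not raw overlap.)

```python
def overlap_score(query, doc):
    """Fraction of query terms that appear in the document (toy relevance score)."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms)

docs = [
    "rust async runtime tokio task spawning and scheduling",
    "python asyncio event loop scheduling internals",
]

human_query = "async scheduling"
llm_query = "how does the tokio runtime spawn and schedule async tasks in rust"

# The short human query scores both documents as plausible matches;
# the long LLM query zeroes out the off-topic document entirely.
for doc in docs:
    print(overlap_score(human_query, doc), overlap_score(llm_query, doc))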
Hardware Strategy
Turbopuffer has deliberately avoided GPU dependency, building on CPUs for easier procurement and broader regional availability. GPU-dependent infrastructure is creating real operational friction — one vendor mentioned in the conversation currently has GPU capacity only on the US West Coast, creating latency and availability mismatches for East Coast workloads. At Turbopuffer's current scale, the team is beginning to pre-request compute SKUs from Google and Amazon months in advance, a capacity planning discipline Eskildsen traces to his time at Shopify. The company has also engineered across multiple CPU generations to stay flexible amid SKU constraints.