Redpoint's Sai Senthilkumar: Cursor hit $500M ARR in under a year — AI dev tools are becoming winner-take-most
Jun 10, 2025 with Sai Senthilkumar
Key Points
- Cursor reached $500M ARR in under a year, signaling that AI coding tools are collapsing the fragmented dev-tools market into a winner-take-most dynamic.
- Inference costs are falling 100 times faster than EC2 pricing did, while AI consumption runs 10 times higher than cloud adoption at equivalent stages.
- Model labs like OpenAI are moving up the stack to own IDE distribution and compete directly with startups, forcing founders to build on improvements in the underlying LLM rather than patching current deficiencies.
Summary
Sai Senthilkumar, a partner at Redpoint Ventures, makes a clean case that AI coding tools are collapsing the old assumption that dev-tools markets resist monopolization. Cursor hitting $500M ARR in under a year is his exhibit A. Two years ago, the consensus was that LLMs could autocomplete but couldn't reshape software development. That consensus is now clearly wrong.
Winner-take-most dynamics in coding
The traditional DevOps toolchain was fragmented by design — discrete tools for developers on the left, SRE and incident-response tooling on the right, each owning a distinct slice of the delivery lifecycle. Coding assistants like Cursor, Windsurf, and GitHub Copilot started at the IDE and are now visibly expanding rightward: code review, deployment, full-stack generation. Lovable already takes a single prompt to a live production app, collapsing what used to require designers, engineers, and multiple handoffs.
Senthilkumar identifies Cursor and Anthropic as the clear leaders in the coding layer today. GitHub Copilot, though no longer the trendy choice, still runs at a $500M revenue run rate — a business that would qualify as a standalone public company by most standards. Cognition grew 40% last month even after OpenAI launched a competing coding agent. The market is large enough that multiple players are scaling simultaneously, but Senthilkumar's view is that the IDE layer in particular has winner-take-most characteristics that didn't exist in earlier dev-tools cycles.
The addressable market underpins that conviction. Tens of millions of developers, even at moderate compensation assumptions, represent a $1.6 trillion annual spend pool. That is what makes coding the strongest product-market-fit story in AI right now.
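The summary doesn't state the exact inputs behind the $1.6 trillion figure, but a back-of-envelope decomposition shows the arithmetic. The developer count and average cost below are illustrative assumptions, not numbers from the talk:

```python
# Back-of-envelope check on the ~$1.6T developer-spend figure.
# Both inputs are illustrative assumptions consistent with "tens of
# millions of developers" at "moderate compensation".
developers = 40_000_000          # hypothetical global developer count
avg_annual_cost = 40_000         # hypothetical average spend per developer, USD/year

tam = developers * avg_annual_cost
print(f"${tam / 1e12:.1f}T annual spend pool")  # → $1.6T
```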
From IDE to agent orchestration
The next leg of the shift moves the human out of the center. Senthilkumar argues that companies like Cognition and Factory aren't IDE plays — they're async agent plays. A developer will task an agent, step away, and return to approve a pull request. Software engineers don't disappear in this model; they become orchestrators. Product managers gain direct access to production-grade tooling for the first time. The interaction layer itself may change: Andrej Karpathy's vibe-coding workflow — dictating to Whisper rather than typing — signals that voice could replace the keyboard as the primary interface for coding work.
Inference cost versus cloud cost
On the macro adoption curve, Senthilkumar offers a direct comparison. EC2 pricing fell fast during the cloud era and unlocked broad consumption. Inference costs are falling 100 times faster than EC2 did, while AI application consumption is running 10 times higher than cloud adoption at an equivalent stage. The combined effect is roughly 1,000x more consumption than the cloud buildout produced at the same point in its cycle.
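The two multipliers above compound into the roughly 1,000x figure. A minimal sketch of that arithmetic, using the ratios exactly as stated:

```python
# Senthilkumar's multiplier math as described: inference prices falling
# ~100x faster than EC2's did, consumption running ~10x higher than
# cloud's at the equivalent stage.
cost_decline_ratio = 100   # inference price decline vs. EC2 price decline
consumption_ratio = 10     # AI app consumption vs. cloud at the same stage

combined = cost_decline_ratio * consumption_ratio
print(f"~{combined:,}x cloud-era consumption")  # → ~1,000x
```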
He frames this alongside a structural market expansion: a large portion of professional services revenue — work currently done by humans — is being converted into product-based software revenue because LLMs can encapsulate those workflows in code.
Incumbents versus upstarts
The incumbents-versus-upstarts question doesn't resolve cleanly. Senthilkumar's view is that some categories favor incumbents — MongoDB absorbed the vector search opportunity by extending its existing suite rather than ceding ground to a dedicated vector database startup. But the IDE is a counterexample: two years ago, few would have predicted that a small team could out-execute GitHub Copilot. The lesson he draws is that AI-native wedges exist where the incumbent's architecture is a liability, not an asset.
He uses the hyperscalers as an analogy for the model labs. Elasticsearch's permissive license let AWS launch a competing managed service on top of it; Elastic responded by tightening the license, and AWS forked the project as OpenSearch. Elastic still exists as a roughly $10 billion business today. The model providers are doing something similar: OpenAI acquiring Windsurf to own IDE distribution, and Deep Research now indexing GitHub, Google Docs, Gmail, and SharePoint — a direct Glean competitor in all but name. Senthilkumar's expectation is that wherever product-market fit is strongest, the foundation model labs will move up the stack.
For startups, the survivable position is building on the right side of model improvement — applications that get stronger as the underlying LLM gets stronger, not ones patching current model deficiencies that will be closed in months.
Open source as a double-edged sword
Redpoint has backed HashiCorp and ClickHouse, so Senthilkumar has a close view of what open-source commercialization actually requires. His framing, borrowed from Databricks CEO Ali Ghodsi, is that building a large open-source business means hitting two home runs simultaneously: generating genuine open-source traction and then separately figuring out monetization. Databricks did it with Spark. MongoDB did it later, releasing its cloud product just before its IPO — the only enterprise software company, in his telling, where a late-stage cloud launch actually accelerated revenue growth.
The risk he flags for open-source founders is that their own project is typically their largest competitor. Enterprises can fork and self-host. The sustainable path is identifying enterprise features — compliance, security, managed reliability — that buyers will pay for rather than build internally. The total-cost-of-ownership question ultimately decides whether the commercial product wins: if internal engineering and maintenance costs exceed the vendor's price, the deal closes. If not, the strategy needs revisiting.
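The total-cost-of-ownership comparison he describes reduces to a simple inequality. A minimal sketch, with hypothetical names and numbers (real evaluations weigh many more factors than two cost lines):

```python
# Sketch of the TCO decision rule described above: the vendor wins when
# building and maintaining the open-source project internally costs more
# than buying. All figures below are hypothetical.
def vendor_wins(internal_eng_cost: float,
                internal_maintenance_cost: float,
                vendor_price: float) -> bool:
    """True when self-hosting costs exceed the vendor's price."""
    return internal_eng_cost + internal_maintenance_cost > vendor_price

# e.g. $300k/yr engineering + $150k/yr maintenance vs. a $250k contract
print(vendor_wins(300_000, 150_000, 250_000))  # → True
```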