Sequoia's David Cahn: the AI talent arms race is just getting started — we're only in inning two
Jun 19, 2025 with David Cahn
Key Points
- Sequoia partner David Cahn argues the AI talent arms race is only in inning two, with $100 million signing bonuses for top researchers accelerating competition among companies that control a third of U.S. public market value.
- Meta and other labs are deliberately capping AI research teams at 50 people, betting that top-tier talent concentration drives output more than headcount, but this logic depends on compute scale as a durable moat amid porous IP.
- The AI ecosystem remains dramatically short of the $600 billion annual revenue needed to justify current infrastructure spending, while multiple unprofitable labs burn capital on both compute and talent with no clear path to profitability.
Summary
David Cahn, partner at Sequoia Capital, argues the AI industry is only in the second inning of a talent arms race that will escalate significantly before it peaks. His framing draws a direct parallel to professional sports: a small number of individuals — he estimates 7 to 10 people at the top of each major tech company — are driving decisions at organizations that collectively represent roughly a third of total U.S. public market capitalization.
The Talent Arms Race Logic
The catalyst for Cahn's recent blog post was the emergence of $100 million signing bonuses for top AI researchers, a dynamic he ties directly to the Scale AI acquisition and the rumors surrounding it. His core argument is that the behavior of these companies reveals their true beliefs more clearly than any public statement. Meta, which generated $63 billion in net income in 2024, is effectively going all-in: Bloomberg reports that its new AI research lab will target roughly 50 people. Cahn connects that deliberately constrained team size to a recurring pattern in tech history, citing Steve Jobs assembling 50 people to build the product that defined Apple and Elon Musk using a similarly sized team on Tesla Autopilot.
The 50-person ceiling is not arbitrary. Cahn observes that as AI research organizations have grown, headcount has not translated into proportionally more output; the top 20% of researchers drive 80% of results. He half-jokingly dubs a 50-person, top-tier team "a Cahn." At that scale, 50 researchers at $100 million each works out to $5 billion in total talent spend, a number that becomes more defensible when set against the potential value of reaching AGI.
The IP Porosity Problem
One structural tension Cahn raises is that AI talent compensation lacks the containment mechanisms that justify similar pay in adjacent fields. High-frequency trading firms enforce strict non-competes, garden leave provisions, and complex contracts to prevent strategy leakage. AI research operates in a fundamentally more porous environment. Researchers move fluidly between OpenAI, Anthropic, Meta, Google, and others; foundational ideas like the transformer architecture and reinforcement learning from human feedback have diffused rapidly through open publication. Cahn's view is that there is effectively no proprietary IP in AI research at the ideas level — compute scale may be the only durable moat — which raises a legitimate question about whether the economic logic of $100 million signing bonuses holds under scrutiny.
The Revenue Gap and Narrative Shift
Cahn revisits his earlier "$600 billion question" framework, originally published as the "$200 billion question" and later revised upward. The logic: Nvidia revenue serves as a proxy for data center spending, currently running at roughly $300 billion annually; for that spend to be recouped at a 50% gross margin, revenue must be twice the cost, so the AI ecosystem needs to generate $600 billion annually. When he first published the analysis, OpenAI had approximately $3 billion in annual revenue and the entire ecosystem was roughly 10% of the way toward that threshold. Twelve months later, OpenAI has reached $10 billion and the coding AI sector has hit $3 billion, but the ecosystem remains dramatically short of what the infrastructure investment requires.
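The arithmetic behind the framework can be sketched in a few lines. This is a back-of-the-envelope illustration using only the round numbers cited in the discussion, not Sequoia's actual model; the function name and the revenue figures plugged in are assumptions for the example.

```python
# Illustrative sketch of the "$600 billion question" arithmetic,
# using the round numbers from the discussion (not Sequoia's model).

def implied_revenue(annual_capex: float, gross_margin: float) -> float:
    """Revenue needed so that capex is covered at the given gross margin.

    At a 50% margin, cost of goods is half of revenue, so revenue
    must be 2x the spend: revenue = capex / (1 - margin).
    """
    return annual_capex / (1.0 - gross_margin)

capex = 300e9    # ~annual data center spend, proxied by Nvidia revenue
margin = 0.50    # assumed ecosystem-wide gross margin

needed = implied_revenue(capex, margin)

# Revenue figures cited twelve months after the original analysis
openai_rev = 10e9
coding_ai_rev = 3e9
identified = openai_rev + coding_ai_rev

gap = needed - identified
coverage = identified / needed

print(f"Implied revenue requirement: ${needed / 1e9:.0f}B")
print(f"Identified revenue: ${identified / 1e9:.0f}B ({coverage:.1%})")
print(f"Remaining gap: ${gap / 1e9:.0f}B")
```

Even counting only the two named revenue streams, the sketch makes the scale of the shortfall concrete: roughly $13 billion identified against a $600 billion requirement.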
Cahn is pointed about a narrative shift he sees happening quietly across the industry. A year ago, prominent voices were forecasting AGI in 2026 and dismissing skeptics. Now, with Sam Altman's "gentle singularity" essay circulating, the dominant framing has softened considerably — AI will change lives, but gradually. Cahn notes the convenience of that shift for companies that have consumer subscription businesses and need sustained adoption curves rather than a single transformative event. He does not accuse any single company of bad faith, but frames the narrative evolution as something the whole ecosystem has to reckon with, particularly given the volume of specific timeline promises made over the past two years.
The Sustainability Question
The medium-term tension Cahn flags is underappreciated: multiple unprofitable, multi-billion-dollar AI labs are simultaneously burning capital on both compute and now talent, while revenue generation lags far behind infrastructure spend. OpenAI is structurally better positioned than most, given its subscription revenue base. For pure research labs without comparable commercial traction, the next three to five years represent a genuine stress test. Cahn's mental model is that the AI ecosystem is currently being carried by its own momentum — self-reinforcing arms race dynamics and competitive game theory keep everyone upping the ante — but momentum is not a business model. The arms race continues because every participant believes the prize justifies the cost. Whether the economics ultimately validate that belief remains, by Cahn's own assessment, an open and unresolved question.