Dwarkesh Patel on AI acceleration, the missing ChatGPT moment for agents, and what's underrated about intelligence
Mar 28, 2025 with Dwarkesh Patel
Key Points
- Patel argues usefulness, not cost, is the constraint blocking AI's economic impact—models cost $0.02 per million tokens while agents remain unreliable and call centers fully staffed.
- The transformative breakthrough for AI agents will likely come from a foundation lab shipping a model architected for computer use, not from startups that have failed at agent-building for years.
- Patel assigns 10–20% probability to AI stagnation lasting fifty-plus years, driven almost entirely by risk that deep learning is the wrong paradigm; if the paradigm holds, he sees containment as nearly impossible.
Summary
Dwarkesh Patel, host of the Dwarkesh Podcast and author of The Scaling Era — a compilation of his AI interviews with figures including Mark Zuckerberg, Demis Hassabis, and Dario Amodei — argues that the missing ingredient in AI's economic impact is not cheaper models but smarter ones. At roughly $0.02 per million tokens, cost is not the constraint. Usefulness is. Call center workers still have their jobs, and agents still don't reliably work, which Patel reads as a fundamental model limitation rather than a product or deployment problem.
The missing ChatGPT moment for agents
This week's Ghibli image wave was the closest recent analogue to ChatGPT's mainstream breakthrough: ordinary users one-shotting beautiful outputs with no prompt engineering. But Patel argues the genuinely transformative equivalent for agents hasn't arrived yet, and probably won't come from a startup. Years of agent-building attempts have failed, and his expectation is that the breakthrough will come from whichever foundation lab ships a model specifically architected for computer use.
What intelligence alone won't solve
Patel pushes back on the idea that raw intelligence is the only variable that matters. The underrated factor, he argues, is the human "global hive mind": the accumulated specialization, capital deepening, and parallel experimentation that lets civilization build iPhones and skyscrapers. AI needs to replicate not just intelligence but that distributed, compounding structure. The Deep Blue moment is the historical warning: every time AI clears a capability threshold, it turns out to have captured only a slice of what intelligence actually requires.
P(stagnation)
Asked for his probability that AI stalls for fifty-plus years, Patel puts it at 10–20%, driven almost entirely by the risk that deep learning is simply the wrong paradigm. If the paradigm holds, he sees containment as nearly impossible: even AGI without a further intelligence explosion would be economically transformative enough that suppression would be extraordinarily difficult. He draws on the economic growth analogy — global GDP has compounded at roughly 2% annually since the 1750s, up from ~0.2% before, and there was no obvious physical ceiling that blocked that transition.
The blob and billions of AIs
On what "billions of AIs" actually means structurally, Patel is candid that nobody knows how the architecture will resolve. He floats an idea from a J. Coetzer interview — "the blob" — where a central entity gains far more compute than any human leader, reading every pull request, writing every press release, handling every customer interaction simultaneously. Whether that manifests as parallel copies of a single model or something architecturally different is an open question.
Apple and the innovator's dilemma
On Apple, Patel's view is that treating AI as a feature — the 25th department improving Siri's diction — is the category error. The companies likely to win are the ones treating it as the organizing logic of everything else. The counterpoint he acknowledges: SSI's closed-loop approach, where the AI accelerates AI research internally without external deployment, might work — but he puts that at roughly 50/50.
Underrated figures
Asked who in the book is most underrated, Patel lands on Carl Shulman. He describes Shulman as the original source for a remarkable range of ideas now circulating in AI discourse — the software-only singularity, intelligence explosion mechanics, economic modeling of transformative AI growth regimes — ideas Shulman distributes through conversations rather than writing. Patel also flags Ajeya Cotra, whose decade-long work bounding AI timelines by modeling evolution as a computational pathfinding exercise he calls foundational, though he notes she has received more recognition than Shulman.
Independence and value capture
On his own business, Patel says he came close to signing with a podcast network early on that offered production support in exchange for 50% of lifetime revenue — a deal he walked away from. His read, consistent with the TBPN hosts' framing, is that editorial independence is the asset: being tied to a single foundation model company or investor would constrain the conversations that make the show valuable in the first place.