Tyler Cowen: AI adoption is being wildly underestimated, and America's AI lead is a form of soft power over China
Apr 14, 2025 with Tyler Cowen
Key Points
- Tyler Cowen argues AI adoption is wildly underestimated outside tech circles, citing that five prominent figures at a prestigious New York event recognized neither the term nor the concept of AGI.
- China's leading AI models rely on American AI foundations, making Beijing dependent on Western reasoning embedded in the underlying systems and creating a structural soft power advantage for the United States.
- Equity markets are mispricing disruption risk in mid-tier software and small-business services, with companies likely devastated within five years as AI capability outpaces institutional ability to absorb it.
Summary
Tyler Cowen's central argument is that AI adoption is being wildly underestimated, not by the tech bubble but by virtually everyone outside it. At a prestigious New York event two or three months ago, Cowen used the term 'AGI' with five well-known people, and not one of them recognized it. That's the baseline.
AI as soft power
Cowen's most pointed argument concerns geopolitics. DeepSeek and Manus, China's leading AI models, are built on American AI foundations. As the Chinese government deepens its reliance on these systems, it becomes dependent on Western modes of thought embedded in the underlying models. Beijing can censor outputs on Taiwan or Tiananmen Square, but stripping out the underlying reasoning would make the models substantially dumber. The smartest entities in China, Cowen argues, are already effectively American — and that gap widens as AI capability compounds. He frames this as the Francis Fukuyama liberal convergence thesis being realized through AI rather than through politics.
Tariffs vs. the AI race
On tariffs, Cowen is direct: the AI race matters far more than trade policy, and winning it requires free trade in AI inputs. Export controls on China are worth attempting despite uncertain efficacy; he sees little downside to trying the first-order policy. He acknowledges Ben Thompson's counter-argument that chip restrictions could increase the likelihood of a Taiwan invasion by removing China's economic stake in the island's stability, calling it 'not impossible' but arguing that the expected-value calculation doesn't clearly favor Thompson's position. His working assumption is that China will eventually develop competitive chips and lithography domestically regardless.
GDP growth and institutional drag
Cowen pushes back on the idea that AI will rapidly deliver 3–5% real GDP growth. The sectors that adapt quickly — competitive, market-driven industries — get cheaper fast, which shrinks their share of GDP. The sectors that don't adapt — healthcare, government, higher education, K–12, nonprofits — represent more than half the economy and are structurally resistant. The better the AI gets, the more human institutional failure becomes the binding constraint. Meaningful acceleration in the growth rate requires rebuilding almost every major institution, which Cowen calls a generational project.
Who wins in the labor market
Cowen reframes the 'charisma vs. intelligence' question. The people who both know a lot and genuinely understand what they know are now orders of magnitude more productive, managing what he calls 'armies of AIs.' You don't need more of them, but the ones you have matter far more. The traits that compound are taste, knowing which model to ask and which answer to trust, combined with the initiative to orchestrate AI at scale. Charisma alone isn't the answer; inspiration is closer to it.
What isn't priced in
Equity markets aren't reflecting the disruption risk, Cowen says. Companies with mid-tier software are likely to be 'devastated within five years,' slower than the most aggressive forecasts but well within a timeframe relevant to share prices. He singles out the current trend of acquiring small-business accounting firms at eight times earnings as a case study in mispricing: the technology that makes those firms valuable is the same technology that will undercut their service offering.
On AI doomers
Cowen is skeptical of 'AI 2027'-style catastrophe scenarios, not because he rules out risk, but because the people making those arguments aren't acting on them: none are short the market. His challenge: take the case to peer-reviewed journals the way climate researchers did, rather than publishing blog posts with seventeen bullet points. His practical position is that there is no pause option. 'I would rather be an American paperclip than a Chinese paperclip.'
Robots and near-term physical AI
Humanoid robots remain too far out to forecast usefully. Cowen's attention is on 'smarts in a box.' Driverless cars are the near-term inflection point he watches most closely: Waymo is expanding to Washington, D.C. this year, which he thinks will shift mainstream perception more than any model release, precisely because the performance is observable and the safety record is real.
The through-line across every topic is timing. Cowen thinks almost everyone — executives, investors, policymakers — is sleeping on how fast the capability curve is moving relative to how slowly institutions will absorb it. The gap between those two speeds is where most of the risk and most of the opportunity sits.