Byrne Hobart: AI bubble is pro-social — it coordinates the entire supply chain to overbuild, and that's the point
Nov 19, 2025 with Byrne Hobart
Key Points
- Byrne Hobart argues the AI bubble is not a malfunction but a coordination mechanism: it synchronizes overbuilding across the supply chain so that future demand can be satisfied.
- A post-bust sentiment freeze poses a greater risk than tech leaders' inexperience with leverage: AI's capital intensity means that if institutional investors abandon the space, progress halts entirely.
- Hobart predicts that AI development will expand through specialized models routed via orchestration layers proliferating globally, rather than consolidate around 50 competing general-purpose frontier models.
Summary
Byrne Hobart, partner at Anomaly and author of The Diff, co-wrote Boom: Bubbles and the End of Stagnation to reshape how investors and executives understand speculative excess. He argues the AI bubble functions as a coordination mechanism, not a market failure.
How bubbles coordinate supply chains
The standard critique of bubbles misses their central function. When suppliers across a chain simultaneously believe a technology is transformational, they overbuild relative to current demand. This synchronized overbuilding is what actually satisfies future demand. If TSMC doesn't believe AI matters, it doesn't build enough fabs. Nvidia can't ship enough chips. Model quality stalls. The wild valuations are the signal that synchronizes the entire system at once—what Hobart calls "just-in-time manufacturing of the future."
This directly answers Ben Thompson's concern that GPU hardware depreciates too fast to leave durable infrastructure behind. Power generation has much longer lead times than chips, and gas turbine suppliers can contractually lock in customers regardless of AI sentiment shifts. In a bubble-pop scenario, the stranded asset might actually be abundant cheap power, which would lower inference costs and extend the economic life of existing GPU fleets well beyond depreciation schedules.
CoreWeave illustrates the subtlety. Its business operates in a narrow band where AI is real but not so disruptive that the next GPU generation renders its fleet worthless. Magnetar, which appears more frequently than Nvidia in CoreWeave's prospectus, has a long track record of structuring bets on relative timing and volatility. Its presence on the cap table reads as a sophisticated bet that AI is simultaneously underhyped long-term and overhyped near-term.
What actually concerns him
Hobart worries less about tech leaders' inexperience with leverage than about a post-bust sentiment freeze. After the dot-com collapse, internet investing became socially and institutionally untouchable for years; Zuckerberg starting a social network in 2004 looked dated, not visionary. AI is different because of its capital intensity. You could build Facebook in a dorm room, but AI progress stops cold if institutional investors decide the space is uninvestable. A sentiment-driven funding drought would be far more damaging here than it was in the early 2000s.
Specialized models, not 50 frontier competitors
On sovereign AI and international bubble propagation, Hobart doubts the world needs 50 near-frontier general-purpose models. His bet is on narrow, deeply specialized models instead—a Rust-only coding model that hasn't "polluted its mind" with C or C++, or a model approximating a single expert's judgment. These route through an orchestration layer that delegates to the right sub-model dynamically. A French tax law agent negotiating with a US tax law agent on a cross-border M&A deal illustrates the pattern. General-purpose frontier competition consolidates. Specialized model diversity expands.
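The delegation pattern Hobart describes can be sketched in a few lines. This is a minimal illustration, not a real system: the specialist registry, the handler names (`rust_coder`, `french_tax`), and the keyword-matching heuristic are all assumptions invented for this sketch; a production orchestration layer would classify queries with a model rather than substring checks.

```python
from typing import Callable, Dict

# Registry mapping a domain tag to a (hypothetical) specialist handler.
SPECIALISTS: Dict[str, Callable[[str], str]] = {}

def specialist(domain: str):
    """Register a handler as the specialist for one narrow domain."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        SPECIALISTS[domain] = fn
        return fn
    return wrap

@specialist("rust")
def rust_coder(query: str) -> str:
    # Stand-in for a Rust-only coding model.
    return f"[rust-model] {query}"

@specialist("french tax")
def french_tax(query: str) -> str:
    # Stand-in for a French tax law agent.
    return f"[fr-tax-model] {query}"

def route(query: str) -> str:
    """Delegate to the first matching sub-model; fall back to a generalist.

    Crude substring matching stands in for whatever classifier the
    orchestration layer would actually use -- only the shape matters here.
    """
    q = query.lower()
    for domain, handler in SPECIALISTS.items():
        if domain in q:
            return handler(query)
    return f"[generalist] {query}"
```

The point of the structure is that adding a new specialist is one registration, while callers only ever talk to `route` — the diversity of sub-models grows without the interface changing.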
Deployment moves slower than hype
The bottleneck isn't capability but accountability structure. Organizations assume that if something intelligent goes wrong, a specific human made the mistake. Scaling output 100x with AI doesn't eliminate errors; it diffuses blame, and most large enterprises resist that. AI-native products will enter through smaller, more tolerant customers first, grow with them, then get acquired or adopted by legacy players once the feature set catches up. This mirrors the Stripe playbook for payments applied to intelligence.
Two forces shape internal adoption. Top-down mandates push AI adoption from the C-suite. Bottom-up worker resistance stems from rational self-interest—employees don't want to demonstrate their role is automatable. The "labor is the TAM" framing investors use for early-stage AI bets works as scope-setting, similar to Netflix calling sleep its biggest competitor. The real addressable market is narrower: work that produces a sequence of tokens, whether a document, a spreadsheet model, or code. The messier the physical and institutional context, the harder it is for AI to operate without a human world model to draw on.
The cognitive split
Hobart's sharpest observation concerns what AI does to the value of thinking itself. Writing used to signal knowledge: you had to read widely and reason carefully before producing a coherent argument. Models break that signal. A nine-year-old can prompt ChatGPT, answer follow-up questions, and produce a polished case for her side of a sibling dispute. The effort was load-bearing, and now it isn't. He predicts a widening gap between a cognitive overclass that thinks deliberately because they enjoy it and a cognitive underclass for whom thinking becomes progressively optional. Which group a child joins is a choice, he argues, and one parents can make explicit now.