Tomasz Tunguz: AI data center buildout now exceeds 1% of US GDP — echoing dot-com infrastructure boom
Oct 22, 2025 with Tomasz Tunguz
Key Points
- AI data center capital spending now exceeds 1% of US GDP, a threshold originally forecast for AI's economic contribution, not infrastructure consumption alone.
- Extended GPU depreciation schedules at major operators mask physical reality: Google internal analysis shows failure rates spike significantly after three years, half the assumed asset life.
- New chip architectures claiming 90% performance gains threaten current hardware valuations, while surging token growth and agentic AI pull demand the other way, leaving the net effect on utilization economics genuinely uncertain.
Summary
AI infrastructure spending has crossed a threshold that few anticipated this soon. The data center buildout now exceeds 1% of US GDP, a milestone that Tomasz Tunguz of Theory Ventures notes was originally forecast for AI's contribution to GDP, not its capital consumption. The parallel to the dot-com era is deliberate and data-driven, with Tunguz pointing to Lucent and Nortel as the cautionary archetypes of infrastructure companies that soared and collapsed when closed-loop vendor financing ran out of external demand.
Vendor Financing: Real Risk, Wrong Target
The $110 billion in NVIDIA vendor financing making headlines is not, in Tunguz's view, a straightforward red flag. Vendor financing is structurally common, and the 2012 ASML customer co-investment program, in which TSMC, Intel, and Samsung collectively contributed $6.8 billion to accelerate EUV lithography, is cited as a working precedent. The mechanism breaks down only when capital circulates in a closed system with no net new GDP entering. That condition does not currently apply, given OpenAI's revenue trajectory, enterprise AI adoption, and the repatriation of BPO workloads from offshore markets.
The more concentrated risk sits with bondholders and lenders, not with the AI labs themselves. OpenAI is not the primary balance sheet risk-taker. Its infrastructure partners and creditors are. The $500 billion annual CapEx run rate industrywide makes the return on invested capital question and depreciation schedule assumptions the more consequential variables to watch.
Depreciation Schedules as a Stress Indicator
Several major data center operators have extended depreciation schedules, in some cases nearly doubling them. Amazon moved from a three-year to a six-year schedule, then pulled back to five years earlier in 2025. This matters because these schedules underpin the collateral valuations supporting the debt structures. Google internal analysis found that GPUs running at 50 to 70% utilization show a significant rise in failure rates after three years, half of the extended depreciation window, creating a potential mismatch between asset-life assumptions and physical reality.
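A rough way to see that mismatch is to compare book value under straight-line depreciation with the point where failure risk reportedly climbs. The sketch below uses a hypothetical $40,000 unit cost; only the three- and six-year schedules and the three-year failure inflection come from the discussion above.

```python
# Illustrative only: straight-line book value under a 3-year vs 6-year
# depreciation schedule, compared at the 3-year mark where (per the Google
# analysis cited above) failure rates reportedly start to climb.
# The $40,000 unit cost is a hypothetical assumption.

UNIT_COST = 40_000  # hypothetical cost of one GPU server, USD

def book_value(cost: float, life_years: int, age_years: int) -> float:
    """Straight-line depreciation: remaining book value at a given age."""
    remaining = max(life_years - age_years, 0) / life_years
    return cost * remaining

age = 3  # years in service, where failure rates reportedly spike
for life in (3, 6):
    bv = book_value(UNIT_COST, life, age)
    print(f"{life}-year schedule, age {age}: book value ${bv:,.0f}")

# 3-year schedule, age 3: book value $0
# 6-year schedule, age 3: book value $20,000
# A collateral valuation built on the 6-year figure assumes the asset still
# has half its life left, exactly when physical failure risk rises.
```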
Adding pressure on the deflationary side, a Cornell Research study published this week, tied to a company called SGI, claims a new chip architecture delivers 90% performance improvement relative to current GPUs, with a second-generation design targeting a 100x improvement from that baseline. If even a fraction of that holds, the useful economic life of currently deployed hardware compresses further.
Token Growth as the Bull Signal
Google has now released three data points on token generation, enough to begin tracking whether growth is decelerating. Tunguz does not believe the cause would be demand erosion. Efficiency gains from algorithmic improvements are the more likely explanation. The net vector between surging token demand and deflationary chip and model improvements remains genuinely uncertain, but both forces are accelerating simultaneously.
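With only three public data points, the simplest check is whether the period-over-period growth rate is falling. The figures below are placeholders, not Google's reported token counts; the sketch only shows the calculation, and as noted, a bending curve by itself cannot distinguish demand erosion from efficiency gains.

```python
# Hypothetical token-generation disclosures (arbitrary units).
# These are NOT Google's actual figures; they only illustrate the check.
tokens = [100, 480, 980]  # three successive data points

growth_rates = [b / a - 1 for a, b in zip(tokens, tokens[1:])]
print([f"{g:.0%}" for g in growth_rates])   # ['380%', '104%']

decelerating = growth_rates[-1] < growth_rates[0]
print("growth decelerating:", decelerating)

# A falling growth rate alone does not say whether demand is eroding or
# whether models simply need fewer tokens per task; it only flags that
# the trajectory has bent.
```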
OpenAI is on a trajectory toward one billion monthly active users, a scale that, combined with Google and Microsoft both declaring themselves hardware-constrained in 2025 earnings calls, suggests internal confidence among the hyperscalers is high. The transition from AI as knowledge compression to AI as task execution in the agentic phase is the demand multiplier that could make current infrastructure assumptions conservative rather than aggressive.
Agentic Commerce as the Next Advertising Market
The advertising market is structurally open for the first time in roughly fifteen years. Google and Meta currently dominate, with approximately $250 billion in search ad revenue and $265 billion in social respectively. AI-native interfaces are disrupting the query-to-purchase funnel in ways that create new auction surfaces.
Two distinct monetization models are emerging. The first is in-context transactional advertising, where a purchase decision is made within the same session as the research query. The second is retargeting across unrelated content, analogous to Meta's social feed ad model. Tunguz notes that retargeting historically performs best for considered purchases, citing the automobile category as an example, while impulse formats suit lower-consideration items. The more aggressive scenario under discussion involves AI platforms taking a 30% transaction cut rather than a 1% affiliate fee, a model that would mirror the Apple App Store structure and that merchants would absorb if conversion performance justifies it.
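To make the gap between those two models concrete, the sketch below compares platform revenue per thousand shopping sessions under a 1% affiliate fee versus a 30% transaction cut. The basket value and conversion rate are hypothetical assumptions, not figures from the conversation.

```python
# Illustrative comparison of platform revenue per 1,000 shopping sessions
# under the two monetization models discussed above. Basket value and
# conversion rate are hypothetical assumptions.

SESSIONS = 1_000
BASKET_VALUE = 120.0  # hypothetical average order value, USD

def platform_revenue(conversion_rate: float, take_rate: float) -> float:
    """Revenue the AI platform captures from converted sessions."""
    orders = SESSIONS * conversion_rate
    return orders * BASKET_VALUE * take_rate

affiliate = platform_revenue(conversion_rate=0.03, take_rate=0.01)
app_store = platform_revenue(conversion_rate=0.03, take_rate=0.30)

print(f"1% affiliate fee:    ${affiliate:,.0f}")   # $36
print(f"30% transaction cut: ${app_store:,.0f}")   # $1,080

# At the same conversion rate, the take rate alone moves platform revenue
# 30x; merchants would only absorb that if the conversion lift justified it.
```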
Founder Playbook for a Potential Contraction
For founders operating in the current environment, Tunguz's primary recommendation is to build balance sheet strength aggressively now, while capital costs remain low. The companies that navigated the 2021 correction most effectively were those with sufficient runway to acquire, pivot, or develop new products. The 2021 cycle also demonstrated that product-market fit is not a permanent state: classic software companies lost it overnight as AI alternatives emerged, and that dynamic will repeat.
Oracle, Neo-Clouds, and Utilization Questions
Oracle has declined roughly 17% over the past 30 days, a move Tunguz partly attributes to reporting from The Information detailing approximately 10% gross margins on certain infrastructure contracts. The hyperscalers (Google, Microsoft, and Amazon) show strong GPU utilization internally. The more uncertain variable is utilization rates at neo-cloud providers, where unit economics are less transparent. Tunguz believes the major labs will avoid structurally burning their infrastructure partners because these are relationships with a 20 to 30 year horizon.
Reinforcement Learning: Further Away Than It Looks
On the agentic capability side, the current tool-calling architecture underlying products like Cursor and Claude Code is more rudimentary than its market positioning implies. The loop of querying a model, feeding the output back in, and repeating is functional but brittle. Andrej Karpathy's framing of a decade-long agent development arc resonates with Tunguz, who notes that creating training environments for specific enterprise workflows mirrors the fragility seen in fine-tuned models, where 30 to 50% of prompts break when either the model or the task updates. The harder unsolved problem is generating reward functions for complex business tasks, an area still largely in academic research rather than production deployment. Whether current transformer architectures can solve this or whether a new architecture is required remains an open question.
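The loop described above can be reduced to a few lines. This is a generic sketch, not the actual Cursor or Claude Code implementation; `call_model` and `run_tool` are placeholders for whatever model API and tool runtime a given product uses.

```python
# Generic tool-calling agent loop, simplified to show why it is brittle:
# the model's output is fed straight back in, and any malformed tool call
# or drifted prompt breaks the chain. call_model/run_tool are placeholders.

def call_model(messages: list[dict]) -> dict:
    """Placeholder for an LLM API call; returns text or a tool request."""
    raise NotImplementedError

def run_tool(name: str, args: dict) -> str:
    """Placeholder for executing a tool (shell, file edit, search, ...)."""
    raise NotImplementedError

def agent_loop(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if reply.get("tool_call") is None:
            return reply["content"]            # model says it is done
        result = run_tool(reply["tool_call"]["name"],
                          reply["tool_call"]["args"])
        # Feed the tool output back in and repeat: functional, but any
        # schema change in the model or the tools silently breaks the loop.
        messages.append({"role": "assistant", "content": str(reply)})
        messages.append({"role": "tool", "content": result})
    return "step budget exhausted"
```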