Interview

Bernstein semiconductor analyst Stacy Rasgon: AI demand shows no signs of slowing and it's not like 2001

Nov 17, 2025 with Stacy Rasgon

Key Points

  • Nvidia has $500 billion in cumulative orders for Blackwell and Rubin chips with only 6 million of 20 million units shipped, signaling consensus earnings estimates are too low through 2027.
  • Hyperscalers are self-funding AI infrastructure primarily through operating cash flows rather than debt, structurally unlike the 2001 dot-com era when oil majors ignored the telecom buildout.
  • Bernstein's Rasgon recommends owning high-quality AI names like Nvidia and Broadcom while acknowledging that large customer commitments like OpenAI deals likely have only initial tranches firmly locked in.
Summary

Stacy Rasgon, senior semiconductor analyst at Bernstein Research, sees no credible near-term slowdown in AI infrastructure spending and pushes back firmly on bubble comparisons to 2001.

Nvidia's Order Book Points to Upside

At Nvidia's recent GTC conference in Washington, DC, Jensen Huang disclosed $500 billion in cumulative orders for Blackwell and Rubin chips across 2025 and 2026. Of roughly 20 million chips ordered, only 6 million have shipped to date, leaving approximately 14 million units to be delivered across five quarters. Rasgon reads that slide as a direct signal that consensus estimates for the upcoming earnings period are too low.
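As a rough sketch, the backlog figures above can be sanity-checked with back-of-envelope arithmetic. The inputs are the approximate numbers cited in the article, not exact disclosures, and the implied blended price per chip is a derived simplification:

```python
# Back-of-envelope check of the Nvidia order-book arithmetic cited above.
# All figures are approximations from the article, not exact disclosures.
total_orders_usd = 500e9   # cumulative Blackwell/Rubin orders
total_units = 20e6         # chips ordered
shipped_units = 6e6        # chips shipped to date
quarters_remaining = 5     # delivery window cited in the article

remaining_units = total_units - shipped_units
units_per_quarter = remaining_units / quarters_remaining
avg_price = total_orders_usd / total_units  # implied blended price per chip

print(f"Remaining units: {remaining_units / 1e6:.0f}M")            # 14M
print(f"Implied shipments per quarter: {units_per_quarter / 1e6:.1f}M")  # 2.8M
print(f"Implied blended price per chip: ${avg_price:,.0f}")        # $25,000
```

The implied run rate of roughly 2.8 million units per quarter is the backdrop for Rasgon's view that consensus estimates are too low.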

Nvidia's valuation, currently a mid-20s price-to-earnings multiple, is not yet at levels Rasgon considers stretched. The stock now represents roughly 8% to 9% of the S&P 500.

Demand Is the Only Variable That Matters

Rasgon frames every bear case for AI semiconductors around a single question: is the demand there, or is it not? His answer right now is unambiguous. Hyperscaler capex continues to rise, neoclouds are capacity-constrained rather than demand-constrained, and memory prices are climbing because supply cannot keep pace. A spending air pocket is inevitable at some point, but current visibility from company guidance and order pipelines does not indicate one in 2025, 2026, or even 2027, given that several large OpenAI compute deals with Broadcom, Nvidia, and AMD do not begin shipping until late 2026.

2001 Comparison Doesn't Hold

The structural difference between now and the dot-com era is who is doing the spending. In 2000, the largest companies by market cap were ExxonMobil and Chevron, neither of which was funding the telecom buildout. Today, capex is being deployed primarily by the most profitable businesses ever built, funded largely off income statements rather than debt. A Wall Street Journal estimate cited in the conversation put total AI infrastructure spending at roughly $2.4 trillion, of which $1.4 trillion is funded by operating cash flows and approximately $800 billion by private credit. Rasgon considers that ratio manageable for now.

The exception he flags is Oracle, which lacks the balance sheet of a hyperscaler and is increasingly reliant on debt markets to fund a commitment that reportedly stepped up by $300 to $400 billion in a single quarter. CoreWeave faces the same structural dependency. By contrast, Google, Meta, and Amazon are still primarily self-funding.

China Is a Strategic Problem, Not a Near-Term Numbers Problem

China is already out of Nvidia's financial model, so the revenue impact is currently contained. Strategically, Rasgon sees the export restrictions as a long-term risk. Huang has publicly acknowledged roughly $50 billion in lost opportunity. The deeper concern is that blocking Nvidia from China accelerates consolidation around Huawei as a local alternative, and if Chinese developers are forced to build their ecosystem around Huawei's CUDA-equivalent stack, Nvidia could face a more robust global competitor over time. For now, Huawei's chips are manufactured on inferior process nodes at domestic fabs like SMIC, limiting their power efficiency and making them uncompetitive outside China.

High-Quality Names, No Need to Overcomplicate

Rasgon's positioning advice for the cycle has been consistent: own high-quality AI names, primarily Nvidia and Broadcom, and ignore most of the rest. He acknowledges being too lukewarm on AMD, which has appreciated regardless. His read on the broader market structure is that the current phase still resembles the exponential portion of an S-curve, which lifts even lower-quality names. That dynamic will not last indefinitely, but it has not broken yet.

On backlog quality and contract enforceability, Rasgon is cautious. Semiconductor history, including COVID-era attempts to force order fulfillment, shows that nominally non-cancellable orders are often renegotiated, and companies that insisted on enforcement ended up with customers who stopped ordering for a year. Even the large OpenAI compute commitments, he notes, likely have only the first tranche firmly committed at this stage.