Nvidia posts record $44B quarter despite China chip ban — surpasses Meta in quarterly revenue
Jun 3, 2025
Key Points
- Nvidia posted $44B in record quarterly revenue, up 69% year-over-year, while projecting $8B in lost China revenue from US export controls and H20 chip shipment bans.
- The company's inference-to-training chip split remained flat at 40-60 despite management's bullish messaging on reasoning AI demand, raising questions about whether the shift is nascent or overstated.
- Jensen Huang announced NVLink Fusion, licensing Nvidia's chip-to-chip connectivity to third-party processors, positioning the company as the connective tissue in hyperscaler custom silicon strategies.
Summary
Nvidia posted record quarterly revenue of $44 billion for fiscal Q1, a 69% year-over-year increase, despite being unable to ship $2.5 billion in H20 processors to China due to US export controls. The company guided Q2 revenue to approximately $45 billion, a figure that already absorbs a projected $8 billion revenue loss from the China ban. Nvidia's stock surged 5% after hours, and its quarterly revenue now surpasses Meta's.
The earnings call centered on inference workloads and reasoning models. CFO Colette Kress said the company is witnessing "a sharp jump in inference demand" driven by reasoning AI, which requires "hundreds to thousands of times more tokens per task" than previous single-shot inference. Microsoft processed over 100 trillion tokens in Q1, a five-fold increase year-over-year. CEO Jensen Huang framed reasoning AI as a step-function demand driver that has "busted through concerns about hallucination" and proven AI's problem-solving capability.
One metric complicates this narrative. Nvidia reported that 40% of its chips are used for inference and 60% for training, the same split as the prior quarter. Ben Thompson flagged this as notably flat despite bullish rhetoric from management about inference momentum. Huang and Kress cited public data points such as Microsoft's token counts and Google and OpenAI scaling as evidence of the shift, but neither provided updated guidance on the inference-to-training ratio. Thompson sketches two explanations: either the shift is genuinely nascent and future orders will reflect it, or management is messaging aggressively about inference demand without yet seeing proportional order growth.
Nvidia's dominance in China is eroding under export controls rather than competition. Huang said Nvidia held 95% market share in China four years ago and now holds 50%. Chinese rivals such as Huawei's Ascend chips are more expensive per unit of compute, and Chinese demand for GPUs remains uncapped; if supply were not constrained by US policy, Nvidia would retain its position. The real moat is software: Nvidia's CUDA ecosystem creates network effects that insulate it from pure performance competition, and while AMD matches Nvidia on raw compute, it lacks the developer ecosystem. Huang made a counter-intuitive case to US policymakers: allowing Nvidia to sell to China serves American interests because it locks China into dependence on US software platforms, whereas export restrictions only accelerate Chinese chip development. The dynamic echoes historical cases where dominant firms lost their markets after ceding manufacturing leverage, but Nvidia's situation inverts that logic: its losses stem from US government policy rather than Chinese competition.
Huang announced NVLink Fusion at Computex, allowing customers to combine Nvidia's high-speed networking with third-party CPUs and AI accelerators. The move acknowledges that major cloud providers like Microsoft, Amazon, and Google are building proprietary chips and will not stop. Rather than resist, Nvidia is licensing its chip-to-chip connectivity technology to remain central to custom configurations. Grace Hopper paired an Nvidia ARM CPU with an Nvidia GPU; Grace Blackwell pairs an Nvidia ARM CPU with two Nvidia GPUs. Under NVLink Fusion, customers can mix and match, pairing Nvidia GPUs with non-Nvidia CPUs or vice versa. Nvidia's confidence rests on the assumption that even with this optionality, customers will choose Nvidia chips because they perform best. It is a strategic hedge: as hyperscalers reduce reliance on off-the-shelf systems, Nvidia shifts from selling complete rigs to licensing the connective tissue that holds heterogeneous clusters together, maintaining its position as the default choice while appearing cooperative.