Nvidia invests additional $2B into CoreWeave to accelerate capacity buildout
Jan 26, 2026
Key Points
- Nvidia invests an additional $2 billion into CoreWeave to accelerate GPU cloud capacity buildout, with CoreWeave deploying Nvidia's Vera CPU as its first customer.
- Jensen Huang frames the circular investment structure as allowing Nvidia to participate in every layer of the AI stack, though critics argue it creates conflicted incentives.
- Inference demand shows no signs of cooling as thinking models, research applications, and agentic programming drive token consumption higher even as models shrink.
Summary
Nvidia is investing an additional $2 billion into CoreWeave to accelerate GPU cloud capacity buildout. Nvidia will also make its Vera CPU available as a standalone offering, with CoreWeave deploying it first.
Jensen Huang called the investment a vote of confidence in CoreWeave's growth, management, and business model, describing it as a "small percentage" of the capital CoreWeave ultimately needs to raise. Huang defended the structure, arguing it allows Nvidia to "participate in every layer of the AI stack."
The investment reflects a broader dynamic: Nvidia, the world's most valuable company, has pledged tens of billions toward AI companies that consume its chips while simultaneously bankrolling infrastructure deployment that sustains demand for its products. Bloomberg documented this pattern in a chart showing Nvidia trading services, investments, and hardware back and forth with portfolio companies.
CoreWeave's valuation sits around $49 billion, so the $2 billion investment represents roughly 4% of the company. CoreWeave has announced high-profile customer wins recently, including Meta. Nvidia is already a major investor and has demand guarantees in place with CoreWeave, which counts GPU purchases among its primary expenses.
The circular-deal structure has drawn criticism despite Huang's pushback. Still, CoreWeave has demonstrated elite execution as a neo-cloud provider, and the market has responded positively: Nvidia's stock is up 30% in the last month, recovering from November lows, though still off its summer highs.
Inference demand shows no sign of slowing. H100 rental prices continue climbing, and new applications keep emerging. Thinking models generate far more internal reasoning tokens than vanilla inference; deep-research reports drove another step function in token generation; agentic programming creates 20-minute bursts of token output. Even as models get smaller and open-source alternatives proliferate, usage grows in proportion, maintaining pressure on GPU capacity and utilization.