Commentary

Jevons paradox vs. Nvidia bear case: tech Twitter reacts to DeepSeek's market shock

Jan 27, 2025

Key Points

  • DeepSeek's $6 million training run triggered a $2 trillion equity loss across AI names, splitting tech Twitter on whether cheaper infrastructure kills or expands hardware demand.
  • Nvidia's real vulnerability lies in export constraints and potential custom silicon displacement, not retail panic; the company disclosed 22 percent of last quarter's billings flowed through Singapore.
  • Mid-market model companies like Anthropic face structural pressure: they lack ChatGPT's 500 million users and must amortize billion-dollar training costs as the price of commodity compute collapses.

Summary

DeepSeek's efficiency gains have triggered a market reckoning that is splitting tech Twitter along a single axis: whether cheaper AI infrastructure kills hardware demand or creates it.

The shock is real. A $6 million training run produced a model that spooked the market into a $2 trillion equity loss across AI names. On Sunday, when CEO messaging would normally wait for Monday-morning spin control, both Satya Nadella and major OpenAI stakeholders rushed to the same argument: Jevons paradox. As efficiency rises, demand explodes. Cheaper compute doesn't shrink the market—it expands it beyond recognition.
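The Jevons argument can be made concrete with a toy constant-elasticity demand model. All numbers below are illustrative assumptions, not figures from this episode: if demand for compute scales as price^(-elasticity), total spend scales as price^(1 - elasticity), so any elasticity above 1 means a price collapse grows the market.

```python
# Toy Jevons-paradox sketch (hypothetical numbers, not market data).
# Under constant-elasticity demand q = base * p**(-e), total spend is
# p * q ∝ p**(1 - e), so when e > 1 a price drop INCREASES total spend.

def total_spend(price: float, base_demand: float, elasticity: float) -> float:
    """Spend = price * quantity, with quantity = base_demand * price**(-elasticity)."""
    quantity = base_demand * price ** (-elasticity)
    return price * quantity

# Assume compute gets 20x cheaper and elasticity of demand is 1.5.
before = total_spend(price=1.00, base_demand=100.0, elasticity=1.5)
after = total_spend(price=0.05, base_demand=100.0, elasticity=1.5)
print(after / before)  # ≈ 4.47: spend more than quadruples despite 20x cheaper compute
```

With an elasticity of exactly 1 spend would stay flat, and below 1 it would shrink, which is precisely the axis the bulls and bears disagree on.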

The bear case for Nvidia is tighter than the retail panic suggests. Nvidia's export constraints to China are real: over one-third of sales ($40 billion last year) flow through Singapore, despite Nvidia's official statements that Singapore shipments are "insignificant." The company disclosed that 22 percent of last quarter's billings landed there. DeepSeek proved that optimized teams can train frontier models on restricted hardware. If inference becomes the bottleneck and custom silicon (like Google's TPU designs for test-time compute scaling) edges out general-purpose GPUs, Nvidia's moat narrows. The architecture is still too young to call, but the risk is material.

The bull case relies on scale dynamics. Four to five years into the machine-intelligence buildout, foundation model costs are collapsing while hyperscalers, and now China at the state level, commit hundreds of billions annually to compute. If AI inference becomes as cheap as recoloring a phone screen, product builders will ship inference-heavy features that were prohibitively expensive weeks ago. Custom feeds, email filtering, real-time document summarization: all suddenly viable. That abundance drives buildout, not decline.

What matters most is the layer. App-layer founders running unprofitable generative AI companies had already priced in 95 percent cost reductions; DeepSeek was expected, not shocking. The actual pressure lands on the middle: Mistral, Anthropic, Cohere. These companies sell API access without runaway consumer adoption, and they need to amortize billion-dollar training costs over time. If their models get lapped while compute prices collapse, the unit economics don't recover. Anthropic in particular faces hard questions: it lacks ChatGPT's 500 million users and OpenAI's distribution moat.
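The amortization squeeze can be sketched with back-of-the-envelope arithmetic. Every figure here is a hypothetical assumption chosen only to show the mechanism: a fixed training cost recovered from API gross profit, before and after a commodity-price collapse forces matching price cuts.

```python
# Toy amortization sketch for a mid-market model lab (all figures hypothetical).
# Question: how many months of API gross profit pay back a one-off training run?

def months_to_recover(training_cost: float, monthly_revenue: float, gross_margin: float) -> float:
    """Months needed for gross profit (revenue * margin) to cover training cost."""
    return training_cost / (monthly_revenue * gross_margin)

# Before: assume a $1B training run, $50M/month API revenue, 60% gross margin.
before = months_to_recover(1e9, 50e6, 0.60)
# After: assume prices are cut 80% to match commodity models and volume only doubles,
# so revenue falls to 50e6 * 0.20 * 2 = $20M/month at the same margin.
after = months_to_recover(1e9, 50e6 * 0.20 * 2, 0.60)
print(round(before, 1), round(after, 1))  # ≈ 33.3 vs ≈ 83.3 months
```

Under these assumptions the payback window stretches past the point where the model is likely to be obsolete, which is the core of the "lapped while compute collapses" problem.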

On the consumer side, the story is bleaker for challengers. ChatGPT is free at chat.com. DeepSeek is also free. The average user sees no functional difference. Sam Altman's response, seeding expensive o1 reasoning queries into the free tier, is a direct answer: don't compete on baseline capability; win on reasoning and premium features. That's defensible, but it assumes OpenAI can hold distribution while the base model layer commoditizes.

Two structural insights cut through the noise. First, the B2B market is now ruthlessly competitive on price and performance alone; developers don't care about brand or polish. Second, consumer AI will likely concentrate around whoever owns the default: the home-screen slot, the search default, the first page you see. That's where sustainable value accrues. Everything else trends toward margin compression.

Pavel Asparouhov's observation holds: if your worldview on AI shifts every three weeks, you may not know what's happening. The entrepreneurs who understood cost curves and scaling dynamics eight weeks ago aren't panicking today. The panic is concentrated in infrastructure plays betting on continued scarcity, and in mid-market model companies that failed to build distribution or earn consumer trust before the floor dropped.