News

GPT-5 is facing a 'reverse DeepSeek moment' — botched launch creating false impression of stalled AI progress

Aug 19, 2025

Key Points

  • OpenAI's botched GPT-5 launch—marked by rate caps, missing features, and a broken router—created a false narrative that AI progress has stalled, the inverse of January's DeepSeek shock to markets.
  • GPT-5 actually comprises three state-of-the-art models across performance classes, and OpenAI's usage spiked post-release, but execution failures obscured sustained momentum from GPT-4 through GPT-5.
  • OpenAI's silence on retiring GPT-4 allowed critics to dominate the narrative, risking poisoned policy conversations in Washington about whether AI progress warrants regulatory attention.

Summary

OpenAI's GPT-5 launch is creating a false impression of stalled AI progress, the inverse of the DeepSeek moment that spooked markets in January. Rate caps, missing features, and a broken router downgraded users to GPT-5 Fast and cut off access to GPT-4 variants. That negative signal is now feeding a narrative in Washington and among market observers that AI capabilities have plateaued and AGI is no longer a concern.

The reality is materially different. GPT-5 comprises three distinct models: GPT-5 Fast, GPT-5 Thinking, and GPT-5 Pro. At least two of them, and likely all three, are state-of-the-art within their respective performance classes. OpenAI's model usage spiked after the release rather than declining. The naming was cleaner than in previous iterations, and the product strategy of offering a router in GPT-5 Auto was sound.

The launch's execution obscured what it actually delivered. Because Anthropic and others had already released reasoning models, observers had largely priced in those capabilities, so the gains look incremental when GPT-4 is compared directly to GPT-5. That framing is a mistake. The full progression from GPT-3 to GPT-4 to GPT-4.5 to GPT-5 represents sustained momentum, not stalling. GPT-5 is a refinement optimized for efficiency that still breaks new ground, but the compressed timeline and poor initial user experience buried that story.

Dean Ball argues that the jump between April 2024 (GPT-4 Turbo) and April 2025 (o3) is larger than the jump between GPT-3 and GPT-4, though that comparison carries some ambiguity: the interval between GPT-3 and GPT-4 spanned roughly three years, not one. Regardless, the current trajectory is steep.

The real problem is perception management. OpenAI's instinct to retire GPT-4 in favor of a simplified model switcher was correct, but the company failed to own that decision publicly. Had leadership explicitly said the company was optimizing for consumers, the narrative might have held. Instead, silence allowed the "OpenAI is flailing" crowd to fill the void, a dynamic that risks poisoning policy conversations in Washington about whether AI progress warrants regulatory attention.