News

OpenAI releases first open-source model in years — GPT-O OSS matches o3 performance and runs on consumer hardware

Aug 5, 2025

Key Points

  • OpenAI releases GPT-O OSS, its first open-source reasoning model in years, matching o3 performance on consumer hardware at a $4M training cost that represents roughly ten days of company revenue.
  • The 12B parameter model is distilled from frontier training runs, meaning OpenAI's knowledge of how to build reasoning models may be as valuable as the released weights themselves.
  • The move arrives as Chinese competitors like Qwen and DeepSeek gain open-source momentum and startups like Reflection raise $1B to build competing reasoning models, leaving OpenAI's long-term competitive advantage unclear.

Summary

OpenAI released GPT-O OSS, its first open-source model in years. The 12B parameter version performs roughly on par with o3 on challenging benchmarks, including health-related problems. Sam Altman framed the release as a bet on individual empowerment and research acceleration.

The small model trained for approximately $500K, while the large model cost around $4M. That represents roughly ten days of OpenAI's current revenue run rate. Both models are distilled from OpenAI's frontier training runs (GPT-4, GPT-4.5, and beyond), meaning the $4M figure reflects the cost to compress and optimize that knowledge into a runnable form, not the original R&D cost of frontier reasoning capability. OpenAI's knowledge of how to build these models may be as valuable as the weights themselves.
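Distillation in this sense means training the small model to mimic the output distribution of a larger teacher, so the compressed model inherits capability without repeating the original training run. OpenAI has not published its distillation recipe; as a rough illustration only, the standard loss looks like this (pure-Python sketch, names and temperature value are illustrative assumptions):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature.
    Higher temperatures flatten the distribution, exposing the teacher's
    relative preferences among wrong answers ("dark knowledge")."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions.
    Minimizing this over a training corpus pushes the student to reproduce
    the teacher's full output distribution, not just its top answer."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher exactly incurs zero loss;
# a disagreeing student incurs a positive penalty.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))          # 0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # positive
```

The point of the sketch: the expensive part is producing a teacher worth imitating, which is why the $4M compression cost understates the R&D behind the released weights.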

Less than one year elapsed between o1's announcement in September 2024 and the release of an o3-level open-source model. Months earlier, Altman polled users on what they wanted: a frontier reasoning model or something runnable on a phone. Responses were split. He has now delivered both.

The release arrives as Chinese models like Qwen and DeepSeek have gained momentum in the open-source space, and as startups like Reflection are raising $1B to build competing open-source reasoning models. Meta remains a significant player in open-source AI, and whether it responds is now material.

OpenAI claims the models perform comparably to its frontier models on internal safety benchmarks and has worked to mitigate biosecurity risks. The company is offering a $500K red-teaming prize fund to encourage researchers and developers to identify novel safety issues. GPT-O OSS is not multimodal and lacks image understanding, which constrains immediate use cases but leaves room for fine-tuning experiments.

The long-term competitive significance remains unclear. Whether this represents a foundational platform layer, a dominant but contestable position, or a niche win in a specific workflow depends on how the models are adopted and whether competitors respond effectively. The models are efficient enough to be genuinely useful, but their impact on AI accessibility and the broader competitive landscape is still unfolding.