Interview

Sam Altman on Codex 5.3, the Frontier platform, and why every company is now an API company

Feb 5, 2026 with Sam Altman

Key Points

  • OpenAI launched Codex 5.3 with mid-turn interactivity, letting users steer models during long tasks rather than waiting for unguided output to complete.
  • Altman argues the bottleneck has shifted from raw AI capability to orchestrating agent teams and integrating them into existing systems, making forward-deployed engineers essential.
  • Every company is becoming an API company as agents consume services directly, forcing business models and revenue-sharing to adapt as software layers thin.

Summary

OpenAI launched Codex 5.3, which Altman describes as the best coding model in the world. The update combines improvements from feedback on Codex 5.2 into a single model: smarter programming, significantly faster performance, and mid-turn interactivity. Users can now steer the model during long, multi-hour tasks rather than waiting for unguided work to complete. Altman says early expert users noticed the difference within hours of deployment.

Mid-turn capability matters because agents can now handle corrections in real time. Altman compares it to training a new coworker: you give feedback early and often, and the model learns. Without that, you're either hoping for a one-shot success or collecting errors to fix later.

Agent management

Altman frames the near-term future as one where users manage teams of agents rather than individual models. As agents improve, teams will operate at higher levels of abstraction. The bottleneck isn't raw intelligence but the ability to orchestrate and monitor complex workflows. Tools that make agent management easy will matter more than raw capability gains for a while.

Forward-deployed engineers will remain essential in the near term. They help non-AI-native companies figure out integration: hooking agents into existing systems, deciding whether to fine-tune on proprietary code, orchestrating agents from different vendors, and managing security and data isolation. The anxiety around data exposure and context exploits is real and immediate.

API companies

Altman anchors a broader shift in how software gets built and distributed: "Every company is an API company now, whether they want to be or not." Agents will consume services directly, ordering Ubers through ChatGPT and paying other companies on behalf of users. Business models and revenue-sharing will need to adapt, but precedent exists: Uber itself only made sense as a platform once the iPhone made location-aware mobile ordering viable.

On SaaS durability, Altman acknowledges that some companies will disappear as layers thin, but he's heard little panic from the SaaS leaders he's spoken to recently. Their take: we can generate software now too, and we have the systems of record and the user trust. Some won't survive; others will transform.

Codex desktop and the experience gap

Codex desktop has surprised Altman with its adoption. The insight is straightforward: an extra 10% of polish on the user experience goes extremely far when there's a massive capability overhang. The product removes friction for non-professional developers: no IDE setup, no environment configuration. Altman used it to build an auto-completing to-do list that would attempt tasks and ask questions when stuck.

The long-term vision is a unified AI across surfaces: desktop when you're at a desk, mobile for quick task additions, all backed by a single coherent memory and context. That still requires reasonably technical users today, but Altman expects versions that handle general knowledge work and computer control without technical barriers. The goal is people building sophisticated things without looking at code.

Altman sees the open-source project OpenClaw as exemplifying how innovation happens in this space. One-person projects can ship user experiences that large companies hesitate to launch because of liability, privacy, and legal concerns. He calls it the Homebrew Computer Club spirit and sees it as essential to the ecosystem.

Metrics and benchmarks

Altman dismisses the idea that standard benchmarks will remain meaningful; most current charts will be obsolete in a few years. People initially expected very long context windows to be the answer, but the actual pattern is agents breaking work into pieces and orchestrating sub-agents, similar to how people work. That pattern already works despite current limitations.
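The decomposition pattern described above can be sketched in a few lines. This is a purely illustrative skeleton, not any real API: `run_subagent` stands in for a model call, and the piece names are hypothetical.

```python
def run_subagent(piece: str) -> str:
    """Stand-in for a sub-agent call; a real system would
    invoke a model with just this piece as its context."""
    return f"done: {piece}"

def orchestrate(task: str, pieces: list[str]) -> dict[str, str]:
    """Rather than feeding one model a huge context, split the task
    into pieces, dispatch each to a sub-agent, and collect results
    for a final merge step."""
    return {piece: run_subagent(piece) for piece in pieces}

results = orchestrate(
    "refactor the billing module",
    ["map the call sites", "write tests", "apply the refactor"],
)
print(results["write tests"])
```

The point of the shape is that each sub-agent sees only its own piece, so no single context window needs to hold the whole job.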

The next chart will just be GDP impact. What comes after that, he doesn't know. One reason he finds that question interesting: GDP as currently measured could decline while quality of life rises sharply. That's deflationary and novel, and society hasn't practiced it.

Compute and research labs

On the semiconductor bottleneck, Altman says chips are the immediate constraint, though energy sometimes takes its place. He argues the U.S. should aggressively fund wafer capacity expansion and rebuild the supply chain talent base rather than hoping market forces alone solve it.

On research labs sprouting across Silicon Valley, Altman is enthusiastic. OpenAI hoped to prove industry could do research again after a long period where that capability had atrophied. He expects some labs to fail, some to succeed, and some to merge into other efforts. The best acquisitions will likely blend research and product work.

On data as a resource, Altman agrees the metaphor applies now more than when it was trendy a decade ago. But compute power is closer to the true constraint. The last eight years have shown smooth returns from scaling: put in more compute, data, and ideas, and log-linear improvements follow. No one's been right about it topping out, even when it seemed to pause briefly.

Video generation and discovery

On Sora, Altman acknowledges that generating video is compelling but watching other people's generated video is not, at least not yet. This mirrors broader AI: people love using ChatGPT; reading other people's ChatGPT outputs is less interesting. The most viable use cases so far are memes in group chats and personalized content: making yourself look cooler, generating caricatures that say something about you, or putting yourself into fictional scenarios like Studio Ghibli scenes. These work because they're personalized and about the user.

Disney partnerships will unlock character generation at scale. Altman personally prefers opening the floodgates to restrict-and-release strategies, though Disney has its own preferences.

Advertising and bots

OpenAI's Super Bowl ad was intentionally not a direct-response play. It celebrated the revolution and spoke to researchers and builders, not mass-market users. Some loved it, some hated it, many missed it, and Altman felt okay about that split.

On Anthropic's ads that criticize OpenAI's supposed ad-injection plans, Altman calls them "well played" but notes the irony of using deceptive advertising to critique deceptive ads. OpenAI's core principle is simple: no ads in the LLM stream. That would feel dystopian. He respects his users enough not to do it.

On the broader social problem, Altman is interested in assertion of humanity (verifying that content is from a person) rather than detection of AI. But social platforms may not have strong incentives to solve the bot problem in the short term, since engagement is up.

External noise and internal focus

Altman describes an odd gap between external noise and internal focus. Twitter erupts over something while OpenAI is deploying new models, companies are transforming, and the team is solving compute bottlenecks. Someone corrects the record, but by then the narrative has moved on. He finds the internal reality far less chaotic than media coverage suggests, mostly just busy building.

One acknowledged pressure: any word or sentence can be spun into a headline and require a correction, and the original story gets wider circulation than the fix. But Altman views that as structural to OpenAI's position: the company sits at the center of every stakeholder's anxiety about AI, every competitor's attack surface, and every person's questions about their own future.