Interview

Reflection AI raises $2B to build the 'American DeepSeek' — open-weight frontier models trained in the US

Oct 10, 2025 with Misha Laskin

Key Points

  • Reflection AI closes $2 billion from DST Global, Sequoia, and Lightspeed to build US-trained open-weight models positioned as a compliance-safe alternative to DeepSeek for locked-out enterprises.
  • The startup bets that co-designing algorithms with frontier chips unavailable to Chinese competitors will compound capability over time in ways larger labs cannot prioritize.
  • Reflection plans to monetize via services and customization tools layered on open models, targeting enterprises seeking cost reduction and performance on proprietary data that closed APIs cannot handle.

Summary

Reflection AI has closed a $2 billion raise to build what its CEO explicitly frames as the 'American DeepSeek': open-weight frontier models trained in the US and distributed globally. The round was led by DST and 17 BD Capital, with participation from existing backers including Lightspeed, Sequoia, and CRB.

Misha Laskin, cofounder and CEO, started the company roughly 18 months ago alongside Janus, a founding engineer at DeepMind who contributed to AlphaGo and Gemini. The team now numbers approximately 60 researchers and engineers drawn from frontier labs, with a meaningful presence in the UK.

The Strategic Thesis

Reflection's core argument is structural, not just technical. Many enterprises are effectively locked out of DeepSeek's models due to legal exposure, data provenance concerns, and compliance risk tied to Chinese-origin AI. A fully permissive, US-built open-weight model at comparable capability levels removes that blocker immediately, even before any performance leap is achieved.

Beyond compliance, Reflection is betting on chip access as a durable advantage. Chinese labs, constrained by export controls, had to co-design algorithms around the hardware available to them. Reflection intends to do the same co-design exercise with frontier chips that remain inaccessible to those competitors. The team is also investing in reinforcement learning as a differentiation vector, though specifics are not being disclosed.

Business Model

The commercial architecture resembles a Red Hat-style services and enablement layer built on top of open models, rather than a pure API token business. Misha identifies two primary enterprise use cases driving demand: cost reduction where closed-model performance is strong but prohibitively expensive, and performance improvement on proprietary or niche data distributions that closed models were never trained on. Both require deep customization, which open weights enable but do not simplify.

The pitch to large enterprises, sovereigns, and scaling startups currently spending heavily on closed APIs is that open models offer control over both cost and capability. Reflection's commercial play is to own the underlying intelligence layer and then build evaluation, customization, and agent tooling on top; in Misha's framing, releasing a raw model without that support stack is insufficient to drive real adoption.

Competitive Positioning

Reflection is making an explicit focus bet that larger labs like OpenAI cannot credibly replicate. Open-weight models cannot be the top strategic priority for a company whose primary commercial incentives point elsewhere. For capability to compound in the open-weight stack, the commercial incentive and the research incentive have to be fully aligned, and Reflection's argument is that only a dedicated open-intelligence company can sustain that alignment over time.