Safe Superintelligence raises over $1B at a $30B valuation, with Greenoaks leading
Feb 18, 2025
Key Points
- Safe Superintelligence raises $1B+ at a $30B valuation, a sixfold jump from its prior $5B valuation, with Greenoaks leading with a $500M check.
- Ilya Sutskever and CEO Daniel Gross are betting on a no-ship strategy: releasing no product until safe superintelligence is achieved, a bet only founders with their credibility can afford to make.
- SSI's stealth approach is the inverse of OpenAI's and xAI's constant model releases, gambling that internal evaluation and focused execution will outpace public benchmarking and mindshare tactics.
Summary
Safe Superintelligence Raises $1B+ at $30B Valuation
Safe Superintelligence closed a funding round exceeding $1 billion at a valuation above $30 billion, with Greenoaks leading the round with a $500 million check. The round marks a sixfold step up from the company's prior $5 billion valuation.
Ilya Sutskever, who left OpenAI to cofound SSI alongside CEO Daniel Gross, is steering the company with a stated commitment not to release any product until it achieves safe superintelligence. This no-ship strategy is only viable for founders with Sutskever's credibility and Gross's track record: they are essentially asking investors to trust billions in capital to their technical judgment and execution alone.
The valuation sits in the rarefied air of the world's most valuable private technology companies. Given that frontier AI development consumes enormous compute resources and SSI has pledged to stay private until its safety goals are met, the capital will almost certainly flow toward GPU procurement from Nvidia. The economics of the space mean that even if AI becomes a commodity (which several speakers in the episode suggest is already happening), there is still a trillion-dollar market at stake across the disruption of knowledge work. At that scale, funding multiple competing teams to the tune of billions of dollars makes mathematical sense, much as oil exploration companies justify massive capital deployment despite oil being a commodity.
Daniel Gross's path to SSI's CEO role is worth noting. He previously founded AI companies that were acquihired, most notably by Apple. Rather than settle into a traditional post-acquisition role, he ended up as CEO of one of the most heavily funded AI startups. The episode's hosts call this a masterclass in founder negotiation: after an acquihire, the playbook should be to ask the acquiring company about succession planning and campaign aggressively for the top role rather than accept a middle-management position or coast on acquisition proceeds.
SSI's positioning also reflects a broader tension in the foundation model wars. Competitors like OpenAI and xAI release models constantly to maintain mindshare and investor confidence. SSI is betting the opposite: that building in stealth, focused only on internal evals rather than public benchmarks, is a competitive advantage. Whether that bet pays off depends entirely on whether Sutskever and Gross can deliver AGI-level capabilities before capital or talent constraints force them into the market.