Social Network 2 is officially in development — hosts perform a satirical table read
Jun 26, 2025
Key Points
- Meta has hired three OpenAI researchers—Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai—in a direct poach confirmed by an OpenAI spokesperson.
- Hosts perform a satirical table read imagining Meta's open-sourcing strategy for Llama models as a bet to beat OpenAI's closed API moat through community-driven development.
- The sketch captures real tensions in Meta's AI positioning: distributing model weights widely while controlling safety, and funding $60 billion in capex to enable an openness doctrine.
Summary
The hosts perform a satirical table read of a purported Social Network 2 script imagining a scene between Mark Zuckerberg and Nat Friedman at Meta HQ. The dialogue draws heavily on technical AI terminology—Llama parameters, token context windows, rotary positional embeddings, RLHF (reinforcement learning from human feedback), DPO (direct preference optimization)—and centers on Zuckerberg's strategy of open-sourcing Meta's language models as a competitive bet against OpenAI's closed API approach.
The script positions open weights as a democratization play. By publishing Llama weights, Meta lets the broader developer community build on the foundation faster than Meta's internal teams could alone. Friedman's character raises a counterargument: open-sourcing lets competitors like Mistral inherit Meta's work without the original talent, and he questions whether Purple Llama's safety measures can compete against OpenAI's revenue-generating API moat.
Zuckerberg's character frames the bet as an ecosystem play rather than a monopoly grab. Llama functions as the public manifesto while Scout and Maverick represent the horizon. The strategy depends on alignment happening in real time through community feedback and RLHF, betting that a billion hands shaping the technology openly will outpace a closed room.
The hosts note they found the script at a gym and hedge that it may be a prank, but the table read captures genuine technical and strategic tensions in Meta's AI positioning. These include the friction between distributing model weights widely and controlling safety, the choice between betting on ecosystem momentum versus direct revenue, and the gap between Zuckerberg's stated commitment to openness and the $60 billion AI capex that underwrites it.
Meanwhile, Meta has poached three OpenAI researchers—Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai (rendered phonetically in the transcript as "Lucas Bayer, Alexander Colesnikov, and Juwi Hua")—according to sources familiar with the matter and confirmation from an OpenAI spokesperson.