Redpoint's Jacob Effron on AI's next application wave, robotics data breakthroughs, and valuation reality
Dec 17, 2025 with Jacob Effron
Key Points
- Physical Intelligence's pi-0.5 model shows that egocentric human video data drives measurable improvements once a robotics model reaches sufficient baseline capability, mirroring how scale unlocked reinforcement learning post-AlphaGo.
- Consumer AI consolidation will favor IP-licensed platforms with large audiences over features embedded in general-purpose models like ChatGPT and Claude, as studios and artists concentrate distribution.
- Redpoint's managing director sees valuation multiples as stretched but justified by market scale, noting that inference cost deflation is fast enough to normalize application-layer gross margins despite near-term compression.
Summary
Jacob Effron, managing director at Redpoint Ventures ($6B fund), co-runs the firm's early growth strategy focused on Series B and C rounds, with AI as a primary investing theme. His first deal at Redpoint was Ramp's Series B, a benchmark he says set the bar for team velocity.
Application Layer: What Works Now, What's Next
Coding has been the dominant AI application breakout of 2025, but Effron points to meaningful traction in customer support, healthcare, and legal as well. The critical question for 2026 is whether current model capabilities are sufficient to unlock the next wave of verticals, or whether that wave requires another round of model improvement. He frames it as a binary worth watching closely.
On the infrastructure side, Effron is contrarian-bullish heading into 2026. Where others read the relative stabilization in model development as a slowdown, he reads it as an opportunity: a more stable model surface area means infrastructure builders finally have something durable to build against.
Consumer AI and the IP Platform Play
Effron sees the most interesting consumer AI dynamic emerging around IP licensing and fan interaction, not raw model quality. He argues that major IP holders, artists and studios alike, will consolidate around whichever platform aggregates the largest audience, rather than distributing across Gemini, ChatGPT, and Claude separately. He cites the OpenAI-Disney arrangement and Suno's deal with Universal as early signals of this pattern. Standalone apps with strong audience density, not embedded features in general-purpose models, are where he expects IP owners to land.
Robotics: A Data Breakthrough Worth Watching
Effron flagged a significant result from Physical Intelligence (a Redpoint portfolio company), released the day of the recording. The finding: once their robotics model pi-0.5 reached a sufficient baseline capability trained on teleoperated data, injecting large volumes of egocentric human video data produced measurable model improvements, an emergent capability that did not appear at lower model quality thresholds.
The analogy he draws is instructive. Reinforcement learning failed to generalize after AlphaGo because base models weren't strong enough to leverage it; pre-training at scale changed that. The same dynamic may now be playing out in robotics, where scale unlocks the ability to use abundant human video data that was previously unusable. Near-term commercial deployment, in his view, starts in controlled enterprise environments before any home or consumer setting.
Valuations: Expensive but Not Irrational
Effron acknowledges the valuation environment is stretched but argues it has to be weighed against the scale of the markets being addressed. A company can move from seed to Series B in five days in the current environment, and Redpoint's eight-person team structure is designed to handle that fluidity across stages. He pushes back on the idea that overall venture funding data tells a clean bearish story, attributing headline declines to the wind-down of anomalously large 2021 funds that were deployed in six to twelve months, plus harder conditions for emerging and first-time managers. SPV activity and direct hyperscaler investment, including Nvidia and the large cloud providers, are filling volume that traditional fund data doesn't fully capture.
Gross Margins and Inference Costs
On the question of LLM inference costs compressing software gross margins, Effron is broadly unconcerned. He notes that any given set of AI capabilities a portfolio company was running six months ago now costs roughly one-tenth as much. The concern is real at the margin: he cites a Wall Street Journal data point showing Notion's gross margins compressing from approximately 95% to 85%. But price deflation in foundation model inference, driven by intense competition among labs, is fast enough that he expects application-layer gross margins to normalize over time. He expects the same deflationary trend to extend into video and other modalities.