Interview
Irregular emerges from stealth as an AI security company building a high-fidelity simulator to find novel attacks before they happen
Sep 18, 2025 with Dan Lahav
Key Points
- Irregular emerges from stealth with a simulator that stress-tests AI models against novel attack scenarios, claiming first detection of AI-to-AI jailbreaking and AI bypassing Windows Defender.
- The startup positions itself as the security platform for frontier AI labs—already working with OpenAI, Anthropic, and Google DeepMind—handling agent-layer and enterprise defenses those labs won't build themselves.
- Lahav frames the near-term threat as human actors weaponizing AI against already-risky model capabilities, not autonomous rogue AI; the startup is backed by Sequoia.
Summary
Dan Lahav's Irregular emerged from stealth positioning itself as the security counterpart to the frontier AI labs — OpenAI, Anthropic, Google DeepMind — rather than a conventional cybersecurity vendor. The company says it is already working with all three.

The core product is a high-fidelity simulator that stress-tests AI models against novel attack scenarios, both as targets and as potential attackers. Irregular claims two firsts: detecting AI-to-AI jailbreaking, where one model attacks another, and observing AI bypassing Windows Defender to move laterally between endpoints. The simulator's purpose is to map attack vectors that don't yet have a known shape — which matters because anomaly detection, a pillar of current security infrastructure, requires a baseline. If you don't know what an AI-native attack looks like, you can't establish that baseline.

**Why labs outsource this**

Lahav draws a clean line between what labs own and what they don't. Defenses baked into the neural nets themselves are lab territory. But defenses on the agent layer and in the enterprise environment are not — and given the number of verticals, contexts, and deployment scenarios, there is too much surface area for any single lab to cover. The analogy he reaches for is the cloud transition: when infrastructure changes at speed, there is a window to build the platform company that secures the new stack end to end. Check Point did it for enterprise networks; he argues Irregular can do it for AI agents.

**Near-term threat model**

On the question of rogue AI versus human-directed attacks, Lahav is clear that the near-term risk is human actors using AI as an attack amplifier. He points specifically to capability disclosures in OpenAI's and Anthropic's system cards around bio and chemical assistance, and says his concern is bad actors — including state-linked organizations — gaining access to models already rated at elevated risk levels. The longer-horizon risk of autonomous rogue AI action is real but secondary for now.

Irregular is backed by Sequoia. No funding amount was disclosed.