Interview

Dan Lahav on Irregular's emergence from stealth: building the security stack for the AI agent era

Sep 18, 2025 with Dan Lahav

Key Points

  • Irregular emerges from stealth claiming that the enterprise security stack, built for static systems, cannot defend against autonomous AI agents, and positions itself as the foundational security layer for the agentic AI transition.
  • The startup's high-fidelity simulator lets security researchers stress-test AI models against novel attack vectors including AI-to-AI jailbreaking and lateral movement across enterprise systems, with partnerships already in place at OpenAI, Anthropic, and Google DeepMind.
  • Lahav frames the near-term threat as human-directed misuse of advanced models for bio and chemical attacks, while treating autonomous rogue AI behavior as a longer-horizon risk.

Summary

Irregular emerged from stealth positioning itself as the security layer for the AI agent era. CEO Dan Lahav argues the existing security stack — built for static enterprise environments — won't survive the shift to autonomous AI systems, and that the window to build the defining platform is now.

The core product is a high-fidelity simulator that lets security researchers place any model into synthetic scenarios to map novel attack patterns: AI jailbreaking other AI, agents bypassing endpoint defenses like Windows Defender, and agents moving laterally across systems. Irregular claims it was first to document AI-to-AI jailbreaking in a controlled environment.

The lab partnership model is the strategic bet. Irregular is already working with OpenAI, Anthropic, and Google DeepMind — not as an outsourced replacement for internal safety work, but as a complement. Lahav draws the analogy to Palo Alto Networks and CrowdStrike, which operate externally to the enterprises they protect. The argument is that some defenses belong baked into the model weights (lab territory), some sit at the agent layer, and some must live in the enterprise environment itself. No single party can cover the full stack.

On the threat model, Lahav splits near-term from long-term risk. The near-term concern is human-directed misuse: bad actors using advanced models to accelerate attacks, with particular worry around bio and chemical capabilities flagged in OpenAI and Anthropic's own system cards. The longer-horizon risk is autonomous rogue behavior by AI systems themselves, though Lahav treats that as secondary for now.

The company frames its moment explicitly around infrastructure transitions: Check Point built the firewall business as enterprises adopted networks; cloud-native security players emerged as workloads moved off-premises. Lahav's bet is that agentic AI is the next such transition, and the platform to secure it hasn't been built yet.