Ramp launches first AI agent for corporate expense management, automating approvals using calendar and email context
Jul 10, 2025 with Karim Atiyeh
Key Points
- Ramp launches its first AI agent to automate corporate expense approvals by cross-referencing employee calendars, email, and company policies without human intervention.
- Ramp routes high-stakes approval decisions to expensive frontier models while using cheaper models for routine tasks like merchant normalization, prioritizing engineering speed over per-token cost optimization.
- The company is building secure interfaces to let enterprise agents handle legal review, IT procurement, and vendor negotiation with access to Ramp transaction data, with no timeline disclosed.
Summary
Ramp has launched its first AI agent, targeting what Karim Atiyeh, Ramp's co-founder and CTO, describes as the "messy middle" between finance teams and the rest of the business. The agent is designed for controllers and operates by ingesting a company's full expense policy and every transaction on the platform, then automating the approval and classification decisions that currently require back-and-forth between employees and finance staff.
The practical mechanics go beyond simple rule-matching. When a large restaurant charge appears, the agent cross-references the employee's calendar and email to determine whether a team dinner justifies the spend, then rules on policy compliance automatically. It can also act externally, browsing the web, calling vendors, or hitting third-party APIs to retrieve missing receipts or validate transaction context without human intervention.
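The cross-referencing step described above can be sketched as follows. This is a minimal illustration, not Ramp's implementation: the `PER_PERSON_MEAL_LIMIT` threshold, the two-hour matching window, and all type names are invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Transaction:
    merchant: str
    amount: float
    timestamp: datetime
    employee: str

@dataclass
class CalendarEvent:
    title: str
    start: datetime
    end: datetime
    attendee_count: int

# Hypothetical policy threshold; real limits come from the company's policy.
PER_PERSON_MEAL_LIMIT = 75.0

def justify_meal_charge(txn: Transaction, events: list[CalendarEvent]) -> str:
    """Cross-reference a restaurant charge against the employee's calendar.

    A charge is auto-approved when a plausibly related event (one that
    overlaps the transaction window) brings the per-head cost under the
    policy limit; otherwise it is flagged for review.
    """
    window = timedelta(hours=2)  # assumed slack around the event times
    for event in events:
        overlaps = event.start - window <= txn.timestamp <= event.end + window
        if overlaps and event.attendee_count > 0:
            per_head = txn.amount / event.attendee_count
            if per_head <= PER_PERSON_MEAL_LIMIT:
                return "approve"
            return "flag: per-head cost exceeds policy"
    return "flag: no supporting calendar context"
```

In the real system the "missing context" branch is where the agent goes further, browsing the web or contacting vendors rather than immediately escalating to a human.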
On the architecture distinction between agentic and legacy automation: Ramp draws a clear line between its older deterministic pipelines, which handled receipt tagging and merchant normalization with fixed logic and high auditability, and the new agent, which receives high-level instructions and selects its own tool paths. The older approach required exhaustive pre-specification of every decision branch; the new one trades some predictability for flexibility and faster iteration. Both paradigms coexist inside Ramp's stack.
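The distinction between the two paradigms can be made concrete with a sketch. The normalization rules and the planner stub below are invented for illustration; in Ramp's stack the planner role is played by an LLM choosing among real tools.

```python
# Deterministic pipeline: every branch is pre-specified and auditable.
def normalize_merchant(raw: str) -> str:
    rules = {
        "AMZN MKTP": "Amazon",   # example rules only
        "UBER *TRIP": "Uber",
    }
    for prefix, name in rules.items():
        if raw.upper().startswith(prefix):
            return name
    return raw.title()  # fixed fallback, no judgment involved

# Agentic loop: the system receives a high-level goal and picks its own
# tool path. `plan_next_step` stands in for an LLM call that returns a
# tool name, or None when the goal is satisfied.
def run_agent(goal: str, tools: dict, plan_next_step) -> list[str]:
    trace = []
    while (tool := plan_next_step(goal, trace)) is not None:
        trace.append(tools[tool]())
    return trace
```

The trade named in the text is visible here: the pipeline's behavior is fully enumerable from its rule table, while the agent's path depends on the planner's choices at run time.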
On LLM cost and model selection: Ramp uses an internal stack-ranking framework to match inference cost to task stakes. High-volume, low-risk work such as merchant name normalization uses cheaper, smaller models. Low-volume, high-stakes decisions get the newest frontier models regardless of per-token cost. Karim is explicit that engineering time is the most valuable input and that over-optimizing inference spend is a distraction, referencing the micro-optimization of AWS costs circa 2014 as a cautionary contrast.
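The stack-ranking idea reduces to routing on stakes rather than on price. The task entries, volumes, and model names below are invented placeholders, not Ramp's actual tiers.

```python
# Hypothetical task inventory; names echo examples from the discussion,
# the volume figures are made up.
TASKS = [
    {"name": "merchant_normalization", "stakes": "low", "daily_volume": 500_000},
    {"name": "approval_decision", "stakes": "high", "daily_volume": 2_000},
]

def assign_model(task: dict) -> str:
    """Match inference cost to task stakes: cheap small models for
    high-volume routine work, frontier models for high-stakes calls,
    regardless of per-token price."""
    return "frontier-model" if task["stakes"] == "high" else "small-model"
```

Note what the router does not consider: per-token cost never enters the decision, which is the point of treating engineering time, not inference spend, as the scarce input.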
On latency tradeoffs: For synchronous, user-facing queries the agent routes to faster models to avoid users defaulting back to Slack. Asynchronous background tasks, which make up the majority of agentic workflows at Ramp, tolerate slower, more capable models because the work hands off between people naturally. Karim flagged that current AI browser products, specifically citing Comet, still feel too slow for routine use, and argued that speed was the core innovation behind Chrome's dominance over competitors.
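The sync/async split amounts to a second routing axis alongside stakes. A minimal sketch, with invented model names and `asyncio.sleep(0)` standing in for a real background queue:

```python
import asyncio

def choose_model(interactive: bool) -> str:
    # Interactive queries get a fast model so the experience beats falling
    # back to Slack; background jobs tolerate a slower, stronger model.
    return "fast-model" if interactive else "capable-model"

async def handle(query: str, interactive: bool) -> str:
    model = choose_model(interactive)
    if interactive:
        return f"{model}: {query}"  # answered inline, latency-sensitive
    await asyncio.sleep(0)  # placeholder for enqueueing background work
    return f"{model} (async): {query}"
```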
On third-party data access: Ramp argues its B2B position makes data ownership cleaner than in consumer applications. The data it needs (transaction metadata from Visa's network, email receipts, calendar context) is owned by the businesses on its platform. Karim notes that the main friction has been technical (data cleaning and speed) rather than third parties actively blocking access. He contrasts this with his prior company Paribus, a consumer savings agent that faced systematic CAPTCHA blocking from Amazon and Walmart.
On opening Ramp's platform to external agents: The company is actively working through how to expose secure interfaces so that enterprise agents handling legal review, IT procurement, or vendor negotiation can interact with Ramp data with customer awareness and appropriate permissioning. No timeline or specific partners were disclosed.
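Since no interface details were disclosed, the shape of "appropriate permissioning" can only be guessed at. One plausible minimal form is a customer-granted scope check, sketched here with entirely hypothetical agent names and scope strings:

```python
# Hypothetical customer-granted scopes per external agent; Ramp has not
# published its actual permissioning model.
AGENT_GRANTS = {
    "legal-review-agent": {"transactions:read"},
    "procurement-agent": {"transactions:read", "vendors:read"},
}

def authorize(agent_id: str, scope: str) -> bool:
    """Allow an external agent to act only within scopes the customer
    has explicitly granted, denying unknown agents by default."""
    return scope in AGENT_GRANTS.get(agent_id, set())
```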
On the fully autonomous finance future: Karim's view is that LLM capability is no longer the bottleneck for tasks like autonomous tax filing. The constraint is context, specifically extracting tacit knowledge from people's heads, connecting the right data sources, and building UX patterns that make that extraction efficient. He frames the current moment as analogous to the pre-GUI era of personal computing, where the underlying compute exists but the interaction paradigms that unlock mass utility are still being invented.