Palantir's chief architect Akshay Krishnaswamy on forward-deployed engineering and AI in enterprise workflows
Apr 9, 2025 with Akshay Krishnaswamy
Key Points
- Palantir positions itself as an operating system for complex decision-making, embedding data, compliance rules, and business logic into a shared model called an ontology that lets companies like AIG and Cleveland Clinic respond faster to disruptions like tariffs.
- Palantir's forward-deployed engineering model runs product development from the field backward, not from engineering headquarters, with field engineers expected to iterate with product teams overnight and ship new versions within days.
- Krishnaswamy argues enterprise AI adoption requires gradual automation, starting at around 10 percent and ramping up as confidence builds, rather than full autonomy, and that generic benchmarks are insufficient without workflow-specific unit tests tied to actual business processes.
Summary
Akshay Krishnaswamy, Palantir's chief architect and a 12.5-year veteran of the company, makes the case that Palantir is best understood as an operating system for complex decision-making — one that has evolved from counterterrorism and battlefield intelligence into commercial workflows at companies like AIG, Cleveland Clinic, General Mills, and Airbus.
What Palantir actually builds
The company's core is a shared model of the world — what Palantir calls an ontology — that maps not just data objects but the actions users can take, the compliance rules governing those actions, and the business logic that governs state changes. Krishnaswamy describes it as a digital twin of a business process, not just of its assets. In practice, that means nurse scheduling at Cleveland Clinic and HCA, underwriting automation at AIG (whose CEO appeared on stage with Palantir's Alex Karp and Anthropic's Dario Amodei), and supply chain navigation at General Mills and Tyson.
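The ontology idea can be made concrete in miniature: objects carry state, actions are gated by compliance rules, and business logic governs the state change itself. This is an illustrative Python sketch of that structure only, not Palantir's actual Foundry or AIP API; every name here (`ShiftSlot`, `max_weekly_hours`, the 12-hour shift) is invented.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ShiftSlot:
    """An ontology object: a nursing shift with state, not just raw data."""
    slot_id: str
    assigned_nurse: Optional[str] = None
    hours_this_week: dict = field(default_factory=dict)

def max_weekly_hours(slot: ShiftSlot, nurse: str) -> bool:
    """Compliance rule: a 12-hour shift may not push a nurse past 40 hours."""
    return slot.hours_this_week.get(nurse, 0) + 12 <= 40

@dataclass
class Action:
    """An action the ontology exposes, gated by its compliance rules."""
    name: str
    rules: list

    def apply(self, slot: ShiftSlot, nurse: str) -> ShiftSlot:
        for rule in self.rules:
            if not rule(slot, nurse):
                raise PermissionError(f"{self.name}: {rule.__name__} failed")
        # Business logic: the permitted state change itself.
        slot.assigned_nurse = nurse
        slot.hours_this_week[nurse] = slot.hours_this_week.get(nurse, 0) + 12
        return slot

assign = Action("assign_shift", rules=[max_weekly_hours])
slot = ShiftSlot("icu-night-01", hours_this_week={"rn_patel": 36})
assign.apply(slot, "rn_garcia")    # allowed: 0 + 12 <= 40
# assign.apply(slot, "rn_patel")   # would raise: 36 + 12 > 40
```

The point of the structure is that the rule and the state change live together, so any client (human UI or AI agent) goes through the same guardrail.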
The tariff disruption creates a direct sales motion for Palantir. Companies running multiple ERPs, MES systems, and PLM systems — each modeling operations in slightly different ways — are finding those systems rigid when they need to respond fast. Palantir's pitch is less "we'll fix your tariff problem" and more "we'll give you the cockpit to navigate as conditions change."
Forward-deployed engineering
Krishnaswamy is direct that the forward-deployed model is widely misunderstood. The original forward-deployed engineer was Shyam Sankar, now Palantir's CTO, and the model is organizational, not just a job title. The field has primacy — it drives what gets built — and engineers in the field are expected to spend evenings talking with product teams, then show up two days later with a new version. Treating that as glorified sales engineering misses the point: the entire product development process runs front-to-back from the field, not from a privileged engineering core in Palo Alto. Krishnaswamy concedes the model only makes sense where the problem domain is complex enough to justify it.
A partner ecosystem of Palantir alumni building dedicated consultancies is now emerging, though Krishnaswamy frames those as newer and smaller. The real leverage for startups, he argues, is building on Foundry and AIP directly — getting data integration, ontology, workflow tooling, and SDKs as a starting point rather than rebuilding from scratch against a hyperscaler. Palantir held its second developer conference three weeks before the interview, and announced a new startups cohort shortly before the segment aired.
Agents in enterprise
On agentic AI, Krishnaswamy is bullish but skeptical of the "bit flip" pitch — the idea that an enterprise can simply switch on autonomous AI and trust the outcome. His view is that agents need to be introduced incrementally into existing workflows, starting at 10% automation and sliding upward as confidence builds. The analogy he uses is training an intern: gradual leveling up, not a sudden handoff. The workflows where Palantir is seeing real traction — nurse scheduling, underwriting, supply chain — all share the same structure: human users and AI operating in the same context, with guardrails that make the system's behavior measurable and auditable.
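The intern analogy can be restated as a routing policy: send a small fraction of cases to the agent, audit the results, and raise the fraction only when measured accuracy clears a bar. A minimal sketch with invented thresholds and class names, not any Palantir mechanism:

```python
import random

class GradualAutomation:
    """Route a sliding fraction of cases to the agent, the rest to humans.

    Starts at 10% and ratchets up only when audited accuracy stays high,
    like leveling up an intern rather than a sudden handoff.
    """

    def __init__(self, start=0.10, step=0.10, ceiling=0.90, threshold=0.95):
        self.fraction = start        # share of cases the agent handles
        self.step = step
        self.ceiling = ceiling
        self.threshold = threshold   # audited accuracy needed to level up
        self.agent_correct = 0
        self.agent_total = 0

    def route(self) -> str:
        """Decide who handles the next case."""
        return "agent" if random.random() < self.fraction else "human"

    def record_audit(self, correct: bool) -> None:
        """Log a human audit of one agent decision; level up every 100 audits."""
        self.agent_total += 1
        self.agent_correct += int(correct)
        if self.agent_total >= 100:
            if self.agent_correct / self.agent_total >= self.threshold:
                self.fraction = min(self.fraction + self.step, self.ceiling)
            self.agent_correct = self.agent_total = 0

policy = GradualAutomation()
# After 100 audited cases at 96% accuracy, the agent's share rises to 20%.
```

The ceiling below 100% reflects the same skepticism of the "bit flip": some share of decisions stays human even at high confidence.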
For model selection, Krishnaswamy argues that generic benchmarks like MMLU or LM Arena are insufficient for enterprise use. The right evals are workflow-specific unit tests — actual in-vivo tests tied to the business process, whether that is an underwriting workflow at AIG or a scheduling decision at a hospital. Enterprises that cannot measure AI performance in their specific operational context cannot control or improve it.
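Workflow-specific evals of this kind can literally be unit tests: fixed cases drawn from the business process, with pass/fail assertions on the model-backed decision. A hypothetical sketch in which `decide` is a stub standing in for an LLM-backed underwriting step; the cases and thresholds are invented for illustration:

```python
def decide(application: dict) -> str:
    """Stand-in for a model-backed underwriting step.

    A real system would call a model with workflow context; the stub's
    referral rules here are purely illustrative.
    """
    if application["claims_last_5y"] > 3 or application["risk_score"] > 0.8:
        return "refer_to_human"
    return "approve"

# Eval cases tied to the actual business process, not a generic benchmark.
UNDERWRITING_EVALS = [
    ({"claims_last_5y": 0, "risk_score": 0.2}, "approve"),
    ({"claims_last_5y": 5, "risk_score": 0.1}, "refer_to_human"),
    ({"claims_last_5y": 1, "risk_score": 0.9}, "refer_to_human"),
]

def run_evals() -> float:
    """Return the pass rate over the workflow's own test cases."""
    passed = sum(decide(case) == expected
                 for case, expected in UNDERWRITING_EVALS)
    return passed / len(UNDERWRITING_EVALS)

assert run_evals() == 1.0  # gate deployment on the workflow-specific pass rate
```

Swapping in a different model means rerunning the same suite, which is what makes performance measurable in the operational context rather than on a leaderboard.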
DeepSeek and model risk
Krishnaswamy stops short of calling the Manchurian Candidate scenario — a foreign model with hidden behavior that activates inside a DoD network — pure fiction. His argument is composite risk: mechanistic interpretability of large models is still limited, so even a non-malicious DeepSeek model cannot be fully understood. Add the Chinese government's legal authority over domestic AI companies, and the risk profile is high enough that no US government customer should accept it. His preferred frame is maintaining a full stack of options — proprietary commercial models, western open-source models like Llama — so the US is never forced to rely on a foreign model from a rival regime. Palantir is model-agnostic by design, and Krishnaswamy sees that as a feature rather than a gap.
Security demand from commercial clients
Corporate security awareness is rising well beyond ITAR-regulated industries. Krishnaswamy says the threat profile against prominent US companies is increasing, and security is no longer solely a CISO concern — it has moved to the CEO and board level. Companies building the next generation of AI-enabled operations want assurance they are building on a foundation that will not need to be rebuilt, a lesson drawn from the previous generation of enterprise software.
At 12.5 years in, Krishnaswamy's read on Palantir's position is that the company is still at the beginning — more customers, more domains, and a developer ecosystem that did not exist a few years ago.