How Pentagon AI Contracts Work—From Maven to Now

The U.S. military now relies on commercial AI for everything from drone analysis to battlefield targeting. Here is how Pentagon AI contracts work, why they spark fierce debate, and what safeguards exist.

Redakcia
4 min read

From Drone Footage to the Kill Chain

In April 2017, the Pentagon quietly launched Project Maven, officially the Algorithmic Warfare Cross-Functional Team, to solve a mundane but urgent problem: military analysts were drowning in drone surveillance footage and could not process it fast enough. The idea was simple: let machine-learning algorithms scan video feeds, flag objects of interest, and free human analysts for higher-level work.

What began as a narrow computer-vision experiment has since evolved into a sprawling AI-assisted targeting and battlefield management ecosystem. Roughly 25,000 U.S. military personnel now use Maven-derived systems across every major combatant command, fusing data from more than 179 sources—satellite imagery, drone feeds, signals intelligence—to compress targeting timelines from hours to minutes.

How the Contracts Are Structured

The Pentagon does not build its own frontier AI models. Instead, it signs agreements with commercial AI labs—companies like Google, OpenAI, xAI, Palantir, and until recently Anthropic—granting the military access to their most advanced systems on classified networks. These are air-gapped, compartmentalized computing environments designed to handle top-secret information.

Individual contracts can be worth up to $200 million and typically permit the government to use AI models for "any lawful purpose." The companies deploy their models via cloud infrastructure inside secure government facilities, where they assist with tasks ranging from intelligence analysis to mission planning.

Crucially, once a model enters classified networks, the originating company has limited visibility into—and no veto power over—how the technology is ultimately used in operations. Contracts may include aspirational language about avoiding autonomous weapons or domestic surveillance, but legal enforceability remains unclear.

The Ethics Battleground

Pentagon AI contracts have triggered some of Silicon Valley's fiercest internal debates. The pattern was set in 2018, when more than 3,000 Google employees signed an open letter opposing the company's original Maven contract, arguing that AI should not be weaponized. Google declined to renew the contract, published a set of AI ethics principles, and Palantir took over as Maven's primary contractor.

The tension resurfaced in early 2026, when Anthropic refused Pentagon demands to remove guardrails preventing autonomous targeting and mass surveillance from its Claude AI models. The Department of Defense responded by designating Anthropic a "supply-chain risk" and ordering a phase-out of its products within six months.

Meanwhile, Google—the company that once walked away on ethical grounds—signed a new classified AI agreement with the Pentagon. More than 950 Google employees protested, but the deal went through. OpenAI and xAI secured similar arrangements, each with non-binding language about responsible use.

What Safeguards Exist

The Department of Defense adopted five ethical principles for AI in February 2020, requiring AI systems to be responsible, equitable, traceable, reliable, and governable. A dedicated Chief Digital and Artificial Intelligence Office (CDAO), stood up in 2022, oversees implementation, and the DoD published a Responsible AI Strategy and Implementation Pathway that same year.

In practice, these principles require human oversight at key decision points, audit trails for AI-assisted recommendations, and testing protocols before deployment. However, critics argue that once models operate inside classified environments, independent oversight becomes nearly impossible. The gap between stated principles and operational reality remains the central tension in military AI governance.

Why It Matters

Pentagon AI contracts sit at the intersection of national security, corporate ethics, and technological power. Supporters argue that AI-assisted intelligence saves lives by enabling faster, more precise military decisions. Opponents warn that compressing the kill chain—the sequence from target identification to strike—to seconds rather than hours raises profound questions about accountability, algorithmic bias, and the erosion of human judgment in life-and-death decisions.

As AI capabilities advance and more companies compete for defense contracts, the rules governing how these tools are used in warfare will shape not just military strategy but the broader relationship between democratic societies and the technologies they create.
