The US military reportedly used Claude in strikes against Iran after Donald Trump designated Anthropic a supply chain risk. Here is what changed


According to reports from The Wall Street Journal and Axios, the U.S. military used Claude AI models developed by Anthropic during Operation Epic Fury, a joint U.S.–Israeli strike on Iran.

Claude was reportedly used for intelligence analysis, target-selection support, and battlefield simulations.

However, shortly before this, U.S. President Donald Trump had designated Anthropic as a “supply chain risk,” signaling that U.S. government agencies would phase out the company’s models.

Why was Claude still used?

The key reason appears to be infrastructure lock-in and transition timing:

  • Claude was reportedly the only AI model already integrated into classified U.S. defense networks.

  • Replacing an AI system embedded in secure military systems is not immediate.

  • Reports suggest a transition period of roughly six months is required to safely migrate to another provider.

  • Operational continuity during an active military campaign likely overrode immediate policy shifts.

In short: policy changed faster than infrastructure could.

What about OpenAI?

After the Anthropic designation, OpenAI reportedly signed an agreement to work with U.S. defense authorities.

CEO Sam Altman has stated that OpenAI would not allow its models to be used for domestic mass surveillance or autonomous weapons. However:

  • Deployment into classified military systems takes time.

  • Security clearance, testing, and integration procedures are extensive.

  • Immediate replacement during an active conflict is unlikely.

Why was Anthropic designated a supply chain risk?

Reports suggest tensions emerged after Anthropic CEO Dario Amodei declined to permit certain military-related uses of Claude, including:

  • Domestic mass surveillance

  • Development of autonomous weapons systems

The U.S. government reportedly labeled the company a supply chain risk following those disagreements.

Amodei has indicated that Anthropic plans to legally challenge that designation.

The bigger issue

This situation highlights three broader dynamics:

  1. AI dependency in defense systems – Once integrated into classified networks, replacing AI tools is technically complex.

  2. Policy vs. operational reality – Government decisions can move faster than technical transitions.

  3. Ethical tension – AI companies increasingly face pressure over military use of their models.

It is worth noting that verified official disclosures about AI's exact operational role in the strike remain limited, and much of the reporting relies on unnamed sources.


 
