Pentagon summons Anthropic CEO as US military pushes for unrestricted access to AI


US Defence Secretary Pete Hegseth has summoned Anthropic chief executive Dario Amodei to the Pentagon amid growing tensions between the US military establishment and leading artificial intelligence companies over access to advanced AI systems. The meeting reflects mounting pressure from the Defence Department on private AI developers to allow their most powerful models to operate within classified military networks with fewer operational restrictions, and it signals a major escalation in the government's push to integrate frontier AI into national-security operations.

At the centre of the dispute is Anthropic’s Claude AI system, which officials consider one of the most capable artificial intelligence models currently available for intelligence analysis and sensitive defence applications. According to officials familiar with the discussions, the Pentagon wants broader and more flexible deployment of Claude across classified environments, arguing that advanced AI tools are becoming essential for modern warfare, surveillance analysis and strategic decision-making. Defence leaders are seeking clarity on whether Anthropic is willing to expand military access under conditions defined by the government.

Anthropic, however, has sought to retain strict safety guardrails governing how its technology can be used. The company maintains usage policies that restrict applications linked to violence, autonomous weapons development and certain surveillance activities. This difference in priorities has created friction: defence officials are reportedly frustrated by limitations they believe reduce operational effectiveness, while the company continues to emphasise ethical safeguards and responsible deployment.

The confrontation highlights a broader divide emerging between Silicon Valley and national-security agencies over the militarisation of advanced artificial intelligence. As governments increasingly view AI as a strategic asset comparable to nuclear or cyber capabilities, technology firms face growing demands to support defence objectives even when such uses conflict with corporate safety frameworks or public commitments. The Pentagon has reportedly considered severe measures, including designating Anthropic a supply-chain risk, which could effectively exclude its technology from defence contracts if cooperation breaks down.

The dispute also follows reports that AI systems have already been used in sensitive military contexts, intensifying scrutiny over how far private AI tools should be embedded into combat and intelligence workflows. While operational details remain classified, the episode underscores how rapidly artificial intelligence is becoming integrated into geopolitical competition, raising questions about oversight, accountability and the balance between innovation, security and ethical constraints in future warfare.


 
