Anthropic has declined requests from the US Department of Defense to remove built-in safeguards from its artificial intelligence system, Claude, saying it will not permit its technology to be used for fully autonomous weapons or large-scale domestic surveillance. Chief executive Dario Amodei said the company could not agree to the Pentagon’s demand that AI contractors allow “any lawful use” of their systems, arguing that certain applications could undermine democratic principles rather than strengthen national security.
Amodei emphasized that Anthropic supports the use of artificial intelligence to strengthen national defense and protect democratic nations. The company has already deployed its AI models within classified government networks and national laboratories, where they assist with intelligence analysis, operational planning, cybersecurity tasks and advanced modeling. He also noted that Anthropic had previously restricted access for entities linked to the Chinese Communist Party, even at the cost of significant revenue, to align its operations with national security priorities.
The disagreement centers on two uses the company considers unacceptable. Anthropic has drawn a firm line against mass domestic surveillance, warning that advanced AI systems could combine vast amounts of publicly available data to build detailed profiles of individuals at scale, posing serious risks to privacy and civil liberties. The company has also rejected the use of its technology in fully autonomous weapons systems, arguing that current AI models are not reliable enough to make life-and-death targeting decisions without human oversight and could endanger both civilians and military personnel.
According to Amodei, the Pentagon has pressured the company by threatening to remove Anthropic from defense systems, classify it as a supply-chain risk, or potentially invoke the Defense Production Act to compel compliance. Defense officials reportedly issued an ultimatum demanding unrestricted military access to the technology, sharpening the tension between ethical safeguards and defense procurement demands. Amodei described these pressures as contradictory, noting that the same authorities simultaneously characterized the company as both a security risk and an essential partner.
Despite the standoff, Anthropic has indicated it remains willing to cooperate with the Defense Department within ethical boundaries and has offered to collaborate on research aimed at improving AI reliability for defense applications. The company said that if it is ultimately removed from government systems, it would assist in ensuring a smooth transition to prevent disruption to ongoing military operations while maintaining its commitment to responsible AI deployment.