Sam Altman has acknowledged that OpenAI should have taken more time before finalising its agreement with the United States Department of Defense on AI usage. The deal was announced just hours after Anthropic lost its Pentagon contract, triggering significant backlash online.
In a post on X, Altman said OpenAI has since updated its contract with the Pentagon to clarify its principles.
Why OpenAI Signed the Deal
Altman explained that the agreement was rushed through to keep tensions between the US defense establishment and the AI industry from escalating. Anthropic’s contract had reportedly been terminated after the company declined to remove safeguards limiting unrestricted AI use.
Altman admitted that the timing of OpenAI’s announcement made the move appear “opportunistic and sloppy,” calling it a learning experience as the company navigates higher-stakes decisions in the future. He also said Anthropic should not be labelled a “supply chain risk” by the US government and expressed hope that the Department of Defense would offer Anthropic similar terms.
User Backlash and App Shifts
Following the Pentagon deal, uninstallations of ChatGPT reportedly surged. Data from Sensor Tower showed a 295 per cent day-over-day increase in uninstalls on February 28. Meanwhile, Anthropic’s Claude chatbot saw downloads rise by as much as 51 per cent, reaching the top spot on the Apple App Store in the US.
Pop star Katy Perry also shared a screenshot of Claude with a heart emoji on X, signalling support for Anthropic’s stance.
Changes to the Agreement
Altman shared details from an internal memo outlining new safeguards in the revised contract. He stated that OpenAI’s systems cannot be used for mass domestic surveillance of US nationals in violation of the Fourth Amendment, the National Security Act of 1947 or the Foreign Intelligence Surveillance Act (FISA) of 1978. According to Altman, the Department of Defense understands this limitation to prohibit deliberate tracking or monitoring of US persons.
He further clarified that no intelligence agency within the Defense Department, including the National Security Agency, will use OpenAI’s systems under the current agreement. Any such use would require a separate contractual modification.
Although autonomous weapons were not explicitly addressed, Altman noted that the technology is not ready for certain high-risk applications and that safety trade-offs remain insufficiently understood. He also stated that he would not comply with any unconstitutional order related to OpenAI’s AI systems, even if doing so carried personal legal consequences.
