After an almost 300% spike in ChatGPT uninstalls, OpenAI revises a hasty US military contract


Sam Altman has acknowledged that OpenAI should have taken more time before finalising its agreement with the United States Department of Defense on AI usage. The deal was announced shortly after Anthropic lost its Pentagon contract, triggering criticism online.

On X, Altman said OpenAI has since revised its contract with the Pentagon to make its guiding principles clearer.

Why the deal was signed

Altman explained that the decision to move quickly was intended to prevent further friction between the US defense establishment and AI firms. Anthropic’s contract had reportedly been terminated after it declined to remove safeguards limiting how its AI could be deployed.

He admitted, however, that the timing made OpenAI’s move appear “opportunistic and sloppy,” calling it a learning experience as the company faces increasingly high-stakes decisions. Altman also said Anthropic should not be labelled a “supply chain risk” by the US government and expressed hope that it would be offered similar terms.

User backlash and app trends

After the Pentagon agreement became public, ChatGPT saw a spike in uninstalls in the US. Data from Sensor Tower showed day-over-day uninstalls rising sharply on February 28. Meanwhile, downloads of Anthropic’s Claude chatbot increased significantly, and the app briefly reached the top position on the US Apple App Store.

Singer Katy Perry also shared a screenshot of Claude with a heart emoji on X, signalling support for Anthropic’s stance.

Changes to the agreement

Altman shared details from an internal memo outlining new safeguards. OpenAI’s systems, he said, cannot be used for mass domestic surveillance of US citizens, referencing protections under the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence Surveillance Act of 1978. The memo states that the Defense Department recognises this restriction as prohibiting deliberate tracking or monitoring of US nationals.

He further clarified that intelligence agencies such as the National Security Agency would not be allowed to use OpenAI’s systems under the current agreement. Any such access would require a separate contract modification.

Although autonomous weapons were not directly addressed, Altman noted that the technology is not ready for many high-risk applications and that safety trade-offs are not yet fully understood. He also stated that he would refuse to comply with any unconstitutional order related to OpenAI’s AI systems, even if it resulted in legal consequences for him personally.
