OpenAI CEO Sam Altman addresses the gathering at the AI Impact Summit, in New Delhi, India, February 19, 2026.
Bhawika Chhabra | Reuters
OpenAI CEO Sam Altman said Monday that the company “should not have rushed” its latest deal with the U.S. Department of Defense and would make some revisions to the agreement.
It comes after the ChatGPT maker announced it had struck a new deal with the Defense Department on Friday, just hours after the White House directed federal agencies to stop using rival AI firm Anthropic’s tools, and hours before Washington would carry out strikes on Iran.
In a post on X, Altman said OpenAI would amend the contract to include some new language, including that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
He added that the Defense Department had affirmed that OpenAI’s tools would not be used by intelligence agencies such as the NSA.
“There are many things the technology just isn’t ready for, and many areas where we don’t yet understand the tradeoffs required for safety,” Altman said, adding that the company would work with the Pentagon on technical safeguards.
The CEO also admitted he had made a mistake and “should not have rushed” to get the deal out on Friday.
“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” he said.
The acknowledgment comes after a public feud between Anthropic and Washington over safeguards for its Claude AI systems. Defense Secretary Pete Hegseth also said the company would be designated a supply-chain risk.
Anthropic had sought guarantees that its tools would not be used for purposes such as domestic surveillance in the U.S., or to operate and develop autonomous weapons without human control.
The dispute began after it was revealed that Anthropic’s Claude had been used by the U.S. military in its raid to capture Venezuelan president Nicolás Maduro in January, though the company did not publicly object to that use case.
OpenAI’s deal with the Pentagon came right after talks between Anthropic and the Defense Department broke down, prompting public backlash online, with many users reportedly ditching ChatGPT for Claude on app stores.
In his post, Altman further addressed the controversy, saying: “In my conversations over the weekend, I reiterated that Anthropic should not be designated as a [supply chain risk], and that we hope the [Department of Defense] offers them the same terms we’ve agreed to.”