Sam Altman Says OpenAI Is Renegotiating Pentagon Deal After “Opportunistic and Sloppy” Agreement
By INGLOBE Magazine News Desk | March 3, 2026
OpenAI CEO Sam Altman has confirmed that the company is renegotiating its controversial agreement with the U.S. Department of Defense, acknowledging that the initial deal was rushed and poorly communicated. The revised contract aims to introduce clear safeguards preventing the use of OpenAI’s artificial intelligence for domestic surveillance of American citizens.
The announcement follows criticism from legal experts, rival AI companies, and even OpenAI employees who questioned the ethical implications of the original agreement between the AI developer and the Pentagon.
OpenAI Revises Pentagon AI Agreement
In an internal memo that later circulated widely on social media, Altman admitted the agreement had been finalized too quickly.
“The issues are super complex and demand clear communication,” Altman wrote. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
The updated contract language will explicitly prohibit the intentional use of OpenAI’s technology for domestic surveillance of U.S. citizens or nationals. According to the company, the new terms align with several major legal frameworks, including the Fourth Amendment, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act of 1978.
Restrictions on Intelligence Agencies
Katrina Mulligan, OpenAI’s head of national security partnerships and a former senior official at the Pentagon, National Security Council, and Department of Justice, confirmed that several intelligence organizations would be restricted under the revised agreement.
Under the revised terms, Defense Intelligence Components such as the National Security Agency (NSA), the National Geospatial-Intelligence Agency (NGA), and the Defense Intelligence Agency (DIA) will not be able to use OpenAI’s services without a separate contract amendment.
The new terms also address concerns about commercially purchased datasets, including cell phone location information and data collected from consumer fitness applications, categories of data that have historically occupied a legal gray area in surveillance law.
AI Industry Dispute Over Surveillance Safeguards
The debate intensified after reports revealed that AI company Anthropic had previously pushed for similar protections in its own negotiations with the Pentagon. Anthropic reportedly demanded strict guarantees preventing the use of its AI systems for domestic surveillance or autonomous weapons programs.
Negotiations between Anthropic and the U.S. military ultimately collapsed over the company’s insistence on those stronger safeguards.
According to reporting from The Atlantic, Anthropic’s refusal to compromise on these conditions led to the Trump administration labeling the company a “supply-chain risk,” escalating tensions between AI developers and government agencies.
Legal Experts Question Enforcement
Despite the revised contract language, some legal experts remain skeptical about how effectively the restrictions can be enforced.
Charlie Bullock, senior research fellow at the Institute for Law & AI, described the updated terms as an improvement but noted they do not address broader concerns about autonomous weapons systems.
“This seems like a significant improvement over the previous language with respect to surveillance,” Bullock wrote in a post on X. “However, it does not address autonomous weapons concerns, nor does it claim to.”
Some analysts have suggested that independent legal review of the full contract would provide greater transparency and reassurance to both employees and the public.
Backlash From OpenAI Employees
The Pentagon agreement triggered significant backlash within OpenAI itself. Many employees signed an open letter supporting Anthropic’s position on stricter safeguards.
Public reaction followed quickly. Anthropic’s AI assistant Claude briefly surged to the top of Apple’s App Store rankings as consumers signaled support for the company’s stance.
Outside OpenAI’s San Francisco headquarters, critics even left chalk graffiti on sidewalks condemning the company’s partnership with the military.
Some researchers within OpenAI also criticized the deal publicly. Aidan McLaughlin, a research scientist at the company, posted on X that he personally did not believe “this deal was worth it,” a message that drew nearly half a million views.
Growing Debate Over AI and Military Use
The controversy highlights the growing tension between the rapidly expanding artificial intelligence industry and government agencies seeking to deploy AI tools for national security purposes.
As governments explore advanced AI technologies for defense, companies face increasing pressure to balance national security cooperation with ethical safeguards, transparency, and public trust.
OpenAI’s renegotiation with the Pentagon may become a defining moment in shaping how AI companies collaborate with military institutions in the future.