Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the U.S. Department of War to use the startup’s AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed.
The meeting came at the end of a week where a conflict between Secretary of War Pete Hegseth and OpenAI rival Anthropic burst into public acrimony, ending with the apparent cancellation of Anthropic’s contracts with the Pentagon and with the federal government in general.
Altman said the government is willing to let OpenAI build its own “safety stack”—that is, a layered system of technical, policy, and human controls that sit between a powerful AI model and real-world use—and that if the model refuses to perform a task, the government would not force OpenAI to override that refusal.
OpenAI would retain control over how technical safeguards are implemented and which models are deployed and where, and would limit deployment to cloud environments rather than “edge systems.” (In a military context, edge systems are a category that could include aircraft and drones.) In what would be a major concession, Altman told employees that the government said it is willing to include OpenAI’s named “red lines” in the contract, such as not using AI to power autonomous weapons, conduct domestic mass surveillance, or engage in critical decision-making.
OpenAI and the Department of War did not immediately respond to requests for comment.
Sasha Baker, head of national security policy at OpenAI, and Katrina Mulligan, who leads national security for OpenAI for Government, also spoke at the OpenAI all-hands, according to the source. One of those officials said the relationship between Anthropic and the government had broken down because Anthropic cofounder and CEO Dario Amodei had offended Department of War leadership, including publishing blog posts that “the department got upset about.”
Anthropic, a company founded by people who left OpenAI over safety issues, had been the only large commercial AI maker whose models were approved for use at the Pentagon, in a deployment done through a partnership with Palantir. But Anthropic’s management and the Pentagon have been locked for several days in a dispute over limitations that Anthropic wanted to put on the use of its technology. Those limitations are essentially the same ones that Altman said the Pentagon would abide by if it used OpenAI’s technology.
Anthropic had refused Pentagon demands that it remove safeguards on its Claude model that restrict its use for domestic mass surveillance or fully autonomous weapons, even as defense officials insisted that AI models must be available for “all lawful purposes.” The Pentagon, including Hegseth, had warned Anthropic it could lose a contract worth up to $200 million if it did not comply. Altman has previously said OpenAI shares Anthropic’s “red lines” on limiting certain military uses of AI, underscoring that even as OpenAI negotiates with the U.S. government, it faces the same core tension now playing out publicly between Anthropic and the Pentagon.
The OpenAI all-hands came just after President Trump announced that the federal government will stop working with Anthropic, in a dramatic escalation of the government’s clash with the company over its AI models.
“I am directing every federal agency in the United States government to immediately cease all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!” Trump said in a post on Truth Social. The Department of War and other agencies using Anthropic’s Claude models will have a six-month phase-out period, he said.
At the OpenAI all-hands, staff were told that the most challenging aspect of the deal for leadership was concern over foreign surveillance, and that there was a major worry about AI-driven surveillance threatening democracy, according to the source. However, company leaders also seemed to acknowledge that governments inevitably spy on foreign adversaries, citing claims that national security officers “can’t do their jobs” without international surveillance capabilities. References were made to threat intelligence reports showing that China was already using AI models to target dissidents overseas.
This story was originally featured on Fortune.com