The company has disputed the Pentagon’s claim that it can somehow still control Claude AI once it is deployed on military networks
AI developer Anthropic insists it has no backdoor or “kill switch” for its Claude AI once it is deployed in classified Pentagon military networks, according to a new court filing.
The US military and the tech firm found themselves embroiled in a policy dispute earlier this year, with the Pentagon insisting on using the system for “all lawful military purposes,” while the company stood by its AI safeguards related to mass surveillance and fully autonomous weapons use.
The Pentagon ultimately ended its partnership with Anthropic, designating the tech firm a “supply chain risk,” a rare label typically reserved for entities linked to Washington’s foreign adversaries. The designation not only bars the company from working directly with the US government, but also prevents other contractors from using its products.
In a new filing to a federal appeals court in Washington, DC, Anthropic disputed the key US administration claim that the firm still retained some degree of control over Claude AI once it was deployed to classified systems and effectively granted itself an “operational veto.” The firm said it has “no back door or remote kill switch,” while its “personnel cannot log into a department system to modify or disable a running model.”
The AI system supplied to the Pentagon comes as a “static” model, the company argued. Once it is deployed, it “does not degrade or change on its own, and Anthropic cannot push undisclosed or unsanctioned changes to a model after the department has deployed it.”
Anthropic was formally designated a “supply-chain risk to national security” on February 27, while US President Donald Trump accused it of being run by “leftwing nut jobs.” The company challenged the label in court, and the legal battle has so far yielded mixed results.
Earlier this month, the DC court rejected Anthropic’s request for a pause on the supply chain risk designation. In a parallel case in California, however, a court sided with the company, temporarily blocking the administration decision. With the split decision, Anthropic remains barred from working with the Pentagon but can still continue its partnerships with other agencies while the legal battle goes on.
This article, “Anthropic says no ‘kill switch’ in AI deployed by US military,” was originally published by RT (Russia Today).