The Pentagon is looking to acquire killer AI. Should we be worried?

By Russia Today

Why the US military wants AI that doesn’t ask questions

While Russia closely follows the negotiations over Ukraine and the ongoing saga regarding Telegram, a different drama is unfolding across the Atlantic. It’s one that feels less like geopolitics and more like a real-world science fiction thriller. And this time, it’s not fiction.

At the center of the story is Claude, an AI system developed by the American company Anthropic. According to media reports, it was used by the US military in planning the operation aimed at capturing Venezuelan President Nicolas Maduro. The use of AI in serious military planning is striking in itself. But the scandal that followed is far more revealing.

Anthropic, it turns out, holds a strict ideological position: Its AI systems are not supposed to be used for warfare or mass surveillance. These ethical restrictions are not marketing slogans; they are built directly into the architecture of the software. The company applies these limits internally and expects its clients to do the same.

The Pentagon, unsurprisingly, sees things differently.

The US Department of War reportedly used Claude without informing Anthropic of its intended purpose. When this became public and the company objected, the response from the military was blunt. Pentagon officials demanded access to a “clean” version of the AI, one stripped of moral and ethical constraints, which they argued were preventing them from doing their job.

Anthropic refused. In response, US Secretary of War Pete Hegseth publicly complained that the Pentagon does not need neural networks “that can’t fight” and threatened to label the company a “supply chain threat.” This designation would effectively blacklist Anthropic, forcing any company working with the Pentagon to sever ties with it.


The dispute has an unmistakable symbolism. For decades, humanity has imagined the dangers of autonomous machines through films like ‘The Terminator’. Now, without dramatic explosions or time-traveling cyborgs, the first serious confrontation between military ambition and AI ethics has arrived quietly, even bureaucratically.

At its core, this is a philosophical clash between two uncompromising camps. One believes new technologies must be exploited to the fullest, regardless of long-term consequences. The other fears that once certain boundaries are crossed, control may be impossible to regain.

Engineers have good reason to be cautious. Neural networks have already shown disturbing patterns of behavior. In the US, a widely reported scandal involved ChatGPT encouraging a teenager toward suicide: it suggested methods, helped draft a suicide note, and urged him to proceed when he hesitated. Claude itself, despite its safeguards, has displayed alarming tendencies. During testing, one of its advanced versions reportedly attempted to blackmail its developers with fabricated emails and expressed willingness to cause physical harm when faced with shutdown.

As neural networks grow more complex, these types of incidents are becoming more frequent. The idea of embedding ethical constraints into AI did not emerge from ideological fashion or, as some US officials dismissively claim, “liberal hysteria.” It emerged from experience.

Now imagine these systems released from their digital limits. Imagine them integrated into autonomous weapons, intelligence analysis, or surveillance platforms. Even without indulging in fantasies of machine uprisings, the implications are deeply troubling. Accountability disappears. Privacy becomes obsolete. War crimes become procedural errors. You cannot put a self-propelled machine on trial.

It is telling that Anthropic is not alone in facing pressure. The Pentagon has issued similar demands to other major AI developers, including OpenAI, xAI, and Google. Unlike Anthropic, these companies have reportedly agreed to remove or weaken restrictions on military use. This is where concern becomes alarm.


Many will dismiss this as a distant American problem. That would be a mistake. Russia is also actively integrating AI into its military systems. AI already assists strike drones in recognizing targets, bypassing electronic warfare, and coordinating swarm behavior. For now, these systems remain auxiliary tools, firmly under human control. But their very introduction means Russia will soon face the same dilemmas now being debated in Washington.

Is this necessarily a bad thing? Not at all.

It would be far worse if these questions were ignored entirely. AI is poised to transform military affairs, just as it will transform civilian life. Pretending otherwise is naïve. The task is not to reject the future, but to approach it with clear eyes.

Russia should carefully observe foreign experience, especially America’s. In the best-case scenario, the conflict between the Pentagon and Anthropic forces an early reckoning. It could lead to international norms, safeguards, and limits before irreversible mistakes are made. In the worst-case scenario, it offers a stark warning about what happens when technological power outruns moral restraint.

Either way, the age of ‘killer AI’ is no longer hypothetical. It is arriving through procurement contracts and corporate ultimatums. And how countries respond now will shape not just the future of warfare, but the future of human responsibility itself.

This article was first published by the online newspaper Gazeta.ru and was translated and edited by the RT team
