Anthropic mistakenly leaks its own AI coding tool’s source code, just days after accidentally revealing an upcoming model known as Mythos

News by Fortune

Anthropic has accidentally leaked the source code for its popular coding tool Claude Code. 

The leak comes just days after Fortune reported that the company had inadvertently made close to 3,000 files publicly available, including a draft blog post that detailed a powerful upcoming model that presents unprecedented cybersecurity risks. The model is known internally as both “Mythos” and “Capybara,” according to the leaked blog post obtained by Fortune.

The source code leak exposed around 500,000 lines of code across roughly 1,900 files. When reached for comment, Anthropic confirmed that “some internal source code” had been leaked within a “Claude Code release.”

A spokesperson said: “No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”

The latest data leak is potentially more damaging to Anthropic than the earlier accidental exposure of the company’s draft blog post about its forthcoming model. While the latest security lapse did not expose the weights of the Claude model itself, it did allow people with technical knowledge to extract additional internal information from the company’s codebase, according to a cybersecurity professional Fortune asked to review the leak. 

Claude Code is perhaps Anthropic’s most popular product and has seen soaring adoption among large enterprises. At least some of Claude Code’s capabilities come not from the underlying large language model that powers the product but from the software “harness” that sits around the model, directing how it uses other software tools and enforcing the guardrails and instructions that govern its behavior. It is the source code for this agentic harness that has now leaked online.

The leak potentially allows a competitor to reverse-engineer how Claude Code’s agentic harness works and use that knowledge to improve its own products. Some developers may also seek to create open-source versions of the harness based on the leaked code.

The leaked code also provided further evidence that Anthropic has a new model with the internal name Capybara that the company is actively preparing to launch, according to Roy Paz, a senior AI security researcher at LayerX Security. It revealed that the company has a “fast” and “slow” version of the new model and that it will likely be a replacement for Opus as the most advanced model on the market, he said.

Currently, Anthropic markets each of its models in three different sizes. The largest and most capable model versions are branded Opus; slightly faster and cheaper, but less capable, versions are branded Sonnet; and the smallest, cheapest, and fastest are called Haiku. In the draft blog post obtained by Fortune last week, Anthropic describes Capybara as a new tier of model that is even larger and more capable than Opus, but also more expensive.

The newest leak, first made public in an X post, appears to have happened after Anthropic uploaded all of Claude Code’s original source code to NPM, a platform developers use to share and update software, instead of only the finished, packaged version that computers actually run. The mistake looks like “human error,” with someone taking a shortcut that bypassed normal release safeguards, Paz said.

“Usually, large companies have strict processes and multiple checks before code reaches production, like a vault requiring several keys to open,” he told Fortune. “At Anthropic, it seems that the process wasn’t in place and a single misconfiguration or misclick suddenly exposed the full source code.”

Paz also raised concerns about how the tool connects to Anthropic’s internal systems. Even without special encrypted access keys that would normally be required to access such systems, it appears possible to access internal services that should be restricted, Paz said. He warned this could give malicious actors, including nation-states, new opportunities to exploit Anthropic’s models to build more powerful cyberattack tools and bypass the safeguards meant to constrain them.

Anthropic’s current most powerful model, Claude 4.6 Opus, is already classed by the company as a dangerous model when it comes to cybersecurity risks. Anthropic has said its current Opus models are capable of autonomously identifying zero-day vulnerabilities in software. While these capabilities are intended to help companies detect and fix flaws, they could also be weaponized by hackers, including nation-states, to find and exploit vulnerabilities.

This isn’t the first time Anthropic has inadvertently leaked details about its popular Claude Code tool. In February 2025, an early version of Claude Code accidentally exposed its original code in a similar breach. The exposure showed how the tool worked behind the scenes as well as how it connected to Anthropic’s internal systems. Anthropic later removed the software and took the public code down.

This story was originally featured on Fortune.com
