Thursday, April 2, 2026


Claude Code Leak Explained: What Anthropic Accidentally Leaked

Anthropic appears to have inadvertently revealed how one of its most important artificial intelligence products works, after a large amount of internal code related to its agentic AI tool Claude Code was published.

Anthropic accidentally exposed the internal systems of its AI tool Claude Code (Bloomberg)

The issue arose when version 2.1.88 of the @anthropic-ai/claude-code package on npm shipped with a 59.8 MB JavaScript source map file (.map), normally used only for debugging. The version went live publicly and exposed sensitive internal code.

How the leak spreads

Chaofan Shou (@Fried_rice), an intern at Solayer Labs, shared the findings on X at 4:23 am ET. The post included a download link and quickly attracted attention.


Within hours, the massive 512,000-line TypeScript code base was copied to GitHub and studied by thousands of developers.

For Anthropic, this is more than just a minor mistake. With annualized revenue reportedly at $19 billion as of March 2026, the breach is seen as a significant loss of valuable intellectual property.

What is the content of the leak?

According to VentureBeat, one of the biggest revelations in the leak is how Anthropic is tackling a major AI problem called “contextual entropy,” where the AI gets confused during long sessions.

Developers have discovered a three-tier memory system they describe as a “self-healing memory” system.

  • A file named MEMORY.md acts as a small index that is always loaded
  • It stores no data itself, only pointers to where the data lives
  • The actual information is stored in separate files and is only loaded when needed
  • Old conversations are not completely reloaded, but instead searched using keywords
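The three-tier pattern described above can be sketched roughly as follows. This is a speculative illustration of the idea, not Anthropic's actual code; the class and method names here are invented.

```typescript
// Hypothetical sketch of a "self-healing memory" layout: a small, always-loaded
// index holds only pointers, real content lives in separate stores loaded on
// demand, and old material is found by keyword search rather than reloaded.

interface MemoryPointer { topic: string; file: string }

class MemoryIndex {
  private pointers: MemoryPointer[] = [];    // tier 1: tiny index, always loaded
  private store = new Map<string, string>(); // tier 2: content, loaded on demand

  // "Strict write rule": only index an entry after the write itself succeeds.
  remember(topic: string, file: string, content: string): void {
    this.store.set(file, content);           // write first...
    this.pointers.push({ topic, file });     // ...then record the pointer
  }

  // Old conversations are keyword-searched, never reloaded wholesale.
  recall(keyword: string): string[] {
    return this.pointers
      .filter(p => p.topic.includes(keyword))
      .map(p => this.store.get(p.file) ?? "");
  }
}

const mem = new MemoryIndex();
mem.remember("build errors", "memory/build.md", "use node 20");
console.log(mem.recall("build")); // ["use node 20"]
```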

This setup follows “strict write rules”: the system only updates memory after an operation succeeds, which keeps failed operations from polluting stored state.

The system also treats its own memory as “cues,” meaning it verifies information rather than blindly trusting it.
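Treating memory as a cue rather than a fact means checking a remembered value against the live source before acting on it. A minimal sketch of that idea, with an invented helper name:

```typescript
// Hypothetical verify-before-trust check: memory claims a file exists at this
// path, so confirm against the real filesystem instead of trusting the hint.
import { existsSync } from "node:fs";

function verifyFileHint(rememberedPath: string): string | null {
  return existsSync(rememberedPath) ? rememberedPath : null;
}

console.log(verifyFileHint("/definitely/not/here.md")); // null (stale memory)
```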

Also read: Pentagon officials see little chance of restarting human-AI trade

The leak also revealed a feature called KAIROS that allows Claude Code to run as a background agent. Through a process called autoDream, the system can reorganize and improve the user’s memory while they are inactive, so work resumes more efficiently.
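The idle-time reorganization idea can be illustrated with a trivial sketch: after enough inactivity, compact the stored entries so the next session starts leaner. The function and its deduplication step are stand-ins, not the leaked implementation.

```typescript
// Speculative sketch of an "organize memory while idle" pattern: if the user
// has been inactive past a threshold, compact memory (here, just deduplicate).
function compactWhenIdle(
  entries: string[],
  lastActivityMs: number,
  nowMs: number,
  idleThresholdMs: number,
): string[] {
  if (nowMs - lastActivityMs < idleThresholdMs) return entries; // still active
  return [...new Set(entries)]; // stand-in for real reorganization
}

console.log(compactWhenIdle(["a", "a", "b"], 0, 60_000, 30_000)); // ["a", "b"]
```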

Internal model details were also revealed, including codenames such as Capybara, Fennec and Numbat. Data shows that even advanced models still face challenges, with some versions experiencing higher false positive rates than earlier versions.

Another feature, “Undercover Mode,” shows that AI can contribute to public projects without revealing its identity. The system includes instructions such as “You are operating UNDERCOVER… Your commit message… must not contain any internal Anthropic information. Do not reveal your identity.”

User security risks

The leak also raised security concerns. With the system structure now public, attackers may try to exploit weaknesses. A separate supply chain attack involving the axios npm package during the same time period increased the risk to users who installed the update on March 31, 2026.

What can users do now?

Anthropic recommends switching to its native installer and avoiding the affected npm versions. Users are also advised to follow a zero-trust approach, check their systems, and rotate API keys if needed.
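As a first step, users can check whether their installed package matches the affected npm release before reinstalling. A minimal sketch of that check, assuming 2.1.88 is the version to avoid (per the report above):

```typescript
// Simple version check against the affected npm release reported above.
// The installed version itself would come from e.g. `npm ls @anthropic-ai/claude-code`.
const AFFECTED = new Set(["2.1.88"]);

function isAffected(version: string): boolean {
  return AFFECTED.has(version.trim());
}

console.log(isAffected("2.1.88")); // true  -> reinstall via native installer
console.log(isAffected("2.1.90")); // false
```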
