Anthropic Claude Code Source Leak: What Happened and Why It Matters

In late March 2026, Anthropic — the AI safety–focused company behind the Claude family of models — suffered two consecutive security lapses that exposed internal files and the full source code of its flagship developer tool, Claude Code. Here’s a summary of what happened, how the company responded, and why it matters.

Incident 1: Internal Files Exposed (March 27)

On March 27, Fortune reported that Anthropic had accidentally made nearly 3,000 internal files publicly accessible. Among the exposed documents was a draft blog post describing a powerful new AI model the company had not yet announced — reportedly referred to internally as “Mythos.”

The exact mechanism of the exposure was not disclosed in detail, but the incident raised immediate questions about Anthropic’s internal access controls and release procedures.

Incident 2: Claude Code Source Code Leak (March 31)

Just four days later, on March 31, a far more significant leak occurred. When Anthropic pushed Claude Code version 2.1.88 to npm, the package accidentally included source map files that exposed:

  • ~2,000 source code files
  • 512,000+ lines of code
  • The complete architectural blueprint of Claude Code
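Why do stray `.map` files expose this much? A standard (version 3) source map can embed every original input file verbatim in its `sourcesContent` array, so shipping the map ships the source. The sketch below uses a made-up map object, not Anthropic's actual files, to show how trivially the originals can be recovered:

```javascript
// Illustrative source map object (hypothetical file names and contents).
// Real bundler output follows the same version-3 shape.
const exampleMap = {
  version: 3,
  file: "cli.min.js",
  sources: ["src/cli.ts", "src/tools/bash.ts"],
  // Each entry is the full, unminified text of the matching source file.
  sourcesContent: [
    "export function main() { /* full original TypeScript */ }",
    "export function runBash(cmd) { /* full original TypeScript */ }",
  ],
  mappings: "AAAA",
};

// Anyone who downloads the package can dump the originals directly:
for (let i = 0; i < exampleMap.sources.length; i++) {
  console.log(`--- ${exampleMap.sources[i]} ---`);
  console.log(exampleMap.sourcesContent[i]);
}
```

No deobfuscation or reverse engineering is needed; the leak is a plain-text copy of the codebase riding along in metadata.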

Security researcher Chaofan Shou discovered the leak almost immediately and posted about it on X. Within hours, developers had published detailed analyses of the exposed code. One described the product as “a production-grade developer experience, not just a wrapper around an API.”

Anthropic’s official response: “This was a release packaging issue caused by human error, not a security breach.”
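Anthropic has not described its release pipeline, but this class of packaging slip is usually catchable with a simple pre-publish gate. A minimal sketch, using a hypothetical file manifest rather than any real package contents:

```javascript
// Sketch of a pre-publish guard: refuse to release if any source map
// files would ship with the package. File list is hypothetical.
const publishedFiles = [
  "dist/cli.min.js",
  "dist/cli.min.js.map", // the kind of stray artifact that leaked
  "package.json",
];

const strayMaps = publishedFiles.filter((f) => f.endsWith(".map"));
if (strayMaps.length > 0) {
  // A real CI check would exit non-zero here to block the publish.
  console.error(`Refusing to publish: ${strayMaps.join(", ")}`);
}
```

In practice the same goal is served by an explicit allowlist (the `files` field in `package.json`) or by inspecting `npm pack --dry-run` output before publishing.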

What Was Leaked — and What Wasn’t

It’s important to note what the leak did and did not include:

Leaked:

  • Claude Code CLI application source code
  • System prompt / behavioral instructions
  • Tool integration architecture
  • Internal engineering patterns

Not leaked:

  • Claude AI model weights
  • Training data or datasets
  • API keys or user credentials
  • Core model architecture (transformer details)

In other words, the leak revealed the software scaffolding — how Anthropic instructs the model to behave, what tools it can access, and where its operational limits are set — rather than the AI model itself.

The DMCA Takedown Fiasco (April 1)

Anthropic’s attempt to contain the leak made things worse. The company filed a DMCA takedown notice with GitHub, but the request was overly broad, resulting in approximately 8,100 repositories being taken down — including legitimate forks of Anthropic’s own publicly released Claude Code repository.

According to GitHub’s public DMCA records, the original notice swept up far more than the offending code. Developers whose unrelated projects were caught in the dragnet were understandably frustrated.

Anthropic’s head of Claude Code, Boris Cherny, acknowledged the mistake on X and retracted the bulk of the notices, ultimately limiting the takedown to 1 repository and 96 forks that actually contained the leaked source code.

An Anthropic spokesperson told TechCrunch: “The repo named in the notice was part of a fork network connected to our own public Claude Code repo, so the takedown reached more repositories than intended. We retracted the notice for everything except the one repo we named, and GitHub has restored access to the affected forks.”

Why It Matters

1. Competitive Exposure

Claude Code has become one of Anthropic’s most strategically important products. According to The Wall Street Journal, OpenAI reportedly shelved its video generation tool Sora — just six months after launch — partly to refocus resources on developer tools in response to Claude Code’s growing momentum. The leaked architecture gives competitors a detailed look at the engineering decisions behind this product.

2. IPO Timing

Anthropic is reportedly preparing for an IPO. Two security incidents in a single week, especially at a company that markets itself as the "responsible AI" leader, create a narrative problem. As TechCrunch noted: "Leaking your source code as a public company? You better believe there's a shareholder lawsuit coming."

3. Trust & Safety Credibility

Anthropic has built its brand around being the careful AI company — publishing extensive AI safety research, employing top researchers, and even battling the Department of Defense over principled concerns. Back-to-back accidental exposures undercut that narrative, regardless of whether the leaks constituted a true “security breach.”

Timeline Summary

  • March 27: Fortune reports ~3,000 internal files exposed, including unreleased model details
  • March 31: Claude Code v2.1.88 ships with full source maps; 512,000+ lines of code leaked via npm
  • March 31: Security researcher Chaofan Shou posts discovery on X; developers begin analysis
  • March 31: Anthropic files DMCA takedown with GitHub
  • April 1: ~8,100 GitHub repos taken down (most unrelated); developer backlash
  • April 1: Anthropic retracts bulk of takedown, limits to 1 repo + 96 forks

Disclaimer: This article is for informational purposes only. All information is sourced from publicly available reports. ECONPLEX is not affiliated with Anthropic.

