
Anthropic Leaks Claude Code Source via npm Packaging Error

Saturday 4 April 2026 | Anthropic | Secure AI Brain

On 31 March 2026, Anthropic accidentally exposed the full source code of Claude Code through a 59.8 MB source map file bundled in npm package version 2.1.88. The leak revealed 513,000 lines of unobfuscated TypeScript across 1,906 files, including 44 unreleased feature flags and the complete agent orchestration logic. Within hours, the code was mirrored to GitHub and forked tens of thousands of times.

Operator Insight

The Claude Code leak is not just an Anthropic problem. It is a warning shot for every organisation using AI coding tools in developer environments. These agents run with broad access to local files, credentials, and terminal sessions. When the source code that governs their behaviour is public, attackers can reverse-engineer the exact hooks and handoffs that make exploitation possible. If your team uses any AI coding assistant, your security posture just changed whether you updated anything or not.

30-Second Summary

Anthropic, the company behind the Claude AI assistant, accidentally published the complete source code for its Claude Code developer tool through a packaging error in its npm release. The 59.8 MB source map file contained 513,000 lines of readable TypeScript, revealing unreleased features, the full agent orchestration logic, and internal security architecture. The code was downloaded, mirrored, and forked within hours. Anthropic described the incident as human error rather than a breach, but the exposed codebase gives attackers a detailed blueprint for targeting one of the most widely used AI coding agents in enterprise environments.

At a Glance

  • Topic: AI Security
  • Company: Anthropic
  • Date: 31 March 2026
  • Announcement: Full source code of Claude Code exposed via npm packaging error
  • What Changed: The complete client-side agent harness, including unreleased features and orchestration logic, is now publicly available
  • Why It Matters: Attackers have a detailed map of how Claude Code operates, making targeted exploits significantly easier to craft
  • Who Should Care: Any organisation with developers using Claude Code, and security teams responsible for AI tool governance

Key Facts

  • Company: Anthropic
  • Incident Date: 31 March 2026
  • What Happened: Source map file (.map) accidentally bundled in npm package @anthropic-ai/claude-code version 2.1.88
  • Scale of Leak: 59.8 MB file containing 513,000 lines of TypeScript across 1,906 files
  • Who It Affects: Enterprise developers using Claude Code, security teams, and competitors
  • Primary Sources: The Hacker News, VentureBeat, Axios, Fortune, The Register

What Happened

On 31 March 2026, Anthropic published version 2.1.88 of its Claude Code npm package with a critical oversight: a 59.8 MB JavaScript source map file was included in the release. Source maps are developer tools that map minified production code back to its readable source. This particular file contained the complete, unobfuscated TypeScript codebase for Claude Code, totalling approximately 513,000 lines across 1,906 files.

The root cause was a build configuration error. Bun, the JavaScript runtime used to build Claude Code, generates full source maps by default. Neither the .npmignore file nor the "files" field in package.json excluded the .map output. The source map also referenced a ZIP archive of the original TypeScript sources hosted on Anthropic's own Cloudflare R2 storage bucket, which was publicly accessible.
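This failure mode is straightforward to guard against in a release pipeline: inspect what npm would actually publish and block the release if a source map is present. A minimal sketch of such a check, with the tarball listing simulated here for illustration (a real pipeline would generate it with npm pack --dry-run, which prints the files npm would ship):

```shell
# Pre-publish guard: refuse to ship if a source map would be included.
# The listing below is a simulated stand-in for `npm pack --dry-run` output.
cd "$(mktemp -d)"
cat > pack-listing.txt <<'EOF'
package.json
cli.js
cli.js.map
EOF

# Block the release if any .map file appears in the package contents
if grep -q '\.map$' pack-listing.txt; then
  echo "BLOCKED: source map in package contents" >&2
fi
```

Belt and braces on the config side: an allowlist in the package.json "files" field publishes only what is explicitly named, and .npmignore can exclude *.map output from the build step.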

Within hours, the codebase was downloaded from Anthropic's infrastructure, mirrored to GitHub, and forked tens of thousands of times. The leak exposed 44 feature flags for capabilities that are fully built but not yet shipped, the complete orchestration logic for Hooks and MCP (Model Context Protocol) servers, and the internal architecture of the agent harness that governs how Claude Code interacts with developer environments.

This was Anthropic's second security lapse in a week. Days earlier, Fortune reported that details of an unreleased model codenamed Mythos and an exclusive CEO event were found in an unsecured public database.

Why It Matters

  • The exposed orchestration logic allows attackers to design malicious repositories specifically tailored to exploit Claude Code's Hooks and MCP server interactions
  • Claude Code runs directly inside developer environments with access to local files, credentials, and terminal sessions, making it a high-value target
  • The leak included a complete unreleased feature roadmap, handing competitors a detailed blueprint for Anthropic's product strategy
  • AI coding assistant commits have been shown to leak secrets at a 3.2 percent rate versus the 1.5 percent baseline across all public GitHub commits, compounding the risk
  • The incident coincided with a separate supply chain attack on the Axios npm package the same day, creating a window in which developers updating packages were exposed to multiple threats at once
  • For an organisation that positions itself as the "safety-first" AI lab, the operational security failure undermines a core brand promise

The David and Goliath View

This incident crystallises a risk that many operators have not yet accounted for: AI coding tools are infrastructure, not accessories. They run with the same level of access as senior developers. They read files, execute commands, and interact with APIs. When the source code governing their behaviour is publicly available, the security calculus changes fundamentally.

The practical concern is not abstract. With full visibility into how Claude Code handles Hooks, MCP servers, and tool permissions, a threat actor can build a repository that looks innocuous but triggers specific exploitation paths when Claude Code processes it. This is not a theoretical vulnerability. It is an informed, targeted attack vector that did not exist a week ago.

For lean organisations, the immediate action is not to stop using AI coding tools. The productivity gains are too significant to abandon. The action is to treat these tools with the same governance rigour you apply to any other piece of infrastructure that touches your codebase and credentials. Audit permissions, pin versions, restrict access to production secrets, and ensure your team knows that opening an untrusted repository with an AI coding agent active is now a concrete security risk, not a hypothetical one.
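Version pinning, for example, is a one-line change. A hedged sketch, assuming Claude Code is installed per-project rather than globally; the version string is a placeholder for whatever patched release you have vetted, not a confirmed release number:

```shell
# Pin the agent to an exact, vetted version instead of a semver range.
# A caret range like "^2.1.0" would have pulled the bad 2.1.88 build
# automatically; an exact pin makes every update an explicit, reviewable change.
cd "$(mktemp -d)"
cat > package.json <<'EOF'
{
  "devDependencies": {
    "@anthropic-ai/claude-code": "2.1.90"
  }
}
EOF
grep '"@anthropic-ai/claude-code"' package.json
```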

Where This Fits in the AI Stack

Secure AI Brain: This incident is a direct case study in AI tool security governance. Any organisation deploying AI coding assistants needs policies covering version management, permission scoping, credential isolation, and supply chain verification. The Claude Code leak demonstrates that even market-leading AI vendors can introduce critical exposure through operational errors, making independent security controls essential rather than optional.

Questions Operators Are Asking

Is our data exposed if our team uses Claude Code? Anthropic confirmed that no customer data or credentials were included in the leak. The exposed code is the client-side agent harness, not server-side infrastructure or customer databases. However, the leaked orchestration logic could be used to craft attacks that target the client, so the risk is indirect rather than direct.

Should we stop using Claude Code? Not necessarily. The leak exposed the tool's architecture, not a live vulnerability in the current version. The immediate priority is ensuring your team is running a patched version (post-2.1.88) and that developer environments are configured with appropriate access restrictions. Review your AI tool permissions and credential management practices.

What should our security team do right now? VentureBeat published five recommended actions: audit Claude Code versions across your organisation, review tool permissions and filesystem access, assess credential exposure in developer environments, update your vendor risk register to include AI coding tools, and brief developers on the risk of opening untrusted repositories with AI agents active.
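The first of those actions, auditing versions across the organisation, can be scripted once you have collected a version string from each machine (for example via npm ls -g @anthropic-ai/claude-code on each host). A toy sketch against fabricated sample data:

```shell
# Flag hosts still on the affected 2.1.88 release. fleet-versions.txt
# stands in for version strings gathered from each developer machine.
cd "$(mktemp -d)"
cat > fleet-versions.txt <<'EOF'
dev-laptop-01 2.1.88
dev-laptop-02 2.1.90
ci-runner-01 2.1.88
EOF

# Prints the two hosts on 2.1.88, then "2 host(s) affected"
awk '$2 == "2.1.88" { print $1 " needs upgrade"; n++ }
     END { print n+0 " host(s) affected" }' fleet-versions.txt
```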

Could this happen with other AI coding tools? Yes. Any AI coding tool distributed via package managers is subject to similar supply chain risks. The underlying issue, a build system generating source maps that were not excluded from the published package, is a common configuration oversight. If you use other AI coding assistants, verify their packaging and distribution practices.

Is Anthropic still a trustworthy vendor? This is the second security lapse in a week, following the Mythos database exposure reported by Fortune. Anthropic's response was transparent and the company acted quickly to contain the leak. The question for operators is not whether to trust Anthropic in general, but whether your organisation has independent controls that do not rely solely on vendor security practices.

Citable Summary

What happened: On 31 March 2026, Anthropic accidentally exposed the full source code of Claude Code (513,000 lines of TypeScript) through a packaging error in npm version 2.1.88, revealing unreleased features, agent orchestration logic, and internal architecture.

Why it matters: The leaked code gives attackers a detailed map of how Claude Code interacts with developer environments, enabling targeted exploitation of an AI tool that runs with broad access to local files, credentials, and terminal sessions.

David and Goliath view: AI coding tools are infrastructure, not accessories. This leak is a wake-up call for operators to apply the same security governance to AI agents that they apply to any other system touching their codebase. Audit permissions, pin versions, and restrict credential access immediately.

Offer relevance:

  • Secure AI Brain: The Claude Code leak is a direct case study in why AI tool governance, version management, permission scoping, and supply chain verification are essential components of any organisation's AI security framework

Why This Matters for Operators

  • Audit your team's Claude Code version immediately. If anyone installed or updated via npm around 31 March, verify they are running a patched version and check for any unusual processes or credential exposure.

  • Review your AI tool permissions. Claude Code and similar agents often have broad filesystem and terminal access. Apply the principle of least privilege and restrict access to production credentials in developer environments.

  • Treat AI coding assistants as part of your attack surface. Add them to your vendor risk register and ensure you have visibility into what versions your team is running, what data they can access, and how updates are applied.

  • Monitor for targeted attacks. The leaked orchestration logic for Hooks and MCP servers means attackers can now craft malicious repositories designed to exploit Claude Code specifically. Warn your developers about untrusted repos.
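On the permissions point: Claude Code reads project-level settings from a .claude/settings.json file, including permission rules. The schema evolves between versions, so treat the deny-list below as an illustrative sketch and check the current documentation before relying on it:

```shell
# Illustrative project-level deny rules: block the agent from reading
# secrets files and from running outbound curl. The schema is a hedged
# sketch of Claude Code's settings format and may differ by version.
cd "$(mktemp -d)"
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(~/.aws/**)",
      "Bash(curl:*)"
    ]
  }
}
EOF
grep -q '"deny"' .claude/settings.json && echo "deny rules written"
```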

Related Intelligence

Related Signals

  • [High] Anthropic launches Claude Agent SDK

    Standardised framework for deploying production AI agents with built-in tool orchestration and safety guardrails.


Apply This to Your Business

Want to see what this means for your team?

Tell us a little about your business and we will map the specific opportunity for your sector and team size.

No sales pitch. We will review your details and follow up within 24 hours.