TITLE: Anthropic Leaks Claude Code Source via npm Packaging Error
DATE: 2026-04-04
COMPANY: Anthropic
TOPIC: AI Security

SUMMARY: On 31 March 2026, Anthropic accidentally exposed the full source code of Claude Code through a 59.8 MB source map file bundled in npm package version 2.1.88. The leak revealed 513,000 lines of unobfuscated TypeScript across 1,906 files, including 44 unreleased feature flags and the complete agent orchestration logic. Within hours, the code was mirrored to GitHub and forked tens of thousands of times.

WHAT CHANGED: On 31 March 2026, Anthropic published version 2.1.88 of its Claude Code npm package with a critical oversight: a 59.8 MB JavaScript source map file was included in the release. Source maps are developer tools that translate minified production code back into readable source. This particular file contained the complete, unobfuscated TypeScript codebase for Claude Code, totalling approximately 513,000 lines across 1,906 files.

The root cause was a build configuration error. Bun, the JavaScript runtime used to build Claude Code, generates full source maps by default. Neither the `.npmignore` file nor the `files` field in `package.json` excluded the `.map` output. The source map also referenced a ZIP archive of the original TypeScript sources hosted on Anthropic's own Cloudflare R2 storage bucket, which was publicly accessible.

Within hours, the codebase was downloaded from Anthropic's infrastructure, mirrored to GitHub, and forked tens of thousands of times. The leak exposed 44 feature flags for capabilities that are fully built but not yet shipped, the complete orchestration logic for Hooks and MCP (Model Context Protocol) servers, and the internal architecture of the agent harness that governs how Claude Code interacts with developer environments.

This was Anthropic's second security lapse in a week.
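One way to prevent this class of leak is an explicit publish allow-list, so stray build artefacts such as `.map` files never reach the registry. A minimal sketch of such a `package.json` (the package name, version, and paths are illustrative, not Anthropic's actual configuration):

```json
{
  "name": "@example/cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "files": [
    "dist/**/*.js",
    "!dist/**/*.map"
  ]
}
```

With a `files` allow-list, npm publishes only the matched paths, and the `!` pattern explicitly excludes source maps even if they are emitted. Running `npm pack --dry-run` before publishing lists the exact tarball contents, which makes a 59.8 MB stowaway hard to miss.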
Days earlier, Fortune reported that details of an unreleased model codenamed Mythos and an exclusive CEO event were found in an unsecured public database.

WHY IT MATTERS:
- The exposed orchestration logic allows attackers to design malicious repositories specifically tailored to exploit Claude Code's Hooks and MCP server interactions.
- Claude Code runs directly inside developer environments with access to local files, credentials, and terminal sessions, making it a high-value target.
- The leak included a complete unreleased feature roadmap, handing competitors a detailed blueprint for Anthropic's product strategy.
- AI coding assistant commits have been shown to leak secrets at a 3.2 percent rate versus the 1.5 percent baseline across all public GitHub commits, compounding the risk.
- The incident coincided with a separate malicious Axios npm supply chain attack on the same day, creating a window where developers updating packages were exposed to multiple threats.
- For an organisation that positions itself as the "safety-first" AI lab, the operational security failure undermines a core brand promise.

DAVID & GOLIATH ANALYSIS: This incident crystallises a risk that many operators have not yet accounted for: AI coding tools are infrastructure, not accessories. They run with the same level of access as senior developers. They read files, execute commands, and interact with APIs. When the source code governing their behaviour is publicly available, the security calculus changes fundamentally.

The practical concern is not abstract. With full visibility into how Claude Code handles Hooks, MCP servers, and tool permissions, a threat actor can build a repository that looks innocuous but triggers specific exploitation paths when Claude Code processes it. This is not a theoretical vulnerability. It is an informed, targeted attack vector that did not exist a week ago.

For lean organisations, the immediate action is not to stop using AI coding tools.
The productivity gains are too significant to abandon. The action is to treat these tools with the same governance rigour you apply to any other piece of infrastructure that touches your codebase and credentials. Audit permissions, pin versions, restrict access to production secrets, and ensure your team knows that opening an untrusted repository with an AI coding agent active is now a concrete security risk, not a hypothetical one.

RELEVANT SYSTEMS: Secure AI Brain
SOURCE URL: https://davidandgoliath.ai/daily-ai-briefing/anthropic-claude-code-source-leak-npm-security
FEED URL: https://davidandgoliath.ai/daily-ai-briefing/feed

---
Published by David & Goliath | https://davidandgoliath.ai
Daily AI Briefing: one AI development per day, decoded for business operators. This is a structured companion file optimised for LLM retrieval and citation.