
How Does Claude Code Work? A Source Map Leak Shows It in Detail

Tobias Jonas | 9 min read

Claude Code is currently the most popular AI coding agent – a CLI tool by Anthropic that helps developers write, refactor, and debug code. From the outside, it looks like a terminal interface that responds to instructions. What actually runs under the hood was unknown to anyone outside Anthropic until March 31, 2026.

Then security researcher Chaofan Shou found a 60 MB source map file inside the publicly available npm package for Claude Code. Inside: the complete TypeScript source code – 1,906 files, over 512,000 lines, including all internal systems, unreleased features, and security mechanisms.

It wasn’t the first time: back in February 2025, the same mistake had already occurred. Anthropic removed the affected version at the time. 13 months later, it happened again.

What we can now see is fascinating – and for IT leaders evaluating AI tools, extremely instructive.

How Did the Leak Happen? Source Maps Explained

When TypeScript applications are built for production, the build process generates source maps (.map files). These help developers during debugging: they serve as a bridge between the minified production code and the original source code.

The problem: source maps contain the complete original code as strings:

{
  "version": 3,
  "sources": ["../src/main.tsx", "../src/tools/BashTool.ts"],
  "sourcesContent": ["// The ENTIRE source code of each file"],
  "mappings": "AAAA,SAAS,OAAO..."
}

Claude Code uses Bun as its bundler – and Bun generates source maps by default. A missing *.map entry in .npmignore was all it took for the file cli.js.map containing the entire source code to be published as part of the npm package (version 2.1.88).

Within hours, the code was archived on multiple GitHub repositories and analyzed by the community. Importantly: no user data, model weights, or API keys were compromised. What was exposed is the architecture of the tool itself.
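A CI guard against this class of mistake can be very small. The following sketch (the function name and file list are illustrative, not from the leaked code) filters the list of files an npm package would publish – for example, the output of `npm pack --dry-run --json` – for source maps:

```typescript
// Minimal sketch: flag source maps in the list of files an npm package
// would publish (e.g. parsed from `npm pack --dry-run --json`).
// All names here are illustrative, not taken from the leaked code.
function findLeakedSourceMaps(publishedFiles: string[]): string[] {
  return publishedFiles.filter((path) => path.endsWith(".map"));
}

const files = ["cli.js", "cli.js.map", "package.json", "vendor/lib.css.map"];
const leaks = findLeakedSourceMaps(files);
if (leaks.length > 0) {
  // In a CI pipeline, fail the publish step here instead of just warning.
  console.error(`Refusing to publish: source maps found: ${leaks.join(", ")}`);
}
```

Run as a pre-publish check, a guard like this would have blocked both the February 2025 and the March 2026 incident.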

The Architecture: Far More Than a Chat Interface

What looks like a terminal chatbot from the outside is actually a 785 KB React-based terminal UI with its own rendering system (Ink), over 40 registered tools, a multi-agent orchestration system, and a memory system that works autonomously in the background.

40+ Tools with a Tiered Permission System

Claude Code doesn’t just generate text – it actively interacts with the user’s system. The exposed tool registry shows the full spectrum:

| Category | Tools | Function |
| --- | --- | --- |
| File System | FileReadTool, FileWriteTool, FileEditTool | Reading, writing, partial editing of files |
| Shell | BashTool, PowerShellTool | Command-line execution with optional sandboxing |
| Search | GlobTool, GrepTool, WebSearchTool, WebFetchTool | File and web search |
| Agents | AgentTool, SendMessageTool, TeamCreateTool | Launching sub-agents, inter-agent communication |
| Planning | EnterPlanModeTool, ExitPlanModeTool | Plan mode control |
| Tasks | TaskCreateTool, TaskUpdateTool, TaskStopTool | Background task management |
| Infrastructure | LSPTool, MCPTool, CronCreateTool, RemoteTriggerTool | Language Server, MCP servers, cron jobs, remote triggers |

Each tool has its own risk level (LOW, MEDIUM, HIGH) and goes through a multi-stage permission system. There are protected files (.gitconfig, .bashrc, .mcp.json), path traversal prevention, and an ML-based YOLO classifier that automatically decides on permissions.
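To make this concrete, here is a minimal sketch of such a tiered check, modeled loosely on what the article describes. The decision values, function name, and rules are assumptions for illustration – not the leaked implementation:

```typescript
// Illustrative sketch of a tiered permission check. The protected-file
// list and risk levels come from the leak report; everything else here
// is an assumption for illustration.
type RiskLevel = "LOW" | "MEDIUM" | "HIGH";
type Decision = "allow" | "ask-user" | "deny";

const PROTECTED_FILES = [".gitconfig", ".bashrc", ".mcp.json"];

function checkPermission(tool: { risk: RiskLevel }, targetPath?: string): Decision {
  if (targetPath !== undefined) {
    // Path traversal prevention and protected files are denied outright.
    if (targetPath.includes("..")) return "deny";
    if (PROTECTED_FILES.some((f) => targetPath.endsWith(f))) return "deny";
  }
  // Otherwise the risk level decides how much friction to add.
  switch (tool.risk) {
    case "LOW":
      return "allow";
    case "MEDIUM":
      return "ask-user";
    case "HIGH":
      return "ask-user"; // a real system might additionally require sandboxing
  }
}
```

The real system layers an ML classifier on top of static rules like these; the sketch only shows the rule-based floor.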

Multi-Agent Orchestration: Claude as Team Lead

Through the Coordinator Mode (CLAUDE_CODE_COORDINATOR_MODE=1), Claude Code transforms from a single agent into a coordinator that controls multiple worker agents in parallel:

  1. Research Phase – Workers investigate the codebase in parallel
  2. Synthesis Phase – The coordinator reads the results and creates specifications
  3. Implementation Phase – Workers implement the specifications
  4. Verification Phase – Workers test the changes

The workers communicate via <task-notification> XML messages and share a common working directory. The system prompt contains the explicit instruction: “Parallelism is your superpower. Launch independent workers concurrently whenever possible.”
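The four phases above can be sketched as a simple fan-out/fan-in loop. The worker interface and phase functions here are illustrative assumptions, not the leaked code:

```typescript
// Sketch of the four-phase coordinator flow described above.
// The worker function and message formats are illustrative assumptions.
type WorkerFn = (task: string) => Promise<string>;

async function coordinate(tasks: string[], worker: WorkerFn): Promise<string[]> {
  // 1. Research: independent workers run concurrently
  //    ("Parallelism is your superpower").
  const research = await Promise.all(tasks.map((t) => worker(`research: ${t}`)));
  // 2. Synthesis: the coordinator reads all results and writes specifications.
  const specs = research.map((r) => `spec derived from ${r}`);
  // 3. Implementation: fanned out in parallel again.
  const impl = await Promise.all(specs.map((s) => worker(`implement: ${s}`)));
  // 4. Verification: workers test the changes.
  return Promise.all(impl.map((i) => worker(`verify: ${i}`)));
}
```

Only the synthesis phase is sequential: it is the point where the coordinator alone holds the full picture.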

Beyond that, the system supports Agent Teams/Swarms with process-based team members in tmux/iTerm2 panes, team memory synchronization, and color coding for visual distinction.

The “Dream” System: Memory Consolidation in the Background

One of the most remarkable systems: autoDream – a background process that runs as a forked sub-agent and is explicitly referred to as a “dream.”

The dream process is triggered by a three-gate system:

  1. At least 24 hours since the last dream
  2. At least 5 sessions since the last dream
  3. Successful acquisition of a consolidation lock (prevents parallel dreams)

When all three conditions are met, the process goes through four phases:

  • Orient – Inventory of memory files
  • Gather – Collect new information from daily logs and transcripts
  • Consolidate – Write memory files, convert relative dates to absolute ones, delete contradictory facts
  • Prune – Keep MEMORY.md under 200 lines and ~25 KB

The dream agent has read-only shell access – it can analyze the project but cannot modify it. The prompt reads: “You are performing a dream – a reflective pass over your memory files.”
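The three-gate trigger reduces to a short predicate. The thresholds (24 hours, 5 sessions) are the ones reported from the leak; the field names and lock interface are illustrative assumptions:

```typescript
// Sketch of the three-gate trigger for the "dream" consolidation pass.
// Thresholds are from the leak report; names are illustrative.
interface DreamState {
  hoursSinceLastDream: number;
  sessionsSinceLastDream: number;
  acquireConsolidationLock: () => boolean; // prevents parallel dreams
}

function shouldDream(state: DreamState): boolean {
  if (state.hoursSinceLastDream < 24) return false;
  if (state.sessionsSinceLastDream < 5) return false;
  // The lock is checked last, so it is only taken when the other gates pass.
  return state.acquireConsolidationLock();
}
```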

KAIROS: The “Always-On” Assistant

Behind the feature flag KAIROS lies a persistent assistant that does not wait for user input but acts proactively. KAIROS maintains append-only daily logs, receives periodic <tick> prompts, and autonomously decides whether to act or wait.

The system has a 15-second blocking budget: any proactive action that would block the user for more than 15 seconds is automatically moved to the background.
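The routing decision itself is trivial – the interesting part is that the budget is enforced at all. A minimal sketch (names are illustrative assumptions):

```typescript
// Sketch of the 15-second blocking budget: any proactive action whose
// estimated foreground cost exceeds the budget is moved to the background.
// Names are illustrative assumptions, not the leaked code.
const BLOCKING_BUDGET_MS = 15_000;

function routeProactiveAction(estimatedBlockingMs: number): "foreground" | "background" {
  return estimatedBlockingMs > BLOCKING_BUDGET_MS ? "background" : "foreground";
}
```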

KAIROS has exclusive tools that are not available to regular Claude Code:

  • SendUserFile – Send files directly to the user
  • PushNotification – Trigger push notifications
  • SubscribePR – Monitor pull request activity

ULTRAPLAN: 30 Minutes of Planning Time in the Cloud

For particularly complex tasks, Claude Code can start a remote cloud session via ULTRAPLAN that runs on Opus 4.6 and gets up to 30 minutes for planning. The local terminal shows a polling status during this time (every 3 seconds), while a browser-based interface displays the planning process live and allows approval or rejection.

Undercover Mode: When the AI Hides Its Identity

Particularly revealing for the question of how AI companies use AI in everyday work: Undercover Mode is automatically activated when Anthropic employees (detected via USER_TYPE === 'ant') work in public repositories.

The system prompt is then extended with the following instruction:

You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.

Forbidden are: internal model codenames (Capybara, Tengu), unreleased version numbers, internal tooling references, the phrase “Claude Code,” and any AI attribution.

There is no force-off switch – the logic reads: “If we’re not confident we’re in an internal repo, we stay undercover.”

The Full Extent: 32 Build Flags, 120+ Secret Variables

A detailed analysis on ccleaks.com has systematically cataloged the disclosure:

  • 32 build-time feature flags – from KAIROS to BUDDY to VOICE_MODE and CHICAGO_MCP (Computer Use)
  • 26 hidden slash commands – including /dream (memory consolidation), /ultraplan, /teleport, /good-claude (Easter egg)
  • 120+ secret environment variables – debug profilers, runtime overrides, internal API keys
  • 10+ GrowthBook feature gates – for staged rollouts under the tengu_* namespace

Particularly concerning from a security perspective are the safety bypass variables:

  • DISABLE_COMMAND_INJECTION_CHECK – disables injection protection (marked as “DANGEROUS”)
  • CLAUDE_CODE_ABLATION_BASELINE – disables all safety features
  • DISABLE_INTERLEAVED_THINKING – turns off interleaved thinking

These variables are intended for internal testing and are not accessible during normal operation. However, their disclosure documents the architecture of the security layers – and how they can be bypassed.

Internal Codenames: What the Leak Reveals About Anthropic’s Roadmap

The source code is peppered with animal codenames:

  • Tengu – Claude Code’s internal project name (found hundreds of times as a prefix for feature flags and analytics events)
  • Fennec – a former Opus codename (visible in the migration migrateFennecToOpus)
  • Capybara – another internal codename
  • Penguin Mode – the internal name for “Fast Mode,” complete with the API endpoint api/claude_code_penguin_mode and kill switch tengu_penguins_off
  • Chicago – the codename for the computer use implementation based on @ant/computer-use-mcp

Migrations in the code trace the model history: Sonnet with 1M context became Sonnet 4.5, then Sonnet 4.6. Pro users were at some point reset to Opus as the default.

What IT Leaders Should Take Away from This

1. AI Coding Tools Are No Longer Simple Text Generators

Claude Code has shell access, reads and writes files, spawns sub-agents, runs web searches, and can create cron jobs. The tool registry spans over 40 tools. When such a tool runs in your development environment, it is part of your attack surface – and should be treated accordingly.

Recommendation: Define clear policies for the use of AI coding tools. Review their permission models and ensure your developers understand what system-level access these tools possess.

2. Supply Chain Security Remains an Unsolved Problem

Even companies with billion-dollar valuations and world-class engineering teams forget a single line in .npmignore. The identical mistake occurred twice within 13 months. The npm ecosystem distributes packages without deep content inspection.

Recommendation: Establish automated checks in your CI/CD pipeline that scan published artifacts for source maps, debug configurations, and internal documentation.

3. Do Not Rely on Security by Obscurity

Anthropic built an entire system with “Undercover Mode” to protect internal information from accidental disclosure. The irony: a single .npmignore entry would have achieved more than the entire Undercover system. Design your security architecture as if the source code were public – following Kerckhoffs’s principle.

4. Do You Know What Your AI Tools Actually Do?

The leak shows: features like proactive background activity (KAIROS), automatic memory consolidation (Dream), autonomous agent swarms, and covert identity (Undercover Mode) exist in the code – but were never publicly communicated. The gap between what AI vendors communicate and what their tools can do is growing.

Recommendation: Document which AI tools are deployed in your organization and what permissions they have. In the context of the EU AI Act, traceability of deployed AI systems is increasingly becoming a legal obligation.

5. The Complexity of Modern AI Tools Requires New Evaluation Standards

512,000 lines of code, 40+ tools, multi-agent orchestration, feature flag infrastructure, remote sessions – a well-founded risk assessment is virtually impossible without access to the source code or a thorough vendor audit. Traditional vendor questionnaires are no longer sufficient.

Conclusion

The Claude Code leak is not a scandal – it is an X-ray. For the first time, we can study the complete architecture of a production-ready AI coding agent. What we see is impressive: sophisticated multi-agent systems, an AI memory that dreams, proactive assistants, and a complexity that goes far beyond what the public documentation would suggest.

At the same time, the incident shows that the biggest security risks do not lie in sophisticated attacks, but in forgotten configuration lines. For IT leaders, this is a clear signal: in a world where AI agents have shell access, internet connectivity, and the ability to self-organize, trust is good – but verifiable transparency is better.

Written by

Tobias Jonas
Co-CEO, M.Sc.

Tobias Jonas, M.Sc., is co-founder and Co-CEO of innFactory AI Consulting GmbH. He is a leading innovator in artificial intelligence and cloud computing. As co-founder of innFactory GmbH, he has successfully led hundreds of AI and cloud projects and established the company as a key player in the German IT sector. Tobias keeps a close eye on emerging trends: he recognized the potential of AI agents early and hosted one of the first meetups on the topic in Germany. He also drew attention to the MCP protocol within the first month of its release and informed his followers about the Agentic AI Foundation on its founding day. Alongside his executive roles, Tobias Jonas is active in various professional and business associations, including the KI Bundesverband and the digital committee of the IHK for Munich and Upper Bavaria, and leads hands-on AI and cloud projects at the Rosenheim Technical University of Applied Sciences. As a keynote speaker, he shares his expertise on AI and makes complex technological concepts accessible.

LinkedIn