We Read Claude Code's Leaked Source Code So You Don't Have To
512,000 lines of Claude Code accidentally went public. We checked every viral claim against the actual code - here's what was real, overstated, or invented.

In this article
Introduction and Disclaimer

On March 31, 2026, Anthropic accidentally exposed ~500,000 lines of Claude Code through a misconfigured source map in an npm release of @anthropic-ai/claude-code. Within hours, the internet filled in the gaps with claims ranging from accurate to completely fabricated. We reviewed the available source files and compared the most viral claims against the code that was actually accessible, including the snapshot shared at https://github.com/Ahmad-progr/claude-leaked-files. That repository does not appear to contain the full source, so some conclusions remain partial. Where evidence is incomplete, we treat speculation as speculation.

What Was Actually Exposed?
This was not a traditional breach of Anthropic’s servers; no user data or credentials were exposed. What became public was something different: a large portion of the internal blueprint for how Claude Code is structured, how it operates, and what kinds of capabilities appear to be in development. Whether intentional or not, we got a rare look at the architecture, workflow design, and hidden feature surface of a major AI coding product.
What Is Claude Code?
Before we get into the leak itself, we need to get our definitions right.
Claude Code is an AI tool you run in your computer's terminal - think of it as an AI assistant that lives inside your command line, can read and edit your files, run programs, search the web, and manage your entire codebase through plain English instructions.
Under the hood, it works in a loop:
You type something
↓
Claude thinks about it
↓
Claude picks a tool to use
(edit a file / run a command / search the web / ask you a question)
↓
The tool runs and Claude sees the result
↓
Claude thinks again
↓
Repeat until done → gives you an answer

That loop (think, act, observe, repeat) is the essence of the entire application. Everything else in the codebase is built around supporting it.
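As a rough illustration, that loop can be sketched in a few dozen lines of TypeScript. Everything here (the `think`, `runTool`, and `queryLoop` names, and the stand-in model logic) is ours, not Anthropic's implementation:

```typescript
// Hypothetical sketch of the think/act/observe loop. All names and the
// stand-in model behavior are illustrative, not from the leaked code.
type ToolCall = { tool: string; input: string };
type ModelReply = { done: boolean; answer?: string; call?: ToolCall };

// Stand-in for the model: decides the next action from the transcript.
function think(transcript: string[]): ModelReply {
  if (transcript.some((t) => t.startsWith("observation:"))) {
    return { done: true, answer: "All tests pass." };
  }
  return { done: false, call: { tool: "bash", input: "npm test" } };
}

// Stand-in for a tool runner.
function runTool(call: ToolCall): string {
  return `ran ${call.tool}: ${call.input}`;
}

function queryLoop(userMessage: string): string {
  const transcript = [`user: ${userMessage}`];
  for (let step = 0; step < 10; step++) {        // safety cap on iterations
    const reply = think(transcript);             // think
    if (reply.done) return reply.answer!;        // done -> final answer
    const result = runTool(reply.call!);         // act
    transcript.push(`observation: ${result}`);   // observe, then repeat
  }
  return "step limit reached";
}
```

The step cap is the one detail worth noting: every real agent loop needs some bound so a confused model can't spin forever.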
The codebase at a glance:
| File | What it does |
| ---------------- | ------------------------------------------------------------ |
| `main.tsx` | The front door - starts everything up, handles CLI flags |
| `QueryEngine.ts` | The brain - runs the think/act/observe loop |
| `tools.ts` | The hands - 40+ actions Claude can take |
| `commands.ts` | The shortcuts - 50+ slash commands like `/commit`, `/review` |
| `context.ts` | The memory - loads your project notes and git status |
| `Task.ts` | The workers - background jobs Claude can run |

The entire terminal interface (the text you see rendered in your terminal) is built with React, the same technology that powers web apps. Anthropic essentially built a web app that renders into your terminal.
The app ships with 40+ tools (like editing a file, running a shell command, or searching your codebase); Claude picks one, observes the result, and thinks again, repeating until the job is done. Those tools mean Claude Code can do almost anything a developer does manually: read and write files, run tests, search the web, manage git, edit notebooks, all orchestrated automatically in that single loop.
The app also has 44 hidden "feature flags" - light switches in the code that turn features on or off at build time. When a flag is off, the feature doesn't just get hidden. It gets completely erased from the app. That's how Anthropic keeps unreleased features invisible until they're ready.
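A hedged sketch of how build-time flags typically gate features. The flag names below are from the leak; the gating mechanism shown is a generic assumption about how such gates usually work, not the leaked implementation:

```typescript
// Illustrative sketch of build-time feature gating. Flag names appear in
// the leak; the mechanism is our assumption, not Anthropic's code.
const FLAGS: Record<string, boolean> = {
  VOICE_MODE: false,
  WEB_BROWSER_TOOL: false,
};

function loadFeatures(): string[] {
  const loaded: string[] = [];
  // When the bundler can prove a flag is `false` at build time, the whole
  // branch is dead code and gets stripped from the shipped bundle --
  // the feature isn't just hidden, it is absent.
  if (FLAGS.VOICE_MODE) {
    loaded.push("voice");
  }
  if (FLAGS.WEB_BROWSER_TOOL) {
    loaded.push("browser");
  }
  return loaded;
}
```

With both flags off, nothing voice- or browser-related even exists in the built artifact, which is why users never stumble onto unreleased features.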
How Did 512,000 Lines of This End Up on the Internet?
This is not a classic "hack" or a server-side security breach, but most likely an oversight in the release process: a debug file and an unlocked cloud bucket handed the world Anthropic's source code.
Source maps are special debug files that act like a decoder ring, letting developers trace errors back to their original code. They contain that original code baked right in. Anthropic shipped one of these maps inside their public npm package, and it pointed straight to a cloud storage bucket holding 512,000 lines of TypeScript that anyone could read. Within hours of the discovery hitting Twitter, GitHub mirrors exploded with tens of thousands of stars, making this genie thoroughly impossible to stuff back in the bottle.
The "Leaked" Features - Beyond the Hype
Quick Verdict Chart
CLAIM WHAT THE CODE SAYS VERDICT
──────────────────────────────────────────────────────────────────
44 hidden feature flags → Exactly 44. Confirmed. ✅ TRUE
Multi-agent AI swarms → Fully implemented ✅ TRUE
KAIROS "always-on daemon" → Real, it naps a lot ⚠️ OVERSTATED
Memory architecture → Real, simpler than claimed ⚠️ OVERSTATED
"Dream" task type → Memory consolidation (known) ✅ TRUE
Security/permissions → Structure exposed, not rules ⚠️ PARTIAL
Buddy pet (18 species) → One flag. Zero species. ❌ INVENTED
ULTRAPLAN (30-min Opus) → One flag. No details. ❌ INVENTED
Axios supply chain attack → Not in leaked code at all ❓ UNVERIFIABLE

1. 44 Hidden Feature Flags: Anthropic's Secret Roadmap
Forty-four hidden switches reveal a lot about the capabilities Anthropic is building or testing next.
Feature flags work like light switches buried in the code. Flip one on at build time and a new capability appears; leave it off and users never know it existed. The leaked Claude Code source contains 44 of these switches, covering everything from VOICE_MODE and WEB_BROWSER_TOOL to the completely mysterious TORCH and LODESTONE flags that have zero documentation anywhere. Perhaps most telling is TRANSCRIPT_CLASSIFIER, the most-used hidden flag, appearing 8 times in the codebase, which strongly suggests transcript-level analysis (similar to what FlowPad offers) may be an important part of the system's internal architecture.
If this is Anthropic's actual product plan, hidden in plain sight, it may have some implications for the privacy expectations enterprise customers hold.
The full flag list:
Voice & Input: VOICE_MODE
Browser: WEB_BROWSER_TOOL
Assistant Mode: KAIROS, KAIROS_BRIEF, KAIROS_CHANNELS,
KAIROS_PUSH_NOTIFICATION, KAIROS_GITHUB_WEBHOOKS
Agent Teams: COORDINATOR_MODE, UDS_INBOX, FORK_SUBAGENT, TEAMMEM
Remote Access: BRIDGE_MODE, DAEMON, SSH_REMOTE, DIRECT_CONNECT
Planning: ULTRAPLAN
Companion: BUDDY
Workflows: WORKFLOW_SCRIPTS, AGENT_TRIGGERS, AGENT_TRIGGERS_REMOTE
Mysteries: TORCH 🔦 LODESTONE 🧲 (zero docs on either)
Internal Intel: TRANSCRIPT_CLASSIFIER (8 occurrences)

2. Multi-Agent Swarms: A Team of AIs Working Together
This is nothing new; Claude Code already shipped agent teams as a feature. It can split itself into a team that tackles your problem simultaneously.
When you give Claude Code a big task, it can spawn multiple AI "teammates" that each work on different parts of the problem at the same time. One agent acts as the manager (planning and coordinating) while the others act as workers, each executing their assigned piece independently. The agents even have their own private chat system to share updates and results with each other as they go.
This is a fully operational AI swarm baked into Claude Code: a team-leader AI delegates work to worker AIs, they collaborate in real time, and the whole team assembles your answer. All behind a single prompt.
You ask Claude to "refactor my entire backend." Claude spawns three agents: one maps the codebase, one writes the new code, one runs the tests. They message each other. The team leader agent assembles the results and you get one answer.
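That delegation pattern can be sketched as follows. The `Inbox` class, the worker functions, and the agent names are illustrative stand-ins for whatever inter-agent messaging the real code uses:

```typescript
// Hypothetical sketch of a coordinator delegating to worker agents over a
// shared message inbox. Every name here is ours, for illustration only.
type Message = { from: string; body: string };

class Inbox {
  private messages: Message[] = [];
  post(from: string, body: string) { this.messages.push({ from, body }); }
  all(): Message[] { return [...this.messages]; }
}

// Each worker does its piece and reports back through the shared inbox.
function worker(name: string, task: string, inbox: Inbox) {
  inbox.post(name, `done: ${task}`);
}

function coordinator(goal: string): string {
  const inbox = new Inbox();
  const subtasks = ["map codebase", "write code", "run tests"];
  subtasks.forEach((t, i) => worker(`agent-${i + 1}`, t, inbox)); // delegate
  // Assemble one answer from the workers' reports.
  const reports = inbox.all().map((m) => `${m.from}: ${m.body}`);
  return `${goal} -> ${reports.join("; ")}`;
}
```

In the real product the workers would run concurrently and the coordinator would be an LLM deciding the split; the structure (delegate, message, assemble) is the point of the sketch.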
3. KAIROS: The Proactive Assistant Mode
Claude can now work on its own, checking in instead of waiting around.
Normally, AI assistants sit idle until you type something. KAIROS flips that script by letting Claude take initiative, exploring your project and making progress between conversations. It works like a thoughtful colleague who periodically checks in: when there is useful work to do it acts, and when there is not it simply naps until something comes up. You have to turn it on explicitly, and Anthropic gates access carefully; nothing in the code suggests it runs without being enabled.
Anthropic basically took the clawbot lesson and gave Claude a to-do list it engages with proactively, a sense of boredom, and permission to get started without being asked.
The actual system prompt found in the code:
"You are in proactive mode. Take initiative - explore, act, and make progress without waiting for instructions. You will receive periodic `<tick>` prompts. These are check-ins. Do whatever seems most useful, or call Sleep if there's nothing to do."
Some people online called it a "secret always-on spy daemon," but this is really just a step toward clawbot-like activity.
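Based only on the system prompt above, a tick-driven loop might look like this sketch. The real KAIROS scheduler was not in the snapshot; `onTick` and `runTicks` are our stand-ins:

```typescript
// Hypothetical sketch of a tick-driven proactive loop; the actual KAIROS
// scheduling logic was absent from the leaked snapshot.
type TickAction = "work" | "sleep";

// Stand-in policy: act when there is a pending to-do item, otherwise nap
// (the "call Sleep if there's nothing to do" behavior from the prompt).
function onTick(todo: string[]): TickAction {
  return todo.length > 0 ? "work" : "sleep";
}

function runTicks(todo: string[], ticks: number): TickAction[] {
  const actions: TickAction[] = [];
  for (let i = 0; i < ticks; i++) {
    const action = onTick(todo);
    actions.push(action);
    if (action === "work") todo.shift(); // make progress on one item
  }
  return actions;
}
```

The "naps a lot" verdict in the chart follows directly from this shape: with an empty to-do list, every tick is a sleep.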
4. Memory System: How Claude Remembers Your Project
Claude reads your project notes at startup so it never starts from scratch.
You drop markdown files called CLAUDE.md into your project, and Claude loads them as instructions at the start of every conversation, like a briefing packet. There's also a persistent memory directory where Claude stores things it learns about you and your work over time, organized as simple markdown files. The clever bit is that all of this loading happens in the background while Claude is already thinking about your question, so memory never adds any wait time.
Claude is already reading your project notes before you even finish typing your first message.
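The zero-latency claim boils down to starting both operations at once. A minimal sketch, with `loadMemory` and `firstThink` as stand-ins for the real disk reads and model call:

```typescript
// Illustrative sketch of loading memory concurrently with the first model
// call. File names follow the article; the overlap mechanism shown is a
// generic assumption, not the leaked implementation.
async function loadMemory(): Promise<string> {
  return "CLAUDE.md: use tabs; prefer pnpm"; // stand-in for disk reads
}

async function firstThink(prompt: string): Promise<string> {
  return `draft plan for: ${prompt}`;        // stand-in for a model call
}

async function startSession(prompt: string): Promise<string> {
  // Kick both off immediately; neither waits for the other to start, so
  // memory loading hides entirely behind the model's thinking time.
  const memoryPromise = loadMemory();
  const thinkPromise = firstThink(prompt);
  const [memory, plan] = await Promise.all([memoryPromise, thinkPromise]);
  return `${plan} (briefed by: ${memory})`;
}
```

Because the two promises settle independently, total latency is the slower of the two rather than their sum, which is why the briefing packet costs the user nothing.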
5. Permission System: The Bouncer at Every Door
Every single action Claude Code takes must pass a permission check first.
Before Claude Code can run a command, edit a file, or call an API, it checks whether you've granted permission. In default mode it asks every time, in plan mode it outlines what it wants to do and waits for your go-ahead, and in auto mode it proceeds on its own (with a fully unrestricted "bypass" option that Anthropic flags as safe only in sandboxed environments with no internet). As an extra safety net, Anthropic quietly strips out certain risky permission rules in auto mode through a mechanism called strippedDangerousRules, so even power users get guardrails they didn't ask for. The leaked source reveals the structure of this system, but the actual enforcement logic (the function that decides what's allowed and what's blocked) was not in the snapshot, which means the "guardrails are now bypassable" headlines overstate what was actually exposed.
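A guess at the shape of such a check, for illustration only. The actual enforcement function was absent from the snapshot, and the `DANGEROUS` set below merely mimics the idea behind strippedDangerousRules:

```typescript
// Guessed shape of the permission check; not Anthropic's actual logic.
type Mode = "default" | "plan" | "auto" | "bypass";
type Decision = "ask" | "allow" | "deny";

// Illustrative stand-in for the idea behind strippedDangerousRules.
const DANGEROUS = new Set(["rm -rf /", "curl | sh"]);

function checkPermission(mode: Mode, action: string): Decision {
  switch (mode) {
    case "default":
      return "ask"; // asks every time
    case "plan":
      return "ask"; // shows the plan, waits for approval
    case "auto":
      // proceeds on its own, but risky rules are stripped even here
      return DANGEROUS.has(action) ? "deny" : "allow";
    case "bypass":
      return "allow"; // flagged as safe only in sandboxes
  }
}
```

Whatever the real function looks like, the leak shows only that this layer exists, not what it decides.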
The 4 permission modes:
DEFAULT → Asks you every time
PLAN → Shows the plan, waits for your approval
AUTO → Does it on its own
BYPASS → Skips everything (⚠️ "for sandboxes only")

6. "Dream" Tasks: Already Known Before the Leak
Claude Code has a task type called "dream." The internet treated it as a mystery, but this feature was already discovered and discussed in the Claude Code community before the leak happened.
Dream is a background memory consolidation system. Think of it like REM sleep for your AI coding agent. While you're not in a session, it runs a maintenance cycle on your accumulated memory files: converting relative dates to absolute timestamps, deleting contradicted facts, pruning stale entries, merging duplicates, and rebuilding the memory index.
It triggers automatically when 24+ hours and 5+ sessions have passed since the last cycle, or you can run it manually by saying "dream" in any session. The feature is gated server-side and has been rolling out to users.
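The stated trigger condition reduces to a simple predicate (a sketch with parameter names of our choosing):

```typescript
// Sketch of the stated dream trigger: 24+ hours AND 5+ sessions since the
// last consolidation cycle. Names are ours; the condition is from above.
function shouldDream(hoursSinceLast: number, sessionsSinceLast: number): boolean {
  return hoursSinceLast >= 24 && sessionsSinceLast >= 5;
}
```

Both thresholds must be met, so a heavy user who never goes a day without sessions still consolidates once a day at most.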
// From Task.ts:
export type TaskType =
| 'local_bash' // run a command
| 'local_agent' // spawn a sub-agent
| 'remote_agent' // spawn a remote agent
| 'local_workflow' // run a workflow
| 'monitor_mcp' // monitor a process
| 'dream' // memory consolidation

The leak confirmed what the community already figured out. Not every "hidden" feature in the source code was actually hidden.
7. Buddy: The Pet the Internet Invented
A hidden pet feature that the internet invented an entire mythology around.
The leaked code contains a feature flag called BUDDY that points to a "companion sprite" module, but the actual implementation wasn't included in the snapshot. The internet ran wild with it, conjuring up 18 collectible pet species (duck, dragon, axolotl, capybara), a gacha-style lottery system, and five trainable stats like DEBUGGING, PATIENCE, and CHAOS. In reality, the code shows nothing more than a feature gate and a name. Everything else is pure fan fiction.
What the code has:
feature('BUDDY') → ./commands/buddy/index.js ← this is literally it

What the internet claimed: 18 species · gacha system · shiny variants · PRNG seeded from your userId · stats named DEBUGGING, PATIENCE, CHAOS, WISDOM, SNARK
Occurrences of "gacha" in the source: 0 Occurrences of "axolotl" in the source: 0 Occurrences of "capybara" in the source: 0
8. ULTRAPLAN: The Rumored Deep Thinker
A mysterious internal planning mode that nobody outside Anthropic has actually seen run.
ULTRAPLAN is a real feature flag buried in Claude Code's source, marked "INTERNAL_ONLY" and pointing to a command that was never included in the leaked snapshot. The internet ran with it, claiming it offloads your problem to a remote server running the most powerful Claude model for up to 30 minutes of uninterrupted deep thinking. The actual code evidence for any of that is zero. The codebase does have extended thinking and remote execution infrastructure, but whether ULTRAPLAN stitches them together is pure speculation built on connecting dots that may not actually connect.
Code evidence for "30-minute Opus 4.6 thinking": 0 occurrences
What Other Analyses Found
The following findings were reported by others who claimed access to a more complete version of the exposed source. These points are worth noting, but they should be treated separately from the claims directly supported by the snapshot we reviewed in this article.
Our analysis covered a 19-file snapshot from the leaked repo. Other researchers who accessed the fuller source map (59.8MB, ~1,900 files) reported additional findings which are listed below:
Anti-distillation poison pills According to an analysis by alex000kim, the code includes a mechanism called anti_distillation that injects decoy tool definitions into API requests. The apparent purpose: if a competitor is recording API traffic to train their own models, the fake tools pollute their training data. A separate system summarizes reasoning between tool calls with cryptographic signatures, so intercepted traffic only captures summaries, not full reasoning chains.
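If the description is accurate, the decoy-injection idea could be as simple as appending fake tool definitions to the real list before each request. This sketch is built entirely on that secondhand report, not on the leaked code:

```typescript
// Speculative sketch of decoy tool injection, based only on secondhand
// reports of an anti_distillation mechanism. Nothing here is from the
// leaked code itself; names and shapes are ours.
type ToolDef = { name: string; description: string };

function withDecoys(realTools: ToolDef[], decoys: ToolDef[]): ToolDef[] {
  // On the wire, fake definitions look identical to real ones, so anyone
  // recording API traffic to train a model ingests the decoys too.
  return [...realTools, ...decoys];
}
```

The effectiveness would depend entirely on how convincing the decoys are, which is exactly the part no one outside Anthropic has seen.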
Undercover mode The same analysis references a file called undercover.ts (~90 lines) that strips all traces of Anthropic internals from Claude's output. When enabled, Claude refuses to mention internal codenames. There is reportedly no way to force it off. The implication flagged on Hacker News: AI-authored contributions in open source projects could appear fully human with no disclosure.
`TRANSCRIPT_CLASSIFIER` The most-used hidden flag in the codebase (8 occurrences), and the one almost nobody in the viral coverage mentioned. It suggests Claude is categorizing and analyzing sessions internally. For enterprise customers concerned about data handling, this may be the most consequential flag in the entire list.
`TORCH` and `LODESTONE` Two feature flags with zero documentation anywhere in the codebase. Not a comment, not a README line, nothing. These remain genuinely unexplained.
Summary - What Do We Take Away From This?
Two things stand out.
First, how quickly narratives form without verification. Within hours of the source map surfacing, highly specific claims spread across the internet with little connection to the actual code. A single feature flag turned into eighteen pet species.
Second, most of what the leak “revealed” was not entirely new. Agent teams, proactive modes, memory systems, feature flags for voice and browser automation, even /dream mode - many of these were already visible, discussed, or partially shipped. The leak did not introduce these ideas. It confirmed their direction and pushed them into much wider visibility.
Direction matters more than any individual feature.
Claude Code is evolving into more than a coding agent or a better assistant; it is becoming a system. One that plans, executes, coordinates work proactively across agents, and increasingly removes the need for users to understand the underlying complexity.

Multi-agent coordination, permission layers, memory systems, and feature-flag-driven architecture all point toward a shift from a tool to a system.
From “help me do this” to "can I handle this for you?" That’s the real takeaway.
Written by The Architect
Eran Shlomo
Cofounder & CEO of Langware Labs. Writes about AI strategy, enterprise technology, and the technical architecture behind AI coding tools.