Anthropic has confirmed to Axios that the company accidentally exposed the source code of its AI coding tool, Claude Code, for the second time in a year; the first incident dates to February. According to the Axios report, a debugging file was mistakenly included in a routine update and published to npm, the public registry developers use to download software packages.


What is source code


Source code is the original set of instructions that developers write to tell software or an app how to work. It’s written in programming languages like Python or JavaScript and acts like a blueprint, defining everything from how a button behaves to how data is processed behind the scenes. What users see on their screens is just the final output, while the source code is the logic that makes it all function.


To make it clearer, think of source code as a recipe in a kitchen. The dish you eat is the finished product, but the recipe explains exactly how it’s made step by step. Similarly, source code is what developers use to build and modify software, even though users never directly interact with it.
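As a concrete (and purely illustrative) Python example, the few lines below are source code; the function name and message are invented for this sketch, and the printed greeting is all a user would ever see:

```python
def make_greeting(name: str) -> str:
    """Source code: the human-written instructions working behind the scenes."""
    return f"Hello, {name}!"

# The finished "dish": the only part the user actually sees on screen.
print(make_greeting("Ada"))  # prints "Hello, Ada!"
```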


What happened and what followed


The leak of Claude Code's source came to light after a security researcher found that the published package contained a source map file, a debugging artifact capable of revealing the full underlying codebase. The Axios report noted that the code was quickly replicated and dissected across GitHub.
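To illustrate why a stray source map is so revealing, the sketch below parses a made-up miniature map in Python. The file names and contents are hypothetical, but the JSON structure follows the standard JavaScript source map format, whose optional "sourcesContent" field can embed the complete original files:

```python
import json

# A made-up, miniature source map for illustration. Real source maps
# follow this same JSON structure, and "sourcesContent" can carry the
# full original files that the minified bundle was built from.
raw_map = """
{
  "version": 3,
  "file": "cli.min.js",
  "sources": ["src/app.js", "src/utils.js"],
  "sourcesContent": [
    "function main() { /* original, human-readable code */ }",
    "function helper() { /* original, human-readable code */ }"
  ],
  "mappings": "AAAA"
}
"""

source_map = json.loads(raw_map)

# Recover every original file the map happens to carry.
recovered = dict(zip(source_map["sources"], source_map["sourcesContent"]))
for path, content in recovered.items():
    print(f"--- {path} ---")
    print(content)
```

Anyone holding such a file can read the original sources straight back out, which is why pulling the map (as Anthropic did in February) is the standard remediation.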


According to a report by Emerge, Anthropic then reportedly began issuing DMCA takedown notices against GitHub mirrors of the leaked code. Soon after, a South Korean developer named Sigrid Jin, recently featured by the Wall Street Journal for consuming 25 billion Claude Code tokens, responded within hours: he rebuilt the core architecture in Python from scratch using an AI orchestration tool called oh-my-codex and published a new project called “claw-code” before sunrise.


Emerge added that the project is said to be a Python-based reimplementation of the original codebase rather than a direct copy, which puts it in a grey area. Whether that distinction holds up from a legal standpoint remains open to interpretation.


Why this matters


As per Axios, the leaked source code reportedly included multiple feature flags pointing to capabilities that appear to be already developed but not yet released. The report cited an Anthropic spokesperson as saying that these features include the ability for Claude to review its most recent session to identify improvements and carry those learnings across conversations.
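As a rough sketch of how feature flags work, the Python snippet below gates capabilities behind boolean switches. The flag names are hypothetical stand-ins, not the actual identifiers from the leaked code:

```python
# Hypothetical feature flags: unreleased capabilities ship in the code
# but stay dormant, which is why a leak can expose them before launch.
FEATURE_FLAGS = {
    "session_review": False,        # review the last session for improvements
    "persistent_assistant": False,  # keep running while the user is inactive
    "remote_access": True,          # control the agent from a phone or browser
}

def is_enabled(flag: str) -> bool:
    """Return True only when a capability is switched on for this build."""
    return FEATURE_FLAGS.get(flag, False)

if is_enabled("session_review"):
    print("Reviewing previous session for improvements...")
else:
    print("session_review is present in the code but not yet enabled.")
```

Because the flag names and the code paths they guard sit in the shipped package, reading the source is enough to see what is coming, even while the switches are off.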


The code also references a “persistent assistant” mode that could allow Claude Code to continue running in the background even when the user is inactive. In addition, it highlights remote access capabilities, enabling users to control Claude from a phone or another browser, a feature that has already been rolled out for Claude Code.


In simple terms, the leak not only let people copy Claude Code’s existing features from the source code but also exposed features Anthropic was planning to release in the coming weeks. The report added that while the leak won’t sink Anthropic’s business, it gives every competitor a free engineering education in how to build a production-grade AI coding agent and which tools to focus on.


What did Anthropic say


As per Axios, an Anthropic spokesperson told the publication: “Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed.”


The spokesperson added, “This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”


What happened in February


According to a report by NDTV, citing Odaily, an early build of Claude Code was similarly exposed in February 2025, leading Anthropic to pull the package from npm and remove the associated source map.
