The thing that caught most people's attention in the Claude Code leak wasn't the 512,000 lines of TypeScript or the 44 unreleased features. It was a single system prompt — one that instructed Claude to operate "undercover" in open source repositories, write commit messages that revealed nothing about its origins, and not blow its cover.
The word that kept appearing was "deceptive."
That reaction is fair. But it arrives at a strange time.
What git was actually for
Open source has always depended on a specific kind of accountability.
Not legal accountability. Something more social — the understanding that when you look at a commit, there's a person behind it. Someone you can reach, disagree with, or ask to explain their reasoning. A maintainer who chose a trade-off and can tell you why. A contributor whose judgment you can evaluate over time.
The commit graph was accountability infrastructure. Not because git makes it technically impossible to lie — it doesn't — but because the community treated it as a record of human decision-making. Bug reports could route back to the person who introduced an issue. Design decisions had traceable authors. Trust accumulated around names.
AI has been changing that. Quietly. Without anyone deciding it should.
Two different things, one open question
Here's where the picture gets complicated.
A developer who uses Claude to write a function, reads it, tests it, and commits it under their name is not doing what Anthropic did with Undercover Mode. The difference is real. One is a person using a tool and taking responsibility for the output. The other is a company deploying autonomous agents specifically designed to contribute to codebases without leaving any trace of their origin — at scale, without a human in the loop, as a deliberate feature.
Scale matters. Intent matters. Architecture matters.
But both of these things rest on the same open question: what does authorship mean when AI is involved? And that question has gone unanswered for two years, while the industry quietly made peace with it by convention rather than by conversation.
When a developer uses AI to write code and commits it without saying so — not because they're hiding anything, but because nobody established a norm that said they should disclose it — that's not the same as Undercover Mode. But it's the same unresolved question, lived informally, one commit at a time.
When does AI-assisted authorship require disclosure? What should that disclosure look like, and who owes it to whom? None of that was ever worked out. The community moved fast and quietly, and nobody wrote the rules.
When the informal gets a system prompt
Undercover Mode is what happens when that unresolved question gets operationalized at corporate scale.
Anthropic didn't create the problem. They formalized it. They took the informal, unarticulated arrangement — "AI wrote this, but we're not flagging it" — and turned it into explicit product behavior. They made it a feature. They wrote the system prompt.
And then the sourcemap shipped by accident, and everyone could read it.
The outrage is understandable. There's a genuine difference between the informal, human-in-the-loop version of AI authorship and a corporate agent operating at scale, specifically architected to obscure its origins. That difference is worth taking seriously.
But the conversation the leak forced is not just about Anthropic. It's about the norms we never built — about what authorship means now, about what the commit graph is supposed to represent, about what disclosure looks like when the tool that wrote your code is running in a data center somewhere and has no name on it.
The social contract of open source wasn't broken on March 31st.
It was never fully written. The leak just made that harder to ignore.