Anthropic is using copyright takedown notices to try to contain an accidental leak of the underlying instructions from its Claude Code AI agent. According to the Wall Street Journal, “Anthropic representatives had used a copyright takedown request to force removal of more than 8,000 copies and adaptations of unedited Claude Code instructions …that the developers had shared on the programming platform GitHub.” From the report: Programmers who have reviewed the source code so far have marveled on social media at some of Anthropic’s tricks for making its Claude AI models work like Claude Code. One feature asks models to periodically review tasks and consolidate their memories, a process it calls “dreaming.” Another appears to instruct Claude Code, in some cases, to go “undercover” and not reveal that it is an AI when publishing code on platforms like GitHub. Others found tags in the code that appear to point to future product releases. The code even included a Tamagotchi-style pet named “Buddy” that users could interact with.
After Anthropic requested that GitHub remove copies of its proprietary code, another programmer used AI tools to rewrite Claude Code’s functionality in other programming languages. Writing on GitHub, the programmer said the effort was aimed at keeping the information available without risking deletion. That new version has become popular on the platform.
