Ars Technica’s Benj Edwards reports: On Saturday, technology entrepreneur Siqi Chen released an open source plugin for Anthropic’s Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called “Humanizer,” the simple plugin provides Claude with a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plugin on GitHub, where it has earned more than 1,600 stars as of Monday. “It’s really helpful that Wikipedia has compiled a detailed list of ‘AI writing signs,'” Chen wrote on X. “So much so that you can tell your LLM to…not do that.”
The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been searching for AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. Volunteers tagged more than 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.
Chen’s tool is a “skills file” for Claude Code, Anthropic’s terminal-based coding assistant: a Markdown-formatted file that appends a list of written instructions (you can see them here) to the prompt fed into the large language model (LLM) that powers the assistant. Unlike a plain system message, skill information follows a standardized format that Claude’s models are tuned to interpret more reliably. (Custom skills require a paid Claude subscription with code execution enabled.)
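To make the mechanism concrete, here is a minimal sketch of what such a skills file can look like. The frontmatter fields follow the general SKILL.md convention Claude Code uses; the specific rules below are illustrative stand-ins, not Chen’s actual 24 patterns (those are in the Humanizer repo on GitHub).

```markdown
---
name: humanizer
description: Rewrite prose to avoid common AI-writing tells.
---

# Humanizer (illustrative sketch)

When writing or editing prose, avoid these patterns:

1. Overused vocabulary such as "delve," "tapestry," or "testament to."
2. The "it's not just X, it's Y" construction.
3. Excessive em-dashes and formulaic rule-of-three lists.
4. Stock closers like "In conclusion" or "Overall."

Prefer plain, varied sentence structure and concrete detail.
```

Because the file is just structured Markdown instructions, Claude loads it alongside the user’s prompt rather than as model weights, which is why compliance is probabilistic rather than guaranteed.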
But as with all AI prompting, language models don’t always follow skill files perfectly, so does Humanizer really work? In our limited testing, Chen’s skills file made the AI assistant’s output read less precise and more informal, but it has some drawbacks: it won’t improve factuality and could hurt coding ability. […] Even with those drawbacks, it’s ironic that one of the most-referenced rule sets on the web for detecting AI-assisted writing may help some people subvert it.
