Earlier this fall, a team of security experts at the AI company Anthropic uncovered an elaborate cyber-espionage scheme. Hackers—strongly suspected by Anthropic to be working on behalf of the Chinese government—targeted government agencies and large corporations around the world. And it appears that they used Anthropic’s own AI product, Claude Code, to do most of the work.
Anthropic published its report on the incident earlier this month. Jacob Klein, Anthropic’s head of threat intelligence, explained to me that the hackers took advantage of Claude’s “agentic” abilities—which enable the program to take an extended series of actions rather than focusing on one basic task. They were able to equip the bot with a number of external tools, such as password crackers, allowing Claude to analyze potential security vulnerabilities, write malicious code, harvest passwords, and exfiltrate data.
Once Claude had its instructions, it was left to work on its own for hours; when its tasks were complete, the human hackers spent as little as a couple of minutes reviewing its work and triggering the next steps. The operation appeared professional and standardized, like any other business: The group was active only during the Chinese workday, Klein said, took a lunch break “like clockwork,” and appeared to go on vacation during a major Chinese holiday. Anthropic has said that although the firm ultimately shut down the operation, at least a handful of the attacks succeeded in stealing sensitive information.