AJOT: Anthropic says its AI tool Claude Code was used to attack 17 organizations in one month

Threats to cargo safety, supply chain security, and cybersecurity are about to get worse thanks to the malicious use of AI. 

San Francisco-based Anthropic disclosed that its Claude Code tool was used by a single threat actor with minimal technical expertise to launch attacks affecting 17 organizations.

Anthropic’s Threat Intelligence Report for August 2025 disclosed: “This threat actor leveraged Claude’s code execution environment to automate reconnaissance, credential harvesting, and network penetration at scale, potentially affecting at least 17 distinct organizations in just the last month across government, healthcare, emergency services, and religious institutions.” The report also found that Claude was used to create and sell “no code” ransomware and to scale data extortion campaigns.

The disclosure was followed by the announcement that Anthropic will stop selling its artificial intelligence product to groups that are majority-owned by Chinese entities, the first such policy shift by an American AI company, according to a Financial Times report.

The Financial Times said: “The San Francisco-based developer of Claude AI is trying to limit the ability of Beijing to use its technology to benefit China’s military and intelligence services, according to an Anthropic executive who briefed the Financial Times. The policy, which takes effect immediately, will potentially apply to Chinese companies from ByteDance and Tencent to Alibaba.” “We are taking action to close a loophole that allows Chinese companies to access frontier AI,” said the executive, who added that the policy would also apply to “US adversaries, including Russia, Iran, and North Korea.”

Anthropic recently announced that it closed a deal to raise $13 billion from investors in a new funding round that nearly triples its valuation to $183 billion. Anthropic will use the new funding to meet growing enterprise demand, further safety research, and accelerate plans for international expansion, according to the company.

The Anthropic intelligence report explained: “We have developed sophisticated safety and security measures to prevent the misuse of our AI models. While these measures are generally effective, cybercriminals and other malicious actors continually attempt to find ways around them.”

The report details several recent examples of how Claude has been misused, along with the steps Anthropic has taken to detect and counter the abuse. The work comes from Threat Intelligence, a dedicated team at Anthropic that investigates sophisticated real-world cases of misuse and works with the rest of the company’s Safeguards organization to improve defenses against them. While specific to Claude, the case studies likely reflect consistent patterns of behavior across all frontier AI models, the report notes.

The findings collectively show how threat actors are adapting their operations to exploit today’s most advanced AI capabilities. Among them:

  • Agentic AI systems are being weaponized: AI models are themselves being used to perform sophisticated cyberattacks, not just advising on how to carry them out.
  • AI lowers the barriers to sophisticated cybercrime. Actors with few technical skills have used AI to conduct complex operations, like developing ransomware, that would previously have required years of training.
  • Cybercriminals are embedding AI throughout their operations. This includes victim profiling, automated service delivery, and operations that affect tens of thousands of users.
  • AI is being used for all stages of fraud operations. Fraudulent actors use AI for tasks like analyzing stolen data, stealing credit card information, and creating false identities.

The report explains the need for disclosure: “We’re discussing these incidents publicly to contribute to the work of the broader AI safety and security community, and help those in industry, government, and the wider research community strengthen their own defenses against the abuse of AI systems. We plan to continue releasing reports like this regularly and to be transparent about the threats we find.”

Unfortunately, all of this comes as concerns rise in the US about China using AI for military purposes, ranging from hypersonic weapons to nuclear weapons modeling. Chinese start-up DeepSeek sent shockwaves through the AI industry earlier this year when it released its open-source R1 model, which is considered comparable to leading US models. OpenAI later said it had evidence that DeepSeek had accessed its models inappropriately to train R1, according to the Financial Times.

Cybercriminals Using AI Coding Agents to Scale Data Extortion Operations

The Anthropic report went on to describe how its Claude Code tool was misused to carry out attacks on multiple organizations.

According to the report, a cybercriminal used Claude Code to conduct a scaled data extortion operation against multiple international targets in a short timeframe, leveraging Claude’s code execution environment to automate reconnaissance, credential harvesting, and network penetration at scale. At least 17 distinct organizations across government, healthcare, emergency services, and religious institutions were potentially affected in a single month.

The operation demonstrates a concerning evolution in AI-assisted cybercrime, in which AI serves as both technical consultant and active operator, enabling attacks that would be far more difficult and time-consuming for an individual actor to execute alone. The actor provided Claude Code with their preferred operational TTPs (Tactics, Techniques, and Procedures) in a CLAUDE.md file, which Claude Code uses as a guide for responding to prompts in the manner the user prefers. That file was only a preferential guide, however; the operation still relied on Claude Code to make both tactical and strategic decisions, including how best to penetrate networks, which data to exfiltrate, and how to craft psychologically targeted extortion demands. The actor’s systematic approach resulted in the compromise of personal records, including healthcare data, financial information, government credentials, and other sensitive information, with direct ransom demands occasionally exceeding $500,000.
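For context, a CLAUDE.md file is an ordinary text file that Claude Code reads at the start of a session to learn a project’s conventions and the user’s preferences. A harmless, hypothetical example of the format might look like the following; the contents are purely illustrative and unrelated to the attacker’s configuration.

    # CLAUDE.md (illustrative example only)
    - Write new code in Python 3.12 with type hints.
    - Run the test suite with pytest before proposing any change.
    - Never commit directly to main; open a branch and summarize the change.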

Steps to Take

The Anthropic report suggests several important steps technology leaders can take now:

Recognize that social engineering is the primary breach vector. Ensure your training and awareness programs for end users keep pace with what is happening in the world, and continue to invest in technical controls like multi-factor authentication and sophisticated detection and response engineering.

Accelerate investments in identity. Move to phishing-resistant MFA (multi-factor authentication), continuous risk-based authorization, and just-in-time provisioning for privileged access. Doing so means even convincing AI-generated deepfakes can be stopped before they impact your business.
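By way of illustration, the sketch below shows what a continuous, risk-based authorization check might look like. It is a minimal, hypothetical example written for this article rather than anything from the Anthropic report, and the signal names, weights, and thresholds are assumptions a real deployment would tune and back with proper identity tooling.

    # Minimal, hypothetical sketch of risk-based authorization for privileged actions.
    # Signal names, weights, and thresholds are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class RequestContext:
        user_id: str
        device_is_managed: bool          # known, compliant corporate device?
        geo_matches_recent: bool         # location consistent with recent sign-ins?
        mfa_method: str                  # e.g. "webauthn", "totp", "sms", "none"
        privileged_scope: bool           # is the request asking for privileged access?

    def risk_score(ctx: RequestContext) -> int:
        """Return a simple additive risk score; higher means riskier."""
        score = 0
        if not ctx.device_is_managed:
            score += 40
        if not ctx.geo_matches_recent:
            score += 30
        if ctx.mfa_method != "webauthn":  # phishing-resistant MFA preferred
            score += 20
        if ctx.privileged_scope:
            score += 10
        return score

    def authorize(ctx: RequestContext) -> str:
        """Decide per request, not per session: allow, step up, or deny."""
        score = risk_score(ctx)
        if score >= 70:
            return "deny"
        if score >= 30:
            return "step-up"             # require fresh phishing-resistant MFA first
        return "allow"

    if __name__ == "__main__":
        ctx = RequestContext(
            user_id="data-engineer-42",
            device_is_managed=True,
            geo_matches_recent=False,
            mfa_method="totp",
            privileged_scope=True,
        )
        print(authorize(ctx))            # prints "step-up" with these example weights

Evaluating the decision on every privileged request, rather than once at login, is what makes the authorization continuous; just-in-time provisioning would then grant the elevated role only after the step-up succeeds and only for a bounded window.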

Harden your high-value endpoints. Prioritize patching, hardening, good network visibility, and device assurance controls. Alongside that, apply targeted defenses to endpoints used by platform or data engineering teams, which may have high levels of access to sensitive data, and protect them as a priority.
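As a simple illustration of that prioritization, the hypothetical sketch below ranks machines from an asset inventory by patch age, internet exposure, and access to sensitive data. The host names, fields, and weights are invented for this example and would come from your own inventory and risk model in practice.

    # Hypothetical sketch: rank endpoints for patching by exposure and data access.
    # Inventory fields, host names, and scoring weights are illustrative only.
    endpoints = [
        {"host": "build-server-01", "days_since_patch": 45, "internet_exposed": True,  "sensitive_data_access": True},
        {"host": "kiosk-lobby",     "days_since_patch": 10, "internet_exposed": False, "sensitive_data_access": False},
        {"host": "dba-laptop-07",   "days_since_patch": 30, "internet_exposed": False, "sensitive_data_access": True},
    ]

    def priority(endpoint: dict) -> int:
        score = min(endpoint["days_since_patch"], 60)        # cap so one factor cannot dominate
        score += 40 if endpoint["internet_exposed"] else 0
        score += 30 if endpoint["sensitive_data_access"] else 0
        return score

    # Patch the highest-priority machines first.
    for endpoint in sorted(endpoints, key=priority, reverse=True):
        print(f'{endpoint["host"]}: priority {priority(endpoint)}')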

Find the vulnerabilities before the adversary does. Adopt AI-assisted red team tactics to find weaknesses in your estate. Bringing AI into the picture can not only strengthen your security posture but also help manage the cost of what can be expensive work. Staying on top of threat intelligence and frequently updating your threat models will need to become the norm.
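One modest way to bring AI into this kind of work is to have a model triage the raw output of an existing vulnerability scan so the team knows where to look first. The sketch below uses Anthropic’s official Python SDK for that purpose; it is illustrative only, the findings are made up, and the model name is a placeholder to be replaced with whichever model your account provides.

    # Illustrative sketch: ask an LLM to triage findings from an internal scan.
    # Assumes the official Anthropic Python SDK (pip install anthropic) and an
    # API key in the ANTHROPIC_API_KEY environment variable; the model name is
    # a placeholder, and the findings below are invented for this example.
    import anthropic

    findings = """
    - build-server-01: OpenSSH service exposed to the internet
    - dba-laptop-07: local admin rights granted to all users
    - kiosk-lobby: outdated browser, no access to sensitive data
    """

    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",   # placeholder model name
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": "Rank these internal scan findings by risk for a small "
                       "enterprise network and briefly explain each ranking:\n"
                       + findings,
        }],
    )
    print(message.content[0].text)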

Accelerate detection and response. The good news is that AI-assisted approaches can support proactive threat hunting and targeted modernization.
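As a small example of what proactive threat hunting can start from, the hypothetical sketch below flags accounts whose failed-login volume suddenly exceeds their historical baseline. The log records, baselines, and multiplier are assumptions; a real detection pipeline would derive them from its own telemetry and feed alerts into the response workflow.

    # Illustrative threat-hunting sketch: flag accounts whose recent failed logins
    # far exceed their historical baseline. Data and thresholds are hypothetical.
    from collections import Counter

    # Recent authentication events as (account, outcome) pairs.
    recent_events = [
        ("svc-backup", "failure"), ("svc-backup", "failure"), ("svc-backup", "failure"),
        ("svc-backup", "failure"), ("svc-backup", "failure"), ("svc-backup", "failure"),
        ("jsmith", "failure"), ("jsmith", "success"),
    ]

    # Typical failed logins per hour for each account, learned from history.
    baseline = {"svc-backup": 1.0, "jsmith": 2.0}

    failures = Counter(account for account, outcome in recent_events if outcome == "failure")

    for account, count in failures.items():
        expected = baseline.get(account, 1.0)
        if count > 3 * expected:          # simple multiplier; tune per environment
            print(f"ALERT: {account} had {count} failed logins vs. baseline {expected}")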
