September 8, 2025
Cyber criminals are already using AI. US-based AI company Anthropic has confirmed that its Claude chatbot was weaponised by cyber criminals to support AI cyber attacks, including large-scale data theft, targeted extortion, and employment fraud. In one case, attackers used AI-generated code to break into multiple organisations. North Korean operators also used Claude to help secure remote roles at major US firms as part of a broader infiltration effort.
AI is no longer just a tool for productivity and innovation; it is now part of the modern attacker’s toolkit, driving AI cyber attacks at speed and scale.
What makes the Anthropic case important is not that AI has created brand-new types of attacks, but that it is amplifying existing ones. Attackers still exploit vulnerabilities, steal data, and manipulate people. AI simply lets them do it faster, at greater scale, and with more precision, which is why AI cyber attacks are so effective.
AI is not rewriting the rulebook; it’s accelerating every play within it.
AI turns familiar threats into faster, sharper versions of themselves.
Your goal is resilience. Prevent what you can, detect quickly, and limit impact when incidents occur.
The incidents involving Claude mark a turning point: AI cyber attacks now meaningfully boost the speed and impact of traditional threats. The question is not whether AI will be misused, but how ready your organisation is to defend against it.
Now is the time to reassess your strategy, strengthen your defences, and build resilience into your culture. The organisations that stay ahead will combine the right technology, skilled people, and trusted partners to manage evolving risks with confidence.
Speak to one of our experts today to find out more about protecting your operations from AI cyber attacks: Get In Touch