AI misuse: Cybercriminals weaponising generative AI 

Anthropic, the firm behind Claude AI, revealed that it had blocked attempts to misuse its generative AI platform: hackers tried to abuse the Claude tool to draft phishing content, craft malware, bypass safety filters, and even coordinate influence campaigns. 

That warning aligns with what many cybersecurity experts are now calling an “arms race between good AI (defensive) and bad AI (offensive).” In other words, as criminals look to exploit AI tools, businesses are investing heavily in AI-powered defences such as threat detection, automated incident response, zero-trust frameworks, and anomaly detection, according to the World Economic Forum. 

What’s happening and why it matters 

AI misuse is far more than hypothetical; it is happening now. Cybercriminals are not just planning attacks; they are using AI to orchestrate them. 

  • Hackers attempted to use Claude to write phishing emails, create malicious code and circumvent safety filters, even trying to script influence campaigns and guide inexperienced hackers with step-by-step instructions.  
  • Another report coined a new term: “vibe-hacking.” AI now writes extortion messages so precisely targeted that they are far more likely to succeed. Criminals targeted at least 17 global organisations, including healthcare, religious, emergency services, and government entities, with ransom demands exceeding £500,000 (€576,880). 
  • One particularly chilling case saw an attacker use Claude to automate nearly the entire process: identifying victims, generating malware, analysing stolen data, calculating ransom amounts, and composing extortion emails. “We believe [this] is an unprecedented degree” of AI-assisted cybercrime, Anthropic admits. 

Taken together, these examples underscore the rising ease with which even low-skilled actors can mount complex, damaging cyberattacks, all thanks to AI misuse.

The threat you can’t ignore 

All of this is part of a larger global shift. The cybersecurity landscape is now defined by a new reality in which attackers are scaling up their operations, using generative AI to create highly realistic phishing campaigns, deepfakes, malware, and identity fraud. These threats are no longer limited to sophisticated perpetrators but are increasingly accessible to low-skilled actors. At the same time, insider threats are becoming more dangerous, with 64% of cybersecurity professionals in Europe now rating malicious or negligent insiders, including those who might misuse AI, as a bigger concern than external attackers. 

On the defensive side, organisations are responding by investing in AI-powered threat detection, behavioural analytics, automated incident response, and zero-trust frameworks, an unmistakable signal that the defensive AI arms race is well underway. All of this is unfolding against the backdrop of increasing regulatory scrutiny, from the European Union’s AI Act to evolving guidance in the United States, creating not just a technical but also a compliance challenge for businesses worldwide. 

How Getronics can help 

The challenge for most organisations is not simply understanding that AI-driven threats exist but knowing how to respond effectively. This is where an experienced IT services company like Getronics becomes essential. By bringing together expertise in both traditional cybersecurity and advanced AI technologies, we can help clients stay ahead of evolving risks. We can implement modern defences such as AI-powered threat detection, anomaly detection platforms, automated incident response, and robust zero-trust architectures, ensuring that businesses are not reacting to incidents after the fact but proactively strengthening their security posture. 

Getronics can also act as a translator of risk, turning highly technical cyber threats into practical guidance that executives and boards can act upon. By using real-world examples, such as the way attackers have weaponised Claude to automate phishing and malware creation, we can make the urgency tangible and help organisations justify the right investments. 

Beyond external threats, insider risk has become a growing concern. Here, we can integrate behavioural analytics and user-entity activity monitoring to help clients spot unusual patterns early, mitigating risks before they escalate. Crucially, we can build trust by deploying explainable AI tools that ensure decision-making is transparent and auditable, allowing clients to have confidence not only in their defences but also in the integrity of the systems themselves. 
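For readers curious what “spotting unusual patterns” can mean in practice, here is a deliberately simplified sketch: flag days on which a user’s activity count deviates sharply from that user’s own baseline, using a robust median-based score. The function name, data, and threshold here are hypothetical and purely illustrative; real behavioural analytics platforms use far richer signals.

```python
# Illustrative sketch only: flag days on which a user's activity count
# deviates sharply from that user's own baseline, using a robust
# median-based (MAD) score. Names, data, and threshold are hypothetical.
from statistics import median

def flag_anomalous_days(daily_counts, threshold=3.5):
    """Return indices of days whose modified z-score exceeds `threshold`."""
    med = median(daily_counts)
    mad = median(abs(c - med) for c in daily_counts)  # median absolute deviation
    if mad == 0:
        return []  # no variation in the baseline, nothing to flag
    return [i for i, c in enumerate(daily_counts)
            if 0.6745 * abs(c - med) / mad > threshold]

# A user who normally performs around 40 actions a day suddenly performs 400:
history = [38, 42, 40, 41, 39, 37, 43, 400]
print(flag_anomalous_days(history))  # the spike on the last day is flagged: [7]
```

A median-based score is used rather than a simple mean and standard deviation because a single extreme spike would inflate the standard deviation enough to hide itself.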

Finally, navigating regulation is an area where many organisations struggle. With frameworks like the EU AI Act emerging and regulatory expectations evolving in the UK and US, we are well positioned to guide businesses through compliance, helping them stay both secure and aligned with governance requirements. 

In short, the role of Getronics is to give your organisation confidence: that your defences are strong, your risks are managed, and you are ready for the next wave of AI-driven threats. 

Getronics Editorial Team

