01/09/2025
Recent disclosures from AI developers highlight a growing pattern: threat actors are attempting to misuse generative AI platforms to draft phishing content, generate malicious code, and circumvent safety controls.
These developments reflect a broader dynamic within cybersecurity. As attackers experiment with AI-enabled tactics, organisations are accelerating investment in AI-driven defensive capabilities, including behavioural analytics, automated incident response, and zero-trust architectures. The result is an evolving landscape where offensive and defensive AI capabilities are advancing in parallel.
What’s happening and why it matters
AI misuse is far more than hypothetical; it is happening now. Cybercriminals are not just planning attacks; they are using AI to orchestrate them.
Threat reporting indicates several emerging patterns:
- Generative AI is being used to create highly convincing phishing campaigns with improved linguistic quality and contextual targeting.
- Large language models are being tested to generate or refine malicious scripts and automate elements of social engineering.
- AI-assisted workflows can streamline attack preparation, from reconnaissance and victim profiling to drafting extortion communications.
While most mainstream AI platforms implement safeguards, the availability of open-source models and modified systems lowers the barrier for misuse. This increases the risk that even moderately skilled actors can deploy tactics that previously required specialised expertise.
Taken together, these patterns underscore how AI misuse is putting complex, damaging cyberattacks within reach of actors with limited skill.
The threat you can’t ignore
All of this is part of a larger global shift. Generative AI enables attackers to scale their operations, personalise deception, and automate aspects of malware development and social engineering. Techniques such as deepfake voice cloning, synthetic identity creation, and AI-generated phishing campaigns are becoming more accessible, and these threats are no longer limited to sophisticated perpetrators: they are increasingly within reach of low-skilled actors.
At the same time, insider risk remains a persistent concern. The availability of powerful AI tools increases the likelihood of accidental misuse, policy circumvention, or shadow adoption within organisations.
Defensive responses are evolving accordingly. Organisations are deploying AI-powered anomaly detection, user and entity behaviour analytics (UEBA), automated response orchestration, and zero-trust architectures. These measures aim to detect subtle behavioural deviations rather than relying solely on signature-based detection.
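The common idea behind these measures is learning a baseline of normal activity and flagging deviations from it. As a loose illustration (not a production UEBA pipeline), the sketch below uses scikit-learn's IsolationForest on per-session features; the feature set, values, and contamination threshold are assumptions chosen purely for demonstration.

```python
# Minimal sketch: flagging behavioural deviations with an unsupervised model.
# Features and values are hypothetical; real UEBA uses far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Assumed per-session features: [login_hour, mb_uploaded,
#                                distinct_hosts_accessed, failed_auth_attempts]
baseline = np.array([
    [9,  12.0, 3, 0],
    [10,  8.5, 2, 1],
    [14, 15.2, 4, 0],
    [11,  9.8, 3, 0],
])  # historical "normal" sessions for one user

model = IsolationForest(contamination=0.05, random_state=42).fit(baseline)

# A new session: 3 a.m. login, large upload, many hosts touched.
new_session = np.array([[3, 480.0, 27, 5]])

# predict() returns 1 for inliers and -1 for outliers.
if model.predict(new_session)[0] == -1:
    print("Behavioural deviation detected: escalate for review")
```

The point of the sketch is the shift in approach: rather than matching a known bad signature, the model scores how far a session sits from that user's own history.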
Regulatory frameworks, including the EU AI Act, sector-specific cybersecurity standards, and evolving guidance in the United States, add another layer of complexity, requiring organisations to balance innovation with governance and accountability. For businesses worldwide, the challenge is therefore not only technical but also one of compliance.
How organisations should respond
Understanding AI-enabled threats is only the first step. Effective response requires a structured approach that combines technology, governance, and workforce awareness.
Organisations should prioritise:
- Continuous monitoring supported by behavioural analytics rather than static rules.
- Clear governance over AI usage, including policies addressing internal experimentation and shadow AI (see the sketch after this list).
- Transparent, explainable security tools that support auditability and regulatory compliance.
- Executive-level risk translation, ensuring boards understand both technical exposure and strategic implications.
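Governance policies also benefit from simple detective controls. As a hedged illustration, a script can cross-check egress or proxy logs against a list of known AI service domains and an internal allowlist; the log format, domain list, and allowlist below are hypothetical assumptions, not a vetted catalogue.

```python
# Minimal sketch: surfacing possible "shadow AI" usage from proxy logs.
# The allowlist, domain set, and CSV log format are illustrative assumptions.

SANCTIONED = {"api.anthropic.com"}  # hypothetical: the organisation's approved provider
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_shadow_ai(proxy_log_lines):
    """Yield (user, domain) pairs for AI endpoints outside the allowlist."""
    for line in proxy_log_lines:
        user, domain = line.strip().split(",")  # assumed format: user,domain
        if domain in KNOWN_AI_DOMAINS and domain not in SANCTIONED:
            yield user, domain

logs = [
    "alice,api.openai.com",        # unsanctioned AI endpoint -> flagged
    "bob,api.anthropic.com",       # sanctioned -> ignored
    "carol,intranet.example.com",  # not an AI endpoint -> ignored
]
for user, domain in flag_shadow_ai(logs):
    print(f"Unsanctioned AI endpoint: {user} -> {domain}")
```

A check like this does not replace policy; it gives security teams visibility into where policy and actual usage diverge.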
AI does not fundamentally change the core principles of cybersecurity; it raises the speed and scale at which they must be applied. Organisations that already operate with strong governance, layered defence strategies, and proactive monitoring are better positioned to absorb AI-driven threat evolution.
Frequently Asked Questions (FAQ)
What does “AI misuse” refer to?
AI misuse means using artificial intelligence tools for harmful or malicious purposes: for example, to create phishing emails, generate malware, bypass safety filters, carry out influence campaigns, or automate large-scale cyberattacks.
How are cybercriminals weaponising generative AI?
They are using generative AI to:
- draft credible phishing content,
- write malicious code,
- automate influence operations,
- conduct “vibe-hacking” (tailored extortion), and
- coordinate complex attacks end to end, from victim identification through to ransom demands.
Why is AI misuse a growing threat for businesses?
Because barriers to entry are falling: with AI assistance, low-skilled actors can now deploy attacks that once required specialised expertise. At the same time, insider threats are becoming more dangerous and regulatory scrutiny is increasing.