AI vs. Cyber Defence: When Attackers Scale Faster Than Defenders 

The emergence of generative AI has not only transformed content creation, software development, and customer engagement, but has also irreversibly changed the dynamics of AI security risks. In boardrooms across Europe and beyond, security and technology leaders are asking a simple but urgent question: Are we still in control? The answer, increasingly, is no.

In 2024, a finance employee at a multinational enterprise authorised a $25 million payment after receiving what appeared to be a video call from their CEO. The voice, the face, and the context seemed authentic, but it wasn’t. It was a deepfake: a synthetic, AI-generated impersonation indistinguishable from reality. The incident was not isolated; it marked the acceleration of a broader trend: AI-enabled cybercrime scaling faster than conventional defences can respond.

AI isn’t just another attack vector. It’s a force multiplier, one that is lowering the barrier to entry for threat actors and increasing the sophistication, volume, and success rate of malicious campaigns. In response, security strategies must evolve at the same pace.

Generative AI in the Hands of Attackers 

While AI holds enormous potential for innovation and efficiency, its adoption by malicious actors has been equally rapid. In the past two years alone, IBM X-Force has reported a 1000% increase in phishing volumes globally, with much of that growth attributed to AI-generated content. Attackers no longer need language skills, social engineering expertise, or even access to expensive malware development kits. A single generative model can now compose thousands of tailored phishing messages, complete with internal company language, in seconds. 

In many cases, these messages are not just linguistically flawless; they are also contextually intelligent. They mimic the tone of senior executives, reference recent meetings, and target individuals based on their role, region, or access level. The result: a dramatic increase in success rates and a significant decline in user detection.

But phishing is only the beginning. Voice cloning technologies are now used to create synthetic audio impersonations of executives. Deepfake videos are being deployed in real-time to validate fraudulent transactions or influence board-level decisions. AI-generated malware is written, tested, and redeployed in minutes, with code variations designed to evade signature-based detection. Cybercrime, in short, has industrialised. 

AI Amplification 

At the core of this transformation lies the concept of “AI amplification”—the compounding effect of applying artificial intelligence to cyber threat activity, which in turn amplifies AI security risks. What previously took a team of skilled operators weeks to prepare can now be orchestrated by a single actor using a few prompts and an off-the-shelf model. Tasks such as code obfuscation, vulnerability scanning, and user profiling are being automated to a degree previously unseen.

What makes AI amplification especially dangerous is its adaptability. Unlike scripted attacks, AI-generated threats can evolve during execution, further expanding the landscape of AI security risks. For example, an AI-based phishing campaign can continuously improve based on user interaction patterns. Some malware now adapts its actions based on the device it infects, monitoring system conditions, installed security tools, or user behaviour to decide when and how to execute. Even fake voices can alter tone mid-conversation to mimic stress or urgency.


Why Defences Are Lagging Behind 

Despite growing awareness, many enterprises remain ill-equipped to counter this shift. There are four reasons for this: 

  • First, most detection systems are not designed to identify AI-generated threats. Signature-based tools, while still useful, fail to flag polymorphic malware or synthetically authored phishing messages that deviate from known templates. Even advanced behavioural analytics struggle to spot deepfakes delivered through legitimate collaboration platforms. 
  • Second, security operations centres (SOCs) are overwhelmed. The volume of alerts, many of them false positives, consumes valuable analyst time. When genuine threats emerge—particularly novel or low-frequency ones—they are often buried. And while AI can help reduce this burden, only a minority of SOCs currently integrate AI-driven analysis at scale. 
  • Third, the talent gap is growing. Organisations face persistent shortages in cybersecurity personnel, with AI-specific expertise particularly scarce. According to recent data, over 50% of CISOs say their teams lack the skills to identify or mitigate AI-enabled threats. Furthermore, the integration of newer generations into the workforce is likely to increase human-driven risk. For example, among employees in the United States, only 31% of Gen Z reported feeling confident in recognising phishing attempts, while 72% admitted to having opened at least one link at work that seemed suspicious — more than any older generation.
  • Finally, structural inertia is a factor. Security investments often prioritise regulatory compliance over threat adaptability. Frameworks are audited annually; attackers iterate daily. 

The result is a strategic disadvantage. While enterprises adapt incrementally, attackers evolve continuously. 

Three Threat Scenarios Organizations Now Face 

1. AI-Driven Phishing at Scale 

Across multiple sectors, phishing campaigns have shifted from crude, generic emails to precision-engineered lures. AI models trained on data harvested from past breaches, press releases, and executive bios craft messages that bypass both technical filters and human scepticism. In many incidents, employees acted not out of carelessness, but because the messages were simply too convincing.

  • Emails mimic company language and formatting perfectly. 
  • Subject lines and timing are tailored to internal events. 
  • Personalisation now extends beyond names to job roles and meeting history. 

2. Deepfake-Enabled Fraud 

Impersonation attacks using deepfakes are becoming more prevalent. Targets are typically finance or HR professionals, asked to act urgently on what appears to be a live video or voicemail from an executive. The psychological pressure, combined with visual or auditory cues, often leads to compliance. The success of these attacks is not due to technological brilliance, but to the trust users place in familiar formats such as video calls, voice notes, or internal channels. 

  • Real-time deepfake voice calls increasingly target mobile messaging apps. 
  • Audio deepfakes are being used to bypass voice verification systems. 
  • Attackers often pair deepfakes with email or chat context to build legitimacy. 

A similar threat involves the use of AI to generate entirely synthetic digital personas, complete with fake email trails, LinkedIn profiles, and even voiceprints. These are used to infiltrate organisations, access restricted systems, or build credibility over time in supply chain ecosystems. This threat is particularly relevant to organisations with distributed onboarding, remote access policies, or third-party supply chain portals.

  • Threat actors build “ghost employees” to enrol in supplier portals or request access. 
  • Synthetic identities have been spotted initiating B2B scams via procurement teams. 
  • AI-generated images and CVs are used to apply for remote roles in sensitive functions. 

3. Generative Malware and Evasive Payloads 

AI-generated malware is already being observed in the wild. These payloads are not just created quickly—they are designed to mutate. Some can test themselves against security tools and adapt their signatures in real time. Others include built-in logic to detect whether they are running in a sandbox, delaying execution until conditions are “safe.” For traditional antivirus or EDR tools, such threats represent a significant challenge. 

  • Malware obfuscation is now dynamically generated and constantly refreshed. 
  • Some strains use AI to selectively avoid detection only in monitored environments.
  • Offensive AI tools like WormGPT are lowering the barrier to writing evasive code. 

How Organizations Can Respond to AI Security Risks 

Addressing AI-scale threats like these requires a total change in mindset. You’re no longer just containing threats; you’re proactively anticipating them. Consider:

  • Modern threat detection with embedded AI: Security platforms that use machine learning and behavioural analytics can spot subtle anomalies, such as an executive logging in at an unusual hour, or a device uploading unexpected volumes of data. These tools are not silver bullets, but they are a necessary foundation for operating at speed and scale. In the future, AI agents and human analysts will work closely together to counter this expanding range of attacks. A minimal sketch of this kind of anomaly detection follows this list.
  • Resilience through awareness: Human users remain both a vulnerability and a strength. Updated awareness programmes must now include training on synthetic media, deepfake detection, and AI-powered social engineering. The goal is not to instil paranoia, but to foster critical thinking: trust, but verify, especially when the request comes from a familiar voice or face.
  • Zero trust as the default: Zero Trust frameworks, long discussed, are now essential. Continuous verification of users, devices, and data flows prevents attackers from moving laterally once inside the perimeter. Multi-factor authentication (MFA), conditional access, and micro-segmentation should no longer be optional. A conceptual policy-evaluation sketch also appears after this list.
  • Integrated threat intelligence: Understanding attacker methods requires more than internal telemetry. Integration with real-time threat intelligence feeds—particularly those tracking AI-assisted tools and dark web activity—gives defenders the context to act before incidents escalate. Collaborative frameworks across industries will also play a crucial role. 
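
To make the first point more concrete, here is a minimal sketch of behavioural anomaly detection in Python, using scikit-learn’s IsolationForest. The features (login hour, data uploaded, failed logins) and values are illustrative assumptions for this article, not a description of any specific platform’s implementation.

```python
# Minimal sketch: flag login sessions that deviate from a user's baseline.
# Feature choices and values are illustrative assumptions, not a vendor's
# actual detection logic.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login hour (0-23), MB uploaded in session, failed logins in last 24h]
baseline_sessions = np.array([
    [9, 12.0, 0], [10, 8.5, 0], [14, 20.0, 1], [11, 15.2, 0],
    [16, 9.8, 0], [9, 11.1, 0], [13, 18.4, 0], [15, 7.3, 1],
])

# Train an unsupervised model on "normal" behaviour only.
model = IsolationForest(contamination=0.05, random_state=0)
model.fit(baseline_sessions)

new_sessions = np.array([
    [10, 14.0, 0],   # ordinary working-hours session
    [3, 900.0, 4],   # 3 a.m. login, large upload, repeated failed logins
])

# predict() returns 1 for inliers and -1 for outliers.
for session, verdict in zip(new_sessions, model.predict(new_sessions)):
    label = "anomalous" if verdict == -1 else "normal"
    print(f"session {session.tolist()} -> {label}")
```

In practice, such a score would be one signal among many, correlated with identity, device, and threat-intelligence context before an analyst or an automated playbook acts on it.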

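Similarly, the Zero Trust principle of continuous verification can be expressed as an explicit access decision. The sketch below is a conceptual illustration only; the attributes, rules, and risk thresholds are assumptions made for this article, not any product’s conditional-access engine.

```python
# Conceptual Zero Trust access decision: every request is evaluated on
# identity, device posture, network context, and risk; nothing is trusted
# by default. Rules and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_passed: bool
    device_compliant: bool   # e.g. disk encryption on, EDR agent present
    network_zone: str        # "corporate", "vpn", or "unknown"
    risk_score: float        # 0.0 (low) to 1.0 (high), from analytics

def evaluate(request: AccessRequest) -> str:
    """Return 'deny', 'step-up' (extra verification), or 'allow'."""
    if not request.device_compliant or request.risk_score >= 0.8:
        return "deny"
    if not request.mfa_passed or request.network_zone == "unknown":
        return "step-up"
    return "allow"

print(evaluate(AccessRequest("cfo", True, True, "corporate", 0.1)))  # allow
print(evaluate(AccessRequest("cfo", True, True, "unknown", 0.4)))    # step-up
print(evaluate(AccessRequest("cfo", True, False, "vpn", 0.3)))       # deny
```
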
A glimpse of what the future could look like in practice recently emerged when Google’s AI-driven “Big Sleep” system prevented exploitation of a critical SQLite vulnerability before threat actors could act. While this technology isn’t yet publicly available, it illustrates the next evolution of cyber defence — AI systems capable of identifying and neutralising threats autonomously, often before human intervention is even possible. Such developments highlight a future where proactive, self-defending architectures become standard, transforming cyber defence from reactive response to intelligent anticipation. 

Closing the Gap Starts Now 

The cybersecurity arms race has entered a new phase. AI has shifted the balance of power toward the attacker, introducing a new era of AI security risks, but that shift is not permanent. Enterprises that act now by embracing modern detection, enhancing workforce awareness, and engaging strategic partners can regain the initiative.

The window to adapt is narrow. But the opportunity is clear. 

At Getronics, we’re already working with organisations across finance, healthcare, manufacturing, and government to build AI-ready defences. We invite you to join them. 

Contact us or download our executive whitepaper to explore how your organisation can turn defence into advantage. 

Let’s meet automation with automation and win together.