How AI Changes the Attack Game

AI does not invent new attacks. It supercharges old ones — making them faster, more personalised, and harder to spot. Here is what you need to recognise.

01 // Scale

The Multiplication Problem

AI does not make phishing more clever. It makes it massively scalable. What once took hours of manual effort now takes seconds.

Before AI: 1 attacker writing 1 phishing email.
With AI: 1 attacker generating 10,000 personalised emails.

Each email is unique — referencing your name, job title, recent projects, and colleagues. No more “Dear Customer” generics.

Microsoft and OpenAI documented this escalation in their joint 2024 threat intelligence report.

02 // Deepfake Voice

30 Seconds to Clone a Voice

A short audio clip from a conference talk, earnings call, or social media video is all an attacker needs to clone someone's voice convincingly.

Scenario
You receive a call from your CFO. The voice is unmistakable. They ask you to process an urgent wire transfer for a confidential acquisition. They stress secrecy and speed. But it is not your CFO — it is an AI clone.

Red Flags

  • Unexpected call requesting urgent financial action
  • Caller discourages second-channel verification
  • Audio artefacts — slight robotic quality or unnatural pauses
  • Emotional pressure: urgency, secrecy, authority
  • Caller cannot answer personal questions only the real person would know

03 // Deepfake Video

When Seeing Is No Longer Believing

Real-time face-swap technology lets attackers impersonate anyone on a video call. In 2024, scammers in Hong Kong ran a video call in which every participant except the victim was a deepfake, resulting in a loss of roughly US$25 million.

Scenario
You join a video call with three senior executives. They discuss a sensitive acquisition and ask you to initiate a transfer. The faces, voices, and backgrounds all look normal. But every person on the call is AI-generated — except you.

Detection Tips

  • Watch for slight lag between audio and lip movement
  • Reduced or unnatural blinking patterns
  • Background glitches, flickering, or inconsistent lighting
  • Participant avoids turning head to extreme angles
  • Ask them to hold a hand in front of their face — deepfakes often fail this
  • Artefacts around jawline, ears, and hairline at high resolution

04 // AI Spear-Phishing

Your LinkedIn Is Their Playbook

AI-powered spear-phishing starts with open-source intelligence gathering. Your public profiles become the raw material for a hyper-targeted attack.

1 Harvest — Scrape LinkedIn profile, company website, press releases, social media posts
2 Enrich — AI correlates your role, projects, colleagues, and recent activity
3 Generate — LLM writes a personalised email chain mimicking a colleague’s style
4 Deliver — Email arrives referencing a real project, using correct internal jargon
5 Extract — You click a link, open an attachment, or reply with credentials

The email is fluent, contextually accurate, and feels familiar. Traditional spam filters see nothing wrong. The only defence is a verification habit triggered by the nature of the request, not by how the message looks.

05 // AI Vishing Bots

The Bot Makes the Call. The Human Closes It.

AI voice bots now handle the opening phase of social engineering calls. They sound natural, follow a script, and assess whether you are a viable target — before handing off to a human attacker.

How It Works
Your phone rings. The caller claims to be from your bank’s fraud department. They verify your name and mention “suspicious activity.” The first minute is smooth, professional, scripted — handled entirely by an AI bot. Once you engage and express concern, a human attacker takes over to complete the social engineering: extracting OTPs, PINs, or remote access.

Red Flags

  • Inbound call from an unknown number claiming to be your bank or IT
  • Caller asks you to “verify” information they should already have
  • Slight delay before responses, overly smooth scripted phrasing
  • Pressure to stay on the line — “do not hang up to call back”
  • Request for OTPs, PINs, or remote access credentials

06 // The Defence Mindset

AI Changes Scale, Not the Pattern

Every AI-powered attack still relies on the same fundamental patterns: urgency, authority, isolation, and emotional pressure. AI makes these attacks faster and more personalised — but the defence playbook remains the same.

Verification is the answer.
If you confirm requests through a separate channel, the attacker's cloned voice, deepfaked face, or AI-written email counts for nothing.
  • Verify any unusual request through a separate, pre-established channel
  • Establish codewords for high-value financial or data requests
  • Never act on urgency alone — legitimate requests survive a 10-minute delay
  • Minimise your public data footprint to reduce OSINT exposure
  • Report suspected AI-powered attacks to your security team immediately
  • Keep training current — AI attack capabilities evolve quarterly

