How AI Changes the Attack Game
AI does not invent new attacks. It supercharges old ones — making them faster, more personalised, and harder to spot. Here is what you need to recognise.
The Multiplication Problem
AI does not make phishing more clever. It makes it massively scalable. What once took hours of manual effort now takes seconds.
A single attacker can now generate thousands of unique phishing emails, each referencing your name, job title, recent projects, and colleagues. No more “Dear Customer” generics.
Microsoft and OpenAI documented this escalation in their joint 2024 threat intelligence report.
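To make the economics concrete, here is a minimal sketch of the pattern, assuming hypothetical profile fields and a fixed template. A real attacker feeds the same scraped material to a language model rather than a template, but the point is the same: the cost of one more unique, personalised email is effectively zero.

```python
# Minimal sketch of why personalised phishing now scales: one template,
# many scraped profiles. The fields and template here are hypothetical.

profiles = [
    {"name": "Asha Patel", "role": "Finance Lead", "project": "Q3 vendor migration", "colleague": "Tom"},
    {"name": "Luis Ortega", "role": "IT Manager", "project": "SSO rollout", "colleague": "Mei"},
    # ...thousands more rows, harvested from public profiles
]

template = (
    "Hi {name}, {colleague} mentioned you are driving the {project}. "
    "As {role}, could you approve the attached invoice today?"
)

# Generating one unique, context-aware lure per target takes milliseconds.
lures = [template.format(**p) for p in profiles]
print(f"{len(lures)} unique emails generated")
```

Every message is different, so filters that rely on spotting identical bulk mail see nothing to cluster.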
30 Seconds to Clone a Voice
Thirty seconds of audio from a conference talk, earnings call, or social media video is enough for an attacker to clone someone's voice convincingly.
Red Flags
- Unexpected call requesting urgent financial action
- Caller discourages second-channel verification
- Audio artefacts — slight robotic quality or unnatural pauses
- Emotional pressure: urgency, secrecy, authority
- Caller cannot answer personal questions only the real person would know
When Seeing Is No Longer Believing
Real-time face-swap technology lets attackers impersonate anyone on a video call. In a 2024 case in Hong Kong, scammers ran a video call in which every participant except the victim was a deepfake of a real colleague, resulting in a US$25M loss.
Detection Tips
- Watch for slight lag between audio and lip movement
- Reduced or unnatural blinking patterns
- Background glitches, flickering, or inconsistent lighting
- Participant avoids turning head to extreme angles
- Ask them to hold a hand in front of their face — deepfakes often fail this
- Artefacts around jawline, ears, and hairline at high resolution
Your LinkedIn Is Their Playbook
AI-powered spear-phishing starts with open-source intelligence gathering. Your public profiles become the raw material for a hyper-targeted attack.
An AI model drafts the message from that raw material: fluent, contextually accurate, and familiar in tone. Traditional spam filters see nothing wrong. The only defence is a verification habit that runs on suspicion, not appearance.
The Bot Makes the Call. The Human Closes It.
AI voice bots now handle the opening phase of social engineering calls. They sound natural, follow a script, and assess whether you are a viable target — before handing off to a human attacker.
Red Flags
- Inbound call from an unknown number claiming to be your bank or IT
- Caller asks you to “verify” information they should already have
- Slight delay before responses, overly smooth scripted phrasing
- Pressure to stay on the line — “do not hang up to call back”
- Request for OTPs, PINs, or remote access credentials
AI Changes Scale, Not the Pattern
Every AI-powered attack still relies on the same fundamental patterns: urgency, authority, isolation, and emotional pressure. AI makes these attacks faster and more personalised — but the defence playbook remains the same.
If you verify through a separate channel, AI cannot help the attacker.
- Verify any unusual request through a separate, pre-established channel (see the sketch after this list)
- Establish codewords for high-value financial or data requests
- Never act on urgency alone — legitimate requests survive a 10-minute delay
- Minimise your public data footprint to reduce OSINT exposure
- Report suspected AI-powered attacks to your security team immediately
- Keep training current — AI attack capabilities evolve quarterly
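The separate-channel rule can even be written down as a policy check. Below is a minimal sketch; the threshold, action names, and codeword step are illustrative assumptions, not a standard, so adapt them to your organisation's actual procedures.

```python
# Sketch of the "verify through a separate channel" habit as a policy check.
# Threshold, action names, and the codeword step are assumed for illustration.

HIGH_VALUE_THRESHOLD = 1_000  # assumed cut-off; set per your policy
SENSITIVE_ACTIONS = {"payment", "credential_reset", "data_export"}

def requires_out_of_band_check(action: str, amount: float = 0.0, urgent: bool = False) -> bool:
    """Return True when a request must be re-verified on a pre-established channel."""
    if action in SENSITIVE_ACTIONS:
        return True
    if amount >= HIGH_VALUE_THRESHOLD:
        return True
    # Urgency alone is a red flag, never a reason to skip verification.
    return urgent

def handle_request(action: str, amount: float = 0.0, urgent: bool = False) -> str:
    if requires_out_of_band_check(action, amount, urgent):
        # Call back on a number from the directory, never one supplied in the
        # request itself, and confirm the pre-agreed codeword before acting.
        return "PAUSE: call back via known number and confirm codeword"
    return "proceed with normal handling"

print(handle_request("payment", amount=25_000, urgent=True))
```

The design choice is deliberate: urgency raises the bar for verification instead of lowering it, which is exactly the inversion the attacker's script depends on you not making.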