
Deepfake Scams 2.0: When Your Boss Calls and It’s Not Them

AI-driven deepfake scams are evolving fast. From cloned CEO voices to fake video calls, fraudsters are impersonating leaders to steal millions. Learn how these “boss call” deepfake scams work, real-world cases from Hong Kong to India, and what companies and individuals can do to protect themselves.

In a world where hearing your manager’s voice or seeing your CEO on screen once meant a straightforward request, technology is rewriting the rulebook. Thanks to astonishing advances in artificial intelligence-driven video and voice synthesis, fraudsters are now staging high-stakes impersonation attacks inside boardrooms, not just inboxes. This blog will walk you through how these “your boss called” scams work, real-world cases, law-enforcement responses, and—most importantly—what companies and individuals can do to protect themselves.

1. Setting the scene: The rise of the impersonator

It used to be that corporate fraud arrived via phishing emails or spoofed invoices: someone pretended to be a vendor, and you paid them. Recently the method has escalated. Fraudsters no longer need just an email address; they aim for your eyes and ears. They imitate the voice and face of senior executives and then request urgent transfers, changes to banking instructions or access to sensitive data.

Why is this more dangerous? Because:

  • Seeing and hearing someone gives a sense of authenticity that email cannot match.
  • It bypasses many traditional controls (for example: “my boss sent me this on WhatsApp” sounds legit).
  • Deepfake tools—once the province of Hollywood trickery—are now accessible and affordable.

From these shifts comes a new threat model: when your boss calls, it might not be them.

Here’s how the “boss calls” deepfake scam typically plays out:


  1. The fraudster researches the company, identifies a target executive (CEO, CFO, Head of Internal Audit) whose voice or appearance can be mimicked.
  2. They craft an urgent narrative: a secret acquisition, a confidentiality clause, a banking glitch, or a time-sensitive payment.
  3. They contact a subordinate (often in finance or treasury), via WhatsApp, text, email or phone, referencing the executive.
  4. They arrange a video call (or voice call) in which the executive appears (via video deepfake) or their voice is cloned. The subordinate sees the “face”, hears the “voice”, and perceives authenticity.
  5. The subordinate authorises a large transfer or provides access, believing the request is legitimate.
  6. By the time the fraud is discovered, the funds are gone and the trail vanishes.

As one cyber-security expert put it: “These hyper-realistic videos often impersonate CEOs, relatives or government officials, tricking victims into sending money or sharing sensitive data.”

2. Real-world case studies: The damage is real

Hong Kong – The HK$200 million video-conference deepfake

In one of the most widely reported cases, a multinational company’s Hong Kong branch lost HK$200 million (roughly US $26 million) after an employee participated in a video conference that appeared to include multiple senior executives. Investigators later determined that every person on that call—except the victim—was a deepfake. The “CFO” and others were synthetic, the voices cloned, the faces manipulated.

According to Hong Kong Police, the scammers impersonated senior officers and instructed the employee to transfer funds to five designated bank accounts.

This case shows how far the threat has evolved: multiple deepfake participants in a single conference call, not just a one-on-one phone scam.

United Kingdom – WPP and the cloned voice of the CEO

In another significant incident, the global advertising firm WPP reported that their CEO, Mark Read, was targeted by deepfake scammers using voice cloning and publicly available video footage.

According to the report, a fake WhatsApp account was created under his name (with his photo), a Microsoft Teams call was arranged, and a voice clone of the CEO was used during the meeting. Although the scam did not succeed in this instance, it shows the level of sophistication involved: the fraudsters mimicked both video and voice, and used the chat function to impersonate the executive.

India – Deepfake voice scam of a lawyer

While not a full “boss-calls” case, the following Indian case illustrates how voice-cloning attacks are already active domestically. A lawyer in Haryana, India received a call from what sounded exactly like a close friend, asking for financial help. The voice matched, the tone was familiar, and he transferred funds before realising the call was fake.

These examples are not isolated. According to a recent report, in the U.S. alone more than 100,000 deepfake scams were reported in one quarter, with losses surpassing US $200 million. 

3. Why this works: The psychology + the tech

Psychology of trust

  • Face and voice are powerful trust signals. When you see someone you recognise (or think you recognise) saying something urgent, your instinct is to act.
  • The request often carries urgency (“this is confidential”, “this must not be discussed”, “time is of the essence”) which reduces reflection and increases compliance.
  • The scenario uses authority (“the CEO asked me”, “it’s from the board”, “confidential acquisition”) and urgency, two classic elements of social engineering.
  • Many organisations’ workflows assume digital or remote approval is sufficient; they have fewer safeguards around “boss calls”, especially when remote work is normalised.

The technology enabling the scam

  • Voice cloning: Even short samples of an executive’s public speeches or interviews allow models to create synthetic voices that approximate pitch, cadence and tone.
  • Video deepfakes: Modern models can overlay a face onto a video, sync lips to voice, and replicate body language convincingly. The Hong Kong case involved multiple synthetic participants. 
  • Zoom/Teams/WhatsApp proliferation: Remote meeting platforms are now normal. It is normal to receive “urgent calls” from above. That normalcy becomes the backdrop that fraudsters exploit.
  • Process gaps: corporate supply-chain and finance processes that allow transfers on a single approval, without sufficient verification of the requester’s identity.

The combination of psychology + technology + process gap equals a perfect storm.

4. How law enforcement and regulators are responding

Law enforcement around the world is increasingly aware of this deepfake threat, but many of the systems are still catching up.

  • In Hong Kong, the police received a report of the HK$200 million scam and classified it as “obtaining property by deception”. Investigations were opened, but arrests were not immediately made. 
  • Agencies such as the U.S. Financial Crimes Enforcement Network (FinCEN) and other cyber-regulators are issuing warnings about “CEO-impersonator” scams powered by AI.
  • In India, cybercrime units are investigating deepfake scams, but the pace and capability are still building. For example, a case in Ghaziabad involved deepfake video extortion.

Regulators are also considering how to hold platforms, banks and corporates accountable for prevention and reporting. Cyber-security frameworks are being updated to include deepfake threat vectors.

However, some challenges remain:

  • Deepfake attribution is hard: who created the video/voice, from where, using what dataset? Cross-border investigation is complex.
  • Many incidents go unreported because firms fear reputational damage. That means data gaps persist, and law enforcement lacks full visibility.
  • Existing laws around impersonation, identity fraud, and voice/image rights may not explicitly cover AI-generated voices/faces. Legal frameworks need updates.
  • Detection tools are still playing catch-up: The same technology that makes deepfakes possible is used to try to defend against them, creating an arms race.

5. What companies can do: Building resilience

Prevention is the best medicine. Here are practical steps companies should take:

Verification protocols

  • Never allow any large transfer or change in banking instruction based solely on a video or voice call, even if it appears to come from the “CEO”. Always use multi-factor verification: for example, call back on a known number, send a signed internal memo, check with another executive, or use an independent channel (a minimal code sketch of this rule follows this list).
  • Build a “trusted contacts” list: a known number or identity of the executive (not just the one in the call).
  • Require formal written approval (email, internal tool) for high-value transactions—verbal instructions are not enough.
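To make this concrete, here is a minimal Python sketch of the kind of rule described above. Every name in it (the PaymentRequest fields, the TRUSTED_CONTACTS list, the threshold) is a hypothetical illustration rather than a real system; the point is simply that a live call, however convincing, never satisfies the check on its own.

```python
# Minimal sketch of an out-of-band verification rule for payment requests.
# All names and thresholds are hypothetical illustrations, not a real system.

from dataclasses import dataclass

# Known-good contact details collected in advance, never taken from the call itself.
TRUSTED_CONTACTS = {
    "cfo": "+852-0000-0000",   # placeholder numbers
    "ceo": "+44-0000-000000",
}

HIGH_VALUE_THRESHOLD = 10_000  # e.g. USD; tune to your own risk appetite

@dataclass
class PaymentRequest:
    requested_by: str        # role claimed on the call, e.g. "cfo"
    amount: float
    new_payee: bool          # payee not seen before?
    written_approval: bool   # signed memo or record in the internal tool?
    callback_verified: bool  # confirmed by calling back on the trusted number?

def may_execute(req: PaymentRequest) -> bool:
    """Return True only if the request passes every independent check."""
    if req.amount >= HIGH_VALUE_THRESHOLD or req.new_payee:
        # A live video or voice call is never sufficient on its own.
        return (req.written_approval
                and req.callback_verified
                and req.requested_by in TRUSTED_CONTACTS)
    return req.written_approval

# Example: an "urgent CFO call" asking for a transfer to a brand-new account
req = PaymentRequest("cfo", 250_000, new_payee=True,
                     written_approval=False, callback_verified=False)
print(may_execute(req))  # False: hold the payment and verify out of band
```

The design choice worth noting is that verification relies on channels the fraudster does not control: a number already on file, a written record, a second person. Nothing that originates from the call itself counts as proof.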

Awareness and training

  • Conduct scenario-based training: show employees how a deepfake “CEO call” might look/sound and what to do when they receive such requests.
  • Raise awareness of red flags: urgency, secrecy, branching from usual process, request to bypass controls, or insistence on immediate action.
  • Ensure finance/treasury teams know that “seeing is not verifying”: a video appearance does not equal authenticity.

Technology and monitoring

  • Deploy anomaly detection: monitor for flags such as transfers to new accounts, unusual destination geographies, or requests arriving through unusual channels (see the sketch after this list).
  • Use voice-print/face-print authentication systems for internal meetings, or log meetings and look for anomalies later.
  • Consider vendor tools or internal tools to detect deepfake media (audio/video). Even though not perfect, they add a layer of defence.
  • Insist on change management for banking and payment operations: new payee setup and changes to payment instructions must include verification steps.
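As a rough illustration of the anomaly-flag idea mentioned in the first bullet, here is a small rule-based sketch in Python. The payee list, country list, thresholds and channel names are assumptions chosen for the example; a real deployment would derive them from the organisation’s own payment history and risk policy, and might sit alongside statistical models rather than replace them.

```python
# Minimal sketch of rule-based anomaly flags for outgoing transfers.
# Field names, lists and thresholds are illustrative assumptions only.

from datetime import datetime

KNOWN_PAYEES = {"ACME-SUPPLIES", "PAYROLL-PROVIDER"}   # hypothetical payee IDs
USUAL_COUNTRIES = {"HK", "IN", "GB"}
LARGE_AMOUNT = 50_000

def transfer_flags(payee_id: str, amount: float, country: str,
                   requested_at: datetime, channel: str) -> list:
    """Return human-readable reasons this transfer deserves manual review."""
    flags = []
    if payee_id not in KNOWN_PAYEES:
        flags.append("new or unknown payee")
    if country not in USUAL_COUNTRIES:
        flags.append(f"unusual destination country: {country}")
    if amount >= LARGE_AMOUNT:
        flags.append("large amount")
    if requested_at.hour < 8 or requested_at.hour >= 19:
        flags.append("requested outside business hours")
    if channel in {"whatsapp", "personal phone", "video call"}:
        flags.append(f"informal request channel: {channel}")
    return flags

# Example: an after-hours WhatsApp request to pay a brand-new payee abroad
print(transfer_flags("NEWCO-LTD", 200_000, "KY",
                     datetime(2024, 5, 3, 22, 15), "whatsapp"))
# Prints all five flags; any non-empty list should route the request to a human reviewer.
```

Even a handful of simple flags like these would have forced a pause in the scenarios described earlier, because the fraudulent requests combined a new payee, an unusual channel and artificial urgency.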

Incident response

  • Have a plan ready: if an employee reports something suspicious, freeze transfers, notify the bank, and involve cyber-forensics.
  • Engage insurers: many cyber-insurance policies now cover business-email-compromise (BEC) and CEO-impersonation risk; check whether your policy covers deepfake-enabled fraud.
  • Preserve logs and evidence: recording the meeting (where applicable) and capturing the voice/video of the fraudulent call can help law enforcement and later tracing.

Board-level oversight

  • Ensure the board understands the risk and directs that “boss calls” never override controls.
  • Review and update internal controls, including in remote-work contexts where employees may feel they are “helping the boss”.

One useful summary: “The shift from passive misinformation to active manipulation changes the game entirely. Deepfake attacks aren’t just threats to reputation or financial survival anymore—they directly undermine trust and operational integrity.” 

6. What individuals can do: Stay alert

Even if you’re not the CEO or CFO, you play a role. Whether you’re in finance, operations or any part of the organisation:

  • Never assume “it’s the boss” just because you see them on screen or hear their voice.
  • If a request is urgent, secretive or bypasses usual process—pause. Use a known channel to verify (call the executive’s known number, message a known address).
  • Beware of voice-cloning red flags: the voice might be “close”, but the request is unusual.
  • Don’t assume “remote meeting = safe”. Always treat unexpected video/voice calls with caution.
  • If you are asked to transfer funds or share credentials, but the request came via chat/WhatsApp/Teams with someone claiming to be senior: stop and verify.
  • Be cautious about public recordings of executives: the more video/audio of them is available, the easier for scammers to clone them.

7. What’s next: Looking ahead

The technology is moving fast, and the fraudsters are innovating. Some trends to watch:

  • Real-time deepfake calls: Instead of pre-recorded video, scammers may move towards live deepfake sessions, where they respond in real time using AI.
  • Multi-person deepfake meetings: as in the Hong Kong case, multiple fake executives appearing together.
  • Voice-only scams blended with video injection: imagine hearing your boss’s voice then a request to log in to a system while a synthetic face “assists”.
  • AI-assisted phishing + deepfake: a hybrid where email/text sets up the call, then the call completes the fraud.
  • More targeting of smaller companies: large corporations may beef up defences, so fraudsters may move to mid-sized firms with weaker controls.
  • Rise of tools to detect and authenticate media: deepfake-detection services, authentication tokens (video watermarking), and regulations requiring warnings or labels on synthetic media.
  • Legal frameworks: we may see new laws requiring consent for voice/face cloning, or specific liability for companies failing to verify high-value transactions in remote contexts.

Bottom line: the next five years will likely see an arms race between fraudsters using generative-AI impersonation and organisations trying to defend against it.

Trust, technology and the human factor

This is not purely a technology problem; it is a human trust problem. When someone you believe is your boss asks you to take action, your instinct is to comply. The confluence of that human instinct with high-quality AI tools makes a dangerous mix.

For companies in India or globally, the takeaway is clear: invest in controls as much as you invest in innovation. Make sure your people are trained. Make sure your processes adapt. Make sure you treat a “boss call” with the same scepticism you would a sudden email from a stranger.

For individuals, your best defence is critical thinking. If the request is “urgent”, “confidential”, “must not be spoken about” and involves money or credentials—stop. Verify. Walk away if needed. A real executive will understand.

The new frontier of scams is here: not just fake emails, but fake faces and fake voices. When your boss calls, it might not be them. Recognising that possibility might save your company—or your job.
