
Winning the battle for digital trust with tactics to combat deepfake fraud

Jeanne Loganbill

We interview AI strategy and risk adviser Aarti Samani in this deepfake deep dive.


Fresh off a call with your CEO, you feel the weight of responsibility. You’ve been given a critical task – an urgent, time-sensitive assignment that could have a major impact on the company's success. 


As you log into the company bank account, the conversation replays in your head. You weren’t expecting that call and felt surprised the CEO would trust you with a transfer this large. But he sounded friendly and sincere. He knew you’d been on holiday the previous week. He’d mentioned your line manager and a couple of your colleagues by name. 


Still, something doesn’t seem right. 


You pick up the phone and dial the CEO's number. A familiar voice answers – but as you explain your assignment, you're met with confusion and denial. 


What just happened? 


Simply put, you’ve been targeted by a deepfake scam. Fortunately, your quick thinking prevented any losses. By a hair, though...


Orchestrating a $25 million cyber heist 


In January 2024, British engineering firm Arup contacted Hong Kong police. A finance worker at the firm’s regional office had sent HK$200 million (about $25.6 million) from the company’s account to unknown fraudsters. 


The worker was initially suspicious after receiving a message about a “confidential transaction” from someone claiming to be the firm’s UK-based Chief Financial Officer. But then he was invited to a video conference. 


In it, several members of Arup’s senior management team – all people the worker knew – discussed the deal in detail.  


“These fraudsters had done a lot of research,” says AI strategy and risk adviser Aarti Samani. “They’d examined the team structure and knew who was responsible for financial transactions – for example, who had the authority to transfer funds.” 


The founder and CEO of Shreem Growth Partners, Samani is a cyber security and technology expert whose clients include some of the biggest corporations in the world. Her interactive sessions help teams learn how to work successfully with AI and how to protect themselves against digital threats, including deepfake fraud.  


“This particular attack began with a spear phishing email,” explains Samani. “Shortly afterwards, the fraudsters shifted to a messaging app, speeding up communication and overwhelming the victim. This made it much harder for them to think critically or stay aware of the context.” 


The criminals targeting Arup also infiltrated the executive team, accessing photographs and voice samples of senior managers and using them to create hyper-realistic voice clones and AI-enhanced face swaps, which they deployed in real time on the video conference. 


These were the “people” Arup’s employee had spoken to.  


In the end, the finance worker sent 15 different payments to five Hong Kong bank accounts before checking in with colleagues at the group’s UK headquarters. But by then, it was too late to retrieve the money. 


The anatomy of a deepfake scam 

“Scams don’t begin with deepfakes,” says Samani. “They begin with social engineering.” 


This is where fraudsters try to manipulate people into taking actions or revealing confidential information that gives access to systems, data or money, often making first contact via email or messaging service.  


Social engineering pits a person’s natural instincts against them – their desire to please, for instance. Criminals frequently focus on one target for an extended period, building a pseudo-friendship with the individual to gain trust. They might pretend to be a known authority figure or someone employed by a respected organisation or government department. 


Sometimes, that’s enough to do the trick. Having built a foundation of trust, the fraudster pivots into puppeteer mode and begins exploiting the victim. 


However, criminals will go to much greater lengths if the reward is significant enough to justify the risk.  


In Arup’s case, that included gathering a lot of information about the company’s structure, plus images and voice samples of its senior leaders, which were used as ingredients for sophisticated deepfakes.  


“Many people think you need high-quality images to create convincing deepfakes, but that isn’t actually the case,” says Samani. “The more data you have, the more realistic the deepfake. But all you really need is one photo. AI can do the rest.” 


That said, locating photos of a company's senior management team is typically the least of a scammer's concerns. Many executives have press kits readily available online, complete with high-resolution pictures and videos packaged in a convenient zip file. 


At this point, Samani gives me a live demo of a deepfake in action. Suddenly, a famous pop star appears on screen.  


“My word, that’s convincing,” I say, confirming the celebrity’s name. 


“I did this with just one photo,” says Samani. “I didn’t train the model at all. I loaded the picture into this tool, and here I am – a real, live deepfake.” 


Samani switches the celebrity face for another I haven’t seen before. 


“You probably won’t recognise this one,” she says. “It’s the ex-chairman of a global beverage company. She has very different hair from me, but if I made a bit of effort and had the voice clone to go with it, I’d be quite believable. I’d look and sound just like her.” 


I agree, feeling both astonished and slightly rattled. 


“That’s amazing,” I say. “So, will the voice clone work in real time as well?” 


“Yes,” replies Samani. “Would you like a demo?” 


I enthusiastically agree. She shares her screen and asks me to wait for her to load a Python script. Soon, a little black box appears in the browser window. 


“This is basically a Steve Jobs clone,” she explains. “Again, I’ve done this with very little training. I provide the model with a script – the longer, the better – and we get a fairly realistic result.” 


Samani pastes some text into the box and presses a button. 


“Deepfakes are digital puppets that manipulate human faces and voices,” says Fake Steve Jobs. “They use something called a generative adversarial network, or GAN. Simply put, it’s like having two AIs that work together...” 


Fake Steve sounds astonishingly lifelike. His inflexions are different from sentence to sentence, removing much of the monotony and predictability often associated with artificial voices. 


“That’s incredible,” I say. “You could use that to leave someone a voicemail, and it would sound totally convincing.” 


Samani agrees, explaining that voice clones just like this are used in WhatsApp takeover scams, where criminals hijack WhatsApp accounts to distribute authorised push payment (APP) fraud messages. A single takeover session, she says, often reaches 20,000 people.  


Fraudsters use voice clones to break into CXOs’ WhatsApp accounts, too. Then, having gained access to staff and business contacts, they use executive voice clones to send audio messages instructing recipients to make payments, click on malicious links or share confidential information.  


“I call this AI-enabled fraud,” says Samani, “but at its core, it still relies on manipulating human behaviour. That’s why I focus on social engineering in my training sessions. Security gateways are useless if your employees give away the access codes.” 
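For the technically curious, here’s roughly what Fake Steve was describing – with one correction: a GAN’s two networks compete rather than cooperate. A generator produces fakes while a discriminator learns to spot them, and each improves by trying to beat the other. The sketch below is a minimal, illustrative PyTorch training loop on toy one-dimensional data; it isn’t the tool from Samani’s demo, just a demonstration of the adversarial principle.

```python
# Minimal GAN sketch on toy 1-D data (illustrative only).
# The generator learns to mimic a target distribution; the discriminator
# learns to tell real samples from generated ones. Each trains against
# the other -- the "adversarial" part of the name.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into fake "samples".
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: mean 3.0, std 0.5
    fake = G(torch.randn(64, 8))             # generated data

    # Discriminator learns to label real samples 1 and fakes 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator learns to make the discriminator label its fakes 1.
    opt_g.zero_grad()
    g_loss = loss_fn(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(f"Generated mean: {G(torch.randn(1000, 8)).mean().item():.2f} (target 3.0)")
```

Swap the toy numbers for images or audio frames and the same loop, scaled up, underpins many face-swap and voice-synthesis models.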
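As for the voice, the article doesn’t name the tool Samani loaded, but open-source libraries make one-shot cloning similarly accessible. As a hypothetical illustration, here’s how a comparable demo might look with the open-source Coqui TTS library and its XTTS v2 voice-cloning model – the reference file and script text are placeholders, and the library choice is an assumption, not a detail from the interview.

```python
# Hypothetical one-shot voice-cloning sketch using Coqui TTS (pip install TTS).
# This is NOT the tool from the article -- just an open-source example of how
# little reference audio such systems need.
from TTS.api import TTS

# Load a multilingual voice-cloning model (weights download on first run).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A short clip of the target speaker is enough; "reference.wav" is a
# placeholder for any few-second recording.
tts.tts_to_file(
    text="Deepfakes are digital puppets that manipulate human faces and voices.",
    speaker_wav="reference.wav",
    language="en",
    file_path="cloned_output.wav",
)
```

That so few lines stand between a public voice sample and a convincing clone is precisely Samani’s point.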


The aftermath of a deepfake attack 

“The impact of fraud goes beyond financial loss,” says Samani. “It can also severely damage a company’s reputation.” 


For example, if your organisation is in the middle of an important deal, you might need to tell prospects about the situation proactively. Letting them know what has happened and the steps you’re taking to mitigate damage can help rebuild trust. 


Customers can also panic, especially if there’s a data breach or services are disrupted. Take the recent example of Blue Yonder, a third-party supply chain software provider hit by ransomware. With its systems locked down and attackers demanding $1.1 billion, Blue Yonder couldn’t service major clients like Starbucks and Morrisons. 


Managing the ripple effects – service disruptions and customer concerns, for instance – is as critical as stopping further financial loss. The downstream impact of AI-enabled fraud can be significant, requiring clear communication and swift action to protect relationships and minimise reputational damage. 


Measuring the human impact 

Samani also points out that we hear far more about the money lost in deepfake scams than the human impact of the fraud. Maybe that’s not surprising: after all, headlines with dollar signs in them tend to sell newspapers. 


Still, it’s important not to overlook the deep emotional and psychological toll deepfake fraud takes on its targets.   


“Every deepfake fraud has at least two victims: the person being impersonated and the person being targeted,” explains Samani. “In Arup’s case, several people were cloned, so numerous individuals were negatively affected.” 


Victims usually experience significant shame and guilt. They feel embarrassed and violated by what has happened, losing confidence in themselves and questioning their instincts. 


Sometimes, however, the mental health consequences of a deepfake attack can be even more severe. Some victims develop anxiety and depression after being targeted, which can impact their lives for a long time. 


“Many of the organisations I work with don’t have support mechanisms in place for victims of AI-enabled fraud because it’s such a new phenomenon,” explains Samani. “When I advise executives, addressing the mental well-being of victims is a key part of the conversation, alongside awareness training and prevention strategies.” 


Staying ahead of deepfake fraud 

“Many deepfakes are really ‘cheapfakes’,” explains Samani. “They’re quite basic and unconvincing. But some are very polished and difficult to spot – particularly on a video call.” 


Samani explains that criminal gangs often spend a lot of money on underground technology development. The return on investment of a new and effective AI-driven hacking method can be significant, and unlike legitimate business owners, fraudsters don’t have shareholders or investors to consider. So, malicious tech is often much more advanced than efforts to combat it. 


Unfortunately, this means there is no “quick fix” programme companies can install to filter out deepfake fraud. 


Instead, addressing fraud requires a combination of tools, technology, processes and people. First, you need a layered security infrastructure to detect and prevent attacks. But tech alone isn’t enough – the right procedures are crucial.  


Staff should clearly understand what to do in two situations: when fraud is an imminent threat, and when it has already taken place. For this reason, it can be a good idea for firms to create a simple, straightforward guide covering how to report suspicious activity and what steps to take to minimise damage if the company falls victim to fraud. 


“Another key element is creating and protecting your company’s and leadership team’s brand,” says Samani. “Every executive should have a unique tone, vocabulary and communication style that employees can identify. This helps staff question anything that seems odd, even if it appears to come from a legitimate source.” 


Maintaining staff awareness is just as important. According to Samani, regular training sessions should highlight emerging fraud trends and explain how they appear in a business setting. This helps employees prepare to identify and respond to WhatsApp fraud, spear phishing and other threats. 


“Your fraud response playbook should be a living document,” says Samani. “It needs to evolve with the landscape. Sophisticated fraudsters use highly effective attack methods and often gain access to internal processes and mimic them, so you need to stay ahead of the game.” 


Practical ways to defend against AI-enabled fraud 

Samani recommends a three-pronged approach to fraud defence, which she tailors to the needs of each organisation she works with. Here’s a general overview. 


1. Train for contextual awareness 

The first step in preventing fraud is to ensure your employees are contextually aware. That doesn’t mean providing a checklist of red flags. Instead, it’s about understanding the context in which things happen. 


For instance, if your CEO emails out of the blue about an urgent acquisition deal you’ve never heard of, that’s a big red flag. Similarly, if you get a message about bonuses after a “great quarter”, but you know it’s been a rough few weeks for the sales team, something doesn’t add up. 


Contextual awareness is only possible in a culture of transparency. While executives can’t share everything, what they reveal should help team members build a clear picture of what’s happening within the organisation. This clarity makes anything unusual obvious so employees can raise the alarm if necessary. 


2. Run simulation exercises 

Well-crafted simulation exercises can help employees learn to spot fraud attempts. Unfortunately, while standard cyber security awareness programmes include phishing and smishing, most don’t address AI-enabled fraud.  


Samani recommends quarterly deepfake simulation exercises. To improve vigilance, these can be randomised so employees don’t know when they’ll be tested. This shift in mindset – staying cautious, trusting instincts and questioning things that don’t feel right – can turn your team into the first line of defence. 
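As a purely illustrative sketch – not a prescription from Samani – here’s one way a security team might randomise those quarterly drills, picking an unpredictable weekday in each quarter:

```python
# Illustrative quarterly drill scheduler (all names here are hypothetical).
import random
from datetime import date, timedelta

def random_simulation_dates(year, seed=None):
    """Pick one random weekday per quarter for a deepfake drill."""
    rng = random.Random(seed)
    dates = []
    for month in (1, 4, 7, 10):                      # quarter start months
        start = date(year, month, 1)
        end = date(year, month + 2, 28)              # late in the quarter
        while True:
            d = start + timedelta(days=rng.randrange((end - start).days))
            if d.weekday() < 5:                      # Monday to Friday only
                dates.append(d)
                break
    return dates

print(random_simulation_dates(2025))                 # four surprise drill dates
```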


3. Run tabletop exercises 

Finally, tabletop exercises simulate the organisation’s response if fraud occurs. These involve senior leadership and key team members working through a scenario. What role does each person play? What actions need to happen immediately to contain the risk? 


Tabletop exercises are like fire drills, where you prepare for a situation you hope will never occur. By practising what to do in the event of an AI-enabled attack, everyone knows what they’re responsible for, which helps ensure a faster, more coordinated reaction if the worst happens. 


Battling bad actors with good instincts 

“Artificial intelligence isn’t the bad guy here,” says Samani. “Incorporating AI can help your business grow and thrive in an increasingly tech-driven world. But it’s important to be cautious and prepare your team for potential risks so they can spot and respond to AI-enabled threats.” 

 

Deepfake fraud isn’t a simple technological challenge – it’s a test of human instinct, organisational resilience and leadership. While fraudsters may exploit cutting-edge AI to stay one step ahead, businesses have something equally powerful in their arsenal: the ability to adapt, train and build trust from the inside out. 


By nurturing awareness and transparency, creating robust response plans and equipping employees with the tools to think critically, companies can turn their people into the ultimate defence against AI-enabled threats. When trust is under constant attack, safeguarding it isn’t just good business – it’s essential for survival. 


Learn more about Aarti Samani and book a tailored session to equip your organisation with practical strategies to build a culture of vigilance and resilience. Get in touch here to begin.
