Fraudsters Are Using Deepfake Technology to Cause Fraud Losses for Banks

Putting on a spooky mask and giving a friend a scare is a fun Halloween tradition. Banks, however, face a different kind of fright. Fraudsters have become adept at using deepfake technology to inflict significant fraud losses. That’s why banks need to protect themselves from this terrifying technology by taking a digital trust approach in their risk management operations.

How Deepfake Tech Enables Fraud Losses

Deepfake technology (sometimes written as “deep fakes”) has been used to impersonate numerous public figures, including Tom Cruise, and to recast Superman with Nicolas Cage.

But we’ve also seen deepfakes used for more sinister purposes. Financial losses from individual deepfake scams have ranged from $243,000 to $35 million. A deepfake of Elon Musk pushed a crypto scam that cost US consumers roughly $2 million over six months. Embattled Ukrainian president Volodymyr Zelenskyy has been the target of deepfakes created for misinformation purposes. Perhaps most troubling is how the technology can insert the likenesses of famous actors – and sometimes everyday people – into adult films.

Deepfakes turn everything we know about identity verification and digital trust for fraud prevention on its head. A deepfake video can convince even the best banks that John White is alive, well, and just opened a bank account via a video conference call – when, in truth, John White has been dead for two months.

Like any technology, deepfakes will only become more effective, and that’s the real horror story. That’s why banks and financial institutions must understand the most frightening types of deepfake fraud in circulation today and make sure they have the tools and expertise to monitor for them.

4 Terrifying Deepfake Scams to Watch

Here are four common and terrifying deepfake scam techniques:

  • Ghost Fraud Deepfakes. A ghost fraud deepfake occurs when a fraudster steals a deceased person’s identity to commit new account fraud. Banks are unlikely to grow suspicious when fraudsters use deepfakes to impersonate the victim. Fraudsters can also use deepfakes to take over a dead person’s existing bank account, apply for loans, or hijack their credit score. Deepfake technology makes this type of account takeover especially dangerous because deceased customers can’t report fraud, so banks may never learn it happened.
  • ‘Phantom’ or New Account Fraud. This type of deepfake fraud has already caused roughly $3.4 billion in losses. Fraudsters build a completely new identity using forged birth certificates or driver’s licenses and prepaid SIM cards, open an account with a legitimate telecom provider under that identity, and use deepfakes to give the fake identity the face of a person who doesn’t really exist. With a real telecom account behind the fake identity, they can receive the codes needed to pass Know Your Customer (KYC) and two-factor authentication (2FA) checks.
  • Undead Claims. Deepfake technology has (ironically) given this old type of fraud new life. In some cases, a family member keeps collecting a late relative’s financial benefits (e.g., life insurance or pension payments) before anyone learns of the death. Deepfake technology allows fraudsters to join a video conference call looking, moving, and sounding like the deceased person.
  • ‘Frankenstein’ or Synthetic Identities. The fictional Dr. Frankenstein built a monster from the remains of different bodies. Fraudsters take a similar approach to synthetic identity fraud, combining real, stolen, and fabricated credentials to create an artificial identity. Using deepfakes, fraudsters can convince banks that the invented person is real and open credit or debit cards to build up the fake identity’s credit score.

How Banks Can Protect Customers from Deepfakes

Deepfakes will become a central component of criminals’ fraud strategies. Expect deepfakes to become increasingly difficult to spot – and the fraud losses they cause increasingly difficult to prevent. That’s a truly terrifying vision. Here’s what banks and other financial services organizations can do to defend against deepfake fraud threats:

1. Complement the Account Opening Process with Behavioral Biometrics

The account opening stage is highly vulnerable to abuse because banks are meeting customers for the first time, often digitally. If a fraudster uses a convincing deepfake during the proof-of-life stage, banks could unknowingly onboard a very risky actor. Banks need a full digital trust strategy – including behavioral biometrics that can perform age analysis to determine whether the image of the applicant matches the person actually holding the device. Digital trust should also include bot detection, device and network re-usage checks, and location anomaly detection. Reviewing this information together lets banks spot a potential problem during account opening.

For example, let’s say Mary is a new customer who claims to be 75 years old. By taking a digital trust approach, banks can assess whether Mary is really as old as she claims from how she holds and uses her device. Behavioral biometrics analyzes the pressure she applies to her screen, the angle at which she holds her phone, and whether she types at the speed typical of an elderly customer. These insights help determine whether Mary is real or a fraudster using a fake or synthetic identity.
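To make the idea concrete, here is a minimal sketch of how an age-consistency check might score a session like Mary’s. Everything in it is illustrative: the telemetry fields, the cohort baselines, and the escalation threshold are hypothetical stand-ins for the far richer signals and learned models a production behavioral biometrics platform would use.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical per-session telemetry; real platforms collect far richer signals.
@dataclass
class SessionTelemetry:
    touch_pressure: list[float]   # normalized screen-press force, 0..1
    device_tilt_deg: list[float]  # phone tilt angle samples, in degrees
    keystroke_ms: list[float]     # inter-key intervals, in milliseconds

# Illustrative cohort baselines (mean, tolerance) per age band.
# A real system would learn these from labeled population data.
AGE_BASELINES = {
    "65+":   {"touch_pressure": (0.55, 0.15),
              "device_tilt_deg": (35.0, 12.0),
              "keystroke_ms": (420.0, 130.0)},
    "18-35": {"touch_pressure": (0.35, 0.12),
              "device_tilt_deg": (20.0, 10.0),
              "keystroke_ms": (180.0, 70.0)},
}

def age_consistency_score(t: SessionTelemetry, claimed_band: str) -> float:
    """Return 0..1, where higher means behavior matches the claimed age band."""
    baseline = AGE_BASELINES[claimed_band]
    deviations = []
    for field, samples in (("touch_pressure", t.touch_pressure),
                           ("device_tilt_deg", t.device_tilt_deg),
                           ("keystroke_ms", t.keystroke_ms)):
        expected, tolerance = baseline[field]
        # How many tolerances away is this session from the cohort norm?
        deviations.append(abs(mean(samples) - expected) / tolerance)
    # Average deviation of 0 -> score 1.0; two tolerances or more -> 0.0.
    return max(0.0, 1.0 - mean(deviations) / 2.0)

# "Mary" claims to be 75 but swipes and types like a young adult.
mary = SessionTelemetry(touch_pressure=[0.33, 0.36, 0.31],
                        device_tilt_deg=[18.0, 22.0, 19.5],
                        keystroke_ms=[165.0, 190.0, 172.0])
if age_consistency_score(mary, "65+") < 0.5:  # hypothetical threshold
    print("Behavioral profile inconsistent with claimed age: escalate review")
```

In practice this kind of score wouldn’t be a pass/fail gate on its own; it would feed the broader digital trust assessment alongside bot detection and device signals.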

2. Review the Customer’s Device Hygiene

Using a digital trust approach, banks can assess whether a customer’s device is trustworthy. Banks should analyze the device and determine whether a video submitted as proof of life was recorded on that same device. They can also check in real time whether the device’s camera is active during onboarding. If it isn’t, the video message may have been pre-recorded on a different device and uploaded mid-session.

A digital trust approach can also help banks determine whether external factors like malware are at play. This includes checking whether a device has been hacked, rooted, jailbroken, or emulated, or is otherwise compromised by malicious software. Banks should weigh these factors carefully when assessing whether a submitted video is real or a deepfake.
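Below is a simplified sketch of how these device hygiene signals might be combined into escalation flags. The CaptureContext fields are hypothetical; which signals a bank can actually observe depends on its onboarding SDK and the platform’s attestation APIs.

```python
from dataclasses import dataclass

# Hypothetical capture/session metadata; field names are illustrative.
@dataclass
class CaptureContext:
    session_device_id: str            # fingerprint of the device running onboarding
    video_source_device_id: str       # device that produced the submitted video
    camera_live_during_capture: bool  # did the camera stream open in-session?
    rooted_or_jailbroken: bool        # OS integrity compromised?
    running_in_emulator: bool         # session coming from an emulator?
    malware_detected: bool            # known-malicious software on the device?

def device_hygiene_flags(ctx: CaptureContext) -> list[str]:
    """Return reasons to escalate; an empty list means no device red flags."""
    flags = []
    if not ctx.camera_live_during_capture:
        # A pre-recorded upload (or virtual camera feed) never opens the
        # physical camera during the live onboarding session.
        flags.append("camera_inactive_during_capture")
    if ctx.video_source_device_id != ctx.session_device_id:
        # Proof-of-life video was recorded on a different device.
        flags.append("cross_device_video_submission")
    if ctx.rooted_or_jailbroken or ctx.running_in_emulator:
        flags.append("compromised_or_emulated_device")
    if ctx.malware_detected:
        flags.append("malware_present")
    return flags

# Example: a pre-recorded video uploaded from a different, rooted phone.
suspect = CaptureContext("dev-123", "dev-999", False, True, False, False)
print(device_hygiene_flags(suspect))
# -> ['camera_inactive_during_capture', 'cross_device_video_submission',
#     'compromised_or_emulated_device']
```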

3. Consult with ID Verification Providers

In the age of deepfakes, banks can’t shoulder the responsibility of detecting fake images alone. That’s why banks that work with outside vendors for onboarding and digital authentication must understand how those firms operate. Ask identity verification providers how the proof-of-life video was captured and whether it was recorded on the submitting device itself. ID verification providers should also test for deepfakes when verifying a customer’s identity. For example, does the person’s eye color stay stable throughout the video? Do the edges of their hair blend naturally into the background? These details are still difficult for deepfakes to render convincingly, making them an opportunity to catch fraud.
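As an illustration of the eye-color check, the sketch below measures how much the hue of detected eye regions drifts across the frames of a submitted video, using OpenCV’s stock Haar eye detector. The file name, frame budget, and drift threshold are all hypothetical; a real verification vendor would use far more robust face and iris tracking than this heuristic.

```python
import cv2
import numpy as np

# One heuristic a verification vendor might run: real eyes keep a
# consistent hue across frames, while deepfake generators can let
# eye color drift. Detector and thresholds here are illustrative.
eye_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_hue_drift(video_path: str, max_frames: int = 120) -> float:
    """Return std. dev. of median eye-region hue across frames (-1 if unknown)."""
    cap = cv2.VideoCapture(video_path)
    hues = []
    while len(hues) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Take at most two detected eye regions per frame.
        for (x, y, w, h) in eye_detector.detectMultiScale(gray, 1.1, 5)[:2]:
            eye = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
            hues.append(float(np.median(eye[:, :, 0])))  # median hue of eye patch
    cap.release()
    if len(hues) < 10:
        return -1.0  # too few detections to judge either way
    return float(np.std(hues))

if __name__ == "__main__":
    drift = eye_hue_drift("proof_of_life.mp4")  # hypothetical input file
    if drift > 8.0:  # illustrative threshold
        print("Eye-color instability detected: route to manual review")
```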

4. Teach Customers to Protect Their Data

Consumers have a crucial role to play in protecting themselves against deepfake fraud losses. Given how much personal information is publicly available, this is no easy task. But banks should still caution their customers about how their data can be manipulated, and urge customers to protect themselves. Some core tips for customers include:

  • control who sees your information on social media
  • avoid giving data to untrustworthy third-party websites
  • avoid installing apps from untrustworthy sources and developers
  • avoid using previously compromised or jailbroken devices
  • keep device software and OS updated

Halloween comes once a year, but the threat of deepfake fraud should scare banks all year long. Fortunately, behavioral biometrics technology that establishes digital trust gives banks a strong chance of catching fraud faster.

Download our solution guide to learn how Feedzai’s behavioral biometrics technology reduces false positives and enables a preemptive approach by stopping fraud before it can happen.