by Jas Anand
8 minutes • April 1, 2025

What is a Deepfake and How Do They Impact Fraud?


A deepfake uses sophisticated AI to create highly convincing audio, images, text, or videos that look, sound, and act like real people. The easy availability of this technology practically gives fraudsters access to Hollywood-style special effects, enabling bad actors to commit deepfake fraud at scale. The World Bank reports that deepfake fraud has surged by 900% in recent years.1 Losses fueled by generative AI are on track to reach $40 billion by 2027.2

There’s no putting the toothpaste back in the tube when it comes to deepfakes. Learn how your business can prepare for a new era of deepfake fraud that causes us to question what’s right in front of our eyes and ears.

Key Takeaways

  • Fraudsters commit deepfake fraud by manipulating images, video, or audio of a subject and teaching an AI model to imitate that person.
  • Deepfake fraud is particularly troubling because it is highly realistic, easily accessible to fraudsters, and scalable.
  • Generative AI and deepfakes can make existing types of fraud—such as new account fraud, account takeover, phishing, impersonations, and social engineering—even more costly.
  • Voice cloning deepfakes have successfully targeted several global businesses, including in Hong Kong and Italy.
  • Video-based deepfakes are empowering criminal groups like the Yahoo Boys with very convincing romance scams.
  • Businesses and financial institutions must focus on customer intent and education to protect against deepfake threats.

How Deepfake Fraud Works in 5 Steps

Creating a convincing deepfake involves several critical steps.

The Anatomy of a Deepfake Fraud:

  • Step 1. Data Acquisition: Criminals collect video, images, audio, and other elements of their subject’s likeness.
  • Step 2. Model Training: An AI model uses the data to imitate the subject’s facial features, vocal range, and speech patterns.
  • Step 3. Content Creation: Fraudsters use the trained AI model to make realistic fake video or audio clips.
  • Step 4. Delivery & Social Engineering: Fraudsters deploy the deepfake and use social engineering tactics to hijack the subject’s real identity.
  • Step 5. Exploitation & Gain: Fraudsters manipulate their victims into taking actions, believing they are listening to someone they trust.

Three critical factors make deepfake fraud a particularly troublesome threat.

  • Realism. Generative AI can rapidly create realistic-looking images. Worse yet, the technology has shed common imperfections from its earliest stages. This means fewer strange-looking fingers, distorted faces, or stretched-out arms that were once deepfake giveaways.
  • Accessibility. You don’t need a degree in AI or artistic design to use GenAI to create deepfakes. The barrier to entry is lower than ever.
  • Scalability. Thanks to cloud computing, criminals can launch multiple attacks simultaneously or create a large volume of synthetic content for a targeted campaign, such as in spear phishing fraud.

Deepfake Fraud in Action: Mark’s Story

Here’s an example of how a deepfake fraud can work in practice.

Mark, who works in his company’s finance department, suddenly receives an urgent email from what looks like his CEO. The message includes the company logo and the CEO’s familiar email signature. 

The email asks him to immediately process a large payment to a new vendor. Suddenly, his phone rings. The voice on the other end sounds just like his boss and provides specific details about the transfer. 

Feeling the pressure to make his “boss” happy and believing the sincerity of the request, Mark authorizes the payment. He later learns the email and the phone call were both elaborate deepfakes, and the money is now in the hands of fraudsters. 

Mark’s story highlights how social engineering combined with realistic deepfakes can deceive even careful employees into making costly mistakes.

5 Fraud Types Enhanced by Deepfakes 

Generative AI and deepfakes are already being incorporated into several common frauds. This includes:

  • New Account Opening Fraud. Criminals can use synthetic videos, audio, or images that appear to be a legitimate person to bypass facial recognition or liveness detection measures and open a new bank account.
  • Account Takeover Fraud. By mimicking an account holder’s appearance, voice, and mannerisms, fraudsters can convince a customer service representative to grant them access to someone else’s account.
  • Phishing Scams. Spelling and grammar mistakes were once obvious red flags of phishing scams. Thanks to GenAI, however, fraudsters can now craft highly convincing phishing messages that are grammatically correct, contextually relevant, and free of spelling errors.
  • Impersonation Attacks. Fraudsters can convincingly imitate individuals in professional settings like meetings or legal proceedings to commit fraud. In personal settings, they can pretend to be a loved one in need of financial or medical help, such as in a romance or grandparent scam.
  • Synthetic Identity Theft. Using deepfakes, fraudsters can make synthetic identities appear like real people. They can use these fake personas to defraud businesses and other individuals.

These examples are just a sampling of the tactics that deepfake fraud can unleash. What makes this threat so unsettling is that it challenges our ability to trust our own sense of reality. 

Voice Cloning Scams: Don’t Believe Your Ears

Several cases have been reported worldwide of deepfakes involving voice cloning technology successfully impersonating individuals across professions and industries.

  • Hong Kong: A financial worker was tricked into paying out $25 million when fraudsters used deepfake technology to impersonate the company’s chief financial officer.3  
  • Italy: A group of entrepreneurs were targeted by scammers who copied the Italian defense minister’s voice and requested money to help pay the ransom of journalists kidnapped overseas. At least one victim paid €1 million to an overseas account.4
  • UK: WPP chief executive Mark Read said scammers used a combination of a voice clone and YouTube footage to set up a meeting between themselves and company executives. Fortunately for Read, the scam was unsuccessful.5
  • US: A woman almost lost $50,000 when someone called her claiming to have kidnapped her teenage daughter. The caller played a recording that sounded like the daughter in distress. The woman was able to confirm her daughter’s safety before losing any money.6

In the age of deepfake fraud, hearing messages from prominent figures or loved ones will make scams even more convincing. 

Video Deepfakes: Illusion-based Fraud

Video-based deepfake frauds make impersonation-based fraud like romance scams even more difficult to catch. 

US consumers lost $1.14 billion to romance scams last year.7 With deepfake technology, scammers can create a large library of fake online suitors. Aided by advanced LLMs like LoveGPT, romance scammers could target multiple victims at the same time.8

Manipulating publicly available images to commit romance scams has proven effective. Last year, a scammer used simpler technology to deceive a French woman into believing she was in a relationship with Brad Pitt.9 Meanwhile, organized romance scam groups like the Yahoo Boys are creating more personalized communication for their targets in real time, making romance scams even more convincing and likely to succeed.10

Deepfake-powered videos can also fuel other impersonation tactics like CEO fraud or grandparent scams. If the target believes they are interacting with the real person, they are more inclined to follow their instructions to help their company or a family member.

The Social Engineering Angle: Exploiting Human Trust

Audio and visual manipulation are critical components in the success of deepfakes. The rest depends on trust. That’s where psychological manipulation from social engineering comes into play.

By scouring sources like social media profiles, compromised data, and other sensitive information, fraudsters can create specific scenarios that trigger an emotional response and quickly gain their targets’ attention and trust. The more detailed the story a scammer presents, the more believable it is.

For example, imagine a scammer learns that a person is traveling overseas. They can use this information to contact the parents or grandparents impersonating the traveler. Using manipulated audio or video, they claim their loved one is in trouble with law enforcement overseas. Family members are more likely to be convinced the threat is real because it contains specific details about their loved ones’ itinerary.

The Rise of ‘Scams as a Service’

Businesses and banks may see a rise in highly personalized “scams as a service” tactics. Criminals can purchase pre-configured deepfake materials for a specific target, such as a bank manager or executive. They can also access information like email lists to gain intel on your organization’s internal hierarchy. 

Using the knowledge of who reports to their deepfake subject, the scammers can focus their efforts using voice or video calls or messaging services like WhatsApp. In other words, scams become much more effective by focusing on a single individual.

How Financial Institutions Can Protect Against Deepfake Fraud

Protecting your financial institution from deepfake fraud requires a proactive approach. Here are a few key steps to consider.

Invest in Your Own GenAI-based Solutions 

Advanced technology like GenAI shouldn’t just be used by criminals. Banks can implement their own GenAI-powered tools. For example, generative AI can be used to help customers quickly detect scams before they make a purchase.

GenAI technology can quickly review content like ads selling products at unbelievable prices and spot red flags. Implementing your own GenAI agent empowers your customers to join the fight against scams, protect themselves, and build trust with your organization.

Emphasize Intent Over Identity

Scammers coerce victims into authorizing transactions themselves under false pretenses. This renders traditional authentication methods, like SMS verification or push notifications, and questions like “was this you?” obsolete.

Instead, your financial institution must understand the intent behind a transaction. This requires a move towards human-to-human interaction, with trained agents equipped to ask the right questions and flag inconsistencies that may indicate fraud. 

Uncover Money Mule Accounts

Fraudsters need money mule accounts to move and launder stolen funds and profit from their efforts. That’s why they turn to legitimate customers to convince or manipulate them to act as money mules. 

Banks must proactively monitor both inbound and outbound transactions to identify suspicious activity and disrupt the flow of illicit funds. This comprehensive approach enables your organization to disrupt mule networks and mitigate the impact of deepfake fraud.
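One classic mule signal is rapid pass-through activity: funds arrive in an account and most of them leave again within hours. As an illustration only (the transaction format, window, and threshold here are assumptions, not any vendor’s actual detection logic), a minimal sketch of that single rule might look like this:

```python
from datetime import datetime, timedelta

def flag_possible_mule(transactions, window_hours=24, passthrough_ratio=0.9):
    """Flag an account if most inbound funds exit again within a short window.

    `transactions` is a list of hypothetical (timestamp, direction, amount)
    tuples, where direction is "in" or "out". Rapid in-and-out flows are a
    common money-mule pattern; real systems combine many more signals.
    """
    txns = sorted(transactions, key=lambda t: t[0])
    window = timedelta(hours=window_hours)
    for i, (ts_in, direction, amount_in) in enumerate(txns):
        if direction != "in":
            continue
        # Sum outbound transfers that follow this deposit within the window
        out_total = sum(
            amt for ts, d, amt in txns[i + 1:]
            if d == "out" and ts - ts_in <= window
        )
        if out_total >= passthrough_ratio * amount_in:
            return True
    return False
```

A rule this simple would generate false positives on its own (many legitimate accounts move money quickly), which is why it would serve as one input among many rather than a standalone decision.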

Embrace a Single Customer View

Financial institutions must move away from channel-specific data silos. Instead, it’s critical to shift towards a horizontal integration of customer data views. 

This holistic view of a customer enables the detection of inconsistencies across different channels. This may include a card transaction in one location and a login from a geographically distant location within a short timeframe. Having a more comprehensive understanding of your customer’s normal behavior makes it easier to identify and respond to fraudulent activities, including those involving deepfakes. 
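The cross-channel example above, a card transaction in one place and a login far away shortly after, is often called an “impossible travel” check: if the implied travel speed between two events is physically implausible, at least one of them is suspect. Here is a minimal sketch of the idea (the event format and the 900 km/h threshold are assumptions for illustration):

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def is_impossible_travel(event_a, event_b, max_speed_kmh=900):
    """Flag two events whose implied travel speed exceeds a plausible maximum.

    Each event is a hypothetical (timestamp, latitude, longitude) tuple;
    900 km/h roughly matches a commercial flight, so anything faster is
    physically implausible for one person.
    """
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted((event_a, event_b))
    distance = haversine_km(lat1, lon1, lat2, lon2)
    hours = (t2 - t1).total_seconds() / 3600
    if hours == 0:
        return distance > 0  # same instant, different place
    return distance / hours > max_speed_kmh
```

The single-customer view is what makes this check possible at all: the card event and the login event typically live in different channel systems, and only a unified timeline lets you compare them.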

Tap Into Data and Analytics

Leverage data analytics to gain real-time insights into your customers’ activities. Taking a proactive approach will allow you to detect suspicious activity before a transaction occurs, allowing for timely intervention and prevention.

Your financial institution can use data analytics to move beyond reactive measures and towards more predictive and preventative fraud detection workflows. This includes analyzing interaction data, such as device information, location data, and behavioral patterns, to find anomalies.

Empower Customers Through Education

Teach your customers about how deepfakes work to help them better protect themselves. Launch campaigns to raise awareness of known deepfake scams, including romance scams, grandparent scams, and business email compromise. Some key lessons to convey to customers include:

  • Ask the person on camera or speaker to repeat a very specific sentence to see if the audio follows along. 
  • Ask someone on video to wave or move their arms in a specific manner to confirm they are real, not pre-recorded. 
  • Have a secret code word known only to family members to verify identity before making any major financial decisions. 

By showing customers how to become active participants in fraud prevention, banks can essentially recruit them as a first line of defense against deepfake fraud.

With deepfakes, fraudsters can now match the level of Hollywood-style illusions. It’s up to all of us to sort fact from fiction on a regular basis.


Footnotes

1 Deepfakes and AI’s New Threat to Security

2 Generative AI is expected to magnify the risk of deepfakes and other fraud in banking

3 Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’

4 Scam against Italian entrepreneurs: Crosetto’s Minister’s voice cloned with AI

5 WPP boss targeted by deepfake scammers using voice clone

6 AI kidnapping scam copied teen girl’s voice in $1M extortion attempt

7 “Love Stinks” – when a scammer is involved

8 A new ChatGPT dating scam is looking to catfish lonely AI fans News

9 Brad Pitt online romance fraud shows how victims are influenced by complex psychological factors

10 The Real-Time Deepfake Romance Scams Have Arrived

