
This past year we saw Generative AI emerge as a business game-changer and a fraudster’s new tool. Meanwhile, nations banded together to regulate AI, tech players expanded their roles as authenticators, and the financial services industry braced for shifting scam liability. If that’s what 2023 had in store, just imagine the fraud and financial crime trends we can expect in 2024.

Let’s delve into the key financial crime trends that will shape the financial security outlook in 2024.

Next Generation of AI and Machine Learning

Banks are gearing up to adopt the next generation of artificial intelligence (AI) and machine learning (ML) capabilities to revolutionize fraud detection. The focus is on automating tasks, enhancing accuracy, and uncovering emerging fraud patterns.

At the same time, machine learning for anti-money laundering (AML) is set to go mainstream. This has important implications as banks look to mitigate false positives. Russia’s invasion of Ukraine continues to disrupt the banking industry; many banks have seen their risk appetite decline as a result, driving an uptick in false positives.
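To make the false-positive problem concrete, here is a minimal sketch of how a bank might use machine learning to triage AML alerts. It assumes historical alerts have already been labeled by analysts as confirmed suspicious or false positive; the features, sample data, and model choice are illustrative only, not a description of any particular vendor’s system.

```python
# Minimal sketch: score AML alerts so analysts can prioritize likely true hits.
# Features, sample data, and the model choice are illustrative placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative alert features: amount, transactions in last 24h,
# share sent to high-risk corridors, account age in days
X = np.array([
    [12_000, 14, 0.60, 90],
    [800, 2, 0.00, 2_400],
    [45_000, 30, 0.85, 30],
    [150, 1, 0.00, 3_650],
    # ...thousands of labeled historical alerts in practice
])
y = np.array([1, 0, 1, 0])  # analyst dispositions: 1 = suspicious, 0 = false positive

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Score a new alert; low-scoring alerts can be deprioritized to cut false positives
new_alert = np.array([[9_500, 11, 0.55, 120]])
risk = model.predict_proba(new_alert)[0, 1]
print(f"Alert risk score: {risk:.2f}")
```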

The push to implement machine learning for AML also comes from outside the banking industry. Google has made a significant foray into this domain by launching an AI-powered, cloud-based AML tool following a trial with HSBC. The entry of a big tech firm like Google underscores the banking industry’s collective push toward more advanced, proactive measures.

Generative AI Unleashes Automated Fraud and FinCrime Avenues

Impersonation fraud has long been a common scam tactic. But as Generative AI (GenAI) takes center stage, the specter of synthetic identity theft and fraud looms larger than ever.

These methods go far beyond the account takeover threat. This advanced technology allows fraudsters to create identities at scale, making the generation of credible synthetic IDs more accessible. When combined with existing tactics like social engineering and sensitive information harvested from data breaches, social media, and stolen credit cards posted on the dark web, these tools let fraudsters massively improve their deceptions.

Not only can bad actors create new identities faster, but GenAI also gives these identities greater credibility. The ability to create fake images, videos, or even voice recordings helps build a character or persona around an identity. 

Because these AI-generated personas can pass authentication checks more regularly, Generative AI becomes a formidable tool for fraudsters. As these tools grow more pervasive and advanced, we expect banks will need to question the authenticity of every interaction. GenAI will make synthetic identity fraud one of the most troubling financial crimes to monitor in 2024.

Fraud-as-a-Service Gains Traction

The emergence of Fraud-as-a-Service, facilitated by Generative AI, dismantles barriers that criminal call centers have traditionally faced. With GenAI, these call centers can, at least in theory, rapidly gather information about their targets, learn how organizations operate, and tailor attacks to specific banks.

This is especially worrisome for new account fraud and application fraud. Each bank has its own account opening workflow, its own unique technology, and its own language during onboarding, and applicants have to appear credible to open an account. Criminals can use GenAI tools to learn different banks’ screen layouts and onboarding stages. Armed with this knowledge of how various organizations operate, they can write scripts to quickly fill out forms and create credible-looking identities to commit new account fraud. Banks will no longer only need to answer the question “Is this the right person?” but also “Is my customer a human or an AI?”

Illustration depicting Feedzai’s top Fraud and Financial Crime predictions for 2024

Capabilities like these are why several nations and regions are considering proposals to regulate and study AI’s potential. Earlier this year, several governments and blocs, including the US, UK, EU, China, and the G7 nations, gathered to discuss a long-term strategy to study and regulate AI. These initiatives indicate that governments worldwide are looking closely at how AI can be used for both good and nefarious purposes.

Fraud Liability Split/Shift Takes Shape

At a GASA conference earlier this year, one attendee observed that in terms of scams, “everybody is busy mopping water off the floor while the tap is still running.” New liability regulations in 2024 could help stem the flow of scams.

In the UK, the Payment Systems Regulator’s (PSR) new rules are scheduled to go into effect next year. The PSR’s model will split liability for scam losses 50-50 between sending and receiving banks. This even distribution of responsibility sets a global precedent that requires everyone with a stake in fraud and scams to accept some level of responsibility.

This change will also motivate banks to rethink how they have traditionally approached scam losses. Instead of detecting scams after they take place, banks must shift to preventing them in the first place. That means adopting real-time monitoring and prevention strategies, transcending the traditional reactive approach. Behavioral biometric solutions will likely take center stage to help build a clear understanding of a customer’s normal behaviors. Banks will also be motivated to implement inbound payment monitoring for fraud; historically, FIs have monitored inbound payments for AML purposes, but the liability shift means this will now apply to fraud and scams as well.
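As a rough illustration of behavior-based, real-time monitoring, the sketch below checks an inbound payment against a simple profile of the receiving account’s normal activity. The profile fields, thresholds, and risk reasons are hypothetical placeholders, not a production model.

```python
# Sketch: flag inbound payments that deviate from an account's normal behavior.
# Profile fields and thresholds are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class AccountProfile:
    typical_inbound_amount: float                       # e.g., median inbound payment over 90 days
    known_sender_ids: set = field(default_factory=set)  # senders this account has received from before
    inbound_payments_last_24h: int = 0

def assess_inbound_payment(profile: AccountProfile, amount: float, sender_id: str) -> list[str]:
    """Return risk reasons for an inbound payment; an empty list means no obvious anomaly."""
    reasons = []
    if amount > 5 * profile.typical_inbound_amount:
        reasons.append("amount far above this account's normal inbound payments")
    if sender_id not in profile.known_sender_ids:
        reasons.append("first payment from this sender")
    if profile.inbound_payments_last_24h >= 10:
        reasons.append("unusually high inbound volume (possible mule account)")
    return reasons

# Example usage
profile = AccountProfile(typical_inbound_amount=250.0,
                         known_sender_ids={"acct-123"},
                         inbound_payments_last_24h=12)
print(assess_inbound_payment(profile, amount=4_000.0, sender_id="acct-999"))
```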

This liability change won’t remain a UK-only issue for long. Pressure continues to mount in different jurisdictions, such as the US and Australia, where the push for enhanced consumer protection is growing daily.

Expect More FinServ Collaboration

Collaboration becomes the watchword as banks increasingly join forces with other financial institutions, including fintechs and regtechs. The aim is to share data and insights to fortify defenses against cross-border fraud schemes. 

Simultaneously, regulatory changes are anticipated to clarify and encourage data-sharing, countering previous hesitations attributed to regulations like GDPR. Collaboration relies on sharing known patterns. However, banks will be hesitant to share any information if they fear it will land them in legal trouble. 

But just as there are carve-outs for law enforcement, there can also be exceptions for fraud prevention. To enhance data-sharing and collaboration, regulators must clarify or soften their stances for banks. Collaboration is the goal, but greater clarity is needed at the top levels to ensure data is shared correctly.
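One privacy-preserving pattern that could support this kind of sharing, sketched below, is exchanging keyed hashes of identifiers rather than raw customer data, so partner banks can match records without exposing them. The shared key and account identifiers are purely illustrative.

```python
# Sketch: pseudonymize identifiers with a keyed hash before sharing fraud signals.
# The shared secret and account identifiers are illustrative placeholders.
import hashlib
import hmac

SHARED_KEY = b"consortium-secret"  # placeholder; in practice, managed and rotated by the data-sharing scheme

def pseudonymize(identifier: str) -> str:
    """Return an HMAC-SHA256 digest so partners can match records without seeing raw values."""
    return hmac.new(SHARED_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Bank A shares the pseudonym of an account it has flagged as a suspected mule
shared_token = pseudonymize("GB29NWBK60161331926819")

# Bank B checks one of its outbound payees against the shared token
if pseudonymize("GB29NWBK60161331926819") == shared_token:
    print("Match: payee was flagged by a partner bank")
```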

Attacks on Banks Will Get Personalized

We live in an age of personalization. Many banks understand this and deliver services and messages tailored to their consumers’ specific needs.

Unfortunately, fraudsters have also gained an understanding of the benefits of personalization. For example, bad actors may study a bank’s security systems and develop a custom malware attack. As such, banks should expect more targeted attacks in 2024. 

This shift towards personalized attacks demands closer collaboration between fraud prevention, AML, and cybersecurity teams, both internal and external to the organization. Integrating these functions is crucial to effectively thwarting increasingly agile cyber threats.

Capability Consolidation & Future Flexibility

Banks often follow the “adopt one of everything” industry blueprint when addressing fraud. However, it makes more sense for banks to adopt the right technology instead of multiple solutions.

The emphasis is on solving financial crime issues strategically, with an eye on future flexibility. Consolidation reduces costs, improves coherence, and enhances adaptability, addressing the ongoing challenge of making diverse solutions communicate seamlessly with each other. 

This last point is critical. A new solution may succeed at its original purpose, but if it doesn’t talk to other fraud prevention solutions, fraudsters will inevitably exploit the communication gap.

Card Scams: A Shifting Focus

As the industry becomes adept at detecting scams in faster payment channels, attention is shifting toward scams on other channels. Purchase scams, which make up as much as half of all scams, are under particular scrutiny.

There’s a growing need for assessments of both senders and merchants, prompting potential regulatory action to enhance the safety of online marketplaces like Amazon and Facebook. These marketplaces have historically done the bare minimum of due diligence. But that’s about to change.

New regulations could prompt these firms to ensure the safety of their platforms. The UK government, for example, recently launched the Online Fraud Charter, an agreement with several big tech names, including Amazon, Facebook, TikTok, and YouTube. The measure requires tech firms to add new verification steps (including in peer-to-peer marketplaces) and purge fraudulent content.

Telcos Reassess Their Role in Authentication

Today, many consumers treat their phones as a place to shop, connect with friends, and, of course, conduct banking. This means telecom services have reached a tipping point. These firms must evaluate the safety of their services for how consumers are currently using them, not how they were initially designed.

Consumers are increasingly moving from phone calls to SMS and other text-based interactions. In fact, by some accounts, the traditional phone call may already have passed its prime. This shift in consumer behavior should prompt a reassessment of telcos’ responsibility to ensure the safety of their services. 

Telcos and phone providers are now used for authentication (e.g., delivering SMS OTPs or voice-based passcodes) more often than for phone calls. These firms should consider their future role in authentication as the purpose of the phone evolves.

Authentication Overhaul in the Digital Era

Traditional authentication methods, such as device binding and OTPs sent via SMS, are losing effectiveness. Increasingly privacy-conscious consumers demand creative approaches to using digital footprints to recognize legitimate users. As a result, several big firms are stepping into the authenticator role.

The rise of federated identities with tech giants like Google, Amazon, and Apple will likely take center stage, raising questions about the future of authentication in the banking space. For example, could a login with an Apple password become a de facto authenticator for a bank? Could it result in a new federated, blockchain-based identity that can be used across the financial industry? And how do more recent authentication techniques, such as behavioral biometrics, fit into this picture?
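If a bank did accept a federated login from one of these providers, verification would most likely follow standard OpenID Connect patterns. The sketch below validates an ID token’s signature and claims with the PyJWT library; the issuer, JWKS endpoint, and client ID are hypothetical placeholders, and a real integration would add nonce checks, session binding, and the bank’s own fraud controls.

```python
# Sketch: validate a federated OIDC ID token before trusting it for bank login.
# The issuer, JWKS URL, and client ID below are placeholders, not a real provider's values.
import jwt                    # PyJWT
from jwt import PyJWKClient

ISSUER = "https://id.example-provider.com"      # hypothetical identity provider
JWKS_URL = f"{ISSUER}/.well-known/jwks.json"    # typical JWKS location; varies by provider
CLIENT_ID = "my-bank-app"                       # the bank's registered client ID

def verify_id_token(id_token: str) -> dict:
    """Verify the token's signature, issuer, audience, and expiry, then return its claims."""
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(id_token)
    return jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=CLIENT_ID,
        issuer=ISSUER,
    )

# claims = verify_id_token(raw_token_from_login)
# claims["sub"] identifies the user; the bank would still apply its own fraud and step-up checks.
```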

Ongoing Usage of Cryptocurrency in Financial Crime Schemes

Feedzai partnered with Mastercard this year to keep customers safe from crypto investment scams. Despite its risks and regulatory scrutiny, the use of cryptocurrency in fraud schemes continues to rise. Feedzai’s exclusive research has found that 40% of scam losses leave a bank account for a crypto exchange and that payments to crypto exchanges are nine times riskier than payments to traditional recipients.
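One simple way a bank might act on findings like these, sketched below, is to apply extra scrutiny to outbound payments headed to known crypto exchanges. The watchlist and multiplier here are illustrative placeholders rather than Feedzai’s actual model.

```python
# Sketch: weight outbound payment risk higher when the destination is a known crypto exchange.
# The watchlist and multiplier are illustrative, not a real risk model.
KNOWN_CRYPTO_EXCHANGE_ACCOUNTS = {"exchange-acct-001", "exchange-acct-002"}  # placeholder watchlist
CRYPTO_RISK_MULTIPLIER = 9.0  # motivated by the "nine times riskier" finding above; tuned in practice

def score_outbound_payment(base_risk: float, destination_account: str) -> float:
    """Scale a payment's base risk score when it is headed to a crypto exchange."""
    if destination_account in KNOWN_CRYPTO_EXCHANGE_ACCOUNTS:
        return min(1.0, base_risk * CRYPTO_RISK_MULTIPLIER)
    return base_risk

# Example: a mildly suspicious payment becomes high-risk once its crypto destination is known
print(score_outbound_payment(base_risk=0.08, destination_account="exchange-acct-001"))  # 0.72
```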

So, how does crypto persist? The belief in the future of digital currencies, particularly stablecoins pegged to traditional currencies, keeps crypto relevant. As more companies accept crypto for payments, the temptation to invest remains high, fueled by a sense of legitimization and a persistent lack of controls.

To some extent, crypto has gained legitimacy through tenure. It’s like the old saying, “If you hang around the barbershop long enough, sooner or later, you’ll get a haircut.” In other words, the more crypto hangs around, the likelier people are to invest in it and talk about it. 

Recent actions against crypto exchange platform Binance also raise serious questions about the lack of controls in place for this market. US regulators and law enforcement officials noted numerous problems with the platform, including that the firm never filed a suspicious activity report (SAR) and allowed customers to onboard without performing standard KYC checks. Some customers were able to open accounts simply by providing an email address.

The year 2024 promises to usher in dynamic shifts and challenges in the realm of fraud and financial crime. We can’t predict the future exactly, but the banking industry must remain vigilant, adaptive, and collaborative to stay ahead of emerging threats and protect the integrity of global financial systems.