If life is like a box of chocolates, detecting bots is like baking a layered cake. Just like master bakers can’t create multi-tiered masterpieces with one pan, fraud fighters can’t use one bot detection method to capture the sophisticated bot attacks we see today. 

The bot landscape has grown up. While script kiddies will always exist, bot authors have become adept at evading detection. What’s more, they are ambitious, constantly innovating to outsmart detection systems. The pace of cat-and-mouse evasion between bot authors and fraud fighters has never been so fast and furious. A method that works for either side today could be useless tomorrow.

One key factor fueling this arms race is the increasing accessibility of bot creation. The rise of Bots-as-a-Service platforms has lowered the barrier to entry. With minimal technical skills, anyone can subscribe to a service that provides a virtual machine, a user-friendly console, and plug-and-play automation tools. Whether for legitimate or fraudulent purposes, the door is wide open for individuals to deploy bots.

The Democratization of Bots

For small businesses, access to bots can be a game changer. A real estate agent or a small professional services firm might use bots to cold-contact hundreds of prospects daily. Using bots may very well keep their lights on. This highlights an important distinction: not all bots are malicious. 

Legitimate bots often announce themselves. For example, you might encounter a chatbot on a website that tells you it’s a bot and is there to help you. Companies like Microsoft and Google also operate many different bots and automated crawlers. These mechanisms have only multiplied with the advent of Generative AI (GenAI) and the intense interest in scraping the web to generate training content for LLMs.

Financial institutions must differentiate between legitimate and criminal bots.

The Bot Subculture Plays a Role in Bot Rivalries

In some ways, the industry inadvertently fuels the battle between bot authors and fraud fighters. Research teams and detection players have actively announced their new methods, often provoking bot creators who are more than willing to rise to the challenge. 

This dynamic plays out openly in online communities, where bot authors and scripting enthusiasts gather to discuss techniques. They sit on the fringes of fraud, often focusing on seemingly innocuous activities like scraping content or automating purchases. But this frequently spills over into scalping tickets or shutting out legitimate sneaker enthusiasts, forcing them to pay higher prices on secondary markets.

On the darker side, more malicious bot creators target institutions and organizations, often with the backing of organized crime. These bots are designed for specific, nefarious purposes with much higher stakes.

Network-Level Bots-as-a-Service

Newer bots mimic identities much better than before, even replicating different browsers or devices. However, their most significant advancement lies in their ability to operate at the network level. 

Bot authors now leverage residential proxies—IP addresses shared across multiple users or households. This tactic allows them to blend bot traffic with legitimate user traffic, making traditional detection methods like blocking bots based on IP far less effective. The risk of false positives increases and legitimate traffic can get blocked, creating more problems than it solves.

Residential proxies have become a hallmark feature of the Bots-as-a-Service model, offering a plug-and-play solution. Bot traffic is routed through these proxies, allowing the bot to appear as though it’s originating from a residential IP address shared by other users. By hiding among legitimate traffic, bots can evade detection more effectively than ever before.
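To make this concrete, here is a minimal, hypothetical Python sketch (the IP addresses, sessions, and rule are all invented for illustration) of why a naive IP blocklist struggles once bot traffic rides on shared residential proxies: blocking an address one bot used also blocks every legitimate household behind the same IP.

```python
# Hypothetical illustration: a naive IP blocklist vs. a shared residential proxy IP.
blocklist = set()

# Sessions observed on one residential IP: a bot plus several real customers
sessions = [
    {"ip": "203.0.113.17", "user": "bot-controller", "label": "bot"},
    {"ip": "203.0.113.17", "user": "alice", "label": "legitimate"},
    {"ip": "203.0.113.17", "user": "bob", "label": "legitimate"},
]

# Naive rule: once an IP is seen doing bot activity, block the whole address
for s in sessions:
    if s["label"] == "bot":
        blocklist.add(s["ip"])

# Every later request from that IP is rejected, including the real customers
blocked_users = [s["user"] for s in sessions if s["ip"] in blocklist]
print(blocked_users)  # ['bot-controller', 'alice', 'bob'] -> two false positives
```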

Types of Bots

Bots can be lumped into three categories: naive, complex, and the most nefarious of all, targeted bots.

Naive Bots
  • Definition: Basic and unsophisticated; typically perform simple, repetitive tasks.
  • Common Uses: Scraping data, automating clicks, or simple account actions.
  • Detection: Relatively easy to detect; they follow predictable patterns and lack complexity.
  • Example: Repeatedly trying to log in to an account using known credentials.

Complex Bots
  • Definition: Advanced; designed to bypass basic detection methods and mimic human behavior.
  • Common Uses: Rotating IP addresses, simulating mouse movements or clicks.
  • Detection: Difficult to detect; they blend in with normal user behavior and use advanced tactics.
  • Example: Bypassing CAPTCHAs or switching IP addresses frequently to avoid detection.

Targeted Bots
  • Definition: Sophisticated; purpose-built to attack specific targets or evade specific detection systems.
  • Common Uses: Part of larger, coordinated attacks aimed at financial or corporate targets.
  • Detection: Hardest to detect; specifically designed to evade security measures.
  • Example: Breaking into a specific bank’s accounts, siphoning funds, and covering tracks.

Targeted bot attacks are custom-built to focus on specific financial institutions. To defend against them, you need a strategy that covers all types of bots, from off-the-shelf scripts run by script kiddies who barely understand them to highly technical, destructive bots created by experts. A truly effective defense must cover the entire spectrum.

A Layered Fraud Strategy: The Only Real Defense Against Bot Attacks

Most bot defense systems miss subtle inconsistencies that researchers and R&D teams, like ours at Feedzai, uncover through reverse engineering and deep analysis. This is why a layered approach to bot detection is critical.

Naive detection methods, like blocking bots based on IP addresses, can catch the simplest bots but won’t stop the sophisticated ones. That’s like baking a single-layer cake for a wedding—it’s not going to cut it. You need a multi-layered strategy that includes more complex techniques, like behavioral analysis.

For instance, instead of just blocking suspicious IPs, we should be looking at keystroke timings:

  • Flight time (how long it takes to move between keys) and
  • Dwell time (how long each key is pressed).

Equally important are factors like time on page and session duration. These might seem basic, but when you analyze them deeply, you start to see how bots behave in ways that real users don’t. Bots tend to move in quick, fluid motions from one button to another, without the natural pauses and variations of a human.
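As a rough illustration only (not Feedzai’s production logic, and with invented event data), here is a small Python sketch that computes dwell and flight times from key-down/key-up timestamps. The point is that scripted typing tends to show unnaturally low variance in these timings.

```python
from statistics import mean, pstdev

# Invented keystroke events: (key, key_down_ms, key_up_ms)
human_events = [("p", 0, 95), ("a", 230, 310), ("s", 480, 600), ("s", 790, 870)]
bot_events = [("p", 0, 50), ("a", 100, 150), ("s", 200, 250), ("s", 300, 350)]

def keystroke_features(events):
    """Mean and spread of dwell time (key held down) and flight time (gap between keys)."""
    dwell = [up - down for _, down, up in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {
        "dwell_mean": mean(dwell), "dwell_std": pstdev(dwell),
        "flight_mean": mean(flight), "flight_std": pstdev(flight),
    }

print("human:", keystroke_features(human_events))
print("bot:  ", keystroke_features(bot_events))
# The bot's dwell and flight standard deviations are zero -- a behavioral red flag.
```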

Take this example: When I switch from typing on my keyboard to using my mouse, my hand’s movement naturally causes subtle shifts in the cursor. Bots don’t have hands, so their movements lack that human touch. These slight behavioral variations are vital to identifying bots, and we uncover these inconsistencies through deep reverse engineering and analysis of behavioral signals.
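In the same spirit, here is a hedged sketch of one way those cursor signals could be quantified (the coordinates and metrics are illustrative, not a real detector): straight-line, constant-speed pointer paths, typical of simple automation, show almost no speed variation or deviation from a straight line compared with human movement.

```python
import math

# Invented cursor samples: (x, y) positions captured at a fixed sampling interval
human_path = [(0, 0), (14, 3), (31, 9), (52, 12), (78, 18), (100, 20)]
bot_path = [(0, 0), (20, 4), (40, 8), (60, 12), (80, 16), (100, 20)]

def path_stats(path):
    """Per-step speed variation and maximum deviation from the straight start-to-end line."""
    steps = [math.dist(path[i], path[i + 1]) for i in range(len(path) - 1)]
    speed_variation = max(steps) - min(steps)
    (x0, y0), (x1, y1) = path[0], path[-1]
    length = math.dist(path[0], path[-1])
    wobble = max(abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length for x, y in path)
    return {"speed_variation": round(speed_variation, 1), "max_wobble": round(wobble, 1)}

print("human:", path_stats(human_path))
print("bot:  ", path_stats(bot_path))  # near-zero variation and wobble
```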

Labels and the Future of Bots

We mentioned analyzing keystrokes, but what happens when we no longer use keyboards? When everything is voice-activated? Questions like this, particularly when technology develops at an exponential pace, make the layered approach even more critical. And that brings up the next challenge: how do I classify activity as a bot?

I can’t say definitively if something is a bot or not, but I can say it’s anomalous compared to the pattern of behavior I’ve seen for this particular individual historically. 
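A minimal sketch of that idea, assuming we keep a per-customer history of a behavioral metric (here, session duration in seconds; the numbers and threshold are invented): rather than labeling the session a bot outright, we measure how far it sits from that individual’s own baseline.

```python
from statistics import mean, pstdev

def anomaly_score(history, current):
    """How many standard deviations the current value sits from this user's own baseline."""
    mu, sigma = mean(history), pstdev(history)
    return abs(current - mu) / sigma if sigma else float("inf")

# Invented example: this customer's past session durations, in seconds
session_history = [48, 52, 61, 45, 58, 50, 55]

score = anomaly_score(session_history, current=6)  # today's session lasted just 6 seconds
if score > 3:  # illustrative threshold, not a production rule
    print(f"Anomalous vs. this user's history (score={score:.1f}): escalate, don't auto-label as a bot")
```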

Detecting Bots in Known Entities

Obviously, everyone would like to stop bots at the front door, and we should always have systems in place to do that. However, there are other opportunities to stop them further along the authentication journey, and part of that means looking at the patterns of behavior bots create.

Those patterns often show up in payment behavior. I might not capture the signal of a particular bot, because I can’t classify it. However, I might pick up a signal that you fall into an anomalous bucket of people, and alongside that, the payment you’re making right now doesn’t fit your usual spending patterns. It’s that blending of transactional risk monitoring with threat classification that ensures a more robust detection methodology.
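As a hedged illustration of that blending (the weights, signals, and threshold below are invented, not a real scoring model), behavioral and transactional signals can be folded into a single risk decision rather than relying on either one alone:

```python
def combined_risk(behavioral_anomaly, payment_anomaly, new_payee):
    """Blend behavioral and transactional signals into one illustrative risk score (0-1)."""
    # Invented weights for the sketch; a real system would learn these from data
    score = 0.5 * behavioral_anomaly + 0.35 * payment_anomaly + 0.15 * (1.0 if new_payee else 0.0)
    return min(score, 1.0)

# Session looks odd (anomalous bucket) AND the payment is outside usual spending patterns
risk = combined_risk(behavioral_anomaly=0.8, payment_anomaly=0.9, new_payee=True)
print(f"risk={risk:.2f}")  # combined score for this made-up example
if risk > 0.7:  # illustrative threshold
    print("Step up authentication or hold the payment for review")
```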

Bot Attacks and New Account Fraud

How can you tell the difference between a bot and a new customer when you haven’t even established a relationship with the customer yet? That’s where cohort analysis comes in. You might not know this individual customer, but you know plenty of others like them. 

The same applies to their devices, connections, and behaviors. Is their activity similar to or different from their peers? A layered strategy that combines individual profiling and peer group profiling is key. 

Individual profiling is essential, but when you pair it with peer group profiling, you reach a new level of sophistication in fraud detection. The combination creates a powerful layered approach to identifying emerging threats.

Here’s how it works in new account fraud detection:

Even if you don’t know the customer well yet, you can determine whether their behavior is unusual compared to patterns you’ve seen among similar users. For instance, is this how people typically interact with a particular webpage during account creation? You may not be able to identify a bot definitively, but you can detect behavior that stands out: something at the far end of the population’s percentile range. That anomaly becomes a risk indicator. Combined with other risk factors, it can trigger an alert based on your fraud detection strategy.
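Here is a small sketch of that cohort comparison, assuming we track a metric such as the time taken to complete the account-opening form for a peer group (all values are invented): a new applicant who sits at the extreme tail of the population becomes a risk indicator even though we know nothing about them individually.

```python
def percentile_rank(population, value):
    """Fraction of the peer group whose value is below the applicant's."""
    return sum(1 for v in population if v < value) / len(population)

# Invented peer-group data: seconds taken to complete the account-opening form
peer_form_times = [95, 120, 142, 88, 167, 110, 133, 150, 101, 125]

applicant_time = 4  # this applicant completed the whole form in 4 seconds
rank = percentile_rank(peer_form_times, applicant_time)  # 0.0: nobody in the cohort was this fast

if rank <= 0.01 or rank >= 0.99:  # illustrative tail cut-off, not a production rule
    print(f"Extreme outlier vs. peer group (percentile {rank:.0%}): record as a risk indicator")
```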

Combining individual and peer group profiling is a highly effective way to respond to emerging fraud threats.

Here’s why:

  • Individual profiling focuses on a user’s specific characteristics and behaviors, such as transaction patterns, device information, and login times. Over time, it helps build a detailed profile, allowing the system to detect anomalies that deviate from the customer’s typical behavior.
  • Peer group profiling compares this individual’s behavior with the behavior of others in similar contexts. This could include people from the same demographic, location, device type, or similar transactional behavior. Comparing an individual’s activity to a broader group helps catch outliers or bots early.

When you combine both strategies, you create a layered approach:

  • Individual Profiling ensures that any unusual behavior by a specific user is flagged.
  • Peer Group Profiling helps catch suspicious behavior early on, even if the individual profile is still being established (like with a new customer). It looks at similarities and differences with established behaviors from others in the peer group.
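Put together, here is a deliberately simplified sketch of how the two layers might be combined for a brand-new customer (the scores and weighting ramp are invented): while the individual profile is still thin, the peer-group comparison carries most of the weight, then hands over to the individual profile as history accumulates.

```python
def layered_score(individual_score, peer_score, sessions_observed):
    """Blend individual and peer-group anomaly scores, leaning on the peer group for new customers."""
    # Confidence in the individual profile grows with observed history (invented ramp)
    individual_weight = min(sessions_observed / 20, 1.0)
    return individual_weight * individual_score + (1 - individual_weight) * peer_score

# Brand-new customer: almost no history, so the peer-group signal dominates
print(layered_score(individual_score=0.2, peer_score=0.9, sessions_observed=1))   # dominated by the peer score
# Established customer: their own baseline dominates
print(layered_score(individual_score=0.2, peer_score=0.9, sessions_observed=60))  # 0.2
```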

This layered strategy is the foundation of a more robust defense against new account fraud and bots.

Incorporating Friendly Friction

Not all friction is negative for the customer experience. While seamless interactions are ideal, adding controlled friction (such as security challenges) at the right moments can help uncover isolated incidents that may actually be part of a larger coordinated attack. This friction increases the level of assurance, ensuring that the legitimate customer is attempting to perform a genuine action.

To do this effectively, friction often takes the form of authentication steps. These could be:

  • Additional Authentication: When suspicious activity is detected, ask for more information (e.g., multi-factor authentication).
  • Knowledge-Based Challenges: Ask the customer something only they would know (like security questions or behavioral history).
  • Human Challenges: Present CAPTCHAs or similar tasks that bots struggle to complete.

By strategically adding these friction points, banks can better determine whether the action is genuine or potentially part of a larger fraudulent attempt. It’s about balancing security and user experience to reduce fraud without adding too much friction for legitimate customers.
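As an illustrative sketch only (the risk bands, signals, and actions are assumptions, not a prescribed policy), friction can be applied in proportion to risk so that most genuine customers never see it:

```python
def choose_friction(risk_score, is_known_device):
    """Map an illustrative risk score (0-1) to a proportionate friction step."""
    if risk_score < 0.3 and is_known_device:
        return "none"  # seamless path for clearly genuine activity
    if risk_score < 0.6:
        return "human_challenge"  # CAPTCHA-style task that bots struggle with
    if risk_score < 0.85:
        return "multi_factor_authentication"
    return "block_and_review"  # highest-risk actions get stopped outright

print(choose_friction(0.15, is_known_device=True))   # none
print(choose_friction(0.50, is_known_device=False))  # human_challenge
print(choose_friction(0.90, is_known_device=False))  # block_and_review
```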

Organizational Silos: Fraud vs. Security

Bots often fall into a gray area between two departments: security and fraud. These silos can create challenges. The security department typically focuses on denial-of-service attacks, where bots target and disrupt services. The fraud department is more concerned with bots that compromise credentials or execute fraudulent transactions.

Helping fraud and security departments collaborate better benefits both departments and the organization. While much of the expertise on bots may reside with the security team, the fraud department is the key stakeholder in terms of financial loss. Often, each department pursues separate solutions without realizing they share common goals. By aligning their efforts, they could find vendors or solutions that address both fraud and security more effectively.

Summary

To truly defend against the ever-evolving bot landscape, embrace a layered approach that integrates individual profiling, peer group analysis, and behavioral detection. Like baking a layered cake, each layer of defense adds strength and resilience, ensuring that even the most sophisticated bots can be detected and thwarted. However, the collaboration between fraud and security teams is just as important as the layers themselves. These departments must break down silos and work together to build a cohesive defense strategy that addresses both security breaches and financial losses.

Now is the time to assess your bot detection capabilities and ensure you have the right mix of defenses. By combining advanced technology with cross-team collaboration, you can stay ahead of new threats and keep your customers and assets safe. It’s not just about keeping bots out—it’s about being ready for whatever comes next.
