February is Black History Month in the United States. It’s a time when we look back at the history of African Americans in the U.S., both their role and accomplishments in shaping American history and the hardships they continue to face. In the banking community, this is the perfect occasion to focus on how financial services can reduce the harm that Black and other minority bank customers are more likely to endure due to artificial intelligence (AI) algorithmic bias.

Algorithmic bias occurs when AI systems produce outcomes that unfairly and disproportionately impact certain groups of people. Even when bias is introduced into an AI system unintentionally, its effects are far-reaching.

3 Most Common Harms Caused by AI Bias

If left unchecked, algorithmic biases can have a lasting, harmful impact on bank customers’ financial lives. Let’s take a closer look at the types of harm AI bias can inflict on customers.

1. Allocation Harm

Allocation harm (sometimes called the harm of allocation) occurs when an AI system limits customers’ access to the financial products and services they need. Algorithmic biases can create barriers that prevent minority customers from opening new credit cards, raising their credit lines, getting approved for loans, or qualifying for lower interest rates. What’s more, biased algorithms can keep minority customers from receiving recommendations for better banking products that could improve their financial lives.

What does allocation harm look like outside the financial services sector? In healthcare, for example, it can manifest as algorithms failing to recommend vital medical procedures or prescriptions for minority patients. Without the right recommendations, minority patients face significantly higher health risks than other groups.

2. Quality of Service Harm

Imagine going to a restaurant and having your credit card declined when you finish your meal. It’s embarrassing and makes you angry at your bank. Or consider a bank whose New York customers appear to enjoy high transaction approval rates overall; upon closer inspection, however, Brooklynites experience much higher rates of declined transactions than customers from Manhattan. These scenarios are quality of service harms, and they disproportionately impact minority populations. When they arise, minority customers are forced to contact their banks in frustration.
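To make this concrete, here is a minimal sketch in Python of how a bank might surface such a gap by comparing decline rates across customer segments. The segment labels and record fields are hypothetical:

```python
from collections import defaultdict

def decline_rates_by_segment(transactions):
    """Compute the share of declined transactions for each customer segment."""
    totals = defaultdict(int)
    declines = defaultdict(int)
    for tx in transactions:
        totals[tx["segment"]] += 1
        if tx["declined"]:
            declines[tx["segment"]] += 1
    return {seg: declines[seg] / totals[seg] for seg in totals}

# Hypothetical transaction log: each record carries a segment and an outcome.
transactions = [
    {"segment": "Manhattan", "declined": False},
    {"segment": "Manhattan", "declined": False},
    {"segment": "Manhattan", "declined": True},
    {"segment": "Brooklyn", "declined": True},
    {"segment": "Brooklyn", "declined": False},
    {"segment": "Brooklyn", "declined": True},
]

for segment, rate in sorted(decline_rates_by_segment(transactions).items()):
    print(f"{segment}: {rate:.0%} of transactions declined")
```

A large gap between segments is a signal worth investigating, even when the citywide average looks healthy.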

This isn’t the only way biased algorithms can degrade quality of service for financial services customers. Suppose a minority customer calls their bank and encounters an automated phone system that doesn’t understand their accent, or a chatbot that doesn’t offer their preferred language. Each of these adds another quality of service harm to their experience.

Outside financial services, quality of service harms have also been reported in the wearable devices market. These connected technologies – designed to help users track vital health statistics and reach fitness goals – can struggle to take accurate readings on darker skin tones. The result is inaccurate health data for people of color.

3. Representation Harm

The final type of harm resulting from algorithmic bias is representation harm. As the name suggests, representation harm occurs when an algorithm builds a distorted picture of an entire group. When a group of people repeatedly experiences negative outcomes (many of which stem from the harms described above), the algorithm can come to misrepresent everyone who resembles them.

For example, as more African Americans experience declined card transactions, the algorithm may begin to paint others in the same group as high risk. As this unfair representation compounds, the algorithm labels similar customers the same way. The result: more Black customers, women, and other people of color continue to struggle to access credit or have transactions declined at considerably higher rates.

Representation harm manifests in other ways outside financial services. Google, for example, faced criticism a few years ago for underrepresenting women and minorities in its image search results. When users searched for terms like “CEO” and “doctor,” the search engine returned images that overwhelmingly showed white men in these roles. Left unaddressed, such algorithms can reinforce unfair stereotypes of entire groups.

3 Steps Banks Can Take to Boost AI Fairness

These three harms are not mutually exclusive. If a bank’s algorithms paint a broad picture of Black or other minority customers as financially risky, those customers are more likely to experience poor quality of service or to miss out on important financial tools and products. Likewise, if a Black customer is unfairly denied a higher line of credit, it sets off a domino effect that damages their financial health and deepens the representation harm.

Algorithmic bias is no easy problem, and it can’t be solved overnight. But banks have a critical role to play in addressing how their algorithms affect their customers’ lives. Here are a few concrete steps that banks and other financial institutions (FIs) can take.

1. Raise AI Bias Awareness 

Being aware of AI’s potentially biased impact on customers is the critical first step to addressing AI bias. AI and machine learning algorithms are built by human beings, after all, and biases can infiltrate an algorithm at any stage of its lifecycle, from data collection and model development through post-deployment. No matter how well-intentioned an algorithm’s design may be, banks and their teams must continuously monitor how it affects their customers. Staying vigilant for potential problems and holding teams accountable for algorithms that disproportionately impact certain groups of customers are among the most important actions a bank can take to promote AI fairness.

2. Demand Responsible AI from Your External Partners

In addition to demanding accountability from internal teams, demand it from external vendors, partners, and third-party providers. Banks should consult with their engineering teams to understand which KPIs are used to measure responsible AI, and fairness should become a key KPI alongside traditional performance metrics. Armed with this knowledge, banks should ask their partners to show how they measure and promote responsible AI – or whether they consider it a priority at all.
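As an illustration of what a fairness KPI might look like in practice, here is a minimal sketch. The metric (a disparate impact ratio) and the 0.8 threshold, loosely modeled on the “four-fifths rule” from US employment guidelines, are our assumptions, not a prescribed standard:

```python
def approval_rate(decisions):
    """Share of approvals in a list of booleans (True = approved)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected_group, reference_group):
    """Approval rate of the protected group relative to the reference group.
    Values near 1.0 indicate parity; values below ~0.8 are a common red flag."""
    return approval_rate(protected_group) / approval_rate(reference_group)

# Hypothetical decision logs for two customer groups.
protected_group = [True, False, True, True, False]   # 60% approved
reference_group = [True, True, True, True, False]    # 80% approved

ratio = disparate_impact_ratio(protected_group, reference_group)
print(f"Fairness KPI (disparate impact ratio): {ratio:.2f}")
print("OK" if ratio >= 0.8 else "REVIEW: below the four-fifths threshold")
```

A number like this can sit on a dashboard next to accuracy, and it is the kind of figure a bank can reasonably ask its vendors to report.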

3. Treat AI Bias in Your Bank’s Systems as Another Risk

Banks are in the risk management business. They assess the risks of customer backgrounds, loans, transactions, and more on a daily basis. In addition to including information systems in enterprise risk management, it’s time for banks to explicitly treat bias in their AI applications as another element that requires regular risk assessment. Perform regular audits of how a system arrived at its decisions and carefully examine the real-world impact on real people’s lives. If an algorithm works as designed but results in higher transaction declines for Black customers and other minorities, look immediately for ways to reduce these harmful effects. Banks that can demonstrate their improved models produce decisions that are measurably fairer – say, 80% more fair than before – can credibly say they are making a positive difference in the lives of their minority customers. It is also a win-win: banks make a social impact and can use their inclusivity as a market differentiator.
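One illustrative way to operationalize this, and to back up a claim like “80% more fair,” is a recurring audit that treats group disparity as a tracked risk metric. This is a minimal sketch; the group names, model names, and tolerance are assumptions:

```python
def disparity(decline_rates):
    """Gap between the highest and lowest group decline rates."""
    return max(decline_rates.values()) - min(decline_rates.values())

def audit_model(name, decline_rates, previous_disparity=None, tolerance=0.05):
    """Produce a risk finding: flag disparities above tolerance and, when a
    previous audit is available, quantify improvement between audits."""
    gap = disparity(decline_rates)
    finding = {"model": name, "disparity": round(gap, 4), "flagged": gap > tolerance}
    if previous_disparity:
        finding["improvement"] = round(1 - gap / previous_disparity, 2)
    return finding

# Hypothetical decline rates by group, before and after remediation.
before = {"group_x": 0.20, "group_y": 0.05}   # 15-point gap
after = {"group_x": 0.08, "group_y": 0.05}    # 3-point gap

baseline = audit_model("card_declines_v1", before)
followup = audit_model("card_declines_v2", after,
                       previous_disparity=baseline["disparity"])
print(baseline)
print(followup)  # improvement of 0.8: an 80% reduction in disparity
```

Run on a regular schedule, an audit like this gives risk teams a concrete trail showing whether each model version narrows or widens the gap.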

Unfortunately, even in this day and age, AI fairness is still catching on. But banks committed to fairness will be in a unique position to improve the lives of their customers and stand apart from their competitors. Black History Month comes but once a year, but pursuing fairness in AI and banking should be a year-round endeavor for all of us.

Can your bank protect the entire customer journey? Watch our on-demand webinar Reinventing Digital Trust Across the Customer Journey to learn how to improve your fraud strategy for 2022.