When AI Gets It Wrong: Bias in AI Decisions

5-minute read

Can Artificial Intelligence Make Bad Decisions?

Even if you’ve never asked Siri to set a reminder or told Alexa to play a movie, artificial intelligence (AI) is still part of your world. That’s because AI and machine-learning systems that can understand us and make decisions on our behalf play an ever-larger role in our lives.

Today, AI appears in everything from medicine and education to retail and entertainment to hiring, banking and criminal justice, just to name a few. Some AI applications are undoubtedly helpful, like AI that detects wildfires faster or AI tools that help astronomers sift through terabytes of space imagery.

But along with AI’s ability to help comes a danger: the more we trust this relatively new technology to make important decisions, the more room there is for large-scale errors. The stakes rise quickly when an algorithm decides whether you qualify for a loan or land your dream job. In this article, we’ll examine how bias in AI can inadvertently harm a wide range of individuals and industries.

AI Bias in Finance

Did you know that AI helps prevent credit card fraud and exposes money laundering schemes? Some investment firms are calling on AI to help them make smarter trades, and many financial executives think AI will help with everything from improved customer service to an increase in profits.

But what happens when a machine-learning algorithm decides whether you qualify for a loan? Because AI can process far larger amounts of data than humans can, companies now rely on alternative data to predict whether you’re a good credit risk.

Let’s say people who pay back their loans don’t tend to shop at a particular website. Or they tend to have a certain number of LinkedIn connections. But you like that website, and you just don’t use LinkedIn very often. If a lending company fed your data into its AI algorithm, the model might flag you as a bad risk based on those choices alone.
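To see how that can happen, here is a minimal sketch in Python. Everything in it is a labeled assumption: the data is synthetic, and shops_at_site and linkedin_connections are hypothetical features, not anything a real lender has disclosed. The point is only that a model trained on such proxies can penalize an applicant whose actual finances are sound:

```python
# A minimal sketch of how "alternative data" can become a spurious proxy.
# Hypothetical features and synthetic data; not any lender's real model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# The true driver of repayment in this toy world: income (standardized).
income = rng.normal(size=n)
repaid = (income + rng.normal(scale=0.5, size=n) > 0).astype(int)

# "Alternative data" that merely correlates with income in this sample:
# shopping at a hypothetical website and LinkedIn connection count.
shops_at_site = ((-income) + rng.normal(scale=1.0, size=n) > 0).astype(int)
linkedin_connections = np.clip(50 + 20 * income + rng.normal(scale=10, size=n), 0, None)

# Train WITHOUT income: the model leans on the proxies instead.
X = np.column_stack([shops_at_site, linkedin_connections])
model = LogisticRegression(max_iter=1000).fit(X, repaid)
print(dict(zip(["shops_at_site", "linkedin_connections"], model.coef_[0])))

# An applicant who likes the site and rarely uses LinkedIn scores poorly,
# even though their actual income (unseen by the model) may be fine.
applicant = np.array([[1, 5]])
print("approval probability:", model.predict_proba(applicant)[0, 1])
```

The model never sees income, only the proxies, so your taste in websites ends up standing in for your creditworthiness.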

History itself can create bias too. The latest US Census data shows that Black and Hispanic populations have been historically underbanked. For AI to learn, it must be fed data, and if that data shows certain segments of the population being denied loans more often, the model may falsely “learn” that those segments are greater credit risks, perpetuating a negative cycle.
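Here is a similarly hedged toy sketch of that feedback loop, with purely synthetic data: if one group was historically denied loans more often (so its repayment outcomes were recorded as failures), a model can re-learn the bias through an innocuous-looking proxy:

```python
# A minimal sketch of how biased historical outcomes get "learned".
# Synthetic data only; the group labels and rates are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)      # 0 = historically well-banked, 1 = underbanked
ability_to_repay = rng.normal(size=n)   # identical distribution in both groups

# Historical approvals were skewed: group 1 was denied far more often,
# so its repayment outcomes were recorded as 0 regardless of ability.
approved_historically = (group == 0) | (rng.random(n) < 0.3)
label = np.where(approved_historically, (ability_to_repay > 0).astype(int), 0)

# A ZIP-code-like proxy correlated with group membership.
proxy = group + rng.normal(scale=0.3, size=n)

model = LogisticRegression().fit(np.column_stack([ability_to_repay, proxy]), label)
print("weight on ability:", model.coef_[0][0])
print("weight on group proxy:", model.coef_[0][1])  # negative: the old bias is re-learned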

AI Bias in Hiring

More and more organizations are turning to AI-powered recruiting agencies to help them fill positions. On the one hand, these agencies can eliminate thousands of hours of interviewing time by using AI to analyze automated video interviews or data from tests and resumes to predict employability. On the other hand, some AI researchers worry that using AI to judge a person’s fitness for a job could be unscientific or, worse, biased.

The hiring-technology company HireVue has developed an AI-powered system designed specifically for interviews. The technology uses a candidate’s computer or cellphone camera to analyze everything from facial expressions to diction to speaking voice – among other factors – before assigning an automatically generated “employability” score.

The problem with this approach? Some experts argue that such a system cannot truly assess the worth, value or employability of an individual and that “analyzing a human being like this…could end up penalizing nonnative speakers, visibly nervous interviewees or anyone else who doesn’t fit the model for look and speech.” Furthermore, they say the technology is “an unfounded blend of superficial measurements and arbitrary number-crunching that is not rooted in scientific fact.”

In another example, Amazon abandoned a hiring algorithm in 2018 because it passed over female applicants in favor of male applicants for tech roles. The reason was simple: the program had been trained on data from past applicants and employees, most of whom were male. While tech has long been a male-dominated industry, it’s shifting. If AI only considers past data, the future will never change.
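The mechanism is easy to reproduce in miniature. Below is a hedged sketch showing how a text-based screener trained on skewed historical hiring decisions assigns a negative weight to a skill-neutral, gender-correlated term. All of the resumes, labels and the womens_chess_club token are synthetic assumptions, not Amazon’s actual data or model:

```python
# A minimal sketch of the Amazon-style failure: train a resume screener
# on past hires (mostly male) and watch a gender-correlated term get
# negative weight. Resumes and hiring labels here are synthetic.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
resumes, hired = [], []
for _ in range(2000):
    skilled = rng.random() < 0.5
    female = rng.random() < 0.3
    words = ["python", "sql"] if skilled else ["retail"]
    if female:
        words.append("womens_chess_club")  # gender-correlated, skill-neutral
    resumes.append(" ".join(words))
    # Historical hiring favored men independent of skill:
    hired.append(int(skilled and (not female or rng.random() < 0.3)))

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)
for term, weight in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{term:20s} {weight:+.2f}")  # the club term comes out negative
```

Nothing about the club term signals ability; the model penalizes it only because it co-occurs with candidates whom past humans rejected.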

AI in Insurance, Criminal Justice and Other Industries

It’s no secret that insurers look at countless factors to determine rates and coverage. Thanks to AI, they can now analyze countless more. And while AI’s data-crunching abilities are helping insurers deliver faster quotes and claims settlements, some argue they could lead to “personalized pricing,” where machine learning ultimately decides who gets coverage and what it costs.

Flaws and bias within AI get even more serious in law enforcement and criminal justice, where the dangers range from being misidentified by police to receiving a stiffer sentence simply because of race.

So what can be done to make AI less biased in the first place? For starters, we need to understand how and why AI reaches certain decisions. Two terms that often come up when people talk about improving AI are explainable AI and auditable AI.

Explainable and Auditable AI

Explainable AI means being able to ask an AI application why it made the decision it did. The Defense Advanced Research Projects Agency (DARPA), an agency within the Department of Defense, is working on its Explainable AI (XAI) program to develop techniques that allow systems not only to explain their decision-making but also to offer insight into the strengths and weaknesses of their reasoning. Explainable AI helps us know how much to rely on results and how to help AI improve.
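DARPA’s research goes well beyond this, but one simple form of explanation, per-feature contributions in a linear model, can be sketched in a few lines. The feature names below are hypothetical, and the data is synthetic:

```python
# A minimal sketch of one simple form of explainability: for a linear
# model, each feature's contribution to a decision is its weight times
# its standardized value. Feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(applicant, names=("income", "tenure", "debt_ratio")):
    """Return each feature's signed contribution to the log-odds."""
    z = scaler.transform([applicant])[0]
    contributions = model.coef_[0] * z
    return sorted(zip(names, contributions), key=lambda t: t[1])

# The most negative contributions come first: a crude version of the
# "reasons for denial" a lender might report.
print(explain([-1.2, 0.1, 2.0]))
```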

Auditable AI means letting third parties test a system’s reasoning by giving it a wide range of queries and measuring the results for unintended bias or other flawed logic.
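One basic audit of this kind is a paired test: probe the system with applicants who differ only in a protected attribute (or a proxy for it) and count how often the decision flips. In the sketch below, score_applicant is a stand-in for the black-box system under audit, and the toy scorer is a deliberately biased assumption used only for demonstration:

```python
# A minimal sketch of a paired-query bias audit of a black-box scorer.
import numpy as np

def audit_paired(score_applicant, base_applicants, attribute_index):
    """Flip one attribute per applicant; report how often the decision changes."""
    flipped = 0
    for person in base_applicants:
        a, b = person.copy(), person.copy()
        a[attribute_index], b[attribute_index] = 0, 1
        if (score_applicant(a) >= 0.5) != (score_applicant(b) >= 0.5):
            flipped += 1
    return flipped / len(base_applicants)

# A deliberately biased toy scorer (assumption, for demo only):
# x[1] plays the role of the protected attribute or its proxy.
toy_scorer = lambda x: 1 / (1 + np.exp(-(x[0] - 2.0 * x[1])))

rng = np.random.default_rng(4)
people = [rng.normal(size=2) for _ in range(500)]
print("decisions flipped by the protected attribute:", audit_paired(toy_scorer, people, 1))
```

A high flip rate is a red flag that the attribute (or something standing in for it) is driving decisions.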

Fei-Fei Li, an AI pioneer, former Google executive and co-director of Stanford University’s Human-Centered AI Institute, argues that another way to help eliminate bias, especially around gender and race discrimination, is to get more women and people of color involved in developing AI systems. That’s not to say programmers are deliberately building bias into AI; rather, having a broader range of people involved can help stamp out unconscious leanings and bring overlooked concerns to light.

Legal Responses to AI Bias

Ultimately, what may bring these anti-bias tactics into effect is the law. The Equal Credit Opportunity Act already requires any lender denying a loan to offer “specific reasons” why, which means companies using AI to make loan decisions are effectively required to produce a form of explainable AI. And in Illinois, a new law requires any company intending to use AI in its hiring process to notify all applicants of that fact beforehand. At the very least, if AI is going to start making the big decisions in our lives, we deserve to know about it.

A Few Questions for All of Us to Consider

There’s no question that AI is already having a significant impact on our lives, often without our even realizing it. What questions or concerns do you have about how it might be affecting you, those you know or those you serve? If your organization uses some form of AI in its decision-making, what steps are you taking to ensure bias doesn’t accidentally creep into the picture? Feel free to share your thoughts in the comments section below.

Take the Next Step

We can help you quickly decide whether automation would be a good fit for your organization. With 20+ years of experience in automation, we need only about five minutes of Q&A.
