Can AI be Biased? Think Again.

Sayam Pradhan
7 min read · Oct 18, 2021



Naturally, we’re all biased to believe we’re objective, which is exactly why it’s important to question things sometimes. Technology can be biased too, even when the bias is nobody’s deliberate choice. There are plenty of reasons to be wary of artificial intelligence: AI systems can inherit biases from their creators and from their surroundings. If the goal is to eliminate bias, it’s important to understand AI bias first. Researchers have documented striking cases, such as Google’s image recognition algorithm labelling people of color as gorillas, and Microsoft’s AI chatbot Tay rapidly turning racist and sexist because those biases were reinforced by the way users taught it.

In order to fight bias, we need a strong understanding of the two types of bias in artificial intelligence. AI has the potential to disrupt our lives in previously unimaginable ways, but at the same time it can introduce biases of its own. Broadly, there are two types: technical biases and societal biases. Technical bias results from the way an algorithm is built and trained, while societal bias arises when an AI reflects the bigotry and discrimination of the society around it. A typical example of technical bias appears in face recognition software, which trains itself to recognize faces based on what it has seen in the past. If the algorithm is trained on data in which long hair is associated with women and shorter hair with men, it will tend to classify a man with long hair as a woman, because it has never seen a long-haired man. The best way to avoid this problem is to train facial recognition software on a diverse set of faces. If we learn about these biases with a bit of research, we can prevent them from affecting our findings.
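To make that concrete, here’s a minimal sketch of my own (the data is invented, and it assumes scikit-learn is installed) showing how a classifier absorbs exactly this kind of skew from its training data:

```python
# A toy sketch of the hair-length example above. The data is invented:
# every long-haired training example happens to be a woman, so the model
# learns "long hair means woman" as if it were a rule.
from sklearn.tree import DecisionTreeClassifier

hair_length_cm = [[5], [8], [10], [40], [45], [50]]  # feature: hair length
labels = [0, 0, 0, 1, 1, 1]                          # 0 = man, 1 = woman

model = DecisionTreeClassifier().fit(hair_length_cm, labels)

# A long-haired man gets classified as a woman, because the training
# data never contained one.
print(model.predict([[42]]))  # -> [1], i.e. "woman"
```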

Types of Biases in Artificial Intelligence

As introduced above, there are two major types of bias in Artificial Intelligence: “Technical Biases” and “Societal Biases.” Let’s look at each in turn.

1. Technical Bias: Technical bias occurs when something goes wrong during the development stage of the AI program. The error can be introduced intentionally or unintentionally; designers sometimes carry their own human biases into the program, and the program begins to behave accordingly. The five types of technical bias are:

a) Sample Bias: When a machine-learning system misbehaves or makes a mistake, there is a good chance it has been taught the wrong thing. Perhaps the data does not reflect the environment the system will be deployed in, or perhaps the skew was deliberately inserted by a human. This is called sample bias, and it can lead to an overly optimistic view of performance. Bias in data inevitably creates bias in the products built from that data. If your facial recognition technology doesn’t work well for people of color, you can’t reliably use it to identify criminals in photos. Likewise, if your speech recognition software mishears African Americans but not Caucasians, calls will be misdirected and money will be lost. Every time you use a technology like this without accounting for demographic bias, you make it worse: you make it harder for people of color to have their voices heard fairly, and you increase the likelihood that today’s AI systems will perpetuate biases that have always existed between different groups of humans.
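Here’s a small synthetic sketch of sample bias, again my own illustration rather than anyone’s production code: the model is trained almost entirely on one group, and its accuracy collapses on the group the sample under-represents.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Toy two-feature data; "shift" moves the class boundary for group B.
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

X_a, y_a = make_group(1000, shift=0.0)  # group A: well represented
X_b, y_b = make_group(1000, shift=2.0)  # group B: barely represented

# Biased sample: 95% group A, only 5% group B.
X_train = np.vstack([X_a[:950], X_b[:50]])
y_train = np.hstack([y_a[:950], y_b[:50]])
model = LogisticRegression().fit(X_train, y_train)

print("accuracy on group A:", model.score(X_a, y_a))  # high
print("accuracy on group B:", model.score(X_b, y_b))  # much worse
```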

b) Exclusion Bias: Exclusion bias is most prevalent during the data preprocessing stage. Most of the time, it’s a case of deleting valuable data that was deemed unimportant. It can, however, occur as a result of the systematic exclusion of certain information.
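As a hypothetical illustration with invented data, even a routine cleaning step like dropping incomplete rows can systematically exclude one group:

```python
import pandas as pd

df = pd.DataFrame({
    "group":   ["A"] * 5 + ["B"] * 5,
    "feature": [1.0, 2.0, 1.5, 2.2, 1.8, None, None, None, 3.1, 2.9],
    "label":   [0, 0, 1, 1, 0, 1, 1, 0, 1, 1],
})

# A preprocessing step that looks harmless: drop incomplete rows.
clean = df.dropna()

# Group B held most of the missing entries, so the "cleaned" training
# data now under-represents it by a wide margin.
print(df["group"].value_counts().to_dict())     # {'A': 5, 'B': 5}
print(clean["group"].value_counts().to_dict())  # {'A': 5, 'B': 2}
```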

c) Measurement Bias: This type of bias occurs when the data collected for training differs from the data encountered in the real world, or when faulty measurements distort the data.
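A made-up example: a model trained on readings from a calibrated instrument quietly drifts when the production instrument measures differently.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Training: weights from a well-calibrated scale drive some outcome.
true_weight = rng.uniform(50, 100, size=200)
outcome = 2.0 * true_weight + 10.0
model = LinearRegression().fit(true_weight.reshape(-1, 1), outcome)

# Production: the field scale reads 5 kg heavy on every measurement,
# so every prediction is systematically shifted.
field_reading = true_weight + 5.0
pred = model.predict(field_reading.reshape(-1, 1))
print("systematic error:", np.mean(pred - outcome))  # ~ +10.0
```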

d) Recall Bias: This is a type of measurement bias that occurs frequently during the data labelling stage of a project. Recall bias occurs when similar types of data are labelled inconsistently. As a result, accuracy tends to suffer.
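One cheap safeguard is to scan for identical items that received different labels. A sketch with invented labels, using only the Python standard library:

```python
from collections import defaultdict

labelled = [
    ("cracked screen", "defect"),
    ("cracked screen", "cosmetic"),  # same item, different label
    ("dead battery", "defect"),
    ("dead battery", "defect"),
]

labels_by_item = defaultdict(set)
for text, label in labelled:
    labels_by_item[text].add(label)

inconsistent = {t: sorted(l) for t, l in labels_by_item.items() if len(l) > 1}
print("inconsistently labelled items:", inconsistent)
# -> {'cracked screen': ['cosmetic', 'defect']}
```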

e) Observer Bias: Also known as confirmation bias, this is the effect of seeing what you expect or want to see in data. It can occur when researchers enter a project with subjective thoughts about their study, whether conscious or unconscious. It is also evident when labelers let their subjective opinions steer their labelling habits, resulting in inaccurate data.
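A standard check here is inter-annotator agreement: give two labelers the same items and measure how far their agreement exceeds chance. A quick sketch with made-up labels, using Cohen’s kappa from scikit-learn:

```python
from sklearn.metrics import cohen_kappa_score

annotator_1 = ["pos", "pos", "neg", "pos", "neg", "pos", "pos", "neg"]
annotator_2 = ["pos", "neg", "neg", "pos", "pos", "pos", "neg", "neg"]

# Kappa of 1.0 is perfect agreement; values near 0 mean agreement is
# barely better than chance, a red flag that subjective judgement
# (observer bias) is leaking into the labels.
print("Cohen's kappa:", cohen_kappa_score(annotator_1, annotator_2))
```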

2. Societal Bias: An AI may develop societal bias because it is trained on human-made data that reflects society’s prejudices, or because its design reflects the biases of its makers. Alternatively, through logical flaws or unexpected consequences, an AI may come to accept inputs, results, or actions that fall outside our own norms. Societal AI bias can have negative consequences for society as a whole, just as human bias has had negative consequences for individuals.

a) Racial Bias: Racial bias in AI tools is a telling example of how historical data shapes technology. The case is relatively straightforward: facial recognition software is typically trained on photo sets dominated by Caucasian faces, with far fewer examples of people of color. Because of this imbalance in the training data, the software recognizes Caucasian faces far more reliably than the faces of people of color.
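The practical countermeasure is to audit accuracy per demographic group rather than trusting a single aggregate number. A minimal sketch with illustrative arrays:

```python
import numpy as np

# Invented predictions plus the group each test subject belongs to.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 1])
group = np.array(["lighter-skinned"] * 5 + ["darker-skinned"] * 5)

# One aggregate number hides the disparity the per-group breakdown shows.
print("overall accuracy:", (y_true == y_pred).mean())
for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"accuracy for {g} subjects: {acc:.0%}")
```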

b) Association Bias: This bias occurs when the data behind a machine learning model reinforces or multiplies a cultural bias, and it is one of the ways algorithmic bias works against diversity in technology. A striking example of data reinforcing and amplifying a cultural bias is the 2016 American presidential election. If you don’t want to wade through the many articles written about it, here’s the short version: two companies highly proficient in machine learning and big data analysis, Google and Facebook, spread Donald Trump’s messages and stances far and wide via their popular (and increasingly monopolistic) services. They did this effectively enough to more than offset the traditional advantages of money and airtime in American political campaigning.

Examples of Bias in AI

Now that we have covered the types of AI bias, let’s take a look at some real-world examples.

Amazon Hiring

Amazon’s engineers discovered that the computer program the company used to screen job candidates was not rating them in a gender-neutral way: it consistently preferred men over women. The company’s machine-learning experts later reported that the AI-based recruiting engine was flawed; it had effectively taught itself to reproduce the male-dominated hiring patterns in its training data, so it simply surfaced biased results. The discovery that an artificially intelligent system had taught itself to discriminate against women was nothing new, said Dr. Sandra Wachter, a researcher at Oxford University. Wachter nonetheless believes that algorithms have the potential to outperform humans in decision-making. Even so, this was a huge problem.

US Health Care Algorithm

A study published in Science found that a health care risk-prediction algorithm, one of a class of tools applied to more than 200 million people in the United States, exhibited racial bias because it relied on a flawed metric for determining who needed care. The algorithm helps hospitals and insurance companies decide how best to keep people healthy: by giving higher-risk patients extra visits and more organized, specific attention, it aims to prevent severe complications, lower costs, and increase patient satisfaction. The flawed metric was past health care spending, which the algorithm treated as a proxy for medical need, even though historically less money is spent on Black patients with the same level of illness. When algorithms this closely tied to people’s health become biased, the consequences can be very risky.
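To see why a cost-based score goes wrong, here’s a hedged, purely synthetic sketch: if one group spends less on care at the same level of illness, ranking patients by predicted cost under-enrolls that group.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
illness = rng.uniform(0.0, 1.0, size=n)   # true need for care
group_b = rng.random(n) < 0.5             # half the patients

# Suppose group B generates 40% less cost at the same illness level
# (e.g. because of unequal access to care). Cost is then a biased
# proxy for need.
cost = illness * np.where(group_b, 0.6, 1.0)

sickest = illness >= np.quantile(illness, 0.9)   # who actually needs help
enrolled = cost >= np.quantile(cost, 0.9)        # who the score enrolls

print("group B share of the sickest 10%: ",
      (group_b & sickest).sum() / sickest.sum())   # ~0.5
print("group B share of those enrolled:  ",
      (group_b & enrolled).sum() / enrolled.sum()) # far below 0.5
```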

Facebook’s Ad Algorithm

It’s easy to find and share just about anything on Facebook. People connect and keep up with friends and family every day by sharing all kinds of things: photos, opinions, articles, and favorite jokes or memes. And of course, many of us use Facebook to follow the news and the causes we care most about. Before 2019, Facebook let advertisers choose what kind of people they wanted to advertise to, including along lines like race and religion that should have been off-limits. For this, Facebook was sued by the US Department of Housing and Urban Development. Facebook later said it had updated its system so that advertisers can no longer target people based on religion, race, gender, and similar attributes. However, new evidence shows that Facebook’s AI-based algorithm, rather than the advertisers, now decides who actually sees a certain kind of ad, and it still routes certain kinds of ads to certain kinds of people; for example, it may show a particular ad only to women over 40. Facebook’s algorithm likewise recommends posts to you based on the information you’ve provided in your profile.

Some Causes of Bias in Artificial Intelligence

  • Inadequate data for training the program.
  • The data contains human bias.
  • It is difficult to remove bias from data.
  • There is not a lot of diversity among the employees.
  • The company may be creating bias on purpose in order to make money.
  • The company may be unwilling to pay for data de-biasing.
  • Because of privacy concerns, external audits are not possible.

These causes are explained in depth by Alexandra Ebert in her article “10 Reasons For Bias In AI And What To Do About It”.

Written by Sayam Pradhan

I’m a blogger, author, and cybersecurity expert. I enjoy writing blog posts about current technology.
