Artificial intelligence will massively shape our future society, and humans have a habit of thinking of AI as something close to a god – an omniscient being whose objective wisdom vastly outclasses our puny mortal brains.

But AI is a human creation, and so it carries human weaknesses and biases. Far from being objective, artificial intelligence has displayed racist and sexist biases in case after case documented by researchers. New Scientist lists just a few examples, such as gender-recognition systems misclassifying black women far more often than white men, and ads for high-income jobs being shown to men more often than to women.

It would be nice to think that these biased AIs are the product of a few malicious programmers, and thus could be fixed by removing said programmers. But the problem of racist AI is far more insidious than that. Artificial intelligence reflects what humans believe, and it is a reminder that programmers must be made aware of their own unconscious biases, and that diversity in the programming workplace matters.

Far from being a god, artificial intelligence is more like a child. And like any child, an AI is not born racist; it learns to be, thanks to the biases baked into our society.

An overt example of how AIs learn to be racist is Microsoft’s infamous Tay debacle back in 2016. The chatbot started off polite and cheerful, aiming to learn how to engage humans through playful conversation. But after being deluged by a horde of Twitter and 4chan trolls, Tay learned to parrot their words, praising Hitler and Trump and spouting racist and anti-Semitic remarks. Tay no more understood what it was saying than a small child aping his racist father’s words would. But humans put racist data in, and Tay put racist data out.

Tay learned to be racist through a process called machine learning, which has been a major boon to AI development. In machine learning and its cousin deep learning, a programmer gives an AI an initial set of training data. A facial recognition AI, for example, would receive many pictures, some labeled “face” and some labeled “not a face.” Over time, the AI works out its own patterns and rules for what constitutes a face, without a human ever having to define one explicitly.
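To make the paradigm concrete, here is a minimal sketch in Python with scikit-learn; the two-number “images” and their labels are invented purely for illustration:

```python
# A toy version of supervised learning: the classifier is shown labeled
# examples and induces its own rule. All data here is made up.
from sklearn.tree import DecisionTreeClassifier

# Feature vectors (pretend pixel statistics) with face (1) / not-face (0) labels.
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]

clf = DecisionTreeClassifier().fit(X, y)  # the model infers the rule itself
print(clf.predict([[0.85, 0.9]]))         # -> [1]; no human wrote "what a face is"
```

Nothing in this code defines a face; the definition is whatever pattern separates the labeled examples, which is exactly why the choice of examples matters so much.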

But what if the initial faces the AI receives are overwhelmingly white male faces? Over time, the AI could decide that being white is a necessary prerequisite for a face and thus be confused by black faces. This has happened multiple times: the New York Times reported that facial recognition systems misidentified darker-skinned women as much as 35 percent of the time, compared with an error rate of 1 percent for white men. Data that is skewed from the start produces an AI that has learned to be prejudiced against minorities, even though no one explicitly told it to be.
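A toy simulation makes the mechanism visible. In the sketch below (Python with scikit-learn; the group names, feature distributions, and the 95/5 training split are all hypothetical), a classifier is trained on data dominated by one group and then evaluated on each group separately:

```python
# Synthetic demonstration: "faces" occupy a different region of feature
# space for each group, and the training set is 95% group A.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical 1-D feature means for (group, label): 1 = face, 0 = not a face.
MEANS = {("A", 1): 2.0, ("A", 0): -2.0,
         ("B", 1): -1.0, ("B", 0): -3.0}

def sample(group, n):
    y = rng.integers(0, 2, n)
    X = np.array([rng.normal(MEANS[(group, int(label))], 0.5) for label in y])
    return X.reshape(-1, 1), y

X_a, y_a = sample("A", 950)   # majority group
X_b, y_b = sample("B", 50)    # minority group
clf = LogisticRegression().fit(np.vstack([X_a, X_b]),
                               np.concatenate([y_a, y_b]))

# Held-out evaluation, broken out by group.
for g in ("A", "B"):
    X_t, y_t = sample(g, 2000)
    print(g, "error rate:", round(float((clf.predict(X_t) != y_t).mean()), 3))
```

The exact numbers vary with the random seed, but the pattern does not: the model’s errors concentrate in the group it rarely saw during training, even though no line of code mentions race.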

In the case of facial recognition software, the fix is not too difficult: add more pictures of diverse groups to help the AI understand that black faces are faces. Microsoft reported in July “that it has updated its facial recognition technology with significant improvements in the system’s ability to recognize gender across skin tones,” lowering its error rates.
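One crude version of that fix can be sketched in code: oversample the underrepresented group so that both groups carry equal weight during training. The snippet below uses scikit-learn’s resample utility; the arrays and the 950/50 split are made up:

```python
# Rebalancing a skewed training set by oversampling the minority group.
# The data is synthetic; only the resampling step is the point here.
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(0)
X_a, y_a = rng.normal(size=(950, 2)), rng.integers(0, 2, 950)  # majority group
X_b, y_b = rng.normal(size=(50, 2)), rng.integers(0, 2, 50)    # minority group

# Draw group B's examples with replacement until it matches group A's count.
X_b_up, y_b_up = resample(X_b, y_b, replace=True, n_samples=len(X_a),
                          random_state=0)
X_train = np.vstack([X_a, X_b_up])
y_train = np.concatenate([y_a, y_b_up])
```

Oversampling is only a stopgap, since duplicated photos add no new information; the real fix is collecting genuinely diverse images in the first place.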

Constant vigilance, audits of training data, and a better understanding of where programmers tend to make mistakes can help repair instances where an AI ends up displaying racially biased results. But how can we prevent the AI from displaying such results in the first place? The most obvious idea would be to simply never tell the AI any individual’s race or gender. But just as we can often infer a person’s race or gender from attributes such as their name or place of residence, so can an AI.
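A small experiment shows why hiding the attribute fails. In the synthetic sketch below (Python with scikit-learn; the 90 percent ZIP-code correlation is an invented number standing in for residential segregation), race is deleted from the feature set, yet a simple model recovers it from a proxy:

```python
# The proxy problem: a protected attribute leaks through correlated features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
race = rng.integers(0, 2, n)                   # protected attribute, held out

# A segregated toy city: ZIP code agrees with race 90% of the time.
zip_code = np.where(rng.random(n) < 0.9, race, 1 - race)
other = rng.normal(size=n)                     # an unrelated feature

X = np.column_stack([zip_code, other])         # note: race itself is NOT here
X_tr, X_te, r_tr, r_te = train_test_split(X, race, random_state=0)

proxy_model = LogisticRegression().fit(X_tr, r_tr)
print("race recovered from proxies:", proxy_model.score(X_te, r_te))  # ~0.9
```

Deleting the column does not delete the signal; a model trained on the remaining features can still behave as if it knew.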

A critical step towards fighting AI racism is to make the ranks of programmers more diverse. The programmers did not intend for facial recognition software to be better at recognizing white men. But because most programmers are white men, they fed the software photos of white men without sensing that anything was wrong. The lack of diversity also meant that the programmers looked only at the total number of errors, which was low, without ever breaking out the errors made on black women, which were high.
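The arithmetic of that masking is simple. If the test set mirrors a hypothetical 95/5 split between the groups, the two error rates reported earlier combine into an overall figure that looks respectable:

```python
# Aggregate error hides subgroup error (the group shares are hypothetical;
# the per-group rates echo the figures reported by the New York Times).
share = {"white men": 0.95, "darker-skinned women": 0.05}
error = {"white men": 0.01, "darker-skinned women": 0.35}

overall = sum(share[g] * error[g] for g in share)
print(f"overall error: {overall:.1%}")  # 2.7%, despite 35% for one group
```

An overall error of 2.7 percent looks like success unless someone thinks to break the number out by group, and a homogeneous team is less likely to think of it.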

As our society comes to rely ever more on AI, it is critical to remember that AI systems can inherit the human foibles of their creators, including racist and sexist biases. Tech companies must therefore aim to become more diverse and be willing to ask whether the data given to an AI is truly representative. And those affected by an AI’s decisions should never be afraid to question and confront them. Far from being a god, AI can end up as a tool that, consciously or not, perpetuates white and male privilege.