Have you ever been in a situation where you gave a voice command to a smart assistant such as Siri or Alexa, only to have it reply with, “I don’t understand your question” or something similar?

Or have you called up a company only to find that its automated menu “doesn’t understand your request,” leaving you yelling, “Representative!” until you are finally given a human to talk to?

Well, there may be more to it than these intelligent systems being hard of hearing …

According to the MIT Technology Review, researchers have found that AI systems are learning to be prejudiced against certain dialects, particularly African American ones.

The data show that, basically, if you have a certain accent or you speak in a vernacular such as African American Vernacular English (often called Ebonics), these programs won’t register your voice properly.

This is problematic not just because it means automated phone systems and chatbots have trouble understanding minorities, but because these programs are also used to gauge public opinion based on what people post on social media.

What these programs can't understand, they don't include in their output.

This means that people who tweet and write using slang or vernacular are being overlooked. Any service, product or policy created based on the data these programs generate is a service, product or policy that fails to take the people who were skipped (usually youth and minorities) into consideration.
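To make that concrete, here’s a minimal sketch of how the exclusion happens, assuming a typical pipeline that filters tweets by detected language before counting opinions. The tweets are invented, and langdetect is just a stand-in for whatever language filter a real system might use.

```python
# A minimal sketch of how tweets can silently vanish from "public opinion."
# The tweets below are invented examples, and langdetect (pip install langdetect)
# stands in for whatever language filter a real pipeline might use.
from langdetect import detect, DetectorFactory
from langdetect.lang_detect_exception import LangDetectException

DetectorFactory.seed = 0  # make langdetect's guesses reproducible

tweets = [
    "I am very happy with the new policy announced today.",
    "This new policy is a complete disaster, honestly.",
    "ion even kno why they tryna push this mess fr",    # informal/vernacular spelling
    "dis policy wild, they not thinkin bout us at all",  # informal/vernacular spelling
]

kept, dropped = [], []
for tweet in tweets:
    try:
        lang = detect(tweet)
    except LangDetectException:
        lang = "unknown"
    # The pipeline only "hears" tweets the detector labels as English.
    (kept if lang == "en" else dropped).append((lang, tweet))

print(f"Counted toward public opinion: {len(kept)} tweets")
print(f"Silently excluded:             {len(dropped)} tweets")
for lang, tweet in dropped:
    print(f"  labeled '{lang}': {tweet}")
```

Whether any particular tweet survives the filter depends on the detector, but the structure is the point: anything misidentified never makes it into the count.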

Worried about just how often AI systems were overlooking minorities, Brendan O’Connor, an assistant professor at the University of Massachusetts, Amherst, teamed up with one of his graduate students, Su Lin Blodgett, to investigate how these programs handle Twitter language. The pair collected over 59 million tweets, using demographic filtering to ensure the sample contained missives from Black Twitter.

They then ran these tweets through several natural language processing programs.

And what did they find?

Well, they found that the programs struggled. One program told the researchers, with absolute confidence, that Black Twitter tweets exclusively in Danish.
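The article doesn’t name the specific tools the team used, so here’s only a sketch of that kind of check, with langid.py standing in as an off-the-shelf language identifier and the sample lines invented for illustration. Short, informal text is exactly where these models tend to wobble.

```python
# A sketch of the kind of check the researchers ran: ask an off-the-shelf
# language identifier what language a tweet is in. langid.py (pip install langid)
# is used here only as a stand-in; the article doesn't say which tools the team used.
import langid

samples = [
    "The committee will vote on the proposal next Tuesday.",   # formal English
    "he stay talkin bout stuff he dont kno nothin about lol",  # invented informal/vernacular line
    "yall not finna act like this aint a problem",             # invented informal/vernacular line
]

for text in samples:
    lang, score = langid.classify(text)  # returns (language code, raw log-probability score)
    print(f"{lang:>3}  {score:8.2f}  {text}")

# Whether a given line gets tagged 'en', 'da', or something else depends on the
# tool and its training data -- the point is that short, non-standard text is
# where identifiers are most likely to go wrong.
```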

“If you analyze Twitter for people’s opinions on a politician and you’re not even considering what African Americans are saying or young adults are saying, that seems problematic,” O’Connor said, in response to the results.

The professor feels that the results show that organizations that use these tools need to be mindful of their shortcomings.

“If you purchase a sentiment analyzer from some company, you don’t even know what biases it has in it,” O’Connor said. “We don’t have a lot of auditing or knowledge about these things.”
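That kind of audit doesn’t have to be fancy. Here’s a minimal sketch of one, assuming you just want a first look: NLTK’s VADER analyzer stands in for “a sentiment analyzer from some company,” and the paired sentences (same opinion, standard versus vernacular spelling) are invented for illustration.

```python
# A rough, do-it-yourself audit of a sentiment analyzer for dialect gaps.
# VADER (from NLTK) is used only as a stand-in for whatever analyzer you might
# have purchased; the paired sentences are invented for illustration.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time download of VADER's lexicon
sia = SentimentIntensityAnalyzer()

# Each pair expresses roughly the same opinion in standard spelling and in
# informal/vernacular spelling.
pairs = [
    ("This new phone is really great, I love it.",
     "this new phone hittin fr, i love it"),
    ("That decision was terrible and unfair.",
     "that decision was straight trash, not fair at all"),
]

for standard, vernacular in pairs:
    s = sia.polarity_scores(standard)["compound"]    # compound score in [-1, 1]
    v = sia.polarity_scores(vernacular)["compound"]
    print(f"standard:   {s:+.3f}  {standard}")
    print(f"vernacular: {v:+.3f}  {vernacular}")
    print(f"gap:        {s - v:+.3f}\n")

# Large, consistent gaps on meaning-matched pairs would be one sign that the
# analyzer handles one way of writing better than the other.
```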

The team fears that these biases could cause problems as we grow more reliant on AI to make decisions.

In fact, one AI has already come under fire. Called COMPAS, it is used to help decide whether prisoners ought to be granted parole.

One study found that COMPAS incorrectly judged black inmates as unsuitable for parole more often than it did white inmates.
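The disparity behind a finding like that boils down to comparing error rates across groups. Here’s a toy sketch of that arithmetic; the records are made up purely to show the calculation and are not data from the COMPAS study.

```python
# A toy illustration of the metric behind findings like this one: comparing
# false positive rates (people wrongly flagged as high risk) across groups.
# The records below are invented to show the arithmetic; they are NOT data
# from the COMPAS study.
from collections import defaultdict

# (group, flagged_high_risk_by_tool, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  True),  ("A", True,  False), ("A", False, False),
    ("B", True,  True),  ("B", False, False), ("B", True,  False), ("B", False, False),
]

false_pos = defaultdict(int)   # flagged high risk but did not reoffend
negatives = defaultdict(int)   # everyone who did not reoffend

for group, flagged, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if flagged:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"group {group}: false positive rate = {rate:.0%}")

# If one group's false positive rate is consistently higher, the tool is making
# its mistakes unevenly -- which is the kind of disparity the study reported.
```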

Although the authors of the COMPAS study and the O'Connor team believe their data show that algorithms can be unfairly biased, Stanford University assistant professor Sharad Goel isn't so sure zeros and ones are to blame.

Goel feels that algorithms produce accurate outputs, but that societal biases are to blame for what seems to be discrimination. “It’s better to describe what an algorithm is doing, the reason it’s doing it, and then to decide if that’s what we want it to do,” he said.

The AI field is still pretty new, but we hope researchers get to the bottom of this soon!