Black people on Twitter have expressed outrage after multiple experiments showed the website has a tendency to crop out the faces of Black people and spotlight the faces of white people, according to The Verge and the BBC.

But Twitter is not the only site with the issue.

Other people have spotted similar problems with Zoom, and the racial bias baked into both platforms' algorithms points to a larger problem with implications that reach far beyond those two companies.

Last week, a Twitter user noticed something strange about the way images are previewed in tweets: before you actually click on a tweet, Twitter shows you a cropped preview of the image. The user showed that no matter how a photo containing both a white face and a Black face was arranged, Twitter always chose the white face for the preview and left out the Black face.

In a sad bit of irony, the racial bias problem with Twitter's algorithm was discovered when someone was trying to show the racial bias in Zoom's algorithm. 

Colin Madland tweeted about problems he was having on Zoom calls where, if a virtual background was used, Zoom's facial recognition would not show his Black colleague's face.

When he tried to tweet about the problem, Twitter's algorithm did the same thing, focusing on his face and cropping out the face of his colleague.

The next day, another Twitter user experimented with the issue using the faces of Barack Obama and Mitch McConnell.

No matter the order or size of the photos, Twitter always put McConnell's face in the preview. People in the comments defended Twitter by noting that if you lighten Obama's face, his image is shown in the preview.

But that only reinforces the original tweet's point: the Twitter algorithm consistently chooses the "lighter" photo for the preview.

A number of theories circulated about why Twitter's algorithm did this.

One user noted that it even happens with two photos of Michael Jackson.

Even when a tweet included multiple photos of a Black person, like Obama, the Twitter algorithm still chose the white face for the preview.

Twitter responded to the controversy, writing that it had "tested for bias" but never found any.

Others suggested the behavior could have multiple causes.

The problem even showed up with cartoons.

Twitter drew a lot of fanfare in 2018 when it announced that it was using machine learning to automatically crop photos on the site. In a lengthy blog post, the company said the tool would automatically crop photos to focus on "saliency," which it described as the most interesting part of a photo.

Twitter researcher Lucas Theis and machine learning lead Zehan Wang wrote that they used academic eye-tracking studies to figure out what part of a photo people's eyes generally focus on first.

“This data can be used to train neural networks and other algorithms to predict what people might want to look at,” the two wrote.
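Twitter has not published its production cropping code, but the general approach described in the blog post can be sketched roughly as follows. This is an illustrative sketch only: the saliency model here is a hypothetical stand-in (a simple brightness heuristic rather than a trained neural network), and the sliding-window search is just one way to pick the "most salient" crop.

```python
# Rough sketch of saliency-based auto-cropping (not Twitter's actual code).
# Idea: score every pixel for predicted "interestingness", then keep the
# fixed-size crop window whose summed saliency is highest.

import numpy as np

def fake_saliency_map(image: np.ndarray) -> np.ndarray:
    """Stand-in for a learned saliency model: brighter pixels score higher."""
    return image.mean(axis=2) / 255.0  # H x W scores in [0, 1]

def best_crop(image: np.ndarray, crop_h: int, crop_w: int) -> tuple:
    """Return the (top, left) corner of the crop window with the most saliency."""
    saliency = fake_saliency_map(image)
    h, w = saliency.shape
    best_score, best_pos = -1.0, (0, 0)
    for top in range(0, h - crop_h + 1, 8):        # stride of 8 keeps the search cheap
        for left in range(0, w - crop_w + 1, 8):
            score = saliency[top:top + crop_h, left:left + crop_w].sum()
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos

# Example: crop a random 400x600 "image" down to a 100x200 preview.
img = np.random.randint(0, 256, size=(400, 600, 3), dtype=np.uint8)
top, left = best_crop(img, crop_h=100, crop_w=200)
preview = img[top:top + 100, left:left + 200]
```

The concern raised by users is exactly what a setup like this makes possible: whatever patterns the saliency model learned from its training data, including any bias toward lighter faces, directly determine which face ends up in the preview.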

In response to the recent complaints, Twitter Chief Technology Officer Parag Agrawal said it was good that people spotted the problem and vowed to fix it.

But Twitter's chief design officer, Dantley Davis, drew even more criticism when he defended the site's process, writing that Madland's original problems were occurring "because of the contrast with his skin."

"I know you think it's fun to dunk on me – but I'm as irritated about this as everyone else. However, I'm in a position to fix it and I will. It's 100% our fault. No-one should say otherwise," he later said.

While this may seem like a fairly mundane problem, it has roots that are already causing societal harm. The episode highlights a bigger issue facing tech firms: a lack of diversity leaves blind spots in technology that are only noticed once it's deployed.

Facial recognition software is now used widely by police forces and armies across the world despite dozens of studies showing that it misidentifies the faces of Black people, particularly Black women, at far higher rates.

People have already been unfairly arrested due to mistakes made by facial recognition software, and there are dozens of other instances where algorithms with inherent racial bias were deployed with devastating consequences. 

"Predictive policing" software is being rolled out despite outrage from scientists who say the technology is not ready. Other studies have proven that Black people were discriminated against by an algorithm widely used in U.S. hospitals to allocate health care to patients.

In 2016, ProPublica released a groundbreaking investigation of software used in U.S. courts to "predict" recidivism, showing that it overwhelmingly discriminated against Black people.