Black people on Twitter have expressed outrage after multiple experiments showed the website has a tendency to crop out the faces of Black people and spotlight the faces of white people, according to The Verge and BBC.
But Twitter is not the only site with the issue.
Other people have spotted similar problems with Zoom, and the racial bias expressed through both platforms' algorithms highlights a larger problem that has implications reaching far beyond just Twitter and Zoom.
Last week, a Twitter user noticed something strange about the way images are previewed in tweets. Before you actually click on a tweet, Twitter shows you a cropped preview of any attached image. One enterprising user showed that no matter how he arranged a photo containing both a white face and a Black face, Twitter always put the white face in the preview and left out the Black face.
In a sad bit of irony, the racial bias problem with Twitter's algorithm was discovered when someone was trying to show the racial bias in Zoom's algorithm.
Colin Madland tweeted about problems he was having on Zoom calls where, if a virtual background was used, Zoom's face detection would erase his Black colleague's face.
Turns out @zoom_us has a crappy face-detection algorithm that erases black faces…and determines that a nice pale globe in the background must be a better face than what should be obvious.
— Colin Madland (@colinmadland) September 19, 2020
He tried to tweet about the problem, but Twitter's algorithm did the same thing, focusing on his face and not on the face of his colleague.
Geez…any guesses why @Twitter defaulted to show only the right side of the picture on mobile? pic.twitter.com/UYL7N3XG9k
— Colin Madland (@colinmadland) September 19, 2020
The next day, another Twitter user experimented with the issue using the faces of Barack Obama and Mitch McConnell.
Trying a horrible experiment…
Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama? pic.twitter.com/bR1GRyCkia
— Tony “Abolish (Pol)ICE” Arcieri (@bascule) September 19, 2020
No matter what order or size he put the photos in, Twitter always put McConnell's face in the preview. People in the comments defended Twitter by noting that if you lighten Obama's face, his image is shown in the preview instead.
But that only reinforces the original tweet's point: the Twitter algorithm consistently chooses the "lighter" photo for the preview.
There were a number of theories out there about why Twitter's algorithm did this.
There’s another thread out there: basically twitter runs a face detection algorithm when picking how to crop/preview a photo.
It systemically considers white faces to be “more” of a face than black faces. So if you have both in a photo it’s more likely to preview the white one.
— Richard Schneeman Stay Inside || protest cops (@schneems) September 19, 2020
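If that theory is right, the bias would not need to be written anywhere explicit in the code. Here is a minimal, hypothetical sketch of the selection step Schneeman describes, not Twitter's actual code, with `detections` standing in for the output of whatever face detector is in use:

```python
def pick_preview_face(detections):
    """Return the bounding box of the face the detector scored highest.

    `detections` is a list of (box, score) pairs produced by some face
    detector. If the detector systematically scores lighter faces higher,
    the lighter face wins the preview every time, with no explicit rule
    about race anywhere in the code.
    """
    box, _score = max(detections, key=lambda d: d[1])
    return box

# Example: two faces found, one scored 0.91 and one 0.62 by the detector.
faces = [((10, 40, 120, 120), 0.91), ((300, 38, 118, 122), 0.62)]
print(pick_preview_face(faces))  # -> (10, 40, 120, 120)
```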
One user noted that it even happens with two photos of Michael Jackson.
Happens with Michael Jackson too…… pic.twitter.com/foUMcExS2P
— carter (@gnomestale) September 19, 2020
Even when you put multiple photos of a Black person, like Obama, into the image, the Twitter algorithm still chooses the white face.
I wonder what happens if we increase the number of Obamas. pic.twitter.com/sjrlxjTDSb
— Jack Philipson (@Jack09philj) September 19, 2020
Twitter has responded to the controversy, writing that they "tested for bias" before shipping the model but found no evidence of it.
We tested for bias before shipping the model & didn't find evidence of racial or gender bias in our testing. But it’s clear that we’ve got more analysis to do. We'll continue to share what we learn, what actions we take, & will open source it so others can review and replicate.
— Twitter Comms (@TwitterComms) September 20, 2020
Some said there are multiple reasons why Twitter's algorithm does this.
Excellent example of racial bias in image AI.
Discussion includes multiple reasons: darker pixels inherently harder to resolve than lighter, but also @Twitter homogeneous team composition certainly implicated.
We must do better in #AI
#DataScience & image analysis.
— Felicity Enders (@FelicityEnders) September 20, 2020
The problem even showed up with cartoons.
I wonder if Twitter does this to fictional characters too.
Lenny Carl pic.twitter.com/fmJMWkkYEf
— Jordan Simonovski (@_jsimonovski) September 20, 2020
Twitter got a lot of fanfare in 2018 when they announced that they were using machine learning to automatically crop photos on the site. In a lengthy blog post, they said the tool would automatically crop photos to focus on "saliency," which they explained was the most interesting part of a photo.
Google Brain researcher Lucas Theis and machine learning lead Zehan Wang said they used academic studies to figure out what part of a photo eyes generally focus on first.
“This data can be used to train neural networks and other algorithms to predict what people might want to look at,” the two wrote.
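That description suggests a pipeline along these lines: a model trained on eye-tracking data predicts a saliency map for the image, and the crop is chosen to keep the most salient region in frame. The following is a minimal illustrative sketch under that assumption, not Twitter's actual code; it assumes the saliency map has already been produced by some trained model.

```python
import numpy as np

def crop_to_saliency(image, saliency, crop_h, crop_w):
    """Crop `image` to (crop_h, crop_w), centered on the saliency peak.

    `image` is an (H, W, C) array and `saliency` an (H, W) map in which
    higher values mark regions a viewer's eyes are predicted to land on
    first, i.e. the output of whatever saliency model has been trained.
    """
    img_h, img_w = image.shape[:2]

    # Find the single most salient pixel; a production system might
    # instead maximize total saliency inside the crop window.
    peak_y, peak_x = np.unravel_index(np.argmax(saliency), saliency.shape)

    # Center the crop on the peak, clamped so it stays inside the image.
    top = min(max(peak_y - crop_h // 2, 0), img_h - crop_h)
    left = min(max(peak_x - crop_w // 2, 0), img_w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

The point of the sketch is that whatever the upstream model scores as most salient determines who stays in frame, so any bias learned from the training data carries straight through to the crop.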
In response to the recent complaints, Twitter Chief Technology Officer Parag Agrawal welcomed the public testing and said the model needs continuous improvement.
This is a very important question. To address it, we did analysis on our model when we shipped it, but needs continuous improvement.
Love this public, open, and rigorous test — and eager to learn from this. https://t.co/E8Y71qSLXa
— Parag Agrawal (@paraga) September 20, 2020
But Twitter's Chief Design Officer Dantley Davis drew even more criticism when he defended the site's process, writing that Madland's original problems were occurring "because of the contrast with his skin."
"I know you think it's fun to dunk on me – but I'm as irritated about this as everyone else. However, I'm in a position to fix it and I will. It's 100% our fault. No-one should say otherwise," he later said.
While this may seem like a fairly mundane problem, it is rooted in something that is already causing societal harm. It highlights a bigger issue facing tech firms: a lack of diversity leaves blind spots in technology that are only noticed once it's deployed.
Facial recognition software is now used widely by police forces and armies across the world despite dozens of studies showing that it is far less accurate at identifying the faces of Black people, particularly Black women.
People have already been unfairly arrested due to mistakes made by facial recognition software, and there are dozens of other instances where algorithms with inherent racial bias were deployed with devastating consequences.
"Predictive policing" software is being rolled out despite outrage from scientists who say the technology is not ready. Other studies have proven that Black people were discriminated against by an algorithm widely used in U.S. hospitals to allocate health care to patients.
In 2016, ProPublica released a groundbreaking investigation of software used in U.S. courts to "predict" recidivism, showing that it overwhelmingly discriminated against Black defendants.