In the digital age, the tech world continues to evolve, making invaluable contributions to humanity. However, with every technological advancement, there remains a tendency for personal bias and prejudice to be built in.

Societal ills end up embedded in the technology we use because the people developing it bring their own biases and limited perspectives. Algorithm-driven artificial intelligence can create barriers to information, depending on who is using the system. This has become a worry for many, including officials in Washington.

Democrats on Capitol Hill have demanded that the problem of algorithmic bias be addressed. On April 10, Senators Cory Booker (D-NJ) and Ron Wyden (D-OR), along with Rep. Yvette D. Clarke (D-NY), introduced the Algorithmic Accountability Act, which would require companies to “study and fix flawed computer algorithms that result in inaccurate, unfair, biased, or discriminatory decisions impacting Americans.”

Booker, a presidential hopeful, delivered a personal speech on the matter.

"50 years ago my parents encountered a practice called 'real estate steering’; where Black couples were steered away from certain neighborhoods in New Jersey. With the help of local advocates and the backing of federal legislation they prevailed. However, the discrimination that my family faced in 1969 can be significantly harder to detect in 2019: houses that you never know are for sale, job opportunities that never present themselves, and financing that you never become aware of — all due to biased algorithms,” Booker said.

According to Booker’s website, the Algorithmic Accountability Act would:

  • Authorize the Federal Trade Commission (FTC) to create regulations requiring companies under its jurisdiction to conduct impact assessments of highly sensitive automated decision systems. This requirement would apply both to new and existing systems.
  • Require companies to assess their use of automated decision systems, including training data, for impacts on accuracy, fairness, bias, discrimination, privacy and security.
  • Require companies to evaluate how their information systems protect the privacy and security of consumers' personal information.
  • Require companies to correct any issues they discover during the impact assessments.

With growing public awareness of the limitations technology can bring, this bill is timely. Various reports have alleged racial bias in facial recognition and self-driving vehicles: Vox reported that self-driving cars are worse at detecting darker-skinned pedestrians, and M.I.T.’s Media Lab found that facial recognition software had up to 99 percent accuracy on images of lighter-skinned men, with accuracy dropping considerably for darker-skinned men and women.

It gets deeper.

Three years ago in Arkansas, physically disabled and elderly Medicaid recipients were grouped into a waiver program called ARChoices, which used an algorithm to determine the number of weekly home care hours each member would receive. Recipients were sorted into “resource utilization groups” (RUGs) and assessed.

The algorithm drastically decreased home care hours for many recipients. The state of Arkansas was sued after patients “suffered difficulties with cleanliness, were required to remain in waste/soiled clothing, had increased fear of falling, suffered worsened conditions and/or went without food due to the lack of help preparing meals, among other harms described,” as detailed in Pulaski County Circuit Judge Wendell Griffen’s order. In 2018, Griffen ordered the state to stop using this method of determining care hours.

Recently, the Department of Housing and Urban Development charged Facebook with allowing advertisers to discriminate based on race, religion, and disability status, a violation of the Fair Housing Act. Last year, Amazon reportedly shut down an automated recruitment tool that discriminated against women.

These missteps happen when there is nothing in place to check AI’s prejudice.

"Algorithms shouldn't have an exemption from our anti-discrimination laws. [The Algorithmic Accountability Act] recognizes that algorithms have authors, and without diligent oversight, they can reflect the biases of those behind the keyboard. By requiring large companies to not turn a blind eye towards unintended impacts of their automated systems, the Algorithmic Accountability Act ensures 21st Century technologies are tools of empowerment, rather than marginalization, while also bolstering the security and privacy of all consumers,” Rep. Clarke said.

AI is meant to make lives easier, but whose lives truly benefit from it? That is the recurring question.
