An Ethiopian-American woman who worked as the co-leader of Google’s Ethical Artificial Intelligence team, researching facial recognition bias against Black people, said she was fired from her job after criticizing the company over its treatment of minority employees.

Timnit Gebru took to Twitter on Wednesday to tell her side of the story, saying she was fired by the head of Google’s AI division.

"I was fired by Jeff Dean for my email to Brain women and Allies," she wrote, referring to a mailing group for company researchers. "My corp account has been cutoff. So I've been immediately fired."

According to Platformer, Gebru became frustrated after working on a research paper that was never published because of opposition from her bosses at Google. The researcher then expressed her dismay in an email to the employee resource group Brain Women and Allies, detailing the struggles she had experienced as a Black leader working on ethics research and voicing concern about a seemingly bleak future for underrepresented minorities at the company.

"Stop writing your documents because it doesn’t make a difference," the former Google employee wrote in the email. "Your life gets worse when you start advocating for underrepresented people, you start making the other leaders upset when they don’t want to give you good ratings during calibration. There is no way more documents or more conversations will achieve anything. We just had a Black research all hands with such an emotional show of exasperation. Do you know what happened since? Silencing in the most fundamental way possible."

Platformer also obtained a copy of an internal email that Dean sent to Google employees after Gebru separated from the company. The Google executive wrote in his email that the research paper was rejected because "it ignored too much relevant research."

"For example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies," he wrote. "Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues."

Dean said Gebru listed several demands for the company after her paper was rejected, including pressing executives to reveal whom they had "spoken to and consulted as part of the review of the paper and the exact feedback."

"Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date," the head of Google’s AI division wrote. "We accept and respect her decision to resign from Google."