Dr. Timnit Gebru and AI Ethics

Researching Dr. Gebru’s works for her upcoming talk in Middlebury.
Author

Jay-U Chung

Published

April 19, 2023

Dr. Timnit Gebru is…

Dr. Timnit Gebru is an acclaimed computer scientist whose work focuses on algorithmic bias and fairness in computing. She has worked as an AI researcher at Microsoft and co-led Google's Ethical AI team. Among her most notable works is the Gender Shades project, which showed that commercial facial analysis systems from IBM, Microsoft, and Face++ all demonstrated systematic bias, performing substantially worse on darker-skinned people and worst of all on darker-skinned women. Another prominent paper, unpublished at the time, was "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?", which examined how natural language processing models (clearly a hot topic then and now) have been driven to become ever larger and more powerful without scrutinizing the data used to train them. In particular, these models are trained on text scraped from the internet, which includes white supremacist, misogynistic, ageist, and other hateful views (https://time.com/6132399/timnit-gebru-ai-google/). This is deeply problematic given that, as ProPublica's reporting and Gender Shades both showed, biased training data has very real negative consequences for marginalized groups. The paper caused enough discontent at Google that the company pressured her to retract it or remove the Google co-authors' names, and fired her shortly after. As a personal aside, this reflects badly on Google: Gebru's concern was precisely that the race among companies to develop AI prioritizes growth over ethics, and her firing proves the point.
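
To make concrete what "systematic bias" means here, below is a minimal sketch of the kind of disaggregated evaluation that Gender Shades popularized: instead of reporting one aggregate accuracy, a classifier's error rate is computed separately for each intersectional subgroup. The records and values are hypothetical, invented purely for illustration; this is not Gender Shades' actual data or code.

```python
from collections import defaultdict

# Hypothetical audit records for a gender classifier, in the spirit of
# Gender Shades: (predicted_gender, true_gender, skin_type). All values
# here are invented for illustration, not real audit data.
records = [
    ("male", "male", "lighter"),
    ("female", "female", "lighter"),
    ("female", "female", "darker"),
    ("male", "female", "darker"),   # misclassification
    ("female", "male", "darker"),   # misclassification
    ("male", "male", "lighter"),
]

# Tally correct / total predictions per intersectional subgroup
# (skin type x true gender).
totals = defaultdict(int)
correct = defaultdict(int)
for pred, true, skin in records:
    group = (skin, true)
    totals[group] += 1
    correct[group] += int(pred == true)

# A single aggregate accuracy can hide large gaps between subgroups;
# disaggregated reporting makes those gaps visible.
for group, n in sorted(totals.items()):
    print(f"{group}: accuracy = {correct[group] / n:.2f} (n={n})")
```

On this toy data the aggregate accuracy is about 0.67, which sounds tolerable, but the per-subgroup breakdown immediately shows the errors concentrating on the darker-skinned groups. This is exactly the pattern Gender Shades documented in deployed commercial systems.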

This next part is mostly my own interpretation. I think she is important as a researcher because she has exposed uncomfortable truths about how Big Tech, academia, and governments perpetuate social inequality. She has legitimized the field of social responsibility in computing by showing concretely that some of the most important tech projects exhibit systematic racial or gender bias. She has also spoken out about her own experiences with injustice: personal incidents of racial harassment from police and colleagues, the culture of sexual harassment and negligence in companies and at academic conferences, and the striking lack of people of color (especially Black people, and Black women in particular) at tech conferences. In short, the people in charge of the largest AI projects, the ones with the strongest influence on our lives, are not very responsible, belonging to a culture of misogyny and racism that their algorithms also exhibit. She has pushed to make the space more equitable by founding projects such as Black in AI and DAIR (the Distributed AI Research Institute), which seeks to do independent, community-rooted AI research that puts ethical concerns first. In this way, her standing in the field is also a great inspiration to others and proof that it is possible for marginalized people to achieve success and combat inequity in academia, in industry, and in society at large.

Summarizing Dr. Gebru’s Tutorial on Fairness, Accountability, Transparency, and Ethics (FATE) in Computer Vision

Dr. Gebru’s talk was on algorithmic bias in computer vision. She notes that algorithms cannot currently be vetted: there is no regulation on what constitutes an acceptable algorithm, despite the great harm these systems can pose to society. As a concrete example, she points to the Baltimore police’s use of facial recognition to identify and arrest people at the Freddie Gray protests, an undemocratic action that suppresses political participation. One danger she sees is that computer science abstracts people away from research problems, ignoring the social context in which algorithms operate. This goes deeper than making data fairer or more inclusive, since more inclusive data does not negate the harmful impacts an algorithm like facial recognition can have - whether identifying protesters, serving gender-stereotyped advertising, or reinforcing the view that gender is something other than a social construct.

Data is often collected without the consent of individuals, as with IBM’s Diversity in Faces facial recognition dataset, and often does not benefit the people it comes from, as when Chinese “Big Brother” surveillance technology was trained on the faces of Zimbabwean citizens. Law enforcement even collects data in ways that harm immigrant and Black and brown communities. Another issue is automation bias: people trust algorithms and robots more than they should, because they do not question them or know how they work. A final point she makes is that the people working on computer vision are themselves not diverse. Even ethics boards, such as Stanford’s, are mostly white and work to marginalize others.

I think everyone needs to understand that computer vision algorithms can be incredibly harmful to marginalized groups, and that computer scientists need to be keenly aware of structural biases and take responsibility for the impact of their work, intended or not.

Question for Dr. Gebru

Dr. Gebru pushes back against Big Tech’s rosy view of the benefits of AI, which ignores the damage it does to marginalized communities. What can people do to change this? She has spoken about harmful ideologies like longtermism, and about possible remedies: giving workers more rights so they can prioritize ethics over deadlines, centering ethics in computing education, and government regulation. Can these incremental changes stop the harm, or does it ultimately come down to an individual’s decision not to participate in harmful systems?