Writing in the Financial Times: In search of a feminist AI

Forget discrimination and gender inequality, and get ready for artificial intelligence, which will make fair and equal decisions for us!

Or can we?

The speed of technological development around us is breathtaking. Artificial intelligence is no longer science fiction but part of our everyday lives. Be it the personalized recommendations of a music app or the news in your social media stream, behind the scenes a huge amount of data is analyzed to drive AI-based decisions that determine what your services look like. We are also not far from the point at which AI will pre-process all our interactions with both businesses and public authorities – plenty of such applications are already in use. Does this mean that we can say goodbye to structural discrimination?

Unfortunately not – rather the opposite. We are currently at risk of moving towards a digital world in which AI applications reflect or even reinforce our societal biases and discrimination. AI has great potential to improve our everyday lives, and reinforcing existing biases is hardly the aim of its developers. But there is a structural problem that runs deeper: who develops these applications, on what data they are built and for whom they are designed.

The underrepresentation of women in STEM (science, technology, engineering and mathematics) is a well-established fact. In relation to AI, the problem is rather straightforward: to ensure that AI applications work for our entire society, their developers should have diverse backgrounds, representing minorities and groups underrepresented in the field, not least the female half of the population. That is currently not the case, and it has detrimental effects. We know that overlooking the needs of people who do not fit the white male standard is dangerous. Women are 17% more likely to die in a car crash because seatbelts are not designed for them.[1] Similar issues exist with many technological innovations, including AI. Voice-controlled software is 70% less likely to recognize female voices, and AI-based medical analysis may fail to recognize a heart attack in a woman because her symptoms are interpreted as signs of “depression”! Ensuring a diversity of backgrounds in AI development is thus precisely what European policymakers should take to heart.

Another problem requiring a feminist approach is biased data. AI learns from data, and the data we collect reflects the biases inherent in today’s world. Take a hypothetical AI recruitment application: if the data shows that men are more often hired into management positions than women, or that people whose names indicate a minority ethnic background get fewer job interviews, the AI will replicate this discrimination – unless the developers address the bias in time, as the sketch below illustrates. Yet the data that would reveal such biases is often not even collected. Systematic data collection, and the use of gender-disaggregated data, are therefore crucial for AI development.
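To make the mechanism concrete, here is a minimal sketch, not from the article itself, of how a model trained on historically biased hiring decisions reproduces them. All data below is synthetic and the feature names are hypothetical; the point is simply that a standard classifier, shown records in which women were hired less often at equal experience, will score an identically qualified female candidate lower.

```python
# Minimal sketch (synthetic data, hypothetical features): a classifier
# trained on historically biased hiring outcomes learns the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic candidates: both genders equally qualified on average.
experience = rng.normal(5, 2, n)      # years of experience
is_female = rng.integers(0, 2, n)     # 1 = female, 0 = male

# Biased historical labels: past recruiters hired women less often
# even at the same experience level (the -1.2 term is the bias).
logit = 0.8 * (experience - 5) - 1.2 * is_female
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# Naive training on the raw records, gender included as a feature.
X = np.column_stack([experience, is_female])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical experience, differing only in gender.
print(model.predict_proba([[5.0, 0], [5.0, 1]])[:, 1])
# The female candidate receives a markedly lower hiring score: the model
# has learned the historical discrimination, not merit.
```

Note that simply deleting the gender column rarely fixes this, because other features can act as proxies for it – and without gender-disaggregated data, the bias cannot even be measured, which is precisely why data collection matters here.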

These fundamental issues must be considered both in the development of AI applications and in setting the EU’s framework for regulating artificial intelligence. The search for a feminist AI is not yet a lost cause. We believe that improving female involvement in AI development, and establishing equality and non-discrimination as fundamental principles for artificial intelligence, are among the most important feminist objectives of the 2020s.

Miapetra Kumpula-Natri, Member of the European Parliament, 1st Vice-Chair of Special Committee on Artificial Intelligence in a Digital Age

Evelyn Regner, Member of the European Parliament, Chair of the Committee on Women’s Rights and Gender Equality
