Coded Bias – On Discriminatory Algorithms and the Need to Ban Biometric Mass Surveillance Technologies
Film screening and panel debate (from 20:45) - Register here!
The Greens/EFA campaign against biometric mass surveillance is organizing a screening of Shalini Kantayya’s award-winning documentary “Coded Bias” followed by a panel debate (from 20:45) on algorithmic biases and how they are being addressed in the Artificial Intelligence Regulation.
The hybrid event will take place on 28 September at the Palace cinema in Brussels and online, allowing participants to join the screening and the conversation from the comfort of their homes.
Register below to receive free access to the film. You can watch it on Tuesday 28 September from 6.00h until midnight; we recommend starting no later than 19.00h, so that you finish in time for our panel discussion at 20.45h.
MEPs Patrick Breyer, Saskia Bricmont, Gwendoline Delbos-Corfield, Kim van Sparrentak and Tineke Strik will be joined by key stakeholders and policy-makers working on the Artificial Intelligence Regulation to discuss a number of key aspects of the proposed legislation.
We are very pleased to announce the following panelists:
- Shalini Kantayya, Director of ‘Coded Bias’ - via virtual attendance from U.S.
- Brando Benifei, MEP, Rapporteur on the Artificial Intelligence Regulation (S&D) – via video statement
- Wojciech Wiewiórowski, European Data Protection Supervisor
- Kim van Sparrentak, MEP (Greens/EFA)
- Irina Orssich, Team Leader for Artificial Intelligence, DG CNECT, European Commission
- Ella Jakubowska, Campaigner and Coordinator of the European Citizens' Initiative "Reclaim Your Face" (European Digital Rights)
We invite you to join the debate on the impact and built-in biases of algorithmic decision-making technologies by attending our screening and high-level panel discussion on the fundamental rights impact of biometric mass surveillance technologies and on the EU's approach to regulating algorithmic decision-making in the proposed Artificial Intelligence Regulation.
What is biometric mass surveillance?
Biometric mass surveillance is the monitoring, tracking, and otherwise processing of the biometric data of individuals or groups in an indiscriminate or arbitrarily targeted manner. Biometric data includes highly sensitive data about our bodies and behaviour. When used to scan everyone in public or publicly accessible spaces (a form of mass surveillance), biometric processing violates a wide range of fundamental rights.
Biometric surveillance technologies have the potential to fundamentally change our societies by fuelling pervasive mass surveillance and discrimination. With her chilling exploration of the built-in bias of machine-learning technologies, Shalini Kantayya’s ‘Coded Bias’ reveals how algorithms can perpetuate existing class, race and gender discrimination.
At the same time, more and more people are standing up against the deployment of these technologies. In the United States, lawmakers have already started to impose bans on some of the most invasive forms of algorithmic decision-making software, namely facial recognition technologies.
In the European Union, on the other hand, governments are beginning to experiment with facial recognition systems and other biometric mass surveillance technologies in public spaces. With the upcoming Artificial Intelligence Regulation, the European Union has the chance to safeguard our fundamental rights and to ban biometric surveillance technologies that magnify the discrimination that women, people of colour and other marginalised groups already face today.