Reinforce rights, not racism: Why we must fight biometric mass surveillance in Europe

Gwendoline Delbos-Corfield, Greens/EFA MEP, in conversation with Laurence Meyer (Digital Freedom Fund)


What is biometric mass surveillance?

Biometric mass surveillance is the monitoring, tracking, and otherwise processing of the biometric data of individuals or groups in an indiscriminate or arbitrarily targeted manner. Biometric data includes highly sensitive data about our bodies or behaviour. When used to scan everyone in public or publicly accessible spaces (a form of mass surveillance), biometric processing violates a wide range of fundamental rights.

Mass surveillance / © Tobias Tullius on Unsplash

Gwendoline Delbos-Corfield: “Thanks for joining me today in the context of the Greens/EFA campaign to ban biometric mass surveillance in public spaces. Biometric mass surveillance is spreading fast, and we know it poses a threat to our fundamental rights. We recently had a great discussion following our online screening of the Coded Bias documentary, during which we focused on the risks and challenges of mass surveillance and algorithmic transparency. I’m happy we’re able to continue that conversation today. I’d like to focus on something we are not talking about enough: how these surveillance technologies can be discriminatory.

As the racial and social justice lead at the Digital Freedom Fund, could you tell us a bit more about how the use of biometric mass surveillance technologies can discriminate against people?

Laurence Meyer: “Thank you for inviting me to this important discussion! To answer your question, we first need to recognise that all systems of surveillance have a discriminatory impact in societies in which racism, hetero-sexism, cis-genderism, ableism, classism, and so on, are systemic. Multiple studies have shown that systemic discrimination is very real across Europe.

What we mean when we talk about systemic discrimination is that certain people are negatively impacted (and others are positively impacted) in their everyday life, due to the way they are categorised by certain attributes. This happens not only on an interpersonal level – through homophobic insults, for example – but also on a macro level: when looking for housing, when job-hunting, when in education, when crossing borders, when in contact with the police, etc. Concretely, it means that the way I am identified, according to certain criteria (skin pigmentation, how I walk, the make-up I wear or don’t wear, the shape of my nose or the width of my shoulders), has direct consequences on my access to resources.

For Western systems of facial recognition, it has been quite widely documented that the criteria used to differentiate and classify people (Is this person a man? A woman? A white woman? A black woman?) have led to the misidentification of people who are not white cis-men. Dark-skinned black women and dark-skinned non-binary persons are regularly misidentified by facial recognition technology. In some cases, they are even misidentified as monkeys. This clearly isn’t far off from the historic tropes that fuelled racist imagery.

This has highly problematic consequences when these same systems are used in education, at borders, by law enforcement, and in all areas where the problems caused by systemic discrimination have long been documented. The use of these systems can even lead to wrongful arrests.

What is biometric mass surveillance used for and who uses it?

The other problem is the criteria used to identify people in cases of mass surveillance. This could be, for example, when law enforcement officers use facial recognition technology to look for people in a public space who could match their watch list. Even if people are not misidentified, discrimination is still likely to occur because of the content of the watch list. A good example of this is the Gangs Matrix, a database developed by London’s Metropolitan Police. Many young black men ended up on this database without ever having been accused of a crime, and sometimes even after having been victims themselves. If this database were used in conjunction with a facial recognition system, it would lead to many innocent people being monitored closely, and possibly even to wrongful arrests based solely on racial criteria.

This is what we mean when we talk about the over-policing and the over-surveillance of racialised bodies.

Finally, because facial recognition systems are built on the assumption that the way we look tells you everything you need to know about a person, they reinforce problematic ways of categorising us. If we’re serious about eliminating all forms of oppression and making sure that the way we look becomes irrelevant to whether or not we can access resources, the development of these technologies is a huge step in the wrong direction.”

Gwendoline Delbos-Corfield: “You already mentioned it, but I want to come back to this important point. We know that these systems have higher inaccuracy rates for underrepresented groups such as women, people with darker skin, and other marginalised groups. In fact, a recent study demonstrated that error rates in commercial facial analysis programmes, when attempting to determine gender, were more than 34% for darker-skinned women, compared with less than 0.8% for light-skinned men. Can you tell us a bit more about why facial recognition technologies have significantly higher error rates for some groups of people?”

Laurence Meyer: “There is a short and a long answer to this question. Firstly, I would say that there are higher error rates for white women, white gender diverse persons, men of colour, women of colour and gender diverse persons of colour. A longer explanation, but one that is important to mention, is that these groups of people aren’t all misidentified at the same rate.

The short answer is that, because facial recognition systems are mostly trained on white cis-men and designed by that same demographic, they are better able to recognise those faces.

The longer answer is that these systems reproduce the way their creators understand the difference between men and women. If one thinks, consciously or not, that a man is a man or a woman is a woman because of specific bodily attributes – for example, the width of their shoulders, their size, the shape of their faces, the colour of their skin – that view tends to exclude a lot of people. If this bias is built into the algorithms used by facial recognition systems, the exclusion is amplified. These technologies reproduce systems of exclusion that predate them.

It makes me think of Sojourner Truth’s speech, “Ain’t I a Woman?”, and the historical exclusion of women of colour, and specifically of black women, from womanhood on the grounds of bodily features. It also brings to mind how disabled persons have been historically dehumanised. We have to remember that, for many people, being human doesn’t have much to do with being a man or a woman. And actually, being a man or a woman doesn’t necessarily have so much to do with physical characteristics. This issue of higher error rates concerning certain categories of people touches upon something much deeper than just a question of bias. In my view, it is the question of who is worthy of attention and identification, and who is worthy of being seen. We can draw parallels with the portraits we see in most European museums… and the portraits we don’t see.

Certain faces are overwhelmingly represented, while others are hardly seen at all. It poses the question: who gets to decide which technology is useful to all of us? How could we do better? Technological issues cannot be, and should not be, disconnected from the bigger societal picture. They do not appear ex nihilo, but are products of a certain vision of the world.”
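To make the mechanism described above a little more concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from any real facial recognition system: it simply trains a toy classifier on simulated data in which one demographic group vastly outnumbers another, and then measures the error rate for each group separately. All group sizes, feature distributions and numbers are invented purely for illustration.

```python
# Hypothetical illustration only: a toy classifier trained on demographically
# imbalanced data ends up with much higher error rates for the group that is
# under-represented in the training set.
import numpy as np

rng = np.random.default_rng(0)

def sample_group(n, centre_a, centre_b):
    # Sample n "faces" per class for one demographic group. Each group has its
    # own (invented) feature distribution, standing in for the idea that what
    # distinguishes class A from class B looks different across groups.
    X = np.vstack([
        rng.normal(centre_a, 1.0, size=(n, 2)),
        rng.normal(centre_b, 1.0, size=(n, 2)),
    ])
    y = np.array([0] * n + [1] * n)
    return X, y

# Group 1 is heavily over-represented in the training data (1000 vs 50 samples
# per class) -- a stand-in for training sets dominated by one demographic.
X1, y1 = sample_group(1000, centre_a=[0.0, 0.0], centre_b=[3.0, 0.0])
X2, y2 = sample_group(50, centre_a=[0.0, 3.0], centre_b=[0.0, 6.0])
X_train = np.vstack([X1, X2])
y_train = np.concatenate([y1, y2])

# A very simple classifier: one centroid per class, fitted on the pooled data,
# so the centroids are dominated by the over-represented group.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    # Assign each sample to the class whose centroid is nearest.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Evaluate on fresh, balanced samples from each group.
for name, (ca, cb) in {"over-represented group": ([0.0, 0.0], [3.0, 0.0]),
                       "under-represented group": ([0.0, 3.0], [0.0, 6.0])}.items():
    X_test, y_test = sample_group(500, ca, cb)
    error = (predict(X_test) != y_test).mean()
    print(f"{name}: error rate {error:.1%}")
```

Running this sketch typically prints a single-digit error rate for the over-represented group and an error rate several times higher for the under-represented one, echoing, in a deliberately simplified way, the disparities discussed in the interview.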

Gwendoline Delbos-Corfield: “I definitely agree with this. We tend to think that technology is disconnected from our societies, but the reality is that the two are interconnected. One of my biggest concerns is that citizens living in countries that have questionable records on fundamental rights and the rule of law might now see their governments using these technologies to further restrict their rights.

In Hungary, the government continues to restrict the lives of LGBTIQ+ people. If biometric technologies get into the wrong hands, autocratic governments will be able to monitor and control the lives of their political opponents and marginalised groups to an even greater degree. In Serbia, a country which Freedom House rates as only ‘partly free’, it seems as though the government has already begun deploying high-resolution cameras equipped with facial recognition technology in the city of Belgrade. If we’re not careful, Belgrade could become the first European city to be totally covered by this biometric surveillance technology. This is happening right on the doorstep of the European Union.

Yet, many people still question whether biometric mass surveillance is really an issue here in Europe.

Is biometric mass surveillance really an issue in Europe?


I often hear that mass surveillance is an American or a Chinese problem. In the U.S., there are well-documented wrongful arrests, like the case of Nijeer Parks in New Jersey. In China, there is widespread use of government surveillance as part of the social credit system.

With the European Commission’s proposal for a regulation on Artificial Intelligence now on the table, where do we stand in Europe on biometric mass surveillance? What can we learn from other countries’ contexts? And how do we raise awareness of the dangers of mass surveillance here in Europe?”

Laurence Meyer: “I think this belief that mass surveillance is a US or a Chinese issue that doesn’t concern Europe clearly points to the lack of coverage that biometric mass surveillance receives here. In the U.S., researchers, and specifically many women of colour, have published studies that journalists can use to support their investigative efforts. In Europe, studies that take an intersectional approach struggle to receive financial support.

This can give the impression that the problem doesn’t exist. This problem of visibility is particularly acute when talking about the harmful use of biometric technologies, because they can often be used without our knowledge. This is the case when facial recognition technology (FRT) films us without us knowing, or when, as happened in Sweden, law enforcement officers used a facial recognition app in their everyday work without any prior authorisation. The reality is that we don’t know the extent to which facial recognition systems are used, or whether they have already led to wrongful arrests in Europe. And that is, in and of itself, really worrying.

But we know of many cases in which FRTs were deployed without a sufficient legal framework. EDRi has compiled a pretty comprehensive list of them. This clearly shows that it is far from being just an American issue.

Another thing to add is that we already know that wrongful arrests are happening in Europe. This is particularly the case in law enforcement practices with discriminatory dimensions, such as identity checks without any substantive suspicion of wrongdoing. In France, in 2018, a French-Cameroonian man was sent to a detention facility after an identity check because he couldn’t present his ID to the officers who stopped him in the street.

The American cases show us one thing for sure: the use of biometric tools in policing doesn’t prevent wrongful arrests, nor does it prevent harmful and discriminatory treatment. These tools do, however, increase the surveillance of all of us, marking us as data to be registered, identified and categorised. This is also a European problem.

Systemic discrimination isn’t just an American problem. Mass surveillance isn’t just a Chinese problem. Biometric technologies are increasingly being used everywhere, whether we know about it or not. Biometric mass surveillance won’t magically make existing problems disappear. Instead, it will amplify them.”

Gwendoline Delbos-Corfield: “These technologies really do magnify the discrimination that women, people of colour and other marginalised groups already face in their everyday lives.


What can we do to stop biometric mass surveillance?

The Greens/EFA group are fighting for the rules on biometric mass surveillance in the EU to be tightened in the coming years. Unfortunately, we also know that this won’t necessarily be possible everywhere in the world. Already, the European Commission is funding various surveillance projects around the world, including the development of a biometric ID system in Senegal and surveillance drones in Niger. We need to make sure that EU money is not used to endanger the human rights of people outside the EU, and that other regions do not become a testing ground for these dangerous technologies.

Let me finish by saying that I really believe that now is the time for us to make a huge impact. Now is the time to stop the spread of these dangerous and discriminatory mass surveillance technologies. Here in the EU, right now, we have a real opportunity to ban biometric mass surveillance.

Thank you for your commitment to this cause, Laurence, and thank you again for joining me today.”