
Trustworthy age assurance?

A study commissioned by the Greens/EFA Cluster on Green and Social Economy

A study by
Martin Sas, KU Leuven, Centre for IT & IP Law, Leuven, Belgium
Jan Tobias Mühlberg, Université Libre de Bruxelles, Ecole Polytechnique, Brussels, Belgium


Children represent a substantial portion of internet users, which makes a safe and secure online environment imperative. Harmful content, however, remains easily accessible: in a recent survey, 19% of respondents reported exposure to pornography before the age of 13, 14% reported being threatened, and 45% reported verbal abuse online. Most providers of adult-restricted online content rely on self-declared age without further validation, which has proven ineffective and easy to bypass. Consequently, governments are urging the implementation of robust online age assurance systems to prevent children from accessing adult-restricted or otherwise harmful content. Legislation aimed at improving online child protection, e.g., the GDPR, AVMSD, DSA and age-appropriate design codes, considers age assurance a protective measure and supports its implementation online.
Age assurance measures are, however, controversial and raise concerns about their impact on the fundamental rights of both adults and children, since all internet users would need to prove they are adults to access specific content. Besides being intrusive to individuals’ privacy, this requirement can restrict their ability to freely express themselves and engage with others unless they provide personal information and have the capacity to complete an age assurance process. This particularly affects already marginalised populations who, for example, lack the means for electronic identification, or for whom facial scanning proves technically or personally impractical. Fundamental rights including privacy, data protection, non-discrimination, freedom of expression, and freedom of association are thus at risk if age assurance measures are not implemented in a proportionate, inclusive, and privacy-preserving way.
Moreover, age assurance measures may hinder children’s development by preventing them from accessing certain content or services, even though these resources could help them build skills and media literacy, especially in recognising and handling specific risks (e.g., on social media) or in navigating difficult personal situations. In this context, alternative measures of protection, such as safer algorithmic recommendations, warnings about harmful content, or panic buttons, may be better suited to support children in their exploration of the online world. Striking a fair balance between protection and empowerment is therefore crucial, and the best interest of the child should be considered when assessing the necessity of implementing age assurance. Such assessments must take into account the type of content or service, the context in which children may access it, the evolving capacities of children, and the privacy intrusion of the age assurance methods.
This study highlights the potential risks and benefits stemming from the use of age assurance technologies in the online environment, in particular regarding their impact on individuals’ fundamental rights and children’s rights. We conducted interviews with researchers, civil society organisations, providers of age assurance tools and regulatory authorities to gather insights on the readiness of age assurance technologies, the associated risks and their impact on child protection online. We synthesise the interviewees’ contributions in the form of quotations, highlighted throughout this study. Via desktop research, we reviewed the relevant legal framework and analysed literature on age assurance to identify associated risks and to assess whether these technologies could be in tension with the protection of fundamental rights. We evaluated a wide range of age assurance technologies by assessing the extent to which each technology could protect children from harmful content, as well as its potential negative impact on fundamental rights, including children’s specific rights.
We conclude that, while there is a clear need to protect children online, there is currently no age assurance method that adequately protects individuals’ fundamental rights. The risks associated with the implementation of age assurance include privacy intrusion, data leaks, behavioural surveillance, identity theft, and impeded autonomy. Moreover, while none of the methods reviewed could attest a user’s age with certainty, the implementation of such measures may exacerbate existing discrimination against already disadvantaged groups of society, likely widen the digital divide and lead to further exclusion.
Promising privacy-preserving techniques, e.g., digital identities and double-blind transmission methods, are under development. These may offer improved user privacy protection by enabling anonymous age assurance. However, important security and inclusivity risks remain. Moreover, these technologies face implementation challenges, given the current absence of a pan-European technical and legal framework to support their wide adoption.
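To illustrate the principle behind such anonymous, double-blind approaches, the following is a minimal, hypothetical sketch: a trusted issuer verifies a user’s age out of band and signs only a boolean “over 18” claim with a random nonce, so the relying service can check the claim without ever learning who the user is. All names are illustrative; real deployments would rely on public-key signatures or zero-knowledge proofs rather than a shared HMAC key, precisely so that verifiers cannot forge tokens and issuers cannot trace where tokens are presented.

```python
import hmac
import hashlib
import json
import secrets

# Hypothetical issuer key for this sketch only. Production systems would use
# public-key credentials or zero-knowledge proofs, not a key shared with verifiers.
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_token(is_over_18: bool) -> dict:
    """Issuer checks the user's age out of band, then signs only a boolean claim.
    The token carries no identity attributes; the random nonce resists linkage
    across services."""
    claim = {"over_18": is_over_18, "nonce": secrets.token_hex(16)}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_age_token(token: dict) -> bool:
    """Relying service learns only whether the claim is authentic and true,
    never the user's identity or the issuer's records."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over_18"]

token = issue_age_token(True)
print(verify_age_token(token))  # True: age attested without revealing identity
```

The sketch shows the data-minimisation idea at the heart of these proposals: the only attribute that crosses the boundary between issuer and relying service is a single boolean, which is why such schemes are sometimes described as double-blind.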
To guarantee individuals’ fundamental rights online, there is an urgent need for mandatory risk assessments including fundamental rights, data protection and children’s rights impact assessments. These must aim at striking a fair balance between children’s protection and empowerment. Additionally, a comprehensive framework of standards, certification schemes, and independent audit controls must be established to ensure the safety and trustworthiness of age assurance measures and the accountability of technology providers.
In summary, our study reveals a misalignment between the urgency with which governments are pushing for age assurance and the time needed to develop robust, safe and trustworthy age assurance technology. The primary risk lies with the adoption of assurance solutions without adequate protection of individuals’ fundamental rights, which could normalise excessive privacy intrusion and heighten the risks of data leaks and misuse across the online world.

