The Commission’s approach to age assurance: Do the DSA Guidelines on protecting minors online strike the right balance?
By Sophie Stalla-Bourdillon,
Brussels Privacy Hub, LSTS, VUB
The European Commission’s guidelines on protecting minors online take important steps toward building a comprehensive list of service design and organisational measures relevant under Article 28(1) DSA. But in doing so, they also risk oversimplifying a complex regulatory trade-off that underlies the deployment of age assurance methods. This blog post argues that the guidelines overstate the proportionality of age verification and age estimation methods, sideline key data protection concerns, and miss the opportunity to articulate the implications of a rights-based, privacy-preserving design for all users.
On 14 July 2025, the European Commission (EC) published its final guidelines to help online platforms comply with Article 28(1) of the Digital Services Act (DSA), which obliges providers to implement appropriate and proportionate measures to protect minors. The guidelines were developed following a public consultation, input from children through the Better Internet for Kids (BIK+) initiative, and close collaboration with national Digital Services Coordinators and other stakeholders, including platform providers, civil society representatives, and academic experts.
Set within the framework of a risk-based approach, the guidelines propose a range of protective measures that platforms (excluding micro and small enterprises) can implement to better protect minors online. While not exhaustive, these measures emphasise a “value by design” principle, encouraging platforms to integrate safety features into their systems from the outset. Acknowledging that different platforms present varying levels of risk to minors, the guidelines aim to promote a flexible, tailored model that allows services to adapt protections according to their specific context.
Protecting minors online has been a growing priority for EU lawmakers in recent years, as evidenced by measures such as the Audiovisual Media Services Directive (e.g., Articles 6a, 28b), which introduced specific rules in an attempt to shield children from harmful content appearing on audiovisual media and video sharing platform services. In the realm of data protection, the General Data Protection Regulation (GDPR) also recognises minors as vulnerable data subjects, specifically under Recital 75.
The DSA confirms a shift toward a more integrated, bespoke approach—one that seeks to bridge the gap between data protection and online safety for children through a unified regulatory framework for digital platforms. Article 28(4) empowers the Commission to issue guidelines in support of this objective, a step now realized with the July 2025 publication.
This blog post explores one of the thorniest issues raised in the guidelines: age assurance. Age assurance is increasingly framed in policy discussions as a practical tool to prevent underage users from accessing content and services that are unlawful, inappropriate, or harmful for their age group—such as adult content, gambling, and, more recently, social media platforms. Some Member States, like France, have been actively advocating for leveraging the DSA to justify restricting minors’ access to social media, particularly for those under 15.
This post highlights three key concerns about how the guidelines frame age assurance methods, and asks whether they strike an appropriate balance among the various rights and freedoms at stake.
- Age assurance features are subject to data protection by design and by default requirements
In the DSA, age verification is explicitly referenced only in Article 34, where it appears among the risk mitigation measures suggested for very large online platforms and search engines (VLOP/SEs).
It is, however, generally understood that age verification falls under the broader category of “appropriate and proportionate measures” that all platforms (excluding small and micro enterprises) may apply “to ensure a high level of privacy, safety, and security of minors on their service,” under Article 28(1) DSA.
This interpretation is reinforced by the EC in its guidelines, which begin their examination of in-scope service design measures with a detailed analysis of age assurance methods.
Building on the classification used in its 2024 mapping report, the EC distinguishes between three categories of age assurance techniques: self-declaration, age estimation, and age verification.
- Self-declaration refers to methods where users are prompted to provide their age or confirm an age range themselves—typically by entering a date of birth or simply clicking a button to state they are above a certain age.
- Age estimation involves techniques that allow platforms to assess a user’s likely age based on indirect indicators, helping determine whether someone falls within a particular age group or is above or below a specified threshold. These techniques could include the processing of biometric data (e.g., photo or short video to detect facial features), the analysis of quiz or puzzle responses, typing patterns, browsing behaviour, or app usage.
- Age verification, by contrast, relies on verified sources of information including hard identifiers (e.g., passport, driver’s license, national ID) to confirm a user’s age with a higher level of accuracy and reliability.
The EC then seeks to clarify the appropriate use cases for age estimation and age verification, but may offer an oversimplified analysis.
While the EC begins with the sound premise that a risk-based assessment is necessary to determine whether an age assurance method is appropriate and proportionate—i.e., whether it ensures a high level of privacy, safety, and security for minors which could not otherwise be achieved through less intrusive means—this approach is weakened by the EC’s broad, generic assertions about both age verification and age estimation.
On age verification: After some hesitation (the draft guidelines initially identified three scenarios), the EC now claims that age verification is appropriate and proportionate in four scenarios, which are not particularly easy to parse:
- when high risks to minors arise, particularly in light of EU or national law prohibiting minors from accessing certain content, e.g., websites selling alcohol, tobacco or nicotine-related products or drugs, or websites giving access to porn or gambling content;
- when, given the nature of the risks posed to minors, platforms state in their terms of service that only adult users (18 and older) are allowed to use their services;
- where the risks to minors “cannot be mitigated by other less intrusive measures as effectively as by access restrictions supported by age verification”; and
- when Union or national law, in compliance with Union law, prescribes a minimum age to access online platform services, e.g., for social media.
On age estimation: Age estimation methods are now presented either as complementary to age verification methods (to be used in addition to, or as a temporary alternative to, age verification methods, particularly when mature age verification methods are not yet readily available) or as appropriate and proportionate in two types of scenarios: 1) when, given the nature of the risks posed to minors, platforms state in their terms of service that only users above a certain minimum age (but still under 18) are allowed to access their services, and 2) when medium risks to minors arise that cannot be mitigated by less restrictive measures.
Compared to earlier guidance issued by the European Data Protection Board (EDPB) and national Data Protection Authorities—most notably France’s Commission Nationale de l’Informatique et des Libertés (CNIL)—the EC’s approach appears significantly broader in scope.
On the topic of age verification, the CNIL has been particularly clear in its warnings, stating that “making age verification widespread could lead to the creation of a closed digital world, where individuals would constantly have to prove their age—or even their identity—posing serious risks to their rights and freedoms, especially freedom of expression.” These concerns are further compounded by potential accessibility barriers, which may disproportionately affect certain users.
Furthermore, the EC’s treatment of age estimation remains relatively weak. It is true that the EC does acknowledge profiling as a potential risk for children and advises providers of online platforms to consider the EDPB’s relevant statement when evaluating age assurance methods. What is more, in-scope age estimation methods now only comprise methods “provided by an independent third party or through systems appropriately and independently audited.” However, when distinguishing age estimation from age verification, the EC states that the difference merely lies in the level of accuracy.
What the guidelines still fail to clearly state is that age estimation may, in many cases, constitute profiling—including profiling on the basis of biometric data. It could also fall under the scope of automated decision-making as governed by Article 22 GDPR. Automating age estimation at scale would inevitably result in some degree of error (granted, the EC acknowledges that a lower level of accuracy does not necessarily mean a lower impact on fundamental rights, and it does recommend that online platforms “provide a redress mechanism for users to complain about any incorrect age assessments by the provider”).
Of note, the EDPB in its comments on the draft guidelines proposed to “generally discourage the use of algorithmic age estimation because of the current high rates of false positives and negatives, and the significant degree of interference with users’ fundamental right to data protection.” A few months earlier, BEUC, the European Consumer Organisation, had already raised a series of concerns related to age estimation methods, including commercial surveillance, low accuracy and cybersecurity.
How the guidelines underplay implementation details
It is clear from decisions by both national authorities and the EDPB that broad, generic assertions about age assurance methods are problematic from a data protection perspective. In the TikTok case, for example, regulators expressed clear doubts about the effectiveness of the platform’s age assurance methods. However, they ultimately could not conclude that these techniques constituted a violation of the GDPR.
A key consideration is whether the age assurance method adheres to the data minimisation principle and complies with data protection by design and default under Article 25 GDPR, which necessitates a case-by-case assessment and careful examination of the implementation details. Another important factor, as noted earlier, is the broad scope of online services that would be obligated to implement age assurance methods beyond simple age declaration.
It is therefore problematic to make generic statements about the appropriateness and proportionality of age assurance methods without examining the specifics of their implementation.
- Age verification prototypes are not necessarily real-life solutions
This brings us to the solution for age verification promoted by the EC in its guidelines: the mini-ID wallet. The guidelines, however, do not include a summary of its timeline or of its (still evolving) technical specifications.
In autumn 2024, the EC issued a call for tenders to develop a “mini-ID wallet,” aimed at creating an “age verification solution” by the second quarter of 2025. The mini-ID wallet is presented in the guidelines as the EU age verification solution, to be made available before the EU Digital Identity Wallets are rolled out, and is described as a “solid privacy-preserving and data minimising solution [that] will aim to set a standard in terms of privacy and user friendliness.”
Two core principles usually lie at the heart of privacy-preserving age verification solutions:
- Selective disclosure holds that users should only be required to reveal the minimum amount of information necessary—in this case, just proof that they meet a required age threshold, rather than their full date of birth or identity.
- Double-blindness means that neither the platform requesting age verification nor the party verifying the user’s age should be able to link the identity of the user to their activity on the platform or the use of a particular age verifier.
To achieve these principles, an age verification system relies on the interaction of three distinct actors:
- The user, who seeks to access an online service.
- The verifier, a trusted third party that checks the user’s age.
- The platform, which receives only a confirmation (e.g., “over 16”) without accessing the user’s identity or details about the verification process.
This structure is designed to distribute trust and responsibility across different entities, minimising the risk of data misuse, including cross-site tracking.
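Because the guidelines do not spell out how such a flow works in practice, a minimal sketch may help clarify who learns what. The Python sketch below is purely illustrative: the class names, the message format and the HMAC-based attestation are my own assumptions rather than the EC’s specification, and a real deployment would rely on asymmetric signatures or ZKP credentials rather than a key shared with relying platforms.

```python
"""Illustrative, stdlib-only sketch of a double-blind age check with three
actors: user, verifier (trusted third party) and platform. All names and
message formats are hypothetical, not the EC's actual specification."""
import hashlib
import hmac
import json
import secrets
from datetime import date


class Verifier:
    """Trusted third party: sees the user's date of birth, not the platform."""

    def __init__(self) -> None:
        # A real deployment would use an asymmetric signing key (or issue
        # ZKP-friendly credentials); an HMAC key shared with relying
        # platforms keeps this sketch standard-library only.
        self._key = secrets.token_bytes(32)

    @property
    def attestation_key(self) -> bytes:
        return self._key

    def attest(self, date_of_birth: date, threshold: int) -> dict:
        """Issue a one-time attestation carrying only the boolean claim."""
        today = date.today()
        age = today.year - date_of_birth.year - (
            (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
        )
        claim = {
            "over_threshold": age >= threshold,
            "threshold": threshold,
            "nonce": secrets.token_hex(16),   # fresh per request: no reuse
        }
        payload = json.dumps(claim, sort_keys=True).encode()
        tag = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        # The verifier never learns which platform will consume this token.
        return {"claim": claim, "tag": tag}


class Platform:
    """Relying service: learns only 'over 16: yes/no', never identity or DOB."""

    def __init__(self, attestation_key: bytes) -> None:
        self._key = attestation_key
        self._seen_nonces: set[str] = set()

    def admit(self, attestation: dict, threshold: int) -> bool:
        payload = json.dumps(attestation["claim"], sort_keys=True).encode()
        expected = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, attestation["tag"]):
            return False                      # forged or tampered attestation
        claim = attestation["claim"]
        if claim["nonce"] in self._seen_nonces:
            return False                      # replayed token
        self._seen_nonces.add(claim["nonce"])
        return claim["threshold"] == threshold and claim["over_threshold"]


# The user obtains an attestation from the verifier, then presents it to the
# platform; neither party sees the other side of the interaction.
verifier = Verifier()
platform = Platform(verifier.attestation_key)
attestation = verifier.attest(date(2012, 5, 1), threshold=16)
print(platform.admit(attestation, threshold=16))   # False: the user is under 16
```

Even in this toy form, the flow shows why the verifier and the platform must each remain blind to the other’s side of the exchange: neither party alone holds enough information to reconstruct who did what where.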
Early-Stage Solutions, Not Final Fixes
To implement this model, the EC refers in the mini-ID wallet specifications to the use of zero-knowledge proofs (ZKPs)—a cryptographic protocol that allows a user to prove they meet an age requirement without revealing any other information. In essence, ZKPs make it possible to confirm a fact (such as being over a certain age) without disclosing the underlying data.
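A full age range proof is well beyond a short example, but the core intuition behind ZKPs can be illustrated with the simplest textbook construction, a non-interactive Schnorr proof of knowledge: the prover convinces a verifier that it knows a secret without ever revealing it. The sketch below is my own toy illustration with insecure demonstration parameters; it is not an age proof and bears no relation to the mini-ID wallet’s actual design.

```python
"""Toy non-interactive Schnorr proof (Fiat-Shamir transform): the prover
shows it knows a secret x with y = g^x mod p, without revealing x. This is
not an age range proof and the parameters are insecure, but it captures the
ZKP idea of confirming a fact without disclosing the underlying data."""
import hashlib
import secrets

# Tiny demonstration group: p = 2q + 1 with q prime; g generates the
# subgroup of order q. Real systems use standardised large groups or curves.
p, q, g = 23, 11, 2


def _challenge(*values: int) -> int:
    """Hash the transcript into a challenge in Z_q (Fiat-Shamir heuristic)."""
    data = ":".join(str(v) for v in values).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q


def prove(x: int) -> tuple[int, int, int]:
    """Return (y, t, s): the public value, the commitment and the response."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)      # fresh randomness for every proof
    t = pow(g, r, p)              # commitment
    c = _challenge(g, y, t)       # challenge derived from the transcript
    s = (r + c * x) % q           # response; on its own it leaks nothing about x
    return y, t, s


def verify(y: int, t: int, s: int) -> bool:
    """Accept iff g^s == t * y^c (mod p), which a valid proof always satisfies."""
    c = _challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p


secret = 7                        # the prover's hidden value
print(verify(*prove(secret)))     # True, yet the verifier never learns 7
```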
However, as pointed out by the Electronic Frontier Foundation and EDRi earlier in the year, the technical specifications of the tender do not make ZKPs a mandatory requirement, although their use is recommended (granted, the technical specifications have evolved since 31 March 2025; at the time of writing, they mention that batch issuance will be used in the short term as a way to preserve unlinkability, but batch issuance offers weaker privacy guarantees). Echoing the content of the technical specifications, the final guidelines merely state that “online platforms are encouraged to adopt double-blind age verification methods” (emphasis added).
Notably, despite having developed its own age verification demonstrator leveraging ZKPs, the CNIL notes—in its 2024 deliberation related to access to porn sites—that ‘double anonymity’ solutions have not yet reached full maturity and warns that this requirement could complicate their availability in the short term.
What is more, a tokenisation approach to age verification does not necessarily guarantee that tracking risks are appropriately mitigated. While tokenisation helps reduce the amount of personal data shared during age verification and furthers data minimisation, it does not by itself address linkability-related risks, especially if the system lacks a double-blind architecture. If, for example, the same token is presented to multiple platforms, users can still be tracked or profiled across sites. Tokenisation alone therefore cannot guarantee robust privacy protection. Yet the EC does not clearly unpack this point.
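To make the linkability point concrete, consider the following hypothetical sketch: a single opaque token reused across services acts as a cross-site identifier, whereas per-platform tokens derived for each relying party cannot be matched against one another. The token formats are illustrative assumptions only, not any existing specification.

```python
"""Illustrative sketch of the linkability risk described above: a static age
token reused across services can be joined across their logs, while pairwise
tokens derived per platform cannot."""
import hashlib
import secrets

user_secret = secrets.token_bytes(32)     # held in the user's wallet/agent

# Option A (problematic): one static token shown to every platform.
static_token = hashlib.sha256(user_secret).hexdigest()
log_platform_a = {"token": static_token, "activity": "joined forum"}
log_platform_b = {"token": static_token, "activity": "watched videos"}
# Anyone who can see both logs can link the two accounts to the same person.
print(log_platform_a["token"] == log_platform_b["token"])    # True


# Option B (better): a pairwise token derived per platform, so the values
# presented to different services cannot be matched against each other.
def pairwise_token(secret: bytes, platform_id: str) -> str:
    return hashlib.sha256(secret + platform_id.encode()).hexdigest()


print(pairwise_token(user_secret, "platform-a") ==
      pairwise_token(user_secret, "platform-b"))             # False
```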
All in all, the EU age verification solution is considered by the EC to be the benchmark, even though its technical specifications, and therefore the strength of its privacy guarantees, are expected to evolve over time—a point the EC does not explain. Presenting the mini-ID wallet as the EU’s solution for age verification thus risks being counterproductive, given the lack of clarity around its timeline and the range of online platform services it would be suitable for in the short term. What is more, it is unclear how platforms will be able to pursue a high level of accessibility and offer more than one age assurance method if each assurance method must achieve the same levels of accuracy, reliability, robustness, non-intrusiveness and non-discrimination.
- Protecting all users’ rights makes it easier to protect children’s rights
What does not emerge clearly from the Article 28 Guidelines is that, when compliance with data protection law is at stake, online platforms are in reality faced with two options: design for age ranges or design for all users. To use the words of the UK DPA in its Children’s Code, covered service providers, including online platforms processing children’s personal data, shall either a) establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children, or b) apply the fifteen standards of the Code to all users, thus foregoing the need to worry about age-appropriate design requirements. The same approach underpins the fourteen fundamentals developed by the Irish DPA, which are referred to a couple of times in the guidelines.
Even if the EC clarifies in the final text of the guidelines that age assurance methods cannot merely be substitutes for other types of measures, it is insufficient, and possibly misleading, to write that “online, providers of online platforms are encouraged to adopt those measures for the purposes of protecting all users,” without distinguishing age assurance methods from other service design measures. Yet, ensuring a high level of data protection for all users should reduce the need to rely upon age assurance methods.
Data Protection by Design and Default Is for All Users
Privacy-by-default profile settings are recommended by the EC as a key safeguard for minors, but a strict application of Article 25 GDPR should require them for both children and adults. What is more, tracking features and push notifications should also be off by default for both children and adults. Functionalities that increase users’ agency, and measures implemented to ensure the effectiveness of user choice after service updates or changes, should be on by default. And neither adults nor children should be exposed to manipulative or exploitative design features.
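Expressed as configuration defaults, this “protective defaults for everyone” logic might look like the hypothetical sketch below; the setting names are illustrative and are not drawn from the guidelines themselves.

```python
"""Hypothetical sketch of privacy-protective defaults applied to every
account, adult or minor, in the spirit of Article 25 GDPR. Field names are
illustrative only."""
from dataclasses import dataclass


@dataclass
class AccountDefaults:
    profile_public: bool = False                 # private profile by default
    cross_site_tracking: bool = False            # tracking features off by default
    push_notifications: bool = False             # no attention-grabbing nudges
    personalised_recommendations: bool = False   # personalisation is opt-in
    reconfirm_choices_after_update: bool = True  # user choices re-surfaced
                                                 # after service changes


# The same defaults apply regardless of the user's (estimated) age; only an
# explicit, informed choice by the user can relax them.
print(AccountDefaults())
```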
It has also been explained why explicit user-provided signals make sense for all users in the context of recommender systems. As Pershan and McCrosky state, “[a] platform with adequately rich data will inevitably tend to discriminate on many characteristics, including sensitive characteristics under the GDPR. What can be done to provide transparency into this process or prevent this discrimination? There are two general options: either ‘more data’ or ‘(much) less data.’” The second option is the most sensible.
Pushing the reasoning of the CJEU in the Bundeskartellamt case one step further, when personalisation of content is not strictly necessary for the core service, it should be turned off by default.
This would leave us with three main types of minor-specific measures:
- Measures ensuring age-appropriate information to minors (about the functioning of the service and the correct implementation of protective features, including reporting processes).
- Measures ensuring minors only communicate with their age group.
- Measures ensuring only age-appropriate content is displayed to minors.
Bearing in mind children’s rights to inclusion, participation, information and freedom of expression, the second and third types of measures require careful balancing, especially considering the age range concerned and the scope and reach of the online platform services. What is more, balancing fundamental rights, including balancing the various fundamental rights of children and balancing the fundamental rights of children in relation to those of other users, cannot be easily automated. This necessarily leads to trade-offs that are difficult to prioritise or rank.
Conclusion: what the guidelines get wrong
To conclude, although the guidelines are driven by genuine concern, they do not clearly articulate the implications of the interplay between the DSA and the GDPR. In particular, the approach adopted risks oversimplifying the complex challenges associated with age assurance. Without a clearer delineation of scope and a recognition of inherent trade-offs, the guidelines may fall short of effectively balancing the fundamental rights at stake.