The wait is (almost) over! First risk assessment and audit reports – what will be published, when, and the way forward
Magdalena Jóźwiak
Researcher, DSA Observatory
University of Amsterdam
By the end of November 2024, the VLOPs and VLOSEs first designated by the Commission on 25 April 2023 are expected to publish their risk assessment reports, providing long-awaited insight into this crucial due diligence mechanism introduced by the DSA. This post offers an overview of the information that platforms are expected to make publicly available in the coming days and examines the main activities undertaken so far by the Commission to enforce and supervise the DSA’s risk management framework.
Introduction
One of the most transformative obligations introduced by the DSA is the requirement for VLOPs and VLOSEs to assess and mitigate systemic risks stemming from their functionalities, design, and operations (Articles 34 and 35). By compelling platforms to recognize and address their broader societal impact, the DSA has the potential to revolutionise online platform accountability.
However, the concept of systemic risks remains elusive. While Article 34(1) outlines several broad categories of risk (risks connected to illegal content, infringements of fundamental rights, negative effects on civic discourse, elections and public security, gender-based violence, public health, and people’s physical or mental wellbeing), it leaves significant room for interpretation. Whether the risk assessment mechanism achieves its transformative potential will depend largely on how it is initially interpreted and operationalised by the platforms and subsequently enforced by the Commission. For this reason, to move the critical assessment of the DSA’s due diligence framework forward, it is crucial to understand how the platforms have applied Articles 34 and 35.
The obligation to prepare the initial risk assessment reports was triggered for the first designated VLOPs and VLOSEs on 25 April 2023. Until now, however, these reports have remained confidential, accessible only to the Commission and the DSCs. As a result, the general public has had little insight into how Articles 34, 35, and 37 have been applied in practice. This will soon change. According to the DSA timeline, the first batch of risk assessment, mitigation, and audit reports is expected to be published imminently, by the end of this month.
This post (building on an earlier DSA Observatory post by Paddy Leerssen) aims to highlight this critical moment by outlining what information is expected to be made public as part of the DSA’s risk assessment framework. It also reviews the Commission’s actions in risk management under the DSA to date, and flags some key challenges and uncertainties raised by experts in the field. The post concludes with a roadmap for the next steps following the publication of these reports.
The DSA Observatory will closely monitor these developments, including through updates and analyses of published reports. We also invite researchers active in the area to contribute to this conversation by submitting their insights to the DSA Observatory blog.
What will be published and when?
The DSA combines two regulatory approaches: prescriptive norms (e.g., liability provisions and content moderation rules) and due diligence requirements, which address the long-term societal challenges posed by platformization. In his recent book, Martin Husovec describes this second mode as a “polycentric regulatory model with elements of co-regulation”. The due diligence obligations require VLOPs and VLOSEs to conduct an annual cycle of self-assessment and auditing to address systemic risks tied to their services’ design, functionality, or usage. As a result, platforms play a central role in identifying, evaluating, and mitigating these risks, proportionately to their severity and probability.
This cycle begins when the European Commission notifies a company of its designation as a VLOP or VLOSE. For the first batch of designated platforms, this occurred on 25 April 2023. The obligations specific to VLOPs and VLOSEs took effect four months later, on 25 August 2023 (Article 33(6)). By this date, platforms were expected to:
- Conduct risk assessments (Article 34(1)).
- Submit their findings to the Commission promptly (Article 42(4)).
- Follow up with a report detailing mitigation measures to address identified risks (Article 35(1)).
Additionally, platforms must arrange for independent audits of their compliance with Chapter III of the DSA at least once a year (Article 37(1)). This means that for the first group of platforms designated as VLOPs and VLOSEs on 25 April 2023, the initial audits should have been completed by 25 August 2024. If an audit’s results were not labelled as ‘positive’, platforms had one month to adopt an implementation report detailing how they addressed the auditors’ recommendations. This cycle must be repeated before launching any new functionality that is likely to affect the systemic risks connected to the design, functionality or use of the platform’s services (Article 34(1) second paragraph).
Platforms must send all reports to the Commission and the DSCs of establishment immediately upon their completion and make them publicly available within three months of receiving the audit reports. Consequently, for the first batch of platforms, publication is due by 25 November 2024 (Article 42(4)).
Under Article 42(4), the following documents should be published by that date:
- The report setting out the results of the risk assessment,
- The reports on mitigation measures taken,
- The independent audit reports,
- The audit implementation reports (if applicable).
However, the publicly available versions of these documents may differ from those submitted to the Commission due to the carve-out in Article 42(5). This provision permits platforms to redact information from the public reports where it is confidential to the platform or its users, could create significant vulnerabilities for the security of the service, or might undermine public security or harm users. Platforms are likely to make extensive use of this provision.
Finally, it is worth noting that further clarification on systemic risks and their mitigation will be provided by the European Board for Digital Services and the Commission by February 2025. Under Article 35(2), they are required to publish an annual report identifying and assessing the most significant and recurring systemic risks, as well as best practices for their mitigation. These reports will be based on information received from VLOPs and VLOSEs, the Commission’s own research, and the findings of researchers referenced in Articles 40(4) (vetted researchers gaining access to non-publicly available data) and 40(12) (researchers accessing publicly available data).
While the public has largely been kept in the dark about how platforms interpret and fulfil their risk management obligations, the Commission has taken a notably proactive approach to enforcing its role under the DSA, utilising various tools available within the risk management framework. The following section provides an overview of the Commission’s activities in this area.
The Commission’s approach to DSA risk management framework so far
The responsibility for enforcing the DSA’s risk management framework lies with the European Commission, which exercises extensive investigatory powers under the regulation (the Commission can, for example, send a request for information to a platform, conduct interviews, or even carry out an inspection of the company’s premises) to determine whether VLOPs or VLOSEs comply with their obligations, and it can impose fines for infringements. To date, the Commission has sent requests for information to all but one platform designated as a VLOP in the first group (the Wikimedia Foundation has not yet been investigated). If an investigation raises suspicions of non-compliance with DSA provisions, the Commission may initiate formal proceedings against the platform.
In all proceedings opened so far, the Commission has addressed, among other issues, platforms’ obligations to assess and mitigate systemic risks under Articles 34 and 35(1). Additionally, most of these proceedings have identified problems related to researchers’ access to data under Article 40(12). The Commission’s investigations have focused on failures to properly assess and mitigate systemic risks, including:
- Proceedings against X (18.12.2023): Issues related to negative effects on civic discourse and electoral processes, dissemination of illegal content (such as hate speech and terrorist speech), and fundamental rights violations, including rights to dignity and non-discrimination.
- First proceedings against TikTok (19.02.2024): Concerns about addictive design, the phenomenon of ‘rabbit holes’ of harmful content, and minors’ access to inappropriate content.
- Proceedings against AliExpress (13.03.2024): The sale of goods posing risks to consumer protection and health (e.g., medicines, food, and dietary supplements), risks to minors (e.g., pornographic material), dissemination of illegal content (e.g., reappearing banned products and ‘hidden links’ to illegal goods), and the AliExpress Affiliate Program, which may contribute to the spread of illegal content.
- Second proceedings against TikTok (22.04.2024): The launch of a new app, TikTok Lite, and the associated TikTok Lite Rewards Programme in France and Spain without prior risk assessment or mitigation measures, potentially encouraging addictive behaviours (in response, TikTok committed to withdraw the Rewards Programme from the EU and not to re-launch any similar programmes, commitments which the Commission accepted as binding).
- First proceedings against Meta (30.04.2024): Failures related to civic discourse, electoral processes, and fundamental rights, including the dissemination of deceptive advertisements, disinformation campaigns, and coordinated inauthentic behaviour. The case also involves the planned discontinuation of the CrowdTangle election monitoring tool.
- Second proceedings against Meta (16.05.2024): Concerns about interface designs that promote addictive behaviours and reinforce the ‘rabbit hole’ effect, particularly for minors, alongside inadequate age verification tools that allow minors to access inappropriate content.
- Proceedings against Temu (31.10.2024): The sale of goods non-compliant with EU norms, their reappearance on the platform after removal, and addictive service designs, including game-like reward programs.
Beyond supervising platforms’ compliance through its investigatory and sanctioning powers, the Commission has also been gradually developing other key elements of the risk management framework outlined in the DSA, including guidelines on risk mitigation measures and codes of conduct. Further down the line, the Commission can also be expected to be proactive in the development of voluntary standards (Article 44) and crisis protocols (Article 48).
Under Article 35(3), the Commission, in collaboration with the DSCs, can issue guidelines on risk mitigation techniques for specific risks. These guidelines can suggest best practices and recommend specific approaches for VLOPs and VLOSEs. To date, the Commission has published one set of such guidelines, focused on mitigating systemic risks to electoral processes. The development of these guidelines included consultations with civil society, DSCs, and platforms, enabling stakeholders to share their perspectives on systemic risks in this area.
Another critical component of the DSA’s risk management framework is the development of codes of conduct, a process in which the Commission plays an active role (Article 45(1)). These codes are particularly significant during the audit phase. Under Article 37(1)(b), auditors must assess platforms’ compliance not only with the DSA’s provisions but also with the codes of conduct they have committed to. For instance, platforms’ adherence to the 2022 Strengthened Code of Practice on Disinformation already forms part of the auditors’ evaluations in the upcoming reports. Additionally, the Commission has been facilitating the creation of an EU Code of Conduct on Age-Appropriate Design, which focuses on protecting minors using digital products.
Martin Husovec suggests that the Commission might prioritize the development of codes of conduct as a primary enforcement tool.1 Compared to launching formal investigations and imposing sanctions – which require extensive and costly evidence that can withstand judicial scrutiny in case of litigation before the CJEU – codes of conduct are less resource-intensive and may achieve similar results. Moreover, Carl Vander Maelen and Rachel Griffin argue that although codes of conduct are formally voluntary, their practical application may resemble hard law obligations. Platforms that fail to comply with such codes risk attracting scrutiny from the Commission for potential non-compliance.
While codes of conduct offer a collaborative and adaptable approach to compliance, delegated acts represent a more formal regulatory tool in the DSA’s enforcement arsenal. The Commission has been active in drafting two delegated acts under the DSA in the field of risk management. The first such act, effective as of 22 February 2024, focuses on independent audits of VLOPs and VLOSEs, providing detailed provisions on audit methodologies and templates for audit reports. This delegated act will already apply to the upcoming audit reports. A second delegated act, currently under public consultation, addresses data access by researchers and is expected to be adopted in the first quarter of 2025.
What to expect and next steps following the publication
As noted by Fahy et al., the ‘devil with the DSA sits in its enforcement’. This is particularly true for the DSA’s novel regulatory approach to addressing societal harms linked to the operations of the largest platforms and search engines. Whether the DSA’s risk management framework proves to be transformative or merely perfunctory might largely depend on the level of scrutiny it attracts from various stakeholders, which could prompt enforcement actions by the Commission. Recital 137 DSA acknowledges that while failures by VLOPs and VLOSEs to comply with its provisions may result in ‘large societal harms’, such failures are often difficult to detect and address. To meet this challenge, the Commission is encouraged to establish a broad coalition of experts, including vetted researchers, representatives of EU agencies and bodies, industry representatives, user associations, civil society groups, international organizations, and private sector experts, to support its enforcement role. The upcoming publication of risk assessment and audit reports is expected to activate this collaborative potential to hold platforms accountable.
However, platforms have been granted a ‘first mover advantage’ in framing discussions around systemic risks relevant to their services. By setting the benchmarks for risk assessments and interpreting the broad risk categories outlined in Article 34(1), they shape the narrative early on. This raises concerns that companies might conduct risk assessments superficially or focus only on risks that do not threaten their current business models. Another concern is that the reports made available to the public may be overly general or formalistic, preventing researchers from engaging in meaningful analysis of systemic risks or evaluating the criteria used to measure platforms’ impact on fundamental rights. Additionally, there is a fear that large portions of the published reports will be heavily redacted under Article 42(5), which permits the removal of information deemed confidential, likely to cause significant vulnerabilities to the security of the service, undermine public security, or harm users. Some civil society organizations have also criticized Commissioner Breton’s heavy-handed actions, for instance a letter he sent to X. The letter, issued in the context of street riots in the UK and a planned live broadcast of a conversation between Donald Trump and Elon Musk, emphasized X’s risk management obligations under the DSA and hinted at the ongoing proceedings against the platform. However, it offered no clarification as to what specific elements might be considered systemic risks in this context. In this and other cases, civil society organizations have expressed concerns that the DSA risks being used as a tool for ad hoc political intervention. It will be interesting to see to what extent this pressure might be reflected in platforms’ upcoming reports.
We will soon find out whether any of these fears materialize. Several leading digital rights organizations recently issued a joint statement calling for meaningful transparency from platforms, signalling that anything short of that will not do. Ensuring that these reports live up to their potential requires a collective effort, including from the research community. Researchers can play a crucial role in scrutinizing the reports, identifying gaps or instances where critical information is lacking, and holding platforms accountable for any failures to provide meaningful transparency. Where the reports fall short, researchers and civil society should not hesitate to call out these deficiencies. Moreover, the upcoming delegated act on data access will provide an additional pathway for scrutiny. This proactive engagement is essential for shaping how systemic risks and mitigation measures are defined and operationalized moving forward.