DSA risk assessment reports: A guide to the first rollout and what’s next

by John Albert, DSA Observatory

9 December 2024

 


The recently published risk assessment reports may shed light on how major online platforms assess and manage risks. But the first rollout has already raised questions about publication timelines, redactions, and formats. This post addresses these ambiguities, surveys civil society reactions to the reports, and highlights forthcoming regulatory guidance and opportunities for stakeholder engagement.

 


 

Late last month, the first batch of systemic risk assessments started rolling out in compliance with the accountability framework established by the Digital Services Act (DSA) for "Very Large" online platforms and search engines (VLOs) operating in the EU.

These reports offer insight into how major tech platforms identify and assess systemic risks stemming from their services—such as the spread of illegal content or harms to fundamental rights—and outline measures they have taken to mitigate those risks.  

External stakeholders will need time to fully digest and scrutinize these reports. But early observations already raise questions about their timing, completeness, and format, as well as general concerns around transparency.   

This blogpost focuses on these ambiguities—some resolved, others still open—while offering tips for navigating the reports. Drawing on our analysis and input from civil society groups tracking the DSA’s risk management framework, we aim to help make sense of this initial rollout and consider what comes next.  

Systemic risk assessments: Who published what, when, and why the confusion?  

Between 22 November and 4 December, 19 of the 25 designated VLOs published their annual systemic risk assessments for the first time. Independent researcher Alexander Hohlfeld and content moderation platform Tremau created handy trackers for these risk assessments, plus the audit and audit implementation reports, available here and here. 

Most of the published risk assessments cover the period from September 2022 to August 2023, making the reports already over a year old. This time lag, which many civil society organizations had anticipated, has been criticized for limiting public interest research into emerging risks.

Several platforms, however, published more recent reports. Booking.com, Zalando, and Pinterest released reports from both 2022-23 and 2023-24, while Meta (Facebook and Instagram), Alibaba (AliExpress), and Wikipedia shared only their most recent (2023-24) assessments. 

This inconsistent rollout reflects lingering confusion among external stakeholders over which reports would be available and when; platforms themselves may have had mixed understandings of what was required (at least on this first go-around) under Article 42(4) DSA.

However, a European Commission representative clarified during a recent Institute for Strategic Dialogue workshop that VLOs are indeed expected to publish their most recent risk assessments (i.e., from the ongoing year).  

When will more reports be published?

Given the Commission's expectations and the example set by Meta and others, we can expect VLOs that have not already done so to publish their more recent risk assessments in due course, though the exact timing remains unclear.

Annual reports will nevertheless be rolled out in a staggered fashion, as publication timelines depend on when platforms were designated as VLOs by the Commission. This includes prominent porn sites (Pornhub, Stripchat, XVideos, and XNXX) and e-commerce platforms (Temu and Shein), which were designated later, between late 2023 and mid-2024; their reporting obligations will align with those dates.

Beyond annual risk assessments, VLOs are also required to conduct ad hoc risk assessments before launching new features or products in Europe (some may recall that the TikTok Lite Rewards program was withdrawn in the EU after the Commission opened formal proceedings against the company for failing to meet this obligation). It remains to be seen whether these ad hoc assessments will be published as standalone reports or folded into the annual assessments. 

For those trying to keep up with this staggered rollout, an online portal managed by the Commission may eventually materialize. Until then, third-party report trackers will continue to be invaluable resources for researchers and other stakeholders.  

How heavily were the reports redacted?  

Beyond the issue of timing, there was concern within civil society that companies might publish heavily redacted versions of their systemic risk assessments. While reports from TikTok and X did include visible redactions, with blocks of text blacked out, most other VLOs appear to have published their assessments in full—or at least without clearly marked omissions. 

Viewed optimistically, the few redactions could suggest that platforms prioritized transparency over guarding confidential information, aiming to showcase their risk management efforts in full. Viewed cynically, one might argue—as others have—that these initial assessments are so superficial and vague that there is hardly anything of substance worth redacting.  

That said, it is not crystal clear whether these public reports are identical to the versions submitted to regulators. The European Commission could help by clarifying if any “hidden” redactions exist—i.e., whether there are discrepancies between the public-facing reports and the confidential versions provided to authorities. 

The long and winding PDFs  

The scope and substance of these reports vary dramatically, reflecting differences in the types of services and associated risks, as well as cultural and organizational differences between the companies.

What nearly all of them have in common is the format: PDFs totaling dozens if not hundreds of pages (Snapchat's report is over 200 pages). Among the major social media platforms, the TikTok, Instagram, Facebook, and X reports hover around 90 pages each; YouTube is folded into a larger Google report (which also covers the company's search engine, app store, maps, and shopping platform); LinkedIn's report is under 40 pages.

The substance of these reports is more important than their length. But the current reliance on PDFs poses challenges for quantitative researchers now tasked with digging through hundreds of pages. While PDFs can technically be machine-readable, their static and inconsistent formatting often requires extensive preprocessing to extract and structure information effectively, creating unnecessary barriers for external scrutiny.  
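To give a rough sense of the preprocessing involved, the sketch below pulls raw text out of a report PDF and splits it into rough sections by heading. It is a minimal illustration, assuming the pypdf library and a hypothetical local file name; real reports will need considerably more cleanup (tables, footnotes, multi-column layouts) before they can be analyzed at scale.

```python
import re
from pypdf import PdfReader  # assumes pypdf is installed: pip install pypdf

# Hypothetical file name; substitute the report PDF you downloaded.
reader = PdfReader("tiktok_risk_assessment_2023.pdf")

# Concatenate the extracted text of every page.
raw_text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Crude heuristic: treat numbered lines or short all-caps lines as section headings.
heading_pattern = re.compile(
    r"^(?:\d+(?:\.\d+)*\s+.{3,80}|[A-Z][A-Z &\-]{5,80})$", re.MULTILINE
)

# Slice the text into sections keyed by the heading that precedes each slice.
sections = {}
matches = list(heading_pattern.finditer(raw_text))
for i, match in enumerate(matches):
    start = match.end()
    end = matches[i + 1].start() if i + 1 < len(matches) else len(raw_text)
    sections[match.group().strip()] = raw_text[start:end].strip()

print(f"Found {len(sections)} candidate sections across {len(reader.pages)} pages")
```

Even this toy example shows why static PDFs are a poor fit for systematic analysis: every research team ends up writing and re-tuning its own extraction heuristics for each platform's layout.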

For future reports, it would be beneficial for VLOs to adopt a standardized, machine-readable format—such as HTML or JSON—alongside downloadable PDFs to cater to both machine-based analysis and good-old-fashioned reading (Wikipedia is the only VLO so far to have published a risk assessment in HTML).  
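Purely as an illustration of what "machine-readable" could mean in practice, the snippet below sketches one possible structured record for a single identified risk and serializes it to JSON. The field names are hypothetical; no such schema has been prescribed by the Commission or adopted by any VLO.

```python
import json

# Hypothetical structure for one entry of a machine-readable risk assessment.
risk_entry = {
    "platform": "ExamplePlatform",
    "reporting_period": {"start": "2023-09-01", "end": "2024-08-31"},
    "risk_category": "dissemination_of_illegal_content",  # e.g. one of the Article 34 risk areas
    "description": "Spread of illegal hate speech via recommender-driven surfaces.",
    "severity": "high",
    "probability": "medium",
    "mitigations": [
        {"measure": "classifier-based proactive detection", "status": "deployed"},
        {"measure": "recommender system adjustments", "status": "planned"},
    ],
    "stakeholders_consulted": ["external researchers", "civil society organisations"],
}

# Serializing to JSON would let the same data feed trackers, dashboards,
# or cross-platform comparisons without any PDF scraping.
print(json.dumps(risk_entry, indent=2))
```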

The companies behind these risk assessments offer incredibly sophisticated consumer products; we might reasonably expect them to show similar thoughtfulness toward the usability of their transparency reports. Whether they do so voluntarily or by requirement remains to be seen, as standardizing reporting formats may be taken up by regulators developing guidance for future risk assessments.

Upcoming regulatory guidance and opportunities for stakeholder engagement  

The Commission, in collaboration with the Board of Digital Services Coordinators, is working on a comprehensive analysis of the first wave of risk assessments submitted by VLOs. This joint effort aims to identify the most prevalent and critical systemic risks, while also providing guidance on effective mitigation strategies and highlighting best practices.  

The reporting period for this exercise runs from 17 February 2024 to 16 February 2025, meaning the final report from the Commission and the Board can be expected no sooner than mid-February 2025.  

Meanwhile, the Commission continues to engage with civil society stakeholders researching systemic risks and has announced plans to hold workshops in early 2025 to publicly facilitate a dialogue between VLOs, DSCs, and civil society on the risk assessments. This presents a clear opportunity for external stakeholders to give feedback on the reports and raise concerns about their efficacy.  

CSO reactions point to transparency gaps 

We have yet to hear any major public statements from civil society organizations regarding the first risk assessments. But early chatter points to frustration with a lack of substance in the reports, and to the perception that platforms did not properly consult impacted groups and independent experts when assessing risks and designing mitigation measures.

CSOs have also noted that the reports tend to focus on content moderation practices as risk mitigations, without accounting for risks inherent to prevailing business practices like user profiling and engagement-based algorithms.  

For instance, Amnesty Tech tweeted that TikTok’s report largely fails to acknowledge systemic health risks tied to the platform’s design—highlighting Amnesty’s research (in partnership with the Algorithmic Transparency Institute and AI Forensics) that shows how TikTok’s algorithmic “For You” feed can push vulnerable users toward self-harm content.  

These criticisms raise important questions about transparency. Auditors evaluating the risk assessments, such as EY and Deloitte, have privileged access to internal platform data that is unavailable to researchers or the public, creating a significant information asymmetry that the publication of these reports has not resolved.

Article 40 of the DSA, which regulates data access for vetted researchers, offers a potential remedy. Its success, however, hinges on regulators’ ability to swiftly implement and enforce mechanisms for researchers to access data which underlies these assessments. 

A draft delegated act on researcher access to platform data is currently open for consultation, moving us one step closer to putting this access framework into practice. Interested parties have until 10 December to submit feedback; the DSA Observatory recently hosted an expert workshop that offers recommendations for the consultation. 

Making the most of the risk reports 

It bears repeating that the newly published risk assessments mark a significant milestone in the DSA’s accountability framework for Big Tech platforms. But these are just the beginning. 

Researchers and civil society organizations now have an opportunity to analyze the available data, identify shortcomings, and turn their findings into meaningful action—by advocating for more effective and transparent risk-management practices, or laying the groundwork for deeper investigations through regulated data access. 

The Commission-facilitated workshops planned for early 2025 may provide a forum to build on these efforts, bringing stakeholders together to address challenges in the risk assessment process and discuss ways to strengthen platform disclosures. 

For now, it’s time to dig into these reports.