The DSA, disinformation and the European elections: solutions through recommender systems?

By Doris Buijs

Many thanks go out to Paddy Leerssen, who provided very valuable feedback, comments and suggestions. 

17 June 2024

Introduction

The EU elections are a week behind us and we are in the midst of analysing the voting results. As expected, right-wing parties seem to have gained seats, while the social democrats, greens and liberals have lost some. Despite these shifts, Von der Leyen stressed that “the center is holding”.

Two months before the elections, MEPs (2019-2024) Kim van Sparrentak (Greens/EFA) and Paul Tang (Socialists & Democrats) wrote a letter to EU Commissioners Margrethe Vestager and Thierry Breton. The authors and signatories urged them to take measures against online disinformation. They deemed the EU elections to be “under huge threat of foreign interference through disinformation”. The MEPs proposed “two very simple steps”:

  • “turning off personalised recommender systems by default for the very large online platforms”
  • “explicitly stop recommender systems based on interaction”

The letter continued by proposing to use the DSA to reach these goals. Specifically, the MEPs proposed four options: guidelines; a reviewed Code of Practice; a crisis measure; or, if necessary, new initiatives. So far, however, none of these four proposals seems to have been put into practice. In hindsight, were these options realistic? Not really. This blog post discusses the four proposed ways in which the DSA could have been instrumentalised to implement the MEPs’ two proposed measures. It does so mainly by comparing the MEPs’ proposals to policies already in place, as a fair amount of work has already been done under other EU policies. Those current policies do not go as far as the MEPs proposed. This blog post argues that the proposals may not have been directly possible under the DSA, nor desirable.

Background

The MEPs’ letter came in response to news reports about online news outlet Voice of Europe spreading pro-Russian propaganda, and the EU’s imposition of sanctions on the outlet. More generally, (Russian) interference and online election disinformation are regarded as a major problem within the EU and by EU institutions. In March 2022, the EU banned Russia Today and Sputnik as an economic sanction against Russia, for spreading disinformation as Kremlin-backed news outlets (see also: this article by Ó Fathaigh and Voorhoof). And in August 2023, the European Commission published a study on Russian disinformation and the DSA.

More recently, European political parties signed a Code of Conduct for the 2024 EP Elections, which followed an EC recommendation on the inclusiveness and resilience of elections (though the latter did not address platforms). And on 25 April 2024, the European Parliament adopted a resolution on Russian interference in the EP and the 2024 European elections. The resolution highlights that despite recent legislation such as the Digital Services Act (DSA), AI Act and European Media Freedom Act (EMFA), “further action is needed to stop the spread of malign disinformation online and to protect the right of European citizens to reliable news”. The Parliament writes that more is needed “to protect the European information environment” and calls for a “transversal, holistic and long-term policy approach”. Platforms themselves have in this regard even signed a ‘tech accord’, agreeing to combat deceptive use of AI in elections. And only a couple of days before the elections, DG CONNECT held a roundtable on ‘DSA election readiness’ with VLOPSEs, DSCs and other competent Member State authorities (which includes a link to the minutes of the roundtable).

The Commission signalled a serious ambition to tackle disinformation via the DSA on 30 April 2024, when it opened formal proceedings against Meta concerning, inter alia, the circulation of deceptive advertisements and Russian disinformation on Meta’s platforms Facebook and Instagram. These proceedings should be seen in light of the recently adopted guidelines for VLOPSEs and their requirements surrounding systemic risk mitigation related to elections, the adoption of which was accompanied by a ‘stress test’ conducted by the Commission. The guidelines will be discussed in more detail below.

In short, it is clear that the EU has serious ambitions to tackle disinformation during the 2024 EU elections. President of the European Parliament Roberta Metsola described the elections as “a test of our systems”. Despite the steps already taken, outlined above, the two MEPs didn’t seem willing to await this ‘test’, and called for action.

Option 1: guidelines

The first option proposed is ‘guidelines’. Based on Article 35(3) DSA, the Commission can indeed issue guidelines on systemic risk mitigation measures. However, the Commission has already done so. On 26 March 2024, the Commission proposed guidelines for VLOPSEs on the mitigation of systemic risks for electoral processes under the DSA. The proposal was followed by a period for calls for input (you can find the summary here). On 26 April 2024, the Commission published the officially adopted guidelines. By now, the elections are over and no further set of guidelines has been published. Perhaps it is therefore more interesting to take a look at what the current guidelines say about recommender systems.

Guidelines can only be proposed by the Commission under Article 35(3) of the DSA in case of specific (systemic) risks. Although it is generally expected that disinformation will constitute or contribute to (a) systemic risk(s) for various VLOPs, the guidelines may, technically speaking, be a bit premature. Whether disinformation constitutes or contributes to (a) systemic risk(s) under the DSA will be officially established for each VLOP separately, through audit reports. Pursuant to Article 37(1), VLOPs shall be subject to an independent audit at least once a year, which falls at the end of August 2024 for the 19 VLOPs designated as such in April 2023 (including e.g., Facebook, Instagram, LinkedIn, TikTok and Twitter). The reports must be made publicly available at the latest three months after completion of the audit report (Article 42(4) DSA). In other words, it may be a while before it is publicly known whether disinformation is indeed a systemic risk under the DSA, and if so, on which platforms. Of note, the guidelines “set out best practices”, so we can assume they are already based on some preliminary findings.

Specifics of the guidelines

The guidelines are, broadly speaking, divided into four subsets: (i) reinforcing internal processes, (ii) risk mitigation measures for electoral processes, (iii) mitigation measures linked to generative AI and (iv) cooperation with national authorities, independent experts and civil society organisations. The three sections that follow discuss measures during and after election periods, as well as specific guidance on the 2024 European elections. Examples of measures include having adequate resources for content moderation with local language and context knowledge (sub 21) and providing official information on electoral processes (sub 27(a)). VLOPSEs are also encouraged to participate in the rapid response system (which has been implemented) and the feedback mechanism as referred to in the 2022 Code of Practice on Disinformation. For a helpful overview of the (proposed) guidelines, see this article on Euractiv.

Turning back to the MEPs’ proposals, what do the guidelines specifically say about recommender systems? Section 3.2 (risk mitigation measures for electoral processes), sub 27(d)(i-vi), lists some possible measures. For instance, VLOPSEs “should consider” ensuring that recommender systems give users “meaningful choices” and “control over their feeds” (i). They could also take measures to “reduce the prominence of disinformation” in electoral contexts in a clear and transparent manner (ii) and to “limit the amplification of deceptive, false or misleading content generated by AI” through recommender systems (iii). Sub (v) notes the option to establish “measures to provide transparency around the design and functioning of recommender systems”, in particular around the data and information used, in order to foster media pluralism and diversity.

In short, the guidelines do not contain any measures as strict as those proposed by the MEPs, i.e. to turn off personalised recommender systems by default and to stop deploying recommender systems based on interaction.

Ignorance is bliss?

The current guidelines do not explicitly ‘encourage’ platforms to stop using recommender systems based on interaction or personalisation. Future guidance could, though. It is therefore interesting to briefly discuss whether VLOPSEs can decide to ignore the guidelines.

The intention of the guidelines is to support VLOPSEs in ensuring compliance with their systemic risk obligations regarding electoral processes. Nowhere does the DSA state that such guidance is binding. The Commission merely “strongly encourages” VLOPSEs to “implement these guidelines quickly and comprehensively” (para 71). The Commission’s press release notes that, should they not follow the guidelines, VLOPSEs will have to “prove” that the alternative measures they undertake are “equally effective”. Similarly, codes of conduct under the DSA are specifically mentioned in Article 35(1)(h) as a means to comply with the risk mitigation obligations. Could it be that non-adherence to these guidelines will become a stick to beat VLOPSEs with, despite the lack of a concrete foundation for this in the DSA? It seems like it. As noted above, the Commission has opened formal proceedings against Facebook and Instagram under the DSA. Suspected infringements of the DSA related to, among other things, Meta’s intended deprecation of CrowdTangle are presented as the reason. The Commission refers to its election guidelines, which state that such tools are important to track disinformation and should therefore be expanded in times of elections (see ‘3.2.2 Third party scrutiny, research and data access’ in the guidelines). Taking into account the planned discontinuation of CrowdTangle, the Commission suspects that Meta has failed to assess and mitigate risks related to civic discourse and electoral processes (Article 34(1)(c) DSA) and ‘other’ systemic risks. It has asked Meta to respond within 5 working days. Reuters writes that Meta “has added safety measures to its misinformation tracking tool CrowdTangle” for the EU elections, including new real-time dashboards. Criticism remains, however.

Despite the above, it seems that VLOPSEs can, in principle, decide to disregard the guidelines and take their own measures, provided that they are able to prove alternative compliance. Perhaps the MEPs would have wished for guidelines that directly ‘encourage’ turning off personalised and/or interaction-based recommender systems, and therefore called for (more) guidelines. This did not seem too realistic, nor the ‘best’ option, given that guidelines were already in place and that such guidance is (semi-)voluntary. As Van Sparrentak pointed out herself (see below), platforms have lobbied heavily against prohibitions on interaction-based recommender systems that recommend extreme content, and against prohibitions on targeted ads. It is thus not to be expected that platforms will voluntarily follow such a guideline, even if it were to make it into official guidelines. We will continue by discussing the second proposed measure: a reviewed code of practice.

Option 2: a reviewed code of practice 

The second option the MEPs proposed is a “reviewed code of practice”. The 2022 Strengthened Code of Practice on Disinformation (2022 CoPD) naturally comes to mind. The MEPs seemed to imply that this code is not effective (enough) to curb disinformation during election periods – especially around the use of recommender systems.

Other aspects of the 2022 CoPD are also relevant when discussing disinformation around elections (e.g., chapter III on ‘political advertising’ and the ‘rapid response system’). However, we will focus only on the 2022 CoPD’s measures regarding recommender systems, as this is the focal point of the MEPs’ letter.

So, what does the 2022 CoPD say about elections and recommender systems? Under measure 18.1, signatories commit to taking measures to mitigate risks around harmful disinformation, such as deploying algorithms that recommend “authoritative information” and that reduce disinformation in a clear and transparent manner. They will also publish the main parameters of their recommender systems (QRE 18.1.2) and provide “an estimation of the effectiveness of such measures” by reporting “meaningful metrics” that gauge the performance of, e.g., their recommender systems (SLI 18.1.1). In other words, the 2022 CoPD’s key approaches for recommender systems are i) upranking authoritative information; ii) downranking disinformation; and iii) transparency about the parameters.

In that sense, the election guidelines and the 2022 CoPD are quite comparable: both focus on the ‘empowerment of users’ and transparency. For instance, both instruments contain commitments/measures to prioritise ‘authoritative’ information and reduce the prominence of disinformation (categorised under ‘user empowerment’ in the 2022 CoPD). This is, to some extent, aligned with the two specific provisions on recommender systems in the DSA (Articles 27 and 38), which mainly work through transparency and the modification of recommender systems’ settings. Article 27 requires platforms to provide transparency about the main parameters used in their recommender systems (including options for modifying those parameters). Article 38 requires VLOPSEs to provide users with at least one option that is not based on profiling using personal data: another form of ‘user empowerment’.
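To make these obligations more tangible, here is a minimal sketch, in Python, of the difference between the interaction-based ranking the MEPs objected to and the kind of non-profiling alternative Article 38 requires as an option. It is purely hypothetical: every name and data structure below (Post, rank_by_engagement, build_feed, and so on) is invented for this post and does not correspond to any platform’s actual system.

```python
from dataclasses import dataclass
from datetime import datetime

# Purely hypothetical illustration: all names are invented for this post and
# do not reflect any platform's actual implementation.

@dataclass
class Post:
    text: str
    created_at: datetime
    predicted_engagement: float  # output of a personalised interaction model

# Article 27-style transparency: the "main parameters" of the recommender
# system, stated in plain language (e.g., in the terms and conditions).
MAIN_PARAMETERS = {
    "default feed": "ranked by predicted engagement (personalised)",
    "alternative feed": "reverse-chronological, not based on profiling",
}

def rank_by_engagement(posts: list[Post]) -> list[Post]:
    """Interaction-based ranking: the kind of system the MEPs want stopped."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def rank_chronologically(posts: list[Post]) -> list[Post]:
    """An ordering not based on profiling, the kind of option Article 38 requires."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def build_feed(posts: list[Post], profiling_opt_out: bool) -> list[Post]:
    # Under Article 38, the non-profiling feed only has to be available as an
    # option; under the MEPs' proposal it would become the default.
    if profiling_opt_out:
        return rank_chronologically(posts)
    return rank_by_engagement(posts)
```

Framed this way, the MEPs’ first proposal would in effect flip the default in build_feed, so that the non-profiling feed applies unless the user actively chooses otherwise; their second proposal would remove the interaction-based ranking altogether.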

It could be that a future update of the 2022 CoPD, or an additional code of practice, will contain stronger measures regarding recommender systems along the lines proposed by the MEPs. Given that recommender systems are a crucial part of platforms’ profit model, it may not be too realistic to expect this. Even if such measures were to end up in a semi-voluntary DSA code of conduct, this does not mean platforms are required to adhere to these specific commitments (although Carl Vander Maelen and Rachel Griffin point out that VLOPSEs “face strong incentives to comply” with codes under the DSA). Thus far, non-adherence to the 2022 CoPD has no direct consequences in terms of non-compliance with the DSA. So, even if new guidelines or codes of conduct aligned with the MEPs’ proposals were to appear, they would not be directly enforceable. This brings us to the third option proposed by Van Sparrentak & Tang: a crisis measure.

Option 3: a crisis measure 

The third option proposed is a “crisis measure”. Van Sparrentak explained her views on the crisis measure (in Dutch) in this radio broadcast by BNR Nieuwsradio. She explains that, in her opinion, the Commission can, and should, use the ‘crisis measure’ to stop the use of recommender systems that recommend extreme content (disinformation) affecting the elections. According to the MEP, the crisis measure should be applied instead of opening a formal investigation into Meta.

Indeed, the DSA contains two ‘crisis provisions’: the crisis response mechanism and crisis protocols, as commented on earlier in this blog post. It appears Van Sparrentak is calling on the Commission to use its powers under Article 36, the crisis response mechanism, although she doesn’t mention a specific provision. Pursuant to this provision, the Commission can, in short, upon a recommendation of the Board, require VLOPSEs to apply measures to prevent, eliminate or limit contributions to a crisis. The VLOPSE’s service must significantly contribute to a serious threat, i.e. a crisis (Article 36(1)(a)-(c) DSA). A crisis “shall be deemed to have occurred when extraordinary circumstances lead to a serious threat to public security or public health” in the EU or significant parts of it (Article 36(2) DSA). Recital 91 DSA names examples of potential crises, e.g., armed conflicts, acts of terrorism, natural disasters and pandemics. The same recital underscores that crisis response measures should be taken “within a very short time frame” and only if they are “strictly necessary”, “effective and proportionate”.

Van Sparrentak thus seems to regard disinformation affecting the elections as a crisis under the DSA, and to suggest that turning off personalised and interaction-based recommender systems would be welcome crisis measures. Declaring disinformation affecting the European elections a “crisis” and triggering the crisis response mechanism would, however, have raised important questions. First of all, how is election disinformation linked to public security and/or public health? After all, these are what the DSA names as mandatory aspects of a crisis. And if disinformation were to constitute a crisis, when does it start, and when does it stop? While the actual days of the election are clear (6-9 June 2024), it seems unlikely that the ‘crisis’ would only have taken place on these four days. Given the pre-emptive nature of the crisis definition in Article 36, this is important yet quite unclear. In that sense, what qualifies as disinformation (and what does not) is also a decisive issue. These are complex and difficult questions, yet they are essential for the crisis response mechanism, and the Commission’s powers under it, to be activated.

Even if disinformation affecting the European elections could be deemed to have been a crisis under the DSA, the choice of specific measures to undertake would have remained with the VLOPSEs themselves (Article 36(5) DSA). Nevertheless, as set out in the previous blog post, the Commission may well have had some influence when it comes to specific measures. The crisis response mechanism does not seem to have been deployed yet, so there are no precedents that could indicate the dynamics between platforms and the Commission in this regard. By extension, this also prevents us from assessing how realistic the MEPs’ proposal would have been: forcing VLOPSEs, as a crisis response measure, to stop deploying such recommender systems during the EU elections.

Given the vagueness, yet large impact, of the crisis response mechanism, declaring the spread of disinformation in the context of the EU elections a ‘crisis’ may not have been a favourable or desirable option. Perhaps it is (if necessary and proportionate) more of a last resort for a specific crisis situation. Additionally, the Board (comprised of DSCs) must recommend the application of the crisis response mechanism, which does not seem to have occurred yet (based on the information available here). Thus, we move on to the last option proposed: ‘new initiatives’.

Option 4: new initiatives 

The last option named in the letter is ‘new initiatives’. At first sight, it is not quite clear what is meant by these. Given that tackling disinformation seems to be one of the Commission’s top priorities, we may soon expect to see such ‘new initiatives’ and ‘creative’ ways to deploy the DSA to reach the goal of a disinformation-free digital space. Entirely new legislation, so soon after the DSA’s completion, appears less likely.

Concluding remarks

Meanwhile, the actual election days are behind us, and it appears that none of the MEPs’ proposals has materialised. This post has shown that the EU has already launched many other initiatives to counteract disinformation, and specifically to protect the EU elections. However, when it comes to the regulation of recommender systems, the focus in these instruments, as well as in the DSA itself, seems to be on the ‘empowerment of users’ and transparency. This may explain why MEPs continue to push for more far-reaching interventions to prohibit or suspend certain recommendation practices. However, as we have seen, at least in the short term the DSA’s current toolkit is either too ‘soft’ and voluntary in its design to meet the MEPs’ ambitions to affect core business design issues such as this (i.e. new guidelines and codes of practice), or limited to states of exception that don’t cover routine election concerns (i.e. the crisis response mechanism). Perhaps more concrete measures will be pushed for once the first audit reports on systemic risks are published. On that basis, the EU elections can indeed be viewed as an early “test of our systems” for evidence-based recommender policy.