Twitter’s retreat from the Code of Practice on Disinformation raises a crucial question: are DSA codes of conduct really voluntary?


Carl Vander Maelen (Ghent University, Faculty of Law and Criminology)

and

Rachel Griffin (Sciences Po Law School)

The chronicle of a retreat foretold has come to pass. Following months of rumours about Twitter’s willingness and capacity to comply with EU tech regulation after its new owner Elon Musk fired most of the company’s legal and policy staff, EU officials announced in late May that the company would withdraw from the Code of Practice on Disinformation. Senior figures including Commission Vice-President for Values and Transparency Věra Jourová, Internal Market Commissioner Thierry Breton, and French Digital Minister Jean-Noël Barrot responded with harsh criticism of Twitter’s decision – and thinly-veiled threats. Jourová stated: “I know the code is voluntary but make no mistake, by leaving the code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will be scrutinised vigorously and urgently.”

The public tussle may seem to be nothing more than the latest episode in the power struggle between Twitter and regulators since the Musk acquisition – one that has so far mostly played out through PR and posturing rather than concrete regulatory action. However, with the recently approved Digital Services Act (DSA) due to be fully applicable from early 2024, it raises a sensitive legal question that is also relevant for other major social media platforms in the evolving regulatory landscape. As Jourová’s quote suggests, complying with codes of conduct is meant to be a voluntary good practice by platform companies – yet refusing to comply can also attract legal consequences. This ambiguity raises a crucial question: are codes of conduct under the DSA really de jure voluntary if there seems to be a de facto obligation to follow them?

Codes of conduct in the DSA

“Systemic risks” play a key role in the DSA. Articles 34-35 require very large online platforms and search engines (VLOPs/VLOSEs) to regularly assess and mitigate risks in areas including illegal content, fundamental rights, civic discourse, electoral processes, public security, gender-based violence, public health, and people’s physical or mental wellbeing, especially that of minors. These provisions are expected to be crucial in addressing large-scale, systemic issues in the social media ecosystem that go beyond individual content moderation decisions and user choices, such as algorithmic discrimination, design practices, and potential negative effects of engagement-optimised recommendation systems.

However, given the extreme breadth and vagueness of this list of risks, many questions remain about how Articles 34-35 will be interpreted, which issues will be prioritised, and how far they will really strengthen accountability. To counter this uncertainty, the DSA provides the option to establish codes of conduct, which should “contribute to the proper application of this Regulation”, taking into account “in particular the specific challenges of tackling different types of illegal content and systemic risks”.

As we show in a recent working paper, codes are likely to serve essential functions in operationalising the DSA’s vague and open-ended provisions on systemic risks – notably by providing concrete detail about which risks companies should address and how, establishing standards and success metrics against which their risk mitigation measures can be evaluated, and specifying how potential DSA violations should be addressed. Codes could thus significantly shape the DSA’s implementation in practice, and ultimately its effectiveness.

As noted above, the concept of systemic risks (like that of illegal content) is already extremely broad, and “in particular” suggests the indicated topics are non-exhaustive. Thus, Article 45(1) in principle provides a mandate for codes in a vast range of policy areas – although Recital 106 names “disinformation and manipulation” and the safety of minors as particular areas of concern, and Articles 46 and 47 also mandate the development of codes on advertising transparency and accessibility for people with disabilities.

The aforementioned Code of Practice on Disinformation will also become an official DSA code, and given predominant framings of disinformation as a major political and security threat, it is clear this will continue to be a priority for the Commission (as the harsh criticism of Twitter’s withdrawal suggests). The Code takes notable steps beyond the DSA’s focus on content moderation and user choice, establishing commitments in diverse areas including “safe design practices”, transparency, and cooperation with civil society.

Major platforms also signed up to a voluntary code on hate speech in 2016. Although its provisions are largely restricted to the reporting and moderation of illegal hate speech, and have now mostly been superseded by the DSA, the Commission’s 2022 evaluation raises the possibility of establishing a revised and expanded code on hate speech under the DSA. Finally, a draft code on research data access has been proposed by the European Digital Media Observatory (EDMO), a research consortium which is independent but has an influential voice in EU platform regulation. The draft is based on the GDPR, but some future iteration will likely play an important role in establishing the practical details and privacy protections needed to facilitate GDPR-compliant independent research using platform data under Article 40 DSA.

Voluntary or mandatory?

Codes are traditionally soft law instruments, containing broad goals that corporations voluntarily endorse. Superficially, the DSA continues this approach, explicitly reiterating in several provisions that codes are voluntary (see e.g. Recitals 98 and 103 and Article 45(1)). Yet several provisions also suggest that failing to participate or comply could attract legal liability.

Notably, a VLOP or VLOSE’s “refusal without proper explanations […] of the Commission’s invitation to participate” in a code could be taken into account when determining if an infringement of the DSA took place (Recital 104), as could an audit report finding non-compliance with applicable codes (Article 75(4)). Conversely, participating in and complying with relevant codes is suggested as an appropriate way for VLOPs/VLOSEs to discharge their risk mitigation obligations under Article 35(1)(h), and to rectify any potential infringements of the DSA under Article 75(3). Thus, VLOPs/VLOSEs face strong incentives to comply with codes to minimise liability risks, and if they refuse to participate in response to an “invitation” from the Commission they risk infringement proceedings and fines.

Furthermore, the Commission and the Board take leading roles in code development. The self-regulatory codes on hate speech and disinformation were effectively demanded by policymakers, who threatened harder regulation and offered detailed guidance on the types of commitments the codes should contain. Under Article 45(2) DSA, regulators can identify specific systemic risks that codes should address, invite VLOPs/VLOSEs to sign up – in what may seem more like an order than an invitation – and select other stakeholders to participate in the negotiations; they also ultimately determine whether code commitments are sufficient to comply with Article 35.

Twitter’s withdrawal from the Code of Practice on Disinformation illustrates exactly this tension. The vagueness and flexibility of platforms’ Article 35 obligations give regulators significant discretion in investigating and identifying potential infringements, and Jourová’s statement suggests that the Commission won’t hesitate to use the threat of infringement proceedings to demand compliance with codes of conduct. This raises serious questions about whether codes can still truly be said to be voluntary instruments.

The importance of DSA codes

As regards Twitter under its erratic new owner, this discussion may seem overly academic: in the words of Twitter legend @dril, “he fired the people in charge of telling him its illegal”. There has been speculation that Twitter will simply leave the EU market rather than bother to comply with the DSA. However, the legal status and development of codes of conduct have significant implications for how other, arguably more influential VLOPs/VLOSEs – such as TikTok and the various Meta and Google platforms – will handle systemic risks.

We argue in our paper that DSA codes can offer significant benefits. First, they could promote effective risk mitigation measures in areas inadequately addressed by the DSA’s provisions on content moderation and curation. These notably include hate speech, harassment and abuse; algorithmic discrimination; political polarisation; and journalism and media pluralism. Codes can also establish commitments in areas such as the design of platform interfaces and recommendation systems, which academic research has identified as important factors driving issues like misinformation and harassment – as illustrated by the Code of Practice on Disinformation’s interesting provisions on how companies should research and test “safe design practices”. Finally, codes can help develop stronger accountability through concrete commitments, success metrics and reporting standards – all of which facilitate oversight by regulators, auditors and external stakeholders and allow for comparisons between platforms and over time.

Importantly, the DSA frames codes as a means to demonstrate compliance with the hard law instrument in which they are nested – directly linking these “voluntary” instruments to mandatory and punitive hard law provisions. As succinctly captured by Fahy et al., this intertwinement of hard and soft provisions “questions the extent to which a platform could abandon the commitments it has voluntarily made”. This continues an ongoing trend towards the “legalisation” of codes of conduct in EU tech regulation: soft law instruments create stronger de facto obligations, become increasingly precise and prescriptive, and delegate more authority – making them increasingly akin to hard law.

Thus, while the earlier self-regulatory codes on hate speech and disinformation were criticised for being toothless and ineffective, DSA codes could establish stronger accountability mechanisms, with real consequences for non-compliance. Given how selectively and ineffectively major platforms have dealt with issues like hate speech and harassment, coordinated disinformation operations, and algorithmic discrimination so far, and signs that leading companies are currently cutting back even further on “trust and safety” measures, backing up commitments to tackle these issues with hard legal obligations would in many ways be positive.

Yet the legal, financial and punitive pressure on corporations to comply with formally voluntary instruments could also raise questions about whether the EU is overstepping its powers. The use of informal, industry-led codes to address highly political questions around communications and media regulation could also allow governments and companies to evade public scrutiny and accountability mechanisms. For example, the threat of DSA infringement proceedings against companies that don’t take sufficient action on disinformation could be used to pressure them into “voluntarily” suppressing legal but politically controversial content.

Diverse, inclusive and transparent stakeholder participation – including from stakeholder groups that have traditionally been underrepresented in the tech industry and in EU policymaking, and might need financial and logistical support to participate – will be crucial in ensuring that code development reflects a wide range of expertise and perspectives, and is not dominated by the interests and priorities of major platform companies. Ongoing work on EDMO’s Code on Platform-to-Researcher Data Access and the Commission’s call for participation in an Age-Appropriate Design Code show that civil society organisations have ideas and are keen to contribute to DSA implementation. The Commission, Board and national regulators need to strike the right balance between facilitating inclusive and accountable stakeholder participation in developing codes, and offering substantive guidance on their contents, to ensure codes of conduct can meaningfully contribute to accountability in platform governance.