Article 12 DSA: Will platforms be required to apply EU fundamental rights in content moderation decisions?


By Naomi Appelman, João Pedro Quintais, and Ronan Fahy, Institute for Information Law (IViR)


As the European Court of Human Rights has emphasised, online platforms such as Facebook, Twitter and YouTube provide an “unprecedented” means for exercising freedom of expression online. Unfortunately, however, the systems operated by platforms, in which (automated) decisions are taken to moderate content on the basis of a platform’s terms of service, have been widely denounced as “fundamentally broken”. These content moderation systems can in fact “undermine freedom of expression”, especially where important public-interest speech ends up being suppressed, such as speech by minority and marginalised groups, black activist groups, environmental activist groups and other activists. Indeed, the UN Special Rapporteur on freedom of expression has criticised these content moderation systems for their overly vague rules of operation, inconsistent enforcement and overdependence on automation, which can lead to over-blocking and pre-publication censorship. To better protect free expression online, the UN Special Rapporteur and free speech organisations have argued that platforms “should incorporate directly” principles of fundamental rights law into their terms of service.


While platforms currently have no obligation to incorporate fundamental rights into their terms of service, an important provision in the EU’s proposed Digital Services Act (DSA) may go some way in this direction. Notably, art. 12 DSA lays down new rules on how platforms can enforce their terms of service, including that platforms must have “due regard” to the “fundamental rights” of users under the EU Charter of Fundamental Rights. In this post, we examine the regime of art. 12 DSA and ask whether its current wording requires providers of intermediary services, among them online platforms, to apply EU fundamental rights law, including the right to freedom of expression, in content moderation decisions based on a platform’s terms and conditions.


Article 12 DSA proposal: the systematic context

To understand the possible implications of art. 12 DSA, it must first be placed in the context of the proposed legislation. A systematic approach shows how the provision is intended to function within the DSA as a whole, and how it tries to bring content moderation decisions directly within the scope of EU fundamental rights. The DSA proposal is divided into five chapters: (I) general provisions, (II) liability of providers of intermediary services, (III) due diligence obligations for a transparent and safe online environment, (IV) implementation, cooperation, sanctions and enforcement, and (V) final provisions. Chapters II and III follow different regulatory approaches that should be highlighted to better understand art. 12, which is included in Chapter III.

Chapter II sets out the regime for the liability of providers of intermediary services. It consists of a revised version of the liability exemptions – for services of “mere conduit”, “caching” and hosting – and the general monitoring ban in arts. 12 to 15 of the e-Commerce Directive, with some noteworthy additions in the form of a “good Samaritan” clause (art. 6) and rules on orders or injunctions (arts. 8 and 9). Chapter III (arts. 10 to 37) deals with due diligence obligations that are independent of the liability assessment made under the previous chapter. Chapter III is a novelty in relation to the e-Commerce Directive. It distinguishes between increasingly specific categories of providers, setting out asymmetric obligations that apply in a tiered way to all providers of intermediary services (arts. 10 to 13), hosting providers (arts. 14 and 15), online platforms (arts. 16 to 24), and very large online platforms or “VLOPs” (arts. 25 to 33). Basically, hosting providers are a type of provider of intermediary services, online platforms a type of hosting provider, and VLOPs a type of online platform. The due diligence obligations are cumulative: as one moves along this spectrum, the number of applicable obligations increases, so that providers of intermediary services are subject to the fewest obligations and VLOPs to the most. All providers are subject to art. 12.

Art. 12 is entitled “Terms and conditions”, a term defined in art. 2(q) DSA as “all terms and conditions or specifications, irrespective of their name or form, which govern the contractual relationship between the provider of intermediary services and the recipients of the services.” The aim of art. 12 is to increase the transparency of the content of these terms and conditions, and to bring their enforcement into direct relation with fundamental rights.

A crucial feature of art. 12 is that, unlike Chapter II, it does not apply only to illegal content. It also covers harmful or undesirable content as defined in the terms and conditions of a provider of intermediary services. In doing so, and because it applies to all providers of intermediary services, it extends the obligations of Chapter III beyond illegal content. The result is that, under the DSA, many more content moderation decisions will be subject to regulation than under the e-Commerce Directive. As we will see, how these terms and conditions relate to fundamental rights remains unclear.

Art. 12’s aims of transparency and enforcement are dealt with in two distinct paragraphs. Whereas paragraph (1) includes information obligations, paragraph (2) deals with application and enforcement and, arguably, tries to bring providers’ terms and conditions within the scope of EU fundamental rights. We discuss each paragraph in turn.


Article 12(1) DSA: Information Obligation

Article 12(1) DSA regulates the content of the terms and conditions of providers of intermediary services. It essentially imposes an information obligation regarding certain content moderation practices:

“Providers of intermediary services shall include information on any restrictions that they impose in relation to the use of their service in respect of information provided by the recipients of the service, in their terms and conditions. That information shall include information on any policies, procedures, measures and tools used for the purpose of content moderation, including algorithmic decision-making and human review. It shall be set out in clear and unambiguous language and shall be publicly available in an easily accessible format”.

The provision aims to ensure that the terms and conditions are transparent and clear as to how, when and on what basis user-generated or uploaded content (“information provided by the recipients of the service”) can be restricted by a provider. The object of the provision thus appears to be content moderation in the form of “any restrictions”. Nevertheless, it is unclear to what extent content moderation actions by the service provider that do not stricto sensu restrict what content users can post, such as ranking, recommending or demonetising content, fall within the scope of art. 12. In other words, the obligation appears to cover content moderation measures by providers at least to the extent that these measures may impose a restriction on the activities of users on the service.

This view is consistent with the second sentence of paragraph (1), which explicitly refers to “content moderation”, a concept defined in art. 2(p) DSA as covering activities undertaken by providers to detect, identify and address user-generated content that is either (i) “illegal content” (art. 2(g)) or (ii) incompatible with their terms and conditions (thus extending to content that is legal but harmful or otherwise undesirable from the perspective of the provider).

The provision mainly seems to force clarity on what the content moderation policies of a given service provider are and how they are executed. Importantly, it explicitly mentions “algorithmic decision-making”, raising the important question of what providing information on “any policies, procedures, measures and tools” on this point might look like (on this topic see for example here, here, and here). However, the exact scope of the paragraph is unclear, as the phrase “any restrictions” in the first sentence appears wider than the definition of content moderation in art. 2(p) DSA, thereby broadening the provision.

In its last sentence, art. 12(1) provides for how this information should be conveyed in the terms and conditions. Echoing arts. 7(2), 12(1) and 14(2) GDPR, the information provided should be “clear”. However, where the GDPR refers to “clear and plain” language, art. 12(1) DSA goes one step further by requiring that the information be “unambiguous”. In our view, this is a qualitatively different requirement that translates into a higher-threshold obligation for providers of intermediary services. Finally, it should be noted that art. 29(1) sets out a somewhat similar (although less detailed) information obligation for VLOPs regarding recommender systems (for a discussion see here).


Article 12(2) DSA: Applying fundamental rights in content moderation?

However useful the information obligations in art. 12(1) DSA might be, the exciting part of art. 12 from a fundamental rights perspective is found in paragraph (2), which regulates the application and enforcement of terms and conditions:

“Providers of intermediary services shall act in a diligent, objective and proportionate manner in applying and enforcing the restrictions referred to in paragraph 1, with due regard to the rights and legitimate interests of all parties involved, including the applicable fundamental rights of the recipients of the service as enshrined in the Charter”.

Due to the explicit reference to paragraph (1), this second paragraph has the same scope as the first, meaning that it applies only to the enforcement of terms and conditions that restrict user-generated content on the service. The core obligation created by this paragraph requires providers of intermediary services to weigh the “rights and legitimate interests of all parties involved” in a “diligent, objective and proportionate” way when applying their terms and conditions.

The extent of this obligation is unclear. In particular, it is uncertain whether it extends the application of fundamental rights to the horizontal relationship between providers and users. That is to say, the provision obligates intermediaries to have due regard to the “applicable” fundamental rights without clarifying what fundamental rights are already applicable in the horizontal relationship between intermediary and user. The extent to which users can directly or even indirectly appeal to their fundamental rights vis-à-vis an intermediary in its content moderation decisions is a controversial issue.

The current phrasing of art. 12 leaves the matter unresolved, as it can be read in two different ways. First, the provision can be understood as referring only to fundamental rights which are already applicable in the horizontal relationship between intermediaries and users. On this interpretation, the provision leaves undetermined the extent to which fundamental rights are applicable and merely obligates intermediaries to have “due regard” where they are. A second, broader interpretation is that art. 12 aims to declare fundamental rights directly applicable in the horizontal relationship between intermediaries and users. On this reading, the obligation would certainly include the right to freedom of expression in art. 11 (e.g., for users posting content) and the right to non-discrimination in art. 21 (e.g., for users targeted by content) of the Charter – as interpreted by the Court of Justice of the European Union – and, potentially, via art. 52(3) Charter, the extensive case law of the European Court of Human Rights. Such an obligation would be remarkable, not just because it is aimed at private actors, but also because it presumably applies with equal intensity to all providers of intermediary services. However, the DSA itself offers little to no guidance as to how this obligation should be operationalised in practice. Without such guidance, the obligation might remain too vague to have real effect.

For example, even if what is meant by “restrictions” is properly defined, the scope of “diligent, objective and proportionate” behaviour remains fuzzy. Still, promoting “diligent behaviour by providers of intermediary services” seems to be a core aim of the DSA, as stated in Recital 3. The requirement of diligence pops up at various other places in the DSA – in arts. 14, 17, 19 and 20 DSA – primarily in the context of complaint handling by hosting service providers. Similarly, the cloudy obligation of enforcing the terms and conditions with “due regard” for fundamental rights gives no concrete insight into the extent to which these rights should be taken into account in individual decision-making processes by service providers, including those supported by algorithms. It is thus unclear whether the recipients of the service (the users) can rely on art. 12 before a court as a means to effectively protect their fundamental rights against a service provider. Concretely: can an individual user appeal directly to fundamental rights in a complaint procedure under art. 17(3) DSA on the basis of the obligation in art. 12(2) for providers to have due regard for fundamental rights in the application of their terms and conditions?

Further, it is unclear how broadly “all parties involved” should be construed. The phrase explicitly includes the users affected by the restriction being applied and enforced. For online platforms, it will presumably also include trusted flaggers and other notifiers covered by arts. 19 and 20. Beyond that, it is difficult to identify other relevant parties (the service provider itself?) at this stage. Finally, it should be noted that the obligation in art. 12(2) anchors related transparency obligations, in particular certain transparency reporting obligations applicable to all providers of intermediary services under art. 13(1)(b).


Conclusion

At first glance, art. 12 DSA looks like a substantial expansion of the responsibilities of providers of intermediary services. On further analysis, however, much remains unclear. To be sure, art. 12(1) imposes an important information obligation regarding restrictions imposed on users of intermediary services, which extends to algorithmic decision-making. Likewise, art. 12(2) introduces an apparently broad obligation for providers to act in a diligent, objective and proportionate manner when applying and enforcing such restrictions, explicitly linked to respect for fundamental rights. In addition, the provision expands the scope of these obligations beyond illegal content, applying also to content that providers of intermediary services consider harmful or undesirable in their terms and conditions. These horizontal obligations for all providers of intermediary services are welcome additions to EU law. Crucially, however, art. 12(2) is vague on what the obligation entails and on the extent to which service providers are required to apply fundamental rights in their content moderation practices. As a result, if the legislative text remains unchanged, the application and enforcement dimension of art. 12 will likely only be effective if and when courts are called upon to interpret it. Until then, the risk is that the obligation remains a paper tiger. To avoid this, the EU legislator should clarify whether the express purpose of art. 12 is to oblige providers to apply fundamental rights law in content moderation decisions. Indeed, platforms may already be going some way in this direction: in its decisions, Facebook’s Oversight Board applies freedom of expression principles under the International Covenant on Civil and Political Rights. Additionally, the DSA could offer more concrete links to art. 12 throughout the legislative text, so as to substantiate the meaning and effect of the provision.