Digital Services Act and the Protection of Fundamental Freedoms – Recommendations for the trilogue process

Dr. Joan Barata

Disclaimer: Dear reader, please note that this commentary was published before the DSA was finalised and is therefore based on an outdated version of the DSA draft proposal. The DSA’s final text, which can be found here, differs in numerous ways, including a revised numbering for many of its articles.

1. Introduction

The process of adoption of the DSA has now entered the phase of the so-called trilogue negotiations, which aim at reaching an agreement between the two most relevant legislative bodies: the Council and the Parliament.

This phase started after the adoption by the Parliament, on 22 January 2022, of a series of amendments to the text proposed by the Commission in December 2020. These amendments contain relevant provisions reinforcing the protection of users’ rights, incorporating additional safeguards to guarantee better decision-making processes by platforms, and limiting excessive and unaccountable intervention powers of public authorities. It is also important to underscore that the Parliament rejected some very problematic draft amendments, including the proposal to exempt “traditional media” from content moderation.

This post aims to analyze some of the newly proposed amendments, particularly in light of their impact on the right to freedom of expression and other fundamental rights of users. It also builds on the conclusions already presented, with regard to some aspects of the Commission’s proposal, in a report published in 2021.

2. Specific comments

A new amendment to Recital 9 (Amendment 8) calls on the Commission to “provide guidelines as to how to interpret the interaction and complementary nature between different Union legal acts and this Regulation and how to prevent any duplication of requirements on providers or potential conflicts in the interpretation of similar requirements”.

It is obvious that the DSA might generate significant interactions and very serious interpretation problems, particularly vis-à-vis European (and member States’) legislation in areas related to platform regulation (copyright, audiovisual services, terrorist content online, etc.). The elaboration of “guidelines” to navigate such problems may give rise to additional tensions and disagreements with and among the different competent authorities, and many of these issues will in any case need to be addressed by bodies other than the Commission once a particular case arises. It would be advisable for the competent legislative bodies, before the adoption of the DSA, to thoroughly identify and address these problematic interactions.

A good example of possible confusion regarding the nature of the services provided and the applicable legal regimes can be found in the wording of Recital 13 (Amendment 13), which states that “providers of hosting services should not be considered as online platforms where the dissemination to the public is merely a minor or a purely ancillary feature of another service or functionality of the principal service (…)”. The example of the comments section in an online newspaper is also mentioned, “where it is clear that it is ancillary to the main service represented by the publication of news under the editorial responsibility of the publisher”. From a general point of view, this approach seems commendable, as it may avoid subjecting certain providers to excessive obligations. This being said, considering the legal debates around the landmark decision of the European Court of Human Rights in the Delfi v Estonia case, as well as the Court’s subsequent decisions in similar cases, an oversimplified application of this principle may erode some basic pillars of EU legislation (intermediary liability exemptions, for example) in a way that would also affect the fundamental right to freedom of expression.

Recital 12 has been the object of several small amendments (Amendment 12) intended to introduce more legal certainty around the notion of “illegal content”. Illegal content as a broad category may cover very diverse typologies, including manifestly illegal and criminally penalized content (child pornography), content that is illegal under other sources of national legislation (for example, advertising for certain products), content that would only be firmly considered illegal upon a judicial decision requested by an interested party (defamatory content), or content that depicts or represents illegal activities. In order to avoid the removal of protected speech (for example, the video of a debate on abortion in a jurisdiction where this practice is illegal), it would still be necessary to introduce additional safeguards in this definition, in particular by referring to the international and European human rights standards that define the legitimate limits to the fundamental right to freedom of expression.

Amendments to Recital 14 (Amendment 14) reinforce the conceptual separation between public and private dissemination of content. Information exchanged using interpersonal communication services, such as emails or private messaging services, is not considered to have been disseminated to the public.

Establishing a division between the public and the private spheres has become particularly challenging. It could also be questioned whether a clear-cut distinction based on a public/private dichotomy makes sense in our current digital environment. While the reference to emails and private messaging services probably makes sense in terms of excluding the existence of communication to the public in such cases, drawing a line between automatic registration and admission by human decision looks far more arbitrary. It is therefore advisable to stick to general criteria to be interpreted and applied on a case-by-case basis, including availability to an unlimited number of people, easy accessibility, the intention of the “publisher”, and the nature of the information being disseminated.

Recital 22 (Amendment 19) is connected to article 5 and the liability exemption for hosting service providers. In order to benefit from such an exemption, providers must act expeditiously after having become aware of the illegal nature of the content. The amendments included by the Parliament also establish that the removal and disabling of content must be undertaken “in the observance of a high level of consumer protection and of the Charter of Fundamental Rights, including the principle of freedom of expression and the right to receive and impart information and ideas without interference by public authority”. It is understandable that legislators wish to introduce as many references as possible to the need not to interfere with the right to freedom of expression. However, in this case the proposed language seems to impose on the service provider the responsibility to assess the illegality of the content and, in particular, to weigh the possible measures to be adopted against European freedom of expression standards. This apparent safeguard creates a tension between the need to act expeditiously in order to benefit from liability exemptions, on the one hand, and the need to avoid possible liability deriving from a wrong assessment of the applicable human rights standards in a particular case, on the other. If the legislator’s intention is to prevent removal and disabling measures from affecting legal and protected speech, it would be preferable to introduce clearer provisions establishing that such measures strictly affect only the information deemed to be illegal, together with an obligation to preserve any other piece of content that is not an intrinsic part of it.

Recital 25 is connected to article 6 (Amendments 137 and 138), which establishes that intermediaries “shall not be deemed ineligible for the exemptions from liability referred to in Articles 3, 4, and 5 solely because they carry out voluntary own-initiative investigations or take measures aimed at detecting, identifying and removing, or disabling of access to, illegal content”. Such measures must be “effective and specific” and be accompanied “by appropriate safeguards, such as human oversight, documentation, or any additional measure to ensure and demonstrate that those investigations and measures are accurate, nondiscriminatory, proportionate, transparent and do not lead to over-removal of content”. Providers must also make “best efforts to ensure that where automated tools are used for content moderation, the technology is sufficiently reliable to limit to the maximum extent possible the rate of errors where information is wrongly considered as illegal content”.

These amendments do not solve the issues, already present in the Commission’s text, regarding the interpretation of the word “solely” in this context. This adverb seems to limit liability exemptions to cases where platforms have not undertaken any activity, beyond the mentioned own investigations, that would indicate specific knowledge of a concrete piece of content. In other words, the DSA does not cover other possible actions or measures that may lead the competent authority to establish the existence of actual knowledge or awareness (for example, the receipt of an order to provide information under article 9, or an improperly substantiated notice or report by a third party). More importantly, the new amendments clearly increase the number of requirements that must be met for “voluntary own-initiative investigations” not to eliminate liability exemptions. Such requirements are both ambiguous and complex and will, in practice, have the effect of deterring platforms from undertaking these investigations.

The new Recital 33a) establishes that the DSA “should not prevent the relevant national judicial or administrative authorities on the basis of the applicable Union or national law, in conformity with Union law, to issue an order to restore content, where such content has been in compliance with the terms and conditions of the intermediary service provider but has been erroneously considered as illegal by the service provider and has been removed”. This Recital not only acknowledges the power of administrative or judicial authorities to order the reinstatement of a piece of content that has been erroneously considered illegal, but also empowers them to assess its compliance with the platform’s internal rules. Granting judicial and administrative authorities the additional power to judge compliance with ToS seems to go far beyond the institutional and, in some cases, constitutional nature of such entities, particularly since these matters are already addressed by the DSA as part of the appeal and out-of-court redress mechanisms.

Recital 39a), connected to article 13a) (Amendment 202), aims at protecting users from so-called “dark patterns” and imposes on platforms a series of obligations regarding the prevention of such practices. The definition included in the mentioned article is extremely broad and open to interpretation. Additional parameters included in Recital 39a) make these interpretation challenges even more cumbersome, particularly due to references to notions such as exploiting “cognitive biases”, choices that are not “in the recipients’ interest”, or “presenting choices in a non-neutral manner”.

Recital 42 (Amendment 36) expands the right of users to be informed in a clear and user-friendly manner of decisions regarding removal, disabling access to, demoting, or imposing other measures, “including through the use of automated means, that have been proven to be efficient, proportionate and accurate”. According to Recital 42, this obligation does not apply where “the content is deceptive or part of high-volume of commercial content” or where the competent authority has made a request to that effect. However, article 15 refers to “deceptive high volume commercial content”. The language of the Recital is more accurate and appropriate in terms of the possible exceptions to this obligation, and consistency between the two texts must therefore be guaranteed.

In addition to this, establishing the mentioned obligations to inform users in cases of demotion or measures other than removal or disabling of access may not only generate a cumbersome task for platforms that might require redirecting efforts from other content moderation areas, but could also help malicious actors better understand how to circumvent such measures.

The same concerns can be expressed vis-à-vis the extension of internal complaint-handling mechanisms to the cases mentioned above, including decisions “limiting” the provision of the service (Amendments 222 and 223).

Article 8, paragraph 2, point b) (Amendment 150) refers to the possible extraterritorial effects of the orders contemplated in the mentioned article. It sets out that the “territorial scope of the order shall be limited to the territory of the Member State issuing the order unless the illegality of the content derives directly from Union law or the rights at stake require a wider territorial scope, in accordance with Union and international law”. This broad reference to possible extraterritorial effects of orders to act against illegal content is still problematic, as it does not necessarily prevent conflicts between member States regarding the interpretation of Union law (or even international law). If the DSA aims to facilitate more effectively the interruption of the dissemination of manifestly and seriously illegal content beyond EU borders, such circumstances must be better defined, with clear references to fundamental international human rights principles, particularly legality, necessity and proportionality, as well as to other international legal standards that would sufficiently justify the adoption of such measures (for example, the United Nations Convention on the Rights of the Child).

Amendments 513 and 539, affecting Article 12, establish, with regard to ToS and internal content moderation rules, that they must be “fair, non-discriminatory and transparent” and shall be drafted “in clear, plain, user friendly and unambiguous language and (…) respect the freedom of expression, freedom and pluralism of the media, and other fundamental rights and freedoms, as enshrined in the Charter as well as the rules applicable to the media in the Union”. The broad reference to freedom of expression and pluralism of the media may in practice trigger very complex debates as to whether a particular rule or principle established in the community standards of a platform is completely in line with such provisions. In addition, platforms currently adhere to ethical commitments formulated by the Commission regarding content moderation (for example, the Code of Practice on Disinformation) which precisely push intermediaries to adopt restrictions and conditions that go beyond the applicable legal standards. Moreover, the reference to “rules applicable to the media in the Union” raises even more relevant problems of interpretation, due to the fact that: a) these rules are not uniform among member States (despite the general principles established by EU legislation, including the Audiovisual Media Services Directive); and b) these rules are only applicable to providers of editorial content, and not to intermediaries or providers of information society services.

Moreover, a new paragraph 2f) establishes that “(t)erms that do not comply with this Article shall not be binding on recipients”. This safeguard looks very problematic. Who is supposed to carry out this assessment? Is it a general assessment to be undertaken once the ToS are adopted, or only a case-by-case assessment when a conflict arises? This general “threat” that ToS may not actually be applicable under certain conditions constitutes a very important deterrent for platforms, which might feel pushed to reduce their standards to the minimum in order to avoid the establishment of problematic or doubtful obligations.

Article 18 (Amendments 230 to 244) establishes the possibility to use out-of-court redress mechanisms with regard to decisions taken by platforms on the basis of article 17. The DSA would need to incorporate adequate provisions to guarantee that this mechanism is not used in parallel with any other redress system (either public or private), that Digital Services Coordinators and/or the Commission have mechanisms to avoid and/or deal with contradictory or conflicting decisions taken by different bodies within a single member State or in different member States, and that the certification of such bodies is based on clear and precise criteria of expertise vis-à-vis the different areas of illegal content, with a special focus on a human rights-based approach.

With regard to the issue of trusted flaggers, a new paragraph 1a) in article 19 (Amendment 245) determines that platforms “must ensure that trusted flaggers can issue correction notices of incorrect removal, restriction or disabling access to content, or of suspensions or terminations of accounts, and that those notices to restore information are processed and decided upon with priority and without delay”. Trusted flaggers include a very wide variety of individuals and organizations (potentially including government bodies or public agencies as well), with diverging areas of expertise and specialization. Granting this general power to all types of flaggers may generate serious issues of coherence and quality of the decisions in question, thus creating high levels of uncertainty and differences of treatment among users and platforms.

The assessment and mitigation of systemic risks under article 26 are the object of Amendments 294 to 302:

  1. New language in paragraph 1 imposes on platforms the obligation to conduct a risk assessment once a year and “in any event before launching new services”. The notion of “new services” is ambiguous and may entail a duty to undertake the mentioned assessment every time a new functionality is offered to consumers.
  2. Systemic risks do not only include the dissemination of illegal content but also content in breach of terms and conditions (paragraph 1, point a)). The latter category is much broader than the former and may embrace different types of infringements and harms, beyond strict and clear illegality (although under the scrutiny of regulatory bodies).
  3. Paragraph 1, point b) referred, in the Commission’s proposal, to “any negative effects” on the exercise of fundamental rights. The additional reference in the current text to any “actual or foreseeable” negative effects makes this requirement even more problematic.
  4. The reference in paragraph 1, point c) to negative effects on the rights to “media freedom and freedom of expression” deriving from any malfunctioning or intentional manipulation of the service increases the difficulty for private entities like online platforms to make a proper assessment of such complex matters from a legal and human rights perspective. It is likely that platforms will end up demoting or restricting otherwise legal “borderline content” that could be connected to the mentioned harms.
  5. New point c.a), referring to “any actual and foreseeable negative effects on the protection of public health as well as behavioural addictions or other serious negative consequences to the person’s physical, mental, social and financial well-being”, lacks minimal legal certainty and imposes on platforms the duty (under regulatory scrutiny) to articulate measures and possible restrictions on the basis of extremely vague formulations (“other serious negative consequences”, “social well-being”, etc.).

A new article 30a (Amendment 339) imposes on platforms the obligation to “label” and inform users about the inauthenticity of pieces of content consisting of generated or manipulated images, audio or video that appreciably resembles existing persons, objects, places or other entities or events, and falsely appears to a person to be authentic or truthful (also known as “deep fakes”). This definition risks oversimplifying the very complex nature and consequences of deep fakes, and may lead to the demotion or restriction of fully legitimate pieces of content (of a humoristic or political nature, for example). In addition to this, the obligation for platforms to thoroughly scrutinize their services in order to properly detect content of this nature appears to come quite close to a general monitoring obligation.

Amendment 381 to article 39 paragraph 1 establishes that member States must ensure that Digital Services Coordinators perform their tasks under the DSA “in an impartial, transparent and timely manner”. In order to guarantee that these bodies properly perform their relevant competences, it would also be important to clearly establish a requirement of independence, in line with regulatory bodies in other sectors (including audiovisual media services), a clear requirement of technical knowledge and capacity in the areas of their competence, as well as a clear requirement of sufficient resources and proper, dedicated teams in cases where the designated authority is already in charge of other functions.

According to Amendment 417 to article 48 paragraph 1a, the European Board for Digital Services, as the independent advisory group of Digital Services Coordinators, “shall be chaired by the Commission”. In the European model, the establishment of restrictions on the right to freedom of expression by non-legislative bodies is tied to the presence of an independent body not subject to direct political scrutiny or guidance. The very important role that a non-independent body like the European Commission may play vis-à-vis the articulation and implementation of measures with a clear impact on speech contradicts this model. Even though the Board plays a relatively limited role in this area, the fact that it is chaired by a body with significant powers such as the Commission may clearly have a negative impact on the necessary independence of national regulatory bodies.

3. Conclusion

The DSA is a remarkable and unique proposal in the field of online platform regulation. Some of the amendments introduced by the Parliament clearly reinforce important and necessary duties of transparency, accountability and redress and, more broadly, the protection of users’ rights when accessing these platforms to express their ideas and opinions and to receive services and goods provided by third parties. However, some relevant areas still need attention during the drafting process (and particularly the current trilogue negotiations) to guarantee the balance between human rights, business model diversity, market efficiency and adequate regulation.

 

Joan Barata is a Fellow at the Cyber Policy Center of Stanford University. He works on freedom of expression, media regulation and intermediary liability issues. He teaches at various universities in different parts of the world and has published a large number of articles and books on these subjects, both in the academic and the popular press. His work has taken him to most regions of the world, and he is regularly involved in projects with international organizations such as UNESCO, the Council of Europe, the Organization of American States and the Organization for Security and Cooperation in Europe, where he was the principal advisor to the Representative on Freedom of the Media. Joan Barata also has experience as a regulator, having held the position of Secretary General of the Audiovisual Council of Catalonia in Spain and having been a member of the Permanent Secretariat of the Mediterranean Network of Regulatory Authorities. He has been a member of the Plataforma de Defensa de la Libertad de Expresión (PDLI) in Spain since 2017.