The DSA and the risk-based approach to content regulation: Are we being pulled into more advanced automation?
Bengi Zeybek*
Disclaimer: Dear reader, please note that this commentary was published before the DSA was finalised and is therefore based on an outdated version of the DSA draft proposal. The DSA’s final text, which can be found here, differs in numerous ways, including a revised numbering for many of its articles.
Introduction
Given the size of platforms, the scale at which they operate, and the immense amount of user content created and shared online, automated tools are an integral part of platforms’ content moderation and curation processes. Platforms use automated tools to identify and analyse content in order to enforce their terms of service, to comply with their legal obligations and to arrange the visibility of content. At the same time, public policy concerns associated with algorithms, such as the risks of spreading mis- and disinformation and amplifying hateful views, are longstanding. Automated tools used to moderate hate speech have also been found to discriminate against minorities: a 2019 study revealed that such tools are 1.5 times more likely to flag tweets as offensive or hateful when written by African Americans. Facebook is currently under fire for its failure to adequately address the proliferation of vaccine misinformation. But efforts to deal with misinformation can go wrong too: for instance, Facebook’s algorithms mistakenly suspended the accounts of several environmental organisations shortly after the company announced its updated efforts to deal with misinformation over climate science. It is evident that platforms have immense power over public discourse and that harms occurring online can have real-life consequences, which can affect society as a whole and the core values of democracy.
Against this background, policymakers are looking to improve content moderation to deal with novel types of online harms while preserving fundamental rights. The European Commission’s Proposal for a Digital Services Act (“DSA”) embodies a unique regulatory approach to address online harms, as it adopts a risk-based approach to content regulation: notably, Articles 26 and 27 of the DSA require very large online platforms (“VLOPs”) to assess and mitigate the risks their systems pose (including vis-à-vis the protection of fundamental rights, public interests, public health and security), and to subject their assessments and measures to independent audit and regulatory oversight.
What does this risk-based approach to content regulation mean for the use of automated content analysis tools? This blog post discusses the role of automated tools in compliance with the risk assessment and mitigation obligations under Articles 26 and 27. It first briefly explains the obligations set out in Articles 26 and 27 and then considers the implications of the risk-based approach for automated content moderation tools.
Risk-based approach to content regulation under Articles 26 and 27 of the DSA
One important factor behind the DSA’s risk-based approach to content regulation is the VLOPs’ powerful position in facilitating public debate and shaping information flows, given their size and the number of recipients of their services. Aiming to integrate public interest considerations into the services provided by VLOPs, the risk-based approach prescribes ex ante duties of care for VLOPs to identify and mitigate risks occurring at scale.
Article 26 requires VLOPs to assess the systemic risks that stem from ‘the functioning and use made of their services in the Union’. Article 26 mentions three broad categories of systemic risks: (i) the dissemination of illegal content through their services; (ii) any negative effects on the exercise of fundamental rights; and (iii) intentional manipulation of their service. When assessing systemic risks, VLOPs must take into account how their content moderation and recommender systems influence these systemic risks, including the dissemination of illegal content and content that violates their Terms of Service. According to the examples of systemic risks given in Recital 56, these risks can relate to the ‘dissemination of illegal content where it is amplified to a wide audience’, the ‘design of algorithmic systems used by VLOPs (…) or other methods for silencing speech’, and ‘coordinated manipulation of the platform’s service with foreseeable impact on health, civic discourse, (…)’, including ‘through the creation of fake accounts.’ Article 26, read together with Recital 56, suggests that systemic risks can occur in relation to accounts, a certain type of content, behaviour or activity, as well as the design of algorithmic systems, or a combination of these.
In response to these systemic risks, Article 27 requires VLOPs to adopt ‘reasonable, proportionate and effective mitigation measures’, ‘tailored to the systemic risks identified’. These can include ‘adapting content moderation or recommender systems, their decision-making processes, the features or functioning of their services, or their terms and conditions’ (Article 27(1)(a)). Recital 68 further states that specific risk mitigation measures are also to be explored via self- and co-regulatory agreements, as provided for under Article 35.
Automation and the risk-based approach to content regulation
Although VLOPs can take a range of proactive measures to comply with these risk assessment and mitigation obligations, they are likely to consider automation a convenient option. This is mainly because systemic risks refer to issues that scale up due to the size of platforms. Moreover, VLOPs are already using machine learning, in combination with human intervention, to scale their content moderation practices and to address the negative consequences of algorithmic amplification (for example, Facebook uses automation and proactive case review at scale to detect and take action against ‘coordinated inauthentic behaviour’).
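To make this more concrete, the sketch below shows, in a few lines of Python, how metadata signals of the kind platforms describe (posting volume, account age, IP diversity, duplicated content) might be combined into a crude ‘coordinated inauthentic behaviour’ score. It is a minimal illustration only: the signal names, thresholds and weights are hypothetical and do not describe how Facebook or any other platform actually implements such detection.

```python
# A minimal, purely illustrative sketch of metadata-based detection of
# coordinated inauthentic behaviour. The signals, thresholds and weights are
# hypothetical and do not describe Facebook's or any other platform's system.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccountActivity:
    account_id: str
    created_at: datetime          # when the account was registered
    posts_last_24h: int           # posting volume in the last day
    distinct_ips_last_24h: int    # number of IP addresses the account posted from
    duplicate_post_ratio: float   # share of posts identical to other accounts' posts (0.0-1.0)

def inauthenticity_score(a: AccountActivity, now: datetime) -> float:
    """Combine crude metadata signals into a score between 0 and 1."""
    score = 0.0
    if (now - a.created_at) < timedelta(days=7):   # very new account
        score += 0.3
    if a.posts_last_24h > 100:                     # implausibly high posting volume
        score += 0.3
    if a.distinct_ips_last_24h > 10:               # possible automation or account sharing
        score += 0.2
    score += 0.2 * a.duplicate_post_ratio          # copy-paste amplification
    return min(score, 1.0)

# Example: a day-old account posting the same text 300 times from 15 IP addresses.
now = datetime.now(timezone.utc)
suspect = AccountActivity("acct_123", now - timedelta(days=1), 300, 15, 0.9)
print(inauthenticity_score(suspect, now))  # ~0.98: would be queued for human review
```

Even this crude example shows why metadata-driven detection scales so easily: it relies on signals about accounts and behaviour rather than on an understanding of what the content actually says, a point this post returns to below.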
Automated tools can come into the picture in numerous aspects of Articles 26 and 27. They can be contributing factors to the emergence of the systemic risks on which Article 26 focuses. They can also be used to detect and evaluate systemic risks, and be put forward as mitigatory measures under Article 27.
Efforts to comply with risk assessment and mitigation obligations will likely incentivise the use of more advanced automated technologies. To assess and mitigate systemic risks, VLOPs will have to analyse and evaluate, at scale, a broad range of content in different media (image, audio, text, video), metadata associated with the content (account information, IP address, volume/frequency of posting and other signals) and online behaviour. This could be the case, for example, where harmful content such as disinformation, or intentional manipulation of their services, is in question, in which case VLOPs often rely on metadata analysis. Or, to mitigate risks stemming from the dissemination of illegal content through their services (Article 26(1)(a)), platforms may be compelled to adopt systems to take down illegal content and its equivalents more quickly (e.g. upload filters). Ultimately, the implications and appropriateness of different automated tools in mitigating systemic risks, and their role in the emergence of risks, will vary depending on the type of technology, the targeted content and its context, the activities of the platform in question, and what can reasonably be expected from a specific platform.
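To make the second example concrete, the hypothetical sketch below shows the simplest possible form of such an ‘upload filter’: each upload is hashed and compared against a list of hashes of content already identified as illegal. It is not any platform’s actual system; real deployments typically use perceptual hashing or machine-learning classifiers to also catch ‘equivalents’ rather than only byte-identical copies, and the hash list here is invented.

```python
# A deliberately simplified "upload filter": block an upload if it matches a
# known-illegal hash. The hash list is invented; real systems typically rely on
# perceptual hashes or classifiers to also catch near-duplicates ("equivalents").
import hashlib

KNOWN_ILLEGAL_HASHES = {
    # hypothetical entries, in practice supplied by a shared hash database
    "9f2feb0f1ef425b292f2f94bdbc7e1c4b02e1c1a5d6e7f8091a2b3c4d5e6f708",
}

def filter_upload(upload: bytes) -> str:
    """Return 'block' if the upload matches known illegal content, else 'allow'."""
    digest = hashlib.sha256(upload).hexdigest()
    return "block" if digest in KNOWN_ILLEGAL_HASHES else "allow"

print(filter_upload(b"an ordinary user post"))  # -> allow
```

Because such a filter acts before publication and without any assessment of context, it already hints at the prior restraint and over-removal concerns discussed below.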
Although machine learning technologies are improving quickly, the incentive for more complex automation in content analysis brings into question its limits and its implications for the rights to freedom of expression, privacy, due process and non-discrimination, and other fundamental rights. It is already well documented (see, for example, here and here) that automation is not good at understanding context, and large-scale use of automated content analysis tools only amplifies these limitations and the associated risks, depending on the underlying technology and the context in which they are used. For example, a dataset on which a machine learning system is trained can perpetuate existing racial biases, and automated technologies may struggle with robustness in the face of ever-changing communication patterns and circumvention efforts. Furthermore, lack of transparency and accountability remains among the most significant issues with regard to wider use of automation. With these issues in mind, researchers warn that legislators should not pass laws that rely on the ability of automated analysis tools to perform moderation tasks at scale. In its opinion on the Commission’s proposal, the LIBE committee states that “automated tools for content moderation and content filters should not be mandatory” and that they “should only exceptionally be used by online platforms for ex-ante control (…)”.
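The context problem is easy to reproduce even in a toy setting. The sketch below is purely illustrative (the word list and example posts are invented placeholders): a naive keyword-based detector flags counter-speech and reclaimed in-group usage just as readily as a genuine attack, which is the kind of failure mode behind the disparities documented in the 2019 study cited above.

```python
# A toy keyword-based "hate speech" detector, used only to illustrate how
# context-blind matching produces false positives. Terms and posts are invented.
OFFENSIVE_TERMS = {"slur_a", "slur_b"}

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any listed term, regardless of context."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return bool(words & OFFENSIVE_TERMS)

attack         = "You are a slur_a."                                # genuinely abusive
counter_speech = "He called me a slur_a and that is unacceptable."  # reporting abuse
reclaimed      = "Proud to be a slur_a, say it loud."               # in-group reclamation

for post in (attack, counter_speech, reclaimed):
    print(naive_flag(post), "-", post)
# All three print True: the detector cannot distinguish an attack from
# counter-speech or reclaimed usage, so the last two are false positives.
```

Production systems are of course far more sophisticated than this, but the underlying difficulty of inferring intent, irony or in-group speech persists at scale, which is why the research cited above urges caution about relying on such tools in legislation.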
Many provisions of the DSA will raise transparency, accountability and explainability standards with regard to the use of automated tools in content moderation and curation. The transparency obligations under Articles 13, 23 and 33, and Article 31 on data access and scrutiny (for instance, Article 31(1), which allows vetted researchers to access data ‘for the sole purpose of conducting research that contributes to the identification and understanding of systemic risks as set out in Article 26(1)’), can contribute to a more responsible use of automated tools. But it is an open question whether these safeguards are sufficient to address some of the more structural issues of the risk-based approach to content regulation and the use of automation in that context.
The unnecessary and disproportionate impacts of Articles 26 and 27 on freedom of expression (see here for Joan Barata’s comprehensive analysis of the implications of these provisions for freedom of expression) are likely to be further exacerbated by the involvement of automated content analysis tools and the limits inherent in them. Some of the negative impacts of Articles 26 and 27 on freedom of expression stem from the vague wording of these provisions and the lack of specific freedom of expression safeguards. For example, as Barata states, Article 26 understands “illegal content” as a broad category and not as a specific piece of information. Proactive use of automated tools to identify and take action against such broad and highly contextual categories of illegal content will eventually amount to prior restraint.
At the same time, these obligations, backed by severe sanctions, are likely to encourage moderation practices other than mere removal or blocking, such as reducing visibility, shadow-banning and demonetisation at scale, practices that the Commission’s proposed text does not address. It is important to note that the amendments proposed in the Council extend the scope of Article 15 on statement of reasons and Articles 17 and 18 on redress mechanisms to decisions to maintain or restrict the visibility of specific content. Proposals extending the scope of procedural rights to moderation practices other than mere removal are positive developments. However, compliance measures under Articles 26 and 27 will likely be directed at the analysis of data at scale (for example, metadata analysis) rather than at specific items of information. In turn, the due process mechanisms that the DSA provides (e.g. Articles 15, 17 and 18) may effectively be sidelined, although this also depends on how ‘specific items of information’ is interpreted. Another open question with regard to automation and accountability under the risk-based approach to content regulation is how compliance efforts under Articles 26 and 27 would interact with the liability exemptions.
Article 27 does not specify any particular measures that can be taken to mitigate risks, implicitly leaving the choice of measures to the VLOPs. But the DSA gives oversight bodies a number of possibilities to influence the choice of mitigatory measures. The European Board for Digital Services’ reports, published in cooperation with the Commission, identifying the most prominent systemic risks and best practices for VLOPs to mitigate them (Article 27(2)), and the general guidelines on mitigatory measures to be drafted by the Commission and national regulators (Article 27(3)), will be particularly relevant for developing standards for the identification of risks and the measures to mitigate them. The Commission’s role under Article 35 on codes of conduct and Article 37 on crisis protocols, as well as the independent audits, will also be important in defining specific mitigatory measures, including automated ones, for compliance with Articles 26 and 27.
That being said, the DSA does not provide strict fundamental rights safeguards to ensure the necessity and proportionality of measures taken under the aforementioned provisions and to restrict unwarranted reliance on automated tools when the Commission and other oversight bodies are performing their regulatory functions, some of which are mentioned above. Article 26(1)(b) implies a human rights impact assessment obligation for VLOPs, but only with regard to four specific fundamental rights: there are no clear provisions on human rights due diligence obligations and algorithmic auditing. Objective criteria to guide these bodies and VLOPs in the determination of (the level of) risks and the design and implementation of ‘appropriate measures’, including automated ones, are also currently lacking in the DSA.
Furthermore, as also pointed out by civil society organisations, the Commission does not have the requisite independence for carrying out its enhanced supervisory and enforcement functions with regard to the VLOPs’ due diligence obligations under the DSA. Strict independent oversight is necessary to assess the compatibility of oversight bodies’ measures, and of those taken under Articles 26 and 27, with the Charter of Fundamental Rights of the EU. It is also crucial to ensure the independence, expertise and competence of the body carrying out audits for effective oversight of automated content analysis tools.
Note that some of the most prominent issues with regard to the risk-based approach to content regulation, including the use of automated tools in that context, are being addressed in the discussions in the Parliament and in the Council on the Commission’s proposal. Worth mentioning is the opinion of the LIBE Committee, which proposes to amend Articles 26 and 27 to require VLOPs to conduct impact assessments, in particular on fundamental rights, and to take “transparent, appropriate, proportionate and effective mitigation measures” to address the specific adverse impacts identified pursuant to Article 26, to the extent that “mitigation is possible without adversely impacting other fundamental rights”. Importantly, the LIBE Committee’s opinion makes explicit that the decision as to the choice of measures shall remain with the VLOPs, and it proposes a series of amendments to adjust the Commission’s regulatory powers. But it remains to be seen whether these proposals will make it into the final text of the DSA.
In sum, the risk-based approach to content regulation set out in Articles 26 and 27 of the Commission’s proposal for the DSA incentivises large-scale use of automation to analyse and take action against broadly defined categories of content, data related to user behaviour, and accounts, without much regard to the limits of different types of technologies and their fundamental rights implications. Efforts to comply with Articles 26 and 27 are likely to intensify the surveillance of digital communications across major platforms, undermine speech and other fundamental rights online, and increase accountability deficits in algorithmic content moderation and curation. Policymakers should be aware that automation has a central role in compliance with the DSA’s systemic risk assessment and mitigation obligations, and should consider the limitations of these technologies and their implications for freedom of expression and other fundamental rights. With that in mind, as the LIBE opinion proposes, the decision to use automated content analysis tools under the DSA should remain solely at the discretion of the VLOPs. At the same time, the DSA should provide strong procedural rights for users and ensure the independence, impartiality and expertise of supervisory and enforcement bodies as well as of auditors.
Bengi Zeybek holds a research master (LL.M.) degree in the field of Information Law from the University of Amsterdam. She is currently a research intern at the DSA Observatory.