Empirical research in Content Moderation and Access to Justice: do the remedies fit the harms?

By Max Kosian (student-researcher at the Amsterdam Law School), Anna van Duin and Naomi Appelman

Introduction

This blog presents an exploratory study conducted in the Netherlands on people's experiences with online harms and on their needs and expectations when it comes to tackling harmful content. It also highlights the main findings, which show a high incidence rate (1 in 5 people are directly and personally confronted with harmful content online) and suggest that the available remedies do not necessarily fit the harms.

Navigating the legal landscape is complicated for people who want to take steps against harmful content, even for those with legal training. There is a plethora of options (both judicial and out-of-court), including injunctions, notice-and-action procedures and hotlines, as mapped out in a previous study. With the date of application of the DSA approaching (somewhere between January and March 2024), a new option will soon be added to the mix: the Digital Services Coordinator (DSC), with whom individuals will also be able to lodge complaints (DSA, recital 118). This gives rise to the question of whether people make use of these options; if so, what they aim to achieve; and if not, what holds them back.

Despite the clear relevance of these questions for access to justice and content moderation, there is a distinct lack of empirical research into the types of harmful content people encounter and into their reasons for deciding whether or not to take (legal) action. From the perspective of access to justice, the most critical issues are what obstacles people face, why these obstacles often seem too difficult to overcome, and how people perceive the many remedies that are available to contest harmful online content.

The survey

The survey is part of a larger interdisciplinary project on access to justice and content moderation, combining a legal perspective – information law, tort law and civil procedure – with communication science. The survey was conducted in the Netherlands and had 2,500 respondents, 520 of whom indicated they had been directly and personally confronted with harmful behaviour online. They were subsequently asked about the kind of behaviour they had experienced, what action they took in response and what obstacles they faced during this process. PanelInzicht conducted the survey in October and November 2021.

There are two main takeaways regarding the DSA. Firstly, the DSA takes a blanket approach toward harmful online content, which does not match our findings; these reveal a diverse set of relevant factors, including age, gender and migration background. The DSA only mentions the broad category of ‘illegal content’ and does not specify the term further, nor does it differentiate the possible remedies (Article 3(h) DSA). Secondly, the added value of yet another option might be limited, as the existing options are severely underused. Of course, filing a complaint with the DSC is an out-of-court measure, which in theory should make it less cumbersome than going to court, but little is known about how the Member States will implement it.

The survey shows that most people find online harms genuinely concerning. However, they often fail to act because of a lack of knowledge about their options and a lack of faith in the outcome. Respondents knew about the option to lodge a complaint with a website or platform but considered this a pointless exercise, and only 10% knew they could also lodge a complaint against the website with the authorities. This is especially relevant from the DSA’s perspective, as the law adds several new avenues for complaint, including the DSC. It remains to be seen, however, whether these options will be used.

The most important findings are as follows:

  • 1 in 5 people have experienced harmful behaviour online

The total number of respondents was 2,500, of whom 520 (20.8%) responded that they had been directly and personally affected by harmful online content at least once (see the full report, p. 6). Age emerged as a relevant factor: young people are particularly affected. In addition, older people are more likely to face scams than younger people (see p. 7 of the report for the division into age categories).

  • Scams and insults are the most common harmful behaviours online

Based on the taxonomy developed by Banko et al., the survey categorized harmful online content into three broad families – hate and harassment, self-inflicted harm, and exploitation – with eleven further sub-divisions within those categories. The survey showed that 71.5% of respondents had been the victim of a scam and 48.7% had suffered insults. Unsurprisingly, women are more likely to face sexual aggression online than men, and people with a non-Western immigration background are more likely to experience identity attacks.

  • Harmful behaviour online occurs mostly on social media

Respondents were asked on which medium they faced online harms the most. The possible answers were: social media like Facebook, Twitter or Instagram; apps like Snapchat or TikTok; video services like YouTube, Twitch or Dumpert (a Dutch video site); messaging services like WhatsApp or Facebook Messenger; dating apps; internet forums; search engines; a news website or blog; an online marketplace for the sale of second-hand items; or other. For all types of hate and harassment, social media was the most frequently mentioned medium, often dwarfing the other options (report, p. 8).

  • People generally think harmful behaviour online is bad

Only 1.5% of respondents thought harmful online content was not bad at all, whereas the largest group (31.5%) thought it was very bad (report, p. 10). There was no statistically significant difference between the categories of harmful content.

  • A small majority of people who have experienced harmful behaviour online have taken steps in response

Although people generally find harmful behaviour online very concerning, only a small majority acts: 57.7% of respondents had taken steps after being victimized by harmful online content (report, p. 11). People do say they have become more cautious, and they are more likely to take steps if they perceive the behaviour as more serious. Statistical analysis showed a significant but weak link between the perceived severity of the issue and whether people acted.

  • The most common steps taken are complaining to the website or filing a report with the police

However, respondents state that, in their eyes, these steps lead nowhere. Very few people consult a legal advisor or lawyer, or start legal proceedings against the platform or website.

  • People mostly want to punish, expose and/or stop the offender

In particular, people take steps because they want to punish or expose the perpetrator; they also want to prevent (further) harm to others. Interestingly, the prospect of receiving financial compensation was the least popular option, scoring only 6.7% (report, p. 13).

  • Money and effort are the main reasons people do not take steps

Once again, age was a factor in the decisions people made. Young adults and adults are more easily discouraged when taking steps requires considerable effort, is stressful or complicated, or when the outcome is uncertain (report, p. 15). They are also more easily deterred because they have little trust in a fair outcome. Costs, on the other hand, are more of a deterrent for older people, and the same goes for those with lower incomes. Migration background is also a factor: people with a Western immigration background felt more deterred than those with a Dutch background because they did not want to burden the people around them or to face the perpetrator.

  • People do not know whether anything will be done if they report harmful online behaviour

Younger people are held back more by uncertainty about a positive outcome than older people and seniors. In addition, young adults in particular often do not know whether the behaviour is actually illegal.

  • The majority is not familiar with the possibility of asking for help from a specific organisation or taking action against a website

Fewer than 10% are familiar with the possibility of asking for help from specific organisations or of taking action against websites.

Link to the published report

You can read the full report (in Dutch) here. The survey was funded through the Amsterdam Center for European Studies.

 

Dr. J.M.L. (Anna) van Duin is Assistant Professor of Private Law and Digital Justice at the Amsterdam Law School, specialising in access to justice and effective remedies in the context of EU law, (online) dispute resolution and digitization of the civil justice system. She is a member of the Digital Transformation of Decision-Making (DTDM) research initiative at the Amsterdam Law School and was one of the senior investigators of the above-mentioned study for the Dutch government.

N.M.I.D. (Naomi) Appelman is a PhD researcher at the Institute for Information Law (IViR) as part of the DTDM research initiative. She has an academic background in both political philosophy and information law, and her PhD research focuses on the democratic need for contestable online speech governance and its relation to online speech regulation. She was one of the main authors of the above-mentioned study.

Dr. B. (Brahim) Zarouali is Assistant Professor in Persuasive Communication at the Amsterdam School of Communication Research and a member of the Information, Communication, and the Data Society research initiative. He has extensive experience in conducting quantitative research within the field of communication science (e.g., surveys and experiments).

P. (Puck) van den Bosch studies communication science at the Amsterdam School of Communication Research and is a student-researcher in the project ‘Content Moderation & Access to Justice’ which is part of the Digital Transformation of Decision-Making (DTDM) research initiative at the Amsterdam Law School.