Research Report on Disparate Content Moderation

Naomi Appelman

(Institute for Information Law, IViR – University of Amsterdam)

We know that content moderation harms, such as unjustified removals, shadowbans, or blocks, are not distributed equally. Time and again, research and civil society reports show how social marginalisation is reproduced online (see for example: here, here, and here). Unfortunately, platform regulation coming out of the EU, most notably the DSA, does not yet seem to take this into account sufficiently.

Now that focus is shifting to the implementation and enforcement of the DSA, it is crucial to understand how research, advocacy, and policy can be grounded in the intersectionality, diversity, and unequal distribution of content moderation harms.

To this end, this report maps how ten civil society organisations that work for the interests of, or with, marginalised groups relate to this policy debate on disparate content moderation. Specifically, the report discusses experiences with (1) content moderation harms, (2) access to justice, and (3) the policy debate.

My hope is that this report can offer a modest contribution to centring perspectives that are not always the focus of the EU digital rights debate and that the DSA largely overlooks. In doing so, this project aims to help chart a course of action for research and advocacy that appreciates the intersectionality and unequal distribution of content moderation harms and fosters possible solidarities.

Research Set-up

Through a series of ten expert interviews, this report maps several civil society perspectives on disparate content moderation. The guiding questions were how organisations working for the interests of, or with, marginalised groups: [1] conceptualise the causes and effects of disparate content moderation (such as downranking, shadowbanning, blocking, or the (refusal of) removal of content) as well as the available access to justice, and [2] relate to and are involved in EU platform policy, specifically the Digital Services Act.

Brief overview of the results

The main results of these interviews on online harms, access to justice, and the EU platform policy debate with regard to large, industrial-scale social media platforms can be summarized in six main points:

  1. Disparate content moderation is seen as a result of wider systems of social oppression that are reproduced both on the platform and by the platform itself through its content moderation. Within its corporate logic, stigmatized content could be conceived of as a risk and suppressed to minimize that risk, without (sufficient) consideration for the harm that this causes.
  2. There is large heterogeneity in both experiences and impacts, which seem to be intersectional in the sense that they depend on the wider context of someone’s life as well as the broader societal oppressions someone is subject to. Content moderation can cause a broad range of harms beyond the best-known ones such as unjustified deplatforming, removal, or shadowbanning. The norms themselves, their vagueness or seemingly arbitrary application, data harms, as well as the specific affordances of a platform are all instrumental to content moderation harms.
  3. Especially groups that already have an adversarial relationship with the law, such as sex workers and abortion rights activists, seem to experience unjustified deplatforming and content removal the most. This affects their relationship with the platform, leading them to consider it more of an active political actor than an unreachable corporation.
  4. The strategies people develop to deal with these harms relate to several factors: (1) the type of harm experienced, caused mainly either by platform action or by harassment, (2) the impact this has, for example whether or not it is professional, and (3) the wider relationship with the law.
  5. In line with existing research, major hurdles to access to justice are the lack of clarity on how content moderation works and what the norms are, a lack of response from platforms to notifications, a lack of explanation as to why content moderation actions were taken, and, finally, procedural routes that are unclear and inaccessible. Willingness to engage with, or trust in, formal procedures, both legal and with the platforms, must be understood within the context of possible wider criminalization and/or legal stigmatization.
  6. Crucial is the support of an organisation in finding the right procedural route, as well as broader support in navigating and dealing with the platform and the harm. Such supporting organisations are also important in leading possible collective actions to unburden the people experiencing harm. Moreover, success in challenging and remedying content moderation harms often depends on having contacts within, or access to, the platform. This contact depends on the voluntary cooperation of the platforms, and, besides the real threat of arbitrariness, support organisations may also feel that this dependence could limit them in their advocacy.

Some Key Conclusions

Overall, the report comes to several key conclusions and recommendations for researchers, policymakers, and advocacy organisations:

  • The differences in experience and impact of content moderation are not explicitly recognized in the DSA but could be included in the codes of conduct or risk assessment requirements.
  • Intersectionality of content moderation harms as well as societal context must be considered by policymakers and academics, as well as by NGOs campaigning for change. This means avoiding one-dimensional policy and working towards broad coalitions.
  • A solely legalistic approach is insufficient to support communities who are apprehensive, with good reason, about formal procedures.
  • Further research is needed on what type of support organisations and funding structures would fit specific contexts best, and how to ensure the position of these victim support organisations vis-à-vis the platforms is not precarious.
  • Platforms should ensure that the content of otherwise criminalized groups that is legal and does not violate platform norms, so-called “grey zone content”, does not get caught up in content moderation actions.

Download the report here: Report Disparate Content Moderation.pdf.

Suggested citation: Naomi Appelman, “Disparate Content Moderation: Mapping Social Justice Organisations’ Perspectives on Unequal Content Moderation Harms and the EU Platform Policy Debate” (2023) Institute for Information Law, University of Amsterdam, available at: