Researching content moderation through platform transparency rules: the DSA as a research tool to address pro-Palestinian content censorship

by Valerie Bourjeily,

Advanced LL.M. candidate in Technology Governance, University of Amsterdam

Law School Academic Excellence Track (AcET) Research Intern, ’24.

28 October 2024

 

The DSA aims to help researchers understand platform content moderation. This blog post discusses its relevance for researching the systemic censorship of pro-Palestinian content, and content moderation practices more generally.

 


 

Introduction

Amid the current crisis in the Middle East – Israel’s military operation in Gaza following the October 7, 2023 attack by Hamas – concerns about the censorship of Palestinian activism have increased. Much of this attention has focused on social media platforms’ content moderation policies, which are crucial for public communication about the conflict. Platforms moderate this speech on legal and policy grounds, but face growing criticism that their actions are inaccurate, biased, and unaccountable, particularly where pro-Palestinian content is concerned.

Non-governmental organisation (NGO) Human Rights Watch (HRW) has identified the over-moderation of pro-Palestinian content online, which it deems “systemic censorship.” HRW conducted this research between October and November 2023 by publishing a call for evidence of online censorship in English, Arabic, and Hebrew on the platforms Instagram, X (formerly Twitter), and TikTok. They received 1,285 cases and reviewed 1,050. The vast majority, specifically 1,049 cases, involved censorship of pro-Palestinian content, with the single remaining case relating to censorship of pro-Israeli content. Most cases involved censorship by the Meta platforms Facebook and Instagram. HRW notes that Meta’s censorship of pro-Palestinian content is not new, as Meta has been repeatedly criticised for its censorship practices since 2021.

During its research, HRW collaborated with other NGOs, including 7amleh, the Arab Center for Social Media Advancement, which has long been engaged in research and monitoring on Palestinian digital rights and censorship. In 2021, 7amleh created the Palestinian open-source digital rights observatory ‘7or,’ which, similarly to HRW, encourages individuals to upload censorship cases. Based on this data, 7amleh releases numerous reports, which have corroborated censorship by Meta and X by studying over 5,100 reported censorship cases between October 2023 and September 2024; 7amleh also identified Meta as the worst offender in terms of censorship earlier this year.

In the Netherlands, public broadcaster NOS investigated the censorship of pro-Palestinian content, finding widespread concerns that pro-Palestinian expressions were being made less visible. The research involved a survey and a test, with a specific focus on shadow-banning. The survey involved interviewing 60 Dutch Instagram users in March and April 2024, asking among other things whether they had experienced censorship, which content was likely to be least visible, and whether they took measures to prevent shadow-banning. Out of 23 respondents, 3 said they did not notice any censorship, whilst 20 stated they experienced significant decreases in the visibility of their pro-Palestinian content. NOS also found that the 20 content creators who experienced censorship did not breach Instagram’s guidelines. Additionally, NOS mentions that even major NGOs such as Oxfam Novib and Amnesty International report facing reduced visibility when posting about Gaza.

These cases show the difficulties of researching platform moderation policies and, more significantly, their real impacts on communication. Whilst trends of over-moderation may be found, looking “under the hood” and verifying those trends is challenging, even for NGOs. However, law may serve as an empirical resource to learn more about content moderation practices. The European Union’s (EU) Digital Services Act (DSA) could provide new opportunities for research through the introduction of various transparency rights relating to platform content moderation. This mirrors the earlier development of exercising the transparency rights of the EU’s General Data Protection Regulation (GDPR) as a research method.

 

Transparency Rights in the DSA

The DSA introduces transparency rights and safeguards regarding content moderation decisions, enabling individuals to obtain further information from platforms on why their content has been moderated, on what grounds, and on possible avenues for appeal. This should, in principle, provide an opportunity to look at the underlying grounds and better establish why over-moderation of certain topics, such as pro-Palestinian content, might be occurring.

Article 14 DSA requires platforms to set out their content moderation rules in “clear and unambiguous language” in their Terms and Conditions, which should facilitate understanding amongst platform users of the potential grounds for moderation – though it is debatable how many regular users read Terms and Conditions. However, this could still be valuable, particularly when combined with the Article 17 DSA obligation for platforms to include a “Statement of Reasons” with moderation decisions on content that is illegal or in violation of the platform’s terms and conditions.

Theoretically, this should empower users against obscure platform moderation decisions, both by providing them with the information needed to contest those decisions and by providing a legal tool to use against platforms when such transparency rights are not complied with. For example, the Amsterdam District Court, in a case brought by legal academic and privacy activist Danny Mekić, recently issued the first ruling on the noncompliance of X, formerly Twitter, with its Article 17 DSA obligation to provide a Statement of Reasons.

Ideally, gathering data from Article 17 DSA Statements of Reasons could allow the identification and categorisation of the various grounds used for content moderation – and of which grounds are most prone to result in over-moderation (a simple sketch of such an analysis follows below). Some platforms have already implemented greater transparency measures, perhaps in part due to the DSA, such as Instagram’s “account status” page, which describes any active moderation measures on an account. Whilst this provides a key research opportunity, there remain various challenges in using the DSA’s transparency rights as a legal tool. A major obstacle is that the noncompliance in Mekić’s case is reflected in many other accounts of platforms failing to comply with their Article 17 DSA obligations.
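To make the idea above concrete, the following minimal sketch (in Python) tallies the grounds and policies cited across a small set of Statements of Reasons collected from users. The record structure and field names are hypothetical and purely illustrative; statements obtained in practice vary considerably in format and detail.

```python
from collections import Counter

# Hypothetical examples of Statements of Reasons collected from users; the
# fields below are illustrative, not an official schema.
statements = [
    {"platform": "Platform A", "ground": "terms_and_conditions", "policy_cited": "Dangerous organisations"},
    {"platform": "Platform A", "ground": "terms_and_conditions", "policy_cited": "Graphic violence"},
    {"platform": "Platform B", "ground": "illegal_content", "policy_cited": "Referral under national law"},
]

# Tally how often each ground is invoked, per platform.
ground_counts = Counter((s["platform"], s["ground"]) for s in statements)
# Tally which specific policies are cited overall.
policy_counts = Counter(s["policy_cited"] for s in statements)

for (platform, ground), n in ground_counts.most_common():
    print(f"{platform}: {ground} ({n})")
for policy, n in policy_counts.most_common():
    print(f"{policy}: {n}")
```

Naturally, any such analysis presumes that statements are actually issued and contain meaningful grounds – which, as noted above, is frequently not the case.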

Unless platforms comply proactively, as Article 17 DSA requires, a burden is placed on individual users to exercise such transparency rights. This is significant, considering that individuals often struggle to be aware of and to exercise their legal rights. Researchers and activists can try to guide individuals in pursuing such information for individual case studies, but this comes with a slew of other problems, including privacy risks, particularly when researching sensitive topics involving marginalised communities. Finding participants, particularly members of marginalised communities and people posting on heated topics such as Israel and Gaza, also poses a significant problem. The research discussed at the start of this post does valuable work here, but it is costly and struggles to obtain representative samples. Additionally, platform noncompliance with these obligations, whether through a missing or insufficient Statement of Reasons, would require pushback against the platform from the user. Many users, particularly content creators, would be hesitant to pursue such action against platforms even with researcher backing. After all, those who contest platforms’ moderation all the way to court, like Max Schrems and Danny Mekić, are no laymen. Consequently, researchers collaborating with users to exercise DSA transparency rights may provide a better look at the workings of content moderation in practice, but it is not a catch-all solution to researching content moderation.

The DSA seems to provide a way out through the DSA Transparency Database, which includes the Statements of Reasons submitted by platforms under their Article 24(5) DSA obligations, stripped of any personal data. However, whilst this provides the public with access to a large collection of such statements, it may not suit all research purposes. For example, searching the keyword ‘Palestine’ with filters for the platforms Instagram, Facebook, TikTok, and X yields no Statements of Reasons. This is despite Instagram and Facebook, as part of Meta, being identified by HRW as central actors in over-moderating pro-Palestinian content. This calls into question the actual value provided by such a database, if the only information it offers is grounds for moderation completely detached from their context.
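For researchers who want to go beyond the database’s web interface, the Transparency Database also publishes bulk data downloads. The rough sketch below shows how a comparable keyword-and-platform filter might be run over a downloaded dump, assuming it has been extracted to a single CSV file; the file name and column names (platform_name, decision_ground, and the free-text explanation fields) are assumptions for illustration and may not match the actual schema.

```python
import pandas as pd

PLATFORMS = {"Instagram", "Facebook", "TikTok", "X"}
KEYWORD = "palestine"

# Assumed local file name for an extracted daily dump from the database.
df = pd.read_csv("sor-daily-dump.csv", dtype=str).fillna("")

# Keep only statements from the platforms of interest.
subset = df[df["platform_name"].isin(PLATFORMS)]

# Search the free-text fields for the keyword, case-insensitively.
text_fields = ["decision_ground", "illegal_content_explanation", "incompatible_content_explanation"]
mask = subset[text_fields].apply(
    lambda col: col.str.lower().str.contains(KEYWORD, regex=False)
).any(axis=1)

matches = subset[mask]
print(f"Statements mentioning '{KEYWORD}': {len(matches)}")
print(matches["platform_name"].value_counts())
```

Even so, this approach inherits the same limitation: the statements record the grounds for moderation, not the context in which the moderated content appeared.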

Other challenges arise from platforms’ proactive policing of content. For example, many platforms display warnings about potential moderation before content has even been posted, as on Instagram (pictured below). This can affect people’s perceived freedom to speak on certain topics. Measures such as pre-emptive moderation warnings are often ephemeral, disappearing once the pop-up is closed, and finding documentation from the various platforms on which warnings may appear is incredibly difficult. There does not seem to be consistent documentation from any platform on which of these measures are in play, making research into their use challenging.

Transparency Rights in the GDPR

The case brought by Danny Mekić also concerned the noncompliance of X with its transparency obligations arising from the right of access in Article 15 GDPR. Whilst it is unclear how much information related to content moderation decisions will fall under the right of access, data subject access requests under Article 15 GDPR may provide a new tool to assess platforms’ compliance with their Article 17 DSA obligations. Users who suspect they have been moderated – particularly in cases that are harder to prove without further data, such as shadow-banning and other demotions – may turn to data subject access requests under Article 15 GDPR.

However, even this limited (potential) use of the right of access comes with numerous obstacles. Six years after the GDPR became applicable, many platforms have automated processes for obtaining data download packages disclosing the personal data held by the platform, implementing the Article 15 GDPR right of access. However, if this information is incomplete, requesting further information from platforms may take extended periods of time, reducing feasibility. Such requests must usually be answered within one month, but this may be extended by a further two months where necessary due to the complexity or number of requests. Firms may also be uncooperative – research reflecting on the use of the GDPR’s transparency rights as a research tool highlights online service providers’ “deliberate attempts at curtailing the scope and meaning of access rights.” It remains to be seen whether requests targeting content moderation will fare any differently. And if GDPR rights do prove at all effective, they will still face problems similar to those under the DSA in terms of recruiting volunteers, managing privacy risks, and obtaining relevant samples.

Beyond Individual Transparency Rights: Article 40 DSA on Access to Data for Researchers

Whilst individual transparency rights in the DSA and GDPR provide the opportunity to use law to glean empirical insights, numerous challenges remain. However, the DSA also creates opportunities for research that go beyond individual transparency rights, offering a more explicit route to empirical research through law than previously afforded by legislation like the GDPR. Under Article 40 DSA, vetted researchers can request data from Very Large Online Platforms and Search Engines.

The conditions in Article 40(8) DSA must be fulfilled before requests may be granted; these concern the researchers’ status and affiliations, the research’s funding, personal data protection capabilities, the proportionality of data access, and the provision of the final research to the public free of charge. The research in question must also address the systemic risks listed in Article 34 DSA and the effectiveness of risk mitigation measures. Systemic risks cover a broad variety of topics, including the dissemination of illegal content, impacts on fundamental rights, civic discourse and electoral processes, gender-based violence, public health, and minors. Consequently, whilst eligible research is limited to work contributing to the understanding of systemic risks and their mitigation measures, the systemic risks in question cast a wide net.

Of note, however, is that data requests must go through the relevant national Digital Services Coordinator (DSC), the regulatory body responsible for supervising DSA compliance in its respective Member State. Researchers unable to meet all the requirements of Article 40(8) DSA may still access data under Article 40(12) DSA. Access under this provision is limited to publicly available information. This makes it less relevant for content moderation, which tends to take place in a non-public manner.

Article 40 DSA can thus further law’s capacity to provide empirical insights, by allowing both vetted researchers and a broader category of researchers access to data where they fulfil the relevant requirements. Article 40 DSA requests may be able to sidestep the limitations of both the reliance on users and the Statement of Reasons database, by removing the user as a ‘middleman’ and requesting moderation data in the context of a specific topic or case study. However, the practical impacts of Article 40 DSA are still unknown, particularly since the delegated act – which is yet to be proposed, let alone finalised – and national DSCs will both play a crucial role in its actual implementation. Additionally, as with the GDPR, platforms are likely to resist access requests. On controversial and sensitive topics such as this one, resistance from platforms may be especially vigorous.

Does It Matter?

It will take further time to determine whether the DSA delivers on its promises and addresses the core crises facing Europe’s democracies arising from social media platforms. With that in mind, it is difficult to be optimistic about the impact of the research opportunities afforded by the DSA – which remain vague so far – on those currently affected by over-moderation, particularly of pro-Palestinian content.

The DSA may still provide an opportunity to influence social media content moderation practices through increased access and involvement of researchers. This could facilitate public-interest applications such as the use of social media to document human rights abuses, especially in conflict zones. Whether this can happen fast enough to make a concrete difference for those documenting human rights abuses on the ground and those embroiled in conflicts across the globe now, however, remains in question.

 

Acknowledgements: The author extends many thanks to Paddy Leerssen for guiding the development of this blogpost. Research for this post was carried out as part of an internship for Amsterdam Law School’s Academic Excellence Track (AcET).