The Digital Services Act & the implications for the safety of journalists (Part 2) 

By Doris Buijs, student researcher at DSA Observatory.

Introduction and background  

As documented by human rights and media freedom organisations, journalists and media professionals today face many forms of harassment, violence and intimidation. These persistent attacks against journalists, against media organisations (such as the continuous threats and attacks against Charlie Hebdo), and against freedom of expression and information more broadly, increasingly take place online and particularly affect women journalists and journalists from marginalised and/or minority groups. Journalists who receive such threats are often targeted through intersecting forms of discrimination, such as racism, religious bigotry, misogyny or homo-/transphobia. For example, the Nobel prize-winning journalist Maria Ressa was at one point “receiving 90 hate messages an hour on Facebook alone”. According to a survey carried out by the International Women’s Media Foundation and TrollBusters, “nearly 2 out of 3 respondents said they’d been threatened or harassed online at least once”.

Much of the harassment and violence against journalists takes place on online platforms, and the recently enacted Digital Services Act (DSA) aims to regulate those platforms and create a “safe” environment in which fundamental rights, including media freedom, are “effectively protected”. The purpose of this second blog post is therefore to explore how the DSA could protect the safety of journalists and other media professionals. A preliminary conclusion is that the DSA could offer protection and have a serious, positive impact, but that most of that protection will be provided by actors other than journalists themselves. The most impactful provisions could be those on trusted flaggers (Article 22 DSA), orders by national judicial or administrative authorities for online platforms to act against illegal content (Article 9 DSA), and the notice and action mechanisms (Article 16 DSA). While more in the background, the so-called “Good Samaritan” provision (Article 7 DSA), platforms’ obligation to inform law enforcement authorities about suspicions of a criminal offence (Article 18 DSA), and the risk governance framework (Articles 34, 35 and 37 DSA) could also play their part in detecting and combating systemic risks to freedom of expression and media freedom. We will have to wait and see whether the DSA actually helps to increase and safeguard journalists’ (online) safety.

Structure 

This post uses the same terminology for “journalism” and “the media” as Part 1: “journalism” refers to the UN Human Rights Committee’s definition in General Comment No. 34, and “media” refers to the much-debated notion of “media” adopted by the Committee of Ministers of the Council of Europe in Recommendation CM/Rec(2011)7. The post starts by presenting the DSA provisions most likely to increase journalists’ (online) safety. It then discusses provisions that could have a positive but less direct impact on the safety of journalists, as their impact will depend on the outcome of the risk assessments and on Very Large Online Platforms’ (VLOPs’) willingness to voluntarily carry out investigations.

Prominent provisions – Articles 9, 16 and 22 

Article 9 – Orders to act against illegal content

National judicial or administrative authorities can order online platforms to act against illegal content (Article 9(1)). As such, Article 9 gives Member States, via their judicial and administrative authorities, a new mechanism to directly help improve journalists’ safety by acting against illegal content that harasses or threatens journalists.

Article 3(h) defines illegal content as “any information that … is not in compliance with Union law or the law of any Member State which is in compliance with Union law”. The notion of illegality thus refers back to national law, which differs per Member State. Although this is understandable given the division of EU competences, it does not help harmonise the regulation of journalists’ online safety across the EU. According to Article 4(2)(j) of the Treaty on the Functioning of the European Union (TFEU), the EU shares competence with the Member States in the area of freedom, security and justice. Article 83 TFEU allows the European Parliament and the Council to establish minimum rules concerning the definition of criminal offences for particularly serious crimes with a cross-border dimension. An example of the use of this competence is the proposed Directive on combating violence against women and domestic violence, put forward by the European Commission in March this year, through which the EU is trying to strengthen the harmonisation of what counts as illegal content (online). The proposed Directive provides a definition of cyber violence: “any act of violence covered by this Directive that is committed, assisted or aggravated in part or fully by the use of information and communication technologies” (Article 4(d)). Such technologies are “all technological tools and resources used to digitally store, create, share or exchange information, including smart phones, computers, social networking and other media applications and services” (Article 4(e)). The proposal mentions the DSA, its lack of a definition of illegal content, and the aim to make the DSA more effective (although complementing the DSA is not the same as making it more effective): “The current proposal complements the DSA proposal by including minimum rules for offences of cyber violence.” The proposal also specifically mentions women journalists: “Cyber violence particularly impacts women active in public life, such as politicians, journalists and human rights defenders. This can have the effect of silencing women, hindering their societal participation and undermining the principle of democracy […].”

Another example of criminalising harassment against journalists comes from the Netherlands, where the government announced that it would make “doxing” – sharing personal data for the purpose of intimidation – prosecutable under Dutch criminal law. Such national legislation will be important for the DSA’s impact, as the DSA’s definition of illegal content is based on Member States’ national laws. These developments could thus help to harmonise, or at least (nationally) criminalise, harassment against journalists, making it easier to combat violence against journalists and to prevent a climate of impunity.

Article 16 – Notice and action mechanism 

Any internet user can make use of the notice and action mechanism laid down in Article 16; one does not have to be a user of the platform in question to invoke it. The notification system could thus be used by journalists too. The provision is, however, also very broad, which could lead to a large volume of notifications, all of which require a response from platforms. Providers of hosting services (including VLOPs) shall put mechanisms in place allowing individuals or entities to notify them of content the notifier considers illegal (Article 16(1)). The provider shall without undue delay inform the notifier of its decision (Article 16(5)) and of the redress possibilities, such as the internal complaint-handling system (Article 20) or the out-of-court dispute settlement procedure (Article 21). The notice and action mechanism is a new avenue through which journalists – by themselves, via journalists’ organisations, or via trusted flaggers (see below) – could increase their safety by prompting platforms to act on violence against journalists and other media professionals. It could become of great help, as platforms are now obliged to offer a system for notifying possibly illegal content, and thus a way of reaching the platform and making it aware of such content, with a decision to follow without undue delay. A notification can also give rise to actual knowledge or awareness (Article 16(3)), which in turn can trigger liability for the purposes of Article 6. To avoid such liability, platforms have an incentive to act swiftly on illegal content, such as threats and violence against journalists.
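
To make the mechanism more concrete, here is a minimal sketch, in Python, of the elements a notice must contain under Article 16(2) and of the platform-side obligations to confirm receipt (Article 16(4)) and communicate the decision and redress options (Article 16(5)). All class and function names, and the toy decision logic, are hypothetical illustrations of the flow described above – they are not prescribed by the DSA and do not reflect any platform’s actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Notice:
    """Hypothetical model of the elements an Article 16(2) DSA notice must contain."""
    explanation: str               # (a) substantiated reasons for the alleged illegality
    content_urls: list[str]        # (b) exact electronic location(s) of the content
    notifier_name: Optional[str]   # (c) name of the notifier (not required in all cases)
    notifier_email: Optional[str]  # (c) email address of the notifier
    good_faith: bool               # (d) bona fide statement by the notifier

def assess_content(notice: Notice) -> str:
    """Stand-in for the platform's own moderation decision."""
    return "removed" if "death threat" in notice.explanation.lower() else "kept"

def handle_notice(notice: Notice) -> str:
    # Art. 16(4): confirm receipt if contact details were provided
    if notice.notifier_email:
        print(f"Receipt confirmed to {notice.notifier_email}")
    decision = assess_content(notice)
    # Art. 16(5): inform the notifier of the decision without undue delay,
    # including the available redress routes (Arts. 20 and 21 DSA)
    print(f"Decision: {decision}; redress: internal complaint (Art. 20), "
          "out-of-court dispute settlement (Art. 21)")
    return decision

handle_notice(Notice(
    explanation="Post contains a death threat against a journalist",
    content_urls=["https://platform.example/post/123"],
    notifier_name="Jane Doe",
    notifier_email="jane@example.org",
    good_faith=True,
))
```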

Apart from the largely positive potential of the notice and action mechanism, some comments can be made about the effectiveness of the protection it offers journalists. One is that the mechanism can be misused by harassers of journalists, for instance by trolls, which is seemingly also acknowledged in Recitals 63 and 81. Secondly, like Article 17 on the statement of reasons – and in contrast to the recent Regulation on addressing the dissemination of terrorist content online, under which platforms can be ordered to remove or disable access to terrorist content within one hour – Article 16 of the DSA does not specify the time frame within which platforms must respond to or act on a notice. Although this is understandable for several reasons, it could be problematic from the point of view of a journalist who receives a death threat. Article 16(5) does state that platforms should take a decision in a “timely” manner, but it remains unclear what “timely” means. A point of reference could be the Code of conduct on illegal hate speech, in which signatory “IT companies” agreed to “review the majority of valid notifications in less than 24 hours and remove or disable access to such content, if necessary” (although it is not clear within what time frame that removal or disabling should take place). In the case of a threat to a journalist’s safety or life, “timely” obviously means something different than for “regular” notices. Recital 52 of the DSA clarifies this to some extent, stating that “such providers can be expected to act without delay when allegedly illegal content involving a threat to life or safety of persons is being notified”, although “acting without delay” might not be the strongest choice of words.

Article 22 – Trusted flaggers  

The impact of Article 16 will likely be strengthened by the use of “trusted flaggers”. Just like individual journalists and media professionals, they can notify a platform via the notice and action mechanism laid down in Article 16. The difference between a notice from a “general user” and one from a trusted flagger is that a notice submitted by a trusted flagger should be prioritised by the platform and decided upon “without undue delay” (Article 22(1)). Although it remains rather unclear what this priority entails and what should be understood by “without undue delay”, such priority can well be of help amid the flood of notices of allegedly illegal content.
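
What “priority” means in practice is left to platforms; one plausible reading is that notices from trusted flaggers jump ahead of the general review queue. The sketch below illustrates that reading with a simple priority queue. This is an assumption about implementation, not something Article 22 prescribes, and all names are invented for illustration.

```python
import heapq
import itertools

# Hypothetical illustration of Article 22(1) DSA: notices from trusted
# flaggers are reviewed before ordinary notices. The DSA itself does not
# prescribe any particular queueing mechanism.

_counter = itertools.count()  # tie-breaker that preserves arrival order
queue: list[tuple[int, int, str]] = []

def submit(notice_id: str, trusted_flagger: bool) -> None:
    priority = 0 if trusted_flagger else 1  # lower value = reviewed first
    heapq.heappush(queue, (priority, next(_counter), notice_id))

def next_notice() -> str:
    return heapq.heappop(queue)[2]

submit("notice-001", trusted_flagger=False)
submit("notice-002", trusted_flagger=True)   # e.g. from a press-safety body
print(next_notice())  # "notice-002" is reviewed first despite arriving later
```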

The status of trusted flagger should only be awarded to entities (Article 22(2)) that have particular expertise and competence in tackling illegal content and can demonstrate this (see also Recital 62). They should also be independent from any provider of an online platform, and they must carry out their activities in a diligent, accurate and objective manner (Article 22(2)(c)). The Digital Services Coordinator of each Member State awards the status of trusted flagger (Article 22(2)), and entities can apply for it. To avoid diminishing the added value of trusted flaggers, their overall number should be limited (Recital 61). Trusted flaggers could be non-governmental organisations and private or semi-public bodies, but also “internet units of national law enforcement authorities” (Recital 61). Although the latter could help increase journalists’ safety – Council of Europe (COE) member states, and thus all EU Member States, have a duty to carry out effective investigations into criminal offences, on which the European Court of Human Rights has elaborated in Özgür Gündem v. Turkey and Mazepa and others v. Russia – there are concerns about the use of law enforcement agencies as trusted flaggers. If such an agency becomes a trusted flagger, it could send a notice in situations where it would normally have to issue a legal order, thereby acting outside its competences. To prevent this, it is preferable to award the status of trusted flagger for journalists’ safety issues to independent press bodies rather than to law enforcement. Either way, COE and EU member states ultimately have a positive obligation to create a favourable environment for participation in public debate, for which the safety of journalists is a basic requirement. The European Commission has stressed, in its State of the Union address and in its Recommendation on ensuring the protection, safety and empowerment of journalists and other media professionals in the European Union, that Member States are encouraged to promote cooperation between online platforms and organisations with expertise in tackling threats against journalists, for instance by encouraging their potential role as trusted flaggers. PersVeilig, a Dutch initiative by journalists’ organisations and law enforcement, is explicitly mentioned by the European Commission in its 2021 Recommendation as an example of such cooperation. The Dutch Government has also announced that it will explore the use of trusted flaggers to see what interventions could be effective against (online) harassment of journalists, again mentioning PersVeilig. It follows that cooperation between law enforcement and civil society organisations specialised in journalism and journalists’ safety is a specific mechanism envisaged by the EU.

Background provisions – Articles 7, 18 and the risk governance framework 

For an overview of procedural rights such as the internal complaint-handling system and the out-of-court dispute settlement system, please see Part 1 of this blog post, which explains those rights; both can also be used by journalists for their own protection.

Article 7 – Voluntary own-initiative investigations and legal compliance  

What can we expect from platforms themselves, beyond their obligations? Article 7, the so-called Good Samaritan provision, aims to encourage platforms to carry out voluntary own-initiative investigations to detect, identify and act against illegal content (a general monitoring obligation remains prohibited, see Article 8). Platforms do not lose the liability exemptions referred to in Articles 4–6 solely because they carry out such investigations, provided they do so in good faith and in a diligent manner (Article 7(1)). This implies that platforms should act in an objective, non-discriminatory and proportionate manner (Recital 26). Although these investigations are voluntary, and we will have to see if and how platforms carry them out, they could benefit both platforms and journalists’ safety. On the one hand, platforms may be unlikely to engage in such investigations, as knowledge of illegal content could still give rise to liability (so why would a platform want to detect illegal content?). And if they do carry them out, platforms may well simply remove borderline content to avoid liability under Article 6. On the other hand, VLOPs that seek to mitigate possible systemic risks stemming from their services (Article 34) could try to stay ahead by carrying out voluntary investigations. For more information on the “Good Samaritan” provision and the discussion around its name (sometimes even called the “Good Samaritan paradox”), see this blog post.

Article 18 – Notification of suspicions of criminal offences

Under Article 18 DSA, if an online platform “becomes aware of any information giving rise to a suspicion that a criminal offence involving a threat to the life or safety of a person or persons has taken place, is taking place or is likely to take place”, it shall promptly inform the law enforcement or judicial authorities of the Member State in which the threat takes place or will most likely take place, and provide them with all relevant information available (Article 18(1)). So if a journalist receives a threat to life, and the platform becomes aware of this – through the journalist’s own notification via the notice and action mechanism (Article 16), via a trusted flagger (Article 22), or via a voluntary own-initiative investigation (Article 7) – the platform shall “promptly” inform national judicial authorities or law enforcement. At the same time, Article 18 leaves room for some questions. First, it is not entirely clear what qualifies as “promptly”, but the EU legislator’s intention seems to be that the authorities are informed, and can act, as quickly as possible. This is a positive development, as law enforcement agencies have a duty to investigate criminal offences: as soon as a platform informs law enforcement, an investigation should start, so that the incident – whether or not it ultimately proves criminal – is investigated by agencies with the proper expertise. That is, if Member States comply with their duties around the effective investigation of criminal offences; sadly, there is no guarantee that journalists’ safety will be in good hands with (all) law enforcement agencies in EU/COE member states, as we have seen in the case of Khadija Ismayilova v. Azerbaijan. Secondly, Article 18 does not specify whether law enforcement agencies report back to platforms on the outcome of the investigation, or what happens after law enforcement has been notified. It is not clear what platforms should do beyond informing law enforcement, or during the investigative process; in principle, they are not involved unless there is a legal request for (further) information – a (criminal) investigation is a matter solely for the competent authorities. Perhaps the next step in the process could be an order to act against illegal content under Article 9, although Articles 9 and 18 do not refer to each other. If platforms were indeed obliged to take action themselves in the meantime, this provision would be quite helpful to journalists: both the platform and law enforcement would be invested in tackling harassment of journalists in cases of life-threatening offences.
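
A minimal sketch of the escalation step Article 18 seems to require follows below: whenever information gives rise to a suspicion of an offence threatening life or safety, the platform notifies the authorities of the Member State concerned. Both the suspicion test and the routing by Member State are assumptions made for illustration; the DSA prescribes neither, and the function names are invented.

```python
# Hypothetical sketch of the Article 18(1) DSA escalation step.
# The keyword-based suspicion test is only a stand-in: real systems
# would combine human review with classifiers.

def involves_threat_to_life_or_safety(information: str) -> bool:
    return any(k in information.lower()
               for k in ("kill", "death threat", "attack"))

def escalate(information: str, member_state: str) -> None:
    if involves_threat_to_life_or_safety(information):
        # Art. 18(1): promptly inform the law enforcement or judicial
        # authorities of the Member State concerned, providing all
        # relevant information available
        print(f"Notifying authorities in {member_state}: {information!r}")

escalate("User posted a death threat against a reporter", member_state="NL")
```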

Risk governance framework 

As explained in Part 1 of this blog post, the risk governance system laid down in Articles 34, 35 and 37 DSA could play a part in the protection of journalistic and news media content. The framework could also apply to the protection of the (online) safety of journalists and other media professionals. Article 34(1) addresses risks to media freedom indirectly in point (a), the dissemination of illegal content (such as threats or hate speech against journalists), and directly in points (b) and (c), which label actual or foreseeable negative effects on freedom of expression and information, media pluralism and civic discourse as systemic risks. As the (online) safety of journalists is a prerequisite for media freedom and freedom of expression, threats to journalists’ safety could very well constitute systemic risks in the sense of the DSA. Here again, it makes sense, in light of Recital 90 DSA, to involve journalists as a special group in VLOPs’ risk assessments, given that journalists most directly represent media freedom and pluralism online. Women journalists, LGBTQIA+ journalists and journalists from marginalised or minority communities could also be regarded as a special group that should be involved when drafting risk mitigation measures. Gender-based violence is also mentioned in Article 34(1)(d) as a systemic risk, together with “serious negative consequences to the person’s physical and mental well-being”. Given the large number of threats, attacks and acts of violence against journalists, it seems very unlikely that VLOPs will not have to take measures to mitigate the aforementioned risks. There is already substantial research to substantiate this, such as this report by the European University Institute and the SOFJO Resource Guide on the safety of female journalists.

Journalists’ future: a safe one? 

Although violence against journalists and other media professionals continues to grow unacceptably, the increasing attention to the problems around journalists’ safety is a positive development, and it is good to see that the issue is on the EU’s agenda. So far, the protection of journalists on platforms has mostly been left to self-regulation. With the DSA, a next, positive step towards further protecting the necessary safety of journalists and other media professionals can be taken.