What do we talk about when we talk about risk? Risk politics in the EU’s Digital Services Act

by Rachel Griffin, doctoral candidate at Sciences Po

31 July 2024

 

What are the implications of framing normative and political questions about platform governance in terms of ‘risks’ to be managed through technocratic expertise? This article suggests that the DSA’s system of risk management obligations for the largest platforms ignores the essentially political and contestable nature of risk, and will only reinforce the power of corporate and state actors to determine what kinds of harms will be recognised and addressed.




 

The concept of ‘risk’ is sometimes seen as characteristic, or even as the defining characteristic, of the era of industrial modernity. Risk assessment techniques were originally developed at the dawn of the colonial era in the context of maritime insurance. During the 19th and 20th centuries, according to sociologist François Ewald, social insurance – founded on techniques to evaluate and collectivise risks – became a way to compensate for the destabilising effects of capitalism (an aspect of the famous ‘double movement’ analysed by Karl Polanyi). Finally, in the 1980s, the sociologist Ulrich Beck proposed the concept of the ‘risk society’ to describe our era, in which globalisation and technological advances create new risks that are global, potentially catastrophic, and very difficult for states to understand or control.

 

Although very influential (perhaps more than ever, in our times of pandemics and ‘polycrisis’), Beck’s theory has been criticised as insufficiently sensitive to the essentially socially constructed nature of risk, and to the distribution of the dangers of industrial society, which remains highly unequal. According to Ewald, ‘nothing is a risk in itself’: rather, risk denotes a set of techniques for calculating the probability of events in order to guide decisions, and their continuing diffusion into new spheres has been motivated above all by the search for profitable new business opportunities for insurers and other actors providing risk management services, like auditors.

 

In recent decades, we have also seen a remarkable diffusion of these techniques into innumerable areas of regulation and public administration. British researcher Michael Power has called this ‘the risk management of everything’ (and, following the spectacular failure of these techniques in the financial sector in 2008, ‘the risk management of nothing’). American legal scholars Ari Ezra Waldman and Julie Cohen, who describe the outsourcing of much of the interpretation, implementation and monitoring of regulation to private companies, term this mode of regulation ‘regulatory managerialism’.

 

This tendency is evident in the EU’s Digital Services Act (DSA), enacted in 2022 to deal with issues related to online platforms hosting user-generated content. More specifically, Articles 34 and 35 of the DSA establish risk assessment and mitigation obligations for the largest platforms, those with at least 45 million monthly active users in the EU (‘very large online platforms’, or VLOPs). Corporate risk management will therefore play a central role in the regulation of the most influential platforms in our media landscape, and of issues that attract significant public and political attention, such as misinformation, harassment and child safety.

 

My Sciences Po colleagues Beatriz Botero Arcila and Pedro Ramaciotti-Morales and I have just launched a new research project to study how the DSA’s risk management obligations are being implemented. This article outlines some preliminary reflections and results based on my research within this project, informed by the extensive literature on risk regulation as well as by conversations with experts working in regulated companies, civil society organisations and regulatory agencies.

 

Understanding risks

 

Risk management techniques are well established and well studied in other regulatory areas, such as environmental protection and finance. Risk management is classically divided into two stages: assessing risks based on evidence, and then deciding how to mitigate the risks identified as significant. A risk itself is classically defined as a harmful event, assessed according to its severity and probability. Risks in this sense can be distinguished from uncertainties, whose probability cannot be estimated, and from ambiguities, where the nature of a harmful outcome or the criteria for assessing harm are unclear.

 

This definition does not seem very relevant to many of the areas of risk listed in Article 34(1) of the DSA, which include, for example, negative effects on fundamental rights, ‘civic discourse’ and public safety. ‘Negative effects’ on such abstract and broadly defined values cannot sensibly be understood as specific events that might or might not happen, with quantifiable probabilities. Not only could these concepts encompass a whole host of very different social problems, which are (in the sense of the definitions outlined above) highly uncertain and ambiguous; they are also extremely politically contested.

 

According to an alternative definition, recently officially recognised by the International Organization for Standardization (ISO), risk denotes the effect of uncertainty on an organisation’s objectives. This definition seems preferable, since it recognises that risks are essentially political and socially constructed: which issues are identified as risks, and how these issues are understood and managed, will always depend on how and by whom an organisation’s goals and values are defined.

 

It is widely recognised that choosing risk mitigation measures requires making political judgments. However, risk assessment also necessarily involves political and contestable decisions: for example, the definition and prioritisation of issues to assess; the delineation of the population deemed at risk; and the selection of evidence to consider. This is the case even in ‘classic’ fields of risk management, such as pollution regulation, where widely accepted quantitative indicators do at least exist (albeit never ‘objective’ or apolitical ones). In the areas listed in Article 34(1), not only is there a lack of precise definitions; there are also deep ideological and political conflicts over the nature of these essentially contested concepts. Risks in this domain are therefore even more unclear and contestable.

 

The DSA largely ignores this, and instead mostly seems to rely on what science and technology studies scholar Brian Wynne has called a ‘simple-realist’ conception of risk – meaning it assumes that risks have some objective existence, external to the institutions charged with risk management, whose task is simply to study these risks in order to manage them properly. Wynne contrasts this with a ‘reflexive’ realism, which assumes that hazards exist objectively, but that any way of identifying, measuring and evaluating them is necessarily situated in a social context, and involves normative and political decisions.

 

For example, Article 34(1) DSA states that VLOPs’ risk assessments must ‘include the following systemic risks’. However, the categories that follow are extremely abstract and broad, along the lines of ‘any actual or foreseeable negative effect on the exercise of fundamental rights’. As such, there exist nearly infinite ways of defining more concrete risks within these general categories, and choosing between them is ultimately a question of political priorities. Article 34 offers very little guidance on how VLOPs should make such choices, except that the assessment must be ‘specific to their services’. Similarly, Recital 90 states that VLOPs should consult with ‘the groups most affected by the risks’. But identifying such groups assumes that it is already clear what the risks are and who is affected. This implies that civil society will only be consulted after important decisions on the definition of the risks to be managed have already been taken.

 

In environmental regulation, Wynne criticises ‘simple-realist’ approaches as not only empirically unconvincing, but undemocratic and elitist. The idea that risks exist objectively and independently of political debate is convenient for the institutions responsible for risk management, since it invisibilises their political decisions and makes it more difficult for individuals, groups or movements – who may understand the problems associated with new technologies in a very different way – to contest these decisions.

 

I think we can already see this kind of depoliticisation in expert discussions around the DSA and its implementation. For example, as my colleague Paddy Leerssen has observed, regulatory agencies talk endlessly about the vital role of civil society actors, but typically describe them as providing ‘evidence’, rather than different political perspectives. From VLOPs themselves, the message is often along the lines of ‘we are all working together’ – with regulators, NGOs and researchers – on risk management. The possibility that companies, states and civil society actors might have incompatible interests, values and ideologies is not part of the debate.

 

Critical technology scholar Jathan Sadowski has argued that if we understand regulation of digital technologies primarily as a way of mitigating their risks, then other, more fundamental questions – like the underlying objectives of technological development, or whether certain technologies should exist at all – are excluded from regulatory debates. In much the same way that social insurance systems have stabilised capitalist labour markets, risk management could here serve to stabilise a market that is highly profitable for ‘big tech’ companies, insofar as it softens their most destructive tendencies, while sidelining more fundamental resistance to their commercial logics.

 

Who defines risks?

 

The ‘simple-realist’ conception of systemic risks may obscure but cannot change their political character. So who will exercise the political power to decide which risks are important and how they should be managed?

 

Ultimately, under Article 56 DSA, responsibility for overseeing compliance with Articles 34-35 rests with the European Commission. However, the primary actors responsible for their implementation are the VLOPs themselves. They must set up internal risk management systems and report at least once a year on the risks they have identified and the mitigation measures they have put in place. These reports will be validated by private audit firms and finally assessed by the Commission.

 

This kind of delegation to companies is characteristic of the ‘regulatory managerialism’ analysed by Cohen and Waldman. Although it allows regulators to leverage large corporations’ resources and expertise to pursue their objectives – which can be particularly attractive in technically complex fields like digital technology – it also poses well-recognised dangers: in particular, that corporations selectively focus on those risks and mitigation measures that are least disruptive to their business goals (a phenomenon documented in detail by Waldman in his empirical research on the regulation of personal data).

 

This is not only due to pure cynicism on the part of companies, but also to institutional conditions. The DSA’s risk management systems will not be set up from scratch. Rather, VLOPs will adapt enterprise risk management systems that are already well developed in all companies of this scale, as well as human rights due diligence processes that are also becoming increasingly widespread (and whose primary purpose, it could be argued, is to manage business risks such as negative publicity). As Power has observed, even systems established with the best intentions have a tendency to focus more and more on ‘secondary risks’: that is, the potential costs to an organisation if its risk management is deemed inadequate, rather than the issues these risk management systems nominally aim to address. For example, designing systems to be more ‘auditable’, so that organisations have a paper trail to demonstrate that they are managing risks responsibly, may be prioritised over making them more effective.

 

Indeed, audits will be an essential way for VLOPs to demonstrate compliance with the DSA. Under Article 37(1), auditors must assess not only whether VLOPs’ evaluations and reports are based on accurate data, but also whether they comply substantively with Articles 34-35, as well as with other commitments such as voluntary codes of conduct. It will almost certainly be the dominant ‘Big Four’ audit firms that play this role, and they will therefore exercise substantial influence over the definition of risks and the choice of assessment measures across the industry.

 

The DSA also envisages NGOs and academic researchers contributing to risk management, for example through consultations and independent research. How much influence they will exercise in practice remains an open question; it will ultimately still be VLOPs and regulators who decide whether and when to take account of their input. In any case, such institutions are also far from being neutral or representative of the public. NGOs rely heavily on support from private philanthropy (including from major platforms themselves). Their relative resources reflect long-term inequalities between different social groups and between different EU countries. Equally, academia is not representative of society (nor is it meant to be). The research questions most interesting from an academic perspective do not necessarily correspond with those most useful for regulatory enforcement.

 

Finally, although all these other actors will have significant influence, it is the Commission that will decide whether the measures put in place by VLOPs are sufficient to comply with the DSA. It will therefore be able to put substantial pressure on these companies to manage risks in a way that meets its expectations. The first enforcement actions the Commission has launched under the DSA (five formal investigations against multiple VLOPs in under a year, and over 20 preliminary requests for information) already make clear that it will not hesitate to express its expectations about how systemic risks should be defined and prioritised.

 

At this stage, it should be stressed that the Commission is an executive institution with its own political agendas. By defining the risks that VLOPs should prioritise, it will be able to influence the regulation of media and communications in areas that are extremely politicised and contested – like debates around the genocide in Gaza, or the balance between children’s safety and their privacy and freedom of expression. Accordingly, the depoliticised discourse of risk management as a dry and technocratic issue could obscure not only the power of dominant corporations, but also the actions of government institutions which implicate all of our civil liberties and communicative freedoms, in a way that is nothing if not political.

 

 

This is a translation of a forthcoming article in the October 2024 edition of Comprendre son temps, a review published by Sciences Po. The research on which it is based was funded by the Project Liberty Institute. It also benefited from discussions at the Law & Political Economy in Europe Summer Academy at Glasgow University, and at the European Rights & Risks Stakeholder Engagement Forum, organised in Brussels by the Global Network Initiative (GNI) and the Digital Trust and Safety Partnership. The author’s participation in the latter event was funded by Google and TikTok.