Online advertising: These three policy ideas could stop tech amplifying hate

By Catherine Armitage, Johnny Ryan and Ilaria Buri

The relationship between the spread of harmful content and the business models that fund it preoccupies many policymakers today. Political momentum is building around the idea that banning ‘surveillance advertising’ could be the answer, and this has translated into a variety of proposals being discussed by MEPs as they prepare their response to the EU’s Digital Services Act (DSA). When it comes to tackling the biggest digital threats to democracy, our analysis suggests that some of these proposals look more promising than others. We conclude that three policy ideas could have a significant impact, but more work is needed to develop them into provisions that can be meaningfully built into future legislation.

Disclaimer: Dear reader, please note that this commentary was published before the DSA was finalised and is therefore based on a non-final version of the DSA draft proposal.

‘Ban surveillance advertising’ has become a rallying cry in the digital policy world. From the ‘Tracking-free ads coalition’ in the European Parliament, to the ‘Ban Surveillance Advertising’ campaign in the US, to last week’s launch of an international coalition for action against surveillance-based advertising, it’s clear that there is widespread support for policymakers to address this issue in upcoming legislation.

These groups all point to the many and varied harms beyond privacy posed by the ‘surveillance-based’ advertising business model of some very large platforms that are often treated as the equivalent of ‘the internet’. These harms include discrimination, disinformation, the undermining of democracy, the manipulation of public debate, and the amplification of hate speech, racism, xenophobia and incitement to violence, particularly against minority groups. The online ad ecosystem has also been linked to market manipulation, fraud, the funding of criminal actors, security risks, and the decline of independent journalism and media pluralism.

But how can legislation tackle these concerns? What can policymakers actually do to reduce these harms through the regulation of targeted advertising?

Translating a call for a ban into concrete legislative proposals that address these real-life harms is challenging. It requires careful analysis to devise solutions that are future-proof, effective and able to solve problems not addressed by existing legislation.

In Brussels, more than 20 proposals on this issue are currently being considered by legislators working on the DSA and the Digital Markets Act (DMA).

However, our initial analysis suggests that most stop short of posing a serious threat to the ad-funded business models that incentivise tracking and data collection and enable harmful content to thrive.

Three ideas have the potential to make the most impact. Developed further and supported by strong evidence, they could lead the way to legislation that tackles some of the many harms these business models pose to society.


Competition

The Commission has stated that the aim of the DMA is to ensure that gatekeeper platforms ‘behave in a fair way online’. This means imposing obligations that prevent them from engaging in unfair practices, even where they have a commercial incentive to do so.

Although the Commission’s draft proposal includes several provisions related to online advertising, these all focus on B2B issues in the market such as measurement, fraud and supply chain transparency. So far, there has been less focus on the DMA’s potential to tackle some of the consumer and societal harms that ‘surveillance capitalism’ has created.

Competition regulators in several countries have already led work to tackle the underlying business models of dominant platforms which rely on monetising user data. Some of these ideas could be introduced to discussions on the DMA, for example:

(1) restricting platforms’ ability to use data collected for different purposes across their various properties to dominate online advertising and other markets;

(2) preventing dominant platforms from making access to their services conditional on providing consent for their data to be processed for purposes beyond what is necessary to provide the requested service (particularly across multiple services/properties);

(3) limiting acquisitions that enable dominant platforms to access and monetise more data about individuals.

It will be worth exploring whether and how these ideas could be translated into legislative proposals for the DMA.


Dark patterns

Consent is a central part of ad-funded business models that rely on collecting large amounts of user data. This means that platforms have a commercial incentive to find ways to persuade users to opt in to tracking. Facebook’s annual report even warns that “regulatory or legislative actions affecting the manner in which we display content to our users or obtain consent to various practices (…) could adversely affect our financial results”.

Evidence shows that the biggest ad-funded platforms rely on dark patterns to steer users into providing consent for tracking and ad targeting. And these practices are common across the wider ad tech ecosystem too.

Although several groups and NGOs have challenged these practices as infringements of GDPR, enforcement has been limited until now. And whilst it’s easy for a platform to make small user experience (UX) changes if challenged, it’s much harder for regulators to constantly assess the legality and potential impact of changing designs and practices.

This suggests that introducing a prohibition or limitation on specific ‘dark pattern’ practices and techniques could have an impact on business models that rely on getting consent for ad targeting, especially if the use of dark patterns is linked to the validity of the consent.

In the DSA discussions so far, dark patterns have mostly been linked to recommender systems and political manipulation on platforms rather than to advertising. However, dark patterns linked to consent for targeted advertising could be relevant to several pieces of legislation, including the DSA, the DMA and the AI Act.


‘Off switch’ and exercising data subject rights

Whilst regulating the use of dark patterns could affect the number of people who ‘switch on’ data collection for advertising purposes, it’s also important to consider how to enable more people to ‘switch off’ tracking.

Today, users face multiple barriers to exercising the rights created by the GDPR, including withdrawing consent, objecting to processing, and the rights to access, rectification and erasure. Particularly in the context of Real-Time Bidding (RTB), data flows are so complex and opaque that exercising these rights can be almost impossible. Even on platforms like Google and Facebook, which have developed dedicated ‘ad settings’ and ‘privacy controls’ interfaces, many of the choices available stop short of allowing users to fully exercise their GDPR rights.

Evidence suggests that when users are given a real choice that isn’t steered by commercial self-interest, most will choose to turn off tracking for advertising purposes. Apple’s recent App Tracking Transparency (ATT) initiative, which presents iOS users with a prompt to ‘Ask App Not to Track’ or ‘Allow’ when they first open an app, has shown that 85% of users globally (and as many as 94% in the US) chose ‘Ask App Not to Track’.
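To illustrate how simple this choice is from the user’s side, here is a minimal sketch of how an iOS app triggers the ATT prompt via Apple’s AppTrackingTransparency framework (the function name and the handling of each outcome are illustrative assumptions, not taken from any particular app):

```swift
import AppTrackingTransparency

// Calling requestTrackingAuthorization shows the system ATT prompt
// ("Allow" vs. "Ask App Not to Track") the first time it runs; on
// later calls iOS simply returns the user's stored decision.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // User tapped "Allow": the app may read the device's
            // advertising identifier (IDFA) and use it for tracking.
            print("Tracking allowed")
        case .denied, .restricted, .notDetermined:
            // User tapped "Ask App Not to Track", tracking is
            // restricted, or no decision exists: the IDFA is zeroed out.
            print("Tracking not allowed")
        @unknown default:
            print("Unrecognised authorisation status")
        }
    }
}
```

As the figures above show, when the choice is framed this neutrally, the overwhelming majority of users decline tracking.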

If the DSA mandated companies to surface people’s right to withdraw consent (and other GDPR rights) in a similarly clear way, more of us might opt out of ad tracking.

The DSA could do this by specifying that when someone sees an ad, they must be reminded of their GDPR rights and how to exercise them. The GDPR did not lay out specific obligations for how online platforms should notify users of these rights, or enable them to exercise them, in the context of digital advertising, which is particularly complex and involves strong commercial incentives to keep users opted in to the system.

This proposal could go some way towards helping users (a) know which rights they can exercise in relation to their data and (b) take action to control how their data is used to target advertising. Ultimately, this is about shifting control to users rather than leaving it up to companies that have a commercial incentive to limit that control as much as possible.

What else has been proposed? Opt-ins and limitations on data collection

Beyond these three ideas, there are several other proposals on the table for policymakers to consider.

Some of these ideas aim to step in where GDPR enforcement has failed to materialise. This includes reinforcing requirements for users to opt in before they receive targeted advertising. The draft IMCO report, for example, proposes that intermediary services must ‘by default, not make the recipients of their services subject to targeted, microtargeted and behavioural advertisement’. However, a user would still be able to choose to opt in to this type of advertising by providing consent (as defined in GDPR), assuming that there is a means of operating targeted advertising that protects their data.

Mandating opt-ins reinforces the GDPR, but effective enforcement is still needed to challenge the way some of the biggest ad-funded platforms obtain consent to collect data for advertising. Several GDPR complaints have already argued that the mechanisms these companies use to get users to opt in to ad tracking do not comply with the GDPR’s requirements. But three years after the regulation came into effect, there has been no systemic enforcement against these practices.

Other proposals go further by outlawing specific uses of data to target advertising. Examples include an outright ban on using any type of personal data for digital advertising; restrictions on the use of certain categories of personal data (e.g. sensitive data, children’s data); and limits on ad targeting to data that has been ‘explicitly provided’ by a user.

Limiting the types of data that feed into the online advertising ecosystem might protect consumers to some extent, but it doesn’t address the main harms posed by the business model. Companies would still have an incentive to track and profile users in order to increase engagement and time spent on their platforms. As long as people are using a platform, they can be served (non-personalised) ads and the platform can make money. The broad negative impacts on society would therefore continue, stemming from the dissemination and amplification of attention-maximising content that can be harmful, including disinformation, abuse, discrimination and hate speech.


A complete overview of all ongoing policy proposals to reform online ads can be found here. This analysis is part of AWO’s work for OSF on regulating the digital public sphere.


Catherine Armitage is Public Policy Advisor at AWO. She has more than a decade of experience working in the marketing industry, most recently as Policy Director at the World Federation of Advertisers.

Dr Johnny Ryan FRHistS is a Senior Fellow of the Irish Council for Civil Liberties, and previously held senior roles in the adtech and media industry.

Ilaria Buri is a research fellow at the DSA Observatory at the Institute for Information Law (IViR) at the University of Amsterdam. She is admitted to the Bar in Italy and, before joining academia, she gained extensive experience as a practitioner in law firms.