The Regulation of Recommender Systems Under the DSA: A Transition from Default to Multiple and Dynamic Controls?

Urbano Reviglio (1) and Matteo Fabbri (2)

(1) Centre for Media Pluralism and Media Freedom, European University Institute, Fiesole

(2) IMT School for Advanced Studies, Lucca, Italy

 

 

In this contribution, we offer a critical overview of the interplay between the DSA's transparency and user-control requirements for recommender systems and the design features that may be operationalized to comply with them.

 


 

On 2 October 2024, the European Commission issued a request for information to YouTube, Snapchat and TikTok under the Digital Services Act (DSA), “asking the platforms to provide more information on the design and functioning of their recommender systems”. This request is aimed at obtaining information on the parameters used by the platforms’ algorithms “to recommend content to users, as well as their role in amplifying certain systemic risks, such as those related to elections and civic discourse, users’ mental health (e.g., addictive behaviors and “rabbit holes”), and the protection of minors”, including measures to mitigate them. If the platforms fail to provide this information, the Commission could formally open proceedings (Article 66 DSA), and it can impose fines for incorrect, incomplete, or misleading responses to these requests for information (Article 74(2) DSA).

This regulatory spotlight on recommender systems is one way to hold very large online platforms (VLOPs) accountable for the potential risks they pose to individuals and society. Yet it’s unclear whether the Commission’s latest inquiries (and potential investigations) into recommender systems will aim beyond transparency and safety measures to address a related objective: empowering users by giving them useful information about how these systems work and by providing meaningful alternatives to profiling-based recommendations, i.e. alternatives that offer users more direct control over what data feeds these systems and over their outputs.

Unless users can meaningfully shape their online experiences, the DSA risks falling short of its objective of making recommendations transparent and controllable for users. However, providing such control features may not align with platforms’ economic incentives: this is a fundamental challenge for the effective enforcement of the provisions on recommender systems.

Contextualising the regulation on recommender systems

The DSA is the first supranational regulation that aims to address the controllability of recommender systems by empowering users of online platforms. One way it seeks to accomplish this is through transparency: Article 27 requires providers to explain “in their terms and conditions, in plain and intelligible language, the main parameters used in their recommender systems,” focusing on content and ranking, “as well as any options for the recipients of the service to modify or influence those main parameters”. When such options are mentioned in the terms and conditions, platforms should provide an “easily accessible” functionality “that allows the recipient of the service to select and to modify at any time their preferred option” (Article 27(3)).

Another aspect of user empowerment is controllability: Concretely, Article 38 requires VLOPs that use recommender systems to “provide at least one option for each of their recommender systems which is not based on profiling”. These requirements have the potential to reshape the interaction between users and online platforms by reversing the traditionally passive role of the former, as users would be able to modify the parameters of the recommendations and therefore contribute to determining their output.

So how are the platforms implementing these new legal requirements? The process is ongoing and the results are mixed, but it’s clear that platforms have thus far prioritized the former (transparency) without adequately addressing the latter (controllability).

State of the art of implementation

Let’s take Meta and TikTok as two paradigmatic examples. Meta has implemented a series of explanations about how the outputs of Facebook and Instagram depend on different types of content (e.g., Reels, Stories) and recommendation policies (e.g., Explore). These explanations provide detailed information on which signals influence recommendations, allowing users to understand how their behavior and interactions with the platform could change the recommended content they see. TikTok, for its part, gives a high-level overview of the parameters influencing its recommender systems. This information is a prerequisite for users to exercise more conscious control over the content that is served to them on social media.

When it comes to providing actual user controls, however, the platforms are quite limited. For example, on Meta’s platforms users can select from a narrow list of reasons for disliking a piece of content, or list keywords corresponding to hashtags that they want to filter out of their feeds. On TikTok, controllability is mainly offered through filters for “specific words or hashtags from the content preferences section in your settings to stop seeing content with those keywords”, thereby mirroring Meta’s approach without providing any further options. On Facebook, Instagram and TikTok (fig. 1) users can also opt for a non-personalized feed as per Article 38; however, the usefulness of a completely non-personalized experience is quite debatable.

 

Fig.1: Examples of the application of Article 38 DSA and ‘Content preferences’ on TikTok (screenshots, October 2024).

It’s clear that the application of Article 27 has thus far been limited to fulfilling the user explanation requirements (par. 2), while platforms have in most cases failed to implement the user control provisions (par. 3). Rather, they appear to have followed a ‘minimalist’ interpretation according to which Article 27 does not mandate the existence of user control options, but only requires them to be made accessible “where” they exist.

To resolve interpretative discrepancies, further research is expected to be carried out by the European Centre for Algorithmic Transparency (ECAT) in collaboration with the DSA enforcement team at DG Connect. Ultimately, the Commission could issue a delegated act and guidelines to provide further guidance (Article 35), while supporting and promoting the development and implementation of voluntary standards in accordance with Articles 27 and 38 (Article 44(i)) as well as a code of conduct (Article 45). VLOPs should also engage with “representatives of groups potentially impacted by their services, independent experts, and civil society organisations” (Recital 90) and modify their interfaces and recommender systems in order to mitigate systemic risks (Article 35(d)), which include, among others, risks related to civic discourse, media freedom and pluralism, and even mental well-being (e.g., users’ addiction).

This is an open-ended process that will require further assessments, dialogue and testing. However, no guidelines or delegated acts on the implementation of these articles seem to be forthcoming. As a starting point, we envision which user controls could be meaningful for fulfilling the DSA’s objective of empowering users vis-à-vis the largest online platforms.

Envisioning a framework for substantive user control

There have been various attempts to envision alternative ways to give users agency over personalization on social media (see Panoptykon’s recent prototype for user control (fig. 2) and the Mozilla Foundation’s proposal for responsible recommendations).

A key challenge in creating a user experience that enables direct controllability is the need for a dynamic set of design features that may be subject to normative and technical debate, may be difficult to implement across platforms, and may ultimately go unnoticed or unused by users. Most of these proposals address this challenge by focusing on different aspects of user autonomy.

Firstly, a user could decide which data feeds personalization: this mainly involves deciding whether their profile data and implicit preferences (i.e., behavioral signals) are used by the recommender system. Such affordances can help the user guess what the recommender system “thinks” of them and allow them to opt out of behavioral profiling and of recommendations based on implicitly inferred preferences that may not align with the user’s actual or consciously expressed preferences. However, limiting the amount and type of user data that feeds recommender systems does not align with the business incentives of platforms, and this is likely why this approach has not been spontaneously adopted by VLOPs.

 

Fig.2: An example of the potential data-driven control signals of recommender systems (Panoptykon, 2023).
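To make this first dimension of control more concrete, the following sketch shows one hypothetical way per-user data settings could gate which signals ever reach the ranking stage, in the spirit of the data-driven controls the Panoptykon prototype envisions. All class names and fields are illustrative assumptions of ours, not any platform’s actual API.

```python
# Hypothetical sketch: per-user settings decide which signal categories
# a recommender may use. Names and fields are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class PersonalizationSettings:
    use_profile_data: bool = True        # declared data: age, location, language
    use_watch_history: bool = True       # implicit behavioral signal
    use_engagement_signals: bool = True  # likes, shares, dwell time
    use_inferred_interests: bool = True  # interests the platform has inferred

@dataclass
class UserSignals:
    profile_data: dict = field(default_factory=dict)
    watch_history: list = field(default_factory=list)
    engagement_signals: list = field(default_factory=list)
    inferred_interests: list = field(default_factory=list)

def filter_signals(signals: UserSignals, settings: PersonalizationSettings) -> UserSignals:
    """Drop every signal category the user has opted out of before ranking."""
    return UserSignals(
        profile_data=signals.profile_data if settings.use_profile_data else {},
        watch_history=signals.watch_history if settings.use_watch_history else [],
        engagement_signals=signals.engagement_signals if settings.use_engagement_signals else [],
        inferred_interests=signals.inferred_interests if settings.use_inferred_interests else [],
    )
```

In such a design, opting out of ‘inferred interests’ would leave explicitly expressed preferences untouched while removing implicitly derived ones from the ranking input.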

Secondly, a user could decide which preferences are considered in personalization. While there are various design strategies to align recommendations with users’ preferences, such alignment could be more easily ensured through the operationalization of ‘personal tags’, which are descriptive keywords or labels that provide additional information about a user’s inferred preferences. Such tags have already been implemented in China, where platforms are required by law to provide users with functionalities to select or deselect tags that identify their inferred personal interests. In Douyin, for instance, these are divided into macro-categories, such as “humanistic sciences”, “travel” and “food delicacies”, which are in turn divided into subcategories. “Food delicacies”, for example, is further divided into “scouting restaurants”, “enjoying delicacies”, “traditional snacks”, and “purchasing ingredients”. Once users choose a category or subcategory of content, they are also given the option to indicate how interested they are in it and the consequent “weight” it should carry in recommendations (fig. 3). To some extent, these features are similar to the ads control settings provided by platforms like Google (fig. 4), and they can partly represent the recommendation criteria that the DSA requires platforms to disclose.

 

 

Fig.3: An example of the personal tags and their relative weight in the “Content preferences” of the Chinese platform Douyin (screenshot, November 2024).

 

Fig.4: Ads controls by ‘trending topics’ in Google (screenshot, November 2024).
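To illustrate how tag-based controls like those in figs. 3 and 4 could shape recommendations, the sketch below re-weights a candidate ranking by user-chosen tag weights. The scoring logic, tag names, and weights are hypothetical and are not based on Douyin’s or Google’s actual implementations.

```python
# Hypothetical sketch: user-selected 'personal tags' and their weights
# re-rank candidate items; setting a weight to 0 effectively mutes a tag.
def reweight_by_tags(candidates, tag_weights, default_weight=1.0):
    """
    candidates: list of (item_id, base_score, tags) tuples produced upstream.
    tag_weights: dict mapping a tag to the weight the user has chosen for it.
    Returns candidates re-sorted by base_score times the strongest matching
    tag weight (or default_weight when no tag matches).
    """
    rescored = []
    for item_id, base_score, tags in candidates:
        matching = [tag_weights[t] for t in tags if t in tag_weights]
        multiplier = max(matching) if matching else default_weight
        rescored.append((item_id, base_score * multiplier))
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

# Example: boost science-related content and mute one food subcategory.
weights = {"science": 2.0, "food delicacies/scouting restaurants": 0.0}
ranked = reweight_by_tags(
    [("v1", 0.8, ["science"]), ("v2", 0.9, ["food delicacies/scouting restaurants"])],
    weights,
)
```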

Thirdly, a user could decide to activate specific interest-driven options that modify the criteria of recommender systems. On Instagram, for example, users can decide to limit recommendations of ‘sensitive content’ as well as of ‘politically-related content’ from accounts they do not follow. On TikTok, users can decide to amplify educational videos related to science, technology, engineering and mathematics (the STEM feed). While these options can be very useful, they are arbitrary and largely untethered from the provisions enshrined in the EU regulation.

Finally, explicit user feedback (signals such as likes, dislikes, or ‘not interested’ marks on a piece of content or a topic) is essential for controlling the output of recommender systems, yet it was neglected by the DSA. In most cases, such features are neither easy to find in the interface nor particularly granular; they may be available in the app but not on the browser version of the website; and it is unclear whether and how specific feedback leads to specific outcomes.

While platforms may expand the level of explicit feedback offered to users, how best to do so remains debatable: it would be worth verifying the effectiveness of these feedback features through independent audits (Article 37 DSA), as user feedback controls may be ineffective and may even contribute to harmful consequences for mental health, which is itself a systemic risk under the DSA.
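As an illustration of what granular explicit feedback could do, the sketch below demotes items similar to one a user marked as ‘not interested’. The penalty factor and the similarity function are our own assumptions; platforms do not document how (or whether) such signals propagate to future rankings.

```python
# Hypothetical sketch: propagate a 'not interested' signal by demoting
# items similar to the disliked one. Penalty and similarity are assumptions.
def apply_not_interested(candidates, disliked_item, similarity, penalty=0.5):
    """
    candidates: list of (item_id, score) pairs.
    similarity: function (item_id, item_id) -> float in [0, 1].
    Items similar to the disliked one lose score in proportion to similarity.
    """
    adjusted = []
    for item_id, score in candidates:
        sim = similarity(item_id, disliked_item)
        adjusted.append((item_id, score * (1.0 - penalty * sim)))
    return sorted(adjusted, key=lambda pair: pair[1], reverse=True)
```

An independent audit under Article 37 could then test whether such a rule is actually applied, i.e. whether the ‘not interested’ signal measurably changes what is subsequently recommended.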

A more pragmatic way forward

A more pragmatic approach to the DSA requirements on user empowerment regarding recommender systems could accelerate their implementation in line with the objectives and provisions of the EU regulation. To imagine how this could be done, let us elaborate two hypothetical scenarios.

First, the right to opt out of personalization derived from Article 38 could become more dynamic and meaningful: users could decide the percentage of personalized and non-personalized content they want to see. Such a proportional approach would offer users the opportunity to choose their preferred degree of personalization, effectively materializing a right to decide how much influence a VLOP’s recommender system should have over one’s exposure to online content.
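A minimal sketch of this proportional approach, assuming a hypothetical ranking pipeline, could interleave personalized and non-personalized (e.g., chronological) candidates according to the ratio the user selects. The function name and the 70/30 example are illustrative only.

```python
# Hypothetical sketch: build a feed by drawing each slot from the personalized
# list with a user-chosen probability, otherwise from a non-personalized list.
import random

def blend_feeds(personalized, non_personalized, personalization_ratio, feed_length, seed=None):
    """Interleave two ranked lists, preserving each list's internal order."""
    rng = random.Random(seed)
    feed, p, n = [], iter(personalized), iter(non_personalized)
    for _ in range(feed_length):
        primary = p if rng.random() < personalization_ratio else n
        item = next(primary, None)
        if item is None:  # fall back to the other list if one is exhausted
            item = next(n if primary is p else p, None)
        feed.append(item)
    return [item for item in feed if item is not None]

# Example: a user opts for a 70% personalized / 30% chronological feed.
mixed = blend_feeds(["p1", "p2", "p3"], ["c1", "c2", "c3"], 0.7, 6, seed=42)
```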

Similarly, the ability to reset personalization, currently provided by TikTok and Instagram, could be expanded to support a prominent EU policy objective: exposure diversity. A personalization reset could preserve users’ past personalization data, allowing them to return to prior profiles. Encouraging users to create and navigate multiple profiles would help them diversify their experience, providing a concrete opportunity to escape the “filter bubble” (even if such bubbles may not actually exist).
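What ‘multiple personalization profiles’ could look like is sketched below: resetting does not delete the current profile but archives it, so users can switch back later and deliberately vary their exposure. All class and method names are hypothetical.

```python
# Hypothetical sketch: personalization resets that archive, rather than
# delete, the previous profile, letting users switch between profiles.
class ProfileManager:
    def __init__(self):
        self.profiles = {"default": {}}  # profile name -> personalization data
        self.active = "default"

    def reset(self, new_name):
        """Start a fresh, empty profile while preserving the current one."""
        self.profiles[new_name] = {}
        self.active = new_name

    def switch(self, name):
        """Return to a previously built profile to vary one's exposure."""
        if name in self.profiles:
            self.active = name

    def active_profile(self):
        return self.profiles[self.active]
```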

Strategically expanding on VLOPs’ already existing design features may represent a more pragmatic approach for implementing the DSA’s provisions and objectives in the short term. While it is desirable to achieve a more comprehensive set of design features that ensure full user control of recommender systems, there is a risk that platforms’ resistance to losing influence over user choices, coupled with the multiple technical and regulatory challenges they would face, might result in arbitrary and ineffective solutions at best, and deadlock at worst.

Conclusions

Explanations of how recommender systems work can be useful, but they are only partially meaningful if not complemented with opportunities for user control over recommendations. At present, the design features implemented by very large social media platforms for the explanation and user control of their recommender systems seem to represent a mere case of transparency washing. While the DSA can still drive a historic shift in the experience of social media users, a set of guidelines, a code of conduct, or even a delegated act on how to ensure transparency and user control of recommender systems in online platforms would be essential to enable a meaningful implementation of Articles 27 and 38. And yet, there seems to be no sign of any additional regulatory guidance on the horizon.

It is indeed likely that meaningful user control will not materialize any time soon. This is why we advocate for a more pragmatic approach to implementing design solutions that build on platforms’ existing design features. While we do not envision this as a satisfactory solution in the long term, it would still represent a significant regulatory advancement in platform governance.