Making Recommender Systems Work for People: Turning the DSA’s Potential into Practice
By Alissa Cooper and Peter Chapman,
Knight-Georgetown Institute
The Digital Services Act sets out broad new legal requirements to make recommender systems more transparent and accountable, including for their role in systemic risks. To fulfill that promise, implementation must go beyond basic disclosures and defaults; it must shape how these systems are designed and assessed over time. A new report from the Knight-Georgetown Institute, Better Feeds, and its accompanying EU Policy Brief offer a practical roadmap for putting these goals into action — and putting people first.
Every day, billions of people around the world and hundreds of millions of Europeans scroll through social media feeds, search results, and streaming recommendations that shape what they see, read, and watch. The vast majority of these experiences are curated by algorithmic recommender systems that select, filter, and personalize content and other items across a diverse array of online platforms and services.
The Digital Services Act (DSA) is one of the first regulations in the world to recognize the varied ways that recommender systems influence our public discourse and well-being. The DSA introduces high-level expectations for improving the transparency and accountability of recommender systems, including that platforms disclose how these systems work and provide tools for users to influence what gets recommended. The DSA also includes requirements for researcher data access, including in relation to how recommender systems may contribute to, or mitigate, systemic risks in the European Union.
The DSA’s expectations have transformative potential. However, more specificity is needed if the DSA is going to inform the design of more transparent and accountable recommender systems that effectively mitigate systemic risks. Indeed, in May 2024 the European Commission opened proceedings against Meta in relation to recommender system design and minors. In October 2024, the Commission opened an investigation into the online marketplace Temu’s use of recommender systems. A recent civil society complaint against Meta further alleges that the company is failing to meaningfully operationalize DSA recommender system requirements for Facebook and Instagram.
A new report from the Knight-Georgetown Institute, Better Feeds: Algorithms That Put People First, offers a how-to guide for global policymakers and product designers to address links between recommender systems and risks. An accompanying EU Policy Brief specifically considers how recommendations from the Better Feeds report can inform DSA implementation. This commentary describes how evidence-based insights from the Better Feeds report can help inform implementation of DSA recommender system requirements.
The current state of recommender system design
Many platforms optimize their recommender systems to maximize predicted “engagement” – the chance that users will click, like, share, or stream a piece of content. A common approach to designing these personalized systems assigns fixed weights to specific predictions (such as the probabilities of clicks, likes, or shares) based on their presumed importance to the user, and the system sums these weighted terms to compute a score for each item to be recommended (a piece of content, a page, or a friend account, for example). A more complex recommender system may rely on a neural network (a machine learning model) to generate ranking scores instead.
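As a rough illustration, the sketch below (in Python, with invented signal names and weights) shows the kind of weighted-sum scoring described above; production ranking systems are far more complex, but the underlying logic is similar.

```python
# Minimal sketch of engagement-weighted ranking; the signal names and
# weights are hypothetical, and production systems are far more complex.

# Fixed weights reflecting each predicted action's presumed importance.
WEIGHTS = {
    "p_click": 1.0,  # predicted probability of a click
    "p_like": 2.0,   # predicted probability of a like
    "p_share": 4.0,  # predicted probability of a share
}

def engagement_score(predictions: dict) -> float:
    """Sum the weighted predictions into a single ranking score."""
    return sum(WEIGHTS[name] * p for name, p in predictions.items())

def rank(candidates: list) -> list:
    """Order candidate items (content, pages, accounts) by score, highest first."""
    return sorted(candidates, key=lambda c: engagement_score(c["predictions"]), reverse=True)

# Example: two candidate items with model-predicted engagement probabilities.
feed = rank([
    {"item_id": "post_a", "predictions": {"p_click": 0.30, "p_like": 0.05, "p_share": 0.01}},
    {"item_id": "post_b", "predictions": {"p_click": 0.10, "p_like": 0.20, "p_share": 0.08}},
])
print([c["item_id"] for c in feed])  # -> ['post_b', 'post_a']
```

Because the weights are chosen by the platform, even small changes to them can meaningfully shift which items surface at the top of a feed.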
Optimizing for engagement aligns well with the business model of platforms monetized through advertising. Short-term gains in platform usage mean larger audiences for advertisers. But it can also contribute to risks, including threats to fundamental rights, the spread of illegal content, and problematic overuse or other harms to minors.
Recommender system requirements in the DSA
The DSA establishes a set of high-level requirements aimed at improving the transparency and accountability of recommender systems across online platforms.
- Article 25 prohibits the use of deceptive or manipulative interface designs, seeking to enable intentional and deliberative user interaction.
- Article 27 requires platforms to clearly explain the main and most significant parameters used in their recommender systems, and to allow users to directly and easily select or modify their preferred recommendation settings when multiple options are available.
- Article 28 reinforces protections for minors by mandating proportionate measures to ensure their privacy, safety, and security.
- Articles 34 and 35 focus on risks and mitigations, requiring platforms to assess and mitigate systemic risks stemming from the design and operation of recommender and other algorithmic systems, among other design choices.
- Finally, Article 38 obliges very large online platforms to offer at least one recommender system that is not based on profiling, in line with the definitions and protections of the EU’s General Data Protection Regulation.
Collectively, these provisions aim to build a foundation for more responsible and user-aligned recommender system design.
From principles to practice: Better Feeds guidelines for strengthening the DSA
To realize the DSA’s promise, implementation must go beyond the current level of disclosure and defaults – it must meaningfully shape how recommender systems are designed and assessed over time.
The Better Feeds EU Policy Brief offers a practical roadmap for doing just that. The Better Feeds guidelines center policies and designs that promote long-term user value, where outcomes are aligned with users’ deliberative, forward-looking aspirations or preferences. Grounded in research and broadly aligned with the DSA’s systemic risk framework, the brief emphasizes three key areas: design and public content transparency, user choices and defaults, and assessments of long-term impact.
- Design and public content transparency
Article 27 requires that platforms explain the “main parameters” of recommender systems, including the criteria that are “most significant” in determining the information recommended to users and the reasons for their importance. These disclosure requirements can be interpreted in multiple ways, however, and the first round of DSA audits reveals variation in how platforms have interpreted them in the absence of clear regulatory guidance.
For Article 27 disclosures, specificity and consistency are key. The Better Feeds guidelines describe how platforms should disclose specific input data and weights in ways that allow for baseline interpretation of recommender systems’ main parameters. Consistent disclosures would allow independent experts, users, and the European Commission to examine and compare how recommender systems are optimized across platforms. This could enable more effective risk assessment and mitigation under Articles 34 and 35.
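To make this concrete, a structured disclosure of a system’s main parameters might look something like the sketch below. The field names and values are purely hypothetical; neither the DSA nor the Better Feeds report prescribes this exact format.

```python
import json

# Hypothetical, illustrative disclosure of a recommender system's main
# parameters; the format and field names are invented for this example.
main_parameters_disclosure = {
    "system": "home_feed_ranker",
    "input_signals": [
        {"name": "p_click", "description": "predicted probability of a click", "weight": 1.0},
        {"name": "p_like", "description": "predicted probability of a like", "weight": 2.0},
        {"name": "p_share", "description": "predicted probability of a share", "weight": 4.0},
    ],
    "most_significant_criteria": ["p_share", "p_like"],
    "reasons_for_importance": "Shares and likes are weighted most heavily because they best predict continued engagement.",
}

print(json.dumps(main_parameters_disclosure, indent=2))
```

Disclosures published in a consistent, structured form like this would make it far easier to compare how different systems are optimized.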
Beyond consistent disclosures of input data and weights, the Better Feeds guidelines also recommend that platforms publish a sample of the public content that is most highly disseminated and that receives the highest engagement. DSA Article 40(12) requires platforms to enable the sharing of real-time publicly available platform data with researchers working to identify and understand systemic risks. Publicly available platform data is vital for understanding what users see online and, by extension, how recommender systems surface different types of content.
To date, platforms have provided inconsistent access to publicly available platform data, and KGI is working with leading experts to develop a comprehensive framework for what kind of platform data should be defined as publicly available, under what circumstances, and in what format.
- User choices and defaults
The DSA introduces multiple expectations in relation to user choice and defaults. Article 25 requires platforms to avoid deceptive or manipulative design interfaces. Article 27 requires that users be presented with direct and easily accessible settings to select among different recommender systems, when multiple options are available. Article 28 requires platforms to take proportionate measures to ensure a high level of privacy, safety, and security of minors. Article 38 requires large platforms to offer at least one recommender system option that is not based on user profiling.
The Better Feeds guidelines recommend that platforms optimize recommender systems for long-term user value by default, even (and especially) when other options are present. This means designing recommender systems in ways that are aligned with users’ deliberative, forward-looking aspirations or preferences. The guidelines lay out practical steps to integrate user-facing tools that explain how user interactions can shape future recommendations, including easily accessible ways for users to set their preferences about types of items to be recommended and to be blocked.
These tools can help fulfill the spirit of Article 25’s call to avoid manipulative design and Article 38’s mandate for enabling non-profiling-based recommender options on very large platforms.
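As a minimal sketch of what such preference controls could do under the hood – with hypothetical topic labels, field names, and boost factor – a system might filter and re-score candidate items according to a user’s stated preferences before ranking them:

```python
# Minimal sketch of applying user-set preferences before ranking;
# the topic labels, field names, and boost factor are hypothetical.
user_prefs = {
    "blocked_topics": {"gambling"},                  # never recommend these
    "preferred_topics": {"science", "local_news"},   # boost these
}
BOOST = 1.5

def apply_preferences(candidates: list) -> list:
    """Drop blocked items and boost scores for preferred topics."""
    filtered = [c for c in candidates if c["topic"] not in user_prefs["blocked_topics"]]
    for c in filtered:
        if c["topic"] in user_prefs["preferred_topics"]:
            c["score"] *= BOOST
    return sorted(filtered, key=lambda c: c["score"], reverse=True)

feed = apply_preferences([
    {"item_id": "a", "topic": "science", "score": 0.4},   # boosted to 0.6
    {"item_id": "b", "topic": "gambling", "score": 0.9},  # removed
    {"item_id": "c", "topic": "sports", "score": 0.5},
])
print([c["item_id"] for c in feed])  # -> ['a', 'c']
```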
- Assessments of long-term impact
Platforms frequently assess changes to recommender system design. Every year, product teams run thousands of experiments to evaluate design changes against company-selected metrics. Given the frequency of experimentation, many platforms also maintain a holdout group – a set of users who are exempt from having design changes applied to their accounts and who function as a control group for comparison with the rest of the user base.
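One common way to implement a holdout – sketched below with a hypothetical salt and holdout fraction – is to assign users deterministically by hashing a stable identifier, so the same users remain exempt from design changes for the life of the experiment.

```python
import hashlib

# Minimal sketch of deterministic holdout assignment; the salt and the
# 5% holdout fraction are hypothetical.
HOLDOUT_FRACTION = 0.05
EXPERIMENT_SALT = "feed_ranking_holdout_v1"  # fixed for the experiment's lifetime

def in_holdout(user_id: str) -> bool:
    """Bucket users deterministically so the same ones stay exempt over time."""
    digest = hashlib.sha256(f"{EXPERIMENT_SALT}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 10_000 < HOLDOUT_FRACTION * 10_000

# Holdout users keep the existing system; everyone else receives design
# changes, enabling long-term comparison of outcomes between the groups.
users = ["user_001", "user_002", "user_003"]
holdout_group = [u for u in users if in_holdout(u)]
treatment_group = [u for u in users if not in_holdout(u)]
```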
Long-term holdout experiments can be an important tool for understanding systemic risks and mitigations under Articles 34 and 35, as well as audits under Article 37. Platforms should establish validated ways of measuring how recommender systems connect to systemic risks such that these measurements can be independently analyzed and compared over time and across platforms.
The Better Feeds EU Policy Brief describes how policymakers and platforms can integrate the use of long-term holdout experiments into risk assessment, including both annual systemic risk assessments and new product feature assessments in areas likely to have “critical impact” under Article 34. Long-term holdout experiments are an important way to understand the degree to which platforms optimize for long-term user retention, value, and satisfaction.
If platforms shared aggregated, non-confidential results of long-term holdout experiments through DSA assessments, they would have a strong incentive to align their recommender systems with users’ interests: otherwise, the results would reveal that product changes are leaving users less satisfied than those in the holdout group.
Consistent metrics to measure harm, systemic risk, and long-term user value, particularly in relation to at-risk populations, are an essential part of effective risk assessment and mitigation under Articles 34 and 35. The Better Feeds guidelines recommend platforms incorporate a range of methodological tools to measure harm and systemic risk, including user surveys, usage tracking, engagement data, and other methods focused on at-risk populations.
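As a simple illustration – with invented numbers and a hypothetical survey-based metric – the kind of aggregated, non-confidential result described above could be as minimal as a comparison of mean outcomes between the treatment group and the long-term holdout:

```python
from statistics import mean

# Illustrative only: invented satisfaction scores (e.g., self-reported on a
# 1-5 scale) for users receiving design changes versus the long-term holdout.
treatment_scores = [3.8, 4.1, 3.5, 3.9, 3.6]
holdout_scores = [4.0, 4.2, 3.9, 4.1, 3.8]

summary = {
    "metric": "self_reported_satisfaction",  # hypothetical metric name
    "treatment_mean": round(mean(treatment_scores), 2),
    "holdout_mean": round(mean(holdout_scores), 2),
}
summary["difference"] = round(summary["treatment_mean"] - summary["holdout_mean"], 2)

# A negative difference indicates that the accumulated product changes left
# users less satisfied than those in the holdout group.
print(summary)  # -> treatment_mean 3.78, holdout_mean 4.0, difference -0.22
```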
Towards Better Feeds
The DSA lays an important foundation by connecting recommender system design to systemic risk and by establishing requirements to increase the transparency and accountability of recommender systems. By integrating empirical lessons from Better Feeds into DSA implementation, platforms and policymakers can advance toward a future where recommender systems are not just more transparent, but more accountable to users’ long-term needs and democratic values. The DSA can deliver on its potential – not only by making recommender algorithms more understandable to users, but also by making them work better for people.
The Knight-Georgetown Institute (KGI) is based at Georgetown University in Washington, D.C. KGI bridges the gap between independent technology research and the urgent needs of policymakers and product designers navigating this complex landscape.