Reclaiming the Algorithm: What the DSA can—and can’t—fix about recommender systems
By Katarzyna Szymielewicz, Panoptykon
Europe’s information environment has a structural vulnerability so long as dominant platforms continue to optimise their recommender systems for engagement rather than democratic resilience. This piece examines how the DSA can be used to push platforms toward algorithms that better serve the public interest—through systemic-risk mitigation, design obligations, and enforcement—and what meaningful recommender-system interventions could look like in practice.
For nearly a decade, Panoptykon has explored legal and technological solutions to protect vulnerable individuals, and society at large, from harms caused by online platforms’ recommender systems optimised for short-term profit. In 2019, I published a manifesto calling for a reinvention of the internet’s broken data ecosystem and for putting users back in control of their experience. In 2021, when the DSA was being negotiated, we fought for an amendment that would force Very Large Online Platforms (VLOPs) to allow third-party recommender systems on their platforms, thus giving users a real choice in shaping their feeds.
Since the DSA became fully operational, we have been monitoring, together with researchers and civil society experts who joined the Recommender Systems Task Force, how VLOPs (fail to) comply with their obligation to mitigate systemic risks caused by their algorithmic systems. Going beyond this critique, in 2024 we developed a recipe for designing safer social media platforms. This piece builds on the concepts developed by the Recommender Systems Task Force and lays out how more ambitious enforcement of the DSA — supported by its upcoming revision and the new Digital Fairness Act — can create space for more responsible, human-centered social media. For a more nuanced analysis of how these policy openings can be used to advocate for what we call “algorithmic pluralism”, see this working paper.
Are we ready for cognitive warfare?
On the night of September 10, 2025, around twenty Russian drones violated Polish airspace. NATO was forced, for the first time since its founding, to open fire on enemy aircraft in Europe. Within hours, Polish social media filled with a torrent of disinformation about the event. Russian bots and fake accounts sought to cast doubt on the provenance of the drones, accusing Ukraine of the provocation. “Over the course of that night, we analyzed around 200,000 mentions spreading the Russian narrative, or 200 to 300 mentions per minute,” said Michal Fedorowicz, president of the Res Futura collective.
In this case, like many others before it, social media platforms failed to stop the wave of disinformation from reaching millions of citizens across the country and beyond. This episode fueled further polarisation and deepened anti-Ukrainian sentiment in parts of the Polish public. And it exposed a structural vulnerability in Europe’s information environment, one we must be prepared to address more quickly and decisively next time, when the stakes could be even higher.
At the crux of this vulnerability is not disinformation alone, but how social media recommender systems, optimised for engagement rather than the public interest, increasingly enable disinformation to thrive. According to the EP Youth Survey (2024), these systems now serve as the primary source of political information for Europeans under 30. 76% of respondents said they encountered disinformation or fake news in the previous seven days alone.
This should not come as a surprise. We know that the largest online platforms rank and present content based on algorithmic predictions of user engagement, and that such engagement often correlates negatively with quality. It turns out the things that keep people clicking—like sensationalised language, outrage cues, angry emojis, and engagement bait—tend to amplify the extreme, the negative and the divisive.
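As a rough illustration of this logic, the sketch below shows how an engagement-optimised feed might score and order posts. It is a minimal, hypothetical example: the signal names (predicted clicks, comments, angry reactions) and the weights are assumptions made for illustration, not any platform’s actual model.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    p_click: float        # predicted probability the user clicks
    p_comment: float      # predicted probability the user comments
    p_angry_react: float  # predicted probability of an "angry" reaction

# Hypothetical weights: an engagement-optimised objective rewards any predicted
# reaction, including outrage, because all of it keeps users on the platform.
WEIGHTS = {"click": 1.0, "comment": 3.0, "angry_react": 2.0}

def engagement_score(post: Post) -> float:
    return (WEIGHTS["click"] * post.p_click
            + WEIGHTS["comment"] * post.p_comment
            + WEIGHTS["angry_react"] * post.p_angry_react)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first; content quality plays no role.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in such an objective distinguishes a thoughtful post from engagement bait: whatever is predicted to provoke a reaction rises to the top.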
In elections, these design choices are especially consequential. Social media algorithms shape not only what individuals see, but what becomes politically salient. When results balance on a knife’s edge (as in Poland’s last presidential election, decided by 1.8%) the risk of manipulation in decisive moments should not be underestimated. Yet platforms’ engagement-driven optimisation creates an environment that rewards sensational, emotionally charged narratives, which is a dynamic adversarial actors can readily exploit.
Reclaiming the Algorithm: What the DSA makes possible
The Digital Services Act (DSA) gives the EU, for the first time, a legal basis to demand accountability for how recommender systems shape the information environment, especially when they contribute to risks like electoral interference. The following outlines what such accountability could look like in practice.
Facebook’s “break-glass” measures: a precedent without transparency
In the run-up to the 2020 US presidential election, Facebook (now Meta) implemented dozens of temporary emergency changes to its news feed. These “break-glass” measures, 63 in total, were meant to slow the spread of inflammatory and misleading content (including the voter-fraud misinformation peddled by then-candidate Donald Trump). They included “proportional demotion” of content likely to involve hate speech or violent incitement, and the downranking of content containing keyword matches for voter fraud or delegitimization claims.
This episode showed that platforms can indeed modify their recommender systems quickly when they decide the stakes are high enough. But it also exposed the limits of relying on unilateral corporate discretion. To this day, we still know very little about what Meta actually changed or what effects those changes had, and scientific inquiry into the issue has only delivered fragments of the answer. In any case, allowing a single company to reshape the flow of political information, without any transparency or independent oversight, is not a governance model the EU, or any democracy, should accept.
The DSA’s structural alternative: Ongoing risk mitigation (Art. 35)
Rather than rely on platforms to opaquely take “break-glass measures” as they see fit, the DSA creates a legal architecture designed to safeguard against systemic risks. The largest online platforms are required to manage such risks on an ongoing basis, including those affecting civic discourse, media freedom and pluralism. Under Article 35(1), this explicitly includes adapting their online interfaces and recommender systems as necessary to reduce foreseeable risks.
The Commission’s March 2024 election guidelines recognise that recommender systems can shape the information environment and public opinion (see sec. 3.2.1. point d), and point to interventions such as reducing the visibility of election disinformation through clear and transparent methods (for example, by downranking content that has been fact-checked as false, or posted from accounts repeatedly found to spread disinformation).
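To make “downranking” concrete, here is a minimal, hypothetical sketch of how a demotion multiplier could be layered on top of a base ranking score. The labels and thresholds are illustrative assumptions, not drawn from the guidelines or from any platform’s actual system.

```python
def demotion_multiplier(post: dict) -> float:
    """Return a factor in (0, 1] that scales down a post's base ranking score.

    Illustrative rules only: a real system would rely on fact-checking
    signals, classifier scores and account-level history.
    """
    factor = 1.0
    if post.get("fact_checked_false"):
        factor *= 0.1  # strong demotion for content rated false by fact-checkers
    if post.get("repeat_disinfo_account"):
        factor *= 0.5  # softer demotion for repeat-offender sources
    # "Proportional demotion": scale down in proportion to a classifier's
    # confidence that the content violates a policy (e.g. incitement).
    factor *= 1.0 - 0.8 * post.get("p_violating", 0.0)
    return factor

def adjusted_score(post: dict) -> float:
    # The platform's existing ranking score, whatever it optimises for,
    # is simply multiplied by the demotion factor.
    return post["base_score"] * demotion_multiplier(post)
```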
If a platform fails to fulfil its risk-management obligations, the Commission can escalate by opening investigations, imposing fines, or, where public security or the integrity of democratic processes is at stake, adopting interim measures under Art. 70(1).
A test case: Romania 2024
Despite the breadth of Articles 34 and 35, supported by the Commission’s guidelines, enforcement has so far been cautious in tying electoral integrity risks to recommender systems. An exception is the high-profile proceeding against TikTok following Romania’s controversial 2024 presidential election, which saw the Russia-aligned candidate Călin Georgescu rise suddenly from political obscurity to presidential frontrunner thanks to a hugely influential TikTok campaign that was, according to the Romanian intelligence services, spurred on by foreign interference. The Commission is still investigating TikTok’s management of risks linked to coordinated inauthentic manipulation and automated exploitation of its service, particularly its recommender systems.
The Romanian case is likely to become a landmark test of how platforms are expected to handle electoral risks: in particular, what counts as adequate mitigation, and how far recommender system design itself can be understood as a potential source of harm.
Yet the novelty of the DSA’s systemic-risk framework, and the complexity of this case, could mean that enforcement outcomes will take a long time to materialize. In the meantime, we do not know what TikTok has changed (if anything) in its recommender system after what happened in Romania.
When manipulative design violates the DSA (Art. 25)
Importantly, the DSA does not only regulate systemic risk. Article 25 prohibits platforms from designing, organising, or operating their interfaces in ways that deceive or manipulate users. Engagement-based ranking, which largely infers user preference from impulsive behaviour rather than explicit, considered choices, sits uncomfortably close to that line.
As the Knight-Georgetown Institute argues in its “Better Feeds” analysis, impulsive clicks do not necessarily reflect users’ deeper preferences. The report offers this potato chip analogy:
A person attending a party might eat a whole bowl of potato chips, and the party host might take this as a sign to refill the chip bowl. But perhaps the party guest is eating impulsively, when in fact they have a long-term goal to be eating healthier food. The guest’s impulsive behavior is misaligned with their underlying preference, but the host interprets the behavior as a sign of what the guest wants.
In a similar manner, impulsively dwelling on, clicking, and liking certain content (say, content that is risky in some way) does not necessarily reflect the user’s forward-looking desires to the platform. A platform that concludes from this engagement that the user must want more and more of this content would be falsely assuming that the user’s impulsive, mindless, or myopic behavior equates with their long-term preferences. (Better Feeds…, p. 17)
If platforms systematically optimise for “impulsive, mindless, or myopic” behaviour without giving users meaningful ways to express reasoned preferences, that may amount to manipulation under Article 25. Enforcing design obligations could furthermore complement risk-based enforcement, by addressing harmful incentives at the source rather than only after harms materialise downstream.
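To illustrate the distinction, here is a hypothetical sketch of a ranking objective that weighs explicit, considered choices (declared interests, follows, “show less of this” feedback) above impulsive behavioural signals. All signal names and weights are assumptions made for illustration, not any platform’s actual model.

```python
def preference_aware_score(post: dict, user: dict) -> float:
    """Toy objective that privileges stated preferences over impulsive signals."""
    # Impulsive, behavioural signals: weak evidence of what the user wants.
    impulsive = (0.2 * post["p_click"]
                 + 0.1 * post["predicted_dwell_seconds"] / 60)

    # Explicit, considered signals: stronger evidence of stated preferences.
    explicit = 0.0
    if post["topic"] in user["declared_interests"]:
        explicit += 1.0
    if post["author"] in user["followed_accounts"]:
        explicit += 0.5
    if post["topic"] in user["show_less_of"]:
        explicit -= 2.0  # respect a reasoned "show me less of this" choice

    return impulsive + explicit
```

The point is not the particular numbers, but that the objective gives users’ deliberate choices enough weight to override whatever their impulsive clicks happen to suggest.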
What about Article 38 and alternative feeds offered by VLOPs?
Article 38 of the DSA mandates that users be given, for each recommender system, an option that is “not based on profiling”. Could that be the answer for those who feel disappointed or abused by the algorithmic feed? Why is it not enough?
While all VLOPs have developed alternative ranking feeds, their usability and value to users remain poor. In practice, alternative feeds are either chronological or built around the most popular (“trending”) topics, which either makes them irrelevant or drags the quality of content to the very bottom. While non-personalised feeds are, technically speaking, available, VLOPs do not set them as the default. And, according to KGI, in most cases “alternative feeds are difficult to access and not well understood”.
As a result, few users choose this option and even if they try, many switch back to engagement-optimized feeds due to a poor user experience. KGI warns that “This outcome allows platforms to claim that users prefer engagement-optimized ranking, obscuring the spectrum of alternative designs, and it provides no incentive for platforms to improve their user experience beyond the baseline set by ranking for predicted engagement.”
At Panoptykon, we advocate for a combination of safer defaults and authentic personalisation as a path to a healthier and better social media experience. While engagement-oriented ‘personalisation’ of the algorithmic feed leads to many documented harms, we believe there are ways to enable authentic personalisation, initiated and controlled by the user. I will explore them in Part 2 of this blog post, focusing on social media interoperability.
How to fix the logic of recommender systems and keep personalisation
Since the DSA came into force, several whistleblowers, alongside civil society researchers and advocates, have documented a range of recommender-system changes that platforms could implement if they treated their DSA obligations (including interface design and systemic risk mitigation) seriously. They argue that redesigning recommender systems to promote trust, social cohesion and deliberation is not only a moral imperative, but technically within reach.
One widely discussed design change, sometimes described as “bridging-based ranking,” would adjust recommender system ranking to reward content that generates constructive engagement across ideological lines, rather than maximising attention and within-group outrage. Another approach would be to rank credible news outlets higher on political topics, based on signals like transparent ownership and editorial standards.
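As a rough illustration of the bridging idea, the sketch below rewards content whose positive reactions are spread across, rather than concentrated within, ideological groups. The two-group split and the scoring rule are simplified assumptions, not the method of any specific proposal.

```python
# Simplified assumption: users are assigned to one of two ideological groups.
GROUPS = ("group_a", "group_b")

def bridging_score(reactions: list[dict]) -> float:
    """Toy bridging signal: reward approval that crosses group lines.

    `reactions` is a list of dicts like {"group": "group_a", "positive": True}.
    """
    approvals = {g: 0 for g in GROUPS}
    for r in reactions:
        if r["positive"] and r["group"] in approvals:
            approvals[r["group"]] += 1
    total = sum(approvals.values())
    if total == 0:
        return 0.0
    # Approval concentrated in one group scores near zero; approval spread
    # roughly evenly across both groups scores highest.
    balance = min(approvals.values()) / max(approvals.values())
    return balance * total
```

A boost for credible news outlets could be layered on in a similar way, provided the “quality” criteria discussed below are transparent and contestable.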
Both interventions illustrate what is technically possible, but neither is a simplistic fix. Boosting quality news sources, for example, depends on establishing transparent, contestable criteria for “quality” grounded in public-interest standards, which is easier said than done.
Even if these proposals don’t offer a silver bullet, they are driven by a basic question that we should all be asking platforms: what values are they choosing to optimise for in the first place? The Knight-Georgetown Institute argues for shifting recommender objectives away from impulse-driven clicks and toward users’ “deliberative, forward-looking aspirations.” Their analysis even suggests that optimising for long-term user value could improve retention over time, meaning public-interest design and business incentives could align. But it seems unlikely that platform incentives alone would deliver such a paradigm shift.
Enforcing systemic risk obligations (Articles 34 and 35)
This is where the DSA’s systemic risk framework should have bite. The regulation expects the largest platforms to test, monitor, assess, and adapt recommender systems as part of a continuous risk-management cycle (Article 35), and to disclose their efforts publicly in annual reports. In principle, this work is overseen by the Commission and scrutinised by independent auditors (Article 37) as well as vetted researchers, who are empowered under Article 40 to examine internal platform data relevant to systemic risk mitigation.
In practice, however, none of this oversight is yet functioning at the level needed to critically evaluate recommender-system design and its role in systemic risks. As shown by a collective civil society response to the first round of VLOPs’ risk assessments, these reports relied heavily on recycled material and focused on content moderation, neglecting to acknowledge design-related risks while offering unsubstantiated claims about mitigation measures.
Granted, even improved risk assessment reports will always be partial, in that they only disclose summaries of results. This is where independent auditors, vetted researchers and regulators need to come in and provide additional scrutiny. But both the Article 37 and Article 40 mechanisms are in their infancy, and it remains to be seen what level and quality of data access they achieve, and how quickly that access generates insights that can inform enforcement proceedings.
Until we have more meaningful transparency and oversight, changes to recommender systems will remain as opaque as Facebook’s 2020 break-glass interventions, and just as difficult to evaluate or hold anyone accountable for.
Conclusion
If social media platforms serve as critical infrastructure for democratic debate, then engagement-based ranking simply has to go. Systems optimised for emotional volatility are too easily exploited by adversaries, and too fundamentally misaligned with the public interest, to remain the organising logic of Europe’s information environment.
The EU should use the DSA to demand a better model. And the research community has already shown what that could look like. Whether it’s the Knight-Georgetown Institute’s Better Feeds or People Vs Big Tech’s Safe by Default, we have technically grounded frameworks for replacing engagement-driven ranking with designs that genuinely serve the public interest.
And yet, while using the DSA’s levers to reform recommender systems is a necessary and important effort, it may only get us so far, because such interventions would still rely on the existing platform ecosystem to reform from within. If the overarching goal is a more plural and resilient information environment, we should seek regulatory pathways that open the market for recommender systems to competing algorithms and public-interest incentives.
Part 2 of this series will examine what those pathways might look like, and discuss which tools, within and beyond the DSA, could realistically get us there.
