Investigation: Platforms still use manipulative design despite DSA rules

By Chitra Mohanlal, Tech Researcher at Bits of Freedom

Our recent investigation into a selection of Very Large Online Platforms reveals multiple potential breaches of the DSA relating to manipulative design (Article 25), recommender system transparency (Article 27), and the obligation to offer alternative recommender systems (Article 38). The report explains and illustrates several types of manipulative design practices on platforms including Facebook, Snapchat, TikTok, Shein, Zalando and Booking.com. These findings can be used to support enforcement actions under the DSA.


Users constantly make choices when they use a website or app: which content to click on or watch, which options to select, which actions to perform. We expect these choices to be free, and we expect to be properly informed so that our choices align with our goals. Sadly, this does not correspond with what actually happens. Choices are often manipulated through the platform’s design.

This is prohibited by Article 25 of the DSA, which states that online platforms should not be designed in a way that manipulates or deceives users. So-called dark patterns are practices that impair or distort the ability of users to make autonomous and informed decisions through the design, structure, or functions of the platform. Specific practices include, among others: giving more prominence to certain choices, repeatedly requesting a user to make a choice they have already made, making it harder to cancel a service than to subscribe, making some choices more difficult than others, or providing default settings that are difficult to change.

It’s worth noting that these patterns are often used to direct users towards actions that are not in their best interest but rather benefit the platform, which makes these practices especially harmful.

The research and key findings

The aim of this exploratory research was to define specific forms of manipulative design and detect them on a selection of large online platforms: Snapchat, Facebook, TikTok, Shein, Zalando, and Booking.com. By documenting concrete examples of manipulative design, we aim to give regulators a basis for more targeted action against possible violations by platforms.

In this research, we distinguished between two main types of manipulative design. The first involves misleading techniques that deceive users into making choices that they did not intend to make (deceptive patterns). The second refers to techniques that grab and hold the users’ attention, causing them to spend more time on the platform and return more often than intended (unwilled attention-capturing patterns).

Our investigation found evidence of manipulative design on six major platforms, with potential violations of Articles 25, 27, and 38 of the DSA. Specifically:

  • Repetitive and misleading notifications on Snapchat and Facebook, including pop-ups, red badges, and “fake friend” alerts (Article 25(3b))
  • Cookie banners that visually steer users to accept tracking, especially on Facebook and Zalando (Article 25(3a))
  • Pre-selected or hidden defaults during onboarding that undermine user autonomy, especially on Snapchat and Facebook (Article 25)
  • Limited access to non-profiled recommender systems on Facebook, TikTok, and Snapchat; poor retention of user preferences on Facebook (Articles 25, 27, and 38)
  • Account deletion processes on Facebook that are significantly harder to complete than account creation (Article 25(3c))
  • Deceptive urgency cues on Shein, including fake timers and scarcity claims (Article 25)

These findings highlight a pattern of platform design choices that may impair users’ ability to make free and informed decisions, contrary to the goals of the DSA.

How manipulative design shows up in practice

The following offers a closer look at how these practices play out across different platforms. We group the findings into five areas where manipulative design is especially prevalent: notifications, tracking and consent, onboarding, recommender systems, and other dark patterns (like gamification and urgency cues). Each section includes examples and references to relevant DSA provisions.

1. Notifications: Pop-ups, misleading badges, and ‘fake friend’ alerts

Default-on notifications: On social media platforms, almost every single type of notification is turned ‘on’ by default. It takes users considerable effort to manually select which notifications they do and do not want to receive. The number of options can be overwhelming, and platforms generally don’t provide proper walkthroughs to help guide users through the process.

Persistent prompts: Even when a user manages to turn notifications ‘off’ on Snapchat, the platform sends a pop-up message every few days asking them to allow notifications. Article 25(3b) specifically prohibits this practice, stating that “repeatedly requesting that the recipient of the service make a choice where that choice has already been made, especially by presenting pop-ups that interfere with the user experience” is an example of a manipulative practice.

Red badges: Snapchat and Facebook also use attention-grabbing red badges within the platform. These are typically red dots, sometimes with numbers inside them, that indicate how many notifications have been sent to the user. Badges have traditionally been used to signal a new interaction with the user or that the user needs to take an action. Today, they are also used when new content is merely suggested (such as videos or comments the user hasn’t yet interacted with). Because there will always be new content to suggest, the platform can generate these notifications and badges at will. This is arguably manipulative, since users expect the badges to signal something else—namely, a message or interaction from another user.

Moreover, because a badge doesn’t reveal the content of the notification, it manipulates users into clicking. This exploits the user’s fear of missing out and vulnerability to variable rewards, thus undermining their autonomy. Although the DSA does not explicitly mention this practice as a manipulative pattern, it could be addressed in the upcoming Digital Fairness Act (DFA), which aims to regulate addictive design that causes users to spend more time on platforms than intended.

“Fake friend” notifications: Lastly, Snapchat sends numerous so-called ‘fake friend’ notifications. These appear to be messages from a friend or someone the user follows—but they are, in fact, platform-generated. They inform the user that new content is available from that person, not that a message was sent. As a result, users may be tricked and falsely informed, in possible violation of Article 25.

2. Tracking: Cookie consent steering

In cookie banners, users are often steered toward allowing all cookies. In this research, we found examples of this on the websites of Facebook and Zalando. The platforms do this by making the “accept all” button stand out visually, giving it disproportionate prominence. This is specifically prohibited by Article 25(3a) of the DSA, which identifies “giving more prominence to certain choices when asking the recipient of the service for a decision” as a manipulative design practice.

The platforms also include text that encourages users to accept all cookies, while downplaying potential privacy concerns—even though such decisions may carry negative consequences for users’ privacy.

3. Onboarding: Pre-selected choices and hidden defaults

In the onboarding processes of Facebook and Snapchat, certain settings are pre-selected. This increases the likelihood that users unknowingly agree to these settings, which undermines their ability to make autonomous and informed decisions, as described in Article 25 of the DSA.

For example, when creating a new Snapchat account, users are shown suggested “friends” that are pre-selected. This increases the risk that users come into contact with, and share content with, people they do not know and did not intend to share information with.

Another example: when users add information to their Facebook profile, the privacy setting (shown in small grey text and thus easy to miss) is set to “public” by default.

4. Deceptive design in recommender systems: Limited accessibility and control

The order of content (including comments) on social media platforms is automatically determined by profiling-based recommender systems. User data and interactions are used to predict what type of content the user might want to see and what will generate the most engagement. Because this can contribute to risks such as threats to fundamental rights, the spread of illegal content, problematic overuse, and harms to minors, the DSA sets out requirements to improve accountability and transparency in recommender systems.
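
To make that distinction concrete, the sketch below contrasts a profiling-based ordering with the kind of non-profiled, purely chronological alternative the DSA requires. It is a minimal illustration under our own assumptions (the Post and UserProfile fields and the affinity-based scoring are invented for this example), not any platform’s actual ranking code.

```python
# Minimal, illustrative sketch: profiled vs. non-profiled feed ordering.
# All fields and weights are assumptions made for this example.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Post:
    author: str
    topic: str
    posted_at: datetime


@dataclass
class UserProfile:
    # Hypothetical engagement history: topic -> interaction score
    topic_affinity: dict[str, float]


def profiled_feed(posts: list[Post], profile: UserProfile) -> list[Post]:
    """Order posts by predicted interest, inferred from the user's past behaviour."""
    return sorted(posts,
                  key=lambda p: profile.topic_affinity.get(p.topic, 0.0),
                  reverse=True)


def non_profiled_feed(posts: list[Post]) -> list[Post]:
    """Order posts without profiling, e.g. purely by recency."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)


if __name__ == "__main__":
    posts = [
        Post("alice", "fashion", datetime(2025, 6, 1)),
        Post("bob", "politics", datetime(2025, 6, 3)),
        Post("carol", "sports", datetime(2025, 6, 2)),
    ]
    me = UserProfile(topic_affinity={"sports": 0.9, "fashion": 0.2})
    print([p.author for p in profiled_feed(posts, me)])    # ['carol', 'alice', 'bob']
    print([p.author for p in non_profiled_feed(posts)])    # ['bob', 'carol', 'alice']
```

The same set of posts is shown in a different order depending on whether the user’s behavioural data is used, which is exactly the choice Articles 27 and 38 are meant to place in the user’s hands.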

Non-profiled feeds are hard to access: Article 38 of the DSA states that platforms using recommender systems must provide at least one alternative that is not based on profiling. All of the social media platforms we researched formally met this requirement. However, users’ ability to select a non-profiled recommender system is often impaired by the platform’s design. The option is hidden, pointing to a possible breach of Article 25. It may also constitute a breach of Article 27(3), which states that users should be able to modify their preferred recommender system at any time, and that this function should be directly and easily accessible from the location where the information is being prioritized.

This is not the case when the function is placed on a separate page (as with Facebook and Snapchat) or requires a specific gesture to access (e.g. tapping and holding content on TikTok).

Facebook does not remember user preferences: When a user opts for a non-profiled feed on TikTok or Snapchat, the app retains the user’s preferred setting on subsequent visits. This is not the case on Facebook. When users opt for a non-profiled feed on Facebook, the platform simply defaults back to the profiling-based feed once a user closes and reopens the app. This means users must repeatedly reselect their preferred recommender system—despite the DSA’s recital 67, which refers to this as an example of “default settings that are very difficult to change.” Civil society organizations have already filed a complaint against Meta on this basis.

These design choices align with what has previously been stated by Reviglio and Fabbri regarding the regulation of recommender systems under the DSA: Platforms appear to have prioritized the DSA’s transparency requirements without adequately addressing the controllability requirements for recommender systems.

Minimal control over content curation: Even when non-profiled feeds are easily and readily accessible, users may still have very limited options to control what they see. This is where Article 27 falls short: while it requires platforms to provide access to alternatives, it does not mandate more meaningful user control over the experience, such as letting users choose which data may or may not be used.

A report from the Knight-Georgetown Institute reinforces this point, suggesting that greater specificity is needed to ensure platforms meet their obligations around recommender systems and mitigate systemic risks. That report, Better Feeds: Algorithms That Put People First, proposes new guidelines—for example, requiring platforms to design and default to recommender systems that enhance long-term user value and align outcomes with users’ deliberate, forward-looking aspirations or preferences.

5. Other manipulative patterns: Gamification, friction, and urgency

Social pressure through gamification (Snapchat): Snapchat uses a variety of gamification elements and patterns that create social pressure. This includes “snap streaks” with friends, the “snapscore” (a friendship level), and features that allow users to share their location with friends.

Account deletion barriers (Facebook): On Facebook, deleting one’s account is an unnecessarily complex task. The steps required to find this function are unclear and confusing, and once located, the process involves multiple steps to complete. This is specifically cited in Article 25(3c) of the DSA as a manipulative practice: “making the procedure for terminating a service more difficult than subscribing to it.”

Deceptive urgency cues (Shein): Shein clearly employs more deceptive patterns than the other e-commerce platforms we investigated. One such tactic is the use of fake timers that give the impression a discount is about to expire. Another is the use of messages suggesting that a product is almost sold out. Both are designed to pressure users into acting quickly, undermining their ability to make a well-informed decision (Article 25).

The Autoriteit Consument & Markt (ACM) in the Netherlands, together with other European consumer protection authorities in the Consumer Protection Cooperation (CPC) Network, has already issued a warning to Shein regarding these practices. If Shein fails to comply, sanctions may follow under EU consumer law.

Separately, the European Commission is investigating Shein, as a Very Large Online Platform under the DSA, for potential failures to address systemic risks—including those related to consumer protection, public health, and user well-being. This DSA investigation is separate from, but complementary to, the ongoing CPC action.

Conclusion and future research

This research reveals a pattern of manipulative design practices across major platforms that likely violate several provisions of the DSA. Our investigation provides concrete evidence of how these practices are implemented on Very Large Online Platforms and contributes to the emerging evidence base that regulators may draw upon to assess compliance, particularly under Article 25 (online interface design and organisation) and Article 27 (recommender system transparency).

These findings may support enforcement efforts at both the national and EU levels. While the European Commission oversees systemic obligations for VLOPs and VLOSEs, national regulators like the Dutch ACM also play a key role in investigating and enforcing other DSA rules, including those related to manipulative design. The practices documented here can support targeted scrutiny under the DSA.

Follow-up research will focus more closely on how Snapchat uses notifications to manipulate user behavior. Various types of notifications, such as those promoting suggested content or mimicking personal messages, will be logged over time to assess their frequency. In parallel, qualitative interviews will explore how users perceive and respond to these cues, including red badges and other visual triggers, in order to assess impact.
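
As a rough sketch of how that logging could be organised, the snippet below records each notification with a timestamp, platform, and a researcher-assigned category, and tallies how often each category occurs. The categories, field names, and CSV format are our assumptions for illustration; they are not a published protocol from the report.

```python
# Hypothetical sketch of a notification log for the follow-up study.
# Categories and fields are assumptions, not a protocol from the report.
import csv
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

# Assumed categories for the notification types described in this article.
CATEGORIES = {"personal_message", "suggested_content", "fake_friend", "re_engagement_prompt"}


@dataclass
class LoggedNotification:
    received_at: datetime   # when the notification arrived
    platform: str           # e.g. "snapchat"
    category: str           # one of CATEGORIES, assigned by the researcher
    text: str               # the notification text as shown to the user


def frequency_per_category(log: list[LoggedNotification]) -> Counter:
    """Tally how often each notification category appears in the log."""
    return Counter(entry.category for entry in log)


def write_log(path: str, log: list[LoggedNotification]) -> None:
    """Persist the log as CSV so it can be analysed alongside interview data."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["received_at", "platform", "category", "text"])
        for entry in log:
            writer.writerow([entry.received_at.isoformat(), entry.platform,
                             entry.category, entry.text])
```

Comparing the tallies per category over several weeks would make it possible to show, for instance, how often platform-generated “fake friend” alerts appear relative to genuine personal messages.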

If our research shows that users’ expectations consistently diverge from the outcomes of these interactions, especially in ways that nudge attention or drive forms of engagement that go against their explicit preferences, this could strengthen the legal case that certain designs are deceptive under Article 25.

Finally, these findings may also contribute to ongoing policy debates, including the development of the Digital Fairness Act (DFA), which aims to introduce stronger safeguards against addictive and manipulative design practices.

The full report from Bits of Freedom can be accessed here:

https://www.bitsoffreedom.nl/wp-content/uploads/2025/06/20250616-report-exploratory_study_manipulative_design.pdf