DSA Audits: How do platforms compare on influencer marketing disclosures?

By Taylor Annabell, Utrecht University

Under the DSA, social media platforms must provide clear tools for influencers to disclose paid content. But how well do they meet this obligation—and how rigorously is compliance assessed? This post compares eight DSA audit reports on influencer marketing disclosures and finds striking inconsistencies in how audits were conducted, what was measured, and how “compliance” was defined. The findings raise broader concerns about audit transparency, platform-defined standards, and the need for clearer guidance on what adequate disclosure—and meaningful oversight—should look like.


Whenever a social media influencer gets paid to promote a product or service online, they must—under European consumer law—inform their viewers that the content in question is, in fact, an ad. Disclosure rules have consistently been reiterated across media formats, from newspapers to radio, television, and now, social media—and aim to prevent consumers from being misled into thinking that a product recommendation or review is authentic, rather than paid promotion.

Failure to disclose commercial content is not only illegal; it can also have civic implications, especially when it intersects with the political sphere. Take the alleged interference in Romania’s November 2024 annulled presidential election: Micro-influencers on TikTok were paid as part of a political advertising campaign that asked viewers to describe an ideal president, while listing qualities associated with far-right candidate Călin Georgescu. Much of this content, however, was not disclosed as commercial.

Whether hidden advertising helped manipulate TikTok’s Romanian user base during the elections—and whether the platform adequately enforced its own commercial content policy—was among the factors listed by the European Commission when it opened formal proceedings against TikTok in December 2024. According to the Commission’s press release, TikTok is under investigation for potentially failing to meet its DSA risk management obligations with regard to election integrity.

As both the Commission and scholars have identified, social media influencers frequently fail to disclose commercial content, and platform design plays a key role in the current high rates of non-disclosure. That’s why it’s vital that platforms facilitate disclosure processes for influencer commercial content in ways that are transparent, easily accessible, and highly visible.

Platforms’ legal obligation under Article 26(2)

Article 26(2) of the Digital Services Act clarifies the role platforms play in facilitating the disclosure of commercial content through their interface, crystallizing the relationship between platform liability and influencer marketing. Very Large Online Platforms (VLOPs) are required to ensure users can self-disclose when their content is commercial—and that these disclosures are made visible to other users.

Taking this obligation as a starting point, this post examines how social media platforms implement self-disclosure mechanisms for commercial content, according to Independent Audit Reports. These annual audits, mandated by Article 37 of the DSA, assess VLOP and VLOSE compliance with a wide set of obligations (audit reports are published individually by each platform; a consolidated list is maintained in Tremau’s DSA Database).

Through a comparative analysis of eight VLOPs commonly used by influencers, this post identifies significant variation in how audit reports document compliance with Article 26(2). What’s striking is that all platforms were deemed compliant—despite apparent inconsistencies in how auditors understood the obligation, and in the level of rigor applied in their assessments.

While auditors are afforded flexibility in their methodologies, the lack of clear guidance or shared expectations is increasingly problematic. Most auditing firms involved—including EY, Deloitte, KPMG, and FTI Consulting—bring experience primarily from financial and sustainability assurance contexts, which don’t necessarily translate into evaluating platform governance issues such as the operational realities of influencer marketing.

To make audit reports more meaningful as a transparency mechanism, clearer benchmarks and practices are needed, grounded in how influencer marketing actually works on platforms.

How paid influencer content fits into the DSA

While the DSA narrowly defines advertising as content involving remuneration between the platform and an advertiser, Article 26(2) opens the door to a broader set of commercial content, which is especially relevant for influencer marketing. This provision covers not only platform-placed ads but also “content provided by the recipient of the service that is or contains commercial communications.”

Although influencers’ commercial content is a form of native advertising, it doesn’t fit neatly into the DSA’s core definition of advertising. During the legislative process, stakeholders such as the European Parliament’s IMCO Committee pushed to explicitly include influencer marketing in the scope of DSA transparency obligations—but this language was ultimately dropped.

Still, it can be argued that the DSA connects to influencer marketing in at least four relevant ways:

  • Illegal content (Article 3h): Hidden advertising may violate EU consumer law and thus could be interpreted as illegal content, which platforms are obliged to address transparently.
  • “Dark patterns” (Article 25): If a platform’s interface makes it difficult to disclose sponsored content, this could amount to manipulative design in violation of Article 25.
  • Systemic risk assessment and mitigation (Articles 34 and 35): Paid influencer content may contribute to systemic risks—for example, in relation to civic processes (as in the Romanian election) or negative effects on minors—which VLOPs and VLOSEs are required to address where such risks may stem from the design, functioning, or use of their service.
  • Commercial communications (Article 26(2)): Platforms must provide users—including influencers—with clear disclosure mechanisms for commercial content. This includes functionality to self-declare commercial posts and ensure disclosures are visibly marked for other users.

This last provision—Article 26(2)—has become especially relevant in recent DSA audit reports. While it does not explicitly mention influencer marketing, its reference to “commercial communications” draws on the E-Commerce Directive, which defines the term broadly to include “any form of communication designed to promote, directly or indirectly, the goods, services or image of a company, organisation or person” engaged in commercial activity.

This definition arguably covers influencers, who are considered traders for the purposes of EU consumer protection law.

How Article 26(2) applies in practice

In practice, auditors appear to interpret Article 26(2) as the DSA’s de facto influencer disclosure clause. Although influencer marketing was practically scrubbed from the final DSA text, platforms’ monetization features and today’s blurred lines between organic and commercial content keep the issue very much within scope.

Under 26(2), platforms must (1) provide functionality on their interface for users to self-declare when content is or contains commercial communication and (2) ensure that the disclosure of commercial content is identifiable in real-time, including through prominent markings, to other users on the platform.

In other words: if users (influencers) are promoting products, platforms must make commercial content disclosure not only possible, but clearly visible to other users.

Comparing the audit reports: Not all disclosures are created equal

Before turning to the audit findings, it’s useful to compare how platforms actually implement disclosure tools in practice.

Seven of the eight audited social media platforms offer a toggle or interface tool that allows users to disclose when content is commercial. The exception is X, where users are expected to indicate commercial content manually, using hashtags like #ad, #paidpartnership, or #sponsored.

When an influencer activates a disclosure toggle, a label is added to the post. However, the wording of these labels, or “tags”, varies across platforms.

As shown in Table 1, most platforms use tags that include the word paid, a preference which—as we’ve previously argued—can problematically obscure non-monetary forms of payment like gifts, trips, or free products. If the language of the tag doesn’t match the influencer’s understanding of what counts as a commercial relationship, they may be less likely to use the toggle—creating a gap between legal obligations and actual platform behavior.

Table 1: Interface features for facilitating commercial content disclosure by eight VLOPs

 

Platform | Name of function
Facebook | ‘Paid Partnership’ tag
Instagram | ‘Paid Partnership’ tag
LinkedIn | ‘Brand Partnership’ tag
Pinterest | ‘Promoted by’ or ‘Sponsored by’ tags*
Snapchat | ‘Paid Partnership’ tag
TikTok | ‘Paid Partnership’ or ‘Promotional Content’ tags
X | Use of hashtags, e.g. #ad, #paidpartnership, #sponsored
YouTube | ‘Includes paid promotion’ tag

*According to the audit report, not Pinterest’s webpages

Auditor terminologies show gaps and inconsistencies

Comparing the audit reports against platform documentation helps clarify what was evaluated—and highlights where important features may have been overlooked. In the case of Pinterest, this comparison revealed a notable gap in the audit.

In its audit report, Pinterest’s auditor EY does not mention the platform’s “paid partnership” label—a feature the platform explicitly promotes as a disclosure tool for influencers, according to its Business Help Centre and a 2022 media release. Instead, the audit seems to focus solely on how advertisers disclose commercial content, leaving out user-facing mechanisms entirely.

This omission raises the question: was the influencer disclosure tool considered out of scope, or was it overlooked? The answer is unclear. EY also states that commercial content on Pinterest is automatically labeled with phrases like “Promoted by” or “Sponsored by,” regardless of whether users self-disclose. But this description does not match Pinterest’s own policy, which undermines the credibility of the auditor’s assessment concerning recipients.

Pinterest is the only social media platform where this particular mismatch was so evident in the first round of audits. Still, the example points to broader concerns: To what extent are the audit reports detailed enough to clarify what was actually assessed, and how rigorous are those assessments?

It also raises a more structural issue: namely, whether auditing firms have sufficient expertise to distinguish, for example, between advertiser tools and influencer mechanisms—and to assess each according to its specific obligations under the DSA.

Auditors use inconsistent approaches to assess compliance

Audit reports outline the methods auditors used to evaluate each obligation under the DSA. This includes descriptions of audit procedures and justifications for methodological choices—such as the types of information accessed, sources of evidence, sampling methods, and benchmarking criteria.

Table 2 summarizes the procedures used by auditors across the eight social media platforms, highlighting the techniques and evidence relied on in assessing compliance with Article 26(2).

Table 2: Audit procedures used by auditors of eight VLOPs to assess 26(2)

Platform (auditor): Facebook (EY), Instagram (EY), LinkedIn (Deloitte), Pinterest (EY), Snapchat (EY), TikTok (KPMG), X (FTI Consulting), YouTube (EY)

Audit procedures (used by some or all auditors):

  • Inquired with management or control owners about the platform’s approach
  • Inspected user access to disclosure tag
  • Inspected code of commercial content disclosure tag
  • Inspected visibility of disclosure tag on the interface
  • Tested relevant IT controls
  • Assessed whether processes and controls were appropriate
  • Inspected display of disclosure label code
  • Inspected platform policy for commercial content
  • Inspected use of disclosure tag in sample
  • Inspected access to disclosure tag in sample
  • Inspected translation of disclosure label into EU member state languages
  • Assessed capability to detect inappropriate labelling
  • Assessed sufficiency of hashtag approach
  • Inspected whether disclosure tag included in backend data of test content

As the table shows, the only procedure used by all auditors was engaging with management or control owners to understand the platform’s process for enabling users to self-disclose commercial content.

Among the seven platforms that offered disclosure tools, auditors varied in their approach:

  • All seven navigated the platform to access the disclosure tag;
  • Five assessed the visibility of the tag on the user interface;
  • Five scrutinized the underlying code related to the tag;
  • Four evaluated how the tag is displayed;
  • Two checked whether the tag was translated into EU member state languages.

In keeping with this procedural variance, the audit reports also varied in how they assessed disclosure tags across different formats.

For example, EY reported inspecting “surfaces for which users could label branded content” on Facebook and Instagram. But without specifying what those surfaces were, it’s unclear how thoroughly different user pathways were tested—which is especially relevant, given that the disclosure process differs across content and account types.

In contrast, EY’s audit of YouTube does acknowledge differences in how disclosure tools function across access points. It noted that users uploading videos via the mobile app cannot access the paid partnership tag in the same way as desktop users—though the tag is available for YouTube Shorts. In other words, the availability of disclosure tools depends not only on the platform itself, but also on how users access it (browser-based or mobile app) and what type of content they post.

It’s unclear from most of the audit reports whether similar testing was carried out—and if so, what auditors’ findings were.

Notably, this inconsistency isn’t just between auditors, but also across audits carried out by the same firm. In EY’s case—responsible for five of the eight audits examined here—the level of detail varies considerably. Whether this reflects differences in platform cooperation, audit scope, or EY’s own methodology and reporting practices is difficult to determine, given the limited disclosures provided in the reports.

Auditor sampling methods are ambiguous

These inconsistencies across audit reports are particularly noticeable when it comes to sampling: which content was tested, how it was selected, and what was reported.

EY reports using sampling methods in its audits of Facebook, Instagram, and Snapchat, but not for Pinterest or YouTube. None of the EY audits specify sample sizes. Only Deloitte’s audit of LinkedIn provides this information—though the sample consisted of just a single video illustrating the access process.

Even where sampling is mentioned, the details are sparse. For Snapchat, EY notes that it examined tag availability in both the iOS and Android mobile apps. For Facebook and Instagram, it refers to inspecting a sample of “commercial communication placements,” confirming that posts were declared as “Paid Partnership.”

Yet the sampling process remains ambiguous: how were these posts selected for the sample? What criteria were used to evaluate them? Did the review only confirm that a tag was visible on the interface, or did it assess whether the tagged posts were indeed commercial—and properly disclosed?

The issue of correct labeling practices also comes up in the X audit. FTI Consulting notes that X lacks mechanisms to detect undisclosed commercial content beyond user reporting—however, this framing arguably misinterprets the scope of Article 26(2), which does not require platforms to monitor or enforce disclosure of user content, but to facilitate it.

In other words, responsibility for disclosing commercial content falls to the user—putting aside the question of what platforms should do about intentional or inadvertent non-disclosure.

Lack of standards limits audit consistency

Comparing the audit procedures and sources of information used to assess compliance reveals variation not just in methodological rigor, but also in how Article 26(2) itself is interpreted. For example, it’s unclear whether the translation of disclosure labels into EU member state languages should be checked, as was done in the audits of Facebook and Instagram.

Another example: Only four of the eight audit reports reference the platform’s commercial content policy as part of the assessment process. In the case of Instagram and Facebook, EY states that it reviewed the company’s own definitions around when users are expected to disclose content. However, our research finds Meta’s branded content policy inadequate for addressing the full range of circumstances that require disclosure under European consumer law.

Auditors should therefore—when assessing compliance with Article 26(2)—test platforms’ definitions against existing regulatory frameworks. Doing so would help ensure much-needed consistency across the platform ecosystem.

A broader issue is the absence of any clear criteria within Article 26(2) itself that would establish when a platform has sufficiently fulfilled the obligation to provide disclosure functionality. None of the audit reports propose a threshold, either. Uncertainty persists around core elements of this obligation: for example, how visible or accessible does the functionality need to be? Must it be implemented across both web and mobile interfaces?

In the absence of such standards, some platforms introduce their own definitions and benchmarks. Snapchat’s audit, for instance, defines “unambiguous” as “leaving no room for multiple interpretations”—referring to the requirement that recipients should be able to identify commercial content in a “clear and unambiguous manner.” Nevertheless, such definitions remain vague as to when a disclosure label is sufficiently clear.

Moreover, these definitions are provided by platforms—not auditors—and reflect the broader emphasis on self-assessment within the audit process. As such, the interpretative power granted to platforms also limits the extent to which audit reports enable us to scrutinise how commercial content disclosures are being implemented in practice.

Comparing audit conclusions and recommendations

Auditors determined that all eight platforms were in compliance with Article 26(2). This uniform outcome reflects both the flexibility platforms have in operationalising disclosure functionality (whether through toggle tools or hashtag-based systems) and the variety of audit methodologies used to evaluate compliance.

Four platforms—Facebook, TikTok, X, and YouTube—received a verdict of “positive with comments,” meaning auditors considered them compliant but still issued recommendations. The following table summarises these comments and recommendations, along with the platforms’ responses as documented in their Audit Implementation Reports.

Table 3: Comments and recommendations from auditors, and responses from platforms that received ‘positive with comments’ for 26(2)

Facebook
Audit conclusion: Although each inspected placement and surface provided functionality to self-disclose commercial content, there was no benchmark of placements and surfaces.
Recommendation: Facebook should create and periodically review a repository of all surfaces which display commercial content, and establish a process for updating labels when commercial content is turned into paid ads.
Response: By the end of 2024, Facebook will create a central repository of all active Branded Content placement locations across surfaces (e.g., Feed, Marketplace) and platforms (e.g., Web, iOS), designed to include new and experimental placements, and enable periodic review.

TikTok
Audit conclusion: TikTok could not provide adequate evidence regarding its General IT Controls for systems supporting the functionalities and associated controls, although the auditor carried out other procedures to mitigate the risks.
Recommendation: TikTok should conduct a risk assessment to identify IT risks related to the effectiveness of the automated functionalities and strengthen General IT Controls to mitigate these risks.
Response: TikTok will complete an IT risk assessment focusing on the automated functionalities and controls, to identify any risks to compliance objectives, and strengthen General IT Controls.

X
Audit conclusion: X does not detect commercial content that has not been appropriately labelled but relies on user reporting.
Recommendation: X to continue to develop space for reporting undisclosed paid partnerships, and to provide a ‘Commercial Content’ tag on posts.
Response: X justified not implementing the recommendations based on the auditor’s positive conclusion and the existing reporting mechanisms for violations of the Paid Partnership Policy.

YouTube
Audit conclusion: Although functionality allows YouTube Videos and Shorts to be declared as containing commercial content, this is not consistent across mobile app and website.
Recommendation: YouTube should expand the functionality to allow users to self-disclose commercial content for YouTube Videos on the YouTube Studio Mobile App.
Response: YouTube will introduce functionality in the app that enables users to disclose information during the video upload process.

 

These recommendations highlight the differing interpretations of what Article 26(2) actually requires.

For instance, EY recommended that Facebook develop and maintain a repository to document its disclosure functionality across different surfaces and operating systems. Yet similar conditions exist across other platforms, many of which support multiple content formats and interfaces. It is unclear why this recommendation was issued to Facebook specifically, and not to others with comparable system complexity.

Responses from platforms also vary. Both X and YouTube assert that their current systems already fulfill the obligation. YouTube nonetheless agrees to implement the recommendation, while X declines to introduce a disclosure label similar to those used on other platforms. X’s decision effectively sustains a fragmented approach to commercial content disclosure—rather than pushing toward greater consistency across the ecosystem, in which commercial content would always be accompanied by a label.

Conclusion: Clarifying compliance under Article 26(2)

A close review of the audit reports for just a single DSA obligation—Article 26(2)—echoes previous critiques of audits and their limitations as tools for platform accountability. Across the eight assessments, we see wide variation in how the obligation is interpreted, the rigor with which assessments were conducted, the kinds of evidence auditors relied on, and the procedures used to assess compliance. These inconsistencies make it difficult to compare outcomes across platforms and raise questions about what standards, if any, are guiding the process.

When it comes to the functionality for disclosing commercial content, the uniformly positive conclusions by auditors are particularly troubling. Our previous research finds that self-disclosure mechanisms on platforms like Instagram, Snapchat, TikTok, and YouTube fall short due to their (in)accessibility and (in)visibility. And yet, auditors still deemed these systems compliant. In doing so, the reports effectively legitimize a model in which platforms are free to offload responsibility for legal compliance onto influencers, while failing to ensure that disclosure functionality actually works in alignment with consumer protection law.

As the Commission reflects on this first cycle of audit reports, additional guidance could bring much needed clarity to both platforms and auditors on how to address these ambiguities. In the case of Article 26(2), this could include:

  1. Clarifying how disclosure functionality should work across different surfaces, operating systems, account types, content types, and access modalities (e.g. browser vs. mobile app);
  2. Establishing whether the accessibility and visibility of this functionality should be part of the compliance assessment;
  3. Clarifying whether platform definitions of “commercial communication” align with relevant EU consumer protection law;
  4. Setting criteria for what qualifies as a “clear and unambiguous manner” for the disclosure display.

Without clearer guidance, audits risk reinforcing the very opacity and unevenness the DSA was meant to address.


The author would like to acknowledge Catalina Goanta and John Albert for their invaluable comments and feedback.