
The Digital Services Act (DSA), adopted by the European Union in 2022 and fully applicable since February 2024, with its heaviest obligations falling on "very large online platforms" (VLOPs) such as X, Facebook, TikTok, and Google, is not officially presented as an instrument of "organized censorship." On paper, it is a regulatory framework intended to govern digital services and to protect users from illegal content, systemic risks, and opaque platform practices.
However, a growing number of critics — particularly in the United States, including Elon Musk and several Republican members of Congress — describe the DSA as a mechanism of mass censorship. In their view, it imposes heavy bureaucratic oversight on freedom of expression and enables selective repression of dissenting opinions.
From the outset, it must be emphasized that the DSA applies to all digital intermediaries (platforms, hosting providers, and related services), albeit with obligations that intensify significantly for VLOPs, which are defined as platforms with more than 45 million monthly active users in the EU.
Who censors? (The actors involved)
Censorship — rebranded as "moderation" — is not exercised, in the first instance, by the European Union or by national governments. Instead, it is delegated to the platforms themselves, under strict regulatory supervision from national administrative authorities. The result is a decentralized but highly regulated system, which critics describe as a bureaucratic "industrialization of moderation."
Digital platforms constitute the primary enforcers. They are required to actively moderate content and, for VLOPs formally designated by the European Commission, the obligations are particularly onerous. These platforms must conduct assessments of "systemic risks," such as the dissemination of disinformation, and implement proactive measures, including detection algorithms and large teams of human moderators. Platforms with fewer than 45 million monthly active users in the EU are subject to lighter requirements, but must nonetheless respond to users' reports.
According to critics, this framework effectively forces American platforms to act as "speech police" on behalf of the EU, under the constant threat of severe sanctions. In doing so, the DSA produces extraterritorial effects that extend well beyond Europe. This point is crucial: an American user of X, for instance, can be sanctioned by X, under moderation rules shaped by the DSA, for opinions that are perfectly lawful in the United States. In practice, the DSA is thus applied to all Americans. This extraterritorial reach is a clear instance of the normative imperialism that has characterized the EU for the past 20 years.
The only conceivable technical alternative would be the creation of separate platforms — an X-USA and an X-EU — which would amount to a denial of the very idea of a global network and of the internet itself. As a result, platforms are structurally compelled to apply the DSA not only to European users, but also to Americans and, ultimately, to users worldwide.
The European Commission acts as the central supervisor for VLOPs. It ensures compliance by launching investigations, demanding access to data, and imposing fines. In December 2025, for example, it imposed a €120 million fine on X for an alleged lack of transparency in moderation practices, particularly concerning "blue checkmarks" (alleged verifications of authenticity), which were accused of potentially promoting disinformation. This enforcement model raises serious concerns regarding the separation of powers: the European Commission drafted the DSA and now enforces it itself.
In addition, the EU designates "trusted flaggers" — typically NGOs — whose alerts must be treated as a priority by platforms. The official list of trusted flaggers includes organizations identified as left-wing or far-left, such as HateAid in Germany and UNIA in Belgium.
Each EU member state also appoints a national Digital Services Coordinator (DSC) — for example, ARCOM in France or BNetzA in Germany — to supervise non-VLOP platforms and coordinate with the EU. These authorities handle local complaints and impose sanctions. The result is a decentralized enforcement network in which the most restrictive countries, notably Germany and France, exert disproportionate influence through their radical national laws on "hate speech." This dynamic leads to a downward harmonization of freedom of expression across the entire European Union: content flagged in one country is effectively flagged for the entire EU.
Finally, any individual may report content through the "notice-and-action" mechanism. Trusted flaggers, accredited by the DSCs, enjoy privileged status: their reports are prioritized and frequently result in rapid removals of any content they deem questionable or inaccurate. Recently, the United States went so far as to ban five Europeans — including former European Commissioner Thierry Breton and several activists — from entering the country for their role in exerting pressure on American online platforms.
In short, American social media platforms form the first line of censorship, but they operate within a clear chain of command emanating from the EU and European national governments, which impose transparency obligations, audits, and permanent oversight.
The US House Judiciary Committee has denounced this system as one of "organized censorship," in which the EU effectively "arms" NGOs to compel American technology companies to remove content that is lawful in the United States but deemed "problematic" in Europe.
How does it work? (The mechanisms)
The DSA establishes a highly structured process that its critics describe as industrial in nature, mass-producing content moderation in the manner of a bureaucratic assembly line.
First, the notice-and-action mechanism allows any individual to report allegedly illegal content. Platforms are required to review such reports "promptly," often within 24 to 48 hours in emergency situations. If the content is deemed illegal, removal is mandatory. For VLOPs, automated tools — artificial intelligence and algorithmic detection systems — are also used to scan content.
Second, VLOPs are subject to mandatory systemic risk assessments. These platforms must conduct annual audits addressing risks such as misinformation, harm to mental health or threats to democratic processes, and must propose mitigation measures, such as deprioritizing suspect viral posts. For example, TikTok, after a DSA investigation into risks posed to minors, was required to remove features considered "addictive." The European Commission may at any time demand access to internal data and launch formal investigations, as it has done with X regarding advertising practices and the presence of bots.
Third, platforms are subject to extensive transparency obligations. They must provide a statement of reasons for every content removal (Article 17 of the DSA), publish semi-annual moderation reports — such as the figure of 41.4 million pieces of content removed in Europe between January and June 2025 — and offer internal and external mechanisms for appeals, including mediation and judicial review. In practice, however, these reports are opaque and appeal procedures are slow, which strongly incentivizes preventive censorship.
The EU and national DSCs also conduct investigations, including techniques such as "mystery shopping" to test compliance, as in the case of alleged sales of illegal products on e-commerce platforms such as Temu. Fines can reach up to 6% of a company's global annual turnover, amounting to potentially billions of euros for firms such as Meta or Google. In cases of non-cooperation, platforms even face the possibility of a temporary ban within the EU.
This environment of stringent enforcement strongly encourages platforms to over-moderate content in order to minimize regulatory risk, leading to the removal of content that is perfectly legal. We are speaking here of approximately eight million posts deleted per month in the European Union, not including complete bans, such as those imposed on Russian media outlets.
Based on what criteria? (The foundations of "moderation")
The criteria underpinning content moderation are neither uniform nor clearly defined. Instead, they rely on existing EU and national laws, rendering the system vague and highly susceptible to abuse. Terms such as "hate" are never precisely defined, allowing for expansive and discretionary interpretation.
Illegal content is treated as the top priority. Defined by the EU and national legislation, it includes hate speech (such as incitement to violence based on race or religion), terrorist content, child sexual exploitation material, counterfeit goods, and dangerous products. For example, investigations have targeted Temu for selling toxic toys or so-called "pedophile dolls."
Yet the central problem remains: "hate" itself is never defined in law. Labeling a politician as a right-wing extremist, for instance, could arguably be considered hateful, yet such expressions are never targeted by the DSA or its enforcement apparatus. By contrast, establishing potential links between Islamic immigration and anti-Semitism or political violence is routinely classified as "hateful."
In reality, "hate" has no coherent legal meaning and serves primarily as a pretext for censoring opinions that deviate from left-wing orthodoxy. In the United States, "A state may not forbid speech advocating the use of force or unlawful conduct unless this advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action." Speech is protected by the First Amendment.
A second category concerns "systemic risks," applicable only to VLOPs. This is the most elastic and dangerous category, encompassing harm to minors through addictive or violent content, and what is broadly labeled as "disinformation." Examples of disinformation include allegedly false information about elections or public health, and algorithmic manipulation. Such content is not necessarily illegal but is considered "harmful" if amplified, leading to measures such as deprioritization of posts about COVID-19 or vaccines deemed false.
It must be stressed that disinformation itself is not illegal. The DSA therefore mandates the active censorship of content that is lawful, but merely displeasing to the European Princes and their legions of censors.
In 2025, one hundred free speech experts warned that the DSA would lead to a "dislocation" of global free speech.
American freedom of speech cannot survive a "Big Brother" DSA.
Drieu Godefridi is a jurist (University Saint-Louis, University of Louvain), philosopher (University Saint-Louis, University of Louvain) and PhD in legal theory (Paris IV-Sorbonne). He is an entrepreneur, CEO of a European private education group and director of PAN Medias Group. He is the author of The Green Reich (2020).

