In Europe, the controversy surrounding what is popularly known as the "Chat Control" project — a proposed EU regulation officially aimed at combating child sexual abuse material — has for months been crystallizing massive opposition on both technical and civic fronts.
The core principles of the legislation are clear:
"Detection software would be embedded in the messaging app or the operating system to scan chat content and automatically forward any material flagged as prohibited to law enforcement agencies."
The automatic scanning of private content (texts, images, videos) sent through messaging platforms such as WhatsApp and Telegram, or of prompts sent to AI platforms (e.g. ChatGPT), would take place "client-side," before encryption — meaning directly on your phone, tablet or computer. Welcome to 1984. In the first Chat Control proposal, in 2022, such scanning was mandatory. In the current proposal, it is optional — but strongly recommended.
Whenever there is a desire to expand control over European citizens, "terrorism" or pedophilia is invoked. It is a clever tactic: who would want to be perceived as supporting terrorists or sympathizing with pedophiles?
That, however, does not seem to be the regulation's true objective. The real issue appears to be governments' desire to control, regulate, police and monitor European citizens down to their smallest gesture.
The unelected, opaque and unaccountable EU does not care about your freedom; it fights against it.
Why Many Experts Consider Automatic Detection Dangerous
Major Technical Limitations
Some of Europe's most renowned researchers in cybersecurity and privacy have signed an open letter warning that the expansion of "client-side scanning" (scanning content on the user's device) paves the way for mass surveillance. This is an understatement: scanning the communications of tens of millions of users is, by definition, mass surveillance.
The EU's initial proposal required scanning all messages in the EU — without exception. Faced with overwhelming opposition, the EU returned with a text that merely allows — without mandating — exactly the same practice. Are you texting your child, or perhaps your bank? Your message is scanned. Sending a prompt to ChatGPT? Scanned. Your life becomes one uninterrupted scan, and anything the monitoring software deems "prohibited" is forwarded to law enforcement authorities.
100% transparency; 100% control: the leitmotif of every totalitarian project since the 20th century.
An academic study, Bugs in Our Pockets: The Risks of Client-Side Scanning, warned of the dangers inherent in this approach. According to its authors, such scanning "does not effectively prevent crime or protect privacy". The study demonstrates that such a system could be circumvented, misused, or overwhelmed by false positives, and would create new security vulnerabilities.
Apparently, these algorithms cannot reliably differentiate between innocent exchanges and illegal behavior. The problem is particularly acute for written conversations, where context, irony, sarcasm or family quirks can easily lead to misinterpretation. Consider, for instance, photos of a newborn baby, who is often naked. Send a photo of your baby to a relative, and the algorithm may report you to the authorities as what it believes you are: a pedophile. A letter from the police is ready to be dispatched, summoning you to explain the "child pornography" you are allegedly circulating.
Massive False Positives — and Their Consequences
According to the authors of the aforementioned open letter, this inability to correctly analyze communications could easily sweep the participants in billions of legitimate private exchanges into a vast dragnet.
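The arithmetic behind this warning is simple. The figures below are purely illustrative assumptions, not numbers from the open letter or the draft regulation, but they show why even a seemingly accurate detector becomes a dragnet at continental scale:

```python
# Back-of-the-envelope illustration (all numbers are assumptions):
# a detector that is wrong only once in every 10,000 messages still
# wrongly flags an enormous number of innocent exchanges per day
# when applied to billions of messages.

def expected_false_positives(messages_per_day: int, false_positive_rate: float) -> float:
    """Expected number of innocent messages flagged per day."""
    return messages_per_day * false_positive_rate

# Assumption: ~10 billion private messages per day across the EU,
# and a false-positive rate of 0.01% (1 in 10,000).
flagged = expected_false_positives(10_000_000_000, 1 / 10_000)
print(f"{flagged:,.0f} innocent messages flagged per day")  # prints roughly 1,000,000
```

A million wrongly flagged messages a day, under these assumed rates, would each represent an innocent person entering a criminal-reporting pipeline.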
I happen to be the happy father of an 18-month-old girl. For professional reasons, her mother and I do not always live together, and we constantly exchange photos of our daughter — up to ten times a day. All it would take is for an algorithm to flag a single image, just once, for us to become suspects, quietly entering countless criminal databases, justifying surveillance, official intervention, and more. Even East Germany's Stasi never dreamed of such power.
This susceptibility to false positives has already been observed in existing reporting systems: holiday photos, such as children playing on a beach, are automatically flagged, triggering unjustified procedures, investigations, or leaks that expose the private lives of innocent people. These cases illustrate the danger of widespread Orwellian policing: the presumption of guilt.
A Breach of Encryption — and of Overall Security
A central objection raised by privacy advocates and secure messaging services — WhatsApp, Telegram, Signal and others — is that scanning fundamentally undermines end-to-end encryption. In reality, it abolishes it.
What is the point of encrypting something that is already being spied on? This proposed EU regulation effectively turns any phone or computer into an instrument of surveillance and intrusion. "Siri, tell me: how many civil servants are reading our conversations?" Just preventively, of course: to fight Evil.
Some companies, such as Signal, are threatening to leave the EU rather than compromise encryption. It takes far more, however, to move the European Commission: after all, the European economy is flourishing so "brilliantly" that it can easily afford to chase away the world's best companies.
The latest version of the draft regulation — put forward by the socialist Danish EU presidency — turns the scanning requirement into a "voluntary option," enabling a political agreement among member states. This reversal is merely a quick cosmetic fix: the proposal retains all the mechanisms that, in practice, establish widespread, arbitrary and massive control. It is, essentially, a totalitarian structure.
Nextcloud, a software company and vocal advocate of privacy and encryption, warns that the proposed regulation poses "a fatal threat to our democracies". It creates an infrastructure capable of spying on private conversations on a massive scale, making them accessible with a single click to even the most inconsequential civil servant.
Some critics of the proposed regulation point to violations of the secrecy of correspondence, of fundamental freedoms and of the right to a fair trial. Communications between lawyers and their clients would be scanned, as would WhatsApp messages with your doctor about erectile dysfunction or suspected cancer. By definition, nothing escapes this watchful supervision. Everything is suspect. You are suspect.
Unfortunately, none of these objections is likely to sway the European Commission's refined minds. Ideologues have a way of being impervious to reality, reason or any values other than their own. In the corridors of the EU, control appears to be the dominant value. All these people understand is force.
This may be precisely what awaits us.
Monstrous Regulation
As most of the targeted internet and technology service providers are American, Chat Control, even in its new hypocritical "Stasi-lite" guise, will inevitably lead to confrontation with the U.S. government. Chat Control would apply not only to communications originating in Europe and destined for Europe, but also to EU member-state citizens on American soil as well as to communications from anywhere in the world to Europe.
In short, messaging and AI platforms would soon inevitably be forced to extend Chat Control to all communications. This is yet another example of regulatory imperialism characteristic of the incompetent people who run the EU today.
Do we really believe Americans will tolerate this new surge of repression from a decadent continent trampling on the principles that once made its civilization great?
Drieu Godefridi is a jurist (University Saint-Louis, University of Louvain), philosopher (University Saint-Louis, University of Louvain) and PhD in legal theory (Paris IV-Sorbonne). He is an entrepreneur, CEO of a European private education group and director of PAN Medias Group. He is the author of The Green Reich (2020).

