The justification is unambiguous: child sexual abuse material (CSAM) is a genuine harm, and legislators want platforms to do more to detect and report it. The proposed EU Regulation laying down rules to prevent and combat child sexual abuse (interinstitutional procedure 2022/0155, commonly called Chat Control 2.0) is not cynical in its stated aim. But the mechanism it proposes — mandatory client-side scanning of private communications — would fundamentally alter what end-to-end encryption means, regardless of what the legislation's text says about preserving it.
Understanding why requires understanding what E2EE actually guarantees, and exactly where client-side scanning breaks that guarantee.
What End-to-End Encryption Actually Guarantees
End-to-end encryption means that messages are encrypted on the sender's device and can only be decrypted by the intended recipient(s). No intermediate server — not the messaging provider's infrastructure, not any party that intercepts the message in transit — can read the plaintext. The encryption and decryption happen only at the endpoints: your device and theirs.
This guarantee depends on exactly one thing: the plaintext is only ever visible on devices that hold the private decryption key. The moment plaintext is made available to any additional process — even one running locally on your device — that guarantee is weakened, because that additional process can send its findings (or the plaintext itself) to a third party.
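To make the guarantee concrete, here is a minimal sketch using PyNaCl (Python bindings to libsodium). It is illustrative only; real messengers layer key agreement, authentication, and forward secrecy (the Signal protocol, for instance) on top of this primitive:

```python
# Minimal sketch of the E2EE property using PyNaCl.
from nacl.public import PrivateKey, Box

# Each endpoint generates a keypair; private keys never leave the device.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts to Bob's public key.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")

# Any server relaying `ciphertext` holds no key and sees only noise.
# Only Bob's private key recovers the plaintext:
plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```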
Chat Control's proponents argue that scanning on the device, before encryption, doesn't compromise E2EE because the encrypted message in transit is still unreadable. Cryptographers respond that this argument is technically correct and practically irrelevant: if your device is required to run surveillance software on your messages before sending them, it doesn't matter that the message is encrypted afterward. The plaintext was already inspected.
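The point can be stated in a few lines of schematic Python. `scan_for_csam` and `report_to_authority` are hypothetical stand-ins for whatever detector a mandate would require (`box` can be the PyNaCl `Box` from the sketch above); the only thing that matters is the ordering:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    flagged: bool
    reason: str = ""

def scan_for_csam(plaintext: bytes) -> Verdict:
    """Hypothetical stand-in for whatever detector is mandated
    (hash matching, perceptual hashing, or a classifier; see below)."""
    return Verdict(flagged=False)  # stub

def report_to_authority(plaintext: bytes, verdict: Verdict) -> None:
    """On a match, the plaintext (or a derivative of it) leaves the
    endpoint, bound for the provider or a reporting authority."""

def send_message(plaintext: bytes, box) -> bytes:
    # The ordering is the entire argument: the scan runs on plaintext
    # *before* encryption, so encryption in transit changes nothing.
    verdict = scan_for_csam(plaintext)
    if verdict.flagged:
        report_to_authority(plaintext, verdict)
    return box.encrypt(plaintext)
```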
How Client-Side Scanning Works
There are three main technical approaches to client-side scanning (CSS) for CSAM detection, each with different properties and failure modes.
Known-image hash matching (PhotoDNA-style). A database of known CSAM images is hashed with a robust hash function — typically Microsoft's PhotoDNA — designed so that a known image still matches after routine transformations like re-encoding or resizing. When you share an image, the client software computes its hash and looks it up in the database; a match generates a report. This approach detects only known material; novel images are never flagged.
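A sketch of the lookup structure, with SHA-256 standing in for PhotoDNA, which is proprietary and, unlike a cryptographic hash, engineered to survive re-encoding:

```python
import hashlib

# In deployment the list comes from a clearinghouse (e.g. NCMEC in the US)
# and is shipped to clients, typically in blinded form.
KNOWN_HASHES: set[str] = {"..."}  # placeholder entries

def hash_image(image_bytes: bytes) -> str:
    # Crude stand-in: a cryptographic hash breaks on any re-encoding,
    # which is exactly what PhotoDNA is designed to tolerate.
    return hashlib.sha256(image_bytes).hexdigest()

def check_image(image_bytes: bytes) -> bool:
    """True if the image matches the known-material database."""
    return hash_image(image_bytes) in KNOWN_HASHES
```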
Neural perceptual hashing (NeuralHash-style). Apple announced and then withdrew a system called CSAM Detection in 2021 that used NeuralHash — a neural-network-derived perceptual hash that treats images as matching even when they've been re-encoded, cropped, or had their colors altered. The hash space allows for approximate matching rather than exact matching. Within weeks of Apple's announcement, security researchers had extracted the model and demonstrated collisions: non-CSAM images engineered to produce the same hash as flagged content.
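The approximate-matching behavior can be sketched with an open-source perceptual hash such as pHash from the `imagehash` package, standing in here for NeuralHash, which was never published:

```python
import imagehash
from PIL import Image

MATCH_THRESHOLD = 8  # max Hamming distance treated as "same image"

def matches_known(path: str, known: list[imagehash.ImageHash]) -> bool:
    h = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash values yields their Hamming distance.
    # Anything within the threshold counts as a match -- which is also
    # why adversarially crafted collisions are possible: force two
    # images' hashes close together and the system cannot tell them apart.
    return any(h - k <= MATCH_THRESHOLD for k in known)
```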
Machine learning classifiers. Rather than hashing, a neural network model runs on the device and classifies images or text as likely to contain illegal content. This approach can detect novel material, but even small false positive rates become overwhelming at the scale of a large platform.
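Schematically, the classifier approach reduces to a score and a threshold. The model below is a stub; the threshold is the sensitivity/specificity dial the next section is about:

```python
FLAG_THRESHOLD = 0.98  # the sensitivity/specificity dial

def classify(image_bytes: bytes) -> float:
    """Hypothetical on-device model returning P(illegal content)."""
    return 0.0  # stub

def is_flagged(image_bytes: bytes) -> bool:
    # Raise the threshold and real material is missed; lower it and
    # more innocent content is flagged. No setting avoids both at the
    # scale discussed next.
    return classify(image_bytes) >= FLAG_THRESHOLD
```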
The False Positive Problem at Scale
Consider a classifier with a false positive rate of 0.1% — incorrectly flagging an innocent message only 1 time in 1,000. Applied to a platform with 500 million daily active users, each sending an average of 10 messages per day, that is 5 billion messages and 5 million false reports per day. The human review pipeline that would need to process those reports does not exist and cannot realistically be built.
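The arithmetic, plus the base-rate effect that makes it worse (the prevalence figure below is an illustrative assumption, not a measured one):

```python
users = 500_000_000          # daily active users
msgs_per_user = 10           # messages per user per day
fpr = 1 / 1000               # 0.1% false positive rate

daily_messages = users * msgs_per_user           # 5 billion
false_flags = daily_messages * fpr               # 5,000,000 per day
print(f"{false_flags:,.0f} false reports per day")

# Base-rate effect: assume (illustratively) 1 in 1,000,000 messages is
# actually illegal and the classifier catches 99% of those.
prevalence, tpr = 1 / 1_000_000, 0.99
true_flags = daily_messages * prevalence * tpr   # 4,950 per day
precision = true_flags / (true_flags + false_flags)
print(f"precision: {precision:.2%}")  # ~0.10%: roughly 999 of every
                                      # 1,000 flags point at an innocent user
```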
What this means in practice: either false positives are forwarded to law enforcement (catastrophic for the millions of innocents falsely flagged), or they're filtered by automated systems before human review (which means the oversight is automated, not human, and can be gamed). Neither outcome is acceptable, and the tension between sensitivity and specificity cannot be resolved by building better classifiers — it's a consequence of operating at internet scale.
> A system that must scan all private communications to find the small fraction that are illegal will inevitably surveil the overwhelming majority that are not. The architecture of mass surveillance and the architecture of targeted CSAM detection are, technically, the same thing.

— a position held consistently by researchers at Johns Hopkins, MIT, and elsewhere who signed an open letter opposing CSS mandates
The Scope Expansion Problem
Once the infrastructure for mandatory client-side scanning exists, its scope is determined by legislative amendment, not technical constraints. A scanning system built to detect CSAM hashes can be retargeted to flag any content whose hash is on an updated list. This is not a hypothetical risk — it is the operational reality of how these systems work.
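In code terms, the detector is indifferent to what the list contains; retargeting it is a data update, not a software change (the hashes below are placeholders):

```python
import hashlib

def scan(content: bytes, target_hashes: set[str]) -> bool:
    # Identical code regardless of what the target set represents.
    return hashlib.sha256(content).hexdigest() in target_hashes

csam_hashes    = {"a3f1..."}  # placeholder entries
dissent_hashes = {"9be2..."}  # tomorrow's list: same software, new data

# scan(message, csam_hashes) and scan(message, dissent_hashes)
# are indistinguishable from the device's point of view.
```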
The EU's Chat Control proposal includes CSAM detection and, in its extended provisions, scanning for "grooming" — text-based detection of communication patterns. Text scanning is necessarily more context-dependent and error-prone than image hashing, and the definition of which text patterns constitute grooming is inherently political.
The technical architecture does not distinguish between scanning for child abuse material and scanning for political dissent, journalist sources, or labor organizing. The distinction exists only in the current legal text — and legal text changes.
Apple's Retreat and What It Means
Apple announced its CSAM Detection system in August 2021, describing it as a way to check iCloud Photos for known CSAM without Apple itself being able to see users' photos. The proposal was immediately criticized by cryptographers, privacy researchers, and civil liberties organizations. Within two weeks, researchers had extracted the NeuralHash model and demonstrated collision attacks. Within a month, Apple announced it was delaying the rollout to "take additional time over the coming months to collect input." The system was never deployed and was formally abandoned in December 2022.
Apple's retreat matters for the EU debate because Apple had access to some of the best cryptographic engineering talent in the world and a genuine incentive to make the technology work — and they couldn't build a system that withstood scrutiny. The EU regulation does not specify a technical approach; it mandates the outcome and leaves implementation to service providers. This is not a soluble engineering problem dressed up as a policy question. It is a fundamental tension between private communication and state-mandated surveillance.
Legislative Status and Industry Response
As of early 2026, Chat Control 2.0 has been stalled in the EU Council. A qualified majority has not been reached, with Germany, Austria, and several other member states indicating they will not support a mandatory scanning provision that applies to encrypted communications. In November 2023, the European Parliament's LIBE committee adopted a position that stripped mandatory scanning of end-to-end encrypted communications from the text. The Commission has not withdrawn the regulation; it remains a live legislative proposal.
Signal's president Meredith Whittaker stated publicly in 2024 that Signal would cease operations in any EU jurisdiction where Chat Control became law rather than implement client-side scanning. Threema issued a similar statement. Proton Mail, based in Switzerland (not an EU member state), noted that the equivalent Swiss rules would determine its obligations. The practical effect of these statements: if Chat Control passes in its current form, the messaging services most used by people with genuine privacy needs will withdraw from the EU rather than implement mandatory surveillance.
What This Means for Users Now
Chat Control has not passed. No messaging app is currently required to implement client-side scanning under EU law. The practical implications for users today are limited.
The longer-term implications matter for how you evaluate messaging platforms you rely on for sensitive communications. Key questions to ask:
- Has the service made a public commitment about how they would respond to mandatory scanning requirements?
- Is the client software open source, allowing independent verification that scanning is not occurring?
- Where is the service incorporated, and what legal jurisdiction governs its obligations?
- Does the service's threat model documentation address government compulsion?
The EU Chat Control debate is not an isolated legislative incident. Similar proposals have been advanced in the UK (the Online Safety Act, which went through several encryption-hostile versions), the US (the EARN IT Act, which would expose platforms to liability for encrypted content they cannot inspect), and Australia. The argument being made — that "responsible encryption" can accommodate lawful access without compromising security for everyone — is the same argument made in each context. The cryptographic response is also the same: a backdoor for law enforcement is a backdoor for anyone who discovers it. The math does not change depending on who is asking for the key.