When Apple launched iMessage in 2011, its end-to-end encryption was marketed as a privacy feature. In a strict technical sense, it is one: message content is encrypted in transit and Apple cannot read it. In a more meaningful sense, iMessage is not a private communication system. Understanding why requires separating two things the industry has every incentive to keep conflated.
What Encryption Actually Solves
Encryption is a cryptographic operation. It takes plaintext and produces ciphertext that is unreadable without the corresponding key. End-to-end encryption specifically means the encryption and decryption happen on user devices — the service provider never holds the plaintext.
That's the problem it solves: content confidentiality from the service provider and anyone who intercepts traffic in transit. Nothing else.
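A toy sketch makes the boundary concrete. The one-time-pad XOR below stands in for a real cipher (actual messengers use authenticated encryption with per-message keys from a protocol like Signal's ratchet; this is illustration only). The point is structural: keys exist only on the endpoint devices, so the server only ever handles ciphertext.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy one-time-pad XOR. Illustration only; never reuse a key."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

# The key lives only on the two endpoint devices.
key = secrets.token_bytes(12)

ciphertext = xor_cipher(b"hello, world", key)   # sender's device
assert ciphertext != b"hello, world"            # all the server relays
plaintext = xor_cipher(ciphertext, key)         # recipient's device
assert plaintext == b"hello, world"
```

Because XOR is its own inverse, the same function encrypts and decrypts; what matters for the end-to-end property is where the key lives, not which cipher is used.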
Encryption does not protect who you communicate with. It does not protect when. It does not protect how often, from where, or with what device. It does not protect what happens to your data on the other end, once the recipient decrypts it. It does not protect against the service provider being compelled to produce logs of your activity. It does not protect against a backup system that stores a decrypted copy.
Privacy is an outcome. Encryption is one input to that outcome. A necessary one — without it, content confidentiality is impossible — but far from sufficient.
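The list above can be made concrete with a hypothetical record of what a relay server could log about a single end-to-end encrypted message. The field names and values below are invented; the shape is what matters, because only one field is protected by encryption:

```python
from datetime import datetime, timezone

# Hypothetical server-side view of one E2EE message.
# Every field except `ciphertext` is readable by the provider.
relayed_message = {
    "sender": "+1-555-0100",                # who is talking
    "recipient": "+1-555-0199",             # to whom
    "sent_at": datetime(2024, 1, 1, 23, 41, tzinfo=timezone.utc),  # when
    "client_ip": "203.0.113.7",             # from where
    "device_model": "iPhone15,2",           # with what device
    "size_bytes": 1432,                     # roughly how much was said
    "ciphertext": bytes.fromhex("9f02c4"),  # the only protected field
}

visible_to_provider = sorted(k for k in relayed_message if k != "ciphertext")
```

Six of the seven fields survive perfect content encryption, and all six are the kind of record a provider can be compelled to produce.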
The iMessage Illustration
iMessage encrypts message content. Apple cannot read your messages in transit. But consider what else is true:
- iCloud backups: If you back up to iCloud, iMessage backups were historically stored in a way Apple could access. Apple added end-to-end encrypted iCloud backup (Advanced Data Protection) in late 2022, but it must be explicitly enabled. The default setting for most users produces decryptable backups that Apple can hand to law enforcement.
- Social graph: Apple's servers know who you message, even if not what you say. That contact graph is metadata, not content.
- Device trust: The encryption is only as strong as the device holding the keys. If your device is unlocked by a court order, the plaintext is accessible regardless of how good the encryption is.
- SMS fallback: iMessage falls back to unencrypted SMS when the recipient doesn't have iMessage. The only user-visible signal is bubble color, green instead of blue, and most users don't know to read that as a security downgrade.
None of this makes iMessage bad. For the threat model "I don't want strangers intercepting my messages over Wi-Fi," it works fine. For "I need meaningful privacy from Apple and law enforcement," it doesn't — and Apple's marketing does not help users understand the difference.
The WhatsApp Question
WhatsApp uses the Signal Protocol. Its end-to-end encryption is technically excellent — the same cryptographic construction used by Signal itself. Meta cannot read your WhatsApp messages.
But Meta can read everything else. WhatsApp collects metadata extensively: who you talk to, how often, your last-seen status, your IP address, your phone number and contacts list, your usage patterns. Meta's entire business model is advertising targeting — and metadata is sufficient to target advertising without touching message content.
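A toy metadata log shows how little is needed. The names and events below are invented, but the analysis uses nothing a provider does not already hold:

```python
from collections import Counter

# (sender, recipient, hour_utc) tuples: metadata only, no content.
events = [
    ("alice", "bob", 23),
    ("alice", "bob", 23),
    ("alice", "dr_oncology_biz", 9),
    ("alice", "dr_oncology_biz", 9),
    ("alice", "dr_oncology_biz", 10),
]

# Who Alice talks to, and how often: her contact graph.
contact_freq = Counter(recipient for _, recipient, _ in events)

# Repeated contact with an oncology practice's business account is a
# high-value advertising signal, extracted with no decryption at all.
top_contact, count = contact_freq.most_common(1)[0]
```

Three lines of counting over the log recover a health-related targeting signal; content encryption was never in play.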
"We don't see your messages. We see your patterns." This is the business model of every ad-funded platform that claims encryption.
In 2021, WhatsApp updated its privacy policy to require sharing more data with Facebook for business accounts and to expand the scope of data it collects across the Meta ecosystem. The backlash was significant — millions of users migrated to Signal and Telegram. The irony is that the underlying cryptography hadn't changed. What changed was people's intuition about the gap between "encrypted content" and "actual privacy."
The Gmail S/MIME Problem
Gmail supports S/MIME encryption for enterprise users. When properly configured, Google cannot read the content of S/MIME-encrypted emails. Organizations deploy this and check the "encrypted email" box.
What remains fully visible to Google: subject lines (S/MIME, like most email encryption standards, does not encrypt headers), sender and recipient addresses, timestamps, message sizes, IP addresses, and the full social graph of organizational email. All of it sits in Google's infrastructure, can be logged and analyzed there, and can be produced under legal compulsion. Content encryption doesn't change any of that.
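A sketch with Python's standard `email` module makes the point. The addresses and subject are invented, and the attachment body is a placeholder standing in for the encrypted CMS blob a real S/MIME message would carry:

```python
from email.message import EmailMessage

# S/MIME encrypts the message body, not the RFC 5322 headers.
msg = EmailMessage()
msg["From"] = "cfo@example.org"
msg["To"] = "counsel@example.org"
msg["Subject"] = "Q3 restructuring plan"        # travels in plaintext
msg["Date"] = "Mon, 01 Jan 2024 09:00:00 +0000"
msg.add_attachment(
    b"<encrypted CMS blob stands in here>",     # opaque to the provider
    maintype="application",
    subtype="pkcs7-mime",
)

# Everything a relaying provider still reads, content encryption aside:
header_metadata = {k: msg[k] for k in ("From", "To", "Subject", "Date")}
```

The ciphertext is unreadable, yet the subject line, the parties, and the timing cross the wire in the clear, exactly the fields a compelled-disclosure request asks for first.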
For an organization handling sensitive communications, encrypting content while leaving everything else in a Google data center represents a misunderstanding of what the threat model actually requires.
What Privacy Actually Requires
Run the major encrypted messaging and email products against a fuller list of what privacy requires:
| Requirement | Signal | WhatsApp | iMessage | ProtonMail |
|---|---|---|---|---|
| Content E2EE | Yes | Yes | Yes | Yes |
| No phone number required | No | No | No | Yes |
| Metadata minimization | Partial | No | No | Partial |
| Zero-knowledge architecture | Partial | No | No | Yes |
| Encrypted backups by default | Yes | Optional | Optional | Yes |
| No ad-funded business model | Yes | No | Partial | Yes |
The pattern is consistent: content encryption is table stakes and almost everyone gets it right. The divergence is everywhere else.
The Glass Door With a Lock
A useful mental model: think of a glass door with a very good lock. The lock prevents entry. But anyone walking past can see exactly what's inside, who's in there, and when they come and go. The lock is technically effective at what it does. The door is not a privacy solution.
Most "encrypted" apps are glass doors with good locks. The content is secured. The everything-else is visible.
Actual privacy requires opaque walls, not just a better lock. That means: zero-knowledge architecture where the service provider cannot decrypt anything; identity that doesn't link to a real-world record like a phone number; metadata handling that minimizes what the server sees; open source code and independent audits so "trust us" becomes "verify yourself."
Where We Are Honest About Haven's Limits
We build Haven and we believe in what we're building. We also believe the industry has a problem with overclaiming, and we don't want to contribute to it.
Haven implements zero-knowledge email and chat — we cannot read your message content. We do not require a phone number. We support aliases for identity compartmentalization. Our core cryptography is built on open standards (PGP for email, MLS for chat), so independent researchers can verify how it works.
What Haven does not currently do: protect your IP address (use a VPN or Tor for that), guarantee that your correspondent's device is secure, or eliminate the metadata that network protocols inherently generate. We receive connection information that the internet requires to route traffic. We minimize what we log, but we are honest that this is policy, not a cryptographic guarantee.
Privacy is not a binary. It's a spectrum of protections against a specific threat model. Knowing where the protection starts and stops — for any tool you use — is what separates actual privacy practice from the feeling of it.