
The Privacy Skill Nobody Teaches: Building a Personal Threat Model

May 1, 2026 · 9 min read · Haven Team

Picking privacy tools without a threat model is like buying a lock without knowing what you are locking out. The same setup that protects a journalist from state surveillance is overkill for someone who wants to avoid targeted ads — and may introduce new risks by adding complexity you cannot maintain. The five questions that make your privacy choices coherent are not complicated, but almost nobody teaches them.

Privacy advice is abundant and often contradictory. "Use Signal." "Use a VPN." "Use Tor." "Use a de-Googled phone." Each piece of advice is correct in some contexts and wrong in others. The context is your threat model — a structured way of thinking about what you are protecting, from whom, and what you are willing to do about it.

The Electronic Frontier Foundation's Surveillance Self-Defense guide formalizes threat modeling into five questions. These questions originate from security engineering practice and have been adapted for individual use. Working through them for your own situation replaces the generic "here is a list of tools" advice with answers that are actually calibrated to your life.

The Five Questions

1. What do I want to protect? Your "assets" — the data, communications, identities, or relationships that need protection. This might be the contents of your messages, your location history, your financial information, your professional contacts, or your real identity. Being specific matters: "my privacy in general" is not actionable; "the identity of the sources I communicate with as a journalist" is.

2. Who do I want to protect it from? Your "adversaries" — the entities that might want access to your assets. Possibilities range from advertisers and data brokers to employers, landlords, family members, domestic abusers, law enforcement, or state intelligence agencies. Different adversaries have different capabilities and different legal authorities. A tool that resists an advertiser may offer no protection against a national intelligence agency.

3. How bad are the consequences if I fail? Not all failures are equal. Losing control of your browsing history to an advertiser is annoying. Losing control of your source's identity to an authoritarian government has life-altering consequences for that source. The answer to this question scales your investment of effort and risk tolerance.

4. How likely is it that I will need to protect it? Probability matters alongside impact. A low-probability but high-consequence threat (state surveillance of a private citizen in a stable democracy) may warrant different treatment than a high-probability, moderate-consequence threat (data broker aggregation of your public records).

5. How much trouble am I willing to go through? The most secure setup is often the least usable, and unusable security is security people route around. A threat model that requires you to maintain a dedicated air-gapped laptop for all sensitive communications may be technically sound but behaviorally unsustainable. Honest answers to this question prevent you from designing a system you will not actually use.
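The five questions translate naturally into a small data structure. This is a hypothetical sketch, not part of the EFF guide: the 1–5 scales and the risk formula (likelihood × impact, a standard prioritization heuristic) are illustrative conventions you can adapt.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str        # Q1: what am I protecting?
    adversary: str    # Q2: from whom?
    impact: int       # Q3: consequences if I fail (1 = annoying, 5 = life-altering)
    likelihood: int   # Q4: how likely is the attempt? (1 = rare, 5 = expected)
    max_effort: int   # Q5: effort I will actually sustain (1 = minimal, 5 = high)

    def risk(self) -> int:
        # Common prioritization heuristic: risk = likelihood x impact.
        return self.likelihood * self.impact

threats = [
    Threat("browsing history", "advertisers", impact=2, likelihood=5, max_effort=2),
    Threat("source identities", "legal opponents", impact=5, likelihood=3, max_effort=4),
]

# Address the highest-risk threats first.
for t in sorted(threats, key=lambda t: t.risk(), reverse=True):
    print(f"{t.asset} vs {t.adversary}: risk {t.risk()}")
```

Writing your answers down this concretely, even on paper, forces the specificity that question 1 asks for.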

Why this matters

Security that is too difficult to maintain gets abandoned. A threat model calibrates your security to your actual risk, so you can build sustainable habits rather than a system you will bypass the first time it becomes inconvenient. The goal is not maximum security — it is appropriate security.

Common Threat Model Profiles

Most people fall into one of a few broad profiles. These are not rigid categories — your situation may straddle several — but they are useful anchors:

| Profile | Primary Adversary | Key Assets | Practical Focus |
| --- | --- | --- | --- |
| General privacy | Advertisers, data brokers, corporate tracking | Browsing behavior, purchase history, location patterns | DNS-over-HTTPS, ad blocking, email aliases, data broker opt-outs |
| Sensitive personal | Employer, family members, abusive partner, nosy acquaintances | Medical info, relationship details, political views, finances | Encrypted messaging, device PIN/encryption, private browsing, separate accounts |
| Professional / journalist | Corporations, legal opponents, law enforcement (subpoena) | Source identities, unpublished work, confidential communications | Signal, encrypted email, secure drop systems, device compartmentalization, legal awareness |
| Activist / at-risk | Surveillance of political activity; potentially state actors | Organizational contacts, meeting details, communications content and metadata | Tor, encrypted devices, operational security (OPSEC), minimizing digital footprint |
| High-value target | Nation-state actors, sophisticated criminal groups | Strategic communications, financial assets, physical location | Threat-specific professional guidance; generic tools are insufficient |

The last row is important: if you are a high-value target of a sophisticated nation-state actor, this article and generic privacy tools are not sufficient. That threat model requires specialized, tailored operational security guidance that a blog post cannot provide.
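For the other rows, the profile-to-focus mapping can be written down directly. A minimal sketch, with names and tool lists taken from the table above (the structure itself is illustrative, and the lists are starting points, not prescriptions):

```python
# Illustrative lookup derived from the profile table; the "high-value target"
# row is deliberately absent because it needs professional guidance.
PROFILES = {
    "general": {
        "adversaries": ["advertisers", "data brokers", "corporate tracking"],
        "focus": ["DNS-over-HTTPS", "ad blocking", "email aliases", "broker opt-outs"],
    },
    "sensitive-personal": {
        "adversaries": ["employer", "family members", "abusive partner"],
        "focus": ["encrypted messaging", "device encryption", "separate accounts"],
    },
    "journalist": {
        "adversaries": ["corporations", "legal opponents", "law enforcement"],
        "focus": ["Signal", "encrypted email", "secure drop systems", "compartmentalization"],
    },
    "activist": {
        "adversaries": ["political surveillance", "state actors"],
        "focus": ["Tor", "encrypted devices", "OPSEC", "minimal digital footprint"],
    },
}

def focus_for(profile: str) -> list[str]:
    """Return the practical focus list for a profile; profiles outside
    the table (like high-value targets) get no generic answer."""
    if profile not in PROFILES:
        raise ValueError(f"no generic guidance for profile {profile!r}")
    return PROFILES[profile]["focus"]

print(focus_for("journalist"))
```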

Why Overkill Is Its Own Risk

There is a tendency in privacy-conscious communities to treat maximum security as the only legitimate goal. This is wrong, and it causes real harm.

Overkill has several costs. The most obvious is usability friction — if your communication setup is too complex, you will not use it, and neither will the people you want to communicate with. A journalist who insists every source use Tor and PGP-encrypted email will have fewer sources than one who makes Signal as easy as possible. Security that is not adopted does not protect anyone.

A subtler cost is complexity as attack surface. Every additional tool you run is another piece of software that can have vulnerabilities. A complex OPSEC setup requires more maintenance, more decision-making under pressure, and more opportunities for mistakes. A focused setup that covers your actual threat surface and is maintainable over time is more effective than a maximalist setup you cannot consistently execute.

Perfect security requires perfect vigilance. Perfect vigilance is unsustainable for most people over any significant time horizon. Aim for good-enough security that you will actually maintain — not perfect security that you will abandon after a week.

There is also a social cost. Heavy OPSEC has a look. Using Tor for everything, refusing to use any commercial services, and operating under pseudonyms creates patterns that can themselves attract attention. For most people in most situations, these patterns are a cost with no corresponding benefit.

Applying Your Model: A Starting Checklist

Once you have worked through the five questions and identified your profile, the next step is matching tools to threats. A rough checklist, in approximate order of leverage:

  1. Enable full-disk encryption on all your devices. This is low-effort and protects against physical theft, which is a relevant threat for nearly everyone. Our post on disk encryption explains what it covers.
  2. Use a password manager with unique, strong passwords for every account. Credential reuse is one of the highest-probability attacks; a manager closes it.
  3. Enable two-factor authentication on critical accounts (email, banking, primary identity providers). Prefer authenticator apps or hardware keys over SMS.
  4. Switch to encrypted messaging for sensitive conversations. Signal is the well-audited default for individual and group chat. For combined email and chat under one identity, Haven is one option worth evaluating.
  5. Use DNS-over-HTTPS to reduce ISP visibility into your domain lookups. Enable it in your browser or at the router level.
  6. Audit your data broker presence and submit opt-out requests for the highest-risk aggregators. See our data broker opt-out guide.
  7. Review app permissions on your phone. Location access, microphone access, and contact access are routinely requested by apps that do not need them.
  8. Consider a VPN only if your ISP is your primary concern and you trust a specific VPN provider more than your ISP. Our post on VPN limitations covers when this is and is not a useful trade.

Steps 1–4 cover the most common threats for most people and require modest effort. Steps 5–8 address more specific concerns. Anything beyond this is threat-specific and requires working through the five questions for your actual situation — not for the most extreme situation you can imagine.
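As a concrete illustration of step 2, the generator inside a password manager can be approximated in a few lines using Python's standard secrets module. The 20-character length, the six-word passphrase, and the tiny word list are placeholders (real diceware lists contain thousands of words):

```python
import secrets
import string

def random_password(length: int = 20) -> str:
    """Generate a high-entropy random password, as a password manager would."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def passphrase(words: list[str], n: int = 6) -> str:
    """Generate a memorable diceware-style passphrase from a word list."""
    return "-".join(secrets.choice(words) for _ in range(n))

# Placeholder word list; a real diceware list has ~7,776 entries.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern", "gravel", "plume"]

print(random_password())   # a 20-character random password, different every run
print(passphrase(WORDS))
```

The point is not to roll your own tool, it is that unique, randomly generated credentials are cheap to produce and close off the credential-reuse attack entirely.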

The Model Is Allowed to Change

Your threat model is not a permanent document. Your life circumstances change: a new job, a change in political activity, a new relationship, a legal dispute. Revisit your threat model when your circumstances change and update your tools accordingly. A threat model written for a normal person does not automatically apply to someone who becomes a source in a high-profile investigation two years later.

The habit of asking "who is my adversary and what can they do" — even informally — is more valuable than any specific tool recommendation. Tools change; the questions stay relevant.

See also our 2026 privacy stack overview for current tool recommendations organized by threat profile.

Try Haven free for 15 days

Encrypted email and chat in one app. No credit card required.

Get Started →