Standard Tor works by routing your traffic through three volunteer-operated relays. The list of those relays is published by Tor's directory authorities so that clients can verify which ones are part of the network and check their cryptographic identity keys. As of 2026, there are roughly 7,000 of these public relays.
A nation-state censor with the ability to filter traffic at the ISP level can simply enumerate every IP on the directory and drop all packets to them. China, Iran, Russia, Turkmenistan, and several others have done exactly this at various points. From the user's perspective, Tor "stops working" — connections to known relays time out — and that is intentional on the censor's part.
The Tor Project's response, developed in stages from 2010 onward, is a two-layer system: bridges, which are unlisted relays, and pluggable transports, which make the bridge traffic look like something other than Tor.
Bridges: Unlisted Entry Relays
A bridge is structurally identical to a normal Tor relay except that its address is not published in the public directory. To use one, a client has to learn its IP and port through some out-of-band channel: the Tor Project's BridgeDB website, an email request, the built-in "Request a bridge" button in Tor Browser, or a friend who already has one.
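Once obtained, the bridge line goes straight into the client's torrc (or Tor Browser's bridge settings). A sketch with placeholder address, fingerprint, and cert values; a real line comes from BridgeDB:

```
UseBridges 1
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
Bridge obfs4 192.0.2.10:443 0123456789ABCDEF0123456789ABCDEF01234567 cert=EXAMPLEVALUE iat-mode=0
```

The `cert` value encodes the bridge's identity material, which is why a bridge line is more than just an IP and port.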
BridgeDB rate-limits requests and returns different bridges to different requesters, which is meant to slow down a censor who might try to enumerate the entire bridge pool by simply asking. It is not perfect — Iran and China are both known to harvest bridges by farming requests through residential IPs — but it raises the cost meaningfully.
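One plausible way to get that property (for illustration only, not BridgeDB's actual code) is to bucket requesters by network prefix and time period, so a censor farming requests from one subnet keeps seeing the same small slice of the pool:

```python
import hashlib

def bridges_for_requester(ip: str, period: str, pool: list[str], k: int = 3) -> list[str]:
    """Bucket requesters by /16 prefix and time period so nearby or
    repeated requests keep seeing the same few bridges. A plausible
    scheme for illustration, not BridgeDB's actual algorithm."""
    bucket = ".".join(ip.split(".")[:2]) + "|" + period
    # Rank the whole pool by a keyed hash and hand out the top k.
    ranked = sorted(
        pool,
        key=lambda b: hashlib.sha256((bucket + "|" + b).encode()).hexdigest(),
    )
    return ranked[:k]

pool = [f"bridge-{i}" for i in range(100)]
# Two requesters in the same /16 during the same period see identical bridges.
print(bridges_for_requester("203.0.113.5", "2026-w07", pool))
print(bridges_for_requester("203.0.113.99", "2026-w07", pool))
```

Enumerating the pool then requires requests from many distinct networks over many periods, which is exactly the cost the farming operations mentioned above are paying.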
A bridge alone solves the "this IP is on a public block list" problem. It does not solve the "this connection looks like Tor" problem. Tor's TLS handshake has historically had recognizable fingerprints, and deep packet inspection has been able to identify Tor flows by their TLS characteristics, cell sizes, and timing patterns. That is what pluggable transports address.
The architecture treats the transport — how Tor's data is encapsulated on the wire — as a swappable module. Tor itself produces a stream of cells; a transport plugin wraps that stream in something that looks like other traffic. New transports can be deployed without changing the core Tor protocol. The interface is documented in the Tor Project's pluggable transport specification (pt-spec).
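The handshake between tor and a transport plugin is a simple line-oriented exchange: tor sets environment variables, and the plugin answers on stdout. A minimal sketch of the client-side announcement, per version 1 of the spec (the port number is arbitrary):

```python
import os

def pt_client_announce(transport: str, host: str, port: int) -> list[str]:
    """Lines a client-side transport plugin writes to stdout so the
    parent tor process knows how to reach it (pt-spec version 1)."""
    # tor advertises the spec versions it speaks via an env variable.
    versions = os.environ.get("TOR_PT_MANAGED_TRANSPORT_VER", "1").split(",")
    if "1" not in versions:
        return ["VERSION-ERROR no-version"]
    return [
        "VERSION 1",
        # The plugin exposes a local SOCKS5 port; tor sends traffic there
        # and the plugin obfuscates it on the way to the bridge.
        f"CMETHOD {transport} socks5 {host}:{port}",
        "CMETHODS DONE",
    ]

for line in pt_client_announce("obfs4", "127.0.0.1", 46653):
    print(line)
```

Because the contract is just "hand me a local SOCKS port," tor never needs to know how the bytes are disguised, which is what makes the transports below interchangeable.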
obfs4: Random-Looking Bytes
obfs4, deployed in 2014, is the longest-running pluggable transport still in regular use. Its approach is straightforward: after a handshake authenticated by the bridge's identity material, which the client receives out of band as part of the bridge line, encrypt every byte on the wire so that the resulting stream is statistically indistinguishable from uniform random data.
To a passive observer, obfs4 traffic has no TLS handshake, no recognizable header, no protocol-specific structure. It just looks like noise. The advantage is that the censor has to take an active position — blocking all traffic that looks random — rather than fingerprinting a specific protocol.
The disadvantage is that a sufficiently aggressive censor can do exactly that. China's Great Firewall has at various points blocked any TLS-less flow that does not match a known plaintext protocol, on the theory that legitimate use of high-entropy unencrypted traffic is rare. Iran has experimented with the same approach. When that happens, obfs4 stops working in those countries and users need a transport that mimics a specific protocol instead.
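The distinguisher such a censor applies can be as simple as an entropy estimate over the first packets of a flow. A sketch, with an illustrative sample rather than real captured traffic:

```python
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; 8.0 means uniform random."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A plaintext protocol sits far below the 8-bit ceiling; obfs4-style
# traffic sits near it, which is itself a distinguishing feature.
http_like = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 20
noise = os.urandom(len(http_like))
print(f"http-like:  {byte_entropy(http_like):.2f} bits/byte")
print(f"obfs4-like: {byte_entropy(noise):.2f} bits/byte")
```

A real DPI box combines this with checks for known plaintext protocols, but the underlying idea is the same: "looks like nothing" is a recognizable category of its own.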
meek: Tunneling Through CDNs
meek takes the opposite approach. Instead of looking like nothing, it looks like an HTTPS request to a major cloud service that the censor cannot afford to block — typically Microsoft Azure, Amazon CloudFront, or Google App Engine — and uses domain fronting to actually reach the bridge.
In domain fronting, the TLS Server Name Indication (SNI) field claims the connection is going to www.example.com, but the HTTP Host header inside the encrypted tunnel actually requests tor-bridge.example.com. The CDN sees both and routes by Host header. The censor sees only the SNI and assumes you are visiting a benign site.
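The asymmetry is easy to see if you construct the two values side by side. This sketch only builds the request bytes; it does not open a TLS connection, and the domain names are the article's placeholders:

```python
def fronted_request(front_domain: str, hidden_host: str, path: str = "/") -> tuple[str, bytes]:
    """Illustration of domain fronting: returns the SNI value (visible
    to DPI in the TLS ClientHello) and the HTTP request (encrypted
    inside the tunnel) that the CDN actually routes on."""
    sni = front_domain  # the censor sees this and nothing else
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {hidden_host}\r\n"  # the CDN routes on this
        "Connection: close\r\n"
        "\r\n"
    ).encode()
    return sni, request

sni, request = fronted_request("www.example.com", "tor-bridge.example.com")
print("censor sees:", sni)
```

Blocking the hidden host therefore requires blocking the front domain, which is the whole point of choosing a front the censor cannot afford to lose.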
meek was very effective for several years. It is much less so now, because the major cloud operators stopped supporting domain fronting under pressure from governments: Google disabled it in April 2018, Amazon banned it explicitly shortly afterward, and Microsoft later shut it down on Azure as well. As of 2026, meek-azure exists but is operationally fragile.
Snowflake: Volunteers Become Proxies
Snowflake is one of the Tor Project's most widely deployed transports. It uses WebRTC — the same protocol responsible for the IP leaks discussed in our WebRTC piece — to route Tor traffic through short-lived proxies running in volunteers' browsers.
When you visit the Snowflake extension page or run the Snowflake plugin, your browser becomes a temporary entry point that proxies a single Tor user's traffic through itself. The censored user's traffic enters Tor via a stranger's residential connection. Because the proxies are scattered across tens of thousands of residential IPs and rotate constantly, IP-blocking becomes useless.
Snowflake's signaling channel — how a client finds a currently available proxy — uses domain-fronted requests to a "broker" service. This is the censor's main pressure point: if the broker is unreachable, no new connections can be set up. Snowflake has been blocked in exactly this way in several countries, and the project keeps rotating its broker and fronting infrastructure to stay ahead.
webtunnel: Hiding Inside Regular HTTPS
webtunnel, deployed in 2024, is the most recent transport. It wraps Tor traffic inside what looks like an ordinary HTTPS connection, upgraded to a WebSocket-style tunnel, to a normal-looking web server. The bridge operator publishes a real website on the same host; visitors who arrive without the secret URL path distributed in the bridge line get plausible-looking HTML, and only Tor clients that request the correct path are upgraded to a Tor tunnel.
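The server's core decision reduces to a dispatch on the request path. A sketch with a hypothetical secret path and deliberately simplified logic, not the actual webtunnel codebase:

```python
import hmac

# Hypothetical secret path; in practice it comes from the bridge line.
SECRET_PATH = "/d41a152cf35e"

def dispatch(path: str, upgrade_header) -> str:
    """How a webtunnel-style server answers a request: only an upgrade
    request to the secret path gets the Tor tunnel; everything else
    gets the decoy site. Constant-time compare resists path probing."""
    if upgrade_header == "websocket" and hmac.compare_digest(path, SECRET_PATH):
        return "tunnel"  # speak Tor over the upgraded connection
    return "decoy"       # serve the plausible cover website

print(dispatch("/", "websocket"))          # a probing censor gets the decoy
print(dispatch(SECRET_PATH, "websocket"))  # a configured client gets the tunnel
```

An active prober who fetches the site sees only the decoy, which is what makes the bridge indistinguishable from the thousands of small sites around it.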
To a censor, the bridge looks indistinguishable from any other small website running on a VPS. There is no high-entropy noise, no domain fronting, no WebRTC signaling. The trade-off is that each bridge has to maintain a convincing decoy website, and the operational complexity of running one is higher than obfs4.
How They Compare in Practice
| Transport | Disguise | Works in China? | Works in Iran? |
|---|---|---|---|
| obfs4 | High-entropy noise | No | Intermittent |
| meek-azure | Domain-fronted HTTPS | Sometimes | Sometimes |
| snowflake | WebRTC to volunteer browsers | Yes (when broker reachable) | Yes (when broker reachable) |
| webtunnel | Indistinguishable from HTTPS | Yes | Yes |
Numbers and capabilities here are approximate and change as the arms race continues. The OONI project publishes ongoing measurements of Tor reachability by country and transport, and the Tor Browser bug tracker is a reliable source for "this transport stopped working in country X" reports.
The Threat Model Question
All four transports conceal the fact that you are using Tor from a network observer. None of them protect against an observer with endpoint access. If your device is compromised, no transport matters. If you log into an identifying account through Tor, the transport keeps the censor in the dark but reveals you to the destination service.
And no transport changes Tor's underlying anonymity properties: the three-hop circuit, the guard relay's persistent role, the exit relay's visibility of your traffic to the destination. For deeper context, see our pieces on Tor versus VPNs and traffic analysis attacks.
Running a Bridge
If you live somewhere uncensored, running an obfs4 bridge is one of the highest-leverage privacy-altruism things you can do. The Tor Project maintains a one-command Docker image and a guide for running a bridge on a $5/month VPS. As of late 2025, the network had roughly 2,500 active obfs4 bridges, with ongoing demand for more in response to specific countries' blocking activities.
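The operator-side torrc is correspondingly short. This sketch uses placeholder ports and contact details; consult the Tor Project's bridge guide for the current recommendations:

```
BridgeRelay 1
ORPort 9001
ServerTransportPlugin obfs4 exec /usr/bin/obfs4proxy
ServerTransportListenAddr obfs4 0.0.0.0:443
ExtORPort auto
ContactInfo admin@example.com
BridgeDistribution any
```

`BridgeDistribution` controls which channel (HTTPS, email, moat, or any) BridgeDB uses to hand out your bridge's address.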
The legal exposure of running a bridge is dramatically lower than running an exit relay, because no traffic leaves the bridge to the wider internet — it just routes inward to other Tor relays. The censorship-evasion infrastructure of the internet is built and sustained by individuals choosing to run a small piece of it. It works because enough of them do.