The Praxeology of Privacy ~ Chapter 14: Anonymous Communication Networks
The internet leaks metadata. VPNs help locally. Tor distributes trust through relays. Mixnets defeat global adversaries. Choose tools matching your threat model.
"We kill people based on metadata."
Michael Hayden, former NSA and CIA Director^1^
Introduction
Chapter 13 established cryptographic foundations. Encryption protects the content of communications. But encryption alone is insufficient for privacy.
The problem is metadata: information about communications, not their content. Who communicates with whom, when, how often, and for how long reveals patterns that surveillance can exploit. Even with perfect content encryption, metadata enables comprehensive monitoring of social networks, political associations, and personal relationships.
This chapter examines architectural solutions to the metadata problem. We begin with why the internet's fundamental design leaks privacy, then examine solutions in order of increasing protection: VPNs as a simple first step, Tor and I2P as multi-hop solutions, and mixnets as the strongest available protection.
14.1 The Problem: How the Internet Leaks Privacy
IP Addresses: Built-In Identifiers
The Internet Protocol was designed for reliability, not privacy. Every packet contains the sender's IP address in plaintext, visible to every router along the path. The exposure is not a bug but a fundamental design choice: routers need to know where packets came from to send responses back.
Your IP address functions as an identifier. It connects to your physical location (often to your street address), your internet service provider, and through your ISP's records, to your legal identity. Every website you visit, every service you connect to, receives your IP address. They know where you are connecting from, and with minimal effort, who you are.
Even when content is encrypted, the IP header is not. HTTPS hides what you read on a website; it does not hide that you visited that website. Your ISP sees every domain you connect to. Network observers along the path see source and destination IP addresses on every packet.
Metadata: The Full Picture
Metadata is data about data. For communications, metadata includes who (sender and recipient identifiers such as email addresses, phone numbers, and IP addresses), when (timestamps of communications), how long (duration of calls, size of messages), how often (frequency of communication between parties), and where (location data from devices). Content encryption hides what was said. Metadata reveals everything else.
Hayden's statement that "we kill people based on metadata" is not hyperbole.^1^ Intelligence agencies have acknowledged using metadata for targeting decisions. The capabilities are substantial. Social network mapping uses communication patterns to reveal social structures: who the leaders are, who the intermediaries are, who the isolated actors are. Behavioral analysis tracks changes in communication patterns that signal significant events; suddenly increased communication frequency may indicate planning, while sudden silence may indicate an operation in progress. Location tracking through mobile device metadata reveals physical movements, routines, and deviations from routine. Association inference connects individuals to known targets through communication records, establishing guilt by association.
Why Content Encryption Is Insufficient
Consider encrypted messaging between two parties. An observer who cannot read the messages still sees that Alice and Bob communicate, how frequently they communicate, when they communicate (times of day, days of week), how their communication patterns change over time, and who else each party communicates with.
Such information suffices for surveillance purposes in many contexts. Knowing that a journalist communicates frequently with a particular government official is valuable intelligence regardless of message content.
Traffic Analysis
Traffic analysis is the systematic exploitation of metadata. Timing correlation observes that if Alice sends a message at 2:03:47 and Bob receives one at 2:03:48, they are probably communicating, even if the content is encrypted and the route is indirect. Volume correlation matches message sizes across network hops to link sender and receiver. Pattern analysis identifies regular communication patterns (every Tuesday at 3pm) that reveal relationships even without content. Network flow analysis follows traffic through network infrastructure to reveal endpoints even when individual hops are encrypted. Traffic analysis works because communication must traverse physical infrastructure that can be observed.
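The timing correlation described above can be sketched in a few lines. This is an illustrative toy, not a real attack tool; all names, timestamps, and the 0.5-second window are invented for the example.

```python
# Toy sketch of timing correlation: an observer who sees when flows
# enter and leave a network can pair them by arrival time, even though
# every payload is encrypted. All names and numbers are invented.

def correlate(entries, exits, window=0.5):
    """Pair each entry event with exit events seen within `window` seconds."""
    links = []
    for user, t_in in entries:
        for dest, t_out in exits:
            if 0 <= t_out - t_in <= window:
                links.append((user, dest))
    return links

# Content is encrypted end to end, but the timestamps leak the relationship.
entries = [("alice", 2.047), ("bob", 5.310)]
exits = [("news-site", 2.198), ("mail-host", 5.422)]

print(correlate(entries, exits))
# -> [('alice', 'news-site'), ('bob', 'mail-host')]
```

The adversary never decrypts anything; the correlation comes entirely from observing when traffic enters and exits the infrastructure.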
14.2 Requirements for Anonymous Communication
Formal Properties
Anonymous communication systems aim to provide several properties, each addressing different aspects of the metadata problem.
Sender anonymity means observers cannot determine who sent a message. Even if the content is known, the originator remains hidden. This protects whistleblowers, journalists, and anyone whose speech might invite retaliation. Receiver anonymity means observers cannot determine who received a message. The intended audience is hidden, protecting recipients from association with senders who may be targeted.
Unlinkability means observers cannot determine that a particular sender and receiver are communicating with each other. Even if both parties are known to use the system, connecting their activity defeats surveillance that relies on mapping relationships. Unobservability means observers cannot determine that a communication is occurring at all. The communication is hidden among other traffic or cover activity. This is the strongest property: not just hiding who communicates, but hiding that communication happens.
These properties have different strengths depending on the adversary model. Against a local observer (seeing only part of the network), weaker protections may suffice. Against a global adversary (seeing all network traffic), stronger protections are required. The design choice reflects expected threats: journalists in authoritarian regimes face different adversaries than users seeking privacy from advertisers.
Adversary Models
The strength of an anonymity system depends on assumptions about the adversary. A passive adversary observes traffic without modifying it. They collect data, analyze patterns, and attempt identification through correlation. An active adversary can inject, delay, drop, or modify messages. They might run their own nodes in the network, perform timing attacks by introducing delays, or attempt to force users into identifiable behavior.
The local adversary sees only a portion of the network: perhaps the user's connection to their ISP, or traffic through a single relay. The global adversary sees all network traffic simultaneously. This is the most powerful model, as it enables end-to-end timing correlation that no amount of encryption can prevent without additional countermeasures.
Realistic threat modeling requires honest assessment. Most users do not face nation-state adversaries. But systems designed only for weak adversaries fail catastrophically when stronger ones appear. The cypherpunk approach builds systems that resist the strongest plausible adversaries, recognizing that capabilities expand over time.
The Anonymity Set
Anonymity is relative to an anonymity set: the group of possible senders or receivers among whom the actual party cannot be distinguished. This is the fundamental measure of anonymity strength.
If the anonymity set contains only three people, the adversary has a 1-in-3 chance of correct identification. If it contains millions, the adversary's chance of a correct guess drops to one in millions. The mathematics are straightforward, but the implications are profound: anonymity depends on who else is using the system.
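The arithmetic behind anonymity sets is simple enough to show directly: against a uniform set of size n, the adversary's best random guess succeeds with probability 1/n, and the uncertainty, measured in bits, grows only logarithmically with n. The set sizes below are illustrative.

```python
# Probability of a correct random guess and entropy (bits) for a
# uniform anonymity set of size n. Figures are illustrative.

import math

def anonymity(n):
    """Return (guess probability, entropy in bits) for set size n."""
    return 1 / n, math.log2(n)

for n in (3, 1000, 1_000_000):
    p, bits = anonymity(n)
    print(f"set size {n:>9}: guess probability {p:.6f}, {bits:.1f} bits")
```

Note the logarithm: going from a thousand users to a million adds only about ten bits of uncertainty, which is one reason adoption matters so much.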
Anonymity sets depend on who else is using the system at the same time. This creates a collective action dynamic: the more users, the stronger the anonymity for each user. A system with few users provides weak anonymity regardless of cryptographic strength. A system with many users can provide strong anonymity even with simpler cryptography.
This dynamic explains why adoption matters as much as technology. Privacy tools benefit from broad adoption. Early adopters sacrifice some anonymity to bootstrap the system; later adopters benefit from the anonymity set the pioneers created. The collective action problem also creates vulnerability: if adversaries can reduce adoption (through legal pressure, usability degradation, or stigmatization), they weaken anonymity for remaining users.
Cover Traffic and Dummy Messages
Some systems introduce cover traffic: fake messages indistinguishable from real ones. Cover traffic has several purposes. It maintains consistent traffic volume regardless of actual usage, defeating volume analysis. It creates activity even when users are idle, making timing analysis harder. It expands the effective anonymity set by including dummy messages among possible "real" messages.
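One common form of cover traffic is constant-rate sending: the client emits exactly one message per tick, substituting a dummy whenever nothing real is queued. This is a minimal sketch of that idea, not any particular system's implementation; the tick model and "DUMMY" marker are invented for illustration.

```python
# Sketch of constant-rate cover traffic: an observer sees the same
# volume per tick whether or not the user is actually communicating.

from collections import deque

def emit_schedule(real_messages, ticks):
    """Return what a constant-rate sender puts on the wire each tick."""
    queue = deque(real_messages)
    wire = []
    for _ in range(ticks):
        # Real message if one is waiting, otherwise an indistinguishable dummy.
        wire.append(queue.popleft() if queue else "DUMMY")
    return wire

print(emit_schedule(["m1", "m2"], ticks=5))
# -> ['m1', 'm2', 'DUMMY', 'DUMMY', 'DUMMY']
```

In a real system the dummies would be encrypted and padded so that only the recipient can tell them apart from genuine messages; here the label is visible only for clarity.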
Cover traffic has costs. Bandwidth consumption increases, as dummy messages consume the same resources as real ones. Latency may increase if systems wait to batch real and dummy traffic. Complexity increases, since distinguishing cover from real traffic must be impossible for observers but possible for recipients.
The design choice depends on threat model. Against passive local adversaries, cover traffic may be unnecessary overhead. Against active global adversaries, it may be essential for effective protection.
14.3 VPNs: A Simple but Limited Solution
What VPNs Actually Provide
VPNs (Virtual Private Networks) are the simplest approach to hiding your IP address. They encrypt traffic between you and the VPN provider's server, which then forwards it to the destination. VPNs encrypt the local network segment so traffic between user and VPN server is protected against local network observers such as coffee shop WiFi or your ISP. They change your IP address so destinations see the VPN server's IP, not yours. They provide geographic relocation so users appear to be in the VPN server's location. For many users, this is enough. If your concern is your ISP logging your browsing history, or the coffee shop network operator sniffing your traffic, a VPN solves the problem.
What VPNs Do Not Provide
VPNs are not anonymity tools. The VPN provider knows your real IP address and sees all your traffic destinations; you have not eliminated surveillance but shifted it from your ISP to your VPN provider. If the provider logs traffic or cooperates with authorities, you have no protection. Websites still see browsing patterns, cookies, and behavioral fingerprints that identify users regardless of which IP address connects. Unlike multi-hop systems, VPNs offer no defense in depth; compromise of the VPN provider compromises everything.
Multi-Hop VPNs
Some VPN providers offer multi-hop configurations, routing traffic through two or more servers. Users can also chain VPNs manually by connecting to provider A, then through that connection to provider B. This improves the trust situation: provider A sees your real IP but not your destination; provider B sees your destination but only provider A's IP.
However, multi-hop VPNs remain weaker than Tor for several reasons. The number of VPN providers is small compared to Tor's thousands of relays, limiting the possible routing combinations. VPN providers are commercial entities with known identities, making them easier to pressure or compromise than pseudonymous Tor relay operators. The same provider often controls multiple hops in their "multi-hop" offering, providing less actual trust distribution than it appears. And unlike Tor's constantly changing circuits, VPN configurations tend to be static.
Multi-hop VPNs represent an improvement over single-hop, but they do not achieve the trust distribution of purpose-built anonymity networks.
Trust Model Problems
VPN providers make claims about logging policies that cannot be verified. "No logs" claims have been contradicted when providers have turned over data to authorities. Users have no way to audit provider practices.
Even well-intentioned providers can be compelled by legal process to log, compromised by attackers, or acquired by less privacy-respecting companies.
The VPN trust model requires trusting third parties who can be identified and pressured. The model is fundamentally incompatible with threat models that include the VPN provider or entities that can compel the provider.
Appropriate Use Cases
VPNs are appropriate for protecting against local network observers (ISPs, public WiFi), accessing geo-restricted content, and bypassing simple IP-based blocks.
VPNs are not appropriate for anonymity against sophisticated adversaries, protection against the VPN provider, or activities where trust in a third party is unacceptable.
14.4 Onion Routing: Tor and I2P
The Core Insight: Distribute Trust
The fundamental weakness of VPNs is that one entity sees everything. Onion routing solves this by distributing trust across multiple relays. No single relay knows both who you are and what you are accessing.
Tor: Architecture and Operation
Tor (The Onion Router) is the most widely used anonymous communication system.^2^ It uses layered encryption where each layer is decrypted by successive relays.
The user's Tor client builds a circuit through three relays: a guard (entry) node, a middle node, and an exit node. The client knows all three; each relay knows only its neighbors. The message is encrypted three times, once for each relay; the guard decrypts the outer layer and forwards to middle, middle decrypts and forwards to exit, exit decrypts and forwards to destination. No individual relay knows both origin and destination: the guard knows the user but not the destination, the exit knows the destination but not the user, and the middle knows neither. This architecture means that even if one relay is compromised or malicious, anonymity is preserved. An adversary must control multiple relays in the same circuit to link user to destination.
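The layered encryption described above can be modeled with a toy cipher. This sketch assumes nothing about Tor's actual cryptography (which uses TLS and AES with per-hop key negotiation); the XOR keystream and the key names are stand-ins chosen so the layering logic is visible.

```python
# Toy model of layered ("onion") encryption. Each relay strips exactly
# one layer with its own key; only after the exit's layer is removed
# does the plaintext appear. NOT Tor's real cryptography.

import hashlib

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

keys = [b"guard-key", b"middle-key", b"exit-key"]  # hypothetical per-relay keys
message = b"GET /"

# The client wraps the message once per relay, innermost layer last.
onion = message
for key in reversed(keys):
    onion = xor_layer(onion, key)

# Each relay in turn removes exactly one layer as the cell traverses the circuit.
for key in keys:
    onion = xor_layer(onion, key)

assert onion == message  # only after all three layers is the request readable
```

The point of the layering is visible in the structure: the guard can remove only its own layer and learns nothing about the payload; the exit sees the request but has no layer that names the client.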
Tor Network Economics
Tor operates through volunteer relay operators who donate bandwidth and computing resources. The incentive structure relies on self-interest, ideological commitment, and organizational support. Some operators need Tor themselves and contribute to the network they use. Others operate relays to support freedom of communication as a matter of principle. Some relays are operated by organizations such as universities and privacy advocacy groups as part of their institutional missions. This volunteer model creates sustainability challenges. Exit relays face particular burdens: abuse complaints, legal exposure, and higher bandwidth costs. The network has consistently struggled with insufficient exit capacity.
Tor Limitations
Tor has well-documented limitations, and real-world attacks have demonstrated these vulnerabilities.^6^
Timing attacks allow a global adversary observing both ends of a circuit to correlate timing and link sender to receiver; Tor does not protect against adversaries who can monitor traffic at both entry and exit points. Traffic analysis can reveal information from patterns in circuit usage even without breaking encryption. Website fingerprinting attacks analyze the size and timing patterns of encrypted traffic to identify which websites a user visits; research has achieved over 95% accuracy in controlled settings when monitoring a small set of popular websites, though real-world effectiveness remains debated.^7^
Exit node vulnerabilities exist because exit nodes see unencrypted traffic to destinations unless the destination uses HTTPS, allowing malicious exit operators to observe and modify unencrypted traffic. Guard node compromise is particularly serious because Tor users maintain the same guard nodes for extended periods; an adversary who controls a user's guard sees all their Tor traffic entering the network, and combined with exit observation or website fingerprinting, guard compromise enables deanonymization.
Documented deanonymization attacks have succeeded against Tor users, though the Tor Project's ongoing maintenance has addressed many specific vulnerabilities. In Operation Torpedo (2012), the FBI deployed malware through compromised onion services to unmask users by exploiting browser vulnerabilities.^8^ The 2013 Freedom Hosting attack used similar techniques. Both attacks exploited browser plugins (particularly Flash) that Tor Browser now disables by default; the specific vulnerabilities were patched. In 2014, researchers (allegedly from CMU/CERT) operated over 100 malicious relays comprising 6.4% of guard capacity, using a "relay early" traffic confirmation attack to deanonymize onion service users; The Tor Project discovered the attack, patched the "relay early" vulnerability, and ejected the malicious relays in July 2014.^9^
These historical attacks illustrate an important pattern: Tor undergoes continuous security review and improvement. Specific implementation vulnerabilities, when discovered, are typically patched promptly. The attacks that succeeded exploited bugs that no longer exist. What remains are structural limitations inherent to Tor's low-latency design: timing correlation by global adversaries, traffic analysis, and website fingerprinting cannot be fully eliminated without fundamentally different architecture. Users should distinguish between historical exploits (largely fixed) and structural constraints (inherent to the design).
Nation-states have also demonstrated sophisticated capabilities for detecting and blocking Tor bridges, the unlisted entry points designed to circumvent censorship. China, Iran, and Russia have implemented bridge-blocking with varying degrees of success. Sybil attacks, in which an adversary creates many pseudonymous identities to gain disproportionate influence over a network, let an adversary who operates many relays increase the chance of being selected for circuits, improving attack capabilities.
Performance suffers because multi-hop routing increases latency; Tor is slower than direct connections, sometimes substantially. Usability remains challenging; despite improvements, Tor is harder to use than ordinary browsing, and users make mistakes that compromise anonymity.
Tor's directory authority system represents a point of centralization. A small, fixed set of directory authorities (nine as of this writing) vote hourly to produce the network consensus document that lists all relays, their properties, and their trustworthiness. Clients download this consensus to build circuits. The directory authorities themselves are known entities with stable identities, operated by trusted community members and organizations. While compromise of a single authority has limited impact due to the voting mechanism, the system as a whole depends on this small group remaining honest and uncompromised. This is a pragmatic design choice: fully decentralized consensus is difficult for relay discovery, and the directory authority model has worked adequately. But it differs from the fully trustless models that some systems aspire to.
Tor provides strong protection against many adversaries, but well-resourced nation-states with global surveillance capabilities or the ability to operate malicious infrastructure have demonstrated successful attacks.
Onion Services: Censorship-Resistant Publishing
Tor's best-known use is anonymizing outbound connections: users access regular websites without revealing their identity. But Tor also enables the reverse: publishing services without revealing the server's location or requiring any registration with domain authorities.
An onion service generates a cryptographic key pair. For current (version 3) services, the public key, together with a checksum and version byte, is encoded to form the .onion address (e.g., duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion). The service connects to the Tor network and establishes introduction points. Clients connect through the Tor network to these introduction points, then establish a rendezvous circuit. Neither client nor server reveals their IP address to the other or to any relay.
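The derivation of a v3 .onion address is compact enough to show in full, following the construction in Tor's rendezvous specification: base32 of the ed25519 public key, a two-byte SHA3-256 checksum, and a version byte. The 32 key bytes below are a placeholder, not a real service key.

```python
# Derive a v3 .onion address from an ed25519 public key, per the Tor
# rend-spec-v3 construction. The key bytes are a placeholder.

import base64
import hashlib

def onion_v3_address(pubkey: bytes) -> str:
    assert len(pubkey) == 32            # ed25519 public key
    version = b"\x03"
    # Checksum binds the key and version to the address.
    checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    # 35 bytes encode to exactly 56 base32 characters, no padding.
    addr = base64.b32encode(pubkey + checksum + version).decode().lower()
    return addr + ".onion"

example = onion_v3_address(bytes(range(32)))
print(example)  # 56 base32 characters followed by ".onion"
```

Because the address is a function of the key, it is self-authenticating: anyone who can verify signatures under that key knows they reached the right service, with no registrar or certificate authority involved.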
Onion services require no domain registration. Traditional websites require domain names purchased through registrars who enforce identity requirements and can revoke domains under legal pressure. Onion addresses derive from cryptographic keys. No registrar exists to pressure, no ICANN policy to invoke, no DNS seizure possible. The address is self-authenticating: if you reach the service, you have reached the right service, verified by cryptography rather than certificate authorities.
The server location remains hidden. The hosting server's IP address never appears in any connection. Adversaries cannot identify which server to raid, which hosting provider to pressure, or which jurisdiction's laws apply. A website can be published from anywhere and remain accessible as long as any path through the Tor network exists.
Applications include whistleblowing platforms like SecureDrop, which allow sources to submit documents without revealing their location to the news organization or anyone else. Censored publications can maintain presence despite government takedown orders. Forums and markets can operate without the jurisdictional vulnerabilities that destroyed centralized predecessors. Even conventional services like Facebook and the BBC operate .onion versions to reach users in censoring countries.
Limitations exist. Onion services are slower than regular websites due to the multiple hops. The long random addresses are difficult to communicate and verify, though conventions like vanity addresses and trusted directories help. And while the server location is hidden, operational security failures can still reveal operators through other means.
I2P: A Different Architecture for Internal Services
I2P (Invisible Internet Project) uses similar principles to Tor but with different design goals and architectural choices.^10^
Garlic routing differs from onion routing in a significant way: rather than sending messages individually, garlic routing bundles multiple messages (called "cloves") together into encrypted packets. These bundled packets travel through the network before being separated at endpoints. This bundling makes traffic analysis harder because an observer cannot easily distinguish which clove corresponds to which communication stream.
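The bundling idea can be sketched as follows. This is a toy, not I2P's wire format: real cloves are individually encrypted, whereas here the bundle is merely serialized and padded to a uniform size so that a forwarding node cannot count the streams inside by looking at the length.

```python
# Toy garlic bundling: pack several independent messages ("cloves")
# into one fixed-size opaque blob. Encryption is omitted for brevity;
# the fixed size alone hides how many cloves a bundle carries.

import json

BUNDLE_SIZE = 1024  # illustrative fixed bundle size

def bundle(cloves):
    """Serialize cloves and pad so all bundles look identical in length."""
    blob = json.dumps(cloves).encode()
    assert len(blob) <= BUNDLE_SIZE
    return blob.ljust(BUNDLE_SIZE, b"\x00")

def unbundle(blob):
    """Recover the cloves at the endpoint."""
    return json.loads(blob.rstrip(b"\x00").decode())

one = bundle(["to-alice: hi"])
three = bundle(["to-alice: hi", "to-bob: ack", "to-carol: file-part-7"])
assert len(one) == len(three)          # observer cannot count cloves by size
assert unbundle(three)[1] == "to-bob: ack"
```

The property being illustrated is that a one-clove bundle and a three-clove bundle are indistinguishable in transit, which is what frustrates per-stream traffic analysis.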
I2P also uses unidirectional tunnels rather than Tor's bidirectional circuits. Each communication requires four tunnels: outbound and inbound for each party. Data sent through I2P takes one path to the destination and a different path for responses. This architectural choice makes observation more difficult because an adversary cannot assume the return path mirrors the outbound path.
I2P is a self-contained network that hosts hidden services (called "eepsites") accessible only within the network; users primarily communicate with other I2P users rather than anonymizing connections to external sites. Because traffic stays within I2P, there are no exit nodes with their associated vulnerabilities and abuse issues. The architecture is distributed: every I2P user also routes traffic for others, creating a more symmetric network than Tor's client-relay distinction.
I2P has its own security challenges. The network relies on a distributed database (the "netDB") maintained by floodfill routers. Research in 2013 demonstrated that Sybil attacks against floodfill routers could compromise the network; attackers who controlled sufficient floodfill peers could manipulate the database to enable deanonymization.^11^ The I2P project responded by implementing mitigations including separating the netDB into multiple sub-databases and improving peer selection algorithms. Like Tor, I2P undergoes continuous development; discovered vulnerabilities are addressed through software updates. The smaller network size compared to Tor means fewer resources for security research, but the project maintains active development and responds to reported issues.
I2P's tradeoffs differ from Tor's. Fewer users means smaller anonymity sets. Tor has received substantially more academic scrutiny, leaving I2P's security properties less thoroughly analyzed. The focus on internal services makes accessing the regular internet less convenient than with Tor. I2P is appropriate for users whose primary need is communication with other I2P users rather than anonymous access to the general internet.
14.5 Mixnets: The Strongest Protection
Why Onion Routing Is Not Enough
Tor and I2P protect against adversaries who cannot observe the entire network. But a global adversary, one who can monitor traffic entering and leaving the network simultaneously, can perform timing correlation. If a message enters Tor at 2:03:47.123 and exits at 2:03:47.456, the timing links them regardless of the encryption layers in between.
Mixnets solve this fundamental limitation.
Chaum's Original Vision
David Chaum proposed mixnets in 1981, before Tor existed.^4^ The concept: messages are collected, batched, reordered, and forwarded by mix nodes. Batching and reordering defeat timing analysis by breaking the relationship between input and output timing.
How Mixing Defeats Traffic Analysis
In a mixnet:
- Messages arrive at the mix node over some time period
- The mix collects messages until it has enough for a batch
- The mix decrypts its layer of encryption on each message
- The mix reorders messages (shuffles the batch)
- The mix forwards all messages simultaneously
An observer seeing messages enter and leave the mix cannot link inputs to outputs. Timing correlation fails because all outputs leave together. Order correlation fails because the order is shuffled. Even a global adversary who sees everything cannot determine which input corresponds to which output.
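The steps above can be sketched as a threshold mix: collect a fixed batch, shuffle it, and flush everything at once. This is a minimal illustration, not a real mixnet node; the per-message decryption step is omitted, and the batch size and seed are arbitrary.

```python
# Minimal threshold mix: nothing leaves until the batch fills, then all
# messages leave together in shuffled order, breaking both timing and
# order correlation between inputs and outputs.

import random

class ThresholdMix:
    def __init__(self, batch_size: int, rng=None):
        self.batch_size = batch_size
        self.pool = []
        self.rng = rng or random.Random()

    def receive(self, message):
        """Accept a message; flush the whole batch once the threshold is hit."""
        # (A real mix would first strip its layer of encryption here.)
        self.pool.append(message)
        if len(self.pool) >= self.batch_size:
            batch, self.pool = self.pool, []
            self.rng.shuffle(batch)   # destroy input/output ordering
            return batch              # all messages depart simultaneously
        return None                   # hold everything until the batch fills

mix = ThresholdMix(batch_size=3, rng=random.Random(7))
assert mix.receive("a") is None       # held
assert mix.receive("b") is None       # held
out = mix.receive("c")                # threshold reached: flush
assert sorted(out) == ["a", "b", "c"]
```

Because every output in a flush shares one departure time, the timing channel that defeats onion routing carries no information here; the cost is that the first message waited for the last.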
High Latency as Necessary Tradeoff
Mixing requires latency. Messages must wait for batches to accumulate. This makes mixnets unsuitable for interactive communication (instant messaging, web browsing) but suitable for asynchronous communication (email, file transfer, cryptocurrency transactions).
The tradeoff is fundamental: lower latency means smaller batches, which means weaker anonymity. Higher latency enables larger batches and stronger anonymity. No way exists around this limitation; it is inherent to the mixing approach.
Modern Implementations
Modern mixnet projects include Nym, which uses the Sphinx packet format and economic incentives for mix operators.^12^ Nym introduces cover traffic (fake messages) to further defeat traffic analysis and uses cryptocurrency-based incentives instead of volunteer operation. Loopix is a mixnet design providing sender-receiver unlinkability with resistance to active attacks.
These projects attempt to make mixnets practical for modern use while preserving their strong anonymity properties. However, a critical limitation must be acknowledged: no currently deployed mixnet has the user base to provide meaningful anonymity sets. High-latency mixnets for email (like the historical Mixmaster and Mixminion systems) are essentially defunct. Modern mixnets like Nym remain in early deployment with limited adoption. The theoretical strength of mixing is real, but anonymity depends on who else is using the system; a mixnet with few users provides weak anonymity regardless of cryptographic sophistication. For applications where latency is acceptable and the mixnet achieves sufficient adoption, mixnets provide the strongest available protection against traffic analysis.
14.6 Comparative Analysis
Tradeoffs
Each system presents distinct tradeoffs. VPNs offer low latency and high usability but provide only weak anonymity and require trusting the provider completely. Tor provides strong anonymity against local adversaries with medium latency and usability; users need not trust any single relay. I2P offers similar properties but optimized for internal network use, with lower usability for general browsing. Mixnets provide the strongest anonymity, effective even against global adversaries, but at the cost of high latency and low usability; like Tor and I2P, they require no trust in any single node.
Different Tools for Different Threat Models
The right tool depends on the threat model. Against local observers such as ISPs or public WiFi networks, VPN is sufficient and easiest. Against destination websites seeking to track users, Tor provides multi-hop protection that VPNs cannot. Against well-resourced adversaries with global visibility, mixnets provide the strongest protection but require accepting high latency. For internal community communication, I2P may be most appropriate. For general anonymous browsing, Tor offers the best usability-anonymity tradeoff for most users.
No Universal Solution
No universally optimal anonymous communication tool exists. Each involves tradeoffs. Users must understand their threat model and choose accordingly.
The perfect being the enemy of the good, practical anonymity often means accepting tools with known limitations instead of waiting for ideal solutions that may never exist.
Chapter Summary
The internet's design leaks privacy by including IP addresses in every packet. Metadata, information about communications rather than their content, reveals communication patterns even when content is encrypted. Traffic analysis exploits this metadata to map social networks, track behavior, and establish associations.
VPNs provide a simple first step: encrypting the local network segment and changing your visible IP address. However, VPNs require trusting the provider completely. They are appropriate for protection against local observers but not for anonymity against sophisticated adversaries.
Tor uses onion routing with layered encryption through three relays, ensuring no single relay knows both origin and destination. This distributes trust and provides strong anonymity against adversaries who cannot observe the entire network. I2P uses similar principles but focuses on internal network services instead of accessing the regular internet. Both remain vulnerable to global adversaries who can perform timing correlation.
Mixnets provide the strongest protection by batching and reordering messages, defeating even global traffic analysis. The cost is high latency that makes them unsuitable for interactive use but appropriate for asynchronous communication.
Different tools suit different threat models. Against local observers, VPNs suffice. Against destination tracking, Tor provides multi-hop protection. For highest-security requirements against global adversaries, mixnets provide the strongest available protection. No universal solution exists; users must choose based on their specific requirements and accept the associated tradeoffs.
Footnotes
^1^ The quote is widely attributed to Michael Hayden from a 2014 debate at Johns Hopkins University. See David Cole, "We Kill People Based on Metadata," New York Review of Books, May 10, 2014.
^2^ Roger Dingledine, Nick Mathewson, and Paul Syverson, "Tor: The Second-Generation Onion Router," Proceedings of the 13th USENIX Security Symposium (2004): 303-320.
^4^ David Chaum, "Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms," Communications of the ACM 24, no. 2 (1981): 84-90.
^6^ For a comprehensive survey of attacks on Tor, see the "Attacks on Tor" repository maintained by security researchers, available at https://github.com/Attacks-on-Tor/Attacks-on-Tor.
^7^ On website fingerprinting, see Giovanni Cherubin et al., "Online Website Fingerprinting: Evaluating Website Fingerprinting Attacks on Tor in the Real World," USENIX Security Symposium (2022). The study found that accuracy degrades rapidly when monitoring larger sets of websites in realistic conditions.
^8^ On Operation Torpedo, see Kevin Poulsen, "FBI Used Drive-By-Downloads to Expose Tor Pedophiles," Infosecurity Magazine, August 2014. The NIT (Network Investigative Technique) exploited a Flash vulnerability to reveal real IP addresses.
^9^ The Tor Project, "Tor Security Advisory: 'Relay Early' Traffic Confirmation Attack," July 30, 2014, available at https://blog.torproject.org/tor-security-advisory-relay-early-traffic-confirmation-attack/. See also Philipp Winter et al., "Identifying and Characterizing Sybils in the Tor Network," USENIX Security Symposium (2016).
^10^ For I2P technical documentation, see https://geti2p.net/en/docs. On garlic routing specifically, see https://geti2p.net/en/docs/how/garlic-routing.
^11^ Christoph Egger et al., "Practical Attacks Against the I2P Network," RAID Symposium (2013). The researchers demonstrated successful attacks using only 20 malicious floodfill peers. For I2P's threat model and mitigations, see https://geti2p.net/en/docs/how/threat-model.
^12^ Claudia Diaz, Harry Halpin, and Aggelos Kiayias, "The Nym Network," available at https://nym.com/.
Previous chapter: nostr:naddr1qqgxzdt9xcexvef5v3jrsefexsunzq3qklkk3vrzme455yh9rl2jshq7rc8dpegj3ndf82c3ks2sk40dxt7qxpqqqp65wund9cr
Next Chapter: nostr:naddr1qqgrwv33xvenvdnyxc6ngv3hv5mxxq3qklkk3vrzme455yh9rl2jshq7rc8dpegj3ndf82c3ks2sk40dxt7qxpqqqp65wrxhrrt