Max Hillebrand

The Praxeology of Privacy ~ Chapter 19: Operational Security


Operational security prevents adversaries from gathering compromising information. Threat modeling guides defense. Human factors are the weakest link. Perfect OPSEC is impossible.

#The Praxeology of Privacy

"Only amateurs attack machines; professionals target people."

Bruce Schneier^1^

Introduction

Technical tools fail if humans fail. Encryption protecting your messages is worthless if you post the same content publicly under your real name. Tor's anonymity does not help if you log into your personal accounts through it. Bitcoin's pseudonymity does not protect you if you buy coins from an exchange that knows your identity and then use them for sensitive purchases.

Operational security (OPSEC) is the discipline of preventing adversaries from gathering information that could compromise security. It is not a tool but a practice: ongoing attention to the ways human behavior can undermine technical protection.

19.1 Threat Modeling: Who Is Your Adversary?

Define Your Specific Adversary

Security is not absolute; it is relative to a threat model. What threats are you protecting against? The answer determines appropriate measures. A local network observer (someone on your WiFi who might sniff traffic) is addressed by a VPN. Passive surveillance (dragnet monitoring that captures everything without targeting you specifically) is addressed by encryption and anonymization tools. Platform surveillance (the services you use collecting data about your usage) is addressed by choosing privacy-respecting services. Targeted surveillance by a corporation (a specific company actively trying to gather information about you) requires more serious measures. Targeted surveillance by a state (a government agency actively investigating you) is the highest level of concern for most people. Each adversary has different capabilities and different interests. Defending against a coffee shop hacker requires different measures than defending against intelligence agencies.

Assess Adversary Capabilities and Resources

A script kiddie uses tools without understanding them; limited capability, easily deterred. A skilled hacker understands systems thoroughly and can develop novel attacks; more capable but still resource-constrained. A corporation has significant resources and legal authority, and can hire expertise; it may not have the time or interest for sustained targeting. Law enforcement has legal authority, technical capabilities, and time; it may lack resources for sophisticated attacks but has patience. Intelligence agencies have extensive resources, sophisticated capabilities, and legal authority; they are the most capable adversaries. The resources and sophistication of your adversary determine which protective measures are necessary and which are overkill.

Match Defensive Measures to Actual Threats

Defending against NSA when your threat is an abusive ex-partner wastes resources and attention. Defending against a coffee shop hacker when law enforcement is investigating you is dangerously inadequate.

Common errors include over-engineering, under-engineering, and mismatched measures. Over-engineering means using Tor to browse recipes when your threat is an advertising tracker; this adds needless complexity. Under-engineering means using basic encryption when law enforcement is actively investigating you, which is dangerously inadequate. Mismatched measures combine sophisticated technical measures with social media over-sharing, allowing the weak link to defeat the strong protection.

Threat modeling requires honest assessment of who might want your information and what resources they would commit to getting it.

Personal Threat Assessment

Your threat profile emerges from who you are and how you live. Profession matters: journalists, lawyers, medical professionals, activists, and those handling sensitive information face elevated risks. Jurisdiction shapes both threats and protections, since laws on encryption, speech, and financial privacy vary dramatically. Public profile affects targeting: publishing under your real name, having followers, or past controversial statements all increase visibility. Relationships create interconnected risk: family members' social media can expose your location, and business partners' security practices become your vulnerabilities. Financial situation determines certain threats: wealth attracts different threats than poverty. Political context may elevate risk if your beliefs or activities put you at odds with powerful actors.

Not all information requires equal protection. Consider what would hurt most to lose: financial credentials, private communications, medical records, location patterns enabling physical targeting. Then consider what would merely embarrass but not endanger. Finally, identify what you do not care about. Concentrate resources on the first category; accept exposure of the last.

Risk Calibration

Risk cannot be eliminated, only managed. Calibrate acceptable residual risk by considering consequence severity, which ranges from annoyance through financial loss, reputation damage, legal jeopardy, to physical danger. Someone risking embarrassment calculates differently than someone risking imprisonment. Probability assessment matters: are you a likely target or merely caught in dragnet collection? Most people overestimate targeting probability while underestimating dragnet exposure.

Protection costs include time, money, convenience, and social friction. Measures costing more than the expected harm they prevent are not worth implementing. Sustainability is essential: heroic measures requiring constant vigilance fail when vigilance lapses. Social constraints shape viable options: security measures that isolate you from family or professional networks may cost more than they protect.
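To make that comparison concrete, here is a minimal sketch of the expected-harm arithmetic in Python; the probability, consequence, and protection-cost figures are purely illustrative assumptions, not data from this chapter.

```python
# A minimal sketch of the cost-benefit comparison above; the probability,
# consequence, and protection-cost figures are illustrative assumptions.

def expected_harm(probability: float, consequence: float) -> float:
    """Expected yearly loss if the threat is left unmitigated."""
    return probability * consequence

# Hypothetical: a 2% chance per year of credential theft causing ~$10,000 in damage.
harm = expected_harm(probability=0.02, consequence=10_000)   # $200 per year

# Hypothetical: a hardware key plus password manager, ~$80 per year amortized.
protection_cost = 80

if protection_cost < harm:
    print(f"Worth it: ${protection_cost} protection vs ${harm:.0f} expected harm")
else:
    print(f"Reconsider: ${protection_cost} protection vs ${harm:.0f} expected harm")
```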

Threat models are not static. Reassess when circumstances change: new job, new relationship, changed public profile, political shifts, or after any security incident.

Break the OODA Loop at Observation

Chapter 1 introduced Boyd's OODA loop; Chapter 10 applied it to state surveillance. The same framework guides personal operational security.

Every adversary must cycle through Observe, Orient, Decide, Act. Your first priority is to break the loop at Observe. If the adversary cannot see your activity, they cannot analyze it, cannot decide to target you, cannot act against you. Prevention of observation is the most cost-effective defense because it collapses the entire attack chain before resources are committed.

When evaluating a practice or tool, ask: does this prevent observation, or does it only complicate later stages? Using encrypted messaging prevents observation of message content. Using a VPN may only complicate attribution after observation has occurred. Both have value, but preventing observation is primary.

If an adversary has already observed your patterns, they have passed the hardest stage. Subsequent stages are easier to execute. This is why operational security failures are often catastrophic: once observation has occurred, the damage compounds through subsequent stages. Handle reuse, metadata leakage, and pattern correlation all represent observation failures that enable everything that follows.

19.2 The Weakest Link: Human Factors

Social Engineering Attacks

Social engineering attacks exploit human psychology, not technical vulnerabilities:

Phishing uses fake communications that trick people into revealing credentials or installing malware. Pretexting creates false scenarios to manipulate people into providing information or access. Baiting leaves infected devices or media where targets will find them. Tailgating follows authorized people into secure areas.

These attacks work because they exploit trust, curiosity, helpfulness, and fear. Technical defenses do not protect against them; awareness and procedure do.

Coercion and Legal Pressure

The $5 wrench attack, examined in Chapter 5, illustrates that physical coercion can compel disclosure regardless of cryptographic strength. Legal coercion operates similarly: courts can hold individuals in contempt for refusing to disclose passwords, and jurisdictions vary in their rules on compelled disclosure.

Technical measures like deniable encryption, dead-man switches, or distributed secrets can mitigate but not eliminate coercion risks.

Convenience Shortcuts and Laziness

Security measures that impede convenience get bypassed. Password reuse is perhaps the most common shortcut: maintaining unique passwords for every account is tedious, so people reuse passwords, creating single points of failure. Verifying signatures and checksums takes time, so people skip it, accepting unverified software. Maintaining identity separation requires discipline, but tired people take shortcuts, crossing streams. Updates interrupt work, so unpatched systems remain vulnerable.
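A password manager removes the tedium that drives reuse. As a rough illustration, the sketch below generates a unique high-entropy password per account with Python's standard library; the length and alphabet are arbitrary choices, not recommendations from this chapter.

```python
import secrets
import string

# A unique, high-entropy password per account removes the single point of
# failure that reuse creates; a password manager automates exactly this.
# Length and alphabet here are arbitrary illustrative choices.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for account in ("email", "bank", "forum"):     # placeholder account names
    print(account, generate_password())
```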

Sustainable security must account for human laziness. Measures that require constant vigilance fail when vigilance lapses.

Humans Fail Before Technology Fails

In almost every security breach involving good cryptography, the failure was human: using weak passwords, reusing credentials across services, falling for phishing, mixing identities, social media over-sharing, or trusting compromised collaborators.

The technology worked; the humans failed. OPSEC is primarily about managing human behavior, not technical configuration.

19.3 Technical Security Fundamentals

Device Security and Hardening

Security starts with the operating system. The OS mediates all interactions between applications and hardware; a compromised OS undermines every security measure built on top of it.

For desktops, Qubes OS isolates applications in separate virtual machines, so a compromised browser cannot access your files or keys. For phones, GrapheneOS hardens Android with improved sandboxing, verified boot, and removal of Google services. These purpose-built systems provide security that mainstream operating systems cannot match. If specialized systems are impractical, harden what you have: keep systems updated, disable telemetry and unnecessary services, use standard user accounts instead of administrator privileges for daily work, and enable all available security features such as Lockdown Mode on Apple devices.

Full disk encryption protects data if the device is lost or stolen; use LUKS on Linux, FileVault on macOS, or BitLocker on Windows, because without encryption, physical access means complete compromise. Enable firmware passwords to prevent booting from unauthorized media, and on supported hardware, use verified boot to ensure firmware and bootloader integrity. Use a screen lock with a short timeout and strong password; biometrics are convenient but can be compelled, while PINs and passwords have stronger legal protection in some jurisdictions. Every installed application is attack surface, so install only what you need and remove what you do not use. Network services you do not use should not be running; every open port is a potential entry point.

Network Security Practices

Network security prevents eavesdropping and man-in-the-middle attacks. HTTPS should be used everywhere; verify TLS certificates and use browser extensions that enforce HTTPS. Use a VPN for untrusted networks because public WiFi should be treated as hostile. DNS queries can leak information, so use DNS over HTTPS or configure trusted DNS servers. Configure your firewall to block incoming connections you do not need.
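As a concrete illustration of DNS over HTTPS, the following sketch resolves a name through Cloudflare's public JSON endpoint so the query travels inside TLS; the choice of resolver is an assumption for the example, and any trusted DoH provider would serve the same purpose.

```python
import requests  # third-party: pip install requests

# Resolve a name over DNS-over-HTTPS so the query travels inside TLS rather
# than as plaintext UDP. Cloudflare's public JSON endpoint is an illustrative
# choice; any trusted DoH resolver would do.
def doh_lookup(name: str, record_type: str = "A") -> list[str]:
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": name, "type": record_type},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    return [answer["data"] for answer in resp.json().get("Answer", [])]

print(doh_lookup("example.com"))
```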

Key Management and Backup

Cryptographic keys must be both protected and recoverable. Keys should be encrypted, preferably with hardware protection such as HSMs or hardware wallets. Keys without backup are vulnerable to loss, but backups expand attack surface. Consider recovery planning for incapacitation: dead-man switches, social recovery, or multi-signature arrangements. Regularly rotating keys limits exposure if keys are compromised.
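One way to keep a backup from becoming a single point of compromise is to split it. The sketch below shows a minimal 2-of-2 split: each share alone reveals nothing, and both are needed to reconstruct. It illustrates the idea of distributed secrets only; for real key material, audited m-of-n tooling (Shamir shares, multi-signature arrangements) is the appropriate choice.

```python
import secrets

# A 2-of-2 split of a backup secret: each share alone is indistinguishable
# from random bytes; both together reconstruct the original. Illustrative
# only; real key material calls for audited m-of-n tooling.

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    share_a = secrets.token_bytes(len(secret))                 # random pad
    share_b = bytes(s ^ a for s, a in zip(secret, share_a))    # secret XOR pad
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = secrets.token_bytes(32)        # e.g. a symmetric backup key
a, b = split_secret(key)             # store a and b in separate locations
assert recombine(a, b) == key        # both shares together recover the key
```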

Software Provenance Verification

Verify that software comes from legitimate sources. PGP signatures on downloads prove provenance; developers sign releases with keys whose authenticity can be verified through the web of trust. Zapstore offers an alternative model: developers sign releases with their Nostr keys, and users discover applications through social graph endorsements, verifying signatures against public keys of developers whose reputation they can evaluate. Hashes prove integrity through checksum verification. Reproducible builds allow you to build software yourself from source and compare results to official builds. Dependencies can be compromised, so audit significant dependencies.
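Checksum verification is easy to script. This sketch computes the SHA-256 of a downloaded file and compares it to the value published by the developer; the file name and expected hash are placeholders.

```python
import hashlib

# Compare the SHA-256 of a downloaded file to the hash published by the
# developer. The file name and expected value are placeholders.

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):   # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

expected = "<hash published in the signed release notes>"
actual = sha256_of("downloaded-release.tar.gz")
print("OK" if actual == expected else f"MISMATCH: {actual}")
```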

Update Hygiene and Patch Management

Security updates patch known vulnerabilities. Apply updates promptly because known vulnerabilities are actively exploited. Verify update sources because fake update prompts are an attack vector. In critical systems, test updates before deployment. Software that no longer receives updates should be replaced.

19.4 Compartmentalization and Identity Separation

Never Cross Identity Streams

The most common OPSEC failure is mixing identities: connecting your anonymous activity to your real identity through some link. Using the same username across sites allows correlation. Using personal email for anonymous accounts links them. Using the same browser for different identities allows correlation through fingerprinting. Accessing different identities from the same IP address links them. Writing style can be analyzed to link identities through linguistic patterns. Once a link exists, it cannot be removed. Compartmentalization must be maintained from the beginning.

Separate Identities for Separate Purposes

Each separate purpose should have a separate identity. Your real identity is for official matters, employment, family, and similar contexts. A pseudonymous professional identity is for work that you want attributed to a consistent persona but not your legal name. Anonymous identities are for activities where even a consistent pseudonym is undesirable. Mixing purposes within an identity creates links that cannot be broken.

Hardware Separation Between Identities

Ideal separation uses different hardware for different identities. A laptop used only for anonymous activity cannot leak information to your real identity through device fingerprinting. Different operating systems serve different compartmentalization needs. Tails is a live operating system that runs from USB, leaves no trace on the host computer, and routes all traffic through Tor; it is ideal for one-off sensitive activities where you want no persistent state and complete network anonymity. Qubes is a desktop operating system that compartmentalizes different activities in separate virtual machines running simultaneously; it is ideal for ongoing work across multiple security contexts, where you need to maintain separate identities persistently while switching between them throughout the day. GrapheneOS is a hardened mobile operating system for Pixel phones; it provides strong device security and privacy for mobile computing but is a phone OS, not a desktop solution. The choice depends on use case: Tails for anonymous sessions without persistence, Qubes for compartmentalized daily computing, GrapheneOS for secure mobile. Less ideal but more practical for those who cannot adopt specialized systems: different browsers, different profiles, different user accounts on a standard operating system.

Network Separation

Different identities should use different network paths. Use home internet for real identity and public WiFi or mobile data (purchased anonymously) for anonymous activity. If using VPNs, use different providers for different identities. Real identity can use regular internet while anonymous identity uses Tor. Network correlation is a powerful deanonymization technique. Serious compartmentalization requires network separation.

Temporal Separation

Activity patterns can link identities. If two identities are always active at the same hours, they may be the same person. Quick responses to events can correlate identities through response timing. If both identities go quiet during the same vacation, that too is revealing. Varying activity patterns and introducing deliberate delays can make temporal correlation harder.
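A deliberate delay can be as simple as the sketch below, which waits a randomized interval before publishing so response timing stops mirroring one person's schedule; the delay bounds and the publish callback are arbitrary assumptions for the example.

```python
import random
import time

# Wait a randomized interval before publishing so response timing stops
# mirroring one person's schedule. The delay bounds are arbitrary assumptions.

def delayed_publish(publish_fn, min_delay_s: int = 1800, max_delay_s: int = 21600):
    delay = random.randint(min_delay_s, max_delay_s)   # 30 minutes to 6 hours
    time.sleep(delay)
    publish_fn()

# delayed_publish(lambda: print("posted"))   # example call (commented: it sleeps)
```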

19.5 Surveillance Detection

Prevention is preferable to detection, but detection enables response. Recognizing when you are under surveillance, whether physical or digital, allows you to modify behavior before compromise becomes complete.

Physical Surveillance Indicators

Physical surveillance leaves traces if you know what to observe. The same person appearing in multiple unconnected locations is the strongest indicator; once is coincidence, twice is suspicious, three times is confirmation. Vehicles that appear repeatedly, especially if they contain occupants who do not exit, warrant attention. People who seem to have no purpose, loitering without apparent reason, reading newspapers for extended periods, or making phone calls that never end, may be conducting surveillance. Sudden behavioral changes in your environment, such as new "regular" faces at your usual locations, merit scrutiny.

Detection requires establishing a baseline of normal activity. Know who typically populates your regular environments: the coffee shop, the commute, the neighborhood. Changes from baseline are what you detect. Without knowing normal, you cannot recognize abnormal.

Counter-surveillance routes test for followers. Vary your routine unpredictably. Use routes that force followers to expose themselves: dead-end streets, sudden reversals, entering and quickly exiting buildings with multiple exits. The goal is not to evade but to detect. If detection confirms surveillance, you can then decide whether to continue, modify behavior, or seek assistance.

Digital Surveillance Indicators

Digital surveillance is harder to detect but leaves its own traces. Unexpected account activity, such as login notifications from unfamiliar locations or devices, indicates potential compromise. Password reset emails you did not request suggest someone is probing your accounts. Devices behaving unusually, running hot when idle, battery draining faster than normal, or network activity when you are not using the device, may indicate compromise.

Canary services can detect certain types of surveillance. A dedicated email account that you never use, checked only occasionally, will show login activity only if someone else has accessed it. Files with unique names placed in cloud storage can be monitored; access by anyone other than you indicates compromise. These "tripwires" do not prevent surveillance but reveal it.
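A local tripwire can be scripted in a few lines. The sketch below creates a canary file, records its size and modification time, and alerts on later runs if either has changed; paths are placeholders, and a cloud-hosted canary would instead be checked through the provider's access logs or API.

```python
import json
import os

# Create a canary file once, record its size and modification time, then
# alert on later runs if either changed while you were not the one touching
# it. Paths are placeholders.

CANARY = os.path.expanduser("~/.canary-2f8a1c.txt")
STATE = os.path.expanduser("~/.canary-state.json")

def snapshot() -> dict:
    st = os.stat(CANARY)
    return {"size": st.st_size, "mtime": st.st_mtime}

if not os.path.exists(CANARY):
    with open(CANARY, "w") as f:
        f.write("do not touch\n")

if not os.path.exists(STATE):
    with open(STATE, "w") as f:
        json.dump(snapshot(), f)             # establish the baseline
else:
    with open(STATE) as f:
        baseline = json.load(f)
    print("Canary changed: possible unauthorized access"
          if snapshot() != baseline else "Canary unchanged")
```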

Network monitoring can reveal unexpected connections. Tools that display active network connections show what your device is communicating with. Connections to unfamiliar servers, especially during idle periods, warrant investigation. DNS queries to unexpected domains may indicate malware or monitoring software.
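The connection check described above can be automated. This sketch uses the third-party psutil library to list established connections and the process behind each one; deciding which remote hosts count as unfamiliar is left to the reader, and some platforms require elevated privileges to see other processes' connections.

```python
import psutil  # third-party: pip install psutil

# List established connections and the process behind each one, so traffic to
# unfamiliar hosts during idle periods stands out. May require elevated
# privileges on some platforms.

for conn in psutil.net_connections(kind="inet"):
    if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
        try:
            proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except psutil.Error:
            proc = "unknown"
        print(f"{proc:<20} -> {conn.raddr.ip}:{conn.raddr.port}")
```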

Be aware of legal surveillance indicators as well. Unusual law enforcement interest in your associates, questions about you from unexpected sources, or legal process served on your service providers may indicate investigation. Some jurisdictions require notification after certain types of surveillance conclude; absence of such notification does not mean absence of surveillance.

Limitations of Detection

Detection is not foolproof. Sophisticated adversaries conduct surveillance specifically designed to evade detection. Nation-state capabilities include techniques that leave minimal traces. The absence of detected surveillance does not prove its absence; it may only prove the adversary's competence.

Detection also carries risks. Counter-surveillance behavior that is too obvious signals awareness, potentially accelerating adversary timelines. Paranoid behavior can damage relationships and judgment. False positives waste resources and attention. Detection should inform proportionate response, not induce paralysis.

The goal of surveillance detection is not certainty but improved situational awareness. Even imperfect detection shifts the odds in your favor.

19.6 Common Failure Modes (Case Studies)

Handle Reuse and Identity Crossover

Chapter 18 examined how Ulbricht's arrest resulted from operational security failures, not cryptographic weaknesses.^2^ The critical errors were handle reuse (the "altoid" username appeared on both anonymous promotional posts and personal accounts) and server misconfiguration. The technology worked; human operational failures created the vulnerabilities. This pattern recurs across cases: operators use personal email addresses for anonymous infrastructure, reuse usernames across contexts, or include identifiable metadata in uploaded files.

Trusted Collaborator Betrayal

LulzSec was a hacking collective that breached Sony and attacked other high-profile targets, including the CIA's public website, during a 50-day spree in 2011. Hector Monsegur ("Sabu"), one of its leaders, was arrested and immediately began cooperating with the FBI.^3^ He continued operating within LulzSec while feeding information to investigators.

Other LulzSec members trusted Sabu and shared information with him that led to their arrests. The technical security of their communications did not matter; they were communicating with an informant. No technical measure protects against a trusted collaborator who has been turned. Trust is irreducible; choose carefully whom you trust, compartmentalize what each person knows, and recognize that anyone might be compromised.

Printer Steganography: Reality Winner

In 2017, Reality Winner, an NSA contractor, printed a classified document and mailed it to journalists.^4^ She was identified and arrested within days. The document itself betrayed her.

Color laser printers embed nearly invisible yellow dots encoding the printer's serial number and the date and time of printing. The NSA knew which printer produced the document and when. Internal logs showed Winner was one of six people who had printed that document. Further investigation revealed she had contacted the journalists from her work computer. She was sentenced to more than five years in prison.

The lesson extends beyond printers. Physical documents carry metadata: printer tracking dots, paper batch information, handling traces. Digital documents carry metadata: author names, revision history, GPS coordinates. Metadata persists when you think you have removed it. Assume every document, physical or digital, contains information you did not intend to include.

Blockchain Analysis: Tracing Bitcoin

Bitcoin's pseudonymity is frequently mistaken for anonymity. In 2019, international law enforcement announced a major operation that resulted in 337 arrests across 38 countries.^5^ The investigation relied primarily on blockchain analysis, not on breaking encryption or compromising Tor.

The method was straightforward. Blockchain analysis firms traced Bitcoin flows from the target to cryptocurrency exchanges. Exchanges, required by law to collect identity information, provided records linking transactions to individuals. The blockchain's permanent, public record became the prosecution's evidence. Users who believed Bitcoin provided anonymity discovered that every transaction they had ever made was recorded, traceable, and attributable once any endpoint touched an identified service.

The lesson: Bitcoin provides pseudonymity, not anonymity. The base layer blockchain is a permanent public record. Privacy requires additional measures: avoiding identified exchanges, coinjoining, transacting through Lightning Network with appropriate channel management, or using ecash systems that break the transaction graph. Pseudonymity is useful but is not the same as anonymity.

Photo Metadata: EXIF Exposure

In 2012, software entrepreneur John McAfee was evading authorities in Central America. Journalists accompanied him and published photos documenting his flight.^6^ One photo contained EXIF metadata including GPS coordinates. Within hours, observers had identified his precise location at a resort in Guatemala. He was arrested days later.

The journalists knew about metadata risks; they had warned each other. The failure occurred during file handling at the publication's headquarters. Someone uploaded the original rather than a stripped version. One mistake in a chain of careful handling was sufficient.

Modern smartphones embed extensive metadata in photos by default: GPS coordinates precise to meters, device model, date and time, sometimes camera settings and thumbnails of previous edits. This metadata survives casual inspection; viewing a photo does not reveal the hidden data. Sharing an unstripped photo can reveal your location, your device, and when you were there. Strip metadata before sharing any image. Better: disable GPS tagging at the camera level for sensitive contexts.
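Stripping metadata can be scripted with the Pillow imaging library. The sketch below copies only the pixel data into a fresh image, dropping EXIF (including GPS coordinates), and saves the clean copy; file names are placeholders, the operation re-encodes the image, and dedicated tools such as exiftool or mat2 are more thorough for other formats and sidecar data.

```python
from PIL import Image  # third-party: pip install Pillow

# Copy only the pixel data into a fresh image, dropping EXIF (including GPS
# coordinates), and save the clean copy. File names are placeholders.

def strip_metadata(src: str, dst: str) -> None:
    with Image.open(src) as img:
        rgb = img.convert("RGB")                 # normalize mode for JPEG output
        clean = Image.new(rgb.mode, rgb.size)
        clean.putdata(list(rgb.getdata()))       # pixels only; metadata dropped
        clean.save(dst)

strip_metadata("photo.jpg", "photo-clean.jpg")
```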

19.7 The Limits of OPSEC

Perfect Operational Security Is Impossible

No one maintains perfect operational security forever. Constant vigilance is exhausting; eventually people slip. More security measures mean more opportunities for mistakes. Real life creates situations where security must be compromised. You cannot defend against attacks you do not know exist.

Anyone can make mistakes, and over an extended operation the probability of a fatal error approaches certainty.

Targeted Sophisticated Adversaries May Succeed

Against a sufficiently motivated and resourced adversary, OPSEC may not be enough. Intelligence agencies have more resources than individuals. Subpoenas, warrants, and international cooperation expand adversary capabilities. If adversaries can access your devices or person, technical measures fail. Adversaries who can compromise your hardware or software suppliers have access you cannot prevent.

The goal of OPSEC is not to be perfectly secure but to raise the cost of attacking you beyond what adversaries are willing to pay. Against adversaries with unlimited resources, this may not be achievable.

Risk Acceptance as Necessary Component

All activities involve risk. Perfect security is impossible; the question is what risk level is acceptable. Risk assessment asks what is the probability of failure and what are the consequences. Risk mitigation asks what measures reduce probability or consequences to acceptable levels. Residual risk asks what remains after mitigation and whether it is acceptable. Risk acceptance means explicitly acknowledging residual risk instead of pretending it does not exist.

Security without risk acceptance is theater. Acknowledging limits enables rational decision-making.

Knowing When to Stop

Diminishing returns apply to OPSEC. Early measures provide large benefits: using encrypted messaging instead of plaintext provides massive security improvement. Later measures provide smaller benefits: using three layers of VPNs instead of two provides marginal improvement. Excessive measures create new risks, as complexity increases the chance of misconfiguration.

At some point, additional measures are not worth their cost in effort, complexity, or usability. Knowing when to stop is part of good OPSEC.

Chapter Summary

Operational security is the discipline of preventing adversaries from gathering information that could compromise security. Technical tools provide cryptographic protection, but human behavior can undermine any technology.

Threat modeling identifies specific adversaries, their capabilities, and their interests. Defensive measures should match actual threats, neither over-engineering against unlikely threats nor under-engineering against real ones. The OODA loop framework provides strategic guidance: break the adversary's decision cycle at the observation stage, where prevention is cheapest and most effective.

Human factors are the weakest link. Social engineering exploits psychology. Coercion can compel disclosure. Convenience shortcuts and laziness bypass security measures. In almost every breach of good cryptography, humans failed before technology failed.

Technical fundamentals include device hardening, network security, key management, software verification, and update hygiene. These provide the foundation but are insufficient alone.

Compartmentalization prevents identity correlation. Different identities require different handles, hardware, networks, and activity patterns. Once identities are linked, the link cannot be broken.

Surveillance detection enables response when prevention fails. Physical indicators include repeated sightings of the same person or vehicle across unconnected locations. Digital indicators include unexpected account activity, password reset requests, and unusual device behavior. Detection is not foolproof; sophisticated adversaries design surveillance to evade detection. But even imperfect detection improves situational awareness.

Case studies demonstrate common failure modes. Silk Road failed through handle reuse and server misconfiguration. LulzSec members were caught because they trusted Sabu, an informant. Reality Winner was identified through printer steganography dots embedded in the document she leaked. Blockchain analysis has traced Bitcoin transactions to identified exchanges, leading to hundreds of arrests. John McAfee's location was exposed by GPS coordinates in photo metadata. Each failure was human, not technical.

Perfect OPSEC is impossible. Fatigue causes slips; complexity creates vulnerabilities; life intrudes. Against sufficiently motivated adversaries with sufficient resources, OPSEC may not be enough. Risk acceptance is a necessary component: explicitly acknowledging residual risk instead of pretending security can be perfect. Knowing when additional measures are not worth their cost is part of good operational security.


Footnotes

^1^ Bruce Schneier, Crypto-Gram Newsletter, October 15, 2000. The observation reflects the reality that technical security often fails through human factors rather than cryptographic weaknesses.

^2^ For analysis of Ulbricht's arrest and OPSEC failures, see Andy Greenberg, "How the Feds Took Down the Dread Pirate Roberts," Wired, November 18, 2013; and court documents from United States v. Ross William Ulbricht, 14-cr-68 (S.D.N.Y.).

^3^ On Sabu and LulzSec, see Parmy Olson, We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency (New York: Little, Brown, 2012).

^4^ On Reality Winner and printer steganography, see the FBI affidavit in United States v. Reality Leigh Winner, 1:17-MJ-590 (S.D. Ga. 2017). For analysis of the printer tracking dots, see the Electronic Frontier Foundation's documentation on printer steganography.

^5^ U.S. Department of Justice, "South Korean National and Hundreds of Others Charged Worldwide in the Takedown of the Largest Darknet Child Pornography Website," Press Release, October 16, 2019. Chainalysis provided blockchain analysis tools used in the investigation.

^6^ On the McAfee EXIF metadata incident, see Graham Cluley, "Fugitive John McAfee's location revealed by photo meta-data screw-up," December 3, 2012. The photo's GPS coordinates placed McAfee at a specific resort in Guatemala.


Previous chapter: nostr:naddr1qqgxydmpv56nqefnv4jnwepcxdjx2q3qklkk3vrzme455yh9rl2jshq7rc8dpegj3ndf82c3ks2sk40dxt7qxpqqqp65w2dape8

Next Chapter: nostr:naddr1qqgxvdtyx43nwvmxvcckgerx8qmnvq3qklkk3vrzme455yh9rl2jshq7rc8dpegj3ndf82c3ks2sk40dxt7qxpqqqp65wtqgfr5