The Trust Shift: Secure Enclaves for Private Nostr Relays
TEE relays shift trust from operators to chip manufacturers. For most threats, that trade is worth making, with eyes open.
The previous post in this series established that PIR fails for Nostr. The problem isn't implementation difficulty or performance cost, though both are severe. The problem is structural: Nostr queries combine multiple predicates, require range filters, demand real-time subscriptions, and scatter across dozens of relays per user. PIR was designed for single-index lookups from a cooperative server. Nostr is something else entirely.
So we turn to different technology. What if the relay itself ran inside a secure enclave, where even the machine owner couldn't see what was happening inside? This is the promise of Trusted Execution Environments: hardware-enforced privacy that doesn't depend on the goodwill of whoever runs the server.
Signal deploys this model for contact discovery. When you install Signal, it checks which of your phone contacts also use Signal. But Signal doesn't want to know your contacts. Their solution: an Intel SGX enclave running on Signal's servers. Your phone establishes an encrypted channel directly into the enclave, sends hashed phone numbers, and receives back only the intersection. Signal's operators see encrypted traffic flowing in and out. They cannot see what contacts were queried.
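Stripped to its core, the enclave's job is a set intersection over hashed identifiers. A minimal sketch in TypeScript, with illustrative names and none of the attestation, rate limiting, or oblivious memory handling the real service layers on top:

```typescript
import { createHash } from "node:crypto";

// Client side: hash each phone number before it leaves the device.
// (Illustrative only; Signal's real scheme is more elaborate than bare SHA-256.)
const hashContact = (phone: string): string =>
  createHash("sha256").update(phone).digest("hex");

// Enclave side: intersect the client's hashes with the registered-user set.
// The operator outside the enclave sees only encrypted traffic.
function intersect(clientHashes: string[], registered: Set<string>): string[] {
  return clientHashes.filter((h) => registered.has(h));
}

const registered = new Set(["+15551234567", "+15557654321"].map(hashContact));
const mine = ["+15551234567", "+15550000000"].map(hashContact);
console.log(intersect(mine, registered)); // only the shared contact's hash
```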
Could the same model work for Nostr relays?
The Mechanism
Intel SGX creates protected memory regions called enclaves. The CPU's memory encryption engine encrypts all data leaving the processor. Code running inside the enclave can access its data in cleartext; code running outside, including the operating system, the hypervisor, and anyone with root access, sees only encrypted bytes. AMD's SEV-SNP provides similar guarantees at the virtual-machine level, encrypting an entire VM with keys the host cannot access.
The critical feature is remote attestation. Before sending sensitive data to an enclave, a client can demand cryptographic proof that specific code is running inside genuine Intel hardware. The enclave generates a measurement of its code, the TEE hardware signs that measurement with a key traceable to the chip manufacturer, and the client verifies the signature chain back to Intel's root certificate. If the code hash matches what the client expects, and the signature chain validates, the client knows the enclave is running the right software on real hardware.
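To make the verification step concrete, here is roughly what a client-side check might look like. The quote format below is hypothetical and heavily simplified; real SGX DCAP quotes are binary structures with many more fields, and production code would lean on a vetted verification library rather than hand-rolling the chain walk:

```typescript
import { X509Certificate, verify } from "node:crypto";

// Hypothetical, simplified quote: a code measurement, a signature over it,
// and the certificate chain (leaf first) that produced the signature.
interface AttestationQuote {
  measurement: Buffer;     // hash of the enclave's code, MRENCLAVE-like
  signature: Buffer;       // leaf key's signature over the measurement
  certChainPem: string[];  // leaf first, excluding the pinned root
}

function verifyQuote(
  quote: AttestationQuote,
  expectedMeasurement: Buffer,
  pinnedRootPem: string,
): boolean {
  // 1. The measurement must match the audited open-source relay build.
  if (!quote.measurement.equals(expectedMeasurement)) return false;

  // 2. Walk the chain: each certificate must be signed by the one above it,
  //    terminating at the manufacturer root we pinned in advance.
  const chain = quote.certChainPem.map((pem) => new X509Certificate(pem));
  if (chain.length === 0) return false;
  const root = new X509Certificate(pinnedRootPem);
  for (let i = 0; i < chain.length; i++) {
    const issuer = i + 1 < chain.length ? chain[i + 1] : root;
    if (!chain[i].verify(issuer.publicKey)) return false;
  }

  // 3. The leaf key must have actually signed this measurement.
  return verify("sha256", quote.measurement, chain[0].publicKey, quote.signature);
}
```

Expiry, revocation, and TCB status checks are omitted here; handling them correctly is much of what Intel's DCAP tooling exists to do.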
A Nostr relay in an enclave would work like this: the relay operator deploys open-source relay code inside SGX or SEV. Clients connect, verify attestation, and establish TLS connections terminating inside the enclave. REQ filters arrive encrypted, get processed inside the enclave, and matching events return over the encrypted channel. The operator sees that communication happened. They cannot see what was queried.
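A client taking this seriously refuses to send a filter until attestation checks out, something like the sketch below, which reuses verifyQuote from above. The attestation endpoint is invented; no NIP standardizes any of this today:

```typescript
import WebSocket from "ws";

// Hypothetical: a relay could serve its quote over HTTPS. Nothing in the
// Nostr protocol defines such an endpoint; this is one way it might look.
async function fetchQuote(relayUrl: string): Promise<AttestationQuote> {
  const httpUrl = relayUrl.replace(/^wss:/, "https:");
  const body = await (await fetch(new URL("/attestation", httpUrl))).json();
  return {
    measurement: Buffer.from(body.measurement, "hex"),
    signature: Buffer.from(body.signature, "base64"),
    certChainPem: body.certChainPem,
  };
}

async function subscribePrivately(
  relayUrl: string,
  filter: object,
  expectedMeasurement: Buffer,
  pinnedRootPem: string,
): Promise<WebSocket> {
  const quote = await fetchQuote(relayUrl);
  if (!verifyQuote(quote, expectedMeasurement, pinnedRootPem)) {
    throw new Error(`${relayUrl} failed attestation; filter never sent`);
  }
  // Only after attestation does the sensitive filter leave the client,
  // over a TLS session that terminates inside the enclave.
  const ws = new WebSocket(relayUrl);
  ws.on("open", () => ws.send(JSON.stringify(["REQ", "sub1", filter])));
  return ws;
}
```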
This sounds like exactly what we need. It isn't quite that simple.
The Vulnerability Pattern
SGX has been broken repeatedly. In 2018, Foreshadow used speculative execution to extract secrets from enclaves, including the attestation keys that prove an enclave is genuine. In 2019, Plundervolt showed that manipulating CPU voltage could corrupt enclave computations, allowing attackers to extract cryptographic keys. In 2020, SGAxe extended speculative execution attacks to extract the keys Intel uses to sign attestation quotes, breaking the entire trust model. In 2023, CacheWarp defeated AMD SEV-SNP's integrity protections, achieving privilege escalation inside encrypted VMs.
In October 2025, researchers demonstrated TEE.Fail, a memory bus interposition attack costing under $1,000 in off-the-shelf electronics. They extracted cryptographic keys from both Intel TDX and AMD SEV-SNP, including attestation keys from fully updated machines in trusted status. Intel and AMD responded that physical attacks are "out of scope" for their threat model.
Each vulnerability prompted patches. Each patch prompted new research. The pattern suggests not implementation bugs waiting to be fixed but a fundamental tension between performance and security in shared hardware. Side channels, speculative-execution artifacts, and physical-access attacks recur because the hardware was designed for speed, with security bolted on afterward.
This doesn't mean TEEs are worthless. It means they provide defense in depth rather than absolute guarantees. An attacker who merely operates the machine, without physical access and without sophisticated side-channel capabilities, cannot extract enclave secrets even with root access. That's meaningful protection. It's just not the mathematical certainty that PIR would have offered, had PIR been feasible.
The Trust Shift
The deeper issue isn't vulnerability history. It's the trust model itself.
All TEE security traces back to the chip manufacturer, but the trust is anchored at manufacturing time, not runtime. Intel generates and fuses a Root Provisioning Key into each SGX processor at the factory. Intel retains a database of these keys. When a platform first initializes SGX, it proves possession of this fused key to Intel's provisioning service and receives attestation certificates in return. With Intel's Data Center Attestation Primitives (DCAP), third parties can then verify attestation quotes locally, without contacting Intel for each verification. Intel is not in the loop for individual queries.
This distinction matters. Intel cannot selectively target a specific user at runtime without having compromised the hardware at manufacturing or issued fraudulent certificates at provisioning. But Intel, or anyone who compromised Intel's key generation facility, could have compromised all chips of a given generation. And Intel, or anyone who compromised their certificate authority, could issue attestation certificates for fake enclaves.
The trust assumption is: Intel manufactured genuine hardware with correctly functioning isolation, Intel's root signing keys remain secure, and Intel has not been compelled to issue fraudulent certificates. These are manufacturing-time and infrastructure-level assumptions, not runtime cooperation. A malicious relay operator cannot call Intel to decrypt your queries. But a nation-state that compromised Intel's key generation facility years ago, or that holds fraudulent attestation certificates, could potentially forge attestation.
Intel platforms also contain the Management Engine, a separate processor embedded in the chipset that cannot be disabled, has network access independent of the main OS, and runs proprietary firmware. Security researchers and the Electronic Frontier Foundation have called it a potential backdoor. In 2017, Intel confirmed remotely exploitable vulnerabilities in the Management Engine affecting every Intel platform from 2008 to 2017. AMD's Platform Security Processor has a similar architecture and raises similar concerns.
This is not conspiracy theory. It's the documented architecture. Your choice is not between trusting someone and trusting no one. Your choice is between trusting the relay operator and trusting that Intel's manufacturing and attestation infrastructure remain uncompromised. For hiding your reading habits from your relay operator, that trade is sensible. Against nation-state adversaries who might have compromised chip manufacturing or certificate infrastructure, the security margin narrows.
The Threat Model Match
Who are Nostr users actually hiding from?
Most users face curiosity from relay operators, commercial data harvesting, and the possibility of mass surveillance programs that vacuum metadata from cooperative service providers. Against these threats, TEE-based relays provide genuine protection. The relay operator cannot see queries. Cloud providers cannot access enclave memory through their hypervisor. Mass surveillance would require either physical interposition on server memory buses, compromise of Intel's manufacturing or certificate infrastructure, or sophisticated side-channel attacks. None of these scale to routine monitoring of ordinary users.
Some users face targeted nation-state adversaries. Journalists, dissidents, activists in authoritarian environments. For these users, TEEs offer meaningful but not absolute protection. The key question is whether the adversary has compromised chip manufacturing or attestation infrastructure. A nation-state that compromised Intel's key generation facility could potentially forge attestation for any SGX chip produced at that facility. A nation-state with physical server access could attempt memory bus interposition. These are significant capabilities, but they represent targeted attacks, not passive surveillance.
The honest assessment: TEE-based relays would provide meaningful query privacy for typical users against typical threats. They shift trust from a relay operator who might log your queries today to a chip manufacturer whose hardware and certificate infrastructure you trust was not compromised at manufacturing time. For most users, this is a significant improvement. For users facing adversaries capable of compromising Intel's manufacturing or certificate infrastructure, the protection is real but not absolute.
The Implementation Gap
Even accepting TEEs as the right tool, significant work remains. Nostr relay software would need modification to run inside enclaves: attestation endpoints, enclave-compatible networking, memory management that fits within SGX's limited enclave page cache. Clients would need attestation verification, comparing code measurements against known-good hashes. The Nostr ecosystem would need to establish which code hashes are trustworthy and how to update that list when relay software updates.
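The allowlist piece, at least, is easy to picture even if the governance around it is not. A sketch with placeholder hashes, not real build measurements:

```typescript
// Placeholder measurements; real entries would be reproducible-build
// outputs of audited relay releases. Who maintains this list, and how it
// rolls over on each release, is the unsolved coordination problem.
const KNOWN_GOOD_MEASUREMENTS = new Map<string, string>([
  ["a".repeat(64), "relay-enclave v1.2.0 (hypothetical release)"],
  ["b".repeat(64), "relay-enclave v1.3.1 (hypothetical release)"],
]);

const isTrustedMeasurement = (hex: string): boolean =>
  KNOWN_GOOD_MEASUREMENTS.has(hex.toLowerCase());
```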
Side-channel protection adds another layer. If an attacker can observe which memory pages the enclave accesses, they might infer query patterns even without seeing query content. The mitigation is Oblivious RAM, algorithms that make memory access patterns independent of the data being accessed. Signal uses Path ORAM for their enclave services. A privacy-focused relay would need similar treatment, with associated performance overhead.
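The idea is easiest to see in the simplest oblivious technique, a full linear scan: read every record on every lookup, so the access pattern carries no information about the target. A toy version follows, with the caveat that JavaScript offers no genuine constant-time guarantees and real implementations live in carefully written C or Rust:

```typescript
// Every record is read on every lookup, so the memory access pattern is
// identical no matter which key is wanted. Returns 0 if the key is absent.
function obliviousLookup(
  records: { key: number; value: number }[],
  wantedKey: number,
): number {
  let result = 0;
  for (const rec of records) {
    // Branch-free select: arithmetic instead of `if`, so control flow
    // does not depend on whether this record is the one we want.
    const match = Number(rec.key === wantedKey); // 1 if match, else 0
    result = result * (1 - match) + rec.value * match;
  }
  return result;
}
```

The cost is a full pass over the dataset per lookup, which is why schemes like Path ORAM, with logarithmic rather than linear overhead per access, matter at scale.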
The federated model complicates everything. Nostr users query multiple relays to construct their feed. If three of your ten relays run in enclaves and seven don't, the seven conventional relays still see your queries. Privacy is only as strong as the weakest relay in your query set. Meaningful protection requires either a critical mass of TEE relays or explicit client support for routing sensitive queries only to attested relays.
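Client-side routing is the nearer-term answer, and the core of it is not complicated. A sketch, assuming the attestation flow above has already marked each relay:

```typescript
// `attested` would be set by the verification flow sketched earlier.
interface RelayInfo {
  url: string;
  attested: boolean;
}

// Sensitive filters fan out only to attested relays; everything else
// goes to the full relay set as usual.
function routeFilter(relays: RelayInfo[], sensitive: boolean): string[] {
  const eligible = sensitive ? relays.filter((r) => r.attested) : relays;
  if (sensitive && eligible.length === 0) {
    throw new Error("no attested relay available; refusing to leak this filter");
  }
  return eligible.map((r) => r.url); // caller opens REQ subscriptions to these
}
```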
None of this is insurmountable. All of it represents real engineering work that has not yet been done.
The Honest Path
PIR offered mathematical privacy guarantees that Nostr's query model could not satisfy. TEEs offer a different kind of assurance: hardware-enforced isolation that shifts trust from relay operators to chip manufacturers. The tradeoff is real but defensible. Most Nostr users are better protected by an enclave-running relay than by hoping their relay operator is honest.
The cypherpunk instinct distrusts hardware security because hardware is opaque. You cannot audit an Intel CPU the way you can audit cryptographic code. But cryptographic code runs on hardware, and if the hardware is compromised, the cryptography provides no protection. At some point, the chain of trust touches silicon. The question is where you draw the line.
For practical query privacy on Nostr, TEEs represent the best available option given PIR's inapplicability. They protect against the adversaries most users actually face. They do not protect against all adversaries, and we should be honest about that. The predator at the door is usually the relay operator logging your queries, not the NSA with physical access to server memory buses. Against the former, TEEs work. Against the latter, they offer limited help.
If the Nostr ecosystem pursues TEE-based relays, it should do so with eyes open. This is defense in depth, not cryptographic certainty. The trust hasn't been eliminated, only shifted to manufacturing time and certificate infrastructure. But shifting trust from an unknown relay operator who sees your queries today to Intel's manufacturing process years ago, accepting the vulnerability history and the hardware-rooted attestation chain, may be the best privacy bargain available for a decentralized protocol that can't afford PIR's costs.
Sometimes good enough is what we get.