Writing bitcoinquantum.space with llm-wiki.net
In April 2026 I wanted to assess whether the quantum threat to Bitcoin was real. The honest answer lived across fifteen papers, a dozen Delving Bitcoin threads, twenty Bitcoin Optech newsletters, a running testnet, some Liquid transactions, and whatever Avihu Levy had pushed to GitHub that morning. The work was real and scattered. No article summarized it honestly. Headlines were downstream of press releases. The primary sources were where the actual answer lived.
This is one of the things llm-wiki was built for. I used it. Three weeks later I published [bitcoinquantum.space](https://bitcoinquantum.space): three articles, ~15,000 words, 95+ sources cross-referenced, every claim verified. This is a writeup of how.
## The shape of the problem
Serious research has three failure modes:
1. **You can't find everything.** Sources scatter across formats and venues. You don't know what you're missing.
2. **You can't remember everything.** By paper #60 you've forgotten paper #4. You re-read. You contradict yourself.
3. **You can't update.** A new paper drops on publication day. Your conclusion is stale and your notes are already collapsed into prose you can't untangle.
Traditional knowledge management fixes (1) and partially fixes (2). It fails at (3) because the maintenance burden compounds. @karpathy's framing, *"who does the maintenance?"*, is load-bearing: humans don't do it, not reliably, and not for the unsexy cross-reference updates nobody sees.
llm-wiki.net fixes (3) by making the entire artifact mechanically regeneratable from immutable raw sources. The only thing you maintain is the source pile.
## The pipeline, applied
**Raw sources, not notes.** Every paper, blog post, mailing list thread, and testnet report got dropped into `raw/` verbatim with a frontmatter header. No interpretation, no paraphrasing. If I don't have the primary source, I don't have it. `raw/` grew to 95+ entries.
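To make this concrete, here is roughly what one `raw/` entry looks like. The field names are illustrative, not llm-wiki's literal schema, and the values are placeholders:

```markdown
---
title: Example source title          # illustrative; real schema may differ
source_url: https://example.org/post # placeholder
retrieved: 2026-04-12
type: blog-post
---

[The full verbatim text of the source goes below the frontmatter,
unedited and unparaphrased.]
```

The discipline is in what's absent: no summary field, no "key takeaways". Interpretation happens at compile time, never at capture time.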
**Compile, don't write.** `/wiki:compile` reads the raw pile and synthesizes cross-referenced wiki articles, one per concept, person, and proposal. "SHRINCS." "Taproot script-path post-quantum proof." "The BIP 86 problem." "Quantum Safe Bitcoin." Each article carries a confidence level, citations, and bidirectional cross-references. The wiki is Claude's work; the sources are mine.
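A compiled article, paraphrased (the layout is my sketch of the structure described above, not `/wiki:compile`'s literal output; filenames are illustrative):

```markdown
# The BIP 86 problem

**Confidence:** high

BIP 86 outputs commit to an unspendable script path, so they cannot
use a script-path escape hatch. [→ Taproot script-path post-quantum
proof] [→ Quantum Safe Bitcoin]

<!-- sources (filenames illustrative): -->
Sources: raw/ruffing-proof.md, raw/bip86-analysis.md
```

The confidence level and the source list are the load-bearing parts: they make every claim traceable back to an entry in `raw/`.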
**Query to find gaps.** Once compiled, I stop reading papers and start asking questions. *"What's the relationship between Ruffing's Taproot proof and BIP 86?"* The wiki answers with citations, and in the process surfaces the gap: 70-90% of BIP 86 outputs can't use the escape hatch. That's a thread I wouldn't have pulled linearly. Query mode is where llm-wiki stops being a filing cabinet and starts being a research partner.
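A query session, paraphrased. The command name and answer format are hypothetical (I'm only certain of `/wiki:compile` from the workflow above); the underlying facts are the ones the wiki surfaced:

```markdown
> /wiki:query "What's the relationship between Ruffing's Taproot
  proof and BIP 86?"

The proof secures script-path spends. BIP 86 outputs commit to an
unspendable script path, so an estimated 70-90% of existing BIP 86
outputs cannot use this escape hatch.
[sources: 2 articles, 5 raw entries]
```

The answer itself is unremarkable. What matters is that it arrives with citations, so the gap it exposes can be checked against the primary sources instead of trusted.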
**Output, last.** The articles on bitcoin