Damus

Recent Notes

LessWrong (RSS Feed) profile picture
One World Government by 2150

Can we determine when humanity will unite under a democratic one-world government by projecting voting patterns? Almost certainly not. Is that going to stop me from trying? Absolutely not. The approach is simple: find every record-breaking election in the historical record, plot them, and extrapolate with unreasonable confidence.

To determine precisely when the first election took place is to quibble about definitions and to place more faith in ancient sources than they deserve. So, instead of doing that, suffice it to say that by ~500 BCE both the Roman Republic and Athenian democracy were almost certainly holding elections.

The Athenians had an annual “who’s the most annoying person in town” contest. The “winner” had 10 days to pack their bags before being banned from the city[1] for a decade. Each voter scratched a name onto a pottery shard (an ostrakon) to submit their vote, which is where we get the word ostracism. About 8,500 ballots from one ostracism vote around 471 BC have been recovered (https://www.smithsonianmag.com/history/ancient-athenians-voted-kick-politicians-out-if-enough-people-didnt-them-180976138/), so we have an actual number, which is more than can be said for most ancient elections.

https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/858e208585ab5b3ee8b0c956748fe202cd92a34fbe3a2351fdd53dd99177a7f9/igcygufl71xr3krpksed

Ostraka cast against the Athenian general and politician Themistocles, who was expelled and ended his days as governor of Magnesia (a Greek city) under Persian rule, a guest of the empire he had once helped save Greece from.[2]

The largest election in the ancient world may have been in the late Roman Republic, around 70 BC. The franchise was broad—any male citizen could vote—but there was a catch: you had to show up in Rome, in person, on election day (https://en.wikipedia.org/wiki/Elections_in_the_Roman_Republic), and that day might get pushed back (https://x-legio.com/en/wiki/elections). So while millions were eligible across Italy and beyond, actual turnout was likely on the order of tens of thousands. Precise numbers are debated (https://en.wikipedia.org/wiki/Elections_in_the_Roman_Republic).

Then… history happened. The Roman Republic turned into the Roman Empire, the Empire fell, and large-scale elections mostly disappeared for the next millennium.

The next major vote took place in 1573, in the Polish–Lithuanian Commonwealth (https://en.wikipedia.org/wiki/1573_Polish%E2%80%93Lithuanian_royal_election). When the last Jagiellonian king died without an heir, the Commonwealth did something radical: it let every nobleman vote for the next king. About 40,000 szlachta rode to a field outside Warsaw and elected Henry of Valois, a French prince.

Henry did not stay long. He was elected in 1573, arrived in early 1574, and then—upon learning that his brother, the King of France, had died—quietly fled Poland in the middle of the night to claim the French throne. The Poles were undeterred and simply held another election and chose someone who actually wanted the job this time.

Elections began to scale in early modern Europe. Britain’s 1715 parliamentary election had an electorate on the order of a few hundred thousand voters (https://api.pageplace.de/preview/DT0400.9780826434074_A23738903/preview-9780826434074_A23738903.pdf).

In 1804, Napoleon held a plebiscite to approve his elevation to Emperor. He officially received over 99% of the vote (https://en.wikipedia.org/wiki/1804_French_constitutional_referendum), which tells you everything you need to know about how strictly we're defining “elections” here. Elections in France grew over the century (both in number and in actual democratic participation), culminating in the election of 1870 with about 9 million voters.

For most of the 20th century, the largest elections were those in Russia and the Soviet Union. The 1937 election took place during the Great Purge; only one party ran—the Communist Party—and it officially received nearly 99% of the vote (https://en.wikipedia.org/wiki/1937_Soviet_Union_legislative_election). Again, we’re being generous about what counts as an election here.

By 1984, the exercise had become pure choreography: 184 million participants, one pre-approved candidate per seat in each constituency, and a reported turnout of 99.99 percent (https://www.sciencedirect.com/science/article/abs/pii/0261379485900150)—all while the man nominally running the country, Konstantin Chernenko, was visibly dying of emphysema.

India, the current record holder, ran an election in 2019 with about 615 million votes cast. It required seven phases, 39 days, 4 million voting machines, and a polling booth set up inside a forest for a single voter (https://www.aljazeera.com/gallery/2024/5/8/an-election-booth-inside-a-forest-in-india-for-just-one-voter), because Indian law requires that no voter travel more than two kilometers to cast a ballot. India broke its own record in 2024 with 642 million voters.

The funny thing is that if you take all the largest known elections since the Dark Ages[3] and plot them on a log scale, a straight line fits pretty well: the record for the largest single vote has grown roughly exponentially over the past five centuries. That is, the number of voters participating in the largest-ever election appears to double every 30 years.

If you project that line forward against a world population projection (https://pubmed.ncbi.nlm.nih.gov/36568848/), the trend line crosses the world population curve around the year 2150, at roughly 9.6 billion people. In other words, if voting records keep growing at the historical rate, a single election would involve every living human being sometime in the mid-22nd century.
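The crossover can be sanity-checked in a few lines. This is only a back-of-envelope sketch using figures stated in the post (the 30-year doubling time, the 2024 record of 642 million voters, and the 9.6 billion population figure):

```python
import math

# Back-of-envelope check of the extrapolation above. The 30-year doubling
# time and the 9.6 billion crossover population come from the post; the
# 2024 record (642 million voters, India) anchors the trend line.
doubling_time_years = 30
record_2024 = 642e6
target_population = 9.6e9

doublings_needed = math.log2(target_population / record_2024)
crossing_year = 2024 + doublings_needed * doubling_time_years
print(round(crossing_year))  # → 2141, i.e. the mid-22nd century
```

About 3.9 doublings separate today's record from the whole projected population, which at one doubling per 30 years puts the crossing right around where the graph says it is.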

Behold. The graph:

https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/XNbgqHfezYJwrnakS/3dc96b5e456402f920098857c33c7a7e94450dc7c34d9abb6044ecc8da9a6541/wrbbyboa4thndixmrzbj

One world government by 2150. You heard it here first.

OK, maybe not a one-world government. But maybe something interesting? A global referendum? A planetary-scale plebiscite? A vote on whether AIs get voting rights so the trend can keep going?

Of course, this methodology is nonsense, but it’s nonsense with a graph and a line of best fit. There's something satisfying about drawing a line through 500 years of data points and watching it hit a target.

But, if there really is a global vote in 2150, I want credit for calling it.

- [1] Technically, the ostracized kept their citizenship and property (https://www.britannica.com/topic/ostracism), whereas if they were truly exiled they wouldn’t.
- [2] According to one account (https://en.wikipedia.org/wiki/Themistocles), he was ordered to lead a Persian army against Greece and killed himself rather than comply. The method, per Plutarch, was drinking bull's blood. If it seems like all ancient greats had awesome, poetic deaths, it's because that's what the historians wrote down. Is it true? This is history we're talking about. If you want the truth, study physics.
- [3] Athens and Rome weren’t used in calculating the line of best fit. The Dark Ages happened, democratic elections mostly didn't, and including them would skew the fit.

https://www.lesswrong.com/posts/4Y3v2hjzHXR5HuMPN/one-world-government-by-2150-1#comments

https://www.lesswrong.com/posts/4Y3v2hjzHXR5HuMPN/one-world-government-by-2150-1
What if superintelligence is just weak?

In response to https://www.hyperdimensional.co/p/2023 by Dean W. Ball.

Dean Ball is a pretty big voice in AI policy – over 19k subscribers on his newsletter, and a former Senior Policy Advisor for AI at the Trump White House – so why does he disagree that AI poses an existential danger to humanity? In short, he holds the common view that superintelligence (ASI) simply won’t be that powerful. I strongly disagree, and I think he makes a couple of invalid leaps to arrive there.

Better Than Us Is Enough

His main argument is flawed: he implies that AI must be omnipotent and omniscient to wipe us out, and then explains why that won’t be the case. He states: “one common assumption… among many people in ‘the AI safety community’ is that artificial superintelligence will be able to ‘do anything.’” He then argues that “intelligence is neither omniscience nor omnipotence,” and that even a misaligned AI with “no [..] safeguards to hinder it” would “still fail” because taking over the world “involves too many steps that require capital, interfacing with hard-to-predict complex systems.” But omnipotence and omniscience were never the requirement: the AI just needs to be smarter and better than us humans.

Think Forward

Importantly, it doesn’t actually take superintelligence to wipe out or disempower humanity. For me to imagine this, I simply need to think forward to the not-so-distant future. Imagine you get a tiger cub. Think forward to what the tiger will look like in a year and ask yourself: could it kill me in a year? Now do this with AI. Imagine the future with a billion robots, AI running the military, AI doing basically all jobs with perhaps some level of human oversight, AI running the media, biolabs, political and military decisions, critical infrastructure. That metaphorical tiger could kill us. Ball himself imagines a future where AI is “embedded into much of the critical infrastructure and large organizations in America, such that it is challenging to imagine what life would be like if Claude ‘turned off.’”



Ball also discusses scenarios in which superintelligence has almost outlandish abilities, making scientific breakthroughs without much experimentation. He focuses on Yudkowsky’s claim that “a sufficiently superintelligent AI system would be able to infer not just the theory of gravity, but of relativity” from a few frames of a falling apple, or that it could “bootstrap molecular nanoengineering.” Ball may be correct that these specific claims are wrong, but they are not load-bearing parts of any story for why AI might become dangerous. You don’t need to infer relativity from first principles to engineer a bioweapon. Notably, Yudkowsky himself has given other scenarios that do not require the AI to make scientific breakthroughs without experiments (see https://ifanyonebuildsit.com/, chapter 2).

If your response is “but there will be many AIs and there will be monitoring, so we’ll be safe,” then you’ve shifted to a different (and very flawed) argument.[1] The point is that AI will clearly be able to take over at some future point if we haven’t aligned it well by then. In reality, that point probably isn't far off, since takeover doesn't require automating every job and task: it only requires some combination of securing power, enabling action in the physical world, and getting rid of or sidelining humans. And once it reaches a critical capability level, an AI has to act fast, because competing AI projects represent future rival agents.

An Old Argument, Made Worse

The core of Ball’s case, that the world is simply too complex and chaotic for any intelligence to control, is not a new argument. Robin Hanson made essentially the same case in his https://intelligence.org/ai-foom-debate/ with Yudkowsky: innovation is too distributed across many actors, no single AI can race ahead of all competitors fast enough to dominate. But Hanson more correctly understood that this is an argument about the speed and distribution of AI takeoff, not an argument against existential risk. Ball takes Hanson’s position and corrupts it by treating it as a refutation of existential risk from AI entirely.

- [1] The “many AIs and monitors” defense is pretty weak: unaligned AIs can cooperate with each other; monitoring can be evaded; there’s simply too much to monitor; the AI doing the monitoring for us could itself be jailbroken or could cooperate with the systems it’s supposed to watch; and AIs can hide their reasoning through methods like steganography.

https://www.lesswrong.com/posts/cTcrbXRAGAy6wtFpR/what-if-superintelligence-is-just-weak#comments

https://www.lesswrong.com/posts/cTcrbXRAGAy6wtFpR/what-if-superintelligence-is-just-weak
Five years since lockdown

I received my one-shot vaccine on March 26th, 2021. I had been in lockdown for more than a year, my entire life on hold, my world closed.

Previously: https://www.lesswrong.com/posts/uM6mENiJi2pNPpdnC/takeaways-from-one-year-of-lockdown & https://www.lesswrong.com/posts/9hWeJZiuL8QjBwQKz/reflections-on-lockdown-two-years-out

A few weeks ago, I was talking to a community organizer in London, and he said, “This feels like the first year that things have really gotten back to normal since Covid.”

I was surprised — it’s been five years, at this point. But he said that communities are only just recovering, that people are only just starting to go out like they once did.

So I thought about the people I know, the people I’m close to. And I realized how right he was, and I was surprised at my own surprise.

Two people I know have developed, essentially, full agoraphobia. They may have been a little weird before the pandemic, but they could function — one of them was in university, and the other had a job outside the home. Now, it feels impossible to imagine either of them going back to that old life. They are cripplingly anxious about the most basic of interactions, sometimes even with the people they live with and love best. It’s rare for them to even set foot outside of their houses, never mind doing normal everyday tasks out in the world, like grocery shopping.

I only know what happened to these people because they are sufficiently close, and I only see them because I visit them in their own homes. To the rest of the world, what happened to them is invisible.

There are people I knew before the pandemic who simply disappeared from my life. How many of them just never came out of hiding?

We all had our lives knocked off course by the pandemic, to a greater or lesser extent. Even if you somehow genuinely had a good time during lockdown, no one can say they’ve had quite the life they expected they would before 2020.

Maybe you missed or had to postpone major milestones — graduating from university, getting married. Maybe you lost loved ones who should have had much longer lives. Maybe you had to raise a kid without any of the support system that you’d expected to be able to rely on.

Likely, the rhythm of your life has permanently changed in some ways. My dad has the same job he’s always had, but his office closed during the pandemic and never reopened, so he still regularly goes days at a time without seeing anyone in person. My immunocompromised cousin used to travel for business all year; now, she can’t even have unmasked visitors in her home.

I had spent the years between college and the pandemic living in a group house, working in the tech industry, and seriously dating two people. When the dust settled from 2020, I had lost all of those things. It took me years to come to terms with the fact that I would never have the life that I’d spent those years building. It was a long and painful mourning process.

Five years after leaving lockdown, I’ve built a life I like much better, but that doesn’t negate the grief of losing what I had, what I thought I would have. Wherever we may have ended up, we all lost something.

Coming out of the pandemic, I felt like no one wanted to talk about the trauma we had all just gone through. The people who’d had a bad time mostly didn’t want to talk about it or just disappeared entirely, so all I heard was “I had a pretty good pandemic”, even when I felt like the person was lying. A few times, I tried to share what had happened to me, and the person I was talking to laughed in my face.

For years, I couldn’t think about the pandemic without crying. I skipped over any story that included it in the plot summary[1], and couldn’t bear to hear the theme music of the shows I’d watched during that time.

But I also developed a fascination with 2020, even as I found it incredibly triggering. It was like an alternate reality that we had all lived in, and we were all now collectively pretending it hadn’t happened. People’s desire to pick up where they left off was understandable, but it felt like denial to me. Any time I heard someone acknowledge out loud that the pandemic had happened, I was shocked and thrilled, like they’d acknowledged an illicit secret.

In 2024, I started being able to read books about 2020[2]. Three months ago, on the final day of 2025, I finally felt ready to watch Bo Burnham’s Inside — the feature-length musical special he wrote, recorded, and released during the pandemic. The first time I heard one of the songs from it, I cried for hours. Now it’s my favorite movie, and I know the whole soundtrack by heart.

It’s been five years. The paint and stickers that marked distances six feet apart on sidewalks have been torn away, or faded with time (except where they haven’t). Some businesses keep up their old signs about masking and distancing, too expensive to replace, though none of them have enforced it for years, now. Rates of masking in airports have fallen and fallen, until those of us who sicken easily just have to stay home. People have healed, or at least those of us who haven’t have disappeared, and no one really thinks about them anymore.

The shape of your community has changed. The shape of your life has changed. Some of these things would have happened without the pandemic, but many of them wouldn’t. Time still would have passed, yes, but your life would have gone differently.

It’s been five years. The world has gone back to normal. If that even means anything.



[1] The pandemic shows up shockingly rarely in mainstream fiction, but people were writing fanfiction all through 2020 and 2021, and many used it as a way to process what was happening.

[2] Some books I’ve read on the pandemic:

- 2020: One City, Seven People, and the Year that Changed Everything, which tells the story of seven New Yorkers and how they changed and adapted during the pandemic, interspersed with chapters that examine broader social and political changes during 2020
- The Emergency, a memoir of a Black ER doctor working on the south side of Chicago, written in the depths of the pandemic, focusing on the healthcare inequities exposed by the crisis
- Every Minute is a Day, another ER doctor’s memoir written before there was an end in sight, this one in the Bronx, one of the hardest-hit places in the US
- Please Unsubscribe, Thanks, a sort-of-Digital-Minimalism-adjacent manifesto that resonated deeply for me, about how the world stopped during the pandemic and then picked up as if nothing had happened
- The Anthropocene, Reviewed, John Green’s memoir, written during 2020 and released before lockdown ended, a beautiful capturing of that time of strangeness, uncertainty, and fragile hope

https://www.lesswrong.com/posts/cgA4G26fFFMpytjdF/five-years-since-lockdown#comments

https://www.lesswrong.com/posts/cgA4G26fFFMpytjdF/five-years-since-lockdown
OpenAI: How we monitor internal coding agents for misalignment


Sharing some of the monitoring work I've been doing at OpenAI: How we monitor internal coding agents for misalignment.

OpenAI now monitors 99.9% of internal coding traffic for signs of misalignment using our most powerful models. Today, that monitor is GPT-5.4 Thinking. It gets access to the full conversation context: everything the agent saw and everything the agent did, including tool calls and chain of thought (CoT). Higher-severity cases are sent for human review within 30 minutes.
Some examples of misaligned behavior we’ve found:
- Encoding commands in base64 to circumvent monitors
- Calling other versions of themselves to bypass restrictions
- Attempting to upload files to the public internet
- Attempting to prompt-inject users
We haven’t yet detected any real-world examples of sabotage, scheming, or sandbagging.
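To make the first pattern concrete, here is a minimal sketch of what flagging base64-smuggled commands in agent tool calls might look like. The regex, the keyword list, and the idea of scanning raw tool-call text are all my illustrative assumptions, not OpenAI's actual pipeline:

```python
import base64
import re

# Hypothetical sketch: flag tool-call text containing base64 blobs that
# decode to shell-like commands (one of the evasion patterns listed above).
# SHELL_HINTS and the blob regex are illustrative assumptions.
SHELL_HINTS = ("curl ", "wget ", "rm -rf", "chmod ", "ssh ", "| sh", "| bash")
B64_RE = re.compile(r"[A-Za-z0-9+/]{16,}={0,2}")

def flag_base64_commands(tool_call_text: str) -> list[str]:
    """Return decoded payloads that look like hidden shell commands."""
    hits = []
    for blob in B64_RE.findall(tool_call_text):
        try:
            decoded = base64.b64decode(blob, validate=True).decode("utf-8")
        except Exception:
            continue  # not valid base64, or decodes to non-text bytes
        if any(hint in decoded for hint in SHELL_HINTS):
            hits.append(decoded)
    return hits
```

A real monitor would of course use a model over full context rather than regexes; the point of the sketch is just that this particular evasion leaves a mechanically detectable fingerprint.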

Our goal is to establish robust monitoring practices internally, strengthen them through iteration, and ultimately help make similar safeguards standard across the industry.

https://www.lesswrong.com/posts/syB2r2X4E7nmw58co/openai-how-we-monitor-internal-coding-agents-for#comments

https://www.lesswrong.com/posts/syB2r2X4E7nmw58co/openai-how-we-monitor-internal-coding-agents-for
Should You Sign Up for Cryonics? Interactive EV calculator

Below is an interactive expected-value calculator for signing up for cryonics, using Monte Carlo simulation.

The model assumes you sign up now.
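For readers who can't see the interactive widget, here is a minimal sketch of the general shape of such a Monte Carlo EV estimate. Every probability, payoff, and cost below is an illustrative placeholder of mine, not a parameter from the post's model:

```python
import random

# Hedged sketch of a cryonics expected-value Monte Carlo, NOT the post's
# actual interactive model. All numbers are illustrative placeholders.
def simulate_ev(n_trials: int = 100_000, seed: int = 0,
                p_preserved: float = 0.5,     # assumed: preservation goes well
                p_org_survives: float = 0.3,  # assumed: org lasts long enough
                p_revival_tech: float = 0.1,  # assumed: revival becomes possible
                value_of_revival: float = 1000.0,  # arbitrary utility units
                cost: float = 50.0) -> float:      # same arbitrary units
    """Average payoff of signing up, over n_trials simulated futures."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        revived = (rng.random() < p_preserved
                   and rng.random() < p_org_survives
                   and rng.random() < p_revival_tech)
        total += (value_of_revival if revived else 0.0) - cost
    return total / n_trials
```

With these placeholder numbers the analytic EV is 0.5 × 0.3 × 0.1 × 1000 − 50 = −35, and the simulation converges to roughly that; the whole exercise is in choosing the inputs, which is what the interactive calculator lets you do.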

See also: https://www.lesswrong.com/posts/yKXKcyoBzWtECzXrE/you-only-live-twice, https://www.lesswrong.com/s/weBHYgBXg9thEQNEe, https://www.cryonicscalculator.com/.

If you're in London and want to sign up for cryonics, we're hosting a https://partiful.com/e/S1zTdHG4Wjn4KYqihZky tomorrow.

https://www.lesswrong.com/posts/afzTcrsMwEkCrBRN8/should-you-sign-up-for-cryonics-interactive-ev-calculator-1#comments

https://www.lesswrong.com/posts/afzTcrsMwEkCrBRN8/should-you-sign-up-for-cryonics-interactive-ev-calculator-1
Subscriber count graphs of popular youtubers on ASI risk

2026-03-19

Disclaimer

- Quick note

- Target audience: potential youtubers on ASI risk

Main

- It seems likely to me that these are among the most important graphs in human history, and that whether our species lives or dies depends on them.

- The most important thing to track is not the absolute number of subscribers as a function of time, but the slope of the number of subscribers as a function of time. Youtube channels, like software apps, follow exponential growth curves, and most people are not used to thinking in terms of exponentials. Even a mediocre channel can grow if you make enough videos; a good channel will grow more quickly with fewer videos required.

- I am tracking only YouTube, not Twitter or Substack, because the highest view counts there are orders of magnitude higher. As of 2025-03, MrBeast has over 400 million subscribers, while top Substack accounts have fewer than 10M subscribers. I am aware that not all views are equal: views from powerful or technical people are worth more to the AI pause political movement.

- Statistics obtained from Viewstats (by MrBeast) on 2026-03-19. (Viewstats is also good for tracking outlier videos.)
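The slope point above can be made concrete: given two subscriber snapshots and assuming exponential growth, the implied doubling time is a one-liner. The subscriber counts here are made-up examples, not real channel data:

```python
import math

# Illustrative sketch of "track the slope, not the level": convert two
# subscriber snapshots into an implied doubling time, assuming exponential
# growth. Input numbers are made up for illustration.
def doubling_time_days(subs_then: float, subs_now: float, days: float) -> float:
    """Doubling time implied by exponential growth between two snapshots."""
    growth = math.log(subs_now / subs_then)
    return days * math.log(2) / growth

# e.g. a channel going from 100k to 150k subscribers in 90 days
print(round(doubling_time_days(100_000, 150_000, 90)))  # → 154
```

Comparing this number across the channels below (rather than raw subscriber counts) is what reveals which ones are actually on a fast exponential.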

Rational Animations

https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/3dcuTNbvzg9JQvetv/vqgw8e61mpii4vy5lqi6

AI in context

https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/3dcuTNbvzg9JQvetv/jh3e7wmgmblb6zvczzn0

The AI Risk Network - John Sherman

https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/3dcuTNbvzg9JQvetv/dslku7psbqpv8kvsrnso

AI Species - Drew Spartz

https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/3dcuTNbvzg9JQvetv/owajha7dlkankywt5jos

Doom Debates - Liron Shapira

https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/3dcuTNbvzg9JQvetv/vpfqigqh00xmno3niuy9

Lethal intelligence - Michael Zafiris

could not obtain graph - 162K subs

Rob Miles

https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/3dcuTNbvzg9JQvetv/vvk2ajbyvnj9akdk10kr

Siliconversations - Liam

https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/3dcuTNbvzg9JQvetv/o3fteasoi0qdx61wumwt

The Inside View - Michael Trazzi



https://www.lesswrong.com/posts/3dcuTNbvzg9JQvetv/subscriber-count-graphs-of-popular-youtubers-on-asi-risk#comments

https://www.lesswrong.com/posts/3dcuTNbvzg9JQvetv/subscriber-count-graphs-of-popular-youtubers-on-asi-risk
Operationalizing FDT

This post is an attempt to better operationalize FDT (functional decision theory). It answers the following questions:

- Given a logical causal graph, how do we define the logical do-operator?
- What is logical causality, and how might it be formalized?
- How does FDT interact with anthropic updating?
- Why do we need logical causality? Why FDT and not EDT?

Defining the logical do-operator

Consider https://www.lesswrong.com/w/parfits-hitchhiker


https://www.lesswrong.com/posts/RyDkpWGLQsCnABE78/operationalizing-fdt
How does LessWrong's Ranking Algorithm Work?

I've had a couple of instances where I posted something that didn't get much engagement, so I edited it slightly and reposted a few days later. When I do this, is it better to just make a new post with the same content, or to unpublish the old post and then republish it? The latter would start with more upvotes, but would also potentially be older, so I am not sure how the ranking algorithm would treat it.

https://www.lesswrong.com/posts/N3Gw3fd9KnF9bLAD4/how-does-lesswrong-s-ranking-algorithm-work#comments


https://www.lesswrong.com/posts/N3Gw3fd9KnF9bLAD4/how-does-lesswrong-s-ranking-algorithm-work
The Law of Positive-Sum Badness

I keep running into similar arguments online, where people attack “the other” and use the (correct) observation of badness to claim their side is therefore doing well. There’s a temptation to correct this by saying that, in a dispute between two sides, one side being bad doesn't causally make the other better, or by asserting that the badness of the two sides is not correlated.

This is tempting, but wrong: they are correlated, in the opposite direction, and that leads to my observation:

https://x.com/davidmanheim/status/2029492778737021341


https://www.lesswrong.com/posts/HZCEoFRKdzFAKcgFj/the-law-of-positive-sum-badness