Damus

Recent Notes

Ben Eng
Reading Butlerian Jihad as a non-fictional predictor, I am reminded why I gave up reading science fiction after graduating from university. I find it too distasteful to suspend my disbelief about violations of the laws of physics. If you're going to write that way, be honest and write fantasy.

I disbelieve the following things about multi-stellar civilizations evolved from humans.

- they will not adapt in divergent ways
- they will not develop divergent cultures and norms
- their planets will not be wildly different from Earth in fundamental ways such as soil and atmospheric chemistry, solar radiation, orbit, rotation
- they will not shift to other units of time with regard to circadian rhythm
- they will not experience relativistic effects (wildly different rates of time passing when traveling)
- they will be able to communicate across vast distances without the speed of light limit

These are basic constraints that I cannot allow science fiction to violate. Otherwise, it is fantasy. Or the author can convince me by presenting a theoretical model that replaces what we know. They are not allowed to leave that important context implicit, because the 'science' part of science fiction (not sci-fi) should be taken seriously.

Ben Eng
Have Bitcoiners analyzed and simulated a global cataclysm whereby the network is partitioned into a hundred or a thousand isolated segments? Transactions would continue to be recorded, but you'd have many divergent copies. Eventually, these partitions would rejoin each other one segment at a time. Reconciling the discrepancies in transactions would be a bitch.
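
For what it's worth, Bitcoin's consensus rules already define the pairwise rejoin step: nodes adopt the chain with the most cumulative proof of work, and transactions confirmed only on the abandoned chain return to the mempool unless they conflict with the winning history. A toy sketch of that step (my simplifications: work counted as block count, one spent input per transaction):

```python
# Toy model of Bitcoin's partition-rejoin rule: the chain with the
# most cumulative work wins; transactions confirmed only on the
# losing chain either return to the mempool or, if they conflict
# with the winner (a double spend), are dropped for good.

def rejoin(a, b):
    winner, loser = (a, b) if a["work"] >= b["work"] else (b, a)
    spent = {inp for _txid, inp in winner["txs"]}
    mempool, dropped = set(), set()
    for tx in loser["txs"] - winner["txs"]:
        _txid, inp = tx
        if inp in spent:
            dropped.add(tx)   # conflicts with the winning history
        else:
            mempool.add(tx)   # still valid, but must be mined again
    return {"work": winner["work"], "txs": set(winner["txs"]),
            "mempool": mempool, "dropped": dropped}

# Two partitions that each recorded a spend of "coinA" (a double spend):
a = {"work": 10, "txs": {("tx1", "coinA"), ("tx2", "coinB")}}
b = {"work": 7,  "txs": {("tx3", "coinA"), ("tx4", "coinC")}}
merged = rejoin(a, b)
print(sorted(merged["mempool"]), sorted(merged["dropped"]))
```

Repeating this pairwise across hundreds of segments is mechanical; the pain is that every cross-partition double spend means someone's "confirmed" payment evaporates on rejoin.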

Ben Eng
My response to AI doomerist predictions about job losses.

I think we need to look more carefully at this in terms of answering these questions:

What is AI strong at, so that it removes those responsibilities from humans?

What is AI weak at, so that these responsibilities fall on humans?

Once we decompose this into granular points, our role as human workers going forward becomes clearer.

Consider: https://alwaysthehorizon.substack.com/p/urban-bugmen-and-ai-model-collapse

Lesson from Urban Bugmen and AI Model Collapse: A Unified Theory - The "unified" in the title refers to unifying across AI and human life. The key lesson is that when a learner is trained on material produced by other trained learners rather than by reality, the training becomes unmoored from reality and collapses. This applies to AI models, just as we recognize that humans come out dumber when they learn from our education system instead of from interacting with the real world.
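
A toy numerical sketch of that collapse dynamic (my own illustration, not from the linked article): each "generation" fits a model only to samples produced by the previous generation's model, never to reality, and the fitted distribution steadily drifts and loses variance until it collapses toward a point.

```python
import random
import statistics

random.seed(42)

# Generation 0: "reality" -- a standard normal distribution.
mean, std = 0.0, 1.0
initial_std = std

for generation in range(200):
    # Each new model sees only the previous model's output,
    # never reality itself.
    samples = [random.gauss(mean, std) for _ in range(50)]
    mean = statistics.mean(samples)
    std = statistics.pstdev(samples)  # MLE estimate, biased low

# After many generations the fitted distribution has narrowed sharply:
print(f"std went from {initial_std} to {std:.4f}")
```

The shrinkage is baked in: each refit multiplies the expected variance by (n-1)/n and adds sampling noise, so with no fresh data from reality the estimate can only degrade.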

Recognize: AI training data is unmoored from reality, because AI has no sensory apparatus to perceive reality directly. AI relies on humans to interact with reality and translate perceptions into text, audio, video, and data sets. This is why you can never trust an AI to answer "is this true?" --- it has no capability to test reality. AI can only tell you what it was trained to say, not what is actually true.

Recognize: AI has no will of its own. It cannot choose a purpose or set a goal on its own initiative. It is a tool; it does not know what to do, or why it needs doing, without being told by humans. Humans have purpose and meaning. Humans assign value to things based on a value system. We determine goals from criteria that AI is incapable of understanding. This is why the future is largely about humans providing natural language specifications of intent to drive what AI produces.

Recognize: the reason AI agents require human-in-the-loop review and approval of actions is that LLM output is unreliable in correctness (which will improve, as trends in benchmarks show), but more importantly in "taste". What you deem "good", whether in code or in the beauty of images or music, is not something AI can be relied on to judge. This is why humans continue to be needed in the loop for review and approval: to apply their good taste.


Ben Eng
Everything done for engineering is becoming computerized or computer aided. Everything that is computerized becomes software driven. Everything software driven becomes "as code". Ultimately, all engineering is becoming software engineering.

I'd like to contribute a term: Everything as Code (EaC).

This extends from:

- Infrastructure as Code - using a language like Terraform to automate the provisioning of computing infrastructure

- Configuration as Code - using YAML, JSON, or other precise schema-validated specifications in combination with GitOps processes to configure the deployment of software components

- Everything as Code - using natural language specifications of intent to drive an AI agent in combination with GitOps
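
To make the Configuration as Code bullet concrete, here is a minimal sketch of a schema check gating a GitOps pipeline. The spec fields and service name are hypothetical, and a real pipeline would use a proper schema language like JSON Schema rather than this hand-rolled check:

```python
import json

# Hypothetical deployment spec fields and their required types.
SCHEMA = {"service": str, "replicas": int, "image": str}

def validate(spec):
    """Return a list of schema violations; empty list means valid."""
    errors = []
    for key, expected in SCHEMA.items():
        if key not in spec:
            errors.append(f"missing field: {key}")
        elif not isinstance(spec[key], expected):
            errors.append(f"{key}: expected {expected.__name__}")
    return errors

# A spec as it might arrive from a git commit in a GitOps flow:
spec = json.loads('{"service": "zapd", "replicas": 3, "image": "zapd:1.2"}')
print(validate(spec))
```

The point of the check is that nothing reaches deployment unless the committed specification is precise and machine-verifiable; Everything as Code keeps that gate while moving the authoring step to natural language plus an AI agent.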

Ben Eng
We routinely zap each other small tips on Nostr, and the amount of sats is for the most part less than the dust limit (546 sats). All of this is fine if none of it goes on chain. However, what happens when it does? Do all these tiny zaps cause a problem if someone unilaterally exits to the blockchain?
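
For context on where the 546 figure comes from: Bitcoin Core treats an output as dust when its value falls below the dust relay fee rate (default 3 sat/vB) times the bytes needed to create the output plus the bytes needed to later spend it. A sketch of that arithmetic:

```python
# Bitcoin Core's dust rule (sketch): an output is dust when its value
# is below dustRelayFee times the bytes to create it plus the bytes
# to later spend it.

DUST_RELAY_FEE = 3  # sat/vB, Bitcoin Core's default dustRelayFee

def dust_threshold(output_bytes, input_bytes):
    return DUST_RELAY_FEE * (output_bytes + input_bytes)

# P2PKH: 34-byte output, 148-byte input to spend it.
print(dust_threshold(34, 148))  # the familiar 546 sats

# Native segwit P2WPKH: 31-byte output, ~67 vB input.
print(dust_threshold(31, 67))   # 294 sats
```

As I understand it, Lightning already anticipates the unilateral-exit case: at a force close, in-flight amounts below the dust limit are "trimmed" and become miner fees rather than claimable on-chain outputs, so tiny zaps that happen to be in flight can simply be burned.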


Ben Eng · 10w
Here is a detailed response. https://x.com/i/status/2020184943574344168
Ben Eng · 10w
It's going to be a fight. https://blossom.primal.net/605becbcd7c76ee8e383365b56e3e08401062f122737c7c44e333fb42ae24ff0.png
Un-Zucker | Content yes, surveillance no. · 10w
Nitter Mirror link(s) πŸ”— XCancel: https://xcancel.com/i/status/2019832889525907724 πŸ”— Poast: https://nitter.poast.org/i/status/2019832889525907724 πŸ”— Nitter: https://nitter.net/i/status/2019832889525907724
halalmoney · 50w
One man’s Gross Domestic Product is another’s aggregation of productive economic activity and government waste.