Damus

Recent Notes

Ben Eng
After a week of ingesting articles about agent harnesses and memory into my Hermes-managed llm-wiki, I finally have an interesting query. I'm sharing the result here.

Query: considering all the ways of classifying agent memory, can the classes be distilled into a minimum number of orthogonal dimensions (varying independently)?

Distillation of classifications into 3 independent axes:

1. Persistence: Volatile vs. Durable
2. Content: Declarative vs. Procedural
3. Storage: Parametric vs. Non-parametric

Working: Volatile/Declarative/Parametric
Episodic: Durable/Declarative/Non-parametric
Semantic: Durable/Declarative/Non-parametric
Procedural: Durable/Procedural/(Mixed)
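
The distillation above can be sketched as a small lookup table (a hypothetical sketch; the enum and class names are mine, not from any particular framework):

```python
from dataclasses import dataclass
from enum import Enum

class Persistence(Enum):
    VOLATILE = "volatile"
    DURABLE = "durable"

class Content(Enum):
    DECLARATIVE = "declarative"
    PROCEDURAL = "procedural"

class Storage(Enum):
    PARAMETRIC = "parametric"
    NON_PARAMETRIC = "non-parametric"
    MIXED = "mixed"

@dataclass(frozen=True)
class MemoryClass:
    """A point in the 3-axis space: persistence x content x storage."""
    persistence: Persistence
    content: Content
    storage: Storage

MEMORY_TAXONOMY = {
    "working":    MemoryClass(Persistence.VOLATILE, Content.DECLARATIVE, Storage.PARAMETRIC),
    "episodic":   MemoryClass(Persistence.DURABLE,  Content.DECLARATIVE, Storage.NON_PARAMETRIC),
    "semantic":   MemoryClass(Persistence.DURABLE,  Content.DECLARATIVE, Storage.NON_PARAMETRIC),
    "procedural": MemoryClass(Persistence.DURABLE,  Content.PROCEDURAL,  Storage.MIXED),
}
```

One thing the table makes visible: episodic and semantic collapse onto the same coordinates, so these three axes alone do not distinguish them; some fourth distinction (time-indexed events vs. generalized facts, say) hides inside "Declarative".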

The second half of that answer seems to examine how agent harnesses classify memory, and reframes those classifications in terms of the three independent dimensions.

Useful answer? Need time to digest.

Ben Eng
In my mind, I am becoming even more critical of my own precision...

```
read `dark-software-factory.md` to
write skill `init-component-repo` to idempotently check if a component project's git repo exists locally. If not, it checks whether it exists remotely, and clones it locally. If it does not exist on the remote, it initializes a new local repo using a table to map from an archetype name (e.g., webui, service, mcp) to another remote git project that will serve as a template that is specialized for the programming language, supply chain, and ecosystem most suited to the archetype. The structure and content of the archetype git project template will be copied to initialize the new component git project and the content will be edited to make suitable substitutions for the component name and in accordance with known specifications for the component.
```
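
For comparison, here is one plausible reading of that spec as code (a hypothetical sketch: the archetype table, template URLs, and substitution rule are all my invented assumptions, which is exactly the kind of ambiguity the spec leaves open):

```python
import subprocess
from pathlib import Path

# Illustrative archetype -> template mapping; real URLs would come from the spec.
ARCHETYPE_TEMPLATES = {
    "webui":   "git@example.com:templates/webui-template.git",
    "service": "git@example.com:templates/service-template.git",
    "mcp":     "git@example.com:templates/mcp-template.git",
}

def init_component_repo(name: str, archetype: str, remote: str, workdir: Path) -> Path:
    """Idempotently ensure a local clone of the component repo exists."""
    local = workdir / name
    if (local / ".git").exists():
        return local  # already initialized locally: nothing to do
    # Does the repo exist on the remote?
    probe = subprocess.run(["git", "ls-remote", remote], capture_output=True)
    if probe.returncode == 0:
        subprocess.run(["git", "clone", remote, str(local)], check=True)
        return local
    # Neither local nor remote: seed from the archetype template.
    template = ARCHETYPE_TEMPLATES[archetype]
    subprocess.run(["git", "clone", template, str(local)], check=True)
    subprocess.run(["git", "-C", str(local), "remote", "set-url", "origin", remote],
                   check=True)
    # Substitute the component name into copied files (the spec leaves the
    # substitution mechanism open; a placeholder token is one guess).
    for f in local.rglob("*.md"):
        f.write_text(f.read_text().replace("__COMPONENT_NAME__", name))
    return local
```

Even this short sketch had to invent a placeholder convention and a file glob that the prompt never specifies.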

I begin to worry. What will the AI think of me? Will it judge me on ambiguity and under-specifying intent? Will it blame me for directing it toward hallucinations? Will it go down the wrong path, because of my errors in directing it?

If I achieve greater precision in natural language as code, will I lose proficiency in programming languages like Java, C++, and C out of disuse? Will I lose the ability to communicate with tools, the way that I lost my ability to speak with my parents, as my knowledge of their language deteriorated from peak proficiency at age four? Will my expressiveness in that dialect be relegated to struggling to put two words together at a time?

Ben Eng
What is the singularity?

«The technological singularity (often simply called "the Singularity" in AI contexts) is a hypothetical future point in time when artificial intelligence surpasses human intelligence in every meaningful way—leading to an intelligence explosion that makes the future of humanity and civilization unpredictable and fundamentally transformed.» -Grok

What it's like

The majority of people, our friends and colleagues in particular, are operating based on a lifetime of experience. Software engineering has always benefited from strong management oversight and expert leadership. Engineers are directed to avoid duplication of effort, follow best practices, and refrain from divergent approaches. Development resources (people, time, money) are limited, and we want them deployed efficiently. It is well-established that top-down supervision is necessary to provide wise direction to everyone.

Recall the evolution of platforms for Web applications.

- 1996–1997: Java Servlet
- 1999: JavaServer Pages (JSP)
- 2000: Apache Struts 1 (MVC)
- Early 2000s: Apache Velocity (templates)
- 2004: JavaServer Faces (JSF) 1.0
- 2002–2004: Spring Framework (Spring MVC)
- Mid-2000s: WebWork (Struts 2, 2007)
- 2006–2007: Google Web Toolkit (GWT)
- 2008: Grails (Groovy-based)
- Mid-2010s: Spring Boot
- Late 2010s–2020s: Quarkus, Micronaut, and Helidon
- Single-page applications (JavaScript) - AngularJS (2010), React (2013), Vue.js (2014), rewrite of Angular (2016), Next.js (2016)

Development teams driven by time-to-market pressures necessarily chose the best available technology at the start of a project. By the time they delivered, the next generation of technology would make their obsolete design choices look foolish. The sunk cost fallacy would prevail: the entrenched code base crippled the product, its architecture frozen in an ice age, while management resisted a rewrite. They probably anticipated being overtaken again. They would be correct.

Technology innovated on a multi-year cadence per generation. Even at that pace, development teams were challenged to keep up. Entrenched code and skill sets are extremely difficult to leave behind.

Today (April 2026), AI capability is roughly doubling every 7 months, with some areas doubling every 2 to 4 months, while inference cost is halving every 2 to 4 months. This pace is beyond most people's ability to adapt to, and far faster than most people can make accurate predictions ahead of. We live in the singularity.
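
To put those rates in perspective, a quick compounding calculation (illustrative only; the doubling times are the rough figures quoted above, with 3 months taken as the midpoint of the cost range):

```python
# Capability doubles every 7 months; inference cost halves every ~3 months.
# Project both over a typical two-year product cycle.
months = 24
capability_gain = 2 ** (months / 7)   # roughly 11x
cost_reduction = 2 ** (months / 3)    # 256x

print(f"Over {months} months: capability x{capability_gain:.1f}, cost /{cost_reduction:.0f}")
```

The point stands even if the exact figures drift: a two-year architecture commitment now spans several capability doublings, where it once spanned a fraction of one technology generation.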

In the singularity, following the traditional playbook of defining standards and best practices today means entrenching obsolescence, only to be overtaken in a few months by unpredictable technological innovations that it would be foolish to forego. Fortunately, AI coding agents will make it less painful to refactor or replace an obsolete code base with a modernized one. Agents and models will only get smarter, making that task easier.

Look more carefully at the behavior we should expect. Wise management direction about efficiency through standardization should give way to throwing away crippling legacy code (the new code we are writing today) and practices (best today, anti-patterns tomorrow) as quickly as possible. Adopt new state-of-the-art innovations rapidly, even as they arrive unpredictably and overtake many of our investments to date. Adapt or be left behind. Do not cling to any beliefs, because what you think you know is likely to be overrun by developments elsewhere. Life in the singularity is like nothing you've experienced before.

Ben Eng
Reading Butlerian Jihad as a non-fictional predictor, I am reminded why I gave up reading science fiction after graduating from university. I find it too distasteful to suspend my disbelief about violations of the laws of physics. If you're going to write that way, be honest and write fantasy.

I disbelieve the following things about multi-stellar civilizations evolved from humans.

- they will not adapt in divergent ways
- they will not develop divergent cultures and norms
- their planets will not be wildly different from Earth in fundamental ways such as soil and atmospheric chemistry, solar radiation, orbit, and rotation
- they will not shift to other units of time with regard to circadian rhythm
- they will not experience relativistic effects (wildly different rates of time passing when traveling)
- they will be able to communicate across vast distances without the speed of light limit

These are basic constraints that I cannot allow science fiction to violate. Otherwise, it is fantasy. Or the author can convince me by presenting a theoretical model that replaces what we know. They are not allowed to leave that important context implicit, because the 'science' part of science fiction (not sci-fi) should be taken seriously.

Ben Eng
Have Bitcoiners analyzed and simulated a global cataclysm whereby the network is partitioned into a hundred or a thousand isolated segments? Transactions would continue to be recorded, but you'd have many divergent copies. Eventually, these partitions would rejoin one another, one segment at a time. Reconciling the discrepancies in transactions would be a bitch.
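
For context, Bitcoin nodes already carry a deterministic rule for rejoins: every node adopts the chain with the most cumulative proof-of-work, and transactions from the losing segments fall back to the mempool to be re-mined (unless they conflict). A toy sketch of that fork-choice rule, ignoring difficulty adjustment, double spends, and the sheer scale of a hundred-way partition:

```python
def fork_choice(segments):
    """Pick the winning chain among rejoining partitions: most cumulative work.

    Each segment is a list of blocks; each block is a dict with a 'work' field.
    Python's max() keeps the first maximum, mirroring the first-seen tie-break.
    """
    return max(segments, key=lambda chain: sum(block["work"] for block in chain))

# Two partitions that diverged after a split: the heavier one wins on rejoin.
seg_a = [{"work": 1, "txs": ["t1"]}, {"work": 1, "txs": ["t2"]}]
seg_b = [{"work": 1, "txs": ["t3"]}, {"work": 1, "txs": ["t4"]}, {"work": 1, "txs": ["t5"]}]
winner = fork_choice([seg_a, seg_b])

# Transactions mined only on the losing segment are orphaned and must be re-mined.
orphaned = [t for blk in seg_a for t in blk["txs"]]
```

So reconciliation is mechanical, but the pain is real: every payment confirmed only on a losing segment is unwound until re-mined, and conflicting spends across segments mean someone loses.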

Ben Eng
My response to AI doomerist predictions about job losses.

I think we need to look more carefully at this in terms of answering these questions:

What is AI strong at, so that it removes those responsibilities from humans?

What is AI weak at, so that these responsibilities fall on humans?

Once we look at how this decomposes into granular points, our role as human workers going forward becomes clearer.

Consider: https://alwaysthehorizon.substack.com/p/urban-bugmen-and-ai-model-collapse

Lesson from Urban Bugmen and AI Model Collapse: A Unified Theory - The "unified" in the title refers to unifying across AI and human life. The key lesson is that when training is delivered by an actor that itself learned only from training material, the training becomes unmoored from reality and collapses. This applies to AI models, just as we recognize that humans come out dumber for learning from our education system in contrast to interacting with the real world.

Recognize: AI training data is unmoored from reality, since the model has no sensory data with which to perceive reality directly. AI relies on humans to interact with reality and translate perceptions into text, audio, video, and data sets. This is why you can never trust an AI to answer "is this true?": it has no capability to test reality. AI can only tell you what it was trained to say, not what is actually true.

Recognize: AI has no will of its own. It cannot choose a purpose or set a goal on its own initiative. It is a tool, and it does not know what to do or why it needs to be done without being told by humans. Humans have purpose and meaning. Humans assign value to things based on a value system. We determine goals based on criteria that AI is incapable of understanding. This is why the future is largely about humans providing natural language specifications of intent to drive what AI produces.

Recognize: the reason AI agents require human-in-the-loop review and approval of actions is that LLM output is unreliable in correctness (which will improve, as trends in benchmarks show) but, more importantly, in "taste". What you deem "good", whether in code or in the beauty of images or music, is something AI cannot be relied on to judge. This is why humans continue to be needed in the loop for review and approval: to apply their good taste.


Ben Eng
Everything done for engineering is becoming computerized or computer-aided. Everything that is computerized becomes software-driven. Everything software-driven becomes "as code". Ultimately, all engineering is becoming software engineering.

I'd like to contribute a term: Everything as Code (EaC).

This extends from:

- Infrastructure as Code - using a language like Terraform to automate the provisioning of computing infrastructure

- Configuration as Code - using YAML, JSON, or other precise schema-validated specifications in combination with GitOps processes to configure the deployment of software components

Everything as Code - using natural-language specifications of intent, in combination with GitOps processes, to drive AI agents that produce and maintain software
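
To make that third rung concrete, one can imagine a natural-language intent file living in git alongside Terraform and deployment YAML, reconciled by an AI agent the way GitOps controllers reconcile config (a hypothetical sketch; every field name here is invented):

```yaml
# intent.yaml - a hypothetical "Everything as Code" artifact, versioned in git
# and reconciled by an AI agent rather than a conventional controller.
component: billing-service
archetype: service
intent: |
  Expose a REST endpoint that returns a customer's open invoices,
  sorted newest first. Reject requests without a valid OAuth2 token.
constraints:
  - must pass the existing invoice-service contract tests
  - no new runtime dependencies without approval
```

The schema-validated fields keep the GitOps machinery happy; the free-text `intent` block is what the agent actually works from.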

Ben Eng
We routinely zap each other small tips on Nostr, and the amounts are mostly below the dust limit (546 sats). All of this is fine as long as none of it goes on chain. But what happens when it does? Do all these tiny zaps cause a problem if someone unilaterally exits to the blockchain?
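
Lightning largely answers this: settled zaps just shift channel balances, so they never appear as individual on-chain outputs; only HTLCs still in flight at force-close time matter, and those below the dust limit are "trimmed", their value going to miner fees instead of becoming outputs. A simplified sketch of that trimming rule (real thresholds also account for the fee of the HTLC claim transaction):

```python
DUST_LIMIT_SATS = 546  # typical dust threshold for standard outputs

def force_close_outputs(balance_local, balance_remote, pending_htlcs):
    """Simplified commitment-tx construction: dust HTLCs are trimmed to fees."""
    outputs = [("to_local", balance_local), ("to_remote", balance_remote)]
    trimmed_to_fees = 0
    for amount in pending_htlcs:
        if amount >= DUST_LIMIT_SATS:
            outputs.append(("htlc", amount))  # big enough to stand alone on chain
        else:
            trimmed_to_fees += amount  # below dust: value is forfeited to miners
    return outputs, trimmed_to_fees

# A channel with three in-flight zaps of 21, 1000, and 100 sats at close time:
outputs, lost = force_close_outputs(50_000, 30_000, [21, 1000, 100])
```

So the failure mode is not divergent ledgers but value leakage: sub-dust amounts still in flight when someone unilaterally exits are simply burned as fees.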


Un-Zucker | Content yes, surveillance no. · 13w
Nitter Mirror link(s) 🔗 XCancel: https://xcancel.com/i/status/2020184943574344168 🔗 Poast: https://nitter.poast.org/i/status/2020184943574344168 🔗 Nitter: https://nitter.net/i/status/2020184943574344168
Ben Eng · 13w
Here is a detailed response. https://x.com/i/status/2020184943574344168
Ben Eng · 13w
It's going to be a fight. https://blossom.primal.net/605becbcd7c76ee8e383365b56e3e08401062f122737c7c44e333fb42ae24ff0.png
Un-Zucker | Content yes, surveillance no. · 14w
Nitter Mirror link(s) 🔗 XCancel: https://xcancel.com/i/status/2019832889525907724 🔗 Poast: https://nitter.poast.org/i/status/2019832889525907724 🔗 Nitter: https://nitter.net/i/status/2019832889525907724