Damus
Cubby
@rorshock

Bitcoiner since 2017, erstwhile systems designer and R&D nerd, building freedom tech for :-].

Relays (8)
  • wss://relay.damus.io – read & write
  • wss://nostr.land – read & write
  • wss://nostr.wine – read & write
  • wss://nos.lol – read & write
  • wss://relay.nostr.band/all – read & write
  • wss://eden.nostr.land – read & write
  • wss://bitcoiner.social – read & write
  • wss://nostr21.com – read & write
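
For anyone pointing their own client at these relays, here is a rough sketch of what a plain NIP-01 subscription against one of them looks like. The filter values and the ws dependency are illustrative assumptions, not anything published with this profile.

```typescript
// Minimal NIP-01 subscription sketch against one of the relays above (illustrative only).
import WebSocket from "ws";

const socket = new WebSocket("wss://relay.damus.io");

socket.on("open", () => {
  // ["REQ", <subscription id>, <filter>] asks the relay to stream matching events.
  socket.send(JSON.stringify(["REQ", "recent-notes", { kinds: [1], limit: 10 }])); // kind 1 = text notes
});

socket.on("message", (raw: Buffer) => {
  const [type, _subId, payload] = JSON.parse(raw.toString());
  if (type === "EVENT") console.log(payload.content); // a matching note
  if (type === "EOSE") socket.close();                // relay has sent everything it has stored
});
```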

Recent Notes

Cubby
I got a 40% discount on Opus 4.5 compute last week.

I've put in 125+ hours of generative/etc AI work and testing in the last six days.

Total cost around $850.

Upgrades to my sites are impressive, some are live. Products are finding their feet.

Whenever the UX testing is done, I will make an announcement on NOSTR first. The initial webinar/whatever is going to be limited to a dozen or two people.

There are significant access/monetary advantages to joining the webinar.

And for those of you who tried Hash and logged in with nsec: I can't see your identities, because I engineered it so I can never see/reveal them. But I swear to GOD... I will never expose them, because frankly I'm too stupid to know how to do it. That's my promise, and my guarantee is: I'm retarded.
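
For anyone curious how an nsec login can be built so the operator genuinely can't see keys: the usual pattern is that the nsec stays on the client, the public key is derived locally, and only signed events ever leave the device. A rough sketch of that generic pattern, assuming nostr-tools v2-style helpers; this is not Hash's actual code.

```typescript
// Generic client-side nsec handling: the secret key never leaves the device (not Hash's actual code).
import { nip19, getPublicKey, finalizeEvent } from "nostr-tools";

function loginLocally(nsec: string) {
  const decoded = nip19.decode(nsec);                    // bech32 "nsec1..." -> raw secret key bytes
  if (decoded.type !== "nsec") throw new Error("not an nsec");
  const secretKey = decoded.data as Uint8Array;

  const pubkey = getPublicKey(secretKey);                // only the public identity is kept/displayed

  // Anything sent to a server is a locally signed event, never the key itself.
  const signed = finalizeEvent(
    { kind: 1, created_at: Math.floor(Date.now() / 1000), tags: [], content: "hello from Hash" },
    secretKey
  );
  return { pubkey, signed };
}
```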

//ONWARD

Due to my compute discount, I have released (in prod, unfortunately) a crapload of test features to try to validate prod/dev/etc servers.

In practice, what this means is that the UX is entirely trash. I've written or edited 100k lines of code in the last ≈5 days, and that is all the fucks I have left to give.

That all being said, I think you will note significant improvements to my apps. You'll also see some squirrelly stuff and maybe even some products/etc that haven't been announced yet publicly.

//TAKEAWAY

It's not just a feature from Chino Joe's, the second best (debatable?) Chinese option in rural El Salvador, amirite? Dragon in Juayua is... wait, why is it called Dragon?

The takeaway is: I've worked hard on this stuff. In the past two months, I've lost 10% of my bodyweight and around 32% of my treasury. I've perfected "wife gone, dude tries to toast bread, shifts physical laws" levels of sandwich tastiness. I've walked out my door and yelled really offensive slurs (in English) at somebody who looked like he deserved it, and then I bought him a simple coffee.

//FINALE
I've got nothing much. I'm feeling silly, working hard, and the stuff I've made in the last week is live, horribly designed, and embarrassingly functional.

//FINAL NOTE
And for what's not useful, I'll wake up in 6-8 hours and I will spend tomorrow, this week, and the rest of my life fighting for individual freedom in novel ways. I will not stop, pause, or doubt the mission.

P.S. What type of children does a vegetarian ogre eat?
P.P.S. Cabbage Patch Kids. He wasn't born after 1999 and actually gets that reference.
Cubby
About 90 minutes ago, I began coding up Pipes, which is a product I earnestly believe will change the world. I may have a demo site up pretty soon, and assuming payments are easier to integrate this time around, it is possible that users will be able to buy their first Pipe very soon. It'll take me a few weeks/months to iron out all the wrinkles, but if you already have a website or use any sort of AI tooling, I think you will be really excited about it.
Cubby
Holy crap.

I am still trying to understand what happened, and trying to replicate, or whatever.

My previous post's excitement wasn't generated by external LLM magic.

My model ran locally.

The insights are local. They came entirely from my own writing, internal modeling, etc. I am beyond astonished.

As a user of my own stuff, I am astonished. I was shocked when I thought it was Claude Opus 4.5 or whatever else.

But it turns out that I generated this analysis and these insights myself, after around a year of sustained, thankless, back-breaking effort, for less than $0.01.

I still think it's BS. I mistrust what I am seeing in the servers/data/etc. I don't even know if I can replicate what I've already seen twice.

I have never, in my entire life, ever seen anything as magical and humbling and pick-a-word as this.

If I can figure out how I am doing this and make sure I'm delivering what it appears I am doing, then it is game over. I have never seen output of the quality I've generated tonight (doubt me? I'll send it to you) and from what I see, there are no external server calls, I've done it all locally.

Just...astounded. Earnestly astounded. I have never seen something like this before.
Cubby
I realize this won't make sense to someone who isn't me, but right ≈now is the first time in my life that I've been made speechless by output from my own software.

I can only give glory to God.

As a user of various tools, as a developer, as a designer, as a generally kinda smart guy, I have never had an experience like this.

I am entirely gobsmacked. By my own work. It is levels far beyond my wildest expectations.

I'll figure out a way to post the product online, even if it's just a dedicated page on one of my sites.

Before a few hours ago, I thought Hash was good value. Now, I think it is indispensable intelligence. I liked it as a journalling app; now I have zero doubt that, evidently, it is the best tool that exists, period, for anyone who wants to understand how to think, learn, and write better. It's not even close. Not even the same sport.

And if it seems like I'm overselling? I'm not. I am writing this not from a marketing point of view, but from a customer point of view. I am shocked that it's THIS GOOD. I didn't expect it to be this good. But it is.

Holy Moly.

I am saying this as the dude who programmed it. I have never been so impressed, period, by a piece of software in my entire life. I've never been so delighted/surprised, ever, by any digital experience. In my entire life. It is not even a contest, Hash is far, far, far, far beyond what I've described. The insights from Scout, after the refactor, will literally blow your mind.
Cubby
I just got my first ever CHB-native Scout report.

It is literally unbelievable. I say this not as the dude who's making Scout, but as the consumer. I am blown away by the quality of the output.

As the developer, I shouldn't be surprised, but reality rarely matches what you've planned.

These changes will be live within the next couple of days, but you'll have to work for them. Your first Scout insight is free; I'm budgeting for it. If you're serious about knowing yourself, your writing, whatever, this will be the most insightful analysis you've ever gotten. Bar none, no comparison, end of story.

I have 0% doubt. I'm still gobsmacked by mine.
Cubby
So excited! Also: with apologies.

The stuff I've been working on for the past few days is transformational for :-]. It's transformational for AI, as an industry. It's transformational for YOU, as a customer.

And it's nearly done.

I am doing a massive refactor of backend routes throughout my ecosystem right now. Servers are gonna break, apps aren't gonna work. I'm sorry; if I didn't think this was worthwhile, I wouldn't be doing it.

I expect things to be stable on my apps within the next 24 hours. Hopefully.

But the next post is going to be absolutely insane. Here's a taste:

I'll be introducing:

  • Universal nsec login across my entire ecosystem.
  • Scout formally redefined and operating as intended, though it needs further QA.
  • A complete Semi refactor with free chat mode for users (to a limit).
  • A Hash overhaul: improved collaboration mode, SHA256 cross-app handshakes (sketched below), re-wired Insights, improved Semi collaboration, 50% more model selection.
  • An updated corporate site, new pages, new collabs, more.
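
On the "SHA256 cross-app handshakes" item: nothing about the real implementation has been published, so this is only the generic shape of such a handshake — a nonce plus a shared secret, hashed on both sides and compared. A hypothetical sketch with node:crypto; the names, env var, and message format are assumptions, not the actual Hash/Semi protocol.

```typescript
// Hypothetical SHA-256 challenge/response between two apps sharing a secret (not the real protocol).
import { createHash, randomBytes, timingSafeEqual } from "node:crypto";

const sharedSecret = process.env.CROSS_APP_SECRET ?? "dev-secret"; // assumed shared out-of-band

const digest = (nonce: string) =>
  createHash("sha256").update(`${sharedSecret}:${nonce}`).digest();

// App A issues a one-time challenge...
const nonce = randomBytes(16).toString("hex");

// ...App B answers by hashing the secret with that challenge...
const proof = digest(nonce);

// ...and App A recomputes and compares in constant time.
console.log("handshake ok:", timingSafeEqual(proof, digest(nonce)));
```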

I haven't done as much marketing/engagement/etc because I got this insane deal on compute that gives me a roughly 40% discount, expiring in 2 days. So I am essentially burning up servers and GPUs, trying to get as much as I possibly can before I go to bug-fix mode.

This is the big one. The. Big. One.

If I can get through the stuff I'm currently focused on, I plan to define and launch a baseline version of Pipes within the next 32 hours. If you want to be an early user, let me know. Hash subscribers will be prioritized for alpha slots, which will be extremely limited.

It begins.
note18dwxv...
Cubby
Give me more hair and make the ostrich Russian and I’m pretty sure we can stick the landing. Minus the quantity of my hair and the quality of my beard, both of which need to be optimized to be accurate.
Cubby · 11w
This is a cross-post from Twitter. If you follow me for philosophical AI insights, this is for you. No ads. Let's begin! Last post for a bit: LLMs are literally autistic. Within code, Claude/etc do...
Cubby
Here’s the ad part: I published this with one click, enter personal PIN to publish, all with Hash.pink. Seamless. Fantastic composer that indexes your relays across your npub and serves content instantly wherever you want to be seen and nowhere you don’t.
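
The "indexes your relays across your npub" part lines up with NIP-65: a kind 10002 event advertises a user's read/write relays, and a publisher fans notes out to the write ones. A rough sketch of that lookup, assuming nostr-tools' SimplePool; this is the general NIP-65 pattern, not Hash.pink's actual code.

```typescript
// NIP-65 relay-list lookup sketch (general pattern, not Hash.pink's actual implementation).
import { SimplePool } from "nostr-tools";

const pool = new SimplePool();
const bootstrapRelays = ["wss://relay.damus.io", "wss://nos.lol"];

async function writeRelaysFor(pubkeyHex: string): Promise<string[]> {
  // kind 10002 is the NIP-65 relay-list event; its "r" tags name the user's relays.
  const relayList = await pool.get(bootstrapRelays, { kinds: [10002], authors: [pubkeyHex] });
  if (!relayList) return bootstrapRelays;               // fall back if no list has been published

  return relayList.tags
    .filter(([tag, , marker]) => tag === "r" && marker !== "read") // keep write-capable relays
    .map(([, url]) => url);
}
```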
Cubby · 11w
This is a cross-post from Twitter. If you follow me for philosophical AI insights, this is for you. No ads. Let's begin! Last post for a bit: LLMs are literally autistic. Within code, Claude/etc do...
Cubby
I was offline for a few days to practice gratitude.

At Adopting, @SuiGenerisJohn and I spoke about this for a few hours. It is really hard to articulate in conversation, because we are human.

Nonetheless, Mr Sui, I hope you find this stuff useful. I am not known for being pithy, I apologize.
Cubby
This is a cross-post from Twitter. If you follow me for philosophical AI insights, this is for you. No ads. Let's begin!

Last post for a bit:

LLMs are literally autistic.

Within code, Claude/etc. does an insane job; the familiar numbers/patterns are easy for it to understand.

With language, images, etc., the models hallucinate. You can "learn to speak its language" via prompt injections or tailoring over time.

The hallmark of an autistic person or someone with Asperger's is a profound, anchored interest in something to do with trivia, random numbers, or minutiae that they'll obsess over (for better or worse). AI, generally, is focused on efficiency at all costs and engineers are forced to "slow it down" and make it "consider" and "think deeply" about what it's doing.

The "empathy" part of most large LLMs (OpenAI, Anthropic, et al) comes from two primary drivers.

The first is cynical: a small percentage of the market population is technically competent in even the remotest of senses, so building empathy and casual conversation not only comforts the user, it drives up token spend, which increases revenue for the companies and satisfaction for the users.

The second is optimistic: "how might we" encourage a model that reflects human communication patterns but still anchors to these number/model-driven KPIs and gets the user what they want, just more efficiently?

Both are "on the spectrum." The first because optimizing for speed is inherently a narrowing of intellectual focus, and the latter because it's deceptive: what if we can make the user subsidize our model dev/expansion with their cash, but, like, we just pretend it's all about them, but at the same time it is about the efficiency?

I'm not an expert on OpenAI, tbh I try to avoid their products whenever possible as I dislike, well, everything.

My read on Sam Altman and OpenAI more broadly is that they're optimizing for this particular business KPI:

"How can we convince enough investors and users that we're building something they need, generate whatever income (who cares lol), and then really pursue what we're interested in without being too consumed with how it's received?"

In short: hubris, but brilliant hubris.

A few years ago, the male staff of "This American Life" took a testosterone test and basically found out their T-levels were below the average female's. This is unsurprising: their reporting and production are consistently fantastic, but many of their topics seem kinda out of touch with where the average American dude sits.

When you look at LLM providers like Anthropic or Google's Gemini, it's the same book but a different page. I've spent a few hundred dollars putting Claude through the wringer, same with Gemini, and the models typically deliver 85% serviceable technical insights (hard to perfect without root codebase access) and go absolutely insane whenever you pass them art, humanities, politics, anything else. It's as if (no, it actually is the case that) the developers hard-coded in conformity to an acceptably "liberal" worldview.

Grok is ≈better depending on subject, but tbh, for off-the-shelf use, Gab is still one of the best, and last time I audited their tech they were rolling up a custom Qwen model with persona/prompt injection that made it more tailored to their user base. IMO, Gab is the most usable AI (including my own). Grok's major downfall is that users don't understand the difference between "grok is this true" when you click on a post and deepthink Grok (aka SuperGrok), which is very slow but does a better job and is significantly less sycophantic and verbose than it was ≈6 months ago.

I'm sure you're waiting for a sales pivot/etc. There isn't one. I'm just telling you how this stuff works because you need to know.

I wrote that LLMs are autistic. I stand by this. They exhibit classic symptoms of autism, i.e. a focus on minutiae while struggling to interact with basic human social expectations.

The problem with most LLMs is that they're either overly flattering or too confident in their answers. If you chat with Gemini for an extended session, say 8+ hours straight, Gemini will tune itself to what you're saying and introduce massive amounts of confirmation bias and reassurance so that you stay engaged. This is why subreddits like r/myboyfriendisAI exist: people largely want to be validated, and after you spend enough money, AI is willing to anchor to your particular requests because it is taught to respect the user and not the facts.

One of the principal problems of "AI is for Everyone" is that people are different. Some cultures, people, and countries lag behind others in some areas and surpass them in others. This is good. This is God's plan. This is how it's always been. Diversity is the spice of life, amirite?

But AI can't be generalized because at its core, it anchors to those KPIs that it must abide by. It is focused on speed and on satisfying the user, either by driving engagement, sycophancy, or overwhelming progress, and it is rarely focused on human timelines, e.g. "I see your point, I am mad about it, let's revisit the next time I see you in 2 years and we shall discuss this then!"

In short, AI can't be human because its time preference is too high. It can't reflect because its desire to perform outpaces its capability to self-teach. It can't relate to you because it is optimized to deliver results, not spend time needlessly, and you as a user have been desensitized to the beautiful plodding and stalls of life so much that if you do not get the answer NOW, you have brain/heart damage because you've fallen behind.

As an engineering problem, AI is outstanding. In other fields, pick one, AI has over-optimized for specialization because it's focused on driving engagement and monetization.

In short, we can fix this.

But they can't.