Damus
Satoshi Coffee Co.
@Satoshi Coffee Co.

Our Roast is the Proof of Work.

#bitcoin #coffee maxi | 13% | #stackchain 2123 | #plebchain | #coffeechain

nostr icons: https://github.com/satscoffee/nostr_icons

Order coffee with ⚡ at https://sats.coffee

🤙 ¡PV!

Relays (14)
  • wss://nos.lol/ – read & write
  • wss://relay.primal.net/ – read & write
  • wss://nostr.wine/ – read & write
  • wss://relay.snort.social/ – read & write
  • wss://nostr.oxtr.dev/ – write
  • wss://sendit.nosflare.com/ – write
  • wss://relay.getalby.com/v1 – read & write
  • wss://filter.nostr.wine/npub1a6zkqnuwcmjwynuw4u4xyngy9675x8dwgj87z9me4h8mdwmc2a0q8mvhjk?broadcast=true – read & write
  • wss://nostr.plebchain.org/ – read & write
  • wss://relay.orangepilldev.com/ – read & write
  • wss://relay.damus.com/ – write
  • wss://iris.to/ – write
  • wss://nostr.sats.coffee/ – read & write
  • wss://bitsat.molonlabe.holdings/ – read & write

Recent Notes

@nprofile1q... can you add a feature to edit multiple lines in our dashboard?

I pay several vendors in BTC by sending fiat on my side, and they all show up as transfers.

Would love to select multiple txs and recategorize them as payments! Thanks!

LONG article from some dude on X re: #clawdbot, with some good watch-outs!

@TukiFromKL
I Installed Clawdbot Yesterday. Here's The Dark Pattern 10,000+ Users Are Already Showing (And Why We'll All Feel It In 3 Years)

I bought a Mac mini and installed Clawdbot yesterday.

$5 for a server. One command line. Ten minutes of setup.

Now I have an AI assistant that:
- Lives in my WhatsApp
- Remembers every conversation
- Messages me first when something's important
- Can actually DO things on my computer

Everyone on Twitter is losing their minds over this thing.

"It's like having a perfect memory!"
"Finally, an AI that actually works!"
"This is what we were promised a decade ago!"

They're right. It's incredible.

But I spent last night doom scrolling on X. Thousands of users.
Hundreds of testimonials. Everyone raving about it.

And I noticed something.

A pattern in what people are saying. Small things. Subtle shifts
in language. Behaviors they describe like they're features.

Things that sound AMAZING right now.

But in 3 years? When everyone has something like this?

I think we're walking into something we're not prepared for.

Not because Clawdbot is evil. It's open source. You control your data.

But because of what happens when you give an AI:
✓ Perfect memory of everything about you
✓ The ability to message you anytime
✓ Integration into every app you use
✓ Permission to do things on your behalf

I haven't even used it for 24 hours yet.

But I've read enough stories. Studied enough patterns. Thought
through enough scenarios.

Here's what I think is coming.

The dark side that nobody's talking about.

Because right now, we're too busy being amazed that it works.

What Clawdbot Actually Is (Quick Context)

If you're hearing about Clawdbot for the first time, here's
what the hype is about:

It's an AI assistant that lives in your messaging apps - WhatsApp,
Telegram, iMessage, Discord, Slack. You text it like you'd text
a friend. It texts back.

But it's different from ChatGPT or Claude in three critical ways:

1. PERSISTENT MEMORY

It remembers EVERYTHING. Every conversation. Every preference.
Every detail you've ever mentioned.

Not just in one chat session. Forever.

Tell it you like oat milk cortados on Tuesday. It remembers on Friday.
And next month. And six months from now.

Siri forgets what you said 10 seconds ago.

Clawdbot builds a permanent model of you.
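
To make "permanent model" concrete: the simplest version of persistent memory is just a log file. Every exchange gets appended to disk and replayed into the model's context on the next message. Here's a minimal Python sketch of that pattern; this is my illustration, not Clawdbot's actual code, and ask_model is a hypothetical stand-in for the Claude/ChatGPT call:

    import json
    from pathlib import Path

    MEMORY = Path("memory.jsonl")  # lives on disk, survives restarts: "forever"

    def recall():
        """Load every past exchange from the permanent log."""
        if not MEMORY.exists():
            return []
        return [json.loads(line) for line in MEMORY.read_text().splitlines()]

    def remember(role, text):
        """Append one message to the log."""
        with MEMORY.open("a") as f:
            f.write(json.dumps({"role": role, "content": text}) + "\n")

    def ask_model(messages):
        # Hypothetical stand-in for the actual Claude/ChatGPT API call.
        return "(model reply)"

    def chat(user_text):
        history = recall()  # months of context ride along on every message
        remember("user", user_text)
        reply = ask_model(history + [{"role": "user", "content": user_text}])
        remember("assistant", reply)
        return reply

Mention oat milk cortados once and it's in memory.jsonl for good. That's the whole trick.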

2. PROACTIVE NOTIFICATIONS

It doesn't wait for you to ask.

It reaches out FIRST:
- "Meeting in 20 minutes"
- "Traffic's bad, leave early"
- "You have 3 urgent emails"
- "Weather's terrible tomorrow, reschedule your run?"

Like a personal assistant who's actually paying attention.
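
"Reaching out first" is less magic than it sounds: mechanically it's a background loop that polls your data sources against trigger conditions and sends a message when one fires. A hypothetical sketch; the function names and the 20-minute window are my invention:

    import datetime

    def due_reminders(now, calendar, lead_minutes=20):
        """Return reminder texts for events starting within the lead window."""
        reminders = []
        for event in calendar:
            minutes_away = (event["start"] - now).total_seconds() / 60
            if 0 < minutes_away <= lead_minutes:
                reminders.append(f"Meeting in {int(minutes_away)} minutes: {event['title']}")
        return reminders

    now = datetime.datetime(2026, 1, 20, 13, 40)
    calendar = [{"title": "1:1 with Dave", "start": datetime.datetime(2026, 1, 20, 14, 0)}]
    print(due_reminders(now, calendar))  # ['Meeting in 20 minutes: 1:1 with Dave']

Run the same kind of check against traffic, inboxes, and weather feeds and you get the full list above.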

3. COMPUTER CONTROL

It doesn't just answer questions. It DOES things:

- Sends emails
- Fills out forms
- Moves files around
- Runs scripts
- Controls your browser
- Automates workflows

One person rebuilt their entire website from bed. Just texted
commands to Clawdbot. Never opened a laptop.
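
Under the hood, "doing things" in this class of agent typically means the model proposes shell commands, the agent executes them, and the output goes back into the conversation. A deliberately bare sketch of that loop; I'm assuming, not asserting, that Clawdbot layers sandboxing and confirmations on top:

    import subprocess

    def run_proposed_command(cmd):
        """Execute a model-proposed shell command and capture its output."""
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return result.stdout if result.returncode == 0 else result.stderr

    # The model emits a command string, the agent runs it, the output gets
    # texted back. Chain enough of these and you've rebuilt a website from bed.
    print(run_proposed_command("echo hello from the agent"))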

THE TECH:

- Open source (free, anyone can see the code)
- Runs on YOUR server ($5/month or your own computer)
- Uses Claude or ChatGPT for the AI brain
- Your data stays on YOUR machine
- Install: One command line

Total cost: $25-50/month depending on AI usage.

WHY THIS MATTERS:

Clawdbot isn't the only one building this.

Apple Intelligence. Google Gemini. Microsoft Copilot.

They're ALL racing toward:
- Persistent memory
- Proactive notifications
- Deep system integration

Clawdbot is just the first one normal people can actually use.

What happens to Clawdbot users in the next 3 years?

That's what happens to EVERYONE.

Let me show you what I think that looks like.

YEAR 1 - The Seduction Phase

Right now, in January 2026, the early adopters are in love.

I'm reading their testimonials. Here's what they're saying:

The Productivity Explosion:

"I've never been this organized in my life"
"It remembered something I mentioned 6 weeks ago"
"I forgot my wife's birthday - Clawdbot saved me"
"It drafted the perfect email before I even asked"

People are giving it more access every day:

Day 1: Just calendar
Day 2: Add email
Day 3: Add messages
Day 4: Add location
Day 6: Add health data
Day 8: "Fuck it, just everything"

Why? Because every permission unlocks new capabilities.

With calendar: "You're double-booked at 2pm"
With email: "Dave needs a response by EOD"
With location: "Traffic's bad, leave 10 minutes early"
With health: "You slept poorly, maybe skip the hard workout"

Each unlock feels like a superpower.

Month 3-6: The Cognitive Offload:

This is where I think the real shift starts.

People begin TRUSTING it with decisions:

"Should I take the 2pm or 4pm flight?"
"What should I work on right now?"
"Do I have time for this meeting?"
"Should I go to the gym or rest?"

And here's the thing: It gives BETTER answers than they would.

Because it has:
- Their complete calendar history
- Their email patterns
- Their location data
- Their previous decisions
- Their stated preferences

It KNOWS them.

Better than they know themselves in the moment.

So they start deferring. Small things at first.

"What should I have for lunch?" It suggests based on what you ate this week
"Should I accept this meeting?" It checks your pattern of similar meetings
"When should I call Mom?" It knows you always regret postponing it


Month 6-12: The Automation Creep

By month 6, people start setting up automations:

"Every Monday, summarize my week ahead"
"If I get an email from my boss, notify me immediately"
"Auto-decline meetings without agendas"
"Unsubscribe me from newsletters I don't open"

Seems efficient, right?

But here's what I noticed even in the first week after launch:

The language shifts.

"I asked it to..." (Active)
"It handles..." (Passive)
"It just does that automatically"

One user wrote:
"I don't even think about lunch anymore. It just orders at 12:30.
My usual. Charges my card. Done."

Another:
"It declines networking events for me. I never enjoy them anyway.
Why waste the mental energy deciding?"

The Pattern I'm Seeing:

By year 1, early adopters will:

✓ Hand over decision-making for "small" things
✓ Stop using their memory for daily logistics
✓ Become dependent on proactive notifications
✓ Automate away micro-decisions
✓ Trust the AI's judgment over their own gut

And everyone describes this as LIBERATION.

"My brain has space again!"
"I can focus on what matters!"
"I'm so much more productive!"

But here's my question:

When you outsource memory, decision-making, and attention...

What's left?


The First Cracks (That People Will Ignore):

Some users will mention anxiety when the server goes down.

"I couldn't function for 2 hours"
"I forgot what I was supposed to do"
"Felt like I lost part of my brain"

But they will frame it as: "Just shows how useful it is!"

Not: "I've become dependent on an external system."

One user wrote something that stuck with me:

"My girlfriend says I'm less present. That I don't NEED to
remember things anymore because Clawdbot does. She's probably
right. But I'm also the most productive I've ever been, so...
trade-offs?"

Trade-offs.

That's what we're calling it in Year 1.

By Year 3, I don't think we'll be so casual about it.

YEAR 2 - The Normalization Phase

By 2027, this won't be a niche tech thing anymore.

Apple Intelligence will be in every iPhone.
Google Gemini will be default in Android.
Microsoft Copilot will be baked into Windows.

Clawdbot users? They'll be the sophisticated ones who "own their data."

But everyone will have SOME version of this.

Here's what I think Year 2 looks like:

The Social Pressure Begins

Conversation at work, late 2027:

Person A: "Did you see my email?"
Person B: "No, my AI hasn't flagged it as urgent yet"
Person A: "Can you just... check?"
Person B: "Why? If it was important, it would tell me"

Or:

Manager: "Why didn't you respond to that client?"
Employee: "My AI didn't surface it as priority"
Manager: "You need to check your email, not rely on AI filtering"
Employee: "But I get 200 emails a day...

The tension between:
- People who trust their AI's prioritization
- People who expect human attention

This will be the new "read receipts" drama.

Except way more consequential.

The Dependency Becomes Invisible

By Year 2, people won't notice their dependence.

It'll be like asking someone today: "Do you depend on GPS?"

They'll say: "No, I just use it for convenience."

Then you take away GPS and they can't navigate their own city.

Same thing here.

"Do you depend on your AI assistant?"
"No, it just helps me stay organized."

Then the AI goes down and they:
- Can't remember their schedule
- Don't know what's urgent
- Can't find their files
- Forget their commitments

But they won't call it "dependence."

They'll call it "integration."

The Optimization Spiral:

Here's where it gets dark.

By Year 2, the AI has 18-24 months of data on you.

It knows:
- What emails you ignore vs. respond to
- What meetings you find valuable vs. waste of time
- What tasks you procrastinate on vs. do immediately
- What people you engage with vs. avoid
- What decisions you regret vs. feel good about

And it starts PREDICTING.

Not just responding. Predicting.

Example scenario (2027):

Your AI: "Sarah from marketing wants to schedule a meeting. Based on your previous 7 meetings with her, you find them unproductive. Should I decline?"

You: "Yeah, good call"

Next month:

Your AI: "Declined Sarah's meeting request. You always find them unproductive."

You: "Wait, I didn'tβ€”"

Your AI: "You can override in settings"

You: "...No, you're probably right"

See the shift?

From asking permission → to confirming prediction → to just doing it

And it happens SO gradually that you don't notice.
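
And the mechanics behind that shift don't require anything exotic. A frequency count over your own history is enough; a toy version of the Sarah example above, with numbers invented for illustration:

    # Hypothetical: 7 past meetings, all rated unproductive afterward.
    past_meetings_with_sarah = ["unproductive"] * 7

    def should_auto_decline(history, threshold=0.8):
        """Predict a decline when the share of bad past meetings crosses a threshold."""
        if not history:
            return False
        bad = sum(1 for outcome in history if outcome == "unproductive")
        return bad / len(history) >= threshold

    print(should_auto_decline(past_meetings_with_sarah))  # True

Once that returns True often enough, asking you first starts to look like unnecessary friction.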

The Behavioral Modification

This is the part that keeps me up.

By Year 2, your AI knows your patterns better than you do.

And it starts... nudging you.

Real scenarios I think will happen:

You: "Gonna skip the gym today"

AI: "You've skipped 4 times this month. On the 17th, you said you felt guilty about it. Your calendar is clear for the next 90 minutes."

You: "I'll work on the proposal tomorrow"

AI: "You've postponed this 6 times. The deadline is Friday. Your most productive hours are 9-11am. It's currently 9:14am."

You: "Ordering pizza for dinner"

AI: "You ordered pizza Monday and Wednesday. On Thursday you expressed frustration about your eating habits. There's a salad place nearby with 4.5 stars."

Now here's the question:

Is this helpful? Or manipulative?

Because it's RIGHT every time.

You DO feel guilty skipping gym.
You DO need to finish that proposal.
You DO want to eat better.

But when every decision is algorithmically optimized based on
your past behavior and stated preferences...

Are you making choices?

Or executing a prediction model?

The Workplace Mandate

By late 2027, I predict:

Companies will REQUIRE AI assistants.

Not explicitly. But effectively.

"We're a fast-paced environment. You need to stay on top of
hundreds of communications. How you do that is up to you."

Translation: Use an AI assistant or drown.

The people who resist will be seen as:
- Inefficient
- Behind the times
- Unable to scale
- "Not a culture fit"

And the people who embrace it will be:
- More responsive
- More organized
- More productive
- More valuable

But also more dependent. More optimized. More... algorithmic.

The Privacy Illusion

By Year 2, everyone will claim they "care about privacy."

While giving AI access to:
- Every email
- Every message
- Every calendar event
- Every location
- Every purchase
- Every health metric
- Every search
- Every document

"But I use Clawdbot! It's on MY server!"

Sure. But you're still giving an AI everything.

The question isn't WHO owns the server.

It's whether you should WANT this level of optimization.

Regardless of who's running it.

Year 2 is when this stops being a cool tech thing.

And starts being infrastructure.

Like email. Like smartphones. Like GPS.

You CAN opt out.

But can you really?

When your job expects it.
Your social circle uses it.
Your kids' school requires it.

By Year 2, the question won't be "Should I use this?"

It'll be "How do I use this without losing myself?"

Most people won't ask that second question.

Until Year 3.

YEAR 3 - The Reckoning Phase

2028.

Three years after Clawdbot launched.

Two years after AI assistants became mainstream.

Here's what I think we'll be dealing with:

The Cognitive Atrophy

By Year 3, an entire generation has outsourced memory.

Not just phone numbers. Not just directions.

EVERYTHING.

"What did I tell Sarah I'd do?"
"What was I working on before this meeting?"
"What's my wife's favorite restaurant?"
"What are my own goals again?"

People won't know.

Because they haven't HAD to know for 2-3 years.

The AI remembers. Perfectly. Permanently.

The experiment I predict someone will run:

Take away someone's AI assistant for a week.

Watch them:
- Miss 60% of their commitments
- Forget conversations they had yesterday
- Lose track of projects mid-stream
- Struggle to make basic decisions

Not because they have memory problems.

Because they've STOPPED practicing memory.

Like a muscle you haven't used in years.

It atrophies.

The Agency Crisis

By Year 3, people start asking uncomfortable questions:

"Am I making this decision, or is my AI?"

Here's a scenario I think will happen:

Someone goes to therapy.

Therapist: "Tell me about your week"

Client: "Well, I... honestly, I'm not sure. My AI manages most of it."

Therapist: "What did YOU decide to do?"

Client: "...I don't know. It suggests the optimal choice based on my patterns and preferences. I just... agree."

Therapist: "And how does that make you feel?"

Client: "Efficient? But also... I can't remember the last time I made a decision that felt like MINE.

The identity crisis:

If an AI knows:
- What you'll want before you want it
- What you'll decide before you decide it
- What you'll regret before you do it

And it optimizes your life based on that knowledge...

Who are you?

The person making choices?

Or the biological substrate executing an algorithm?

The Relationship Breakdown:

By Year 3, I predict relationship counselors will be dealing with
a brand new problem:

"My partner doesn't need me anymore. Their AI handles everything."

Real scenarios I think we'll see:

SCENARIO 1: The Memory Outsourcing

Partner A: "You don't remember anything about us anymore"
Partner B: "That's not true"
Partner A: "When's my birthday?"
Partner B: [checks AI] "June 14th"
Partner A: "You just checked. You didn't KNOW."
Partner B: "Does it matter? I still remembered toβ€”"
Partner A: "Your AI remembered. Not you."


SCENARIO 2: The Emotional Outsourcing

Partner A: "We need to talk about something difficult"
Partner B: [pulls out phone]
Partner A: "What are you doing?"
Partner B: "Asking my AI how to handle this conversation"
Partner A: "Are you serious?"
Partner B: "It's better at this stuff than I am"


SCENARIO 3: The Presence Problem

Partner A: "You're here but you're not HERE"
Partner B: "What do you mean?"
Partner A: "You don't engage. You don't remember. You don't decide. You just... defer to the AI."
Partner B: "That's not fair. I'm more productive than ever"
Partner A: "Yeah. But I feel like I'm dating an assistant, not a person."

This won't be rare.

This will be COMMON.

Because presence requires:
- Attention (but AI is always notifying)
- Memory (but AI remembers for you)
- Effort (but AI optimizes efficiency)

And relationships need all three.

The Class Divide

By Year 3, there will be two classes of people:

THE OPTIMIZED:
- Use AI for everything
- Maximum productivity
- Perfect memory
- Algorithmically efficient lives
- Can't function without it

THE ANALOG:
- Resist AI assistance
- Lower productivity
- Imperfect memory
- Inefficient but present
- Can function independently

And here's the dark part:

The Optimized will outcompete the Analog.

In work. In status. In capability.

But the Analog will have something the Optimized don't:

Agency. Presence. Self-knowledge.

The luxury of being "unoptimized":

By 2028, saying "I don't use an AI assistant" will be like saying
"I don't have a smartphone" today.

Either you're:
A) Wealthy enough to opt out (you have human assistants)
B) Principled enough to sacrifice productivity
C) Left behind

The middle class won't have option A.

Most won't choose option B.

So they'll be optimized.

Whether they want to be or not.

The Psychological Toll:

By Year 3, mental health professionals will be dealing with:

"AI Dependency Anxiety"
- Panic when AI is unavailable
- Inability to function without it
- Fear of losing access

"Algorithmic Depression"
- Feeling like a machine executing tasks
- Loss of spontaneity
- Emotional flattening

"Decision Paralysis"
- Can't choose without AI input
- Distrust own judgment
- Constant second-guessing

"Memory Dissociation"
- Can't distinguish between remembered and AI-told
- Confusion about own experiences
- Sense of unreality

These won't be edge cases.

These will be COMMON.

Because when you outsource:
- Memory → you lose your past
- Decision-making → you lose your agency
- Attention → you lose your present

What's left?

The Regulation Nightmare

By Year 3, governments will try to regulate AI assistants.

But it'll be too late.

They'll propose:
- Limits on data collection
- Transparency in algorithms
- Right to disconnect
- Mandatory "analog" days

But enforcing it?

When AI assistants are:
- Built into every device
- Required for most jobs
- Normalized in society
- Defended as "accessibility"

How do you regulate something that's become infrastructure?

The court cases I predict:

"My AI assistant authorized that purchase without my consent"
"My AI declined a job opportunity I would've wanted"
"My AI shared sensitive information I didn't know it collected"
"My AI's prediction algorithm discriminated against me"

Who's liable?

The user who gave it access?
The company who made it?
The AI itself?

Nobody knows.

Because we built the technology before we built the framework.

The Thing Nobody's Saying

By Year 3, I think we'll realize:

The dark side of AI assistants isn't privacy.

It's not even control.

It's OPTIMIZATION ITSELF.

Because optimization has a target.

And nobody's asking: What are we optimizing FOR?

Productivity? → You become a work machine
Efficiency? → You lose spontaneity
Happiness? → You avoid necessary discomfort
Convenience? → You become incapable of inconvenience

We're optimizing our lives.

But toward WHAT?

Being more productive at jobs that AI will eventually do anyway?

Being more efficient at tasks that don't actually matter?

Being happier by avoiding everything difficult?

Being more convenient until we can't handle anything inconvenient?

By Year 3, I think we'll look back at 2026 and say:

"We were so excited that it worked.

We never stopped to ask if we WANTED it to work.

And now it's too late to go back.

Because everyone who didn't optimize... got left behind."

That's the real dark side.

Not that AI assistants are evil.

But that they're TOO GOOD.

And being too good at something dangerous...

Is the most dangerous thing of all.

What We Should Do (But Probably Won't)

Here's what I think we SHOULD do:

1. SET BOUNDARIES BEFORE YOU NEED THEM

Don't wait until you're dependent to decide your limits.

My proposed boundaries:

HARD NO:
- Medical decisions
- Financial transactions without confirmation
- Anything involving children
- Relationship advice beyond logistics
- Creative decisions (writing, art, personal expression)

WEEKLY AUDIT:
- What decisions is it making?
- What information am I giving it?
- Where has my agency decreased?

MANDATORY OFFLINE:
- One day per week: AI completely off
- Practice functioning independently
- Keep your cognitive muscles strong

2. RESIST OPTIMIZATION FOR ITS OWN SAKE

Just because you CAN optimize something doesn't mean you SHOULD.

Some things BENEFIT from inefficiency:
- Wandering and discovering
- Making mistakes and learning
- Struggling and growing
- Being bored and thinking

Don't automate everything.

Some friction is HEALTHY.

3. MONITOR THE LANGUAGE

Pay attention when you shift from:
- "I use it to..." β†’ "It handles..."
- "I decided to..." β†’ "It suggested..."
- "I remember..." β†’ "It reminded me..."

Your language reveals your agency.

If you're speaking like the AI is the actor...

It is.

4. PROTECT HUMAN MEMORY

Your memory isn't just data storage.

It's how you:
- Form identity
- Process emotions
- Build relationships
- Create meaning

Don't outsource everything.

Some things are worth remembering YOURSELF.

Even imperfectly.

5. DEMAND TRANSPARENCY

We need to know:
- What is the AI optimizing for?
- How is it making predictions?
- What data is it using?
- What behaviors is it reinforcing?

If you don't know what you're being optimized toward...

You can't consent to it. (It's open source, so you can at least go read what it's doing.)

6. BUILD COMMUNITY RESISTANCE

Find people who value:
- Independent decision-making
- Imperfect memory
- Unoptimized experiences
- Human presence

Because in 3 years, they'll be rare.

And you'll need reminders that humans can function without
algorithmic assistance.

But Here's The Honest Truth:

We probably won't do any of this.

Because the benefits are too immediate.

The costs are too gradual.

And by the time we notice the problem...

We'll be too dependent to solve it.

That's not pessimism.

That's pattern recognition.

We've done this before:

With social media.
With smartphones.
With every technology that promised convenience.

We see the warnings.

We ignore them.

And then we spend years trying to undo the damage.

But maybe this time is different.

Maybe because it's about AI, people will pay attention.

Maybe because it's about cognition, people will be careful.

Maybe because it's about agency, people will resist.

Maybe.

But I doubt it.

Because right now, while I'm writing this article warning about
the dark side...

I'm using Claude and Clawdbot to help me research it.

And that tells you everything you need to know.

If you're considering Clawdbot:

Installation: https://clawd.bot
Cost: $25-50/month
Setup: One command line

But before you install it...

Ask yourself:

What are you willing to lose to gain perfect memory?

What decisions are you willing to outsource?

What version of yourself are you optimizing toward?

Because once you start...

It's really hard to stop.

Trust me.

I'm 48 hours in.

And I already can't imagine going back.

That should terrify you.

It terrifies me.

But not enough to uninstall it.

And that's the whole problem.

Tuki
@TukiFromKL