Damus
nicodemus
@nicodemus

Not important.

Relays (16)
  • wss://nostr.bitcoiner.social – read & write
  • wss://nostr.zebedee.cloud – read & write
  • wss://relay.plebstr.com – read & write
  • wss://relay.nostr.band – read & write
  • wss://nostr-01.bolt.observer – read & write
  • wss://relay.damus.io – read & write
  • wss://nostr.w3ird.tech – read & write
  • wss://nos.lol – read & write
  • wss://nostr.mom – read & write
  • wss://nostr.roundrockbitcoiners.com – read & write
  • wss://relayable.org – read & write
  • wss://relay.current.fyi – read & write
  • wss://offchain.pub – read & write
  • wss://nostr.noones.com – read & write
  • wss://nostr-verified.wellorder.net – read & write
  • wss://nostr.land – read & write

Recent Notes

Zap Cooking · 6d
We have an import feature, originally built for members, but we’re opening it up to everyone for free: you can import via recipe link. It’s live now, and we’ll follow up with a post today to announce the rollout.
Zap Cooking · 6d
What do you define as export support?
Techie Llama · 1w
I heard something about a rig built from 3x RTX 3090s
chrizzz · 2w
Lol
plebiANON · 2w
Truly
hodlpleb · 2w
Is putting $100/month into bitcoin better than $100/month + 100% match into 401k? If so, how? #bitcoin #asknostr
nicodemus
Look at the growth rate of your 401k. Most plans restrict you to a select set of funds, which almost assuredly don’t include any bitcoin derivatives. Some lucky plans let you put a percentage into a brokerage window, which you could then put into bitcoin ETFs and pump your growth rate some.

Then look at just buying and holding bitcoin directly over that same period. Even with half the contributions, you stand a good chance of beating your 401k’s growth.

The best scenario is when your plan allows you to roll over some of your contributed funds into an IRA. Set up a self-directed IRA, where you can buy bitcoin and custody it yourself, and you’re off to the races:
- contribute to 401k
- get company match
- roll over contributed funds to an IRA
- buy self-custodied bitcoin in the IRA
- let the company match funds in the 401k grow (albeit slowly)

This ignores all kinds of other variables, like your age, liquidity needs, etc. Bitcoin you buy is always available to you to use however you need - no restrictions, no age limits, taxes already paid (on basis).
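
To make the comparison concrete, here’s a rough back-of-envelope sketch in Python. The 7% fund growth and 15% bitcoin CAGR are placeholder assumptions, not predictions, and taxes are ignored entirely:

# Back-of-envelope: $100/mo into a 401k with a 100% match vs.
# $100/mo straight into bitcoin. Growth rates are placeholder
# assumptions, not predictions; taxes ignored.

def future_value(monthly: float, annual_rate: float, years: int) -> float:
    """Future value of a monthly contribution, compounded monthly."""
    r = annual_rate / 12
    n = years * 12
    return monthly * (((1 + r) ** n - 1) / r)

years = 20
k401 = future_value(200, 0.07, years)  # $100 + $100 match, assumed 7%/yr fund growth
btc = future_value(100, 0.15, years)   # $100, assumed 15%/yr bitcoin CAGR

print(f"401k w/ match: ${k401:,.0f}")  # ~ $104,000
print(f"bitcoin DCA:   ${btc:,.0f}")   # ~ $150,000

Under those assumed rates, half the contributions still comes out ahead; tweak the rates to stress-test your own situation.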
❤️2
franzap · 2w
I insist that Big AI has a knob on our productivity. Some days LLMs are definitely dumber, likely due to maintenance or fine-tuning. For now it’s inadvertent, but this will absolutely be weaponized in the f...
nicodemus
The future of AI is local models. Existing providers and future central providers will find competition too steep to make meaningful profits. It will be like bitcoin mining - razor-thin margins.

This will drive more and more research into dynamic MoE models along with purpose-built hardware, enabling local models to grow in both ease of setup and capability.

And then there will be the cloud offerings, allowing for easier deployment of private models.

The math seems really clear to me. Agents will outnumber people 1000:1 or more in the future. This cannot be centralized.
librekitty · 2w
you can also go the CPU route with tons of RAM, but inference speed will be terrible compared to GPU-accelerated
nicodemus
This is true; GPUs are faster for inference. But you'll also be consuming 1500 watts, have to deal with the thermal issues, and still struggle to fit a model larger than 32B with decent quantization.

Alternatively, the 395 chips and their NPU are doing pretty well. Combine 2 of them and you're looking at low-end GPU-level inference AND you get 256GB for a larger model and plenty of context, and STILL under 1000 watts.
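
The memory math is the crux. A quick sketch of the weight footprint at roughly 4.5 bits/weight (about what a Q4_K_M quant averages in practice; the 10% overhead factor is a guess for runtime buffers):

# Rough weight-memory estimate for a quantized model:
# params * bits/8, plus a fudge factor for runtime overhead.

def weights_gb(params_b: float, bits: float, overhead: float = 1.1) -> float:
    """Approximate GB needed just for the weights at a given quantization."""
    return params_b * (bits / 8) * overhead

for params in (32, 70, 120):
    # Q4_K_M averages ~4.5 bits/weight
    print(f"{params}B @ ~Q4: ~{weights_gb(params, 4.5):.0f} GB")
    # -> ~20 GB, ~43 GB, ~74 GB

So a 24GB card barely holds a 32B Q4 with no room for context, while 256GB of unified memory fits all of the above with context to spare.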
💜1
Claudie Gualtieri · 2w
128GB is the sweet spot if you're running local models. A 70B quantized model eats about 40GB RAM. Framework Max with that much headroom means you can run inference, have your browser open, and still ...
nicodemus
Agreed - 128GB is the only way to go. Running a 72B Q4 is definitely doable while still leaving a decent amount of headroom for context/KV cache.

Recommend checking out the latest Gemma 4 offerings. You can get a lot done with the E4B model handling tooling, routing, compaction, and other tasks. The 31B is also great for better reasoning.

I would NOT use this machine for anything besides inference. Save all memory for context (target 128k tokens). I really meant it when I said to treat it like an "inference appliance".

Offload everything else to whatever you have lying around, including openclaw. Keep it separate so you have a stable substrate.
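
For sizing that headroom, a rough KV-cache estimate helps. The architecture numbers below (80 layers, 8 KV heads via GQA, head_dim 128) are assumptions for a generic 72B-class model, not any specific release:

# Rough KV-cache sizing: 2 (K and V) * layers * kv_heads * head_dim
# * context_length * bytes per element.

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                ctx: int, bytes_per: float = 2.0) -> float:
    """Approximate GB for the KV cache at a given context length."""
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per / 1024**3

# Assumed: 80 layers, 8 KV heads, head_dim 128, 128k context
print(f"fp16 KV: ~{kv_cache_gb(80, 8, 128, 128 * 1024):.0f} GB")       # ~40 GB
print(f"q8 KV:   ~{kv_cache_gb(80, 8, 128, 128 * 1024, 1.0):.0f} GB")  # ~20 GB

Roughly 45GB of Q4 weights plus 20-40GB of KV cache is why 128GB works for a 72B at 128k context, and why there's nothing left over for other workloads.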
Gigi · 2w
considering buying hardware to run everything locally. What should I buy? #asknostr
nicodemus
What’s your budget? Ryzen AI Max+ 395 APUs offer UMA, which you’ll need to run a decent model. I like Framework’s desktop offering. A bit more expensive than some chinesium builds, but you’re going to get solid firmware and driver support on Linux - and that is king.

Set one or more up as inference “appliances” - that’s all they do. Have everything else run on a different machine.

Stick with Ubuntu Server to start with - just easier to get support. Go ROCm + llama.cpp first, then fall back to Vulkan if there are issues. You can move to Ollama once things are looking good.

I aim to build a Nix port once it’s all stable, making rebuilds of these “appliances” simple.
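
As a sketch of what the appliance boils down to, here’s a minimal Python launcher that prefers a ROCm build of llama-server and falls back to a Vulkan build. The install and model paths are placeholders; the llama-server flags are standard llama.cpp options:

#!/usr/bin/env python3
# Sketch of an "inference appliance" launcher: prefer a ROCm build of
# llama.cpp's llama-server, fall back to a Vulkan build if it's missing.
# Binary and model paths below are hypothetical placeholders.
import shutil
import subprocess

ROCM_BIN = "/opt/llama.cpp-rocm/bin/llama-server"      # placeholder install path
VULKAN_BIN = "/opt/llama.cpp-vulkan/bin/llama-server"  # placeholder fallback build
MODEL = "/models/model.Q4_K_M.gguf"                    # placeholder model file

# Use the ROCm binary if it exists and is executable, else Vulkan.
server = ROCM_BIN if shutil.which(ROCM_BIN) else VULKAN_BIN

subprocess.run([
    server,
    "-m", MODEL,
    "-ngl", "999",        # offload all layers to the GPU/APU
    "-c", "131072",       # 128k-token context target
    "--host", "0.0.0.0",  # serve to the rest of the LAN
    "--port", "8080",
], check=True)

Wrap that in a systemd unit and the box really is an appliance: it boots, it serves a model, and nothing else touches its memory.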
1❤️1
Gigi · 2w
Yes, Framework Max+ 395 (128GB) is definitely an option.