Damus
Ivan · 7w
Good morning, Nostr. Who's running local LLMs? What are the best models that can run at home for coding on a beefy PC system? In 2026, I want to dig into local LLMs more and stop using Claude and Gemini...
Hazey
I use local LLMs exclusively, mostly for coding. Two used 24GB 3090s, which gives 48GB of VRAM. That runs models up to 70B with very fast performance.
When inferring or training:
1. It uses a lot of power, peaking around 800W
2. It spins up the fans pretty loudly

I don't think it's necessary to go local for open-source coding, though. Maple (mostly gpt-oss-120) is great for that. I do think it's necessary to go local for uncensored models, for training on your own data, and for discussing things that don't fit mainstream bullshit narratives.
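
(For reference, a minimal sketch of what running a quantized ~70B model split across two 24GB GPUs could look like with llama-cpp-python; the model file, quantization, and split ratio here are assumptions, not Hazey's exact setup.)

from llama_cpp import Llama

# Load a ~70B model as a 4-bit GGUF and offload every layer to the GPUs.
# Model path and tensor_split are hypothetical; adjust to your own files and cards.
llm = Llama(
    model_path="llama-3.3-70b-instruct.Q4_K_M.gguf",  # assumed local GGUF file
    n_gpu_layers=-1,          # offload all layers to GPU
    tensor_split=[0.5, 0.5],  # split weights evenly across the two 3090s
    n_ctx=8192,               # context window; raise it if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that parses a CSV file."}],
)
print(out["choices"][0]["message"]["content"])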
Ivan · 7w
I've got a 3090; I might pick up another. Thank you!