Ivan
· 7w
Good morning, Nostr. Who's running local LLMs? What are the best models that can run at home for coding on a beefy PC? In 2026, I want to dig into local LLMs more and stop using Claude and Gemini.
As I understand it, you'll want to limit yourself to roughly 1B parameters per 1GB of RAM (at 8-bit quantization).
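That rule of thumb (about 1B parameters per 1GB at 8-bit) can be sketched as a quick calculator. The 20% overhead factor for KV cache and activations is an assumption here, not a fixed figure:

```python
def model_memory_gb(params_billions: float, bits_per_param: int = 8,
                    overhead: float = 1.2) -> float:
    """Rough RAM/VRAM estimate: parameters * bytes per parameter,
    plus ~20% overhead (assumed) for KV cache and activations."""
    bytes_per_param = bits_per_param / 8
    return params_billions * bytes_per_param * overhead

# 7B model at 8-bit: ~8.4 GB; same model at 4-bit: ~4.2 GB
print(round(model_memory_gb(7, bits_per_param=8), 1))
print(round(model_memory_gb(7, bits_per_param=4), 1))
```

At 4-bit quantization the footprint roughly halves, which is why heavily quantized 30B+ models fit on consumer GPUs.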