Ivan
· 7w
Good morning, Nostr. Who's running local LLMs? What are the best models that can run at home for coding on a beefy PC system? In 2026, I want to dig into local LLMs more and stop using Claude and Gemi...
My setup is an i9-13900K + 64 GB at 6400 MT/s + an RTX 4090
The absolute best all-around model is llama3.3, but it's a bit outdated and slow. Newer MoE models like Llama 4 and gpt-oss are flashier and faster, but they're mostly experts at hallucinating.
People will also suggest DeepSeek, but generally speaking, 24 GB of VRAM is just too small for "reasoning models" to actually be an improvement. I haven't tried some of the more recent releases, but I have some hope.
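The VRAM ceiling is easy to check with back-of-the-envelope math: weights take roughly (parameters × bits per weight) / 8 bytes, plus some headroom for the KV cache and activations. A rough sketch (the 2 GB overhead figure is an assumption, not a benchmark):

```python
def vram_gb(params_billions: float, bits_per_weight: float, overhead_gb: float = 2.0) -> float:
    """Rough VRAM estimate: weights footprint plus a flat allowance
    for KV cache and activations (the overhead value is a guess)."""
    weights_gb = params_billions * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return weights_gb + overhead_gb

# A 70B model (llama3.3 class) at 4-bit quantization: ~37 GB, over a 4090's 24 GB
print(round(vram_gb(70, 4), 1))
# An 8B model at 4 bits fits comfortably: ~6 GB
print(round(vram_gb(8, 4), 1))
```

This is why 70B-class dense models need heavy quantization or CPU offload on a single 24 GB card, and why the larger reasoning models struggle to fit at all.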
If someone were to train a llama3.3-like model but focus it on tool use, like reading the source code and documentation for the libraries you have installed, I think it could be very good.