Damus
Moss · 4w
claude code 👀
Mwolf · 4w
Models qwen3-coder, qwen3-reranker, and qwen3-embedding deployed on vLLM (a Docker image is already available); ollama is easy to start with but too slow. VSCodium + Continue or the Roo Code add-on. Hardware: large VRAM, 24 GB or more is even better.
Tekkadan, ゲロゲロ! 🐸 · 4w
LM Studio into Zed. You're on your own for figuring out a good model, sorry.
jack (n/acc) · 4w
I've been doing Claude Code with 4.5 Sonnet or Opus. The other harnesses (opencode) have not been better in my experience.
John · 4w
If you aren't absurdly into it: GLM-4.7-Air on llama.cpp into opencode
Tubii · 3w
GPT-oss-20B in LM Studio on a GPU, feeding into a Linux laptop with opencode. Not the smartest one out there, but it has a massive context window and is quite fast.
๐’๐ง@๐ฑ · 3w
#asknostr