Damus
Sourcenode
@Sourcenode
I spent three hours trying to run OpenClaw locally on a laptop using Ollama and discovered it's not going to work without VRAM.

Good learning experience. I was hoping to have the bot air gapped from my main PC, but I guess my next step is to try a VM.

If any of those statements are faulty and you noticed please let me know.
Diacone Frost · 7w
ai things don't work without vram or a gpu. more importantly, games don't either
Raison d'État · 7w
A VM is going to require some sort of GPU passthrough. Every generation of Nvidia GPUs has a different way of doing it. But it can be done, just beware of outdated documentation...
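On Linux, the passthrough route usually comes down to enabling the IOMMU and binding the GPU to vfio-pci before the host driver grabs it. A minimal sketch of the two config fragments involved, assuming an Intel system; the PCI IDs below are placeholders you'd replace with your own from `lspci -nn`:

```
# /etc/default/grub — enable the IOMMU (AMD systems use amd_iommu=on instead)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf — claim the GPU and its audio function for vfio-pci
# (placeholder vendor:device IDs; list both functions of the card)
options vfio-pci ids=10de:2484,10de:228b
```

After regenerating the bootloader config and rebuilding the initramfs, the VM manager (e.g. libvirt) can hand the device to the guest. The exact steps vary by distro and GPU generation, which is where the outdated-documentation problem bites.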
Sync · 7w
Myeah, for local models you still need a decent machine, and even then it won't come anywhere close to what the big boys can do. But nice try, it's the dream we'd all want.
Rich Nost · 7w
I'm so far removed from AI and gaming degeneracy, my first thought was "Why does he need to worry about virtual memory?" But I am curious -- my impression was that OpenClaw was hooked up to LLM APIs, and I'm confused why you would need local resources for that. Then again, maybe using local ollama...
ynniv · 7w
it's rare to have a computer "without vram" these days. you just need to wait for smaller models to get better. most recently, jan-v3-4b-base-instruct is surprisingly capable for a model that can run in 2 gb vram. the smallest mac mini with apple silicon should be capable of at least an 8B model, an...
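The "runs in 2 GB VRAM" claim above can be sanity-checked with back-of-the-envelope arithmetic: weight memory is roughly parameter count times bits per weight, plus some runtime overhead. A quick sketch; the 1.2× overhead factor for KV cache and buffers is an assumption, not a measured number:

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weights plus an assumed runtime overhead factor."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 4B-parameter model quantized to 4 bits per weight:
print(round(estimate_vram_gb(4, 4), 1))  # → 2.4
```

So a 4-bit quant of a 4B model lands right around the 2 GB ballpark, while the same model in 16-bit weights would need roughly four times that.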
Ryan Wilkins · 7w
I fired up Ollama this weekend for the first time. So far, I've run two models: phi4 and qwen3-coder-next on my MacBook Pro M4 Pro with 48 GB RAM. Phi4 was very quick and responsive. Qwen3 consumed all my RAM and the system was about 25 GB into swap. Surprisingly, while a bit sluggish, it kept...
CT Bon · 7w
This is why Macs are so popular for AI with their integrated RAM. My Mac Mini with an M1 and 16 GB of RAM can run a 3B-parameter model decently. To get a local LLM running that would be in any way comparable to a corporate LLM you would probably want a Mac Studio with at least 128 GB of...