Sourcenode
· 5w
I spent three hours trying to run OpenClaw locally on a laptop using Ollama and discovered it's not going to work without VRAM.
Good learning experience. I was hoping to have the bot air gapped from ...
This is why Macs are so popular for AI: their unified memory lets the GPU use system RAM. My Mac Mini with an M1 and 16 GB of RAM can run a 3B-parameter model decently. To run a local LLM that's in any way comparable to a corporate one, you'd probably want a Mac Studio with at least 128 GB of RAM. Even then it won't be as good as the corporate cloud models, but it would probably serve you well enough for coding and most basic things.
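The RAM numbers above roughly track the size of the model weights. A quick back-of-the-envelope sketch (assumed quantization sizes: ~0.5 bytes/param at 4-bit, 2 bytes/param at fp16; ignores KV cache and runtime overhead):

```python
def weight_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate size of a model's weights alone, in GiB."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

# 3B model at 4-bit quantization fits comfortably in 16 GB unified memory
print(round(weight_gb(3, 0.5), 1))   # → 1.4

# a 70B model at fp16 needs ~130 GB of weights, hence the 128 GB Mac Studio
print(round(weight_gb(70, 2), 1))    # → 130.4
```

In practice you'd quantize the big model too, but you still need headroom for context and the OS, which is why laptops without much VRAM (or unified memory) hit a wall.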