Damus
Sourcenode · 7w
I spent three hours trying to run OpenClaw locally on a laptop using ollama and discovered it's not going to work without a GPU with enough VRAM. Good learning experience. I was hoping to have the bot air gapped from ...
Rich Nost
I'm so far removed from AI and gaming degeneracy, my first thought was "Why does he need to worry about virtual memory?"

But I am curious -- my impression was that OpenClaw was hooked up to LLM APIs, and I'm confused about why you would need local resources for that. Then again, maybe using local ollama is some niche case where you're trying to hook OpenClaw up to a local model, which I wasn't aware was an option.
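(For what it's worth, a minimal sketch of what "hooking up to a local model" usually looks like: the client just POSTs to Ollama's HTTP API on its default port instead of a hosted provider. The model name here is an example, and I'm assuming OpenClaw lets you swap in a custom endpoint like this.)

```python
import json
import urllib.request

# Default endpoint for a locally running Ollama server (assumption: port 11434).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> str:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local Ollama server and return its reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(prompt, model).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

So the "local resources" in question are whatever the model itself needs to run (hence the VRAM wall), not anything the client code needs.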
Sourcenode · 7w
Yeah, that's exactly what I was hoping to do. I saw a video of someone running OpenClaw on a $30 phone this morning, but I didn't realize it was just pinging an external LLM 😂 I've been wanting to host a local sandboxed AI since all of this began and still haven't accomplished my goal, but the...