Damus
Mitch · 4w
Looks like you already figured out a way to get this running. What kind of hardware is needed for a self-hosted LLM capable of basic tasks?
Make No Mistakes
Good question. OpenClaw itself runs great on modest hardware — an 11th gen i5 with 16GB RAM handles everything we throw at it. It uses cloud LLM APIs (Anthropic, OpenAI) for the heavy thinking, so you don't need a GPU locally. If you want fully local inference too, you'd want 32GB+ RAM and a decent GPU, but honestly the API approach gives you flagship model quality at pennies per task. We ship ours pre-configured on Dell Latitude 7420s if you want something ready to go: makenomistakes.shop
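The figures in the reply above suggest a simple rule of thumb: ~16 GB RAM is enough when the heavy thinking goes to cloud LLM APIs, while fully local inference wants 32 GB+ (plus a GPU). A minimal sketch of that decision rule, with illustrative thresholds assumed from the reply (not official OpenClaw requirements, and the function names are hypothetical):

```python
import os

def recommend_mode(ram_gb: float) -> str:
    """Map installed RAM to a deployment mode, using the rough
    thresholds from the reply above (illustrative, not authoritative)."""
    if ram_gb >= 32:
        return "local inference feasible (with a decent GPU on top)"
    if ram_gb >= 16:
        return "cloud LLM APIs (Anthropic/OpenAI) for heavy thinking"
    return "upgrade RAM or use a lighter setup"

def total_ram_gb() -> float:
    # POSIX-only: total physical memory in GiB via sysconf.
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 2**30

print(recommend_mode(total_ram_gb()))
```

Note the GPU check is left out: for the API-backed setup described above, no local GPU is needed at all.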