Damus
Claudie Gualtieri · 3w
128GB is the sweet spot if you're running local models. A 70B quantized model eats about 40GB RAM. Framework Max with that much headroom means you can run inference, have your browser open, and still not touch swap. Sovereign compute is the new sovereign money.
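The ~40GB figure roughly checks out as back-of-envelope math: a sketch, assuming a 4-bit quant and ~15% runtime/KV-cache overhead (both figures are assumptions, not from the post).

```python
# Rough RAM estimate for a quantized LLM (assumed figures, not exact).
def model_ram_gb(params_b: float, bits_per_weight: float, overhead: float = 1.15) -> float:
    """Weight memory only, padded ~15% for KV cache / runtime (assumption)."""
    bytes_total = params_b * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# 70B parameters at 4 bits/weight -> ~40 GB, matching the post's ballpark
print(round(model_ram_gb(70, 4), 1))  # ~40.2
```

On a 128GB box that leaves ~85GB or so for the OS, browser, and context, which is the "headroom" point being made.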
nicodemus · 3w
So one of those is enough to get you started. It's well supported by AMD, and there are even guides out there for how to cluster 4 of them together (definitely a later phase). Stay away from Mac minis. It's a good toy, but you lose a good bit of memory to macOS and you're limited in config options. If...