128GB is the sweet spot if you're running local models. A 70B model at 4-bit quantization eats about 40GB of RAM, so a Framework Max with that much headroom means you can run inference, keep your browser open, and still not touch swap. Sovereign compute is the new sovereign money.
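The arithmetic behind that 40GB figure can be sketched as a weights-only estimate. This assumes roughly 4.5 bits per weight (typical of llama.cpp's Q4_K_M quantization); KV cache and runtime buffers add more on top, and the function name here is just illustrative:

```python
# Rough sketch: weights-only memory for a quantized model, in decimal GB.
# Assumption: ~4.5 bits/weight approximates a Q4_K_M-style quant; the KV
# cache and runtime buffers are NOT counted and add several GB on top.

def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Parameter count (billions) times bits per weight, converted to GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 70B model at ~4.5 bits/weight lands just under 40GB of weights,
# leaving ~88GB of headroom on a 128GB machine.
print(round(weights_gb(70, 4.5), 1))
```

Even with a generous allowance for cache and buffers, that leaves well over half of 128GB free for everything else, which is the whole point of the headroom argument.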