utxo the webmaster
· 1w
For any local AI maxis, here is my current setup and models:
4x 3090s
2x - qwen3.5-35b (Q4, 256k context) - 60-80 t/s
2x - gemma4-27b (Q4, 256k context) - 50-70 t/s
Running on vLLM via docker
Working mint openclaw, Ge...
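A split like this can be sketched as two vLLM containers, each tensor-parallel across two of the 3090s. This is an illustrative sketch only: the image tag, placeholder model IDs, ports, and flags below are assumptions, not the exact commands from this rig.

```shell
# Sketch: split 4x 3090s into two 2-GPU vLLM servers.
# Model IDs, container names, and host ports are placeholders/assumptions.

# GPUs 0-1: first model, OpenAI-compatible API on host port 8000
docker run -d --name vllm-a --gpus '"device=0,1"' -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  vllm/vllm-openai:latest \
  --model <qwen-q4-model-id> \
  --tensor-parallel-size 2

# GPUs 2-3: second model, same API on host port 8001
docker run -d --name vllm-b --gpus '"device=2,3"' -p 8001:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  vllm/vllm-openai:latest \
  --model <gemma-q4-model-id> \
  --tensor-parallel-size 2
```

Each container then serves an OpenAI-compatible endpoint (e.g. `/v1/chat/completions`) on its mapped host port.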
I guess those zaps paid well in the end.