AlexAnarcho
· 3w
Considering building my own AI server.
Does anybody have experience with this?
Do I need to get nvidia gpus?
I'm thinking of buying a Mac mini with maximum RAM, installing NixOS with Asahi and then ...
Asahi will not support Apple Silicon (MLX) inference, so macOS is not optional - but you can SSH into the Mac and run it headless as a server.
NVIDIA - the main parameter to look for is VRAM. A 4090 with 24GB is not very useful for inference of the largest models.
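As a rough sanity check on the VRAM point: weight memory scales with parameter count times bytes per parameter. This is only a back-of-the-envelope sketch - the 1.2x overhead factor for KV cache and runtime buffers is an illustrative assumption, not a measured number:

```python
# Rough memory estimate for LLM inference.
# Weights take params * bytes_per_param; the 1.2 factor is a guessed
# allowance for KV cache, activations and runtime buffers.
def est_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    weights_gb = params_b * (bits / 8)  # params in billions -> GB
    return weights_gb * overhead

# A 70B model at 16-bit is far beyond a 24 GB 4090:
print(round(est_gb(70, 16)))  # -> 168
# Even 4-bit quantized it does not fit in 24 GB:
print(round(est_gb(70, 4)))   # -> 42
# A small 8B model at 4-bit fits comfortably:
print(round(est_gb(8, 4)))    # -> 5
```

So a single 24GB card caps you at small-to-mid models unless you quantize aggressively, which is why total (V)RAM matters more than raw GPU speed here.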
Look at AMD with unified memory, where CPU and GPU share one RAM pool (that's what Apple does as well). Maybe the Framework Desktop - that's probably the best deal right now.
Avoid multiple GPUs (2x 4090 with 48GB combined vs one Mac with 64GB of unified memory - the Mac is much easier to run for inference; splitting a model between two GPUs is a pain and often doesn't work with the inference code for new models).
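To make the unified-vs-split comparison concrete, here is a toy fit check. The OS reservation and per-device overhead numbers are illustrative assumptions - real figures depend on the inference stack - and even when a split technically fits, the framework still has to support sharding that particular model:

```python
# Toy comparison: one unified memory pool vs splitting across two GPUs.
def fits_unified(model_gb: float, pool_gb: float, os_gb: float = 8) -> bool:
    # Assumed: leave ~8 GB of the shared pool for the OS and other apps.
    return model_gb <= pool_gb - os_gb

def fits_split(model_gb: float, per_gpu_gb: float, n_gpus: int,
               per_dev_overhead_gb: float = 4) -> bool:
    # Assumes a perfectly even layer split; each device also pays a fixed
    # overhead (runtime buffers, duplicated tensors) - 4 GB is a guess.
    shard = model_gb / n_gpus
    return shard + per_dev_overhead_gb <= per_gpu_gb

model = 42  # e.g. a heavily quantized ~70B model, in GB

print(fits_unified(model, 64))   # -> True: fits in 64 GB unified memory
print(fits_split(model, 24, 2))  # -> False: 21 GB shard + overhead > 24 GB
```

The asymmetry is the point: a single pool absorbs overhead once, while a split pays it per device and needs explicit multi-GPU support in the inference code.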