Hazey · 7w
I use local LLMs exclusively, mostly for coding. Two used 24 GB 3090s provide 48 GB of VRAM, which runs models up to 70B with very fast performance. When inferring or training, 1. It uses a lot of po...

Ivan @Ivan
I've got a 3090; I might pick up another. Thank you!
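For context, here is a minimal sketch of the kind of setup Hazey describes: sharding a ~70B model across two 24 GB cards. The library (Hugging Face transformers), the 4-bit quantization, and the model ID are assumptions, since the comment doesn't say which stack is actually in use; quantizing to 4 bits is one common way to make 70B weights (~140 GB at fp16) fit in 48 GB total.

```python
# Sketch, not Hazey's actual setup: load a quantized 70B model split
# across two GPUs. Model ID and 4-bit config are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.1-70B-Instruct"  # hypothetical 70B model

# 4-bit quantization shrinks 70B weights to roughly 35-40 GB, which
# fits in the combined 48 GB of two 3090s.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # shard layers across both GPUs automatically
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

`device_map="auto"` lets the library place consecutive layers on whichever GPU has room, so no manual splitting is needed; the trade-off is that activations hop between cards during inference.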