QnA
· 6d
#AskNostr
Who’s running local AI models? I’d like to hear about your hardware specs, model choice, use cases, experience, etc.
I know you might have some insights nostr:npub1dcung444l0z6n3f4jtej882...
Gemma 3 27B on Ollama for text summaries (work). Running on an 8-core Ryzen with 32 GB RAM and a 3070. It doesn't use the GPU much, though, since the model doesn't fit in VRAM.
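A rough back-of-envelope check of why that happens (my own numbers, not from the post; the quantization and overhead figures are assumptions):

```python
# Estimate whether a 27B-parameter model fits in a 3070's 8 GB of VRAM.
params_b = 27e9        # Gemma 3 27B parameter count
bytes_per_param = 0.5  # assumes ~4-bit quantization (e.g. a Q4 GGUF)
overhead = 1.2         # rough factor for KV cache / runtime buffers (assumption)

model_gb = params_b * bytes_per_param * overhead / 1e9
vram_gb = 8            # RTX 3070
ram_gb = 32            # system RAM

print(f"model ≈ {model_gb:.1f} GB")
print(f"fits in VRAM ({vram_gb} GB): {model_gb <= vram_gb}")
print(f"fits in system RAM ({ram_gb} GB): {model_gb <= ram_gb}")
```

Even at 4-bit the weights alone exceed 8 GB, so most layers end up on the CPU and in system RAM, which is why the GPU sits mostly idle.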