QnA
@QnA
#AskNostr

Who’s running local AI models? I want to hear about your hardware specs, model choices, use cases, experience, etc.

I know you might have some insights @SirJamzAlot 👀
Zaikaboy · 5d
nostr:nprofile1qqsrn8udd9dwjyphux8saz4est2pua4hxdmw4u9w4d647uf5uu64avcpremhxue69uhkummnw3ez6ur4vgh8wetvd3hhyer9wghxuet59uq36amnwvaz7tmwdaehgu3wvf5hgcm0d9hx2u3wwdhkx6tpdshszythwden5te0dehhxarj9emkjmn99um36ffn
Max Trotter · 5d
Set up my latest local Ollama & Open WebUI on an Ubuntu desktop: 11th-gen Intel i7, 32 GB RAM, no GPU. Works at a satisfactory pace with 14B models.
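A minimal sketch of talking to a setup like this from Python, assuming Ollama's default API on localhost:11434 and a 14B model already pulled (the model name here is illustrative, not from the thread):

```python
# Minimal sketch: query a local Ollama server (default port 11434).
# Assumes Ollama is running and a 14B model has been pulled, e.g. with
# `ollama pull qwen2.5:14b` (model name is an assumption, not from the post).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "qwen2.5:14b") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Summarize why CPU-only inference is slower than GPU inference."))
```

With no GPU, Ollama runs the model entirely on the CPU; `"stream": False` keeps the example simple, though streaming is the API's default.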
rare · 5d
Got a few. Running a very lightweight one on a RasPi, one on my laptop, and one on a headless Mac mini. Mostly dolphin-llama, deepseek, and gemma. They're not the best, but they work well enough.
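For the headless Mac mini case, a sketch of checking what's available over the LAN, assuming (as in the other replies) the models are served by Ollama and the server was started with OLLAMA_HOST=0.0.0.0 so it accepts non-local connections; the hostname is hypothetical:

```python
# Sketch: list models on a remote headless Ollama box via its /api/tags
# endpoint. "mac-mini.local" is a hypothetical hostname, not from the thread.
import json
import urllib.request

def list_models(host: str = "http://mac-mini.local:11434") -> list[str]:
    with urllib.request.urlopen(f"{host}/api/tags") as resp:
        data = json.loads(resp.read())
    return [m["name"] for m in data.get("models", [])]

if __name__ == "__main__":
    for name in list_models():
        print(name)
```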
Empka · 5d
Gemma 3 27B on Ollama for text summaries (work). Running on an 8-core Ryzen with 32 GB RAM and a 3070. Doesn't use the GPU much, though, since the model doesn't fit in the 3070's 8 GB of VRAM.
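A back-of-envelope sketch of why the GPU sits mostly idle here: a 3070 has 8 GB of VRAM, and even a 4-bit quantization of a 27B model needs more than that for the weights alone (figures are rough approximations, not from the thread):

```python
# Back-of-envelope sketch: why a 27B model spills out of an 8 GB card.
# Rough rule: weight size in GB ≈ params (in billions) × bits_per_weight / 8,
# before KV cache and runtime overhead. Numbers are approximations.
def approx_weight_gb(params_b: float, bits_per_weight: float) -> float:
    return params_b * bits_per_weight / 8

for bits in (16, 8, 4):
    size = approx_weight_gb(27, bits)
    fits = "fits" if size <= 8 else "does not fit"
    print(f"27B @ {bits}-bit ≈ {size:.1f} GB -> {fits} in a 3070's 8 GB VRAM")
```

When the weights don't fit, Ollama offloads as many layers as it can to the GPU and runs the rest on the CPU; `ollama ps` shows the resulting CPU/GPU split for a loaded model.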