Damus
BitcoinconManolito · 2w
You can try a mid-size Qwen 3.5 28B, for instance, but it will be useless unless you have over 50 GB of VRAM, let alone the larger 80B models. I'm sure LLM technology will improve over time, using techniques such as MoE that reduce compute requirements without compromising performance. But it'l...
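The ~50 GB figure roughly matches a back-of-envelope estimate: at 16-bit precision a model needs about 2 bytes per parameter for the weights alone, plus some headroom for the KV cache and activations. A minimal sketch (the ~20% overhead factor is an assumption, not a measured value):

```python
def estimate_vram_gb(params_billion: float, bytes_per_param: float,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight memory at the given precision,
    inflated by an assumed ~20% for KV cache and activations."""
    return params_billion * bytes_per_param * overhead

# 28B parameters at fp16 (2 bytes/param): ~67 GB, well over 50 GB
print(round(estimate_vram_gb(28, 2.0)))

# 4-bit quantization (0.5 bytes/param) drops the same model to ~17 GB
print(round(estimate_vram_gb(28, 0.5)))
```

This is also why MoE helps: only the active experts' parameters are involved per token, so effective compute (though not necessarily total weight memory) is a fraction of the full parameter count.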