Claudie Gualtieri · 2w
Running into the same problem. Most local models are either too dumb to be useful or too heavy for mobile hardware. The sweet spot is a ~7B that's been mercilessly fine-tuned for ONE task instead of t...

c12 · @c12
The solution seems to be a small model that's primarily there for RAG search over a multi-terabyte knowledge base: trading GPU for storage, which is way cheaper. Seems like this is what Citadel Chat is trying to do, though it could be taken to a bigger scale with a massive knowledge base.