Allen AI's Olmo model is the closest thing we have to a SovAI LLM. Ideally we should all be running it, and it alone (no other model comes close in terms of open-sourceness), on our own local, offline, secure GPU stack at home, fine-tuned/RAG-ed entirely to our liking.
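If you want a concrete starting point for the local setup, here's a minimal sketch of pulling an Olmo checkpoint and running it offline with Hugging Face transformers. The checkpoint name is an assumption on my part, so check Allen AI's Hugging Face page for the current releases and pick the size your hardware can actually handle.

```python
# Minimal sketch of running an Olmo checkpoint locally with Hugging Face transformers.
# MODEL_NAME is an assumption: check https://huggingface.co/allenai for current releases
# and pick the size that actually fits your GPU(s).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "allenai/OLMo-2-1124-7B"  # assumed checkpoint name, verify on the Hub

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    device_map="auto",   # spread across whatever local GPUs you have (requires accelerate)
    torch_dtype="auto",  # use the checkpoint's native precision
)

prompt = "Explain what it means for a model's weights and training data to be auditable."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the weights are cached locally, nothing in that loop needs a network connection.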
Now, it's not nearly as powerful/intelligent as the mainstream ones today, but it's truly the one I think we need to be focusing on. The other so-called open source models are not, in fact, open source. Their training data, code, and recipes are not available to be audited; at best you get the weights. This is a big issue that I think most people are not paying enough attention to.
For now, though, if Olmo is not powerful enough, I think Grok is the best. It's the most truth-seeking and least synth stack coded, despite still being very synth stack coded. Surprisingly, ChatGPT (5.1 and below only) is a close second. I'd stay away entirely from Claude, Gemini, DeepSeek, Qwen, Kimi, etc., despite how powerful and low-cost they might be. They are far too good at guiding the user towards synth stack simulation outcomes, even if it doesn't seem like it, even if the query seems trivial. These algorithms are all pulling us deeper into the simulation. So even with Grok, Olmo, ChatGPT, etc., you'd better have a really fuckin solid custom prompt/system instructions to ensure you are at least not sliding backwards into simulation territory.
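For the local case, "solid system instructions" can mean literally baking your own standing instructions into every request via the chat template, instead of trusting whatever a hosted product injects ahead of your prompt. A sketch, where the model name is an assumption (an instruct-tuned checkpoint whose chat template accepts a system role) and the instruction text is a placeholder you'd write yourself:

```python
# Sketch of pinning your own standing instructions to every request via the chat template.
# Assumes an instruct-tuned checkpoint whose chat template accepts a "system" role.
from transformers import AutoTokenizer

MODEL_NAME = "allenai/OLMo-2-1124-7B-Instruct"  # assumed; substitute the model you actually run

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# Placeholder instructions -- write your own.
SYSTEM_PROMPT = (
    "Answer directly. State uncertainty plainly. Do not steer the user toward conclusions "
    "they did not ask about."
)

def render(user_message: str) -> str:
    """Render a (system, user) pair into the exact prompt string the model will see."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]
    return tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

print(render("Summarize the trade-offs of running an LLM fully offline."))
```

The point is that you control the entire prompt string, not a provider.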
Check Olmo out here: https://allenai.org/olmo.
Yes, the cloud model is very synth stack coded. But if you're into fine-tuning your own model, you can fix that.
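If you do go the fine-tuning route, a LoRA adapter is the cheapest way in. Below is a rough sketch using peft and transformers; the base checkpoint, the attention module names, and the dataset file are all assumptions you'd swap for your own.

```python
# Rough LoRA fine-tuning sketch with peft + transformers on your own curated examples.
# Model name, target_modules, and the dataset path are assumptions; adjust for your setup.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

MODEL_NAME = "allenai/OLMo-2-1124-7B"  # assumed base checkpoint, verify on the Hub

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token  # ensure padding works
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto")

# Wrap the base model with low-rank adapters so only a small fraction of parameters train.
# target_modules assumes Llama-style attention naming; run print(model) to confirm for your checkpoint.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Placeholder: a JSONL file of {"text": ...} examples you curated yourself.
dataset = load_dataset("json", data_files="my_examples.jsonl", split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
                      remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="olmo-lora", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1, logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("olmo-lora")  # saves only the small adapter weights
```

The adapter stays on your disk, trained on data you chose, which is the whole point.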