I have successfully run the #LLaMA LLM (from Facebook) locally on my GNU/Linux laptop using llama.cpp, and the results and performance are better than I expected. That was with the 7B model; I was also able to run the 13B model. You can expect even better results with the larger models!
It is exciting to see progress towards the possibility of a self-hosted #ChatGPT that runs locally.
https://github.com/ggerganov/llama.cpp
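For anyone curious how it can be scripted, here is a minimal sketch that drives the llama.cpp command-line binary from Python. It assumes the main binary has already been built (e.g. with make) and that a converted, quantized 7B model file exists at the path shown; the path is hypothetical and flag names may differ between versions of the project.

# Minimal sketch: call the llama.cpp CLI from Python via subprocess.
# Assumes ./main has been built and a quantized 7B model file exists
# at MODEL_PATH (hypothetical path) -- adjust paths/flags to your setup.
import subprocess

MODEL_PATH = "./models/7B/ggml-model-q4_0.bin"  # hypothetical model path

def ask_llama(prompt: str, n_predict: int = 128, threads: int = 8) -> str:
    """Run llama.cpp's main binary with a prompt and return its output."""
    result = subprocess.run(
        [
            "./main",
            "-m", MODEL_PATH,      # model file to load
            "-p", prompt,          # prompt text
            "-n", str(n_predict),  # number of tokens to predict
            "-t", str(threads),    # CPU threads to use
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(ask_llama("Building a self-hosted assistant is"))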