Damus
pluja
@pluja
I have successfully run the #LLaMA LLM (from Facebook) locally on my GNU/Linux laptop using llama.cpp, and the results and performance were better than I expected. I ran the 7B model and was also able to run the 13B model. You can expect even better results with larger models!

It is exciting to see progress towards the possibility of a self-hosted #ChatGPT that runs locally.

https://github.com/ggerganov/llama.cpp
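For anyone wanting to try this themselves, the CPU-only workflow at the time looked roughly like the following sketch. The model path, prompt, and token count are assumptions, the exact conversion/quantization steps varied between llama.cpp versions, and the original LLaMA weights have to be obtained separately:

```shell
# Clone and build llama.cpp (plain CPU build; no CUDA needed)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Assumes the 7B weights were already converted and 4-bit quantized
# into ./models/7B/ggml-model-q4_0.bin per the repo's README
./main -m ./models/7B/ggml-model-q4_0.bin \
  -p "Building a website can be done in 10 simple steps:" \
  -n 128
```

The 4-bit quantized 7B model fits in a few GB of RAM, which is what makes laptop inference practical.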
Tanner Silva · 153w
I’ve been too busy with a client project to get this far, but I’ve been very excited to try this. I was initially super bummed that this project needed CUDA cores, but the cpp project has me stoked to get it up and running, maybe sometime this month.