pluja
I have successfully run the #LLaMA LLM (from Facebook) locally on my GNU/Linux laptop using llama.cpp, and I have achieved better results and performance than I expected. It was the 7B model and I...
I’ve been too busy with the client project to get this far, but I’ve been very excited to try this. I was initially super bummed that this project needed CUDA cores, but the cpp project has me stoked to get it up and running sometime this month, maybe.
@nostrich
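
For anyone wanting to try the same thing on a CPU-only machine, here is a minimal sketch using the llama-cpp-python bindings around llama.cpp (the original post used llama.cpp directly, so this is an alternative route, not necessarily pluja's exact workflow). The model path, context size, and thread count below are placeholder assumptions, not details from the post.

```python
# Minimal CPU-only LLaMA 7B inference sketch via llama-cpp-python
# (pip install llama-cpp-python). Assumes a quantized model file is
# already on disk; the path below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/7B/ggml-model-q4_0.bin",  # placeholder path
    n_ctx=512,     # context window size in tokens
    n_threads=4,   # CPU threads; tune to your laptop's cores
)

output = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:"],   # stop before the model invents a new question
    echo=True,     # include the prompt in the returned text
)
print(output["choices"][0]["text"])
```

No CUDA is involved anywhere in this path: llama.cpp runs quantized weights entirely on the CPU, which is exactly why it works on an ordinary GNU/Linux laptop.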