Damus

Recent Notes

franzap · 5w
gpt-oss:20b is pretty good if you can run it. You would use RAG to augment the local model; I'll experiment with that soon.
Paul Sernine
#asknostr
Does anybody have experience with #Ollama and its myriad of models? I'm looking to run an #AI #chatbot locally and train it with additional data. What model(s) would you recommend? Do you have any general suggestions regarding my aim? Thanks in advance for your replies & help 🙏🏼
Jake Woodhouse · 5w
Please share what you find out. Would love to build something similar.
cyber striker · 5w
“I help people recover lost Bitcoin using ethical hacking techniques and then professionally trade those assets to help rebuild their financial position.”
0x80085 🆕 · 5w
Depends on your PC setup and what it can handle. AFAIK you can run any model with Ollama. I'd suggest trying a lightweight one like Phi and going from there. I can't help with the data part, though; that depends on what you want to achieve.
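As a minimal sketch of that suggestion, this is roughly what a first test looks like through Ollama's Python client. It assumes `pip install ollama`, a running Ollama daemon, and that a Phi model has already been pulled; the exact tag ("phi3" here) is an assumption and may differ on your install.

```python
# Minimal sketch: chat with a small local model through Ollama's Python client.
# Assumes the `ollama` package is installed, the Ollama daemon is running, and
# the tag "phi3" has been pulled (e.g. via `ollama pull phi3`).
import ollama

response = ollama.chat(
    model="phi3",  # assumed model tag; swap in whatever lightweight model you pulled
    messages=[{"role": "user", "content": "Explain in two sentences what a RAG pipeline does."}],
)
print(response["message"]["content"])
```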
Ape Mithrandir · 5w
Depends a lot on the machine specs: VRAM, RAM, CPU, GPU, etc. You can check model VRAM usage here: https://apxml.com/tools/vram-calculator. Also, people mostly run already-trained models locally, since training a model from scratch requires far too many resources. You can however use tools like OpenWeb...
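For a quick feel of why VRAM matters, here is a back-of-the-envelope estimate: weight memory is roughly parameters times bytes per parameter, plus headroom for the KV cache and runtime. The 1.2 overhead factor below is a rough assumption, not a measured value; the linked calculator gives real numbers.

```python
# Rough VRAM estimate: weights ~= parameters * bytes per parameter, plus headroom
# for KV cache and runtime overhead. The 1.2 factor is an assumption, not a measurement.
def rough_vram_gb(params_billions: float, bits_per_param: float, overhead: float = 1.2) -> float:
    weight_bytes = params_billions * 1e9 * (bits_per_param / 8)
    return weight_bytes * overhead / 1e9

# e.g. a 20B-parameter model at 4-bit quantisation vs. fp16
print(f"20B @ 4-bit: ~{rough_vram_gb(20, 4):.1f} GB")   # ~12 GB
print(f"20B @ fp16 : ~{rough_vram_gb(20, 16):.1f} GB")  # ~48 GB
```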
Ernst Haft · 5w
I don't know how experienced you are, but training a model with additional data is going to cost you time and money. Maybe a RAG system is what you are looking for. If you compare an LLM with a library, a RAG system is a bookshelf with your own documents in front of the library. After a prompt, the syst...
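A minimal sketch of that bookshelf idea, under assumptions: the `ollama` and `numpy` packages are installed, and the model tags "nomic-embed-text" and "phi3" have been pulled; the document texts and function names are purely illustrative. It embeds your own documents, retrieves the closest one to the question, and prepends it to the prompt so the local model answers from your data.

```python
# Minimal RAG sketch: embed local documents, pick the best match for a question
# by cosine similarity, and feed it to the model as context.
# Assumed setup: `pip install ollama numpy`, plus pulled tags "nomic-embed-text" and "phi3".
import ollama
import numpy as np

documents = [
    "Our support line is open Monday to Friday, 9:00-17:00 CET.",
    "Invoices are sent by email on the first working day of each month.",
]

def embed(text: str) -> np.ndarray:
    # Turn a text into an embedding vector with a local embedding model.
    return np.array(ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"])

doc_vectors = [embed(d) for d in documents]

def answer(question: str) -> str:
    q = embed(question)
    # Cosine similarity against each document; keep the best match as context.
    scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in doc_vectors]
    context = documents[int(np.argmax(scores))]
    response = ollama.chat(
        model="phi3",
        messages=[{"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"}],
    )
    return response["message"]["content"]

print(answer("When can I call support?"))
```

In practice you would split documents into chunks and store the vectors in a small vector database instead of a Python list, but the retrieve-then-prompt flow stays the same.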