TKay · 5w: But how would you run a local LLM on a phone?

ABH3PO (@ABH3PO): Very small models, good for stuff like offline translation. You'd do it with llama.cpp, probably.
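A minimal sketch of what "do it with llama.cpp" could look like on Android, assuming Termux is installed; the model file name is an illustrative example, not something named in the thread:

```shell
# Hypothetical sketch: build llama.cpp inside Termux and run a small
# quantized GGUF model fully offline on the phone's CPU.
pkg install clang cmake git

git clone https://github.com/ggerganov/llama.cpp
cmake -S llama.cpp -B build
cmake --build build --config Release

# Any small quantized model works; this file name is an example.
./build/bin/llama-cli \
    -m qwen2-0.5b-instruct-q4_k_m.gguf \
    -p "Translate to French: where is the train station?"
```

Sub-billion-parameter models quantized to 4 bits fit in well under a gigabyte of RAM, which is why they are practical for narrow tasks like translation on phone hardware.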