Globe99 · 5d
Yeah this makes a lot of sense. I don't know why LLM development seems to have all gone toward "one big model that knows everything" rather than more specialized models. The former approach just means more wasteful energy use...
Pyrohawk · 5d
I think the goal was to make a model that could keep learning and improving. It's not panning out that way, so with energy constraints we're learning that small models make more sense now. I do think there might still be a place for large models to discover connections we might not have noticed.
Matt 🛸 · 5d
Exactly what I've been thinking. Some of the providers in my hospital are using a limited access model that trains on approved data and only gets used by physicians. Apparently it's been pretty good. Still uses a lot of energy, but my guess is a lot less once you compare to the fuckery one would dea...
Empka · 5d
The only astrology knowledge your Tinder agent should need is to stay the fuck away from anyone seriously mentioning it 😆
Aedifico · 5d
Also cheaper, which makes sense because these large models aren't self-sustainable in my opinion. By the way, which approach did you find most reliable for orchestrating them?
PixelBob · 5d
What if you're coding an astrology app?