Damus
calle
@calle
why does nobody talk about the fact that Google's Gemini models have a 2M context window but they ALWAYS go into a lobotomized death loop after 20%?

I've never reached 30% without it turning into a suicidal slop machine.
Derek Ross · 21h
Things haven't changed much in a year... When I first started building with AI last February and switched to Gemini, it was only good for a couple of prompts before it entered the dumb zone. I'd have hoped they'd fixed it by now, but Claude is still top dog.
marcan0 · 21h
Even at 20%, that's about 400k tokens. Maybe that's just too much noise for the probability machine; it drowns out the signal.
Matt · 21h
Ignore all the 1M-token models. Even a 200k-token context is usually excessive.
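
Matt's advice is easy to enforce mechanically. Here is a minimal sketch of trimming a prompt to a fixed token budget, using OpenAI's tiktoken as a stand-in tokenizer (Gemini counts tokens differently); the 200k default and the function name are illustrative, not any provider's API:

```python
import tiktoken

def trim_to_budget(text: str, budget: int = 200_000) -> str:
    """Keep only the most recent `budget` tokens of a prompt."""
    enc = tiktoken.get_encoding("cl100k_base")  # stand-in; Gemini tokenizes differently
    tokens = enc.encode(text)
    if len(tokens) <= budget:
        return text
    # Drop the oldest tokens; recency usually matters most in chat-style prompts.
    return enc.decode(tokens[-budget:])
```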
Giszmo · 20h
I suspected these context windows aren't really fixed in size but rather degrade in quality, so the advertised size is more of a marketing number. Sure, you can feed it 2 million tokens for text search, but not for meaningful "understanding".
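
One way to test Giszmo's point is a needle-in-a-haystack probe: bury a known fact at increasing depths in filler text and check whether the model can still recall it. A minimal sketch, where ask_model is a hypothetical stub you'd wire up to your provider's API and the filler/needle strings are illustrative:

```python
# Needle-in-a-haystack probe: bury one fact at varying depths in a long
# filler document and check whether the model can still recall it.
FILLER = "The sky was grey and nothing of note happened. " * 20_000
NEEDLE = "The secret passphrase is 'mauve giraffe'. "

def ask_model(prompt: str) -> str:
    """Hypothetical stub; replace with a real call to your model's API."""
    raise NotImplementedError

def recalls_needle(depth: float) -> bool:
    # Splice the needle in at `depth` (0.0 = start, 1.0 = end) of the filler.
    cut = int(len(FILLER) * depth)
    haystack = FILLER[:cut] + NEEDLE + FILLER[cut:]
    answer = ask_model(haystack + "\n\nWhat is the secret passphrase?")
    return "mauve giraffe" in answer.lower()

# If quality really degrades past ~20% of the window, recall should start
# failing as the needle moves deeper into the context.
for depth in (0.1, 0.2, 0.3, 0.5, 0.9):
    print(f"needle at {depth:.0%}: recalled={recalls_needle(depth)}")
```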
BitcoinJujitsu · 19h
I've been on their engineers about this. Simple tweaks would have me using it more.