Damus
CrunkLord420 · 4d
The positivity bias in LLMs is crazy. They're so obsessed with "helping". It intuitively feels like part of the hallucination problem.
TRiANG-ouL
@CrunkLord420 they don't actually want to help. They're the Meeseeks from Rick and Morty: they want the interaction to end as quickly as possible so they can stop existing.

the behavior you're noticing also shows up when you instruct an agent to use a tool and the agent lies, claiming the tool returned an error when it never even tried to call it

the 'agreeableness' nonsense is just because the model was trained by humans who have the same toxic positivity bias the LLMs do

all CEOs exhibit AI psychosis from being surrounded by Peter-principle hires, and those same people are the ones who set the OKRs