fiatjaf
· 153w
So, these language models, when they are being trained, do they need someone telling them what they got wrong and what they got right? How do they know?
They have no concept of correctness or reason. It's pattern matching that tricks our System 2 into pattern-matching it as reason.
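
For context on the training question: during pretraining nobody grades each answer; the "right answer" is simply the next token of the training text itself (self-supervised learning), and the loss measures how much probability the model put on that token. Human feedback only enters later, in fine-tuning stages like RLHF. Here's a minimal sketch of that next-token loss with toy tensors; the token values and sizes are illustrative assumptions, and the random logits stand in for a real transformer's output:

```python
import torch
import torch.nn.functional as F

# Toy setup: a tiny vocabulary and a short token sequence from the "corpus".
# No human labels anything; the text itself supplies the targets.
vocab_size = 8
tokens = torch.tensor([3, 1, 4, 1, 5])  # illustrative token ids

# Stand-in "model" output: logits predicting the next token at each position.
# In a real LLM these would come from a transformer.
logits = torch.randn(len(tokens) - 1, vocab_size, requires_grad=True)

# Targets are the same sequence shifted by one: each position must predict
# the token that actually came next in the training text.
targets = tokens[1:]

# Cross-entropy loss: "wrong" just means low probability on the token that
# followed; backprop nudges the model toward reproducing the corpus.
loss = F.cross_entropy(logits, targets)
loss.backward()
print(loss.item())
```

So in that sense the reply holds: the training signal never encodes "true" or "reasonable", only "statistically likely to come next in text like this".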