On Wed, Jun 19, 2024 at 1:50 PM Jason Resch <[email protected]> wrote:
> Thank you! Feel welcome to use it. :-)

I certainly will!

John K Clark    See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>

> On Wed, Jun 19, 2024, 12:48 PM John Clark <[email protected]> wrote:
>
>> On Wed, Jun 19, 2024 at 12:33 PM Jason Resch <[email protected]> wrote:
>>
>>> Just the other day (on another list), I proposed that the problem of
>>> "hallucination" is not really a bug; rather, it is what we have designed
>>> LLMs to do, given the training regime we subject them to. We train these
>>> models to produce the most probable extrapolations of text given some
>>> sample. Now imagine you were placed in a box and rewarded or punished
>>> according to how accurately you guessed the next character in a sequence.
>>>
>>> You are given the following sentence and asked to guess the next
>>> character:
>>> "Albert Einstein was born on March, "
>>>
>>> True, you could break the fourth wall and protest, "But I don't know!
>>> Let me out of here!" But that would only lead to your certain punishment.
>>> Or you could take a guess: there's a decent chance the first digit is a
>>> 1 or a 2, and guessing one of those gives you at least a 1/3 chance of
>>> being right.
>>>
>>> This is how we have trained the current crop of LLMs. We don't reward
>>> them for telling us they don't know; we reward them for making the most
>>> accurate educated guesses they can.
>>
>> Damn, I wish I'd said that! Very clever.
>
> Thank you! Feel welcome to use it. :-)
>
> Jason
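As a quick check of the guess in Jason's example, here is a minimal Python
sketch. It assumes the day of the month is uniform over 1..31 (an assumption;
the example doesn't specify a distribution). It shows why a guesser forced to
emit a character does best by picking the most likely first digit, while
saying "I don't know" earns nothing, which is exactly the incentive the
training regime creates.

    from collections import Counter

    # Toy model of the guessing game above. Assumption: the day of the
    # month is uniform over 1..31 (the email doesn't give a distribution).
    days = range(1, 32)
    first_chars = Counter(str(d)[0] for d in days)
    total = sum(first_chars.values())

    # Probability of each possible next character after "March, ".
    for ch, n in sorted(first_chars.items(), key=lambda kv: -kv[1]):
        print(f"P(next char = {ch!r}) = {n}/{total} = {n / total:.3f}")

    # Expected reward: 1 for a correct guess, 0 otherwise. Abstaining
    # ("I don't know") is never rewarded, so it scores 0 by construction.
    best = max(first_chars, key=first_chars.get)
    print(f"guessing {best!r}: expected reward = {first_chars[best] / total:.3f}")
    print("abstaining: expected reward = 0.000")

Under that uniform assumption, '1' and '2' each come out at 11/31 (about
0.355), which matches the "at least a 1/3 chance" in the argument.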

