On 12/7/2025 12:35 PM, John Clark wrote:
On Sun, Dec 7, 2025 at 3:20 PM Brent Meeker <[email protected]> wrote:
>/Why? Do you think there's a lot more to be sucked up? /
*No, but I think there are a lot more ways to think about the facts
that we already know, and even more importantly, I think there are a
lot more ways to think about thinking and to figure out ways of
learning faster. *
*People have been saying for at least the last two years that
synthetic data doesn't work, and that because we're running out of
real data AI improvement is about to hit a ceiling; but that hasn't
happened, because high-quality synthetic data can work if it's used
correctly. For example, in the process called "AI distillation" a
very large AI model supplies synthetic data to a much smaller AI
model, asks it a few billion questions about that data, and tells
it when its answers are correct and when they are not. After a
month or two the small model becomes much more efficient and is
nearly as capable as the far larger one, sometimes even more so;
it achieves this not by thinking more but by thinking smarter.
After that the small model is scaled up and given access to much
more computing hardware, and then the process is repeated: it
starts teaching an even smaller model. *
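The teacher-student training loop described above can be sketched in a few lines. This is a minimal toy illustration, not any lab's actual pipeline: it assumes a "teacher" that is just a fixed random linear classifier and a "student" of the same shape trained to match the teacher's softened output distribution (temperature-scaled distillation); the model sizes, temperature, and learning rate are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

n_features, n_classes, T = 16, 4, 2.0

# Toy "teacher": a fixed random linear map producing class logits.
W_teacher = rng.normal(size=(n_features, n_classes))

# "Student": same shape here for simplicity; in real distillation it
# would have far fewer parameters. It starts knowing nothing.
W_student = np.zeros((n_features, n_classes))

# The teacher answers "questions" (unlabeled inputs) with soft labels --
# this is the synthetic training data the student learns from.
X = rng.normal(size=(512, n_features))
P_teacher = softmax(X @ W_teacher, T)

lr = 0.5
for _ in range(500):
    P_student = softmax(X @ W_student, T)
    # Gradient of cross-entropy between student output and teacher
    # soft labels; the student is nudged toward the teacher's answers.
    grad = X.T @ (P_student - P_teacher) / len(X)
    W_student -= lr * grad

# Fraction of inputs where the student's top answer matches the teacher's.
agreement = np.mean(
    softmax(X @ W_student, T).argmax(axis=1) == P_teacher.argmax(axis=1)
)
```

After training, `agreement` is close to 1.0: the student has absorbed the teacher's behavior purely from the teacher's own outputs, which is the core of the distillation idea the paragraph describes.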
/> This strikes me as a positive-feedback hallucination amplifier
/
*Then why does it work so well?*
I don't know. Do you know how it avoids amplifying hallucinations? Do
you even know how well it works?
Brent
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion visit
https://groups.google.com/d/msgid/everything-list/7852a377-657a-4840-b56c-230bf24733df%40gmail.com.