On Sun, Dec 7, 2025 at 3:20 PM Brent Meeker <[email protected]> wrote:


>> > *Why?  Do you think there's a lot more to be sucked up? *
>>
>
> *No, but I think there are a lot more ways to think about the facts that we
> already know, and even more importantly I think there are a lot more ways to
> think about thinking and to figure out ways of learning faster. *
>
> *People have been saying for at least the last two years that synthetic
> data doesn't work and that we're running out of real data, so AI improvement
> is about to hit a ceiling; but that hasn't happened, because high-quality
> synthetic data can work if it is used correctly. For example, in the process
> called "AI distillation" a very large AI model supplies synthetic data to a
> much smaller AI model, asks it a few billion questions about that data, and
> tells it when its answer was correct and when it was not. After a month or
> two the small model becomes much more efficient and is nearly as capable as
> the far larger one, sometimes even more so; it manages this not by thinking
> more but by thinking smarter. After that the small model is scaled up and
> given access to much more computing hardware, and then the process is
> repeated: it starts teaching a much smaller model (a rough sketch of this
> loop is below). *
>
>
> *> This strikes me as a positive-feedback hallucination amplifier*
>

*Then why does it work so well? *
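For readers who want the mechanics, here is a minimal sketch of the
teacher-student distillation loop described above, assuming a PyTorch-style
setup. The model sizes, temperature, step count, and random inputs are all
illustrative placeholders, not details from any real distillation run:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Illustrative stand-ins: a large "teacher" and a small "student".
    # Real distillation runs use far larger models and billions of examples.
    teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))
    student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    T = 2.0  # softening temperature; a common but arbitrary choice

    for step in range(1000):  # stands in for the months-long training loop
        x = torch.randn(32, 128)  # placeholder for teacher-generated data

        with torch.no_grad():
            teacher_logits = teacher(x)  # the teacher "answers" each question

        student_logits = student(x)  # the student attempts the same questions

        # The student is scored against the teacher's soft answers:
        # KL divergence between temperature-softened distributions,
        # the standard distillation loss (Hinton et al., 2015).
        loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

The key design choice is that the student is trained against the teacher's
softened probability distribution rather than hard right/wrong labels, which
is what lets it absorb the larger model's judgment and not just its answers.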

*John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>*


