On 12/7/2025 4:52 AM, John Clark wrote:
On Sat, Dec 6, 2025 at 7:18 PM Brent Meeker <[email protected]> wrote:
/>>> These data centers have been sucking up and
processing data accumulated over 70 years or more and
condensing it into neural nets. Isn't there some point of
diminishing returns in this process?/
*>>> Companies are making a multi-trillion dollar bet that
there is not a point of diminishing returns. And I think
that's probably a pretty good bet, *
>/Why? Do you think there's a lot more to be sucked up? /
*No, but I think there's a lot more ways to think about the facts that
we already know, and even more importantly I think there are a lot more
ways to think about thinking and to figure out ways of learning faster. *
*People have been saying for at least the last two years that
synthetic data doesn't work and we're running out of real data, so AI
improvement is about to hit a ceiling; but that hasn't happened,
because high-quality synthetic data can work if used correctly. For
example, in the process called "AI distillation" a very large AI model
supplies synthetic data to a much smaller AI model, asks it a few
billion questions about that data, and tells it when it answered
correctly and when it did not. After a month or two the small model
becomes much more efficient and is nearly as capable as the far larger
one, sometimes even more so; it achieves this not by thinking more but
by thinking smarter. After that the small model is scaled up and given
access to much more computing hardware, and then the process is
repeated and it starts teaching a much smaller model. *
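
A minimal sketch of that distillation loop, in PyTorch. The model sizes,
temperature, and randomly generated "questions" below are illustrative
assumptions, not details of any real system; the point is only that the
small model is graded against the large model's answer distribution
rather than against fresh real-world data.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical "teacher" (large) and "student" (small) models; any pair
    # of networks with matching output size would do for this sketch.
    teacher = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10))
    student = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 10))

    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    T = 2.0  # temperature: softens the teacher's answers so the student sees more than right/wrong

    for step in range(1000):                 # stands in for "a few billion questions"
        x = torch.randn(32, 512)             # synthetic inputs, no new real-world data
        with torch.no_grad():
            teacher_logits = teacher(x)      # the big model's answers are the training signal
        student_logits = student(x)
        # KL divergence between softened distributions: the student is told how
        # close its answer was to the teacher's, not merely whether it was correct.
        loss = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                        F.softmax(teacher_logits / T, dim=-1),
                        reduction="batchmean") * (T * T)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

After such a run the student needs only a fraction of the teacher's
hardware; scaling it up and having it teach a still smaller model, as
described above, repeats the same loop with the roles shifted.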
This strikes me as a positive-feedback hallucination amplifier.
Brent