Sabine is wondering about reported failures of the new generations of
LLMs to scale the way their developers expected.
https://backreaction.blogspot.com/2024/11/ai-scaling-hits-wall-rumours-say-how.html

On one slide she essentially draws the typical picture of an emergent level
of organization arising from an underlying reality and asserts, as every
physicist knows, that you cannot deduce the underlying reality from the
emergent level.  Ergo, if you try to deduce physical reality from language,
pictures, and videos, you will inevitably hit a wall, because it cannot be
done.

So she's actually grinding two axes at once: one is AI enthusiasts who
expect LLMs to discover physics, and the other is AI enthusiasts who
foresee no end to the improvement of LLMs as they throw more data and
compute effort at them.

But, of course, the usual failure of deduction runs in the opposite
direction: you can't predict the emergent level from the rules of the
underlying level.  Do LLMs believe in particle colliders?  Or do they
think we hallucinated them?

-- rec --
.- .-.. .-.. / ..-. --- --- - . .-. ... / .- .-. . / .-- .-. --- -. --. / ... 
--- -- . / .- .-. . / ..- ... . ..-. ..- .-..
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/