Anyone who has looked at job ads for these companies can see that they are 
putting extensive effort into reinforcement learning and developing focused 
training. It's not as if one is limited to training on material from the 
internet (or even copyrighted physics textbooks). They can teach LLMs how 
programming/physics/whatever works by giving them example programs and then 
running them. (This isn't the same thing as using an LLM to generate more 
data.) The robot-taxi companies do extensive training in simulated physics, 
for example.
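A toy sketch of what such execution-based feedback might look like: run a 
model-generated program against known input/output pairs and use the pass 
rate as a reward signal. (All names and the reward scheme here are 
illustrative assumptions, not any lab's actual pipeline.)

```python
# Toy sketch: score a candidate program by actually running it against
# known input/output pairs, as an RL-style reward signal.
# Illustrative only -- not a real training pipeline.

def execution_reward(program_src: str, test_cases: list) -> float:
    """Run a candidate program; return the fraction of tests it passes."""
    namespace = {}
    try:
        exec(program_src, namespace)   # compile and load the candidate
        solve = namespace["solve"]     # convention: candidate defines solve()
    except Exception:
        return 0.0                     # unrunnable code earns zero reward
    passed = 0
    for arg, expected in test_cases:
        try:
            if solve(arg) == expected:
                passed += 1
        except Exception:
            pass                       # runtime errors count as failures
    return passed / len(test_cases)

# A "model-generated" candidate and some held-out tests:
candidate = "def solve(n):\n    return n * n\n"
tests = [(2, 4), (3, 9), (5, 25)]
print(execution_reward(candidate, tests))  # 1.0
```

The point is that the reward comes from executing the code, not from 
matching internet text, so the signal exists even for problems with no 
written solutions.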

In terms of copyrighted material, I see the Atlantic is providing their archives: 

https://www.theatlantic.com/press-releases/archive/2024/05/atlantic-product-content-partnership-openai/678529/
 



From: Friam <[email protected]> on behalf of Roger Critchlow 
<[email protected]>
Date: Sunday, November 17, 2024 at 8:46 AM
To: The Friday Morning Applied Complexity Coffee Group <[email protected]>
Subject: [FRIAM] deducing underlying realities from emergent realities 

Sabine is wondering about reported failures of the new generations of LLMs to 
scale the way their developers expected. 


https://backreaction.blogspot.com/2024/11/ai-scaling-hits-wall-rumours-say-how.html
 
 



On one slide she essentially draws the typical picture of an emergent level of 
organization arising from an underlying reality and asserts, as every physicist 
knows, that you cannot deduce the underlying reality from the emergent level. 
Ergo, if you try to deduce physical reality from language, pictures, and 
videos, you will inevitably hit a wall, because it cannot be done. 



So she's actually grinding two axes at once: one is AI enthusiasts who expect 
LLMs to discover physics, and the other is AI enthusiasts who foresee no end 
to the improvement of LLMs as they throw more data and compute at them. 



But, of course, the usual failure of deduction runs in the opposite direction: 
you can't predict the emergent level from the rules of the underlying level. Do 
LLMs believe in particle colliders? Or do they think we hallucinated them? 



-- rec -- 




.- .-.. .-.. / ..-. --- --- - . .-. ... / .- .-. . / .-- .-. --- -. --. / ... 
--- -- . / .- .-. . / ..- ... . ..-. ..- .-..
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
