Alex,

The only relevant item in that reference is a publication that is cited before 
the paywall:  https://arxiv.org/pdf/2309.06979.pdf

What they prove is that you can train a system of LLMs to simulate a Turing 
machine.  But that proves nothing.  Turing completeness is a very weak 
property:  almost every AI system designed in the past 60 years can be 
"trained" to simulate a Turing machine, and that ability says nothing about 
the kind of thinking the system can do.
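
To see how little that claim requires, here is a toy sketch (in Python; the 
rule table is a made-up example of mine, not one from the paper) of a complete 
Turing machine simulator.  Any system that can look up a rule table and move a 
head back and forth can do the same:

    # A minimal Turing machine simulator.  The rule table is a hypothetical
    # example (a unary incrementer), not taken from the cited paper.
    # Each rule maps (state, symbol) -> (new state, symbol to write, head move).
    RULES = {
        ("scan", "1"): ("scan", "1", +1),   # skip over the existing 1s
        ("scan", "_"): ("halt", "1", 0),    # write a 1 at the first blank
    }

    def run(tape, state="scan", head=0, max_steps=1000):
        cells = dict(enumerate(tape))       # sparse tape; blank cells are "_"
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, "_")
            state, write, move = RULES[(state, symbol)]
            cells[head] = write
            head += move
        return "".join(cells[i] for i in sorted(cells))

    print(run("111"))   # prints "1111": the machine adds 1 to a unary number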

Every LLM that is trained on natural language data is limited to the kind of 
"thinking" that is done in natural languages.  As I pointed out in Section6.pdf 
(and many other publications), NL thinking is subject to all the ambiguities 
and limitations of NL speaking.  In human communication, NLs must be 
supplemented by context, shared background knowledge, and gestures that 
indicate or point to non-linguistic information.

The great leap of science by the Egyptians, the builders of Stonehenge, the 
Babylonians, Chinese, Indians, Greeks, Mayans, etc., was to increase the 
precision and accuracy of their thinking by going beyond what can be stated in 
ordinary languages.  And guess what their magic happens to be?  It's DIAGRAMS!

Translating thoughts from diagrams to words is a great leap in communication.  
But it cannot replace the precision and generality of the original thinking 
expressed in the diagrams themselves.

As I said, you cannot design the great architectures of ancient times, the 
complex machinery of today, or any of the great scientific innovations of the 
past 500 years without geometrical diagrams that are far more complex than 
anything you can state in humanly readable natural language.

I admit that it is possible to translate any geometrical design or any bit 
pattern in a digital computer into a specification that uses the words and 
syntax of a natural language.  But what you get is an immense amount of 
verbiage that no human could read and understand.
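
To make the blowup concrete, here is a toy sketch (again in Python; the 
phrasing scheme is an arbitrary choice of mine) that spells out a bit pattern 
as an English specification:

    # Toy illustration:  translate a bit pattern into an English
    # "specification".  The phrasing is arbitrary; any such scheme
    # produces a similar blowup in length.
    def describe_bits(bits):
        sentences = [
            f"Bit number {i} is {'one' if b == '1' else 'zero'}."
            for i, b in enumerate(bits)
        ]
        return " ".join(sentences)

    pattern = "10110100" * 4                 # a 32-bit pattern
    spec = describe_bits(pattern)
    print(spec[:80] + "...")
    print(f"{len(pattern)} bits became {len(spec)} characters of English.")

Even at this tiny scale, 32 bits become several hundred characters of prose.  
A design with millions of bits would be hopelessly unreadable.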

That is the most important message that we can get across in the forthcoming 
mini-summit.  LLMs trained on NL input cannot go beyond NL thinking:  they 
cannot do any thinking that goes beyond thoughts expressible in NLs.  To test 
that statement, show somebody (anybody you know) a picture, have them describe 
it, have a third person draw or explain what they heard, and have a fourth 
person compare the original to the result.  (By the way, my previous sentence 
would be much clearer if I had included a drawing.)

John

----------------------------------------
From: "Alex Shkotin" <alex.shko...@gmail.com>

Hi Andrea,

The topic you touched on is so hot that there should already be plenty of 
overviews of it.  For me, one source is the Medium portal.  Unfortunately, it 
is a bit paywalled :-(

Have a look at the newest one [1].

Alex

[1] 
https://medium.com/@paul.k.pallaghy/llms-like-gpt-do-understand-agi-implications-dc54f4f86494