Alex,

Mihai Nadin is asking very important questions.  Perception and action are 
fundamental for every kind of thinking.  When you perceive something, that sets 
the stage for anticipating action.  The anticipation stimulates the thinking 
that leads to the action.  I have emphasized the methods that Peirce developed 
in detail, but I recognize that anticipation is an important piece of the 
puzzle.  LLMs, by themselves, don't contribute anything useful to those issues, 
but they can be important for communication.  That's what they were designed 
for:  machine translation among languages, natural or artificial.

Alex> my main topic is How to represent in a computer a 3D picture of a real 
object with the same level of detail as we see it.

Short answer:  Impossible with LLMs, but methods of virtual reality are 
developing useful approximations.
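
As a rough sketch of what those approximations look like (an illustration of 
standard graphics practice only, not a recipe; all names and numbers here are 
hypothetical), a continuous surface is discretized into a triangle mesh -- a 
finite list of vertex coordinates plus triangles that index into them.  In 
Python:

# Minimal sketch: a triangle mesh as a discrete approximation of a
# continuous 3D surface.  Illustrative only; names are hypothetical.
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class TriangleMesh:
    vertices: List[Vec3]                    # discrete samples of the surface
    triangles: List[Tuple[int, int, int]]   # each triple indexes into vertices

    def detail(self) -> str:
        # The level of detail is bounded by the number of samples: more
        # vertices approximate the object better, but never recover the
        # continuum that perception gives us.
        return f"{len(self.vertices)} vertices, {len(self.triangles)} faces"

# A crude tetrahedron standing in for a "real object":
mesh = TriangleMesh(
    vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],
    triangles=[(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)],
)
print(mesh.detail())    # -> 4 vertices, 4 faces

No matter how many triangles are added, the representation remains a discrete 
approximation of the continuous object.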

Next question:  How do humans and other animals process the continuous imagery 
they perceive, decide what to do, and do it?  And if there is some reason to 
communicate with other animals, friendly or not, how do they decide to activate 
their communication methods?  On this latter point, LLMs promise to make 
important contributions.

Language follows the heavy-duty thinking.  Its focus is on communication.  But 
it's impossible to understand what and how language communicates without 
starting at the beginning and following the many steps before language gets 
involved in the process.

I emphasized diagrams as an important intermediate stage.  The first step from 
imagery to diagrams to language is breaking up the continuum of perception and 
action into multiple significant image fragments and their interrelationships.

Those fragments, which Peirce called hypoicons, retain a great deal of the 
continuity.  You now have a diagram that links continuous parts to one another 
in two different ways:  (1) geometrical positions in the original larger image, 
and (2) symbolic relations that identify and relate those fragments.
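
As a toy sketch of such a diagram (my own illustration, not Peirce's notation 
and not any existing system), each fragment keeps its raster content and its 
geometric position, while symbolic relations link fragments to one another.  
In Python:

# Toy sketch of a diagram of image fragments ("hypoicons").
# Each fragment is linked (1) geometrically, by its position in the
# original image, and (2) symbolically, by labeled relations.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Fragment:
    label: str                       # symbolic name, e.g. "cat"
    bbox: Tuple[int, int, int, int]  # geometric position: x, y, width, height
    pixels: object = None            # the continuous (raster) content retained

@dataclass
class Diagram:
    fragments: Dict[str, Fragment] = field(default_factory=dict)
    relations: List[Tuple[str, str, str]] = field(default_factory=list)

    def relate(self, rel: str, a: str, b: str) -> None:
        self.relations.append((rel, a, b))   # symbolic relation (rel, from, to)

d = Diagram()
d.fragments["f1"] = Fragment("cat", bbox=(40, 60, 120, 90))
d.fragments["f2"] = Fragment("mat", bbox=(20, 140, 200, 40))
d.relate("on", "f1", "f2")                   # "the cat is on the mat"

The bounding boxes carry the first, geometrical kind of linkage; the relation 
triples carry the second, symbolic kind.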

This analysis continues step by step to replace continuous parts with symbols 
that name them or describe them with discrete detail.  But many interactions 
and operations can take place at that early stage.  When you touch something 
hot, you don't have to identify it before you jump away from it.

Some people say that they never think in images.  That is because they have no 
idea what goes on in their brains.  A huge amount of the computation on 
perception and action takes place in the cerebellum, which contains over 4 
times as many neurons as the cerebral cortex.  In effect, the cerebellum is the 
Graphics Processing Unit (GPU) that does the heavy-duty computation.

Nothing in the cerebellum is conscious, but all its computations process that 
continuum of raw sensations from the senses and the huge number of control 
signals that go to the muscles.  That is an immense amount of computation and 
INTELLIGENCE that takes place before language even begins to play a role in 
what people call conscious thought.

The final diagrams, in which all the raw imagery has been replaced with 
discrete symbols on the nodes and links, are the last stage before language -- 
in fact, language is nothing more than a linearized diagram designed for 
translation to linearized speech.
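
As a toy illustration of that claim (a sketch of my own, not a linguistic 
theory or an existing algorithm), relations among named fragments can be 
flattened into a word sequence by walking the links in a fixed order:

# Toy sketch: "language as a linearized diagram".  A fixed traversal of
# the relation links turns a small graph into a word sequence that could
# then be handed to speech.  Illustrative only.
relations = [("cat", "on", "mat"), ("mat", "near", "door")]

def linearize(rels):
    words = []
    for subj, rel, obj in rels:        # traversal order becomes word order
        words += ["the", subj, "is", rel, "the", obj, "and"]
    return " ".join(words[:-1]) + "."  # drop the trailing "and"

print(linearize(relations))
# -> the cat is on the mat and the mat is near the door.

The traversal order becomes the word order, and speech then linearizes the 
result in time.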

The overwhelming majority of our actions bypass language interpretation and 
communication.  That's why people get in trouble when they're walking or 
driving while talking on a cell phone.  While their attention is focused on 
talking, the rest of their body is on autopilot.

All the heavy-duty intelligence occurs long before language is involved.  
Language reports what we already thought.  It is not the primary source of 
thought.  However, language that we hear or read does interact with all the 
imagery (AKA virtual reality) in the brain.  The best intelligence integrates 
all aspects of neural processing.

But language that does not involve the deeper mechanisms is superficial.  
That's why LLMs are often very superficial.  The only deeper thought they 
produce is plagiarized from something that some human thought and wrote.

And by the way, I recommend the writings at 
https://www.nadin.ws/wp-content/uploads/2012/06/edit_prolegomena.pdf

They're compatible with what I wrote about Peirce, but I believe that Peirce's 
analyses of related issues went deeper into the complex interactions.  Those 
ideas about anticipatory systems are compatible with and supplementary to 
Peirce's writings, which I believe are essential for relating the complexities 
of intelligence to the latest and greatest research in AI today.

John

----------------------------------------
From: "Alex Shkotin" <alex.shko...@gmail.com>

Dear and respected Mihai Nadin,

I look at my description as a problem statement. Your email means that you will 
not take part in the discussion of this problem. I'm truly sorry.

Best wishes,

Alexander Shkotin

Thu, Nov 16, 2023 at 00:32, Nadin, Mihai <na...@utdallas.edu>:

Dear and respected Alex Shkotin,
Dear and respected colleagues,

- YOU wrote:

my main topic is How to represent in a computer a 3D picture of a real object 
with the same level of detail as we see it.

Let us be clear: the semiotics of representation provides knowledge about the 
subject. The topic you describe (your words) is in this sense a false subject. 
May I invite your attention to 
https://www.nadin.ws/wp-content/uploads/2012/06/edit_prolegomena.pdf
Representations are by their nature incomplete. They are of the nature of 
induction.
Visual perception is informed by what we see, but also by what we think, by 
previous experiences.

- Mathematics: I brought to your attention (long ago) the impressive work of 
I.M. Gel’fand. Read his work—the limits of mathematical representations (and on 
operations of such representations) are discussed in some detail.
- Mathematics and logic leave enough room for diagrammatic thinking as a form 
of logical activity. C.S. Peirce (whom John Sowa often refers to) also deserves 
your time. Read his work on diagrams. Mathematical thinking is not 
reducible to logical thinking (in diagrams or not). The so-called natural 
language (of double articulation) is more powerful than the language of 
mathematics—it allows for inferences in the domain of ambiguity. It is less 
precise, but more expressive.
- After all ontology engineering is nothing else—HUGE SUBJECT—but the attempt 
to provide machines, working on a 2 letter alphabet under the guidance of 
Boolean logic, with operational representations of language descriptions of 
reality.

Best wishes.

Mihai Nadin