On Sun, Feb 8, 2026 at 10:30 AM Stefano Ticozzi <[email protected]>
wrote:

> The article you linked here appeared to refer to a convergence toward a
> Platonic concept of the Idea; it therefore seemed relevant to recall that
> Platonic Ideas have been extensively demonstrated to be “false” by science.
>

No. You can't use a tape measure to prove that a poem is "false". Science
deals with what you can see, hear, feel, taste, and smell; Plato was dealing
with the metaphysical, the underlying nature of being. However, far from
disproving Plato, in the 20th century Quantum Mechanics actually gave some
support to his ideas. In Plato's Allegory of the Cave we can only see
the "shadows" of the fundamental underlying reality, and in a similar way
modern physics says we can only observe reality through a probability (not
a certainty) obtained from the Quantum Wavefunction.

> human language has grown and developed around images, driven almost
> exclusively by the need to emulate the sense of sight.
>

We may not be able to directly observe fundamental underlying reality, but
we are certainly affected by it, and over the eons human language has been
optimized to maximize the probability that one's genes get into the next
generation. So although words are not the fundamental reality, they must be
congruent with it. That has been known for a long time, but very recently AI
has taught us that the connection is much deeper and far more subtle than
previously suspected.

Just a few years ago many people (including me) were saying that words
were not enough and that for a machine to be truly intelligent it would
need a body, or at least sense organs that could interact with the real
physical world. But we now know that is untrue. It is still not entirely
clear, at least not to me, exactly how it is possible for words alone to
produce intelligence, but it is an undeniable fact that somehow they do.
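(Editorial aside: the vector-cluster comparison described in the Quanta article quoted further down the thread can be illustrated with a toy mutual nearest-neighbor alignment score. The sketch below is my own illustration on random stand-in data, not the researchers' actual code or metric; every array, name, and number in it is invented.)

```python
import numpy as np

def knn_indices(X, k):
    """Indices of each row's k nearest neighbors (self excluded)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    return np.argsort(d, axis=1)[:, :k]

def mutual_knn_alignment(A, B, k=5):
    """Average overlap of k-nearest-neighbor sets computed in two
    different representations of the same items (rows aligned)."""
    ia, ib = knn_indices(A, k), knn_indices(B, k)
    overlaps = [len(set(ia[i]) & set(ib[i])) / k for i in range(len(A))]
    return float(np.mean(overlaps))

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 8))             # stand-in "vision" embeddings
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
B = A @ Q                                # same geometry, rotated basis
C = rng.normal(size=(60, 8))             # unrelated embeddings

print(mutual_knn_alignment(A, B))        # high: same neighborhood structure
print(mutual_knn_alignment(A, C))        # low: unrelated structure
```

The point of the toy: two representations can use completely different coordinates (here, a rotated basis) and still carve up the items into the same neighborhoods, which is the sense in which a language model and a vision model can be said to "align".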

John K Clark    See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>





>
> On Sun, Feb 8, 2026 at 1:13 PM John Clark <[email protected]> wrote:
>
>> On Sat, Feb 7, 2026 at 12:14 PM Stefano Ticozzi <
>> [email protected]> wrote:
>>
>> > Scientific thought has long since moved beyond Platonism,
>>>
>>
>> Philosophical thought perhaps, but scientific thought never embraced
>> Platonism, because the most famous of the ancient Greeks were good
>> philosophers but lousy scientists. Neither Socrates, Plato, nor Aristotle
>> used the Scientific Method. Aristotle wrote that women had fewer teeth than
>> men; it's known that he was married, twice in fact, yet he never thought of
>> just looking into his wife's mouth and counting. Today, thanks to AI, for
>> the first time some very abstract philosophical ideas can actually be
>> tested scientifically.
>>
>> > 1. Ideas do not exist independently of the human mind. Rather, they
>>> are constructs we develop to optimize and structure our thinking.
>>>
>>
>> True but irrelevant.
>>
>>
>>> > 2. Ideas are neither fixed, immutable, nor perfect; they evolve over
>>> time, as does the world in which we live—in a Darwinian sense. For
>>> instance, the concept of a sheep held by a human prior to the agricultural
>>> era would have differed significantly from that held by a modern
>>> individual.
>>>
>>
>> The meanings of words and of groups of words evolve over the eons in
>> fundamental ways, but camera pictures do not. And yet minds educated by
>> those two very different things become more similar as they become smarter.
>> That is a surprising revelation that has, I think, interesting
>> implications.
>>
>> > In my view, the convergence of AI “ideas” (i.e., language and visual
>>> models) is more plausibly explained by a process of continuous
>>> self-optimization, performed by systems that are trained on datasets and
>>> information which are, at least to a considerable extent, shared across
>>> models.
>>>
>>
>> Do you claim that the very recent discovery that minds trained exclusively
>> on words and minds trained exclusively on pictures behave similarly, and
>> that the smarter those two minds become the greater the similarity, has no
>> important philosophical ramifications?
>>
>>
>>
>>
>>>
>>> On Sat, Feb 7, 2026 at 12:57 PM John Clark via extropy-chat <
>>> [email protected]> wrote:
>>>
>>>> Why do the language model and the vision model align? Because they’re
>>>> both shadows of the same world
>>>> <https://www.quantamagazine.org/distinct-ai-models-seem-to-converge-on-how-they-encode-reality-20260107/?mc_cid=b288d90ab2&mc_eid=1b0caa9e8c>
>>>>
>>>> The following quote is from the above:
>>>>
>>>> "More powerful AI models seem to have more similarities in their
>>>> representations than weaker ones. Successful AI models are all alike, and
>>>> every unsuccessful model is unsuccessful in its own particular way. [...] He
>>>> would feed the pictures into the vision models and the captions into the
>>>> language models, and then compare clusters of vectors in the two types. He
>>>> observed a steady increase in representational similarity as models became
>>>> more powerful. It was exactly what the Platonic representation hypothesis
>>>> predicted."
>>>>
>>>>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv2z4g-bcnkYH0U9E3qnQAfa7g3UjDEKCSxHn_2NFiX66w%40mail.gmail.com.
