*Ever since language models started to get really good, most people have
thought that, since they had nothing to work on but words, they might be
useful but couldn't form an interior mental model of the real world
that could aid them in reasoning. Yet, to the surprise of even those who
wrote the language models, they seem to be doing exactly that. Surprisingly,
large language models and text-to-image programs converge toward the same
unified platonic representation: researchers see startling similarities
between the representations of vision models and language models. And the
better the language and vision programs are, the more similar the vectors
they use to represent things become. This discovery could have not only
profound practical consequences but also philosophical ones. Perhaps the
reason the language models and the vision models align is because they're
both cave shadows of the same platonic world.*

*Distinct AI Models Seem To Converge On How They Encode Reality*
<https://www.quantamagazine.org/distinct-ai-models-seem-to-converge-on-how-they-encode-reality-20260107/?mc_cid=4af663cb22&mc_eid=1b0caa9e8c>
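
[Illustrative aside, not from the article: one standard way to quantify how
similar two models' representations are is linear Centered Kernel Alignment
(CKA). The sketch below uses synthetic embeddings; the data, the function name
linear_cka, and the choice of CKA itself are assumptions for illustration, and
may differ from the alignment metric used in the research the article reports
on.]

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two embedding matrices.

    X: (n, d1) embeddings of n items from model A (e.g. a language model).
    Y: (n, d2) embeddings of the same n items from model B (e.g. a vision model).
    Returns a value in [0, 1]; higher means more similar representational geometry.
    """
    X = X - X.mean(axis=0, keepdims=True)  # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return float(cross / (norm_x * norm_y))

# Toy data standing in for real embeddings: two "models" that are different
# linear views of the same underlying 64-dimensional structure should score
# close to 1, while models built on unrelated structure score much lower.
rng = np.random.default_rng(0)
shared = rng.normal(size=(1000, 64))
lang_like = shared @ rng.normal(size=(64, 768))
vision_like = shared @ rng.normal(size=(64, 512))
unrelated = rng.normal(size=(1000, 64)) @ rng.normal(size=(64, 512))
print("aligned:  ", round(linear_cka(lang_like, vision_like), 3))
print("unrelated:", round(linear_cka(lang_like, unrelated), 3))
```

In practice the synthetic matrices would be replaced with embeddings of the
same items, for example images and their captions, taken from actual vision
and language models.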

*John K Clark*


