It seems that the question revolves around whether these very smart LLMs solve problems by developing a theory of the problem, something they could explain as a general method, or whether the solution comes with no such generalized explanation... what we would call intuition in a human being.

Brent


On 12/24/2024 11:12 AM, John Clark wrote:
On Tue, Dec 24, 2024 at 12:13 PM PGC <[email protected]> wrote:

    /> simulating what appears to be reasoning or problem-solving/.


*Simulating? If Einstein was only doing "simulated" thinking when he came up with General Relativity and not "real" thinking, then how would things be any different? It seems to me that a problem has either been solved or it has not, and simulated versus real has nothing to do with it.*

    /> For instance, an LLM solving a riddle or answering a complex
    question does so by leveraging patterns that mimic logical steps
    or dependencies, even though it lacks true understanding/


*It's not clear to me how you know "it lacks true understanding". If an AI can answer a question that you cannot, how can you have "true understanding" of it but the AI does not? Did Einstein have true understanding of general relativity or only a simulated understanding?*

    /> It feels different and "more intelligent" because this
    functional selection imparts a structured response that aligns
    with human expectations of reasoning. /


*If the vast majority of human beings think that X is more intelligent than Y, then the simplest and most obvious explanation is that X is more intelligent than Y. And I don't understand how you could say that an AI is not intelligent but merely behaving intelligently just because you don't like the way its mind operates. The trouble is, you don't have a deep understanding of how your own mind operates, and even the people at OpenAI have only a hazy understanding of how O3 works, even though they built it.*

    /> this is far from genuine intelligence or reasoning. LLMs are
    bound by their probabilistic nature and lack the ability to
    generalize beyond their training data,/


*200 million protein structures were certainly not in any AI's training data, nor were superhumanly brilliant games of chess and Go. The same could be said of the Epic AI Frontier Math Test problems and the ARC benchmark.*

    /> or generate higher-order abstractions./


*I do not believe it's possible to solve _ANY_ of the problems on the Epic AI Frontier Math Test, problems that even world-class mathematicians find very difficult, without the ability to generate higher-order abstractions. But if I'm wrong about that, then I would be astonished to learn that higher-order abstractions are simply not important, because the fact remains that, regardless of the method, the problem was solved.*

*John K Clark    See what's on my new list at Extropolis <https://groups.google.com/g/extropolis>*
--
You received this message because you are subscribed to the Google Groups "Everything List" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. To view this discussion visit https://groups.google.com/d/msgid/everything-list/CAJPayv3NM6jr57u%3D2XoQCh15jsqO%3Di3xGXjEJH2BF1OJPw3EdA%40mail.gmail.com <https://groups.google.com/d/msgid/everything-list/CAJPayv3NM6jr57u%3D2XoQCh15jsqO%3Di3xGXjEJH2BF1OJPw3EdA%40mail.gmail.com?utm_medium=email&utm_source=footer>.
