On Tue, Dec 24, 2024 at 12:13 PM PGC <[email protected]> wrote:
> > simulating what appears to be reasoning or problem-solving

Simulating? If Einstein was only doing "simulated" thinking when he came up with General Relativity and not "real" thinking, then how would things be any different? It seems to me that a problem has either been solved or it has not been, and simulated versus real has nothing to do with it.

> > For instance, an LLM solving a riddle or answering a complex question does so by leveraging patterns that mimic logical steps or dependencies, even though it lacks true understanding

It's not clear to me how you know "it lacks true understanding". If an AI can answer a question that you cannot, how can you have "true understanding" of it while the AI does not? Did Einstein have a true understanding of general relativity, or only a simulated understanding?

> > It feels different and "more intelligent" because this functional selection imparts a structured response that aligns with human expectations of reasoning.

If the vast majority of human beings think that X is more intelligent than Y, then the simplest and most obvious explanation is that X is more intelligent than Y. And I don't understand how you can say that an AI is not intelligent but merely behaving intelligently because you don't like the way its mind operates; the trouble is that you don't have a deep understanding of how your own mind operates, and even the people at OpenAI have only a hazy understanding of how o3 works, even though they built it.

> > this is far from genuine intelligence or reasoning. LLMs are bound by their probabilistic nature and lack the ability to generalize beyond their training data,

200 million protein structures were certainly not in any AI's training data, nor were superhumanly brilliant games of Chess and Go. The same could be said of the Epoch AI FrontierMath problems and the ARC benchmark.

> > or generate higher-order abstractions.

I do not believe it's possible to solve ANY of the problems on the Epoch AI FrontierMath test, problems that even world-class mathematicians find very difficult, without the ability to generate higher-order abstractions. But if I'm wrong about that, then I would be astonished to learn that higher-order abstractions are simply not important, because the fact remains that, regardless of the method, the problem was solved.

John K Clark    See what's on my new list at Extropolis <https://groups.google.com/g/extropolis>

