On Mon, Jun 17, 2024 at 4:58 PM PGC <[email protected]> wrote:
> On Monday, June 17, 2024 at 7:28:13 PM UTC+2 John Clark wrote:
> > On Sun, Jun 16, 2024 at 10:26 PM PGC <[email protected]> wrote:
> > > *you can always brute force some LLM through huge compute and large, highly domain specific training data, to "solve" a set of problems;*
> >
> > I don't know what those quotation marks are supposed to mean, but if you are able to "solve" a set of problems then the problems have been solved; the method of doing so is irrelevant. Are you sure you're not whistling past the graveyard?
>
> In discussing the distinction between memorization, which LLMs heavily rely on, and genuine reasoning, which involves building new mental models capable of broad generalizations and depth, consider the following:
>
> Even if I have no prior knowledge of a specific domain, with a large enough library, memory, and pattern recognition of word sequences and probabilistic associations, I could "generate" a solution by merely matching patterns. The more my memory aligns with the problem and domain, the higher the likelihood of "solving" it.
>
> To illustrate, imagine a student unfamiliar with an advanced topic who stumbles upon a book in a library that contains the exact problem and its solution. By copying the solution verbatim, they have effectively cheated. This is akin to undergraduates peeking at each other's exams: they are unable to model the problem and derive a solution themselves, but can memorize and reproduce the solution by glancing at others' work. This differs from students who, through understanding the domain's fundamentals, experiment with various approaches and reason their way to a solution. These students might even discover novel solutions, unlike those who merely copy and paste from their peers. Hence the quotation marks; there is no "solving" going on by cheating through memory.
> This analogy extends to the internet, where some people fake expertise by parroting buzzwords and formulations from Wikipedia, in contrast to genuine experts who contribute original insights. As discussions progress and become more complex, these parrots often become lost, unable to keep up with the depth and specificity required. Higher education attempts to address this by rewarding original, effective problem-solving approaches over mere memorization and repetition.

This is a really well-articulated distinction; thank you for that. I agree that LLMs are not AGI (yet), but it's hard to ignore that they're (sometimes astonishingly) competent at answering multi-modal questions across most, if not all, domains of human knowledge. I spent a couple of hours this morning trying to use ChatGPT to design a prompt that might demonstrate that it's not merely parroting, synthesizing, or rearranging existing human ideas, but actually generating novel ones. Here's probably the best result <https://chatgpt.com/share/b4403435-e071-46ef-b1ce-ac1def2ce501>, but I'm not sure there's anything actually novel there. Despite that, it's still quite impressive, and to John's point, it's clearly an intelligent response, even if there are aspects of "cheating off of humans" in it.

It's clear that the line between the genuine reasoning and creativity implicit in whatever we think of as human intelligence, and the permutative repackaging of existing ideas we might think of as inherent in the intelligence exhibited by LLMs, is blurry. Human creativity and intelligence are probably a lot closer to what LLMs do than we'd like to think. But it's also clear to me that we're not going to get Einsteinian leaps forward in any given domain from LLMs. That may well come from AI in the future, but the way I see it, some significant breakthrough(s) are still necessary to get there.
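The library-lookup analogy above can be made concrete with a toy sketch: a "memorizer" that can only return answers to problems it has already seen, versus a "reasoner" that applies the underlying rule. All names and the problem set here are invented for illustration; real LLMs sit somewhere between these two extremes rather than at either pole.

```python
# Toy contrast between "solving" by memorization and solving by reasoning.
# The (problem, answer) pairs play the role of the student's library.
memorized = {
    (2, 3): 5,
    (10, 7): 17,
}

def memorizer(a, b):
    # Returns the stored answer only if this exact problem was "in the library".
    return memorized.get((a, b))  # None when the problem is unseen

def reasoner(a, b):
    # Applies the underlying rule, so it generalizes to unseen problems.
    return a + b

print(memorizer(2, 3))   # 5    -- looks like solving
print(memorizer(4, 9))   # None -- fails off the memorized distribution
print(reasoner(4, 9))    # 13   -- generalizes
```

The point of the sketch is only that the two strategies are indistinguishable on problems inside the "library" and come apart on novel ones, which is exactly where the quotation marks around "solve" earn their keep.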
Terren

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAMy3ZA-t1d64Wa-sOsvzAQ6n9j2jo12JYsf_BGVM2ZDjtJFLzg%40mail.gmail.com.

