On Tue, 23 May 2023 at 07:56, Terren Suydam <[email protected]> wrote:

> Many, myself included, are captivated by the amazing capabilities of
> ChatGPT and other LLMs. They are, truly, incredible. Depending on your
> definition of the Turing Test, it passes with flying colors in many, many
> contexts. It would take a much stricter Turing Test than we might have
> imagined this time last year before we could confidently say that we're
> not talking to a human. One way to improve ChatGPT's performance on an
> actual Turing Test would be to slow it down, because it is too fast to be
> human.
>
> All that said, is ChatGPT actually intelligent? There's no question that
> it behaves in a way we would all agree is intelligent. The answers it
> gives, and the speed at which it gives them, reflect an intelligence that
> often far exceeds that of most, if not all, humans.
>
> I know some here say intelligence is as intelligence does. Full stop,
> conversation over. ChatGPT is intelligent, because it acts intelligently.
>
> But this is an oversimplified view! It is oversimplified because it
> ignores the source of the intelligence: the texts the model is trained on.
> If ChatGPT were trained on gibberish, gibberish is what you'd get out of
> it. This is strikingly similar to the Chinese Room thought experiment
> proposed by John Searle: the system manipulates symbols without any
> understanding of what those symbols mean. As a result, it does not and
> cannot know whether what it's saying is correct. This is a well-known
> caveat of using LLMs.
>
> ChatGPT, therefore, is more like a search engine that can extract the
> intelligence that is already structured within the data it's trained on.
> Think of it as a semantic Google. It's a huge achievement in the sense
> that, by training on the data the way it does, it encodes the *context*
> that words appear in at sufficiently high resolution that it's usually
> indistinguishable from humans, who actually understand context in a way
> that's *grounded in experience*. LLMs don't experience anything. They are
> feed-forward machines. The algorithms that implement ChatGPT are useless
> without enormous amounts of text that expresses actual intelligence.
>
> Cal Newport does a good job of explaining this here:
> <https://www.newyorker.com/science/annals-of-artificial-intelligence/what-kind-of-mind-does-chatgpt-have>
>
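The quoted claim that an LLM's output quality is bounded by its training text can be illustrated with a deliberately crude stand-in. The bigram model below is purely a toy of my own construction, nothing like ChatGPT's actual architecture: it learns only which word follows which in its training text, so sensible text in yields sensible-looking text out, and gibberish in yields gibberish out.

```python
# Toy bigram "language model": learns only successor statistics
# from its training text, so its output vocabulary and word order
# are entirely determined by that text.
import random
from collections import defaultdict

def train(text):
    """Record, for each word, the words that follow it in the text."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, n, seed=0):
    """Emit up to n further words by sampling recorded successors."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = model.get(out[-1])
        if not successors:
            break  # no recorded successor: the model is stuck
        out.append(rng.choice(successors))
    return " ".join(out)

# Trained on English-like text, it emits English-like fragments...
sensible = train("the cat sat on the mat the cat ate the fish")
print(generate(sensible, "the", 5))

# ...trained on gibberish, it can only emit gibberish.
gibberish = train("zxq vrb zxq plo vrb zxq")
print(generate(gibberish, "zxq", 5))
```

The point of the toy is that every word it can ever produce comes from the training text; the algorithm itself contributes no knowledge, which is the sense in which the quoted post says the algorithms are "useless without enormous amounts of text."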

It could be argued that the human brain is just a complex machine that has
been trained on vast amounts of data to produce a certain output given a
certain input, and doesn't really understand anything. This is a response
to the Chinese Room argument. How would I know whether I really understand
something, or just think I understand it?

--
Stathis Papaioannou
