On Tue, 23 May 2023 at 10:48, Terren Suydam <terren.suy...@gmail.com> wrote:

> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <stath...@gmail.com>
> wrote:
>> On Tue, 23 May 2023 at 10:03, Terren Suydam <terren.suy...@gmail.com>
>> wrote:
>>> On Mon, May 22, 2023 at 7:34 PM Stathis Papaioannou <stath...@gmail.com>
>>> wrote:
>>>> On Tue, 23 May 2023 at 07:56, Terren Suydam <terren.suy...@gmail.com>
>>>> wrote:
>>>>> Many, myself included, are captivated by the amazing capabilities of
>>>>> chatGPT and other LLMs. They are, truly, incredible. Depending on your
>>>>> definition of the Turing Test, it passes with flying colors in many, many
>>>>> contexts. It would take a much stricter Turing Test than we might have
>>>>> imagined this time last year, before we could confidently say that we're
>>>>> not talking to a human. One way to improve chatGPT's performance on an
>>>>> actual Turing Test would be to slow it down, because it is too fast to be
>>>>> human.
>>>>> All that said, is chatGPT actually intelligent?  There's no question
>>>>> that it behaves in a way that we would all agree is intelligent. The
>>>>> answers it gives, and the speed at which it gives them, reflect an intelligence
>>>>> that often far exceeds most if not all humans.
>>>>> I know some here say intelligence is as intelligence does. Full stop,
>>>>> conversation over. ChatGPT is intelligent, because it acts intelligently.
>>>>> But this is an oversimplified view! The reason it's oversimplified is
>>>>> that it ignores the source of the intelligence, which lies in the texts
>>>>> it's trained on. If ChatGPT were trained on
>>>>> gibberish, that's what you'd get out of it. It is amazingly similar to the
>>>>> Chinese Room thought experiment proposed by John Searle. It is 
>>>>> manipulating
>>>>> symbols without having any understanding of what those symbols are. As a
>>>>> result, it does not and cannot know whether what it's saying is correct.
>>>>> This is a well-known caveat of using LLMs.
>>>>> ChatGPT, therefore, is more like a search engine that can extract the
>>>>> intelligence that is already structured within the data it's trained on.
>>>>> Think of it as a semantic Google. It's a huge achievement in the sense
>>>>> that, by training on the data the way it does, it encodes the *context*
>>>>> that words appear in with sufficiently high resolution that it's usually
>>>>> indistinguishable from humans who actually understand context in a way
>>>>> that's *grounded in experience*. LLMs don't experience anything. They
>>>>> are feed-forward machines. The algorithms that implement chatGPT are
>>>>> useless without enormous amounts of text that expresses actual 
>>>>> intelligence.
>>>>> Cal Newport does a good job of explaining this here
>>>>> <https://www.newyorker.com/science/annals-of-artificial-intelligence/what-kind-of-mind-does-chatgpt-have>
>>>>> .
>>>> It could be argued that the human brain is just a complex machine that
>>>> has been trained on vast amounts of data to produce a certain output given
>>>> a certain input, and doesn’t really understand anything. This is a response
>>>> to the Chinese room argument. How would I know if I really understand
>>>> something or just think I understand something?
>>>>> --
>>>> Stathis Papaioannou
>>> It is true that my brain has been trained on a large amount of data -
>>> data that contains intelligence outside of my own. But when I introspect, I
>>> notice that my understanding of things is ultimately rooted/grounded in my
>>> phenomenal experience. Ultimately, everything we know, we know either by
>>> our experience, or by analogy to experiences we've had. This is in
>>> opposition to how LLMs train on data, which is strictly about how
>>> words/symbols relate to one another.
>> The functionalist position is that phenomenal experience supervenes on
>> behaviour, such that if the behaviour is replicated (same output for same
>> input) the phenomenal experience will also be replicated. This is what
>> philosophers like Searle (and many laypeople) can’t stomach.
> I think the kind of phenomenal supervenience you're talking about is
> typically asserted for behavior at the level of the neuron, not the level
> of the whole agent. Is that what you're saying?  That chatGPT must be
> having a phenomenal experience if it talks like a human? If so, that is
> stretching the explanatory domain of functionalism past its breaking point.

The best justification for functionalism is David Chalmers' "Fading Qualia"
argument. The paper considers replacing neurons with functionally
equivalent silicon chips, but the argument generalises to replacing any part
of the brain with a functionally equivalent black box, up to and including
the whole brain or the whole person.

