On Mon, May 22, 2023 at 11:37 PM Terren Suydam <terren.suy...@gmail.com>
wrote:

>
>
> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 10:48, Terren Suydam <terren.suy...@gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <stath...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, 23 May 2023 at 10:03, Terren Suydam <terren.suy...@gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>> it is true that my brain has been trained on a large amount of data -
>>>>> data that contains intelligence outside of my own. But when I
>>>>> introspect, I notice that my understanding of things is ultimately
>>>>> rooted/grounded in my phenomenal experience. Ultimately, everything we
>>>>> know, we know either by our experience, or by analogy to experiences
>>>>> we've had. This is in opposition to how LLMs train on data, which is
>>>>> strictly about how words/symbols relate to one another.
>>>>>
>>>>
>>>> The functionalist position is that phenomenal experience supervenes on
>>>> behaviour, such that if the behaviour is replicated (same output for same
>>>> input) the phenomenal experience will also be replicated. This is what
>>>> philosophers like Searle (and many laypeople) can’t stomach.
>>>>
>>>
>>> I think the kind of phenomenal supervenience you're talking about is
>>> typically asserted for behavior at the level of the neuron, not the level
>>> of the whole agent. Is that what you're saying? That ChatGPT must be
>>> having a phenomenal experience if it talks like a human? If so, that is
>>> stretching the explanatory domain of functionalism past its breaking point.
>>>
>>
>> The best justification for functionalism is David Chalmers' "Fading
>> Qualia" argument. The paper considers replacing neurons with functionally
>> equivalent silicon chips, but it could be generalised to replacing any
>> part of the brain, or even the whole brain or the whole person, with a
>> functionally equivalent black box.
>>
>
> You're saying that an algorithm that provably does not have experiences of
> rabbits and lollipops - but can still talk about them in a way that's
> indistinguishable from a human - essentially has the same phenomenology as
> a human talking about rabbits and lollipops. That's just absurd on its
> face. You're hand-waving away the grounding problem. Is that your
> position? That symbols don't need to be grounded in any sort of phenomenal
> experience?
>
> Terren
>

Are you talking here about Chalmers' thought experiment in which each
neuron is replaced by a functional duplicate, or about an algorithm like
ChatGPT that has no detailed resemblance to the structure of a human
being's brain? In the former case I think the argument for identical
experience is very strong, though note that Chalmers is not really a
functionalist: he postulates "psychophysical laws" which map physical
patterns to experiences, and uses the replacement argument to argue that
such laws would have the property of "functional invariance".

If you are just talking about ChatGPT-style programs, I would agree with
you: a system trained only on the high-level symbols of human language (as
opposed to symbols representing neural impulses or other low-level events
on the microscopic level) is not likely to have experiences anything like
those of a human being using the same symbols. If Stathis' black box
argument is meant to suggest otherwise, I don't see the logic, since a
ChatGPT-style program would not replicate the detailed output of a
composite group of neurons either, or even the exact verbal output of a
specific person, so there is no equivalent to the gradual replacement of
parts of a real human. If we are just talking about qualitatively behaving
in a "human-like" way, without replicating the behavior of a specific
person or a sub-component of a person like a group of neurons in their
brain, Chalmers' thought experiment doesn't apply. And even in that
qualitative sense, count me as very skeptical that an LLM trained only on
human writing will ever pass a really rigorous Turing test.

Jesse



