On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <[email protected]>
wrote:

>
>
> On Tue, 23 May 2023 at 10:48, Terren Suydam <[email protected]>
> wrote:
>
>>
>>
>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <[email protected]>
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 10:03, Terren Suydam <[email protected]>
>>> wrote:
>>>
>>>>
>>>> it is true that my brain has been trained on a large amount of data -
>>>> data that contains intelligence outside of my own. But when I introspect, I
>>>> notice that my understanding of things is ultimately rooted/grounded in my
>>>> phenomenal experience. Ultimately, everything we know, we know either by
>>>> our experience, or by analogy to experiences we've had. This is in
>>>> opposition to how LLMs train on data, which is strictly about how
>>>> words/symbols relate to one another.
>>>>
>>>
>>> The functionalist position is that phenomenal experience supervenes on
>>> behaviour, such that if the behaviour is replicated (same output for same
>>> input) the phenomenal experience will also be replicated. This is what
>>> philosophers like Searle (and many laypeople) can’t stomach.
>>>
>>
>> I think the kind of phenomenal supervenience you're talking about is
>> typically asserted for behavior at the level of the neuron, not at the
>> level of the whole agent. Is that what you're saying? That ChatGPT must be
>> having a phenomenal experience if it talks like a human? If so, that is
>> stretching the explanatory domain of functionalism past its breaking point.
>>
>
> The best justification for functionalism is David Chalmers' "Fading
> Qualia" argument. The paper considers replacing neurons with functionally
> equivalent silicon chips, but it could be generalised to replacing any part
> of the brain with a functionally equivalent black box, the whole brain, the
> whole person.
>

You're saying that an algorithm that provably does not have experiences of
rabbits and lollipops - but can still talk about them in a way that's
indistinguishable from a human - essentially has the same phenomenology as
a human talking about rabbits and lollipops. That's just absurd on its
face. You're essentially hand-waving away the grounding problem. Is that
your position? That symbols don't need to be grounded in any sort of
phenomenal experience?

Terren


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAMy3ZA_fnyGDNxfQJXaqdUsYdSw7Sm5kx5j_5n94K8trJA57Jg%40mail.gmail.com.