On Tue, 23 May 2023 at 15:58, Terren Suydam <terren.suy...@gmail.com> wrote:

>
>
> On Tue, May 23, 2023 at 12:32 AM Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 14:23, Terren Suydam <terren.suy...@gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou <stath...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, 23 May 2023 at 13:37, Terren Suydam <terren.suy...@gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <
>>>>> stath...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, 23 May 2023 at 10:48, Terren Suydam <terren.suy...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <
>>>>>>> stath...@gmail.com> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, 23 May 2023 at 10:03, Terren Suydam <
>>>>>>>> terren.suy...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>> it is true that my brain has been trained on a large amount of
>>>>>>>>> data - data that contains intelligence outside of my own. But when
>>>>>>>>> I introspect, I notice that my understanding of things is ultimately
>>>>>>>>> rooted/grounded in my phenomenal experience. Ultimately, everything
>>>>>>>>> we know, we know either by our experience, or by analogy to
>>>>>>>>> experiences we've had. This is in opposition to how LLMs train on
>>>>>>>>> data, which is strictly about how words/symbols relate to one another.
>>>>>>>>>
>>>>>>>>
>>>>>>>> The functionalist position is that phenomenal experience supervenes
>>>>>>>> on behaviour, such that if the behaviour is replicated (same output
>>>>>>>> for same input) the phenomenal experience will also be replicated.
>>>>>>>> This is what philosophers like Searle (and many laypeople) can’t
>>>>>>>> stomach.
>>>>>>>>
>>>>>>>
>>>>>>> I think the kind of phenomenal supervenience you're talking about is
>>>>>>> typically asserted for behavior at the level of the neuron, not the
>>>>>>> level of the whole agent. Is that what you're saying? That chatGPT
>>>>>>> must be having a phenomenal experience if it talks like a human? If
>>>>>>> so, that is stretching the explanatory domain of functionalism past
>>>>>>> its breaking point.
>>>>>>>
>>>>>>
>>>>>> The best justification for functionalism is David Chalmers' "Fading
>>>>>> Qualia" argument. The paper considers replacing neurons with
>>>>>> functionally equivalent silicon chips, but it could be generalised to
>>>>>> replacing any part of the brain, the whole brain, or even the whole
>>>>>> person with a functionally equivalent black box.
>>>>>>
>>>>>
>>>>> You're saying that an algorithm that provably does not have
>>>>> experiences of rabbits and lollipops - but can still talk about them in a
>>>>> way that's indistinguishable from a human - essentially has the same
>>>>> phenomenology as a human talking about rabbits and lollipops. That's just
>>>>> absurd on its face. You're essentially hand-waving away the grounding
>>>>> problem. Is that your position? That symbols don't need to be grounded in
>>>>> any sort of phenomenal experience?
>>>>>
>>>>
>>>> It's not just talking about them in a way that is indistinguishable
>>>> from a human: in order to have human-like consciousness, the entire I/O
>>>> behaviour of the human would need to be replicated. But in principle, I
>>>> don't see why an LLM could not have some other type of phenomenal
>>>> experience. And I don't think the grounding problem is a problem: I was
>>>> never grounded in anything, I just grew up associating one symbol with
>>>> another symbol; it's symbols all the way down.
>>>>
>>>
>>> Is the smell of your grandmother's kitchen a symbol?
>>>
>>
>> Yes. I can't pull away the facade to check that there was a real
>> grandmother and a real kitchen against which the sense data could be
>> matched.
>>
>
> The grounding problem is about associating symbols with a phenomenal
> experience, or the memory of one - which is not the same thing as the
> functional equivalent or the neural correlate. It's the feeling, what it's
> like to experience the thing the symbol stands for. The experience of
> redness. The shock of plunging into cold water. The smell of coffee. etc.
>
> Take a migraine headache - if that's just a symbol, then why does that
> symbol *feel* *bad* while others feel *good*?  Why does any symbol feel
> like anything? If you say evolution did it, that doesn't actually answer
> the question, because evolution doesn't do anything except select for
> traits, roughly speaking. So it just pushes the question to: how did the
> subjective feeling of pain or pleasure emerge from some genetic mutation,
> when it wasn't there before?
>
> Without a functionalist explanation of the *origin* of aesthetic valence,
> I don't think you can "get it from bit".
>

That seems more like the hard problem of consciousness. There is no
solution to it.

-- 
Stathis Papaioannou

