On Tue, May 23, 2023, 9:34 AM Terren Suydam <terren.suy...@gmail.com> wrote:

>
>
> On Tue, May 23, 2023 at 7:09 AM Jason Resch <jasonre...@gmail.com> wrote:
>
>> As I see this thread, Terren and Stathis are talking past each other.
>> Please, either of you, correct me if I am wrong, but in an effort to
>> clarify and perhaps resolve this situation:
>>
>> I believe Stathis is saying that a functional substitution having the same
>> fine-grained causal organization as the brain *would* have the same
>> phenomenology, the same experience, and the same qualia as that brain.
>>
>> Therefore, there is no disagreement between your positions with regard to
>> symbol grounding, mappings, etc.
>>
>> When you both discuss the problem of symbols, or bits, etc., I believe
>> this is partly why you are talking past each other: there are many levels
>> involved in brains (and computational systems), and I believe you were
>> discussing completely different levels of the hierarchical organization.
>>
>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>> qualia, etc., and there are low-level parts, be they neurons,
>> neurotransmitters, atoms, quantum fields, and laws of physics, as in human
>> brains, or circuits, logic gates, bits, and instructions, as in computers.
>>
>> I think when Terren mentions a "symbol for the smell of grandmother's
>> kitchen" (GMK) the trouble is that we are crossing a myriad of levels. The
>> quale, or idea, or memory of the smell of GMK is a very high-level feature
>> of a mind. When Terren asks for or discusses a symbol for it, a complete
>> answer/description can only be supplied in terms of a vast amount of
>> information concerning low-level structures, be they patterns of neuron
>> firings or patterns of bits being processed. When we consider things down
>> at this low level, however, we lose all context for what the meaning, idea,
>> and quale are, or where or how they come in. We cannot see or find the idea
>> of GMK in any single neuron, any more than we can see or find it in any
>> single bit.
>>
>> Of course it should then seem deeply mysterious, if not impossible, how
>> we get "it" (GMK or otherwise) from "bit", but to me this is no greater a
>> leap than how we get "it" from a bunch of cells squirting ions back and
>> forth. Trying to understand a smartphone by looking at the flows of
>> electrons is a similar kind of problem: it would seem just as difficult or
>> impossible to explain and understand the high-level features and complexity
>> in terms of the low-level simplicity.
>>
>> This is why it's crucial to bear in mind and explicitly discuss the level
>> one is operating at when one discusses symbols, substrates, or qualia. In
>> summary, I think a chief reason you have been talking past each other is
>> that you are each operating at different assumed levels.
>>
>> Please correct me if you believe I am mistaken, and know that I only offer
>> my perspective in the hope it might help the conversation.
>>
>
> I appreciate the callout, but it is necessary to talk at both the micro
> and the macro for this discussion. We're talking about symbol grounding. I
> should make it clear that I don't believe symbols can be grounded in other
> symbols (i.e., symbols all the way down, as Stathis put it); that leads to
> infinite regress and the illusion of meaning. Symbols ultimately must
> stand for something. The only thing they can stand *for*, ultimately, is
> something that cannot be communicated by other symbols: conscious
> experience. There is no concept in our brains that is not ultimately
> connected to something we've seen, heard, felt, smelled, or tasted.
>

I agree everything you have experienced is rooted in consciousness.

But at the low level, the only thing your brain senses is neural signals
(symbols, on/off, ones and zeros).

In your arguments you rely on the high-level conscious states of human
brains to establish that they have grounding, but then use the low-level
descriptions of machines to deny that they are conscious, and hence to deny
that they can ground their processing in anything.

If you remained in the space of low-level descriptions for both brains and
machine intelligences, however, you would see that each struggles to make a
connection to what may exist at the high level. You would see the lack of
any apparent grounding in what are just neurons firing or not firing at
certain times, just as a wire in a circuit either carries or doesn't carry
a charge.
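
To make that parity concrete, here is a deliberately toy sketch in Python
(the class, labels, and numbers are made up purely for illustration, not a
model of anything real): described at this level, a "neuron" and a "wire"
are the same kind of object, and nothing in the description tells you what,
if anything, the signal is about.

from dataclasses import dataclass

@dataclass
class BinarySignal:
    """A unit that is either active (1) or inactive (0) at a given moment."""
    label: str   # e.g. "a neuron" or "a wire" -- the label is for us, not for it
    state: int = 0

    def step(self, inputs, threshold=1):
        # Fire (or carry a charge) iff enough inputs are active. Nothing in
        # this low-level rule says what, if anything, the signal means.
        self.state = 1 if sum(inputs) >= threshold else 0
        return self.state

neuron = BinarySignal("a neuron somewhere in a cortex")
wire = BinarySignal("a wire somewhere in a circuit")

for unit in (neuron, wire):
    unit.step(inputs=[1, 0, 1], threshold=2)
    print(unit.label, "->", unit.state)   # both print "... -> 1"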

Conversely, if you stay in the high-level realm of consciousness and ideas,
then you must face the problem of other minds. You know you are conscious,
but you cannot prove or disprove the consciousness of others, at least not
without first defining a theory of consciousness and explaining why some
minds satisfy the definition and others do not. Until you present a theory
of consciousness, this conversation is, I am afraid, doomed to continue in
this circle forever.

This same conversation, with the same outcome, played out over the past few
months on the extropy-chat list, although with different actors, so I can
say with some confidence where some of these topics are likely to lead.



>
> In my experience with conversations like this, you usually have people on
> one side who take consciousness seriously as the only thing that is
> actually undeniable, and you have people who'd rather not talk about it,
> hand-wave it away, or outright deny it. That's the talking-past that
> usually happens, and that's what's happening here.
>


Do you have a theory for why neurology can support consciousness but silicon
circuitry cannot?

Jason


>
>>
>> Jason
>>
>> On Tue, May 23, 2023, 2:47 AM Stathis Papaioannou <stath...@gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 15:58, Terren Suydam <terren.suy...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, May 23, 2023 at 12:32 AM Stathis Papaioannou <
>>>> stath...@gmail.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, 23 May 2023 at 14:23, Terren Suydam <terren.suy...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou <
>>>>>> stath...@gmail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, 23 May 2023 at 13:37, Terren Suydam <terren.suy...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <
>>>>>>>> stath...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, 23 May 2023 at 10:48, Terren Suydam <
>>>>>>>>> terren.suy...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <
>>>>>>>>>> stath...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Tue, 23 May 2023 at 10:03, Terren Suydam <
>>>>>>>>>>> terren.suy...@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> it is true that my brain has been trained on a large amount of
>>>>>>>>>>>> data - data that contains intelligence outside of my own. But when 
>>>>>>>>>>>> I
>>>>>>>>>>>> introspect, I notice that my understanding of things is ultimately
>>>>>>>>>>>> rooted/grounded in my phenomenal experience. Ultimately, 
>>>>>>>>>>>> everything we
>>>>>>>>>>>> know, we know either by our experience, or by analogy to 
>>>>>>>>>>>> experiences we've
>>>>>>>>>>>> had. This is in opposition to how LLMs train on data, which is 
>>>>>>>>>>>> strictly
>>>>>>>>>>>> about how words/symbols relate to one another.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> The functionalist position is that phenomenal experience
>>>>>>>>>>> supervenes on behaviour, such that if the behaviour is replicated 
>>>>>>>>>>> (same
>>>>>>>>>>> output for same input) the phenomenal experience will also be 
>>>>>>>>>>> replicated.
>>>>>>>>>>> This is what philosophers like Searle (and many laypeople) can’t 
>>>>>>>>>>> stomach.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I think the kind of phenomenal supervenience you're talking about
>>>>>>>>>> is typically asserted for behavior at the level of the neuron, not 
>>>>>>>>>> the
>>>>>>>>>> level of the whole agent. Is that what you're saying?  That chatGPT 
>>>>>>>>>> must be
>>>>>>>>>> having a phenomenal experience if it talks like a human?   If so, 
>>>>>>>>>> that is
>>>>>>>>>> stretching the explanatory domain of functionalism past its breaking 
>>>>>>>>>> point.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> The best justification for functionalism is David Chalmers'
>>>>>>>>> "Fading Qualia" argument. The paper considers replacing neurons with
>>>>>>>>> functionally equivalent silicon chips, but it could be generalised to
>>>>>>>>> replacing any part of the brain with a functionally equivalent black 
>>>>>>>>> box,
>>>>>>>>> the whole brain, or the whole person.
>>>>>>>>>
>>>>>>>>
>>>>>>>> You're saying that an algorithm that provably does not have
>>>>>>>> experiences of rabbits and lollipops - but can still talk about them 
>>>>>>>> in a
>>>>>>>> way that's indistinguishable from a human - essentially has the same
>>>>>>>> phenomenology as a human talking about rabbits and lollipops. That's 
>>>>>>>> just
>>>>>>>> absurd on its face. You're essentially hand-waving away the grounding
>>>>>>>> problem. Is that your position? That symbols don't need to be grounded 
>>>>>>>> in
>>>>>>>> any sort of phenomenal experience?
>>>>>>>>
>>>>>>>
>>>>>>> It's not just talking about them in a way that is indistinguishable
>>>>>>> from a human; in order to have human-like consciousness, the entire I/O
>>>>>>> behaviour of the human would need to be replicated. But in principle, I
>>>>>>> don't see why a LLM could not have some other type of phenomenal
>>>>>>> experience. And I don't think the grounding problem is a problem: I was
>>>>>>> never grounded in anything, I just grew up associating one symbol with
>>>>>>> another symbol, it's symbols all the way down.
>>>>>>>
>>>>>>
>>>>>> Is the smell of your grandmother's kitchen a symbol?
>>>>>>
>>>>>
>>>>> Yes, I can't pull away the facade to check that there was a real
>>>>> grandmother and a real kitchen against which I could check that the sense
>>>>> data matches.
>>>>>
>>>>
>>>> The grounding problem is about associating symbols with a phenomenal
>>>> experience, or the memory of one - which is not the same thing as the
>>>> functional equivalent or the neural correlate. It's the feeling, what it's
>>>> like to experience the thing the symbol stands for. The experience of
>>>> redness. The shock of plunging into cold water. The smell of coffee. etc.
>>>>
>>>> Take a migraine headache - if that's just a symbol, then why does that
>>>> symbol *feel* *bad* while others feel *good*?  Why does any symbol
>>>> feel like anything? If you say evolution did it, that doesn't actually
>>>> answer the question, because evolution doesn't do anything except select
>>>> for traits, roughly speaking. So it just pushes the question to: how did
>>>> the subjective feeling of pain or pleasure emerge from some genetic
>>>> mutation, when it wasn't there before?
>>>>
>>>> Without a functionalist explanation of the *origin* of aesthetic
>>>> valence, I don't think you can "get it from bit".
>>>>
>>>
>>> That seems more like the hard problem of consciousness. There is no
>>> solution to it.
>>>
>>> --
>>> Stathis Papaioannou
>>>