On Tue, May 23, 2023 at 1:46 PM Jason Resch <jasonre...@gmail.com> wrote:

>
>
> On Tue, May 23, 2023, 9:34 AM Terren Suydam <terren.suy...@gmail.com>
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 7:09 AM Jason Resch <jasonre...@gmail.com> wrote:
>>
>>> As I see this thread, Terren and Stathis are both talking past each
>>> other. Please either of you correct me if I am wrong, but in an effort to
>>> clarify and perhaps resolve this situation:
>>>
>>> I believe Stathis is saying that a functional substitute having the same
>>> fine-grained causal organization as the brain *would* have the same
>>> phenomenology, the same experience, and the same qualia as that brain.
>>>
>>> Therefore, there is no disagreement between your positions with regard
>>> to symbol grounding, mappings, etc.
>>>
>>> When you both discuss the problem of symbols, bits, etc., I believe part
>>> of the reason you are talking past each other is that there are many
>>> levels involved in brains (and computational systems), and that you were
>>> each discussing completely different levels of the hierarchical
>>> organization.
>>>
>>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>>> qualia, etc., and there are low-level parts, be they neurons,
>>> neurotransmitters, atoms, quantum fields, and the laws of physics, as in
>>> human brains, or circuits, logic gates, bits, and instructions, as in
>>> computers.
>>>
>>> I think when Terren mentions a "symbol for the smell of grandmother's
>>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>>> or idea or memory of the smell of GMK is a very high-level feature of a
>>> mind. When Terren asks for or discusses a symbol for it, a complete
>>> answer/description can only be supplied in terms of a vast amount of
>>> information concerning low-level structures, be they patterns of neuron
>>> firings or patterns of bits being processed. When we consider things down
>>> at this low level, however, we lose all context for what the meaning, idea,
>>> and quale are, or where or how they come in. We cannot see or find the idea
>>> of GMK in any neuron, any more than we can see or find it in any bit.
>>>
>>> Of course it should then seem deeply mysterious, if not impossible, how
>>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>>> leap than how we get "it" from a bunch of cells squirting ions back and
>>> forth. Trying to understand a smartphone by looking at the flows of
>>> electrons is a similar kind of problem: it would seem just as difficult or
>>> impossible to explain and understand the high-level features and complexity
>>> from the low-level simplicity.
>>>
>>> This is why it's crucial to bear in mind and explicitly discuss the
>>> level one is operating on when one discusses symbols, substrates, or
>>> qualia. In summary, I think a chief reason you have been talking past each
>>> other is that you are each operating on different assumed levels.
>>>
>>> Please correct me if you believe I am mistaken, and know that I offer my
>>> perspective only in the hope it might help the conversation.
>>>
>>
>> I appreciate the callout, but it is necessary to talk at both the micro
>> and the macro levels for this discussion. We're talking about symbol
>> grounding. I should make it clear that I don't believe symbols can be
>> grounded in other symbols (i.e., symbols all the way down, as Stathis put
>> it), as that leads to infinite regress and the illusion of meaning. Symbols
>> ultimately must
>> stand for something. The only thing they can stand *for*, ultimately, is
>> something that cannot be communicated by other symbols: conscious
>> experience. There is no concept in our brains that is not ultimately
>> connected to something we've seen, heard, felt, smelled, or tasted.
>>
>
> I agree everything you have experienced is rooted in consciousness.
>
> But at the low level, the only thing your brain senses is neural signals
> (symbols, on/off, ones and zeros).
>
> In your arguments you rely on the high-level conscious states of human
> brains to establish that they have grounding, but then use the low-level
> descriptions of machines to deny that machines are conscious, and hence deny
> that they can ground their processing in anything.
>
> If you remained in the space of low-level descriptions for both brains and
> machine intelligences, however, you would see that each struggles to make a
> connection to what may exist at the high level. You would see the lack of
> any apparent grounding in what are just neurons firing or not firing at
> certain times, just as a wire in a circuit either carries or doesn't carry
> a charge.
>

Ah, I see your point now. That's valid, thanks for raising it and let me
clarify.

Bringing this back to LLMs, it's clear to me that LLMs do not have
phenomenal experience, but you're right to insist that I explain why I
think so. I don't know if this amounts to a theory of consciousness, but
the reason I believe that LLMs are not conscious is that, in my view,
consciousness entails a continuous flow of experience. Assuming for this
discussion that consciousness is realizable in a substrate-independent way,
that means that consciousness is, in some sense, a process in the
domain of information. And so to *realize* a conscious process, whether in
a brain or in silicon, the physical dynamics of that information process
must also be continuous, which is to say, recursive: the output
of the brain in one moment is the input to the brain in the next moment.

But LLMs do not exhibit this. They have a training phase, and then they
respond to discrete queries. As far as I know, once they're out of the
training phase, there is no feedback outside of the flow of a single
conversation. None of that seems isomorphic to the kind of process that
could support a flow of experience, whatever experience would mean for an
LLM.
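
To make that contrast concrete, here is a toy sketch in Python (my own
illustration; the names are invented and it claims nothing about how brains
or LLMs actually work). The first process feeds each output back in as the
next input, so it has the unbroken recursive dynamics I'm describing; the
second answers every query from the same frozen state, carrying nothing
forward between calls:

    def recurrent_process(step, state, num_steps=100):
        # A closed feedback loop: the output at one moment
        # becomes the input at the next moment.
        for _ in range(num_steps):
            state = step(state)  # output at t is input at t+1
        return state

    def frozen_responder(respond, queries):
        # Discrete query/response: each answer is computed from
        # the same fixed parameters; nothing persists across calls.
        return [respond(q) for q in queries]

Only the first has the continuous, self-referential structure I'm pointing
at; an LLM after training looks like the second.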

So the suggestion that ChatGPT could one day be used to functionally
replace some subset of the brain that is responsible for mediating
conscious experience in a human just strikes me as absurd.


>
> Conversely, if you stay in the high-level realm of consciousness and
> ideas, then you must face the problem of other minds. You know you are
> conscious, but you cannot prove or disprove the consciousness of others, at
> least not without first defining a theory of consciousness and explaining
> why some minds satisfy the definition and others do not. Until you present
> a theory of consciousness, this conversation is, I am afraid, doomed to
> continue in this circle forever.
>
> This same conversation and outcome played out over the past few months on
> the extropy-chat list, although with different actors, so I can say with
> some confidence where some topics are likely to lead.
>
>
>
>>
>> In my experience with conversations like this, you usually have people on
>> one side who take consciousness seriously as the only thing that is
>> actually undeniable, and people on the other who'd rather not talk about
>> it, hand-wave it away, or outright deny it. That's the talking-past that
>> usually happens, and that's what's happening here.
>>
>
>
> Do you have a theory for why neurology can support consciousness but
> silicon circuitry cannot?
>

I'm agnostic about this, but that's because I no longer assume physicalism.
For me, the hard problem signals that physicalism is impossible. I've
argued on this list many times as a physicalist, as one who believes in the
possibility of artificial consciousness, uploading, etc. I've argued that
there is something it is like to be a cybernetic system. But at the end of
it all, I just couldn't overcome the problem of aesthetic valence. As an
aside, the folks at Qualia Computing have put forth a theory
<https://qualiacomputing.com/2017/05/17/principia-qualia-part-ii-valence/>
that symmetry in the state space isomorphic to ongoing experience is what
corresponds to positive valence, and anti-symmetry to negative valence.
It's a very interesting argument, but one is still forced to leap from a
mathematical concept to a subjective feeling. Regardless, it's the most
sophisticated attempt to address the hard problem that I've come across.

I've since come around to the idealist stance that reality is fundamentally
consciousness, and that the physical is a manifestation of that
consciousness, like in a dream. It has its own "hard problem", which is
explaining why the world appears so orderly. But if you don't get too hung
up on that, it's not as clear that artificial consciousness is possible. It
might be! It may even be that efforts like the above to explain how you
get "it" from "bit" are relevant to idealist explanations of physical reality.
But the challenge with idealism is that the explanations that are on offer
sound more like mythology and metaphor than science. I should note that
Bernardo Kastrup has some interesting ideas on idealism, and he approaches
it in a way that is totally devoid of woo. That said, one really intriguing
body of evidence in favor of idealism is near-death experience (NDE)
testimony, which is pretty remarkable if one actually studies it.

Terren


>
> Jason
>
>
