On Thu, May 25, 2023 at 9:05 AM Terren Suydam <terren.suy...@gmail.com>
wrote:

>
>
> On Tue, May 23, 2023 at 5:47 PM Jason Resch <jasonre...@gmail.com> wrote:
>
>>
>>
>> On Tue, May 23, 2023, 3:50 PM Terren Suydam <terren.suy...@gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 1:46 PM Jason Resch <jasonre...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, May 23, 2023, 9:34 AM Terren Suydam <terren.suy...@gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, May 23, 2023 at 7:09 AM Jason Resch <jasonre...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> As I see this thread, Terren and Stathis are both talking past each
>>>>>> other. Please either of you correct me if I am wrong, but in an effort to
>>>>>> clarify and perhaps resolve this situation:
>>>>>>
>>>>>> I believe Stathis is saying the functional substitution having the
>>>>>> same fine-grained causal organization *would* have the same 
>>>>>> phenomenology,
>>>>>> the same experience, and the same qualia as the brain with the same
>>>>>> fine-grained causal organization.
>>>>>>
>>>>>> Therefore, there is no disagreement between your positions with
>>>>>> regard to symbol grounding, mappings, etc.
>>>>>>
>>>>>> When you both discuss the problem of symbology, or bits, etc. I
>>>>>> believe this is partly responsible for why you are both talking past each
>>>>>> other, because there are many levels involved in brains (and 
>>>>>> computational
>>>>>> systems). I believe you were discussing completely different levels in 
>>>>>> the
>>>>>> hierarchical organization.
>>>>>>
>>>>>> There are high-level parts of minds, such as ideas, thoughts,
>>>>>> feelings, qualia, etc., and there are low-level parts, be they neurons,
>>>>>> neurotransmitters, atoms, quantum fields, and laws of physics as in human
>>>>>> brains, or circuits, logic gates, bits, and instructions as in computers.
>>>>>>
>>>>>> I think when Terren mentions a "symbol for the smell of grandmother's
>>>>>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
>>>>>> quale
>>>>>> or idea or memory of the smell of GMK is a very high-level feature of a
>>>>>> mind. When Terren asks for or discusses a symbol for it, a complete
>>>>>> answer/description for it can only be supplied in terms of a vast amount 
>>>>>> of
>>>>>> information concerning low level structures, be they patterns of neuron
>>>>>> firings, or patterns of bits being processed. When we consider things 
>>>>>> down
>>>>>> at this low level, however, we lose all context for what the meaning, 
>>>>>> idea,
>>>>>> and quale are or where or how they come in. We cannot see or find the 
>>>>>> idea
>>>>>> of GMK in any neuron, no more than we can see or find it in any bit.
>>>>>>
>>>>>> Of course then it should seem deeply mysterious, if not impossible,
>>>>>> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
>>>>>> greater a leap than how we get "it" from a bunch of cells squirting ions
>>>>>> back and forth. Trying to understand a smartphone by looking at the flows
>>>>>> of electrons is a similar kind of problem, it would seem just as 
>>>>>> difficult
>>>>>> or impossible to explain and understand the high-level features and
>>>>>> complexity out of the low-level simplicity.
>>>>>>
>>>>>> This is why it's crucial to bear in mind and explicitly discuss the
>>>>>> level one is operating on when one discusses symbols, substrates, or
>>>>>> qualia. In summary, I think a chief reason you have been talking past each
>>>>>> other is that you are each operating on different assumed levels.
>>>>>>
>>>>>> Please correct me if you believe I am mistaken and know I only offer
>>>>>> my perspective in the hope it might help the conversation.
>>>>>>
>>>>>
>>>>> I appreciate the callout, but it is necessary to talk at both the
>>>>> micro and the macro for this discussion. We're talking about symbol
>>>>> grounding. I should make it clear that I don't believe symbols can be
>>>>> grounded in other symbols (i.e. symbols all the way down as Stathis put
>>>>> it), that leads to infinite regress and the illusion of meaning.  Symbols
>>>>> ultimately must stand for something. The only thing they can stand
>>>>> *for*, ultimately, is something that cannot be communicated by other
>>>>> symbols: conscious experience. There is no concept in our brains that is
>>>>> not ultimately connected to something we've seen, heard, felt, smelled, or
>>>>> tasted.
>>>>>
>>>>
>>>> I agree everything you have experienced is rooted in consciousness.
>>>>
>>>> But at the low level, the only thing your brain senses is neural
>>>> signals (symbols, on/off, ones and zeros).
>>>>
>>>> In your arguments you rely on the high-level conscious states of human
>>>> brains to establish that they have grounding, but then use the low-level
>>>> descriptions of machines to deny their own consciousness, and hence deny
>>>> they can ground their processing to anything.
>>>>
>>>> If you remained in the space of low-level descriptions for both brains
>>>> and machine intelligences, however, you would see each struggles to make a
>>>> connection to what may exist at the high level. You would see the lack of
>>>> any apparent grounding in what are just neurons firing or not firing at
>>>> certain times. Just as a wire in a circuit either carries or doesn't carry
>>>> a charge.
>>>>
>>>
>>> Ah, I see your point now. That's valid, thanks for raising it and let me
>>> clarify.
>>>
>>
>> I appreciate that thank you.
>>
>>
>>> Bringing this back to LLMs, it's clear to me that LLMs do not have
>>> phenomenal experience, but you're right to insist that I explain why I
>>> think so. I don't know if this amounts to a theory of consciousness, but
>>> the reason I believe that LLMs are not conscious is that, in my view,
>>> consciousness entails a continuous flow of experience. Assuming for this
>>> discussion that consciousness is realizable in a substrate-independent way,
>>> that means that consciousness is, in some sort of way, a process in the
>>> domain of information. And so to *realize* a conscious process, whether
>>> in a brain or in silicon, the physical dynamics of that information process
>>> must also be continuous, which is to say, recursive.
>>>
>>
>>
>> I am quite partial to the idea that recursion or loops may be necessary
>> to realize consciousness, or at least certain types of consciousness, such
>> as self-consciousness (which I take to be models which include the self as
>> an actor within the environment), but I also believe that loops may exist
>> in non-obvious forms, and even extend beyond the physical domain of a
>> creature's body or the confines of a physical computer.
>>
>> Allow me to explain.
>>
>> Consider something like the robot arm I described that is programmed to
>> catch a ball. Now consider that at each time step, a process is run
>> that receives the current coordinates of the robot arm position and the ball
>> position. This is not technically a loop, and not really recursive; it may
>> be implemented by a timer that fires off the process, say, 1000 times a second.
>>
>> But, if you consider the pair of the robot arm and the environment, a
>> recursive loop emerges, in the sense that the action decided and executed
>> in the previous time step affects the sensory input in subsequent time
>> steps. If the robot had enough sophistication to have a language function
>> and we asked it, "What caused your arm to move?", the only answer it could
>> give would have to be a reflexive one: a process within me caused my arm to
>> move. So we get self reference, and recursion through environmental
>> interaction.
>>
>>  Now let's consider the LLM in this context: each invocation is indeed an
>> independent feed-forward process, but through this back-and-forth flow of the
>> LLM interacting with the user, a recursive, continuous loop of processing
>> emerges. The LLM could be said to perceive an ever growing thread of
>> conversation, with new words constantly being appended to its perception
>> window. Moreover, some of these words would be external inputs, while
>> others are internal outputs. If you ask the LLM: where did those internally
>> generated outputs come from? Again the only valid answer it could supply
>> would have to be reflexive.
>>
>> Reflexivity is, I think, the essence of self-awareness, and though a single
>> LLM invocation cannot do this, an LLM that generates output and then is
>> subsequently asked about the source of this output must turn its
>> attention inward towards itself.
>>
>> This is something like how Dennett describes how a zombie asked to look
>> inward bootstraps itself into consciousness.
>>
>
> I see what you're saying, and within the context of a single conversation,
> what you're suggesting seems possible. But with every new conversation it
> starts at the same exact state. There is no learning, no updating, from one
> conversation to the next. It doesn't pass the smell test to me. I would
> think for real sentience to occur, that kind of emergent self-model would
> need more than just a few iterations. But this is all just intuition. You
> raise an interesting possibility.
>

Thank you. I do think such a learning capacity would greatly expand the
potential of these systems.
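
To make the feedback-through-the-environment idea from the robot arm example
above a bit more concrete, here is a minimal toy sketch (the names and numbers
are my own, purely illustrative, and Python just for convenience): the
controller is a stateless, feed-forward step, yet because each action changes
what is sensed next, a loop emerges that exists only in the pairing of program
and world.

def controller(arm_x, ball_x):
    """Feed-forward step: given current positions, return a move toward the ball."""
    return 0.5 * (ball_x - arm_x)

def simulate(steps=10):
    arm_x, ball_x = 0.0, 10.0
    for t in range(steps):
        action = controller(arm_x, ball_x)   # stateless call, no memory of prior steps
        arm_x += action                      # the action changes the world...
        print(f"t={t}: arm at {arm_x:.2f}")  # ...and the changed world becomes the
                                             # next input, closing the loop outside
                                             # the program itself

simulate()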


>
> To take it one step further, if chatGPT's next iteration of training
> included the millions of conversations humans had with it, you could see a
> self model become instantiated in a more permanent way. But again, at the
> end of its training, the state would be frozen. That's the sticking point
> for me.
>

I think in a round-about way, they do. I believe OpenAI requires users to opt
out if they do not want their data used for further training. This suggests to
me that they
may be collecting user conversational interactions and using them to train
successive versions of the model. Such training does not take place in real
time, but perhaps we can view it as somewhat analogous to how the brain
learns and incorporates new memories and skills while we sleep.

I am also not sure that real-time modification of long-term memories is
necessary for consciousness. There have been several cases of humans who, due
to some kind of brain damage, lost the ability to form long-term memories
(e.g. Clive Wearing, https://en.wikipedia.org/wiki/Clive_Wearing, and H.M.,
https://www.pbs.org/newshour/show/bringing-new-life-patient-h-m-man-couldnt-make-memories
). They live only in the immediate present of their short-term working memory,
and though they are certainly greatly impaired in their functioning, I do not
doubt that these people are conscious. Perhaps then,
short-term working memory is enough, and GPT has this in terms of its
context window (which is on the order of tens of thousands of words,
perhaps 100 pages of text).
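
As a rough sketch of what I mean (my own toy code; the generate() function
below is just a stand-in, not any real API), the model itself can be completely
stateless while the growing conversation text serves as its only working memory:

MAX_CONTEXT_CHARS = 8000                     # hypothetical bound standing in for the context window

def generate(context):
    """Placeholder for one stateless, feed-forward LLM invocation: text in, text out."""
    return "[reply based on ..." + context[-40:] + "]"

def chat(user_turns):
    context = ""                             # the only "memory" the model ever sees
    for user_text in user_turns:
        context += "\nUser: " + user_text
        reply = generate(context)            # independent invocation each turn
        context += "\nAssistant: " + reply   # its own output is appended to its next input
        context = context[-MAX_CONTEXT_CHARS:]  # older material eventually falls out
        print(reply)

chat(["What caused your arm to move?",
      "Where did that answer come from?"])

Everything the model "remembers" about its own earlier outputs lives in that
appended text; nothing else persists between invocations.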


>
>
>> The behavior or output of the brain in one moment is the input to the
>>> brain in the next moment.
>>>
>>> But LLMs do not exhibit this. They have a training phase, and then they
>>> respond to discrete queries. As far as I know, once it's out of the
>>> training phase, there is no feedback outside of the flow of a single
>>> conversation. None of that seems isomorphic to the kind of process that
>>> could support a flow of experience, whatever experience would mean for an
>>> LLM.
>>>
>>> So to me, the suggestion that chatGPT could one day be used to
>>> functionally replace some subset of the brain that is responsible for
>>> mediating conscious experience in a human, just strikes me as absurd.
>>>
>>
>> One aspect of artificial neural networks that is worth considering here
>> is that they are (by the 'universal approximation theorem') completely
>> general and universal in the functions they can learn and model. That is,
>> any logical circuit which can be computed in finite time can, in principle,
>> be learned and implemented by a neural network. This gives me some pause
>> when I consider what things neural networks will never be able to do.
>>
>
> Yup, you made that point a couple months ago here and that stuck with me -
> that it's possible the way that LLMs are sort of outperforming expectations
> could be that it's literally modelling minds and using that to generate its
> responses. I'm not sure that's possible, because I'm not clear on whether
> the neural networks used in LLMs qualify as being general/universal.
>

I should note that I am not an expert in this space either, so take what I
say as supposition rather than fact, but the task these models are trained
to do is one that requires a kind of universal intelligence (predicting the
next observation O_n, given prior observations O_1 ... O_(n-1)). All forms
of intelligence derive from this kind of prediction ability, and any
intelligent behavior can be framed in these terms. To accomplish this task
efficiently, I believe LLMs have internally developed all kinds of specific
neural circuitry to handle prediction in each domain they have experienced,
and this is where the universal approximation theorem comes in. In order to
learn to better predict future observations from past ones, all kinds of
unique abilities had to be learned, and none of these were explicitly put
in. For example, consider this testing which found ChatGPT able to play
chess better than most humans:
https://dkb.blog/p/chatgpts-chess-elo-is-1400 This implies it has learned
the ability to model the board state in its mind merely from a textual list
of past moves, and it can predict what move it expects a good player to
make next based on this history. I think this remarkable ability can only
be explained by the universal ability of neural networks to learn any
computable function.
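
To illustrate the shape of that training objective (and only the shape; this is
a toy of my own using a simple frequency table, where a real LLM uses a deep
network trained on a vast corpus), here is a minimal sketch of predicting the
next symbol O_n from the preceding symbols:

from collections import Counter, defaultdict

def train(sequence, k=2):
    """Count which symbol follows each length-k history of symbols."""
    table = defaultdict(Counter)
    for i in range(k, len(sequence)):
        table[tuple(sequence[i - k:i])][sequence[i]] += 1
    return table

def predict(table, history, k=2):
    """Return the most frequent continuation of the last k symbols, if any was seen."""
    counts = table.get(tuple(history[-k:]))
    return counts.most_common(1)[0][0] if counts else None

# A textual list of chess moves stands in for "past observations as text".
moves = "e4 e5 Nf3 Nc6 Bb5 a6 Ba4 Nf6 O-O Be7".split()
model = train(moves)
print(predict(model, ["Nf3", "Nc6"]))   # -> 'Bb5', learned purely from the move sequence

The point is only that "predict the next observation from the prior ones" is a
single, uniform objective; the internal machinery needed to do it well in a
given domain (board-state tracking, in the chess case) is whatever the learner
has to construct for itself.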


>
>
>
>>
>>
>> Conversely, if you stay in the high-level realm of consciousness ideas,
>>>> well then you must face the problem of other minds. You know you are
>>>> conscious, but you cannot prove or disprove the consciousness of others, at
>>>> least not without first defining a theory of consciousness and explaining why
>>>> some minds satisfy the definition or not. Until you present a theory of
>>>> consciousness then this conversation is, I am afraid, doomed to continue in
>>>> this circle forever.
>>>>
>>>> This same conversation and outcome played out over the past few months
>>>> on the extropy-chat-list, although with different actors, so I can say with
>>>> some confidence where some topics are likely to lead.
>>>>
>>>>
>>>>
>>>>>
>>>>> In my experience with conversations like this, you usually have people
>>>>> on one side who take consciousness seriously as the only thing that is
>>>>> actually undeniable, and you have people who'd rather not talk about it,
>>>>> hand-wave it away, or outright deny it. That's the talking-past that
>>>>> usually happens, and that's what's happening here.
>>>>>
>>>>
>>>>
>>>> Do you have a theory for why neurology supports consciousness but
>>>> silicon circuitry cannot?
>>>>
>>>
>>> I'm agnostic about this, but that's because I no longer assume
>>> physicalism. For me, the hard problem signals that physicalism is
>>> impossible. I've argued on this list many times as a physicalist, as one
>>> who believes in the possibility of artificial consciousness, uploading,
>>> etc. I've argued that there is something it is like to be a cybernetic
>>> system. But at the end of it all, I just couldn't overcome the problem of
>>> aesthetic valence. As an aside, the folks at Qualia Computing have put
>>> forth a theory
>>> <https://qualiacomputing.com/2017/05/17/principia-qualia-part-ii-valence/>
>>> that symmetry in the state space isomorphic to ongoing experience is what
>>> corresponds to positive valence, and anti-symmetry to negative valence.
>>>
>>
Looking at Qualia Computing I realize I have read much of this site in the
past and seen many of Andrés Gómez Emilsson's videos. I e-mailed him a few
years back but never got a reply. I thought that we were both interested in
many of the same topics and had been asking similar questions. I like
Emilsson's approach and ideas, though I don't know that I embrace his
theory of consciousness (if I recall, his ideas are related to or inspired by the
ideas of David Pearce, who I also admire).


>
>> But is there not much more to consciousness than these two binary states? Is
>> the state space sufficiently large in their theory to account for the
>> seemingly infinite possible diversity of conscious experience?
>>
>
> They're not saying the state *is* binary. I don't even think they're
> saying symmetry is a binary. They're deriving the property of symmetry
> (presumably through some kind of mathematical transform) and hypothesizing
> that aesthetic valence corresponds to the outcome of that transform. I also
> think it's possible for symmetry and anti-symmetry to be present at the
> same time; the mathematical object isomorphic to experience is a
> high-dimensional object and probably has nearly infinite ways of being
> symmetrical and anti-symmetrical.
>

I see what you mean, though I don't know what this symmetry/anti-symmetry
framing buys us that isn't already possible to structure similarly in
high-dimensional objects that have infinite ways of being related to 1-ness
and 0-ness.




>
>
>
>>
>>
>> It's a very interesting argument but one is still forced to leap from a
>>> mathematical concept to a subjective feeling. Regardless, it's the most
>>> sophisticated attempt to reconcile the hard problem that I've come across.
>>>
>>> I've since come around to the idealist stance that reality is
>>> fundamentally consciousness, and that the physical is a manifestation of
>>> that consciousness, like in a dream.
>>>
>>
>> I agree. Or at least I would say, consciousness is more fundamental than
>> the physical universe. It might then be more appropriate to say my position
>> is a kind of neutral monism, where platonically existing
>> information/computation is the glue that relates consciousness to physics
>> and explains why we perceive an ordered world with apparent laws.
>>
>> I explain this in much more detail here:
>>
>> https://alwaysasking.com/why-does-anything-exist/#Why_Laws
>>
>
> I assume that's inspired by Bruno's ideas?
>

Yes, largely. There has been much related work by Russell Standish, Markus
Muller ( https://arxiv.org/abs/1712.01826 ), Max Tegmark, and more
recently by Stephen Wolfram (
https://writings.stephenwolfram.com/2021/11/the-concept-of-the-ruliad/ ).


> I miss that guy. I still see him on FB from time to time.
>

I was worried about him, I noticed he had dropped off this list and he
hadn't replied to an e-mail I sent him. I am glad to know he is still
active.


> He was super influential on me too. Probably the single smartest person I
> ever "met".
>

Yes, I feel the same.


>
>
>>
>> It has its own "hard problem", which is explaining why the world appears
>>> so orderly.
>>>
>>
>> Yes, the "hard problem of matter" as some call it. I agree this problem
>> is much more solvable than the hard problem of consciousness.
>>
>>
>> But if you don't get too hung up on that, it's not as clear that
>>> artificial consciousness is possible. It might be! It may even be that
>>> efforts like the above to explain how you get it from bit are relevant to
>>> idealist explanations of physical reality. But the challenge with idealism
>>> is that the explanations that are on offer sound more like mythology and
>>> metaphor than science. I should note that Bernardo Kastrup
>>>
>>
>> I will have to look into him.
>>
>
> I take him with a grain of salt - he's fairly combative and dismissive of
> people who are physicalists. But his ideas are super interesting; I don't
> know if he's the first to take an analytical approach to idealism, but he's
> definitely the first to become well known for it.
>

Seeing his face now, I remembered I watched many of his videos some months
ago. He was interesting and I agreed with many of his points.

You might also like some of the writings by Galen Strawson:
https://www.nytimes.com/2016/05/16/opinion/consciousness-isnt-a-mystery-its-matter.html


Jason
