Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
wrote:

>
>
> On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 21:09, Jason Resch  wrote:
>>>
 As I see this thread, Terren and Stathis are both talking past each
 other. Please either of you correct me if I am wrong, but in an effort to
 clarify and perhaps resolve this situation:

 I believe Stathis is saying the functional substitution having the same
 fine-grained causal organization *would* have the same phenomenology, the
 same experience, and the same qualia as the brain with the same
 fine-grained causal organization.

 Therefore, there is no disagreement between your positions with regards
 to symbol groundings, mappings, etc.

 When you both discuss the problem of symbology, or bits, etc. I believe
 this is partly responsible for why you are both talking past each other,
 because there are many levels involved in brains (and computational
 systems). I believe you were discussing completely different levels in the
 hierarchical organization.

 There are high-level parts of minds, such as ideas, thoughts, feelings,
 quale, etc. and there are low-level, be they neurons, neurotransmitters,
 atoms, quantum fields, and laws of physics as in human brains, or circuits,
 logic gates, bits, and instructions as in computers.

 I think when Terren mentions a "symbol for the smell of grandmother's
 kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
 or idea or memory of the smell of GMK is a very high-level feature of a
 mind. When Terren asks for or discusses a symbol for it, a complete
 answer/description for it can only be supplied in terms of a vast amount of
 information concerning low level structures, be they patterns of neuron
 firings, or patterns of bits being processed. When we consider things down
 at this low level, however, we lose all context for what the meaning, idea,
 and quale are or where or how they come in. We cannot see or find the idea
 of GMK in any neuron, any more than we can see or find it in any bit.

 Of course then it should seem deeply mysterious, if not impossible, how
 we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
 leap from how we get "it" from a bunch of cells squirting ions back and
 forth. Trying to understand a smartphone by looking at the flows of
 electrons is a similar kind of problem, it would seem just as difficult or
 impossible to explain and understand the high-level features and complexity
 out of the low-level simplicity.

 This is why it's crucial to bear in mind and explicitly discuss the
 level one is operating on when one discusses symbols, substrates, or quale.
 In summary, I think a chief reason you have been talking past each other is
 because you are each operating on different assumed levels.

 Please correct me if you believe I am mistaken and know I only offer my
 perspective in the hope it might help the conversation.

>>>
>>> I think you’ve captured my position. But in addition I think replicating
>>> the fine-grained causal organisation is not necessary in order to replicate
>>> higher level phenomena such as GMK. By extension of Chalmers’ substitution
>>> experiment,
>>>
>>
>> Note that Chalmers's argument is based on assuming the functional
>> substitution occurs at a certain level of fine-grained-ness. If you lose
>> this step, and look at only the top-most input-output of the mind as black
>> box, then you can no longer distinguish a rock from a dreaming person, nor
>> a calculator computing 2+3 from a human computing 2+3, and one also runs
>> into the Blockhead "lookup table" argument against functionalism.
>>
>
> Yes, those are perhaps problems with functionalism. But a major point in
> Chalmers' argument is that if qualia were substrate-specific (hence,
> functionalism false) it would be possible to make a partial zombie or an
> entity whose consciousness and behaviour diverged from the point the
> substitution was made. And this argument works not just by replacing the
> neurons with silicon chips, but by replacing any part of the human with
> anything that reproduces the interactions with the remaining parts.
>


How deeply do you have to go when you consider or define those "other
parts" though? That seems to be a critical but unstated assumption, and
something that depends on how finely grained you consider the
relevant/important parts of a brain to be.

For reference, this is what Chalmers says:


"In this paper I defend this view. Specifically, I defend a principle of
organizational invariance, holding that experience is invariant across
systems with the same fine-grained functional organization. More precisely,
the 

Re: what chatGPT is and is not

2023-05-23 Thread Stathis Papaioannou
On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:

>
>
> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 21:09, Jason Resch  wrote:
>>
>>> As I see this thread, Terren and Stathis are both talking past each
>>> other. Please either of you correct me if I am wrong, but in an effort to
>>> clarify and perhaps resolve this situation:
>>>
>>> I believe Stathis is saying the functional substitution having the same
>>> fine-grained causal organization *would* have the same phenomenology, the
>>> same experience, and the same qualia as the brain with the same
>>> fine-grained causal organization.
>>>
>>> Therefore, there is no disagreement between your positions with regards
>>> to symbol groundings, mappings, etc.
>>>
>>> When you both discuss the problem of symbology, or bits, etc. I believe
>>> this is partly responsible for why you are both talking past each other,
>>> because there are many levels involved in brains (and computational
>>> systems). I believe you were discussing completely different levels in the
>>> hierarchical organization.
>>>
>>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>>> quale, etc. and there are low-level, be they neurons, neurotransmitters,
>>> atoms, quantum fields, and laws of physics as in human brains, or circuits,
>>> logic gates, bits, and instructions as in computers.
>>>
>>> I think when Terren mentions a "symbol for the smell of grandmother's
>>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>>> or idea or memory of the smell of GMK is a very high-level feature of a
>>> mind. When Terren asks for or discusses a symbol for it, a complete
>>> answer/description for it can only be supplied in terms of a vast amount of
>>> information concerning low level structures, be they patterns of neuron
>>> firings, or patterns of bits being processed. When we consider things down
>>> at this low level, however, we lose all context for what the meaning, idea,
>>> and quale are or where or how they come in. We cannot see or find the idea
>>> of GMK in any neuron, any more than we can see or find it in any bit.
>>>
>>> Of course then it should seem deeply mysterious, if not impossible, how
>>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>>> leap from how we get "it" from a bunch of cells squirting ions back and
>>> forth. Trying to understand a smartphone by looking at the flows of
>>> electrons is a similar kind of problem, it would seem just as difficult or
>>> impossible to explain and understand the high-level features and complexity
>>> out of the low-level simplicity.
>>>
>>> This is why it's crucial to bear in mind and explicitly discuss the
>>> level one is operating on when one discusses symbols, substrates, or quale.
>>> In summary, I think a chief reason you have been talking past each other is
>>> because you are each operating on different assumed levels.
>>>
>>> Please correct me if you believe I am mistaken and know I only offer my
>>> perspective in the hope it might help the conversation.
>>>
>>
>> I think you’ve captured my position. But in addition I think replicating
>> the fine-grained causal organisation is not necessary in order to replicate
>> higher level phenomena such as GMK. By extension of Chalmers’ substitution
>> experiment,
>>
>
> Note that Chalmers's argument is based on assuming the functional
> substitution occurs at a certain level of fine-grained-ness. If you lose
> this step, and look at only the top-most input-output of the mind as black
> box, then you can no longer distinguish a rock from a dreaming person, nor
> a calculator computing 2+3 from a human computing 2+3, and one also runs
> into the Blockhead "lookup table" argument against functionalism.
>

Yes, those are perhaps problems with functionalism. But a major point in
Chalmers' argument is that if qualia were substrate-specific (hence,
functionalism false) it would be possible to make a partial zombie or an
entity whose consciousness and behaviour diverged from the point the
substitution was made. And this argument works not just by replacing the
neurons with silicon chips, but by replacing any part of the human with
anything that reproduces the interactions with the remaining parts.


> Accordingly, I think intermediate steps and the fine-grained organization
> are important (to some minimum level of fidelity) but as Bruno would say,
> we can never be certain what this necessary substitution level is. Is it
> neocortical columns, is it the connectome, is it the proteome, is it the
> molecules and atoms, is it QFT? Chalmers argues that at least at the level
> where noise introduces deviations in a brain simulation, simulating lower
> levels should not be necessary, as human consciousness appears robust to
> such noise at low levels (photon strikes, Brownian motion, quantum
> uncertainties, etc.)
>

-- 
Stathis Papaioannou


Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
On Tue, May 23, 2023, 4:14 PM Terren Suydam  wrote:

>
>
> On Tue, May 23, 2023 at 2:27 PM Jason Resch  wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 1:15 PM Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 11:08 AM Dylan Distasio 
>>> wrote:
>>>
>>>
 And yes, I'm arguing that a true simulation (let's say for the sake of
 a thought experiment we were able to replicate every neural connection of a
 human being in code, including the connectomes, and neurotransmitters,
 along with a simulated nerve that was connected to a button on the desk we
 could press which would simulate the signal sent when a biological pain
 receptor is triggered) would feel pain that is just as real as the pain you
 and I feel as biological organisms.

>>>
>>> This follows from the physicalist no-zombies-possible stance. But it
>>> still runs into the hard problem, basically. How does stuff give rise to
>>> experience.
>>>
>>>
>> I would say stuff doesn't give rise to conscious experience. Conscious
>> experience is the logically necessary and required state of knowledge that
>> is present in any consciousness-necessitating behaviors. If you design a
>> simple robot with a camera and robot arm that is able to reliably catch a
>> ball thrown in its general direction, then something in that system *must*
>> contain knowledge of the ball's relative position and trajectory. It simply
>> isn't logically possible to have a system that behaves in all situations as
>> if it knows where the ball is, without knowing where the ball is.
>> Consciousness is simply the state of being with knowledge.
>>
>> Con- "Latin for with"
>> -Scious- "Latin for knowledge"
>> -ness "English suffix meaning the state of being X"
>>
>> Consciousness -> The state of being with knowledge.
>>
>> There is an infinite variety of potential states and levels of knowledge,
>> and this contributes to much of the confusion, but boiled down to the
>> simplest essence of what is or isn't conscious, it is all about knowledge
>> states. Knowledge states require activity/reactivity to the presence of
>> information, and counterfactual behaviors (if/then, greater than less than,
>> discriminations and comparisons that lead to different downstream
>> consequences in a system's behavior). At least, this is my theory of
>> consciousness.
>>
>> Jason
>>
>
> This still runs into the valence problem though. Why does some "knowledge"
> correspond with a positive *feeling* and other knowledge with a negative
> feeling?
>

That is a great question, though I'm not sure it's fundamentally insoluble
within a model where every conscious state is a particular state of knowledge.

I would propose that having positive and negative experiences, i.e. pain or
pleasure, requires knowledge states with a certain minimum degree of
sophistication. For example:

Pain being associated with knowledge states such as: "I don't like this,
this is bad, I'm in pain, I want to change my situation."

Pleasure being associated with knowledge states such as: "This is good for
me, I could use more of this, I don't want this to end."

Such knowledge states require a degree of reflexive awareness, to have a
notion of a self where some outcomes may be either positive or negative to
that self, and perhaps some notion of time or sufficient agency to be
able to change one's situation.

Some have argued that plants can't feel pain because there's little they
can do to change their situation (though I'm agnostic on this).

> I'm not talking about the functional accounts of positive and negative
> experiences. I'm talking about phenomenology. The functional aspect of it
> is not irrelevant, but to focus *only* on that is to sweep the feeling
> under the rug. So many dialogs on this topic basically terminate here,
> where it's just a clash of belief about the relative importance of
> consciousness and phenomenology as the mediator of all experience and
> knowledge.
>

You raise important questions which no complete theory of consciousness
should ignore. I think one reason things break down here is because there's
such incredible complexity behind and underlying the states of
consciousness we humans perceive and no easy way to communicate all the
salient properties of those experiences.

Jason



Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
On Tue, May 23, 2023, 3:50 PM Terren Suydam  wrote:

>
>
> On Tue, May 23, 2023 at 1:46 PM Jason Resch  wrote:
>
>>
>>
>> On Tue, May 23, 2023, 9:34 AM Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 7:09 AM Jason Resch 
>>> wrote:
>>>
 As I see this thread, Terren and Stathis are both talking past each
 other. Please either of you correct me if I am wrong, but in an effort to
 clarify and perhaps resolve this situation:

 I believe Stathis is saying the functional substitution having the same
 fine-grained causal organization *would* have the same phenomenology, the
 same experience, and the same qualia as the brain with the same
 fine-grained causal organization.

 Therefore, there is no disagreement between your positions with regards
 to symbol groundings, mappings, etc.

 When you both discuss the problem of symbology, or bits, etc. I believe
 this is partly responsible for why you are both talking past each other,
 because there are many levels involved in brains (and computational
 systems). I believe you were discussing completely different levels in the
 hierarchical organization.

 There are high-level parts of minds, such as ideas, thoughts, feelings,
 quale, etc. and there are low-level, be they neurons, neurotransmitters,
 atoms, quantum fields, and laws of physics as in human brains, or circuits,
 logic gates, bits, and instructions as in computers.

 I think when Terren mentions a "symbol for the smell of grandmother's
 kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
 or idea or memory of the smell of GMK is a very high-level feature of a
 mind. When Terren asks for or discusses a symbol for it, a complete
 answer/description for it can only be supplied in terms of a vast amount of
 information concerning low level structures, be they patterns of neuron
 firings, or patterns of bits being processed. When we consider things down
 at this low level, however, we lose all context for what the meaning, idea,
 and quale are or where or how they come in. We cannot see or find the idea
 of GMK in any neuron, any more than we can see or find it in any bit.

 Of course then it should seem deeply mysterious, if not impossible, how
 we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
 leap from how we get "it" from a bunch of cells squirting ions back and
 forth. Trying to understand a smartphone by looking at the flows of
 electrons is a similar kind of problem, it would seem just as difficult or
 impossible to explain and understand the high-level features and complexity
 out of the low-level simplicity.

 This is why it's crucial to bear in mind and explicitly discuss the
 level one is operating on when one discusses symbols, substrates, or quale.
 In summary, I think a chief reason you have been talking past each other is
 because you are each operating on different assumed levels.

 Please correct me if you believe I am mistaken and know I only offer my
 perspective in the hope it might help the conversation.

>>>
>>> I appreciate the callout, but it is necessary to talk at both the micro
>>> and the macro for this discussion. We're talking about symbol grounding. I
>>> should make it clear that I don't believe symbols can be grounded in other
>>> symbols (i.e. symbols all the way down as Stathis put it), that leads to
>>> infinite regress and the illusion of meaning.  Symbols ultimately must
>>> stand for something. The only thing they can stand *for*, ultimately,
>>> is something that cannot be communicated by other symbols: conscious
>>> experience. There is no concept in our brains that is not ultimately
>>> connected to something we've seen, heard, felt, smelled, or tasted.
>>>
>>
>> I agree everything you have experienced is rooted in consciousness.
>>
>> But at the low level, the only thing your brain senses is neural
>> signals (symbols, on/off, ones and zeros).
>>
>> In your arguments you rely on the high-level conscious states of human
>> brains to establish that they have grounding, but then use the low-level
>> descriptions of machines to deny their own consciousness, and hence deny
>> they can ground their processing to anything.
>>
>> If you remained in the space of low-level descriptions for both brains
>> and machine intelligences, however, you would see each struggles to make a
>> connection to what may exist at the high level. You would see the lack of
>> any apparent grounding in what are just neurons firing or not firing at
>> certain times. Just as a wire in a circuit either carries or doesn't carry
>> a charge.
>>
>
> Ah, I see your point now. That's valid, thanks for raising it and let me
> clarify.
>

I appreciate that thank you.


> Bringing this back to LLMs, it's clear to me that LLMs do not have
> 

Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
If I had confidence that my answers to your questions would be met with
anything but a "defend/destroy" mentality I'd go there with you. It's gotta
be fun for me, and you're not someone I enjoy getting into it with. Not
trying to be insulting, but it's the truth.

On Tue, May 23, 2023 at 4:33 PM John Clark  wrote:

> On Tue, May 23, 2023  Terren Suydam  wrote:
>
> *> reality is fundamentally consciousness. *
>
>
> Then why does a simple physical molecule like N2O stop
> consciousness temporarily and another simple physical molecule like CN- do
> so permanently?
>
>
>> *> Why does some "knowledge" correspond with a positive feeling and other
>> knowledge with a negative feeling?*
>
>
> Because sometimes new knowledge requires you to re-organize hundreds of
> other important concepts you already had in your brain and that could be
> difficult and depending on circumstances may endanger or benefit your
> mental health.
>
> John K Clark    See what's on my new list at Extropolis
>
>



Re: what chatGPT is and is not

2023-05-23 Thread John Clark
On Tue, May 23, 2023 at 4:30 PM Terren Suydam 
wrote:

>> If I could instantly stop all physical processes that are going on
>> inside your head for one year and then start them up again, to an
>> outside objective observer you would appear to lose consciousness for one
>> year, but to you your consciousness would still feel continuous but the
>> outside world would appear to have discontinuously jumped to something
>> new.
>>
>
> *> I meant continuous in terms of the flow of state from one moment to the
> next. What you're describing is continuous because it's not the passage of
> time that needs to be continuous, but the state of information in the model
> as the physical processes evolve*.
>

Sorry but it's not at all clear to me what you're talking about. If the
state of information is not evolving in time then what in the world is it
evolving in?!  If nothing changes then nothing can evolve, and the very
definition of time stopping is that nothing changes and nothing evolves.

John K Clark    See what's on my new list at Extropolis



Re: what chatGPT is and is not

2023-05-23 Thread John Clark
On Tue, May 23, 2023  Terren Suydam  wrote:

*> reality is fundamentally consciousness. *


Then why does a simple physical molecule like N2O stop consciousness
temporarily and another simple physical molecule like CN- do so
permanently?


> *> Why does some "knowledge" correspond with a positive feeling and other
> knowledge with a negative feeling?*


Because sometimes new knowledge requires you to re-organize hundreds of
other important concepts you already had in your brain and that could be
difficult and depending on circumstances may endanger or benefit your
mental health.

John K Clark    See what's on my new list at Extropolis


>



Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 4:17 PM John Clark  wrote:

> On Tue, May 23, 2023 at 3:50 PM Terren Suydam 
> wrote:
>
>
>> * > in my view, consciousness entails a continuous flow of experience.*
>>
>
> If I could instantly stop all physical processes that are going on inside
> your head for one year and then start them up again, to an outside
> objective observer you would appear to lose consciousness for one year, but
> to you your consciousness would still feel continuous but the outside world
> would appear to have discontinuously jumped to something new.
>

I meant continuous in terms of the flow of state from one moment to the
next. What you're describing *is* continuous because it's not the passage
of time that needs to be continuous, but the state of information in the
model as the physical processes evolve. And my understanding is that in an
LLM, each new query starts from the same state... it does not evolve in
time.
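
To make the contrast concrete, here is a minimal Python sketch of the
distinction being drawn: a stand-in "frozen" responder whose output depends
only on the current prompt, versus a system whose internal state evolves with
each interaction. The class names and toy logic are illustrative assumptions
only, not real LLM or API code.

class StatelessResponder:
    """Stands in for a frozen LLM: the reply depends only on the current prompt."""

    def __init__(self, weights: int):
        self.weights = weights            # fixed after training; never mutated

    def reply(self, prompt: str) -> str:
        # Same prompt -> same reply, regardless of what came before.
        return f"echo[{self.weights}]: {prompt[::-1]}"


class StatefulResponder:
    """Stands in for a system whose state evolves from one moment to the next."""

    def __init__(self):
        self.state = 0                    # internal state that persists and evolves

    def reply(self, prompt: str) -> str:
        self.state += len(prompt)         # the interaction itself changes the system
        return f"echo[state={self.state}]: {prompt[::-1]}"


if __name__ == "__main__":
    frozen = StatelessResponder(weights=42)
    living = StatefulResponder()
    for prompt in ["hello", "hello"]:
        print(frozen.reply(prompt))       # identical both times
        print(living.reply(prompt))       # differs: state evolved between calls

On this picture, the repeated identical replies are the sense in which "each
new query starts from the same state."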

Terren


>
> John K Clark    See what's on my new list at Extropolis
>
>
>
>



Re: what chatGPT is and is not

2023-05-23 Thread John Clark
On Tue, May 23, 2023 at 3:50 PM Terren Suydam 
wrote:


> * > in my view, consciousness entails a continuous flow of experience.*
>

If I could instantly stop all physical processes that are going on inside
your head for one year and then start them up again, to an outside
objective observer you would appear to lose consciousness for one year, but
to you your consciousness would still feel continuous but the outside world
would appear to have discontinuously jumped to something new.

John K Clark    See what's on my new list at Extropolis




>



Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 2:27 PM Jason Resch  wrote:

>
>
> On Tue, May 23, 2023 at 1:15 PM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 11:08 AM Dylan Distasio 
>> wrote:
>>
>>
>>> And yes, I'm arguing that a true simulation (let's say for the sake of a
>>> thought experiment we were able to replicate every neural connection of a
>>> human being in code, including the connectomes, and neurotransmitters,
>>> along with a simulated nerve that was connected to a button on the desk we
>>> could press which would simulate the signal sent when a biological pain
>>> receptor is triggered) would feel pain that is just as real as the pain you
>>> and I feel as biological organisms.
>>>
>>
>> This follows from the physicalist no-zombies-possible stance. But it
>> still runs into the hard problem, basically. How does stuff give rise to
>> experience.
>>
>>
> I would say stuff doesn't give rise to conscious experience. Conscious
> experience is the logically necessary and required state of knowledge that
> is present in any consciousness-necessitating behaviors. If you design a
> simple robot with a camera and robot arm that is able to reliably catch a
> ball thrown in its general direction, then something in that system *must*
> contain knowledge of the ball's relative position and trajectory. It simply
> isn't logically possible to have a system that behaves in all situations as
> if it knows where the ball is, without knowing where the ball is.
> Consciousness is simply the state of being with knowledge.
>
> Con- "Latin for with"
> -Scious- "Latin for knowledge"
> -ness "English suffix meaning the state of being X"
>
> Consciousness -> The state of being with knowledge.
>
> There is an infinite variety of potential states and levels of knowledge,
> and this contributes to much of the confusion, but boiled down to the
> simplest essence of what is or isn't conscious, it is all about knowledge
> states. Knowledge states require activity/reactivity to the presence of
> information, and counterfactual behaviors (if/then, greater than less than,
> discriminations and comparisons that lead to different downstream
> consequences in a system's behavior). At least, this is my theory of
> consciousness.
>
> Jason
>

This still runs into the valence problem though. Why does some "knowledge"
correspond with a positive *feeling* and other knowledge with a negative
feeling?  I'm not talking about the functional accounts of positive and
negative experiences. I'm talking about phenomenology. The functional
aspect of it is not irrelevant, but to focus *only* on that is to sweep the
feeling under the rug. So many dialogs on this topic basically terminate
here, where it's just a clash of belief about the relative importance of
consciousness and phenomenology as the mediator of all experience and
knowledge.

Terren





Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 2:05 PM Jesse Mazer  wrote:

>
>
> On Tue, May 23, 2023 at 9:34 AM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 7:09 AM Jason Resch  wrote:
>>
>>> As I see this thread, Terren and Stathis are both talking past each
>>> other. Please either of you correct me if I am wrong, but in an effort to
>>> clarify and perhaps resolve this situation:
>>>
>>> I believe Stathis is saying the functional substitution having the same
>>> fine-grained causal organization *would* have the same phenomenology, the
>>> same experience, and the same qualia as the brain with the same
>>> fine-grained causal organization.
>>>
>>> Therefore, there is no disagreement between your positions with regards
>>> to symbol groundings, mappings, etc.
>>>
>>> When you both discuss the problem of symbology, or bits, etc. I believe
>>> this is partly responsible for why you are both talking past each other,
>>> because there are many levels involved in brains (and computational
>>> systems). I believe you were discussing completely different levels in the
>>> hierarchical organization.
>>>
>>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>>> quale, etc. and there are low-level, be they neurons, neurotransmitters,
>>> atoms, quantum fields, and laws of physics as in human brains, or circuits,
>>> logic gates, bits, and instructions as in computers.
>>>
>>> I think when Terren mentions a "symbol for the smell of grandmother's
>>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>>> or idea or memory of the smell of GMK is a very high-level feature of a
>>> mind. When Terren asks for or discusses a symbol for it, a complete
>>> answer/description for it can only be supplied in terms of a vast amount of
>>> information concerning low level structures, be they patterns of neuron
>>> firings, or patterns of bits being processed. When we consider things down
>>> at this low level, however, we lose all context for what the meaning, idea,
>>> and quale are or where or how they come in. We cannot see or find the idea
>>> of GMK in any neuron, any more than we can see or find it in any bit.
>>>
>>> Of course then it should seem deeply mysterious, if not impossible, how
>>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>>> leap from how we get "it" from a bunch of cells squirting ions back and
>>> forth. Trying to understand a smartphone by looking at the flows of
>>> electrons is a similar kind of problem, it would seem just as difficult or
>>> impossible to explain and understand the high-level features and complexity
>>> out of the low-level simplicity.
>>>
>>> This is why it's crucial to bear in mind and explicitly discuss the
>>> level one is operating on when one discusses symbols, substrates, or quale.
>>> In summary, I think a chief reason you have been talking past each other is
>>> because you are each operating on different assumed levels.
>>>
>>> Please correct me if you believe I am mistaken and know I only offer my
>>> perspective in the hope it might help the conversation.
>>>
>>
>> I appreciate the callout, but it is necessary to talk at both the micro
>> and the macro for this discussion. We're talking about symbol grounding. I
>> should make it clear that I don't believe symbols can be grounded in other
>> symbols (i.e. symbols all the way down as Stathis put it), that leads to
>> infinite regress and the illusion of meaning.  Symbols ultimately must
>> stand for something. The only thing they can stand *for*, ultimately, is
>> something that cannot be communicated by other symbols: conscious
>> experience. There is no concept in our brains that is not ultimately
>> connected to something we've seen, heard, felt, smelled, or tasted.
>>
>> In my experience with conversations like this, you usually have people on
>> one side who take consciousness seriously as the only thing that is
>> actually undeniable, and you have people who'd rather not talk about it,
>> hand-wave it away, or outright deny it. That's the talking-past that
>> usually happens, and that's what's happening here.
>>
>> Terren
>>
>
> But are you talking specifically about symbols with high-level meaning
> like the words humans use in ordinary language, which large language models
> like ChatGPT are trained on? Or are you talking more generally about any
> kinds of symbols, including something like the 1s and 0s in a giant
> computer that was performing an extremely detailed simulation of a physical
> world, perhaps down to the level of particle physics, where that simulation
> could include things like detailed physical simulations of things in
> external environment (a flower, say) and components of a simulated
> biological organism with a nervous system (with particle-level simulations
> of neurons etc.)? Would you say that even in the case of the detailed
> physics simulation, nothing in there could ever give rise to conscious
> experience like 

Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 1:46 PM Jason Resch  wrote:

>
>
> On Tue, May 23, 2023, 9:34 AM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 7:09 AM Jason Resch  wrote:
>>
>>> As I see this thread, Terren and Stathis are both talking past each
>>> other. Please either of you correct me if I am wrong, but in an effort to
>>> clarify and perhaps resolve this situation:
>>>
>>> I believe Stathis is saying the functional substitution having the same
>>> fine-grained causal organization *would* have the same phenomenology, the
>>> same experience, and the same qualia as the brain with the same
>>> fine-grained causal organization.
>>>
>>> Therefore, there is no disagreement between your positions with regards
>>> to symbol groundings, mappings, etc.
>>>
>>> When you both discuss the problem of symbology, or bits, etc. I believe
>>> this is partly responsible for why you are both talking past each other,
>>> because there are many levels involved in brains (and computational
>>> systems). I believe you were discussing completely different levels in the
>>> hierarchical organization.
>>>
>>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>>> quale, etc. and there are low-level, be they neurons, neurotransmitters,
>>> atoms, quantum fields, and laws of physics as in human brains, or circuits,
>>> logic gates, bits, and instructions as in computers.
>>>
>>> I think when Terren mentions a "symbol for the smell of grandmother's
>>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>>> or idea or memory of the smell of GMK is a very high-level feature of a
>>> mind. When Terren asks for or discusses a symbol for it, a complete
>>> answer/description for it can only be supplied in terms of a vast amount of
>>> information concerning low level structures, be they patterns of neuron
>>> firings, or patterns of bits being processed. When we consider things down
>>> at this low level, however, we lose all context for what the meaning, idea,
>>> and quale are or where or how they come in. We cannot see or find the idea
>>> of GMK in any neuron, any more than we can see or find it in any bit.
>>>
>>> Of course then it should seem deeply mysterious, if not impossible, how
>>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>>> leap from how we get "it" from a bunch of cells squirting ions back and
>>> forth. Trying to understand a smartphone by looking at the flows of
>>> electrons is a similar kind of problem, it would seem just as difficult or
>>> impossible to explain and understand the high-level features and complexity
>>> out of the low-level simplicity.
>>>
>>> This is why it's crucial to bear in mind and explicitly discuss the
>>> level one is operating on when one discusses symbols, substrates, or quale.
>>> In summary, I think a chief reason you have been talking past each other is
>>> because you are each operating on different assumed levels.
>>>
>>> Please correct me if you believe I am mistaken and know I only offer my
>>> perspective in the hope it might help the conversation.
>>>
>>
>> I appreciate the callout, but it is necessary to talk at both the micro
>> and the macro for this discussion. We're talking about symbol grounding. I
>> should make it clear that I don't believe symbols can be grounded in other
>> symbols (i.e. symbols all the way down as Stathis put it), that leads to
>> infinite regress and the illusion of meaning.  Symbols ultimately must
>> stand for something. The only thing they can stand *for*, ultimately, is
>> something that cannot be communicated by other symbols: conscious
>> experience. There is no concept in our brains that is not ultimately
>> connected to something we've seen, heard, felt, smelled, or tasted.
>>
>
> I agree everything you have experienced is rooted in consciousness.
>
> But at the low level, the only thing your brain senses is neural signals
> (symbols, on/off, ones and zeros).
>
> In your arguments you rely on the high-level conscious states of human
> brains to establish that they have grounding, but then use the low-level
> descriptions of machines to deny their own consciousness, and hence deny
> they can ground their processing to anything.
>
> If you remained in the space of low-level descriptions for both brains and
> machine intelligences, however, you would see each struggles to make a
> connection to what may exist at the high level. You would see the lack of
> any apparent grounding in what are just neurons firing or not firing at
> certain times. Just as a wire in a circuit either carries or doesn't carry
> a charge.
>

Ah, I see your point now. That's valid, thanks for raising it and let me
clarify.

Bringing this back to LLMs, it's clear to me that LLMs do not have
phenomenal experience, but you're right to insist that I explain why I
think so. I don't know if this amounts to a theory of consciousness, but
the reason I believe that LLMs are not conscious is 

Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
On Tue, May 23, 2023 at 1:15 PM Terren Suydam 
wrote:

>
>
> On Tue, May 23, 2023 at 11:08 AM Dylan Distasio 
> wrote:
>
>
>> And yes, I'm arguing that a true simulation (let's say for the sake of a
>> thought experiment we were able to replicate every neural connection of a
>> human being in code, including the connectomes, and neurotransmitters,
>> along with a simulated nerve that was connected to a button on the desk we
>> could press which would simulate the signal sent when a biological pain
>> receptor is triggered) would feel pain that is just as real as the pain you
>> and I feel as biological organisms.
>>
>
> This follows from the physicalist no-zombies-possible stance. But it still
> runs into the hard problem, basically. How does stuff give rise to
> experience.
>
>
I would say stuff doesn't give rise to conscious experience. Conscious
experience is the logically necessary and required state of knowledge that
is present in any consciousness-necessitating behaviors. If you design a
simple robot with a camera and robot arm that is able to reliably catch a
ball thrown in its general direction, then something in that system *must*
contain knowledge of the ball's relative position and trajectory. It simply
isn't logically possible to have a system that behaves in all situations as
if it knows where the ball is, without knowing where the ball is.
Consciousness is simply the state of being with knowledge.

Con- "Latin for with"
-Scious- "Latin for knowledge"
-ness "English suffix meaning the state of being X"

Consciousness -> The state of being with knowledge.

There is an infinite variety of potential states and levels of knowledge,
and this contributes to much of the confusion, but boiled down to the
simplest essence of what is or isn't conscious, it is all about knowledge
states. Knowledge states require activity/reactivity to the presence of
information, and counterfactual behaviors (if/then, greater than less than,
discriminations and comparisons that lead to different downstream
consequences in a system's behavior). At least, this is my theory of
consciousness.
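
A toy numerical sketch of the ball-catching point (a construction of my own,
with made-up dynamics and parameters, not anything from the thread): a
controller can only reliably end up where the ball is if some internal
variable tracks the ball, which is the minimal "knowledge state", and the
if/then branch on that variable is the counterfactual discrimination
described above.

import random

def run_trial(steps: int = 20) -> bool:
    ball_pos = random.uniform(0.0, 10.0)      # where the ball starts
    ball_vel = random.uniform(-0.5, 0.5)      # constant drift of the ball
    hand = 5.0                                # the catcher's hand position
    for _ in range(steps):
        ball_pos += ball_vel                  # the world evolves
        estimate = ball_pos + ball_vel        # knowledge state: predicted next position
        if estimate > hand:                   # counterfactual discrimination:
            hand += min(0.8, estimate - hand) # move right when the ball is to the right
        elif estimate < hand:
            hand -= min(0.8, hand - estimate) # move left when the ball is to the left
    return abs(hand - ball_pos) < 0.5         # close enough to count as a catch

if __name__ == "__main__":
    trials = 1000
    rate = sum(run_trial() for _ in range(trials)) / trials
    print(f"catch rate when the controller tracks the ball: {rate:.2f}")

Remove the estimate variable and the branches have nothing to compare the
hand against, and the catch rate drops to whatever chance alone provides.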

Jason



Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
On Tue, May 23, 2023 at 1:12 PM John Clark  wrote:

> On Tue, May 23, 2023  Terren Suydam  wrote:
>
> *> What was the biochemical or neural change that suddenly birthed the
>> feeling of pain? *
>
>
> It would not be difficult to make a circuit such that whenever a
> specific binary sequence of zeros and ones is in a register the circuit
> stops doing everything else and changes that sequence to something else
> as fast as possible. As I've said before, intelligence is hard but
> emotion is easy.
>

I believe I have made simple neural networks that are conscious and can
experience both pleasure and displeasure, insofar as they have evolved to
learn and apply multiple and various strategies for both attraction and
avoidance behaviors. They can achieve this even with just 16
artificial neurons and within only a dozen generations of simulated
evolution:

https://github.com/jasonkresch/bots

https://www.youtube.com/watch?v=InBsqlWQTts&list=PLq_mdJjNRPT11IF4NFyLcIWJ1C0Z3hTAX&index=2

I am of course interested in hearing any arguments for why these bots are
either capable of some primitive sensation or not.
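
For readers who want the flavor without opening the repository, here is a
compressed sketch in the same spirit: a small fixed-topology neural net is
evolved over a dozen generations so that an agent approaches "food" and avoids
"poison". It is not the code from the linked repo; the network size, fitness
function, and all parameters are illustrative assumptions.

import math
import random

def act(weights, inputs):
    # One hidden layer of 4 tanh units feeding a single motor output in [-1, 1].
    w_in, w_out = weights
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in w_in]
    return math.tanh(sum(w * h for w, h in zip(w_out, hidden)))

def fitness(weights, trials=30):
    score = 0.0
    for _ in range(trials):
        valence = random.choice([1.0, -1.0])   # +1 food (approach), -1 poison (avoid)
        loc = random.uniform(-2.0, 2.0)        # where the object sits
        pos = 0.0
        for _ in range(10):
            pos += 0.5 * act(weights, [loc - pos, valence])
        distance = abs(pos - loc)
        score += -distance if valence > 0 else distance
    return score

def random_weights():
    w_in = [[random.gauss(0, 1) for _ in range(2)] for _ in range(4)]
    w_out = [random.gauss(0, 1) for _ in range(4)]
    return (w_in, w_out)

def mutate(weights, sigma=0.3):
    w_in, w_out = weights
    return ([[w + random.gauss(0, sigma) for w in row] for row in w_in],
            [w + random.gauss(0, sigma) for w in w_out])

if __name__ == "__main__":
    population = [random_weights() for _ in range(20)]
    for generation in range(12):               # "a dozen generations"
        population.sort(key=fitness, reverse=True)
        print(f"gen {generation:2d}  best fitness {fitness(population[0]):7.2f}")
        survivors = population[:5]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]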

Jason



Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 11:08 AM Dylan Distasio  wrote:

> Let me start out by saying I don't believe in zombies.   We are
> biophysical systems with a long history of building on and repurposing
> earlier systems of genes and associated proteins.   I saw you don't believe
> it is symbols all the way down.   I agree with you, but I am arguing the
> beginning of that chain of symbols for many things begins with sensory
> input and ends with a higher level symbol/abstraction, particularly in
> fully conscious animals like human beings that are self aware and capable
> of an inner dialogue.
>

Of course, and hopefully I was clear that I meant symbols are *ultimately*
grounded in phenomenal experience, but yes, there are surely many layers to
get there depending on the concept.

>
> An earlier example I gave of someone born blind results in someone with no
> concept of red or any color for that matter, or images, and so on.   I
> don't believe redness is hiding in some molecule in the brain like Brent
> does.   It's only created via pruned neural networks based on someone who
> has sensory inputs working properly.   That's the beginning of the chain of
> symbols, but it starts with an electrical impulse sent down nerves from a
> sensory organ.
>

I agree, the idea of a certain kind of physical molecule transducing a quale
*by virtue of the properties of that physical molecule* is super
problematic. Like, if redness were inherent to glutamate, what about all
the other colors?  And sounds? And smells?  And textures?  Just how many
molecules would we need to represent the vast pantheon of possible qualia?


> It's the same thing with pain.  If a certain gene is screwed up related to
> a subset of sodium channels (which are critical for proper transmission of
> signals propagating along certain nerves), a human being is incapable of
> feeling pain.   I'd argue they don't know what pain is, just like a
> congenital blind person doesn't know what red is.   It's the same thing
> with hearing and music.   If a brain is missing that initial sensory input,
> your consciousness does not have the ability to feel the related subjective
> sensation.
>

No problem there.


> And yes, I'm arguing that a true simulation (let's say for the sake of a
> thought experiment we were able to replicate every neural connection of a
> human being in code, including the connectomes, and neurotransmitters,
> along with a simulated nerve that was connected to a button on the desk we
> could press which would simulate the signal sent when a biological pain
> receptor is triggered) would feel pain that is just as real as the pain you
> and I feel as biological organisms.
>

This follows from the physicalist no-zombies-possible stance. But it still
runs into the hard problem, basically. How does stuff give rise to
experience.


> You asked me for the principle behind how a critter could start having a
> negative feeling that didn't exist in its progenitors.   Again, I believe
> the answer is as simple as this: it happened when pain receptors evolved,
> perhaps starting as a random mutation whose induced behavior in lower
> organisms resulted in increased survival.
>

Before you said that you don't believe redness is hiding in a molecule. But
here, you're saying pain is hiding in a pain receptor, which is nothing
more or less than a protein molecule.


>   I'm not claiming to have solved the hard problem of consciousness.   I
> don't claim to have the answer for why pain subjectively feels the way it
> does, or why pleasure does, but I do know that reward systems that evolved
> much earlier are involved (like dopamine based ones), and that pleasure can
> be directly triggered via various recreational drugs.   That doesn't mean I
> think the dopamine molecule is where the pleasure qualia is hiding.
>

> Even lower forms of life like bacteria move towards what their limited
> sensory systems tell them is a reward and away from what it tells them is a
> danger.   I believe our subjective experiences are layered onto these much
> earlier evolutionary artifacts, although as eukaryotes I am not claiming
> that much of this is inherited from LUCA.   I think it blossomed once
> predator/prey dynamics were possible in the Cambrian explosion and was
> built on from there over many many years.
>

Bacteria can move towards or away from certain stimuli, but it doesn't
follow that they feel pain or pleasure as they do so. That is using
functionalism to sweep the hard problem under the rug.

Terren


> Getting slightly off topic, I don't think substrate likely matters as far
> as producing consciousness.   The only possible way I could see that it
> would is if quantum effects are actually involved in generating it that we
> can't reasonably replicate.   That said, I think Penrose and others do not
> have the odds on their side there for a number of reasons.
>
> Like I said though, I don't believe in zombies.
>
> On Tue, May 23, 2023 at 9:12 AM Terren 

Re: what chatGPT is and is not

2023-05-23 Thread John Clark
On Tue, May 23, 2023  Terren Suydam  wrote:

*> What was the biochemical or neural change that suddenly birthed the
> feeling of pain? *


It would not be difficult to make a circuit such that whenever a
specific binary sequence of zeros and ones is in a register the circuit
stops doing everything else and changes that sequence to something else as
fast as possible. As I've said before, intelligence is hard but emotion is
easy.
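
A toy software analogue of that circuit (my own construction, not John's; the
pattern and register width are arbitrary): one particular bit pattern is
treated as "pain", and whenever it appears the loop drops everything else and
rewrites it as fast as it can.

import random

PAIN_PATTERN = 0b1011          # the "unbearable" register contents (arbitrary)

def do_normal_work() -> int:
    # Ordinary activity: the register ends up with some new 4-bit value.
    return random.getrandbits(4)

def run(cycles: int = 20) -> None:
    register = 0
    for t in range(cycles):
        if register == PAIN_PATTERN:
            register = 0       # highest priority: change that state immediately
            print(f"t={t:2d}  pain pattern detected -> cleared before anything else")
        else:
            register = do_normal_work()
            print(f"t={t:2d}  working, register={register:04b}")

if __name__ == "__main__":
    run()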

*> I don't believe symbols can be grounded in other symbols*
>

But it would be easy to ground symbols with examples, such as the symbol
"2" with the number of shoes most people wear and the number of arms most
people have, and the symbol "greenness" with the thing that leaves and
emeralds and Harry Potter's eyes have in common.


* > There is no concept in our brains that is not ultimately connected to
> something we've seen, heard, felt, smelled, or tasted.*
>

And that's why examples are important but definitions are not.

John K Clark    See what's on my new list at Extropolis



Re: what chatGPT is and is not

2023-05-23 Thread Jesse Mazer
On Tue, May 23, 2023 at 9:34 AM Terren Suydam 
wrote:

>
>
> On Tue, May 23, 2023 at 7:09 AM Jason Resch  wrote:
>
>> As I see this thread, Terren and Stathis are both talking past each
>> other. Please either of you correct me if I am wrong, but in an effort to
>> clarify and perhaps resolve this situation:
>>
>> I believe Stathis is saying the functional substitution having the same
>> fine-grained causal organization *would* have the same phenomenology, the
>> same experience, and the same qualia as the brain with the same
>> fine-grained causal organization.
>>
>> Therefore, there is no disagreement between your positions with regards
>> to symbol groundings, mappings, etc.
>>
>> When you both discuss the problem of symbology, or bits, etc. I believe
>> this is partly responsible for why you are both talking past each other,
>> because there are many levels involved in brains (and computational
>> systems). I believe you were discussing completely different levels in the
>> hierarchical organization.
>>
>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>> quale, etc. and there are low-level, be they neurons, neurotransmitters,
>> atoms, quantum fields, and laws of physics as in human brains, or circuits,
>> logic gates, bits, and instructions as in computers.
>>
>> I think when Terren mentions a "symbol for the smell of grandmother's
>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>> or idea or memory of the smell of GMK is a very high-level feature of a
>> mind. When Terren asks for or discusses a symbol for it, a complete
>> answer/description for it can only be supplied in terms of a vast amount of
>> information concerning low level structures, be they patterns of neuron
>> firings, or patterns of bits being processed. When we consider things down
>> at this low level, however, we lose all context for what the meaning, idea,
>> and quale are or where or how they come in. We cannot see or find the idea
>> of GMK in any neuron, any more than we can see or find it in any bit.
>>
>> Of course then it should seem deeply mysterious, if not impossible, how
>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>> leap from how we get "it" from a bunch of cells squirting ions back and
>> forth. Trying to understand a smartphone by looking at the flows of
>> electrons is a similar kind of problem, it would seem just as difficult or
>> impossible to explain and understand the high-level features and complexity
>> out of the low-level simplicity.
>>
>> This is why it's crucial to bear in mind and explicitly discuss the level
>> one is operating on when one discusses symbols, substrates, or quale. In
>> summary, I think a chief reason you have been talking past each other is
>> because you are each operating on different assumed levels.
>>
>> Please correct me if you believe I am mistaken and know I only offer my
>> perspective in the hope it might help the conversation.
>>
>
> I appreciate the callout, but it is necessary to talk at both the micro
> and the macro for this discussion. We're talking about symbol grounding. I
> should make it clear that I don't believe symbols can be grounded in other
> symbols (i.e. symbols all the way down as Stathis put it), that leads to
> infinite regress and the illusion of meaning.  Symbols ultimately must
> stand for something. The only thing they can stand *for*, ultimately, is
> something that cannot be communicated by other symbols: conscious
> experience. There is no concept in our brains that is not ultimately
> connected to something we've seen, heard, felt, smelled, or tasted.
>
> In my experience with conversations like this, you usually have people on
> one side who take consciousness seriously as the only thing that is
> actually undeniable, and you have people who'd rather not talk about it,
> hand-wave it away, or outright deny it. That's the talking-past that
> usually happens, and that's what's happening here.
>
> Terren
>

But are you talking specifically about symbols with high-level meaning like
the words humans use in ordinary language, which large language models like
ChatGPT are trained on? Or are you talking more generally about any kinds
of symbols, including something like the 1s and 0s in a giant computer that
was performing an extremely detailed simulation of a physical world,
perhaps down to the level of particle physics, where that simulation could
include things like detailed physical simulations of things in external
environment (a flower, say) and components of a simulated biological
organism with a nervous system (with particle-level simulations of neurons
etc.)? Would you say that even in the case of the detailed physics
simulation, nothing in there could ever give rise to conscious experience
like our own?

Jesse





>
>
>>
>> Jason
>>
>> On Tue, May 23, 2023, 2:47 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 15:58, Terren Suydam 

Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou 
wrote:

>
>
> On Tue, 23 May 2023 at 21:09, Jason Resch  wrote:
>
>> As I see this thread, Terren and Stathis are both talking past each
>> other. Please either of you correct me if I am wrong, but in an effort to
>> clarify and perhaps resolve this situation:
>>
>> I believe Stathis is saying the functional substitution having the same
>> fine-grained causal organization *would* have the same phenomenology, the
>> same experience, and the same qualia as the brain with the same
>> fine-grained causal organization.
>>
>> Therefore, there is no disagreement between your positions with regards
>> to symbol grounding, mappings, etc.
>>
>> When you both discuss the problem of symbology, or bits, etc. I believe
>> this is partly responsible for why you are both talking past each other,
>> because there are many levels involved in brains (and computational
>> systems). I believe you were discussing completely different levels in the
>> hierarchical organization.
>>
>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>> quale, etc. and there are low-level, be they neurons, neurotransmitters,
>> atoms, quantum fields, and laws of physics as in human brains, or circuits,
>> logic gates, bits, and instructions as in computers.
>>
>> I think when Terren mentions a "symbol for the smell of grandmother's
>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>> or idea or memory of the smell of GMK is a very high-level feature of a
>> mind. When Terren asks for or discusses a symbol for it, a complete
>> answer/description for it can only be supplied in terms of a vast amount of
>> information concerning low level structures, be they patterns of neuron
>> firings, or patterns of bits being processed. When we consider things down
>> at this low level, however, we lose all context for what the meaning, idea,
>> and quale are or where or how they come in. We cannot see or find the idea
>> of GMK in any neuron, no more than we can see or find it in any bit.
>>
>> Of course then it should seem deeply mysterious, if not impossible, how
>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>> leap from how we get "it" from a bunch of cells squirting ions back and
>> forth. Trying to understand a smartphone by looking at the flows of
>> electrons is a similar kind of problem, it would seem just as difficult or
>> impossible to explain and understand the high-level features and complexity
>> out of the low-level simplicity.
>>
>> This is why it's crucial to bear in mind and explicitly discuss the level
>> one is operating on when one discusses symbols, substrates, or quale. In
>> summary, I think a chief reason you have been talking past each other is
>> because you are each operating on different assumed levels.
>>
>> Please correct me if you believe I am mistaken and know I only offer my
>> perspective in the hope it might help the conversation.
>>
>
> I think you’ve captured my position. But in addition I think replicating
> the fine-grained causal organisation is not necessary in order to replicate
> higher level phenomena such as GMK. By extension of Chalmers’ substitution
> experiment,
>

Note that Chalmers's argument is based on assuming the functional
substitution occurs at a certain level of fine-grainedness. If you lose
this step, and look only at the top-most input-output of the mind as a
black box, then you can no longer distinguish a rock from a dreaming
person, nor a calculator computing 2+3 from a human computing 2+3, and one
also runs into the Blockhead "lookup table" argument against functionalism.
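
To make that contrast concrete, here is a minimal, illustrative Python
sketch (the function and table names are invented for the example): the two
systems below are indistinguishable at the top-most input-output level, yet
only one has any intermediate causal organization:

# Two "adders" with identical black-box behaviour but different internal
# causal organization.

def computing_adder(a: int, b: int) -> int:
    """Actually computes the sum, bit by bit, with carry propagation."""
    result, carry = 0, 0
    for bit in range(8):  # toy 8-bit adder
        x = (a >> bit) & 1
        y = (b >> bit) & 1
        s = x ^ y ^ carry
        carry = (x & y) | (carry & (x ^ y))
        result |= s << bit
    return result

# Blockhead-style system: every input is paired with a precomputed answer,
# so there are no intermediate steps at all.
LOOKUP_TABLE = {(a, b): a + b for a in range(128) for b in range(128)}

def blockhead_adder(a: int, b: int) -> int:
    return LOOKUP_TABLE[(a, b)]

# Viewed only as black boxes, the two cannot be told apart:
assert all(computing_adder(a, b) == blockhead_adder(a, b)
           for a in range(128) for b in range(128))

A fine-grained functional substitution preserves the intermediate steps of
the first system; the lookup table reproduces only the outer behaviour,
which is exactly the loophole the Blockhead argument exploits.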

Accordingly, I think intermediate steps and the fine-grained organization
are important (to some minimum level of fidelity), but as Bruno would say,
we can never be certain what this necessary substitution level is. Is it
neocortical columns, is it the connectome, is it the proteome, is it the
molecules and atoms, is it QFT? Chalmers argues that once a simulation is
fine-grained enough that its remaining deviations are at the level of
noise, simulating lower levels should not be necessary, as human
consciousness appears robust to such noise at low levels (photon strikes,
Brownian motion, quantum uncertainties, etc.).


> replicating the behaviour of the human through any means, such as training
> an AI not only on language but also movement, would also preserve
> consciousness, even though it does not simulate any physiological
> processes. Another way to say this is that it is not possible to make a
> philosophical zombie.
>

I agree zombies are impossible. I think they are even logically impossible.

Jason


Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
On Tue, May 23, 2023, 9:34 AM Terren Suydam  wrote:

>
>
> On Tue, May 23, 2023 at 7:09 AM Jason Resch  wrote:
>
>> As I see this thread, Terren and Stathis are both talking past each
>> other. Please either of you correct me if I am wrong, but in an effort to
>> clarify and perhaps resolve this situation:
>>
>> I believe Stathis is saying the functional substitution having the same
>> fine-grained causal organization *would* have the same phenomenology, the
>> same experience, and the same qualia as the brain with the same
>> fine-grained causal organization.
>>
>> Therefore, there is no disagreement between your positions with regards
>> to symbol grounding, mappings, etc.
>>
>> When you both discuss the problem of symbology, or bits, etc. I believe
>> this is partly responsible for why you are both talking past each other,
>> because there are many levels involved in brains (and computational
>> systems). I believe you were discussing completely different levels in the
>> hierarchical organization.
>>
>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>> quale, etc. and there are low-level, be they neurons, neurotransmitters,
>> atoms, quantum fields, and laws of physics as in human brains, or circuits,
>> logic gates, bits, and instructions as in computers.
>>
>> I think when Terren mentions a "symbol for the smell of grandmother's
>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>> or idea or memory of the smell of GMK is a very high-level feature of a
>> mind. When Terren asks for or discusses a symbol for it, a complete
>> answer/description for it can only be supplied in terms of a vast amount of
>> information concerning low level structures, be they patterns of neuron
>> firings, or patterns of bits being processed. When we consider things down
>> at this low level, however, we lose all context for what the meaning, idea,
>> and quale are or where or how they come in. We cannot see or find the idea
>> of GMK in any neuron, no more than we can see or find it in any bit.
>>
>> Of course then it should seem deeply mysterious, if not impossible, how
>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>> leap from how we get "it" from a bunch of cells squirting ions back and
>> forth. Trying to understand a smartphone by looking at the flows of
>> electrons is a similar kind of problem, it would seem just as difficult or
>> impossible to explain and understand the high-level features and complexity
>> out of the low-level simplicity.
>>
>> This is why it's crucial to bear in mind and explicitly discuss the level
>> one is operating on when one discusses symbols, substrates, or quale. In
>> summary, I think a chief reason you have been talking past each other is
>> because you are each operating on different assumed levels.
>>
>> Please correct me if you believe I am mistaken and know I only offer my
>> perspective in the hope it might help the conversation.
>>
>
> I appreciate the callout, but it is necessary to talk at both the micro
> and the macro for this discussion. We're talking about symbol grounding. I
> should make it clear that I don't believe symbols can be grounded in other
> symbols (i.e. symbols all the way down as Stathis put it), that leads to
> infinite regress and the illusion of meaning.  Symbols ultimately must
> stand for something. The only thing they can stand *for*, ultimately, is
> something that cannot be communicated by other symbols: conscious
> experience. There is no concept in our brains that is not ultimately
> connected to something we've seen, heard, felt, smelled, or tasted.
>

I agree everything you have experienced is rooted in consciousness.

But at the low level, the only things your brain senses are neural signals
(symbols, on/off, ones and zeros).

In your arguments you rely on the high-level conscious states of human
brains to establish that they have grounding, but then use the low-level
descriptions of machines to deny them consciousness, and hence to deny that
they can ground their processing in anything.

If you remained in the space of low-level descriptions for both brains and
machine intelligences, however, you would see that each struggles to make a
connection to what may exist at the high level. You would see the lack of
any apparent grounding in what are just neurons firing or not firing at
certain times. Just as a wire in a circuit either carries or doesn't carry
a charge.
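
A minimal sketch of the point, in Python (purely illustrative, with
invented names and made-up numbers): described at this low level, a neuron
and a circuit element collapse into the same kind of object, a weighted sum
and a threshold.

# One low-level description that fits both readings: inputs, weights, and
# a threshold.

def unit_fires(inputs, weights, threshold):
    """Fire (1) if and only if the weighted sum reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Read as a neuron: presynaptic spikes and synaptic strengths.
print("neuron:", unit_fires([1, 0, 1], [0.6, 0.9, 0.5], threshold=1.0))  # 1

# Read as a wire/logic gate: the very same function implements AND.
print("AND(1,1):", unit_fires([1, 1], [1, 1], threshold=2))  # 1
print("AND(1,0):", unit_fires([1, 0], [1, 1], threshold=2))  # 0

Nothing in that description refers to a smell, an idea, or a grandmother's
kitchen; the grounding question looks equally puzzling for either substrate
when posed at this level.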

Conversely, if you stay in the high-level realm of conscious ideas, well
then you must face the problem of other minds. You know you are conscious,
but you cannot prove or disprove the consciousness of others, at least not
without first defining a theory of consciousness and explaining why some
minds satisfy the definition and others do not. Until you present a theory
of consciousness, this conversation is, I am afraid, doomed to continue in
this circle forever.

This same conversation and outcome played out 

Embodied AI ...Robots

2023-05-23 Thread John Clark
Embodied AI ...Robots


John K Clark. See what's on my new list at Extropolis



Re: what chatGPT is and is not

2023-05-23 Thread Dylan Distasio
Let me start out by saying I don't believe in zombies.   We are biophysical
systems with a long history of building on and repurposing earlier systems
of genes and associated proteins.   I saw you don't believe it is symbols
all the way down.   I agree with you, but I am arguing that the chain of
symbols for many things begins with sensory input and ends with a
higher-level symbol/abstraction, particularly in fully conscious animals
like human beings that are self-aware and capable of an inner dialogue.

In an earlier example I gave, someone born blind ends up with no concept of
red, or of any color for that matter, or of images, and so on.   I don't
believe redness is hiding in some molecule in the brain, as Brent does.
It's only created via neural networks pruned in someone whose sensory
inputs are working properly.   That's the beginning of the chain of
symbols, but it starts with an electrical impulse sent down nerves from a
sensory organ.

It's the same thing with pain.  If a certain gene is screwed up related to
a subset of sodium channels (which are critical for proper transmission of
signals propagating along certain nerves), a human being is incapable of
feeling pain.   I'd argue they don't know what pain is, just like a
congenitally blind person doesn't know what red is.   It's the same thing
with hearing and music.   If a brain is missing that initial sensory input,
your consciousness does not have the ability to feel the related subjective
sensation.

And yes, I'm arguing that a true simulation (let's say for the sake of a
thought experiment we were able to replicate every neural connection of a
human being in code, including the connectome and neurotransmitters,
along with a simulated nerve that was connected to a button on the desk we
could press which would simulate the signal sent when a biological pain
receptor is triggered) would feel pain that is just as real as the pain you
and I feel as biological organisms.
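
To make the thought experiment concrete, here is a deliberately cartoonish
Python sketch (all names invented; a real connectome-level simulation would
of course be vastly more detailed): a desk button stands in for the pain
receptor, and pressing it injects the same kind of signal the simulated
nerve would otherwise get from biology.

# A cartoon of the simulated pain pathway, for illustration only.

class SimulatedNerve:
    def __init__(self):
        self.signal = 0.0

    def press_button(self, intensity):
        # Stands in for a biological pain receptor being triggered.
        self.signal = intensity

class SimulatedPainCircuit:
    def __init__(self, nerve, threshold=0.5):
        self.nerve = nerve
        self.threshold = threshold

    def step(self):
        # The circuit only ever sees the incoming signal; it cannot tell
        # whether it came from damaged tissue or from a button on a desk.
        if self.nerve.signal >= self.threshold:
            return "avoidance response (reported as pain)"
        return "no pain reported"

nerve = SimulatedNerve()
circuit = SimulatedPainCircuit(nerve)
print(circuit.step())    # no pain reported
nerve.press_button(0.9)  # the desk button from the thought experiment
print(circuit.step())    # avoidance response (reported as pain)

The claim, then, is that if everything else about the brain were replicated
at the same level of detail, what such a system reports as pain would be as
real as ours.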

You asked me for the principle behind how a critter could start having a
negative feeling that didn't exist in its progenitors.   Again, I believe
the answer is as simple as it happened when pain receptors evolved that may
have started as a random mutation where the behavior they induced in lower
organisms resulted in increased survival.   I'm not claiming to have
solved the hard problem of consciousness.   I don't claim to have the
answer for why pain subjectively feels the way it does, or why pleasure
does, but I do know that reward systems that evolved much earlier are
involved (like dopamine based ones), and that pleasure can be directly
triggered via various recreational drugs.   That doesn't mean I think the
dopamine molecule is where the pleasure qualia is hiding.

Even lower forms of life like bacteria move towards what their limited
sensory systems tell them is a reward and away from what they tell them is a
danger.   I believe our subjective experiences are layered onto these much
earlier evolutionary artifacts, although as eukaryotes I am not claiming
that much of this is inherited from LUCA.   I think it blossomed once
predator/prey dynamics were possible in the Cambrian explosion and was
built on from there over many many years.

Getting slightly off topic, I don't think substrate likely matters as far
as producing consciousness.   The only possible way I could see that it
would is if quantum effects are actually involved in generating it that we
can't reasonably replicate.   That said, I think Penrose and others do not
have the odds on their side there for a number of reasons.

Like I said though, I don't believe in zombies.

On Tue, May 23, 2023 at 9:12 AM Terren Suydam 
wrote:

>
>
> On Tue, May 23, 2023 at 2:25 AM Dylan Distasio 
> wrote:
>
>> While we may not know everything about explaining it, pain doesn't seem
>> to be that much of a mystery to me, and I don't consider it a symbol per
>> se.   It seems obvious to me anyways that pain arose out of a very early
>> neural circuit as a survival mechanism.
>>
>
> But how?  What was the biochemical or neural change that suddenly birthed
> the feeling of pain?  I'm not asking you to know the details, just the
> principle - by what principle can a critter that comes into being with some
> modification of its organization start having a negative feeling when it
> didn't exist in its progenitors?  This doesn't seem mysterious to you?
>
> Very early neural circuits are relatively easy to simulate, and I'm
> guessing some team has done this for the level of organization you're
> talking about. What you're saying, if I'm reading you correctly, is that
> that simulation feels pain. If so, how do you get that feeling of pain out
> of code?
>
> Terren
>
>
>
>> Pain is the feeling you experience when pain receptors detect an area of
>> the body is being damaged.   It is ultimately based on a sensory input that
>> transmits to the brain via nerves where it is translated into a sensation
>> that tells you 

Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 9:15 AM John Clark  wrote:

> On Mon, May 22, 2023 at 5:56 PM Terren Suydam 
> wrote:
>
> *> Many, myself included, are captivated by the amazing capabilities of
>> chatGPT and other LLMs. They are, truly, incredible. Depending on your
>> definition of Turing Test, it passes with flying colors in many, many
>> contexts. It would take a much stricter Turing Test than we might have
>> imagined this time last year,*
>>
>
> The trouble with having a much tougher Turing Test  is that although it
> would correctly conclude that it was talking to a computer it would also
> incorrectly conclude that it was talking with a computer when in reality it
> was talking to a human being who had an IQ of 200. Yes GPT can occasionally
> do something that is very stupid, but if you had not also at one time or
> another in your life done something that is very stupid then you are a VERY
> remarkable person.
>
> > One way to improve chatGPT's performance on an actual Turing Test would
>> be to slow it down, because it is too fast to be human.
>>
>
> It would be easy to make GPT dumber, but what would that prove ? We could
> also mass-produce Olympic gold medals so everybody on earth could get one,
> but what would be the point?
>
>>
> *> All that said, is chatGPT actually intelligent?*
>>
>
> Obviously.
>
>
>> * > There's no question that it behaves in a way that we would all agree
>> is intelligent. The answers it gives, and the speed it gives them in,
>> reflect an intelligence that often far exceeds most if not all humans. I
>> know some here say intelligence is as intelligence does. Full stop, *
>>
>
> All I'm saying is you should play fair, whatever test you decide to use
> to measure the intelligence of a human you should use exactly the same test
> on an AI. Full stop.
>
> > *But this is an oversimplified view! *
>>
>
> Maybe so, but it's the only view we're ever going to get so we're just
> gonna have to make the best of it.  But I know there are some people who
> will continue to disagree with me about that until the day they die.
>
>  and so just five seconds before he was vaporized the last surviving
> human being turned to Mr. Jupiter Brain and said "*I still think I'm
> smarter than you*".
>
> *< If ChatGPT was trained on gibberish, that's what you'd get out of it.*
>
>
> And if you were trained on gibberish what sort of post do you imagine
> you'd be writing right now?
>
> * > the Chinese Room thought experiment proposed by John Searle.*
>>
>
> You mean the silliest thought experiment ever devised by the mind of man?
>
> *> ChatGPT, therefore, is more like a search engine*
>
>
> Oh for heaven's sake, not that canard again!  I'm not young but since my
> early teens I've been hearing people say you only get out of a computer
> what you put in. I thought that was silly when I was 13 and I still do.
> John K Clark. See what's on my new list at Extropolis
> 
>

I'm just going to say up front that I'm not going to engage with you on
this particular topic, because I'm already well aware of your position,
that you do not take consciousness seriously, and that your mind won't be
changed on that. So anything we argue about will be about that fundamental
difference, and that's just not interesting or productive, not to mention
we've already had that pointless argument.

Terren





Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 7:09 AM Jason Resch  wrote:

> As I see this thread, Terren and Stathis are both talking past each other.
> Please either of you correct me if I am wrong, but in an effort to clarify
> and perhaps resolve this situation:
>
> I believe Stathis is saying the functional substitution having the same
> fine-grained causal organization *would* have the same phenomenology, the
> same experience, and the same qualia as the brain with the same
> fine-grained causal organization.
>
> Therefore, there is no disagreement between your positions with regards to
> symbol grounding, mappings, etc.
>
> When you both discuss the problem of symbology, or bits, etc. I believe
> this is partly responsible for why you are both talking past each other,
> because there are many levels involved in brains (and computational
> systems). I believe you were discussing completely different levels in the
> hierarchical organization.
>
> There are high-level parts of minds, such as ideas, thoughts, feelings,
> quale, etc. and there are low-level, be they neurons, neurotransmitters,
> atoms, quantum fields, and laws of physics as in human brains, or circuits,
> logic gates, bits, and instructions as in computers.
>
> I think when Terren mentions a "symbol for the smell of grandmother's
> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
> or idea or memory of the smell of GMK is a very high-level feature of a
> mind. When Terren asks for or discusses a symbol for it, a complete
> answer/description for it can only be supplied in terms of a vast amount of
> information concerning low level structures, be they patterns of neuron
> firings, or patterns of bits being processed. When we consider things down
> at this low level, however, we lose all context for what the meaning, idea,
> and quale are or where or how they come in. We cannot see or find the idea
> of GMK in any neuron, no more than we can see or find it in any bit.
>
> Of course then it should seem deeply mysterious, if not impossible, how we
> get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
> leap from how we get "it" from a bunch of cells squirting ions back and
> forth. Trying to understand a smartphone by looking at the flows of
> electrons is a similar kind of problem, it would seem just as difficult or
> impossible to explain and understand the high-level features and complexity
> out of the low-level simplicity.
>
> This is why it's crucial to bear in mind and explicitly discuss the level
> one is operating on when one discusses symbols, substrates, or quale. In
> summary, I think a chief reason you have been talking past each other is
> because you are each operating on different assumed levels.
>
> Please correct me if you believe I am mistaken and know I only offer my
> perspective in the hope it might help the conversation.
>

I appreciate the callout, but it is necessary to talk at both the micro and
the macro for this discussion. We're talking about symbol grounding. I
should make it clear that I don't believe symbols can be grounded in other
symbols (i.e. symbols all the way down as Stathis put it), that leads to
infinite regress and the illusion of meaning.  Symbols ultimately must
stand for something. The only thing they can stand *for*, ultimately, is
something that cannot be communicated by other symbols: conscious
experience. There is no concept in our brains that is not ultimately
connected to something we've seen, heard, felt, smelled, or tasted.

In my experience with conversations like this, you usually have people on
one side who take consciousness seriously as the only thing that is
actually undeniable, and you have people who'd rather not talk about it,
hand-wave it away, or outright deny it. That's the talking-past that
usually happens, and that's what's happening here.

Terren


>
> Jason
>
> On Tue, May 23, 2023, 2:47 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 15:58, Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 12:32 AM Stathis Papaioannou 
>>> wrote:
>>>


 On Tue, 23 May 2023 at 14:23, Terren Suydam 
 wrote:

>
>
> On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou <
> stath...@gmail.com> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 13:37, Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <
>>> stath...@gmail.com> wrote:
>>>


 On Tue, 23 May 2023 at 10:48, Terren Suydam <
 terren.suy...@gmail.com> wrote:

>
>
> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <
> stath...@gmail.com> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 10:03, Terren Suydam <
>> terren.suy...@gmail.com> wrote:
>>
>>>
>>> it is true that my brain has been trained on a large amount of
>>> data - 

Re: what chatGPT is and is not

2023-05-23 Thread John Clark
On Mon, May 22, 2023 at 5:56 PM Terren Suydam 
wrote:

*> Many, myself included, are captivated by the amazing capabilities of
> chatGPT and other LLMs. They are, truly, incredible. Depending on your
> definition of Turing Test, it passes with flying colors in many, many
> contexts. It would take a much stricter Turing Test than we might have
> imagined this time last year,*
>

The trouble with having a much tougher Turing Test  is that although it
would correctly conclude that it was talking to a computer it would also
incorrectly conclude that it was talking with a computer when in reality it
was talking to a human being who had an IQ of 200. Yes GPT can occasionally
do something that is very stupid, but if you had not also at one time or
another in your life done something that is very stupid then you are a VERY
remarkable person.

> One way to improve chatGPT's performance on an actual Turing Test would
> be to slow it down, because it is too fast to be human.
>

It would be easy to make GPT dumber, but what would that prove ? We could
also mass-produce Olympic gold medals so everybody on earth could get one,
but what would be the point?

>
*> All that said, is chatGPT actually intelligent?*
>

Obviously.


> * > There's no question that it behaves in a way that we would all agree
> is intelligent. The answers it gives, and the speed it gives them in,
> reflect an intelligence that often far exceeds most if not all humans. I
> know some here say intelligence is as intelligence does. Full stop, *
>

All I'm saying is you should play fair, whatever test you decide to use to
measure the intelligence of a human you should use exactly the same test on an
AI. Full stop.

> *But this is an oversimplified view! *
>

Maybe so, but it's the only view we're ever going to get so we're just
gonna have to make the best of it.  But I know there are some people who
will continue to disagree with me about that until the day they die.

 and so just five seconds before he was vaporized the last surviving
human being turned to Mr. Jupiter Brain and said "*I still think I'm
smarter than you*".

*< If ChatGPT was trained on gibberish, that's what you'd get out of it.*


And if you were trained on gibberish what sort of post do you imagine you'd
be writing right now?

* > the Chinese Room thought experiment proposed by John Searle.*
>

You mean the silliest thought experiment ever devised by the mind of man?

*> ChatGPT, therefore, is more like a search engine*


Oh for heaven's sake, not that canard again!  I'm not young but since my
early teens I've been hearing people say you only get out of a computer
what you put in. I thought that was silly when I was 13 and I still do.
John K Clark. See what's on my new list at Extropolis



Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 2:25 AM Dylan Distasio  wrote:

> While we may not know everything about explaining it, pain doesn't seem to
> be that much of a mystery to me, and I don't consider it a symbol per se.
>  It seems obvious to me anyways that pain arose out of a very early neural
> circuit as a survival mechanism.
>

But how?  What was the biochemical or neural change that suddenly birthed
the feeling of pain?  I'm not asking you to know the details, just the
principle - by what principle can a critter that comes into being with some
modification of its organization start having a negative feeling when it
didn't exist in its progenitors?  This doesn't seem mysterious to you?

Very early neural circuits are relatively easy to simulate, and I'm
guessing some team has done this for the level of organization you're
talking about. What you're saying, if I'm reading you correctly, is that
that simulation feels pain. If so, how do you get that feeling of pain out
of code?

Terren



> Pain is the feeling you experience when pain receptors detect an area of
> the body is being damaged.   It is ultimately based on a sensory input that
> transmits to the brain via nerves where it is translated into a sensation
> that tells you to avoid whatever is causing the pain if possible, or lets
> you know you otherwise have a problem with your hardware.
>
> That said, I agree with you on LLMs for the most part, although I think
> they are showing some potentially emergent, interesting behaviors.
>
> On Tue, May 23, 2023 at 1:58 AM Terren Suydam 
> wrote:
>
>>
>> Take a migraine headache - if that's just a symbol, then why does that
>> symbol *feel* *bad* while others feel *good*?  Why does any symbol feel
>> like anything? If you say evolution did it, that doesn't actually answer
>> the question, because evolution doesn't do anything except select for
>> traits, roughly speaking. So it just pushes the question to: how did the
>> subjective feeling of pain or pleasure emerge from some genetic mutation,
>> when it wasn't there before?
>>
>>



Re: what chatGPT is and is not

2023-05-23 Thread Stathis Papaioannou
On Tue, 23 May 2023 at 21:09, Jason Resch  wrote:

> As I see this thread, Terren and Stathis are both talking past each other.
> Please either of you correct me if I am wrong, but in an effort to clarify
> and perhaps resolve this situation:
>
> I believe Stathis is saying the functional substitution having the same
> fine-grained causal organization *would* have the same phenomenology, the
> same experience, and the same qualia as the brain with the same
> fine-grained causal organization.
>
> Therefore, there is no disagreement between your positions with regards to
> symbol grounding, mappings, etc.
>
> When you both discuss the problem of symbology, or bits, etc. I believe
> this is partly responsible for why you are both talking past each other,
> because there are many levels involved in brains (and computational
> systems). I believe you were discussing completely different levels in the
> hierarchical organization.
>
> There are high-level parts of minds, such as ideas, thoughts, feelings,
> quale, etc. and there are low-level, be they neurons, neurotransmitters,
> atoms, quantum fields, and laws of physics as in human brains, or circuits,
> logic gates, bits, and instructions as in computers.
>
> I think when Terren mentions a "symbol for the smell of grandmother's
> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
> or idea or memory of the smell of GMK is a very high-level feature of a
> mind. When Terren asks for or discusses a symbol for it, a complete
> answer/description for it can only be supplied in terms of a vast amount of
> information concerning low level structures, be they patterns of neuron
> firings, or patterns of bits being processed. When we consider things down
> at this low level, however, we lose all context for what the meaning, idea,
> and quale are or where or how they come in. We cannot see or find the idea
> of GMK in any neuron, no more than we can see or find it in any bit.
>
> Of course then it should seem deeply mysterious, if not impossible, how we
> get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
> leap from how we get "it" from a bunch of cells squirting ions back and
> forth. Trying to understand a smartphone by looking at the flows of
> electrons is a similar kind of problem, it would seem just as difficult or
> impossible to explain and understand the high-level features and complexity
> out of the low-level simplicity.
>
> This is why it's crucial to bear in mind and explicitly discuss the level
> one is operating on when one discusses symbols, substrates, or quale. In
> summary, I think a chief reason you have been talking past each other is
> because you are each operating on different assumed levels.
>
> Please correct me if you believe I am mistaken and know I only offer my
> perspective in the hope it might help the conversation.
>

I think you’ve captured my position. But in addition I think replicating
the fine-grained causal organisation is not necessary in order to replicate
higher level phenomena such as GMK. By extension of Chalmers’ substitution
experiment, replicating the behaviour of the human through any means, such
as training an AI not only on language but also movement, would also
preserve consciousness, even though it does not simulate any physiological
processes. Another way to say this is that it is not possible to make a
philosophical zombie.

-- 
Stathis Papaioannou



Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
As I see this thread, Terren and Stathis are both talking past each other.
Please either of you correct me if I am wrong, but in an effort to clarify
and perhaps resolve this situation:

I believe Stathis is saying the functional substitution having the same
fine-grained causal organization *would* have the same phenomenology, the
same experience, and the same qualia as the brain with the same
fine-grained causal organization.

Therefore, there is no disagreement between your positions with regards to
symbol grounding, mappings, etc.

When you both discuss the problem of symbology, or bits, etc. I believe
this is partly responsible for why you are both talking past each other,
because there are many levels involved in brains (and computational
systems). I believe you were discussing completely different levels in the
hierarchical organization.

There are high-level parts of minds, such as ideas, thoughts, feelings,
quale, etc. and there are low-level, be they neurons, neurotransmitters,
atoms, quantum fields, and laws of physics as in human brains, or circuits,
logic gates, bits, and instructions as in computers.

I think when Terren mentions a "symbol for the smell of grandmother's
kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
or idea or memory of the smell of GMK is a very high-level feature of a
mind. When Terren asks for or discusses a symbol for it, a complete
answer/description for it can only be supplied in terms of a vast amount of
information concerning low level structures, be they patterns of neuron
firings, or patterns of bits being processed. When we consider things down
at this low level, however, we lose all context for what the meaning, idea,
and quale are or where or how they come in. We cannot see or find the idea
of GMK in any neuron, no more than we can see or find it in any bit.

Of course then it should seem deeply mysterious, if not impossible, how we
get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
leap from how we get "it" from a bunch of cells squirting ions back and
forth. Trying to understand a smartphone by looking at the flows of
electrons is a similar kind of problem, it would seem just as difficult or
impossible to explain and understand the high-level features and complexity
out of the low-level simplicity.

This is why it's crucial to bear in mind and explicitly discuss the level
one is operating on when one discusses symbols, substrates, or quale. In
summary, I think a chief reason you have been talking past each other is
because you are each operating on different assumed levels.

Please correct me if you believe I am mistaken and know I only offer my
perspective in the hope it might help the conversation.

Jason

On Tue, May 23, 2023, 2:47 AM Stathis Papaioannou 
wrote:

>
>
> On Tue, 23 May 2023 at 15:58, Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 12:32 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 14:23, Terren Suydam 
>>> wrote:
>>>


 On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou <
 stath...@gmail.com> wrote:

>
>
> On Tue, 23 May 2023 at 13:37, Terren Suydam 
> wrote:
>
>>
>>
>> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <
>> stath...@gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 10:48, Terren Suydam 
>>> wrote:
>>>


 On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <
 stath...@gmail.com> wrote:

>
>
> On Tue, 23 May 2023 at 10:03, Terren Suydam <
> terren.suy...@gmail.com> wrote:
>
>>
>> it is true that my brain has been trained on a large amount of
>> data - data that contains intelligence outside of my own. But when I
>> introspect, I notice that my understanding of things is ultimately
>> rooted/grounded in my phenomenal experience. Ultimately, everything we
>> know, we know either by our experience, or by analogy to experiences we've
>> had. This is in opposition to how LLMs train on data, which is strictly
>> about how words/symbols relate to one another.
>>
>
> The functionalist position is that phenomenal experience
> supervenes on behaviour, such that if the behaviour is replicated (same
> output for same input) the phenomenal experience will also be replicated.
> This is what philosophers like Searle (and many laypeople) can’t stomach.
>

 I think the kind of phenomenal supervenience you're talking about
 is typically asserted for behavior at the level of the neuron, not the
 level of the whole agent. Is that what you're saying?  That chatGPT must be
 having a phenomenal experience if it talks like a human?   If so, that is

Re: what chatGPT is and is not

2023-05-23 Thread Stathis Papaioannou
On Tue, 23 May 2023 at 15:58, Terren Suydam  wrote:

>
>
> On Tue, May 23, 2023 at 12:32 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 14:23, Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou 
>>> wrote:
>>>


 On Tue, 23 May 2023 at 13:37, Terren Suydam 
 wrote:

>
>
> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <
> stath...@gmail.com> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 10:48, Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <
>>> stath...@gmail.com> wrote:
>>>


 On Tue, 23 May 2023 at 10:03, Terren Suydam <
 terren.suy...@gmail.com> wrote:

>
> it is true that my brain has been trained on a large amount of
> data - data that contains intelligence outside of my own. But when I
> introspect, I notice that my understanding of things is ultimately
> rooted/grounded in my phenomenal experience. Ultimately, everything we
> know, we know either by our experience, or by analogy to experiences we've
> had. This is in opposition to how LLMs train on data, which is strictly
> about how words/symbols relate to one another.
>

 The functionalist position is that phenomenal experience supervenes
 on behaviour, such that if the behaviour is replicated (same output for
 same input) the phenomenal experience will also be replicated. This is what
 philosophers like Searle (and many laypeople) can’t stomach.

>>>
>>> I think the kind of phenomenal supervenience you're talking about is
>>> typically asserted for behavior at the level of the neuron, not the level
>>> of the whole agent. Is that what you're saying?  That chatGPT must be
>>> having a phenomenal experience if it talks like a human?   If so, that is
>>> stretching the explanatory domain of functionalism past its breaking point.
>>>
>>
>> The best justification for functionalism is David Chalmers' "Fading
>> Qualia" argument. The paper considers replacing neurons with functionally
>> equivalent silicon chips, but it could be generalised to replacing any part
>> of the brain with a functionally equivalent black box, the whole brain, the
>> whole person.
>>
>
> You're saying that an algorithm that provably does not have
> experiences of rabbits and lollipops - but can still talk about them in a
> way that's indistinguishable from a human - essentially has the same
> phenomenology as a human talking about rabbits and lollipops. That's just
> absurd on its face. You're essentially hand-waving away the grounding
> problem. Is that your position? That symbols don't need to be grounded in
> any sort of phenomenal experience?
>

 It's not just talking about them in a way that is indistinguishable
 from a human, in order to have human-like consciousness the entire I/O
 behaviour of the human would need to be replicated. But in principle, I
 don't see why a LLM could not have some other type of phenomenal
 experience. And I don't think the grounding problem is a problem: I was
 never grounded in anything, I just grew up associating one symbol with
 another symbol, it's symbols all the way down.

>>>
>>> Is the smell of your grandmother's kitchen a symbol?
>>>
>>
>> Yes, I can't pull away the facade to check that there was a real
>> grandmother and a real kitchen against which I can check that the sense
>> data matches.
>>
>
> The grounding problem is about associating symbols with a phenomenal
> experience, or the memory of one - which is not the same thing as the
> functional equivalent or the neural correlate. It's the feeling, what it's
> like to experience the thing the symbol stands for. The experience of
> redness. The shock of plunging into cold water. The smell of coffee. etc.
>
> Take a migraine headache - if that's just a symbol, then why does that
> symbol *feel* *bad* while others feel *good*?  Why does any symbol feel
> like anything? If you say evolution did it, that doesn't actually answer
> the question, because evolution doesn't do anything except select for
> traits, roughly speaking. So it just pushes the question to: how did the
> subjective feeling of pain or pleasure emerge from some genetic mutation,
> when it wasn't there before?
>
> Without a functionalist explanation of the *origin* of aesthetic valence,
> then I don't think you can "get it from bit".
>

That seems more like the hard problem of consciousness. There is no
solution to it.

-- 
Stathis Papaioannou


Re: what chatGPT is and is not

2023-05-23 Thread Dylan Distasio
While we may not know everything about explaining it, pain doesn't seem to
be that much of a mystery to me, and I don't consider it a symbol per se.
 It seems obvious to me anyways that pain arose out of a very early neural
circuit as a survival mechanism.   Pain is the feeling you experience when
pain receptors detect an area of the body is being damaged.   It is
ultimately based on a sensory input that transmits to the brain via nerves
where it is translated into a sensation that tells you to avoid whatever is
causing the pain if possible, or lets you know you otherwise have a
problem with your hardware.

That said, I agree with you on LLMs for the most part, although I think
they are showing some potentially emergent, interesting behaviors.

On Tue, May 23, 2023 at 1:58 AM Terren Suydam 
wrote:

>
> Take a migraine headache - if that's just a symbol, then why does that
> symbol *feel* *bad* while others feel *good*?  Why does any symbol feel
> like anything? If you say evolution did it, that doesn't actually answer
> the question, because evolution doesn't do anything except select for
> traits, roughly speaking. So it just pushes the question to: how did the
> subjective feeling of pain or pleasure emerge from some genetic mutation,
> when it wasn't there before?
>
>
>
