Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Thu, 25 May 2023 at 14:47, Brent Meeker  wrote:

>
>
> On 5/24/2023 9:29 PM, Stathis Papaioannou wrote:
>
>
>
> On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:
>
>>
>>
>> On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:
>>>
>>> >An RNG would be a bad design choice because it would be extremely
 unreliable. However, as a thought experiment, it could work. If the visual
 cortex were removed and replaced with an RNG which for five minutes
 replicated the interactions with the remaining brain, the subject would
 behave as if they had normal vision and report that they had normal vision,
 then after five minutes behave as if they were blind and report that they
 were blind. It is perhaps contrary to intuition that the subject would
 really have visual experiences in that five minute period, but I don't
 think there is any other plausible explanation.

>>>
 I think they would be a visual zombie in that five minute period,
 though as described they would not be able to report any difference.

 I think if one's entire brain were replaced by an RNG, they would be a
 total zombie who would fool us into thinking they were conscious and we
 would not notice a difference. So by extension a brain partially replaced
 by an RNG would be a partial zombie that fooled the other parts of the
 brain into thinking nothing was amiss.

>>>
>>> I think the concept of a partial zombie makes consciousness nonsensical.
>>>
>>
>> It borders on the nonsensical, but between the two bad alternatives I
>> find the idea of an RNG instantiating human consciousness somewhat less
>> sensical than the idea of partial zombies.
>>
>
> If consciousness persists no matter what the brain is replaced with, as
> long as the output remains the same, this is consistent with the idea that
> consciousness does not reside in a particular substance (even a magical
> substance) or in a particular process. This is a strange idea, but it is
> akin to the existence of platonic objects. The number three can be
> implemented by arranging three objects in a row, but it does not depend on
> those three objects unless it is being used for a particular purpose, such
> as three beads on an abacus.
>
>
>> How would I know that I am not a visual zombie now, or a visual zombie
>>> every Tuesday, Thursday and Saturday?
>>>
>>
>> Here, we have to be careful what we mean by "I". Our own brains have
>> various spheres of consciousness, as demonstrated by the Wada test: we can
>> shut down one hemisphere of the brain and lose some awareness and
>> functionality, such as the ability to form words, and yet remain
>> conscious. I think being a partial zombie would be like that, having one's
>> sphere of awareness shrink.
>>
>
> But the subject's sphere of awareness would not shrink in the thought
> experiment, since by assumption their behaviour stays the same, while if
> their sphere of awareness shrank they would notice that something was
> different and say so.
>
>
> Why do you think they would notice?  Color blind people don't notice they
> are color blind...until somebody tells them about it and even then they
> don't "notice" it.
>

There would either be objective or subjective evidence of a change due to
the substitution. If there is neither objective nor subjective evidence of
a change, then there is no change.
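
The point can be made concrete with a toy model: one module computes its
outputs, another merely replays a pre-recorded trace of them, and for the
recorded window no interaction with the rest of the system can tell them
apart. A minimal sketch in Python (all names and the stand-in arithmetic
are illustrative, not anyone's actual proposal):

    import random

    class VisualCortex:
        """The 'real' module: computes its output from its input."""
        def respond(self, signal: int) -> int:
            return (signal * 31 + 7) % 256  # stand-in for genuine processing

    class LuckyRNG:
        """A generator that, by stipulation, happens to emit a pre-recorded trace."""
        def __init__(self, trace):
            self._trace = iter(trace)
        def respond(self, signal: int) -> int:
            return next(self._trace)  # ignores its input entirely

    # Record "five minutes" of interactions with the real module.
    cortex = VisualCortex()
    stimuli = [random.randrange(256) for _ in range(10)]
    trace = [cortex.respond(s) for s in stimuli]

    # Substitute the replaying RNG: the rest of the system sees exactly the
    # same outputs, so behaviour cannot diverge until the trace runs out
    # (the "five minutes" elapsing).
    replacement = LuckyRNG(trace)
    assert all(replacement.respond(s) == cortex.respond(s) for s in stimuli)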


-- 
Stathis Papaioannou



Re: what chatGPT is and is not

2023-05-24 Thread Brent Meeker



On 5/24/2023 9:29 PM, Stathis Papaioannou wrote:



On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:



On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou
 wrote:



On Thu, 25 May 2023 at 11:48, Jason Resch
 wrote:

>An RNG would be a bad design choice because it would be
extremely unreliable. However, as a thought experiment, it
could work. If the visual cortex were removed and replaced
with an RNG which for five minutes replicated the
interactions with the remaining brain, the subject would
behave as if they had normal vision and report that they
had normal vision, then after five minutes behave as if
they were blind and report that they were blind. It is
perhaps contrary to intuition that the subject would
really have visual experiences in that five minute period,
but I don't think there is any other plausible explanation.


I think they would be a visual zombie in that five minute
period, though as described they would not be able to
report any difference.

I think if one's entire brain were replaced by an RNG,
they would be a total zombie who would fool us into
thinking they were conscious and we would not notice a
difference. So by extension a brain partially replaced by
an RNG would be a partial zombie that fooled the other
parts of the brain into thinking nothing was amiss.


I think the concept of a partial zombie makes consciousness
nonsensical.


It borders on the nonsensical, but between the two bad
alternatives I find the idea of an RNG instantiating human
consciousness somewhat less sensical than the idea of partial zombies.


If consciousness persists no matter what the brain is replaced with, as 
long as the output remains the same, this is consistent with the idea 
that consciousness does not reside in a particular substance (even a 
magical substance) or in a particular process. This is a strange idea, 
but it is akin to the existence of platonic objects. The number three 
can be implemented by arranging three objects in a row, but it does not 
depend on those three objects unless it is being used for a particular 
purpose, such as three beads on an abacus.


How would I know that I am not a visual zombie now, or a
visual zombie every Tuesday, Thursday and Saturday?


Here, we have to be careful what we mean by "I". Our own brains
have various spheres of consciousness, as demonstrated by the Wada
test: we can shut down one hemisphere of the brain and lose some
awareness and functionality, such as the ability to form words,
and yet remain conscious. I think being a partial
zombie would be like that, having one's sphere of awareness shrink.


But the subject's sphere of awareness would not shrink in the thought 
experiment, since by assumption their behaviour stays the same, while 
if their sphere of awareness shrank they would notice that something 
was different and say so.


Why do you think they would notice?  Color blind people don't notice 
they are color blind...until somebody tells them about it and even then 
they don't "notice" it.


Brent




Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:

>
>
> On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:
>>
>> >An RNG would be a bad design choice because it would be extremely
>>> unreliable. However, as a thought experiment, it could work. If the visual
>>> cortex were removed and replaced with an RNG which for five minutes
>>> replicated the interactions with the remaining brain, the subject would
>>> behave as if they had normal vision and report that they had normal vision,
>>> then after five minutes behave as if they were blind and report that they
>>> were blind. It is perhaps contrary to intuition that the subject would
>>> really have visual experiences in that five minute period, but I don't
>>> think there is any other plausible explanation.
>>>
>>
>>> I think they would be a visual zombie in that five minute period, though
>>> as described they would not be able to report any difference.
>>>
>>> I think if one's entire brain were replaced by an RNG, they would be a
>>> total zombie who would fool us into thinking they were conscious and we
>>> would not notice a difference. So by extension a brain partially replaced
>>> by an RNG would be a partial zombie that fooled the other parts of the
>>> brain into thinking nothing was amiss.
>>>
>>
>> I think the concept of a partial zombie makes consciousness nonsensical.
>>
>
> It borders on the nonsensical, but between the two bad alternatives I find
> the idea of an RNG instantiating human consciousness somewhat less sensical
> than the idea of partial zombies.
>

If consciousness persists no matter what the brain is replaced with, as long
as the output remains the same, this is consistent with the idea that
consciousness does not reside in a particular substance (even a magical
substance) or in a particular process. This is a strange idea, but it is
akin to the existence of platonic objects. The number three can be
implemented by arranging three objects in a row, but it does not depend on
those three objects unless it is being used for a particular purpose, such
as three beads on an abacus.


> How would I know that I am not a visual zombie now, or a visual zombie
>> every Tuesday, Thursday and Saturday?
>>
>
> Here, we have to be careful what we mean by "I". Our own brains have
> various spheres of consciousness, as demonstrated by the Wada test: we can
> shut down one hemisphere of the brain and lose some awareness and
> functionality, such as the ability to form words, and yet remain
> conscious. I think being a partial zombie would be like that, having one's
> sphere of awareness shrink.
>

But the subject's sphere of awareness would not shrink in the thought
experiment, since by assumption their behaviour stays the same, while if
their sphere of awareness shrank they would notice that something was
different and say so.


> What is the advantage of having "real" visual experiences if they make no
>> objective difference and no subjective difference either?
>>
>
> The advantage of real computations (which imply having real
> awareness/experiences) is that real computations are more reliable than
> RNGs for producing intelligent behavioral responses.
>

Yes, so an RNG would be a bad design choice. But the point remains that if
the output of the system remains the same, the consciousness remains the
same, regardless of how the system functions. The reasonable-sounding
belief that consciousness somehow resides in the brain, in particular in
biochemical reactions or even in electronic circuits simulating the brain,
is wrong.


-- 
Stathis Papaioannou



Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
wrote:

>
>
> On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:
>
> >An RNG would be a bad design choice because it would be extremely
>> unreliable. However, as a thought experiment, it could work. If the visual
>> cortex were removed and replaced with an RNG which for five minutes
>> replicated the interactions with the remaining brain, the subject would
>> behave as if they had normal vision and report that they had normal vision,
>> then after five minutes behave as if they were blind and report that they
>> were blind. It is perhaps contrary to intuition that the subject would
>> really have visual experiences in that five minute period, but I don't
>> think there is any other plausible explanation.
>>
>
>> I think they would be a visual zombie in that five minute period, though
>> as described they would not be able to report any difference.
>>
>> I think if one's entire brain were replaced by an RNG, they would be a
>> total zombie who would fool us into thinking they were conscious and we
>> would not notice a difference. So by extension a brain partially replaced
>> by an RNG would be a partial zombie that fooled the other parts of the
>> brain into thinking nothing was amiss.
>>
>
> I think the concept of a partial zombie makes consciousness nonsensical.
>

It borders on the nonsensical, but between the two bad alternatives I find
the idea of an RNG instantiating human consciousness somewhat less sensical
than the idea of partial zombies.


How would I know that I am not a visual zombie now, or a visual zombie
> every Tuesday, Thursday and Saturday?
>

Here, we have to be careful what we mean by "I". Our own brains have
various spheres of consciousness, as demonstrated by the Wada test: we can
shut down one hemisphere of the brain and lose some awareness and
functionality, such as the ability to form words, and yet remain
conscious. I think being a partial zombie would be like that, having one's
sphere of awareness shrink.


What is the advantage of having "real" visual experiences if they make no
> objective difference and no subjective difference either?
>

The advantage of real computations (which imply having real
awareness/experiences) is that real computations are more reliable than
RNGs for producing intelligent behavioral responses.
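
To put a rough number on "extremely unreliable" (a back-of-envelope sketch
with an assumed interface bandwidth, not a figure from the thread): the
probability that a true RNG reproduces a given interaction trace falls off
exponentially with the length of the trace.

    import math

    bits_per_second = 1_000   # assumed bandwidth of the replaced interface
    seconds = 5 * 60          # the five-minute window in the thought experiment
    n_bits = bits_per_second * seconds

    # Chance that fair coin flips match every required bit of the trace:
    log10_p = n_bits * math.log10(0.5)
    print(f"p = 10^{log10_p:.0f}")  # p = 10^-90309, even at a modest 1 kbit/s

So even for a five-minute window at a toy bandwidth, chance-produced
intelligent behaviour is hopeless, which is the sense in which real
computation is the reliable way to get it.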

Jason



Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:

>An RNG would be a bad design choice because it would be extremely
> unreliable. However, as a thought experiment, it could work. If the visual
> cortex were removed and replaced with an RNG which for five minutes
> replicated the interactions with the remaining brain, the subject would
> behave as if they had normal vision and report that they had normal vision,
> then after five minutes behave as if they were blind and report that they
> were blind. It is perhaps contrary to intuition that the subject would
> really have visual experiences in that five minute period, but I don't
> think there is any other plausible explanation.
>

> I think they would be a visual zombie in that five minute period, though
> as described they would not be able to report any difference.
>
> I think if one's entire brain were replaced by an RNG, they would be a
> total zombie who would fool us into thinking they were conscious and we
> would not notice a difference. So by extension a brain partially replaced
> by an RNG would be a partial zombie that fooled the other parts of the
> brain into thinking nothing was amiss.
>

I think the concept of a partial zombie makes consciousness nonsensical.
How would I know that I am not a visual zombie now, or a visual zombie
every Tuesday, Thursday and Saturday? What is the advantage of having
"real" visual experiences if they make no objective difference and no
subjective difference either?


-- 
Stathis Papaioannou




Fwd: what chatGPT is and is not

2023-05-24 Thread Brent Meeker



On 5/23/2023 6:33 AM, Terren Suydam wrote:



On Tue, May 23, 2023 at 7:09 AM Jason Resch  wrote:

As I see this thread, Terren and Stathis are both talking past
each other. Please either of you correct me if I am wrong, but in
an effort to clarify and perhaps resolve this situation:

I believe Stathis is saying the functional substitution having the
same fine-grained causal organization *would* have the same
phenomenology, the same experience, and the same qualia as the
brain with the same fine-grained causal organization.

Therefore, there is no disagreement between your positions with
regards to symbols groundings, mappings, etc.

When you both discuss the problem of symbology, or bits, etc. I
believe this is partly responsible for why you are both talking
past each other, because there are many levels involved in brains
(and computational systems). I believe you were discussing
completely different levels in the hierarchical organization.

There are high-level parts of minds, such as ideas, thoughts,
feelings, quale, etc. and there are low-level, be they neurons,
neurotransmitters, atoms, quantum fields, and laws of physics as
in human brains, or circuits, logic gates, bits, and instructions
as in computers.

I think when Terren mentions a "symbol for the smell of
grandmother's kitchen" (GMK) the trouble is we are crossing a
myriad of levels. The quale or idea or memory of the smell of GMK
is a very high-level feature of a mind. When Terren asks for or
discusses a symbol for it, a complete answer/description for it
can only be supplied in terms of a vast amount of information
concerning low level structures, be they patterns of neuron
firings, or patterns of bits being processed. When we consider
things down at this low level, however, we lose all context for
what the meaning, idea, and quale are or where or how they come
in. We cannot see or find the idea of GMK in any neuron, any more
than we can see or find it in any bit.

Of course then it should seem deeply mysterious, if not
impossible, how we get "it" (GMK or otherwise) from "bit", but to
me, this is no greater a leap from how we get "it" from a bunch of
cells squirting ions back and forth. Trying to understand a
smartphone by looking at the flows of electrons is a similar kind
of problem, it would seem just as difficult or impossible to
explain and understand the high-level features and complexity out
of the low-level simplicity.

This is why it's crucial to bear in mind and explicitly discuss
the level one is operating on when one discusses symbols,
substrates, or quale. In summary, I think a chief reason you have
been talking past each other is because you are each operating on
different assumed levels.

Please correct me if you believe I am mistaken and know I only
offer my perspective in the hope it might help the conversation.


I appreciate the callout, but it is necessary to talk at both the 
micro and the macro for this discussion. We're talking about symbol 
grounding. I should make it clear that I don't believe symbols can be 
grounded in other symbols (i.e. symbols all the way down as Stathis 
put it), that leads to infinite regress and the illusion of meaning.  
Symbols ultimately must stand for something. The only thing they can 
stand /for/, ultimately, is something that cannot be communicated by 
other symbols: conscious experience. There is no concept in our brains 
that is not ultimately connected to something we've seen, heard, felt, 
smelled, or tasted.
Right.  That's why children learn words by ostensive definition. You 
point to a stop sign and say "red".  You point to their foot and say 
"foot".  The amazing thing is how a child learns this so easily and 
doesn't think "red" means octagon or "foot" means toes.  I think we have 
evolution to thank for this.


I personally think consciousness is waaay overrated.  Most thinking, 
even abstruse thinking like mathematical proofs, is subconscious (cf. 
the Poincaré effect).  I think consciousness is an effect of language; 
an internalization of communication with others (cf. Julian Jaynes).


Brent



In my experience with conversations like this, you usually have people 
on one side who take consciousness seriously as the only thing that is 
actually undeniable, and you have people who'd rather not talk about 
it, hand-wave it away, or outright deny it. That's the talking-past 
that usually happens, and that's what's happening here.


Terren




Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023 at 11:12 AM Brent Meeker  wrote:

>
> On 5/23/2023 10:37 PM, Jason Resch wrote:
>
> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
> wrote:
>
>>>> I think you’ve captured my position. But in addition I think
>>>> replicating the fine-grained causal organisation is not necessary in order
>>>> to replicate higher level phenomena such as GMK. By extension of Chalmers’
>>>> substitution experiment,
>>>
>>> Note that Chalmers's argument is based on assuming the functional
>>> substitution occurs at a certain level of fine-grained-ness. If you lose
>>> this step, and look at only the top-most input-output of the mind as black
>>> box, then you can no longer distinguish a rock from a dreaming person, nor
>>> a calculator computing 2+3 and a human computing 2+3, and one also runs
>>> into the Blockhead "lookup table" argument against functionalism.
>>>
>>
>> Yes, those are perhaps problems with functionalism. But a major point in
>> Chalmers' argument is that if qualia were substrate-specific (hence,
>> functionalism false) it would be possible to make a partial zombie or an
>> entity whose consciousness and behaviour diverged from the point the
>> substitution was made. And this argument works not just by replacing the
>> neurons with silicon chips, but by replacing any part of the human with
>> anything that reproduces the interactions with the remaining parts.
>>
>
>
> How deeply do you have to go when you consider or define those "other
> parts" though? That seems to be a critical but unstated assumption, and
> something that depends on how finely grained you consider the
> relevant/important parts of a brain to be.
>
> For reference, this is 


Re: what chatGPT is and is not

2023-05-24 Thread John Clark
On Wed, May 24, 2023 at 8:07 AM Jason Resch  wrote:

>> But you'd still need a computation to find the particular tape recording
>> that you need, and the larger your library of recordings the more complex
>> the computation you'd need to do would be. And in that very silly
>> thought experiment your library needs to contain every sentence that is
>> syntactically and grammatically correct. And there are an astronomical
>> number to an astronomical power of those. Even if every electron, proton,
>> neutron, photon and neutrino in the observable universe could record
>> 1000 million billion trillion sentences there would still be well over a
>> googolplex number of sentences that remained unrecorded.  Blockhead is just
>> a slight variation on Searle's idiotic Chinese room.
>>
>
>
> *> It's very different. Note that you don't need to realize or store every
> possible input for the central point of Block's argument to work.*
> *For example, let's say that AlphaZero was conscious for the purposes of
> this argument. We record the response AlphaZero produces to each of the
> 361 different opening moves on a Go board and store the result in a
> lookup table. This table would be only a few kilobytes.*
>

Nobody in their right mind would conclude that AlphaZero is intelligent or
conscious after just watching the opening move, but after watching an
entire game is another matter, because a typical game of Go runs about 150
moves, there are about 10^360 different 150-move Go games, and there are
only 10^78 atoms in the observable universe. And the number of possible
responses that GPT4 can produce is *VASTLY* greater than 10^360.
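
Both sets of numbers are easy to check. A quick sketch (the 361 entries
are Block's opening-move table quoted above; the average branching factor
of ~250 is an assumed round figure for Go, not a number from the thread):

    import math

    # An opening book: one stored reply for each of the 361 opening moves.
    table_bytes = 361 * 2 * 2    # (move, reply) pairs, 2 bytes per board point
    print(table_bytes, "bytes")  # 1444 bytes -- "a few kilobytes" indeed

    # Whole games are another matter. With ~250 legal moves on average over
    # a 150-move game:
    log10_games = 150 * math.log10(250)
    print(f"~10^{log10_games:.0f} distinct 150-move games")  # ~10^360

    # against ~10^78 atoms in the observable universe:
    print(f"that is ~10^{log10_games - 78:.0f} games per atom")

So the lookup-table strategy works only for the opening; it cannot scale
to whole games.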



> * > Then we can ask, what has happened to the consciousness of AlphaZero?*
>

I'm not saying  intelligent behavior creates consciousness, I'm just saying
intelligent behavior is a TEST for consciousness, and it's an imperfect one
too, but it's the only test for consciousness that we've got. I'm saying if
something displays intelligent behavior then it's intelligent and
conscious, but if something does NOT display intelligent behavior then it
may or may not be intelligent or conscious.

John K Clark    See what's on my new list at Extropolis




Re: what chatGPT is and is not

2023-05-24 Thread Brent Meeker



On 5/24/2023 10:41 AM, Stathis Papaioannou wrote:



On Thu, 25 May 2023 at 02:14, Brent Meeker  wrote:



On 5/24/2023 12:19 AM, Stathis Papaioannou wrote:



On Wed, 24 May 2023 at 15:37, Jason Resch 
wrote:



On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou
 wrote:



On Wed, 24 May 2023 at 04:03, Jason Resch
 wrote:



On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou
 wrote:



On Tue, 23 May 2023 at 21:09, Jason Resch
 wrote:

As I see this thread, Terren and Stathis are
both talking past each other. Please either
of you correct me if i am wrong, but in an
effort to clarify and perhaps resolve this
situation:

I believe Stathis is saying the functional
substitution having the same fine-grained
causal organization *would* have the same
phenomenology, the same experience, and the
same qualia as the brain with the same
fine-grained causal organization.

Therefore, there is no disagreement between
your positions with regards to symbols
groundings, mappings, etc.

When you both discuss the problem of
symbology, or bits, etc. I believe this is
partly responsible for why you are both
talking past each other, because there are
many levels involved in brains (and
computational systems). I believe you were
discussing completely different levels in the
hierarchical organization.

There are high-level parts of minds, such as
ideas, thoughts, feelings, quale, etc. and
there are low-level, be they neurons,
neurotransmitters, atoms, quantum fields, and
laws of physics as in human brains, or
circuits, logic gates, bits, and instructions
as in computers.

I think when Terren mentions a "symbol for
the smell of grandmother's kitchen" (GMK) the
trouble is we are crossing a myriad of
levels. The quale or idea or memory of the
smell of GMK is a very high-level feature of
a mind. When Terren asks for or discusses a
symbol for it, a complete answer/description
for it can only be supplied in terms of a
vast amount of information concerning low
level structures, be they patterns of neuron
firings, or patterns of bits being processed.
When we consider things down at this low
level, however, we lose all context for what
the meaning, idea, and quale are or where or
how they come in. We cannot see or find the
idea of GMK in any neuron, no more than we
can see or find it in any neuron.

Of course then it should seem deeply
mysterious, if not impossible, how we get
"it" (GMK or otherwise) from "bit", but to
me, this is no greater a leap from how we get
"it" from a bunch of cells squirting ions
back and forth. Trying to understand a
smartphone by looking at the flows of
electrons is a similar kind of problem, it
would seem just as difficult or impossible to
explain and understand the high-level
features and complexity out of the low-level
simplicity.

This is why it's crucial to bear in mind and
explicitly discuss the level one is operation
on when one discusses symbols, substrates, or
quale. In summary, I think a chief reason you
have been talking past each other is because
you are each operating on different assumed
levels.

Please correct me if you believe I am
mistaken and know I only offer my perspective
in the hope it might help the conversation.


I think you’ve captured my 

Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Thu, 25 May 2023 at 02:14, Brent Meeker  wrote:

>
>
> On 5/24/2023 12:19 AM, Stathis Papaioannou wrote:
>
>
>
> On Wed, 24 May 2023 at 15:37, Jason Resch  wrote:
>
>>
>>
>> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:
>>>


 On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou 
 wrote:

>
>
> On Tue, 23 May 2023 at 21:09, Jason Resch 
> wrote:
>
>> As I see this thread, Terren and Stathis are both talking past each
>> other. Please either of you correct me if i am wrong, but in an effort to
>> clarify and perhaps resolve this situation:
>>
>> I believe Stathis is saying the functional substitution having the
>> same fine-grained causal organization *would* have the same 
>> phenomenology,
>> the same experience, and the same qualia as the brain with the same
>> fine-grained causal organization.
>>
>> Therefore, there is no disagreement between your positions with
>> regards to symbols groundings, mappings, etc.
>>
>> When you both discuss the problem of symbology, or bits, etc. I
>> believe this is partly responsible for why you are both talking past each
>> other, because there are many levels involved in brains (and 
>> computational
>> systems). I believe you were discussing completely different levels in 
>> the
>> hierarchical organization.
>>
>> There are high-level parts of minds, such as ideas, thoughts,
>> feelings, quale, etc. and there are low-level, be they neurons,
>> neurotransmitters, atoms, quantum fields, and laws of physics as in human
>> brains, or circuits, logic gates, bits, and instructions as in computers.
>>
>> I think when Terren mentions a "symbol for the smell of grandmother's
>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
>> quale
>> or idea or memory of the smell of GMK is a very high-level feature of a
>> mind. When Terren asks for or discusses a symbol for it, a complete
>> answer/description for it can only be supplied in terms of a vast amount 
>> of
>> information concerning low level structures, be they patterns of neuron
>> firings, or patterns of bits being processed. When we consider things 
>> down
>> at this low level, however, we lose all context for what the meaning, 
>> idea,
>> and quale are or where or how they come in. We cannot see or find the 
>> idea
>> of GMK in any neuron, any more than we can see or find it in any bit.
>>
>> Of course then it should seem deeply mysterious, if not impossible,
>> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
>> greater a leap than how we get "it" from a bunch of cells squirting ions
>> back and forth. Trying to understand a smartphone by looking at the flows
>> of electrons is a similar kind of problem: it would seem just as
>> difficult
>> or impossible to explain and understand the high-level features and
>> complexity out of the low-level simplicity.
>>
>> This is why it's crucial to bear in mind and explicitly discuss the
>> level one is operating on when one discusses symbols, substrates, or
>> quale.
>> In summary, I think a chief reason you have been talking past each other 
>> is
>> because you are each operating on different assumed levels.
>>
>> Please correct me if you believe I am mistaken and know I only offer
>> my perspective in the hope it might help the conversation.
>>
>
> I think you’ve captured my position. But in addition I think
> replicating the fine-grained causal organisation is not necessary in order
> to replicate higher level phenomena such as GMK. By extension of Chalmers’
> substitution experiment,
>

 Note that Chalmers's argument is based on assuming the functional
 substitution occurs at a certain level of fine-grained-ness. If you lose
 this step, and look at only the top-most input-output of the mind as black
 box, then you can no longer distinguish a rock from a dreaming person, nor
 a calculator computing 2+3 and a human computing 2+3, and one also runs
 into the Blockhead "lookup table" argument against functionalism.

>>>
>>> Yes, those are perhaps problems with functionalism. But a major point in
>>> Chalmers' argument is that if qualia were substrate-specific (hence,
>>> functionalism false) it would be possible to make a partial zombie or an
>>> entity whose consciousness and behaviour diverged from the point the
>>> substitution was made. And this argument works not just by replacing the
>>> neurons with silicon chips, but by replacing any part of the human with
>>> anything that reproduces the interactions with the remaining parts.
>>>
>>
>>
>> How deeply do you have to go when you consider or define those 

Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Wed, 24 May 2023 at 21:56, Jason Resch  wrote:

>
>
> On Wed, May 24, 2023, 3:20 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Wed, 24 May 2023 at 15:37, Jason Resch  wrote:
>>
>>>
>>>
>>> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
>>> wrote:
>>>


 On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:

>
>
> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <
> stath...@gmail.com> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 21:09, Jason Resch 
>> wrote:
>>
>>> As I see this thread, Terren and Stathis are both talking past each
>>> other. Please either of you correct me if I am wrong, but in an effort
>>> to
>>> clarify and perhaps resolve this situation:
>>>
>>> I believe Stathis is saying the functional substitution having the
>>> same fine-grained causal organization *would* have the same 
>>> phenomenology,
>>> the same experience, and the same qualia as the brain with the same
>>> fine-grained causal organization.
>>>
>>> Therefore, there is no disagreement between your positions with
>>> regards to symbol groundings, mappings, etc.
>>>
>>> When you both discuss the problem of symbology, or bits, etc. I
>>> believe this is partly responsible for why you are both talking past 
>>> each
>>> other, because there are many levels involved in brains (and 
>>> computational
>>> systems). I believe you were discussing completely different levels in 
>>> the
>>> hierarchical organization.
>>>
>>> There are high-level parts of minds, such as ideas, thoughts,
>>> feelings, quale, etc. and there are low-level, be they neurons,
>>> neurotransmitters, atoms, quantum fields, and laws of physics as in 
>>> human
>>> brains, or circuits, logic gates, bits, and instructions as in 
>>> computers.
>>>
>>> I think when Terren mentions a "symbol for the smell of
>>> grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of
>>> levels. The quale or idea or memory of the smell of GMK is a very
>>> high-level feature of a mind. When Terren asks for or discusses a symbol
>>> for it, a complete answer/description for it can only be supplied in 
>>> terms
>>> of a vast amount of information concerning low level structures, be they
>>> patterns of neuron firings, or patterns of bits being processed. When we
>>> consider things down at this low level, however, we lose all context for
>>> what the meaning, idea, and quale are or where or how they come in. We
>>> cannot see or find the idea of GMK in any neuron, any more than we can
>>> see or find it in any bit.
>>>
>>> Of course then it should seem deeply mysterious, if not impossible,
>>> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
>>> greater a leap than how we get "it" from a bunch of cells squirting ions
>>> back and forth. Trying to understand a smartphone by looking at the 
>>> flows
>>> of electrons is a similar kind of problem: it would seem just as
>>> difficult
>>> or impossible to explain and understand the high-level features and
>>> complexity out of the low-level simplicity.
>>>
>>> This is why it's crucial to bear in mind and explicitly discuss the
>>> level one is operating on when one discusses symbols, substrates, or
>>> quale.
>>> In summary, I think a chief reason you have been talking past each 
>>> other is
>>> because you are each operating on different assumed levels.
>>>
>>> Please correct me if you believe I am mistaken and know I only offer
>>> my perspective in the hope it might help the conversation.
>>>
>>
>> I think you’ve captured my position. But in addition I think
>> replicating the fine-grained causal organisation is not necessary in 
>> order
>> to replicate higher level phenomena such as GMK. By extension of 
>> Chalmers’
>> substitution experiment,
>>
>
> Note that Chalmers's argument is based on assuming the functional
> substitution occurs at a certain level of fine-grained-ness. If you lose
> this step, and look at only the top-most input-output of the mind as black
> box, then you can no longer distinguish a rock from a dreaming person, nor
> a calculator computing 2+3 and a human computing 2+3, and one also runs
> into the Blockhead "lookup table" argument against functionalism.
>

 Yes, those are perhaps problems with functionalism. But a major point
 in Chalmers' argument is that if qualia were substrate-specific (hence,
 functionalism false) it would be possible to make a partial zombie or an
 entity whose consciousness and behaviour diverged from the point the
 substitution was made. And this argument works not just by replacing the
 neurons with silicon chips, but by replacing 

Re: what chatGPT is and is not

2023-05-24 Thread Brent Meeker



On 5/24/2023 12:19 AM, Stathis Papaioannou wrote:



On Wed, 24 May 2023 at 15:37, Jason Resch  wrote:



On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou
 wrote:



On Wed, 24 May 2023 at 04:03, Jason Resch
 wrote:



On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou
 wrote:



On Tue, 23 May 2023 at 21:09, Jason Resch
 wrote:

As I see this thread, Terren and Stathis are both
talking past each other. Please either of you
correct me if I am wrong, but in an effort to
clarify and perhaps resolve this situation:

I believe Stathis is saying the functional
substitution having the same fine-grained causal
organization *would* have the same phenomenology,
the same experience, and the same qualia as the
brain with the same fine-grained causal organization.

Therefore, there is no disagreement between your
positions with regards to symbol groundings,
mappings, etc.

When you both discuss the problem of symbology, or
bits, etc. I believe this is partly responsible
for why you are both talking past each other,
because there are many levels involved in brains
(and computational systems). I believe you were
discussing completely different levels in the
hierarchical organization.

There are high-level parts of minds, such as
ideas, thoughts, feelings, quale, etc. and there
are low-level, be they neurons, neurotransmitters,
atoms, quantum fields, and laws of physics as in
human brains, or circuits, logic gates, bits, and
instructions as in computers.

I think when Terren mentions a "symbol for the
smell of grandmother's kitchen" (GMK) the trouble
is we are crossing a myriad of levels. The quale
or idea or memory of the smell of GMK is a very
high-level feature of a mind. When Terren asks for
or discusses a symbol for it, a complete
answer/description for it can only be supplied in
terms of a vast amount of information concerning
low level structures, be they patterns of neuron
firings, or patterns of bits being processed. When
we consider things down at this low level,
however, we lose all context for what the meaning,
idea, and quale are or where or how they come in.
We cannot see or find the idea of GMK in any
neuron, any more than we can see or find it in any
bit.

Of course then it should seem deeply mysterious,
if not impossible, how we get "it" (GMK or
otherwise) from "bit", but to me, this is no
greater a leap than how we get "it" from a bunch
of cells squirting ions back and forth. Trying to
understand a smartphone by looking at the flows of
electrons is a similar kind of problem: it would
seem just as difficult or impossible to explain
and understand the high-level features and
complexity out of the low-level simplicity.

This is why it's crucial to bear in mind and
explicitly discuss the level one is operating on
when one discusses symbols, substrates, or quale.
In summary, I think a chief reason you have been
talking past each other is because you are each
operating on different assumed levels.

Please correct me if you believe I am mistaken and
know I only offer my perspective in the hope it
might help the conversation.


I think you’ve captured my position. But in addition I
think replicating the fine-grained causal organisation
is not necessary in order to replicate higher level
phenomena such as GMK. By extension of Chalmers’
substitution experiment,


Note that Chalmers's argument is based on assuming the
functional substitution occurs at a certain level of
fine-grained-ness. If you lose this step, and look at only
the top-most input-output of the mind as black box, then
you can no longer distinguish 

Fwd: what chatGPT is and is not

2023-05-24 Thread Brent Meeker




On 5/23/2023 10:37 PM, Jason Resch wrote:



On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou  
wrote:




On Wed, 24 May 2023 at 04:03, Jason Resch 
wrote:



On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou
 wrote:



On Tue, 23 May 2023 at 21:09, Jason Resch
 wrote:

As I see this thread, Terren and Stathis are both
talking past each other. Please either of you correct
me if I am wrong, but in an effort to clarify and
perhaps resolve this situation:

I believe Stathis is saying the functional
substitution having the same fine-grained causal
organization *would* have the same phenomenology, the
same experience, and the same qualia as the brain with
the same fine-grained causal organization.

Therefore, there is no disagreement between your
positions with regards to symbol groundings,
mappings, etc.

When you both discuss the problem of symbology, or
bits, etc. I believe this is partly responsible for
why you are both talking past each other, because
there are many levels involved in brains (and
computational systems). I believe you were discussing
completely different levels in the hierarchical
organization.

There are high-level parts of minds, such as ideas,
thoughts, feelings, quale, etc. and there are
low-level, be they neurons, neurotransmitters, atoms,
quantum fields, and laws of physics as in human
brains, or circuits, logic gates, bits, and
instructions as in computers.

I think when Terren mentions a "symbol for the smell
of grandmother's kitchen" (GMK) the trouble is we are
crossing a myriad of levels. The quale or idea or
memory of the smell of GMK is a very high-level
feature of a mind. When Terren asks for or discusses a
symbol for it, a complete answer/description for it
can only be supplied in terms of a vast amount of
information concerning low level structures, be they
patterns of neuron firings, or patterns of bits being
processed. When we consider things down at this low
level, however, we lose all context for what the
meaning, idea, and quale are or where or how they come
in. We cannot see or find the idea of GMK in any
neuron, any more than we can see or find it in any bit.

Of course then it should seem deeply mysterious, if
not impossible, how we get "it" (GMK or otherwise)
from "bit", but to me, this is no greater a leap from
how we get "it" from a bunch of cells squirting ions
back and forth. Trying to understand a smartphone by
looking at the flows of electrons is a similar kind of
problem: it would seem just as difficult or impossible
to explain and understand the high-level features and
complexity out of the low-level simplicity.

This is why it's crucial to bear in mind and
explicitly discuss the level one is operating on when
one discusses symbols, substrates, or quale. In
summary, I think a chief reason you have been talking
past each other is because you are each operating on
different assumed levels.

Please correct me if you believe I am mistaken and
know I only offer my perspective in the hope it might
help the conversation.


I think you’ve captured my position. But in addition I
think replicating the fine-grained causal organisation is
not necessary in order to replicate higher level phenomena
such as GMK. By extension of Chalmers’ substitution
experiment,


Note that Chalmers's argument is based on assuming the
functional substitution occurs at a certain level of
fine-grained-ness. If you lose this step, and look at only the
top-most input-output of the mind as black box, then you can
no longer distinguish a rock from a dreaming person, nor a
calculator computing 2+3 and a human computing 2+3, and one
also runs into the Blockhead "lookup table" argument against
functionalism.


Yes, those are perhaps problems with functionalism. But a major
point in Chalmers' argument is that if qualia were
substrate-specific (hence, functionalism false) it would be
possible to make a partial 

Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023, 5:35 AM John Clark  wrote:

>
> On Wed, May 24, 2023 at 1:37 AM Jason Resch  wrote:
>
> *> By substituting a recording of a computation for a computation, you
>> replace a conscious mind with a tape recording of the prior behavior of a
>> conscious mind. *
>
>
> But you'd still need a computation to find the particular tape recording
> that you need, and the larger your library of recordings the more complex
> the computation you'd need to do would be.
>
> *> This is what happens in the Blockhead thought experiment*
>
>
> And in that very silly thought experiment your library needs to contain
> every sentence that is syntactically and grammatically correct. And there
> are an astronomical number to an astronomical power of those. Even if every
> electron, proton, neutron, photon and neutrino in the observable universe
> could record 1000 million billion trillion sentences there would still be
> well over a googolplex number of sentences that remained unrecorded.
> Blockhead is just a slight variation on Searle's idiotic Chinese room.
>


It's very different.

Note that you don't need to realize or store every possible input for the
central point of Block's argument to work.

For example, let's say that AlphaZero was conscious for the purposes of
this argument. We record the response AlphaZero produces to each of the 361
possible opening moves on a Go board and store the results in a lookup
table. This table would be only a few kilobytes. Then we can ask: what has
happened to the consciousness of AlphaZero? Here we have a functionally
equivalent response for all possible second moves, but we've done away with
all the complexity of the prior computation.
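
A minimal Python sketch of that substitution (the function names and the
placeholder evaluation are mine; this is not an actual AlphaZero API):

def alphazero_reply(opening_move: int) -> int:
    # Stand-in for the real, expensive evaluation (search + network).
    return (7 * opening_move + 13) % 361

# Precompute the reply to all 361 possible opening moves (19 x 19 board):
reply_table = {move: alphazero_reply(move) for move in range(361)}

def blockhead_reply(opening_move: int) -> int:
    # Functionally identical for the second move, but none of the
    # original computation happens here -- only a table lookup.
    return reply_table[opening_move]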

What the substitution-level argument really asks is how far up in the
subroutines of a mind's program we can implement memoization (
https://en.m.wikipedia.org/wiki/Memoization ) before the result is some
kind of altered consciousness, or at least some diminished contribution to
the measure of a conscious experience (under duplicationist conceptions of
measure).
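
As a concrete (and deliberately trivial) sketch of memoization in Python --
the function and values here are placeholders of my own, not anything from
an actual cognitive architecture:

from functools import lru_cache

@lru_cache(maxsize=None)
def subroutine(x: int) -> int:
    # Placeholder for some cognitively relevant computation.
    return x * x

subroutine(12)  # first call: the computation actually runs
subroutine(12)  # repeat call: only a cache lookup runs; the original
                # computation never happens again

The question is then how high in the call tree such caching can go before
the system's experience, if any, is altered.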


Jason

>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUg29ypDhX_3sZTTLZuFdWV%2Be86WVvin9r878U%3D1XNMAxg%40mail.gmail.com.


Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023, 3:20 AM Stathis Papaioannou 
wrote:

>
>
> On Wed, 24 May 2023 at 15:37, Jason Resch  wrote:
>
>>
>>
>> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:
>>>


 On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou 
 wrote:

>
>
> On Tue, 23 May 2023 at 21:09, Jason Resch 
> wrote:
>
>> As I see this thread, Terren and Stathis are both talking past each
>> other. Please either of you correct me if I am wrong, but in an effort to
>> clarify and perhaps resolve this situation:
>>
>> I believe Stathis is saying the functional substitution having the
>> same fine-grained causal organization *would* have the same 
>> phenomenology,
>> the same experience, and the same qualia as the brain with the same
>> fine-grained causal organization.
>>
>> Therefore, there is no disagreement between your positions with
>> regards to symbol groundings, mappings, etc.
>>
>> When you both discuss the problem of symbology, or bits, etc. I
>> believe this is partly responsible for why you are both talking past each
>> other, because there are many levels involved in brains (and 
>> computational
>> systems). I believe you were discussing completely different levels in 
>> the
>> hierarchical organization.
>>
>> There are high-level parts of minds, such as ideas, thoughts,
>> feelings, quale, etc. and there are low-level, be they neurons,
>> neurotransmitters, atoms, quantum fields, and laws of physics as in human
>> brains, or circuits, logic gates, bits, and instructions as in computers.
>>
>> I think when Terren mentions a "symbol for the smell of grandmother's
>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
>> quale
>> or idea or memory of the smell of GMK is a very high-level feature of a
>> mind. When Terren asks for or discusses a symbol for it, a complete
>> answer/description for it can only be supplied in terms of a vast amount 
>> of
>> information concerning low level structures, be they patterns of neuron
>> firings, or patterns of bits being processed. When we consider things 
>> down
>> at this low level, however, we lose all context for what the meaning, 
>> idea,
>> and quale are or where or how they come in. We cannot see or find the 
>> idea
>> of GMK in any neuron, any more than we can see or find it in any bit.
>>
>> Of course then it should seem deeply mysterious, if not impossible,
>> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
>> greater a leap than how we get "it" from a bunch of cells squirting ions
>> back and forth. Trying to understand a smartphone by looking at the flows
>> of electrons is a similar kind of problem: it would seem just as
>> difficult
>> or impossible to explain and understand the high-level features and
>> complexity out of the low-level simplicity.
>>
>> This is why it's crucial to bear in mind and explicitly discuss the
>> level one is operating on when one discusses symbols, substrates, or
>> quale.
>> In summary, I think a chief reason you have been talking past each other 
>> is
>> because you are each operating on different assumed levels.
>>
>> Please correct me if you believe I am mistaken and know I only offer
>> my perspective in the hope it might help the conversation.
>>
>
> I think you’ve captured my position. But in addition I think
> replicating the fine-grained causal organisation is not necessary in order
> to replicate higher level phenomena such as GMK. By extension of Chalmers’
> substitution experiment,
>

 Note that Chalmers's argument is based on assuming the functional
 substitution occurs at a certain level of fine-grained-ness. If you lose
 this step, and look at only the top-most input-output of the mind as black
 box, then you can no longer distinguish a rock from a dreaming person, nor
 a calculator computing 2+3 and a human computing 2+3, and one also runs
 into the Blockhead "lookup table" argument against functionalism.

>>>
>>> Yes, those are perhaps problems with functionalism. But a major point in
>>> Chalmers' argument is that if qualia were substrate-specific (hence,
>>> functionalism false) it would be possible to make a partial zombie or an
>>> entity whose consciousness and behaviour diverged from the point the
>>> substitution was made. And this argument works not just by replacing the
>>> neurons with silicon chips, but by replacing any part of the human with
>>> anything that reproduces the interactions with the remaining parts.
>>>
>>
>>
>> How deeply do you have to go when you consider or define those "other
>> parts" though? That seems to be a critical 

Gravitational wave detector LIGO is back online!

2023-05-24 Thread John Clark
And now LIGO is much more sensitive so it will be able to detect about 10
times more Black Hole mergers than it was capable of doing back in 2015
when it detected its first Black Hole collision.

Gravitational wave detector LIGO is back online


John K Clark    See what's on my new list at Extropolis


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv2UcGfhmTaSLy1qE45FMZ83SNKANmfow1sY91cr5exsbQ%40mail.gmail.com.


Re: what chatGPT is and is not

2023-05-24 Thread John Clark
On Wed, May 24, 2023 at 1:37 AM Jason Resch  wrote:

*> By substituting a recording of a computation for a computation, you
> replace a conscious mind with a tape recording of the prior behavior of a
> conscious mind. *


But you'd still need a computation to find the particular tape recording
that you need, and the larger your library of recordings the more complex
the computation you'd need to do would be.

*> This is what happens in the Blockhead thought experiment*


And in that very silly thought experiment your library needs to contain
every sentence that is syntactically and grammatically correct. And there
are an astronomical number to an astronomical power of those. Even if every
electron, proton, neutron, photon and neutrino in the observable universe
could record 1000 million billion trillion sentences there would still be
well over a googolplex number of sentences that remained unrecorded.
Blockhead is just a slight variation on Searle's idiotic Chinese room.
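
A rough back-of-the-envelope in Python (the vocabulary size and sentence
lengths are illustrative assumptions, not figures from anyone's post):

vocabulary = 10**5                 # assumed rough English word count
sentences_20 = vocabulary ** 20    # 20-word strings: 10**100, a googol
sentences_30 = vocabulary ** 30    # 30-word strings: 10**150

# ~1e80 particles in the observable universe, each recording
# "1000 million billion trillion" (1e33) sentences:
capacity = 10**80 * 10**33         # 10**113 recordings in all

print(sentences_30 > capacity)     # True: even 30-word strings
                                   # outstrip total storage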

John K Clark    See what's on my new list at Extropolis


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv3YRnBkkCoi9YgyQFh126Qi4KpiYR2K5mZZc%2BgtjoncYA%40mail.gmail.com.


Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Wed, 24 May 2023 at 15:37, Jason Resch  wrote:

>
>
> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou 
>>> wrote:
>>>


 On Tue, 23 May 2023 at 21:09, Jason Resch  wrote:

> As I see this thread, Terren and Stathis are both talking past each
> other. Please either of you correct me if I am wrong, but in an effort to
> clarify and perhaps resolve this situation:
>
> I believe Stathis is saying the functional substitution having the
> same fine-grained causal organization *would* have the same phenomenology,
> the same experience, and the same qualia as the brain with the same
> fine-grained causal organization.
>
> Therefore, there is no disagreement between your positions with
> regards to symbol groundings, mappings, etc.
>
> When you both discuss the problem of symbology, or bits, etc. I
> believe this is partly responsible for why you are both talking past each
> other, because there are many levels involved in brains (and computational
> systems). I believe you were discussing completely different levels in the
> hierarchical organization.
>
> There are high-level parts of minds, such as ideas, thoughts,
> feelings, quale, etc. and there are low-level, be they neurons,
> neurotransmitters, atoms, quantum fields, and laws of physics as in human
> brains, or circuits, logic gates, bits, and instructions as in computers.
>
> I think when Terren mentions a "symbol for the smell of grandmother's
> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
> quale
> or idea or memory of the smell of GMK is a very high-level feature of a
> mind. When Terren asks for or discusses a symbol for it, a complete
> answer/description for it can only be supplied in terms of a vast amount 
> of
> information concerning low level structures, be they patterns of neuron
> firings, or patterns of bits being processed. When we consider things down
> at this low level, however, we lose all context for what the meaning, 
> idea,
> and quale are or where or how they come in. We cannot see or find the idea
> of GMK in any neuron, no more than we can see or find it in any neuron.
>
> Of course then it should seem deeply mysterious, if not impossible,
> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
> greater a leap than how we get "it" from a bunch of cells squirting ions
> back and forth. Trying to understand a smartphone by looking at the flows
> of electrons is a similar kind of problem: it would seem just as difficult
> or impossible to explain and understand the high-level features and
> complexity out of the low-level simplicity.
>
> This is why it's crucial to bear in mind and explicitly discuss the
> level one is operating on when one discusses symbols, substrates, or
> quale.
> In summary, I think a chief reason you have been talking past each other 
> is
> because you are each operating on different assumed levels.
>
> Please correct me if you believe I am mistaken and know I only offer
> my perspective in the hope it might help the conversation.
>

 I think you’ve captured my position. But in addition I think
 replicating the fine-grained causal organisation is not necessary in order
 to replicate higher level phenomena such as GMK. By extension of Chalmers’
 substitution experiment,

>>>
>>> Note that Chalmers's argument is based on assuming the functional
>>> substitution occurs at a certain level of fine-grained-ness. If you lose
>>> this step, and look at only the top-most input-output of the mind as black
>>> box, then you can no longer distinguish a rock from a dreaming person, nor
>>> a calculator computing 2+3 and a human computing 2+3, and one also runs
>>> into the Blockhead "lookup table" argument against functionalism.
>>>
>>
>> Yes, those are perhaps problems with functionalism. But a major point in
>> Chalmers' argument is that if qualia were substrate-specific (hence,
>> functionalism false) it would be possible to make a partial zombie or an
>> entity whose consciousness and behaviour diverged from the point the
>> substitution was made. And this argument works not just by replacing the
>> neurons with silicon chips, but by replacing any part of the human with
>> anything that reproduces the interactions with the remaining parts.
>>
>
>
> How deeply do you have to go when you consider or define those "other
> parts" though? That seems to be a critical but unstated assumption, and
> something that depends on how finely grained you consider the
> relevant/important parts of a brain to be.
>
> For reference, this is what Chalmers says:
>
>
> "In this paper I defend this