Re: what chatGPT is and is not

2023-05-28 Thread John Clark
On Sat, May 27, 2023 at 8:19 PM smitra  wrote:


> *> chatGPT was able to give the derivation of the moment of inertia of a
> sphere, but was unable to derive this in a much simpler way *


First of all, GPT-4 is much smarter than chatGPT, so you should try that.
And for reasons that are not entirely clear, if you find it acting dumb on
a particular problem you can often greatly improve its performance simply by
ending your request with the words "*Let's work this out in a step by step
way to be sure we have the right answer*".
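
For what it's worth, here is a minimal sketch of how one might script that
trick (an illustration only; it assumes the Python requests library, an
OpenAI API key in the OPENAI_API_KEY environment variable, and access to the
"gpt-4" model; the prompt text is just an example):

import os
import requests

prompt = ("Derive the moment of inertia of a uniform solid sphere about an "
          "axis through its center. Let's work this out in a step by step "
          "way to be sure we have the right answer.")

# Send one user message to the chat completions endpoint and print the reply.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"model": "gpt-4",
          "messages": [{"role": "user", "content": prompt}]},
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])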

Also, there are an infinite number of ways to prove a true statement. The
fact that chatGPT did not use the proof that you personally like best does
not necessarily mean it doesn't understand the concept involved, because the
shortest derivation is not necessarily the simplest if "simplest" means
easiest to understand. If that's what the word means, then "simplest" is a
subjective matter of taste. A proof that's simple for me may be confusing to
you and vice versa, even though both proofs are correct.

And by the way, currently GPT-4 is as dumb as it's ever going to be.

John K Clark    See what's on my new list at Extropolis




Re: what chatGPT is and is not

2023-05-27 Thread smitra
Indeed, and as I pointed out, it's not all that difficult to debunk the 
idea that it understands anything at all by asking simple questions that 
are not included in its database. You can test chatGPT just like you can
test students whom you suspect of having cheated on exams. You invite them for
clarification in your office, and let them do some problems in front of
you on the blackboard. If those questions are simpler than the exam 
problems and the student cannot do those, then that's a red flag.


Similarly, as discussed here, chatGPT was able to give the derivation of 
the moment of inertia of a sphere, but was unable to derive this in a 
much simpler way by invoking spherical symmetry even when given lots of 
hints. All it could do was write down the original derivation again and
then argue that the moment of inertia is the same for all axes, and that
the result is spherically symmetric. But it couldn't derive the
expression for the moment of inertia by making use of that fact (adding up
the moments of inertia about 3 orthogonal axes yields a spherically
symmetric integral that's much easier to compute). The reason it can't do
this is that the argument isn't in its database.
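
A quick symbolic check of that symmetry argument (just a sketch, using sympy
and assuming a uniform solid sphere of mass M and radius R):

import sympy as sp

r, R, M = sp.symbols('r R M', positive=True)
rho = M / (sp.Rational(4, 3) * sp.pi * R**3)   # uniform mass density

# By symmetry I_x = I_y = I_z = I, and summing the three gives
# I_x + I_y + I_z = 2 * integral of rho*r^2 over the ball, so
# I = (2/3) * integral of rho*r^2 dV, a purely radial integral.
integral_r2 = sp.integrate(rho * r**2 * 4 * sp.pi * r**2, (r, 0, R))
I = sp.Rational(2, 3) * integral_r2
print(sp.simplify(I))   # prints 2*M*R**2/5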


And there are quite a few such cases where the widely published
solution is significantly more complex than another solution that isn't
widely published and may not be in chatGPT's database. For example:


Derive that the flux of isotropic radiation incident on an area is 1/4 u 
c where u is the energy density and c the speed of light.


Standard solution: The part of the flux coming from a solid angle range 
dOmega is u c cos(theta) dOmega/(4 pi) where theta is the angle with the 
normal of the surface. Writing dOmega as sin(theta) dtheta dphi and 
integrating over the half-sphere from which the radiation can reach the 
area, yields:


Flux = u c/(4 pi) * [integral over phi from 0 to 2 pi of dphi]
                  * [integral over theta from 0 to pi/2 of sin(theta) cos(theta) dtheta]
     = 1/4 u c
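
For completeness, here is the same angular integral checked symbolically
(just a sketch of the arithmetic in the standard solution, using sympy):

import sympy as sp

u, c, theta, phi = sp.symbols('u c theta phi', positive=True)

# Flux = u c/(4 pi) * Int_0^{2 pi} dphi * Int_0^{pi/2} sin(theta) cos(theta) dtheta
flux = u * c / (4 * sp.pi) * sp.integrate(
    sp.integrate(sp.sin(theta) * sp.cos(theta), (theta, 0, sp.pi / 2)),
    (phi, 0, 2 * sp.pi))
print(flux)   # prints c*u/4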


chatGPT will probably have no problems blurting this out, because this 
can be found in almost all sources.


But the fact that the radiation is isotropic should be something we
could exploit to simplify this derivation. That's indeed possible. The
reason we couldn't in the above derivation is that we let the area be a
small flat surface, which broke the spherical symmetry. So let's fix
that:


Much simpler derivation: Consider a small sphere of radius r inside a
cavity filled with isotropic radiation. The amount of radiation
intercepted from a solid angle range dOmega around any direction is then
u c pi r^2 dOmega/(4 pi), because the radiation is intercepted by the
cross section of the sphere orthogonal to the direction the radiation is
coming from, and that's always pi r^2. Because this doesn't depend on
the direction the radiation is coming from, integrating over the solid
angle is now trivial and yields u c pi r^2. The flux intercepted by an
area element on the sphere is then obtained by dividing this by the
area 4 pi r^2 of the sphere, which gives 1/4 u c. And if that's the flux
incident on an area element of a sphere, it is also the flux through it
if the rest of the sphere weren't there.
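
And the corresponding check for the small-sphere argument (again just a
sketch in sympy, writing r_s for the small sphere's radius; the point is
that the integrand is direction-independent, so the solid-angle integral
is trivial):

import sympy as sp

u, c, rs, theta, phi = sp.symbols('u c r_s theta phi', positive=True)

# Power intercepted per unit solid angle: u c pi r_s^2 / (4 pi),
# the same for every direction.
per_solid_angle = u * c * sp.pi * rs**2 / (4 * sp.pi)
total = sp.integrate(
    sp.integrate(per_solid_angle * sp.sin(theta), (theta, 0, sp.pi)),
    (phi, 0, 2 * sp.pi))                  # equals pi * u * c * r_s**2
flux = sp.simplify(total / (4 * sp.pi * rs**2))
print(flux)                               # prints c*u/4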


chatGPT probably won't be able to present this much simpler derivation 
regardless of how many hints you give it.



Saibal







On 22-05-2023 23:56, Terren Suydam wrote:

Many, myself included, are captivated by the amazing capabilities of
chatGPT and other LLMs. They are, truly, incredible. Depending on your
definition of Turing Test, it passes with flying colors in many, many
contexts. It would take a much stricter Turing Test than we might have
imagined this time last year, before we could confidently say that
we're not talking to a human. One way to improve chatGPT's performance
on an actual Turing Test would be to slow it down, because it is too
fast to be human.

All that said, is chatGPT actually intelligent?  There's no question
that it behaves in a way that we would all agree is intelligent. The
answers it gives, and the speed with which it gives them, reflect an
intelligence that often far exceeds most if not all humans.

I know some here say intelligence is as intelligence does. Full stop,
conversation over. ChatGPT is intelligent, because it acts
intelligently.

But this is an oversimplified view!  The reason it's over-simple is
that it ignores what the source of the intelligence is. The source of
the intelligence is in the texts it's trained on. If ChatGPT was
trained on gibberish, that's what you'd get out of it. It is amazingly
similar to the Chinese Room thought experiment proposed by John
Searle. It is manipulating symbols without having any understanding of
what those symbols are. As a result, it does not and can not know if
what it's saying is correct or not. This is a well known caveat of
using LLMs.

ChatGPT, therefore, is more like a search engine that can extract the
intelligence that is already structured 

Re: what chatGPT is and is not

2023-05-25 Thread Brent Meeker



On 5/25/2023 7:04 AM, Terren Suydam wrote:


Do you have a theory for why neurology supports consciousness but
silicon circuitry cannot?


I'm agnostic about this, but that's because I no longer assume 
physicalism. For me, the hard problem signals that physicalism is 
impossible. I've argued on this list many times as a physicalist, as 
one who believes in the possibility of artificial consciousness, 
uploading, etc. I've argued that there is something it is like to be a 
cybernetic system. But at the end of it all, I just couldn't overcome 
the problem of aesthetic valence


Why would aesthetic valence be a problem for physicalism?  Even bacteria 
know enough to swim away from some chemical gradients and toward others.


Brent



Re: what chatGPT is and is not

2023-05-25 Thread Stathis Papaioannou
On Fri, 26 May 2023 at 00:21, Jason Resch  wrote:

>
>
> On Thu, May 25, 2023, 9:43 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Thu, 25 May 2023 at 21:28, Jason Resch  wrote:
>>
>>>
>>>
>>> On Thu, May 25, 2023, 12:30 AM Stathis Papaioannou 
>>> wrote:
>>>


 On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:

>
>
> On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Thu, 25 May 2023 at 11:48, Jason Resch 
>> wrote:
>>
>> >An RNG would be a bad design choice because it would be extremely
>>> unreliable. However, as a thought experiment, it could work. If the 
>>> visual
>>> cortex were removed and replaced with an RNG which for five minutes
>>> replicated the interactions with the remaining brain, the subject would
>>> behave as if they had normal vision and report that they had normal 
>>> vision,
>>> then after five minutes behave as if they were blind and report that 
>>> they
>>> were blind. It is perhaps contrary to intuition that the subject would
>>> really have visual experiences in that five minute period, but I don't
>>> think there is any other plausible explanation.
>>>
>>
>>> I think they would be a visual zombie in that five minute period,
>>> though as described they would not be able to report any difference.
>>>
>>> I think if one's entire brain were replaced by an RNG, they would be
>>> a total zombie who would fool us into thinking they were conscious and 
>>> we
>>> would not notice a difference. So by extension a brain partially 
>>> replaced
>>> by an RNG would be a partial zombie that fooled the other parts of the
>>> brain into thinking nothing was amiss.
>>>
>>
>> I think the concept of a partial zombie makes consciousness
>> nonsensical.
>>
>
> It borders on the nonsensical, but between the two bad alternatives I
> find the idea of a RNG instantiating human consciousness somewhat less
> sensical than the idea of partial zombies.
>

 If consciousness persists no matter what the brain is replaced with as
 long as the output remains the same this is consistent with the idea that
 consciousness does not reside in a particular substance (even a magical
 substance) or in a particular process.

>>>
>>> Yes but this is a somewhat crude 1960s version of functionalism, which
>>> as I described and as you recognized, is vulnerable to all kinds of
>>> attacks. Modern functionalism is about more than high level inputs and
>>> outputs, and includes causal organization and implementation details at
>>> some level (the functional substitution level).
>>>
>>> Don't read too deeply into the mathematical definition of function as
>>> simply inputs and outputs, think of it more in terms of what a mind does,
>>> rather than what a mind is, this is the thinking that led to functionalism
>>> and an acceptance of multiple realizability.
>>>
>>>
>>>
>>> This is a strange idea, but it is akin to the existence of platonic
 objects. The number three can be implemented by arranging three objects in
 a row but it does not depend on those three objects unless it is being used
 for a particular purpose, such as three beads on an abacus.

>>>
>>> Bubble sort and merge sort both compute the same thing and both have the
>>> same inputs and outputs, but they are different mathematical objects, with
>>> different behaviors, steps, subroutines and runtime efficiency.
>>>
>>>
>>>

> How would I know that I am not a visual zombie now, or a visual zombie
>> every Tuesday, Thursday and Saturday?
>>
>
> Here, we have to be careful what we mean by "I". Our own brains have
> various spheres of consciousness as demonstrated by the Wada Test: we can
> shut down one hemisphere of the brain and lose partial awareness and
> functionality such as the ability to form words and yet one remains
> conscious. I think being a partial zombie would be like that, having one's
> sphere of awareness shrink.
>

 But the subject's sphere of awareness would not shrink in the thought
 experiment,

>>>
>>> Have you ever wondered what delineates the mind from its environment?
>>> Why it is that you are not aware of my thoughts but you see me as an object
>>> that only affects your senses, even though we could represent the whole
>>> earth as one big functional system?
>>>
>>> I don't have a good answer to this question but it seems it might be a
>>> factor here. The randomly generated outputs from the RNG would seem an
>>> environmental noise/sensation coming from the outside, rather than a
>>> recursively linked and connected loop of processing as would exist in a
>>> genuinely functioning brain of two hemispheres.
>>>
>>>
>>> since by assumption their behaviour stays the same, while if their
 sphere of awareness shrank they 

Re: what chatGPT is and is not

2023-05-25 Thread Brent Meeker




On 5/25/2023 4:28 AM, Jason Resch wrote:
Have you ever wondered what delineates the mind from its environment? 
Why it is that you are not aware of my thoughts but you see me as an 
object that only affects your senses, even though we could represent 
the whole earth as one big functional system?


I don't have a good answer to this question but it seems it might be a 
factor here. The randomly generated outputs from the RNG would seem an 
environmental noise/sensation coming from the outside, rather than a 
recursively linked and connected loop of processing as would exist in 
a genuinely functioning brain of two hemispheres.


I would reject this radical output=function.  The brain evolved as 
support for the sensory systems.  It is inherently and sensitively 
engaged with the environment.  The RNG thought experiment is based on 
the idea that the brain can function in isolation, an idea supported by 
concentrating on consciousness as the main function of the brain, which 
I also reject.  Consciousness is a relatively small part of the brain's 
function, mainly concerned with communication to others.  Remember that 
the Poincaré effect was described by a great mathematician.


Brent



Re: what chatGPT is and is not

2023-05-25 Thread John Clark
On Thu, May 25, 2023 at 7:28 AM Jason Resch  wrote:

*> Have you ever wondered what delineates the mind from its environment?*
>

No.


> * > Why it is that you are not aware of my thoughts but you see me as an
> object that only affects your senses, even though we could represent the
> whole earth as one big functional system?*
>

The reason is lack of information and lack of computational resources; it's
the same reason you're not aware of the velocity of every molecule of the air
in the room you're in right now, nor can you predict what all the molecules
will be doing one hour from now, but you are aware of the air's temperature
now and can make a pretty good guess about what the temperature will be in
one hour.


> *> I don't have a good answer to this question*
>

Then how fortunate it is for you to be able to talk to me.

*> The randomly generated outputs from the RNG would seem an environmental
> noise/sensation coming from the outside, rather than a recursively linked
> and connected loop of processing *


In your ridiculous example the cause of the neuron acting the way it does is
not coming from the inside, and it does not come from the outside either,
because you claim the neuron is acting randomly and the very definition of
"random" is an event without a cause.

> *But here (almost by magic), the RNG outputs have forced the physical
> behavior of the remaining hemisphere to remain the same*
>

That is incorrect. The neuron is not behaving "*ALMOST*" magically, it
*IS* magical;
but you were the one who dreamed up this magical thought experiment, not
me.

*Arnold Zuboff has written a thought experiment to this effect.*
>

I'm not going to bother looking it up because you and I have very different
ideas about what constitutes a good thought experiment.

*> But if a theory cannot acknowledge a difference in the conscious between
> an electron and a dreaming brain inside a skull, then the theory is (in my
> opinion) operationally useless.*
>

Correct. Unless you make the unprovable assumption that intelligent
behavior implies consciousness, then EVERY consciousness theory is
operationally useless. And useless for the study of Ontology and
Epistemology too. In other words just plain useless. That's why I'm vastly
more interested in intelligence theories than consciousness theories; one
is easy to fake and the other is impossible to fake.

John K Clark    See what's on my new list at Extropolis




Re: what chatGPT is and is not

2023-05-25 Thread Jason Resch
On Thu, May 25, 2023 at 9:16 AM Terren Suydam 
wrote:

>
>
> On Tue, May 23, 2023 at 6:00 PM Jason Resch  wrote:
>
>>
>>
>> On Tue, May 23, 2023, 4:14 PM Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 2:27 PM Jason Resch 
>>> wrote:
>>>


 On Tue, May 23, 2023 at 1:15 PM Terren Suydam 
 wrote:

>
>
> On Tue, May 23, 2023 at 11:08 AM Dylan Distasio 
> wrote:
>
>
>> And yes, I'm arguing that a true simulation (let's say for the sake
>> of a thought experiment we were able to replicate every neural connection
>> of a human being in code, including the connectomes, and 
>> neurotransmitters,
>> along with a simulated nerve that was connected to a button on the desk 
>> we
>> could press which would simulate the signal sent when a biological pain
>> receptor is triggered) would feel pain that is just as real as the pain 
>> you
>> and I feel as biological organisms.
>>
>
> This follows from the physicalist no-zombies-possible stance. But it
> still runs into the hard problem, basically. How does stuff give rise to
> experience.
>
>
 I would say stuff doesn't give rise to conscious experience. Conscious
 experience is the logically necessary and required state of knowledge that
 is present in any consciousness-necessitating behaviors. If you design a
 simple robot with a camera and robot arm that is able to reliably catch a
 ball thrown in its general direction, then something in that system *must*
 contain knowledge of the ball's relative position and trajectory. It simply
 isn't logically possible to have a system that behaves in all situations as
 if it knows where the ball is, without knowing where the ball is.
 Consciousness is simply the state of being with knowledge.

 Con- "Latin for with"
 -Scious- "Latin for knowledge"
 -ness "English suffix meaning the state of being X"

 Consciousness -> The state of being with knowledge.

 There is an infinite variety of potential states and levels of
 knowledge, and this contributes to much of the confusion, but boiled down
 to the simplest essence of what is or isn't conscious, it is all about
 knowledge states. Knowledge states require activity/reactivity to the
 presence of information, and counterfactual behaviors (if/then, greater
 than less than, discriminations and comparisons that lead to different
 downstream consequences in a system's behavior). At least, this is my
 theory of consciousness.

 Jason

>>>
>>> This still runs into the valence problem though. Why does some
>>> "knowledge" correspond with a positive *feeling* and other knowledge
>>> with a negative feeling?
>>>
>>
>> That is a great question. Though I'm not sure it's fundamentally
>> insoluble within a model where every conscious state is a particular state of
>> knowledge.
>>
>> I would propose that having positive and negative experiences, i.e. pain
>> or pleasure, requires knowledge states with a certain minimum degree of
>> sophistication. For example, knowing:
>>
>> Pain being associated with knowledge states such as: "I don't like this,
>> this is bad, I'm in pain, I want to change my situation."
>>
>> Pleasure being associated with knowledge states such as: "This is good
>> for me, I could use more of this, I don't want this to end.'
>>
>> Such knowledge states require a degree of reflexive awareness, to have a
>> notion of a self where some outcomes may be either positive or negative to
>> that self, and perhaps some notion of time or a sufficient agency to be
>> able to change one's situation.
>>
>> Some have argued that plants can't feel pain because there's little they
>> can do to change their situation (though I'm agnostic on this).
>>
>>   I'm not talking about the functional accounts of positive and negative
>>> experiences. I'm talking about phenomenology. The functional aspect of it
>>> is not irrelevant, but to focus *only* on that is to sweep the feeling
>>> under the rug. So many dialogs on this topic basically terminate here,
>>> where it's just a clash of belief about the relative importance of
>>> consciousness and phenomenology as the mediator of all experience and
>>> knowledge.
>>>
>>
>> You raise important questions which no complete theory of consciousness
>> should ignore. I think one reason things break down here is because there's
>> such incredible complexity behind and underlying the states of
>> consciousness we humans perceive and no easy way to communicate all the
>> salient properties of those experiences.
>>
>> Jason
>>
>
> Thanks for that. These kinds of questions are rarely acknowledged in the
> mainstream. The problem is how much we take valence as a given, or how much
> it's conflated with its function, that most people aren't aware of how
> strange it is if you're coming from a physicalist metaphysics.  "Evolution
> did 

Re: what chatGPT is and is not

2023-05-25 Thread Jason Resch
On Thu, May 25, 2023 at 9:05 AM Terren Suydam 
wrote:

>
>
> On Tue, May 23, 2023 at 5:47 PM Jason Resch  wrote:
>
>>
>>
>> On Tue, May 23, 2023, 3:50 PM Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 1:46 PM Jason Resch 
>>> wrote:
>>>


 On Tue, May 23, 2023, 9:34 AM Terren Suydam 
 wrote:

>
>
> On Tue, May 23, 2023 at 7:09 AM Jason Resch 
> wrote:
>
>> As I see this thread, Terren and Stathis are both talking past each
>> other. Please either of you correct me if i am wrong, but in an effort to
>> clarify and perhaps resolve this situation:
>>
>> I believe Stathis is saying the functional substitution having the
>> same fine-grained causal organization *would* have the same 
>> phenomenology,
>> the same experience, and the same qualia as the brain with the same
>> fine-grained causal organization.
>>
>> Therefore, there is no disagreement between your positions with
>> regards to symbols groundings, mappings, etc.
>>
>> When you both discuss the problem of symbology, or bits, etc. I
>> believe this is partly responsible for why you are both talking past each
>> other, because there are many levels involved in brains (and 
>> computational
>> systems). I believe you were discussing completely different levels in 
>> the
>> hierarchical organization.
>>
>> There are high-level parts of minds, such as ideas, thoughts,
>> feelings, quale, etc. and there are low-level, be they neurons,
>> neurotransmitters, atoms, quantum fields, and laws of physics as in human
>> brains, or circuits, logic gates, bits, and instructions as in computers.
>>
>> I think when Terren mentions a "symbol for the smell of grandmother's
>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
>> quale
>> or idea or memory of the smell of GMK is a very high-level feature of a
>> mind. When Terren asks for or discusses a symbol for it, a complete
>> answer/description for it can only be supplied in terms of a vast amount 
>> of
>> information concerning low level structures, be they patterns of neuron
>> firings, or patterns of bits being processed. When we consider things 
>> down
>> at this low level, however, we lose all context for what the meaning, 
>> idea,
>> and quale are or where or how they come in. We cannot see or find the 
>> idea
>> of GMK in any neuron, no more than we can see or find it in any bit.
>>
>> Of course then it should seem deeply mysterious, if not impossible,
>> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
>> greater a leap from how we get "it" from a bunch of cells squirting ions
>> back and forth. Trying to understand a smartphone by looking at the flows
>> of electrons is a similar kind of problem, it would seem just as 
>> difficult
>> or impossible to explain and understand the high-level features and
>> complexity out of the low-level simplicity.
>>
>> This is why it's crucial to bear in mind and explicitly discuss the
>> level one is operating on when one discusses symbols, substrates, or 
>> quale.
>> In summary, I think a chief reason you have been talking past each other 
>> is
>> because you are each operating on different assumed levels.
>>
>> Please correct me if you believe I am mistaken and know I only offer
>> my perspective in the hope it might help the conversation.
>>
>
> I appreciate the callout, but it is necessary to talk at both the
> micro and the macro for this discussion. We're talking about symbol
> grounding. I should make it clear that I don't believe symbols can be
> grounded in other symbols (i.e. symbols all the way down as Stathis put
> it), that leads to infinite regress and the illusion of meaning.  Symbols
> ultimately must stand for something. The only thing they can stand
> *for*, ultimately, is something that cannot be communicated by other
> symbols: conscious experience. There is no concept in our brains that is
> not ultimately connected to something we've seen, heard, felt, smelled, or
> tasted.
>

 I agree everything you have experienced is rooted in consciousness.

 But at the low level, the only things your brain senses are neural
 signals (symbols, on/off, ones and zeros).

 In your arguments you rely on the high-level conscious states of human
 brains to establish that they have grounding, but then use the low-level
 descriptions of machines to deny their own consciousness, and hence deny
 they can ground their processing to anything.

 If you remained in the space of low-level descriptions for both brains
 and machine intelligences, however, you would see each struggles to make a
 connection to what may exist at the 

Re: what chatGPT is and is not

2023-05-25 Thread Jason Resch
On Thu, May 25, 2023, 9:43 AM Stathis Papaioannou 
wrote:

>
>
> On Thu, 25 May 2023 at 21:28, Jason Resch  wrote:
>
>>
>>
>> On Thu, May 25, 2023, 12:30 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:
>>>


 On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
 wrote:

>
>
> On Thu, 25 May 2023 at 11:48, Jason Resch 
> wrote:
>
> >An RNG would be a bad design choice because it would be extremely
>> unreliable. However, as a thought experiment, it could work. If the 
>> visual
>> cortex were removed and replaced with an RNG which for five minutes
>> replicated the interactions with the remaining brain, the subject would
>> behave as if they had normal vision and report that they had normal 
>> vision,
>> then after five minutes behave as if they were blind and report that they
>> were blind. It is perhaps contrary to intuition that the subject would
>> really have visual experiences in that five minute period, but I don't
>> think there is any other plausible explanation.
>>
>
>> I think they would be a visual zombie in that five minute period,
>> though as described they would not be able to report any difference.
>>
>> I think if one's entire brain were replaced by an RNG, they would be
>> a total zombie who would fool us into thinking they were conscious and we
>> would not notice a difference. So by extension a brain partially replaced
>> by an RNG would be a partial zombie that fooled the other parts of the
>> brain into thinking nothing was amiss.
>>
>
> I think the concept of a partial zombie makes consciousness
> nonsensical.
>

 It borders on the nonsensical, but between the two bad alternatives I
 find the idea of a RNG instantiating human consciousness somewhat less
 sensical than the idea of partial zombies.

>>>
>>> If consciousness persists no matter what the brain is replaced with as
>>> long as the output remains the same this is consistent with the idea that
>>> consciousness does not reside in a particular substance (even a magical
>>> substance) or in a particular process.
>>>
>>
>> Yes but this is a somewhat crude 1960s version of functionalism, which as
>> I described and as you recognized, is vulnerable to all kinds of attacks.
>> Modern functionalism is about more than high level inputs and outputs, and
>> includes causal organization and implementation details at some level (the
>> functional substitution level).
>>
>> Don't read too deeply into the mathematical definition of function as
>> simply inputs and outputs, think of it more in terms of what a mind does,
>> rather than what a mind is, this is the thinking that led to functionalism
>> and an acceptance of multiple realizability.
>>
>>
>>
>> This is a strange idea, but it is akin to the existence of platonic
>>> objects. The number three can be implemented by arranging three objects in
>>> a row but it does not depend on those three objects unless it is being used
>>> for a particular purpose, such as three beads on an abacus.
>>>
>>
>> Bubble sort and merge sort both compute the same thing and both have the
>> same inputs and outputs, but they are different mathematical objects, with
>> different behaviors, steps, subroutines and runtime efficiency.
>>
>>
>>
>>>
 How would I know that I am not a visual zombie now, or a visual zombie
> every Tuesday, Thursday and Saturday?
>

 Here, we have to be careful what we mean by "I". Our own brains have
 various spheres of consciousness as demonstrated by the Wada Test: we can
 shut down one hemisphere of the brain and lose partial awareness and
 functionality such as the ability to form words and yet one remains
 conscious. I think being a partial zombie would be like that, having one's
 sphere of awareness shrink.

>>>
>>> But the subject's sphere of awareness would not shrink in the thought
>>> experiment,
>>>
>>
>> Have you ever wondered what delineates the mind from its environment? Why
>> it is that you are not aware of my thoughts but you see me as an object
>> that only affects your senses, even though we could represent the whole
>> earth as one big functional system?
>>
>> I don't have a good answer to this question but it seems it might be a
>> factor here. The randomly generated outputs from the RNG would seem an
>> environmental noise/sensation coming from the outside, rather than a
>> recursively linked and connected loop of processing as would exist in a
>> genuinely functioning brain of two hemispheres.
>>
>>
>> since by assumption their behaviour stays the same, while if their sphere
>>> of awareness shrank they would notice that something was different and say so.
>>>
>>
>> But here (almost by magic), the RNG outputs have forced the physical
>> behavior of the remaining hemisphere to remain the same while fundamentally
>> 

Re: what chatGPT is and is not

2023-05-25 Thread Terren Suydam
On Tue, May 23, 2023 at 6:00 PM Jason Resch  wrote:

>
>
> On Tue, May 23, 2023, 4:14 PM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 2:27 PM Jason Resch  wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 1:15 PM Terren Suydam 
>>> wrote:
>>>


 On Tue, May 23, 2023 at 11:08 AM Dylan Distasio 
 wrote:


> And yes, I'm arguing that a true simulation (let's say for the sake of
> a thought experiment we were able to replicate every neural connection of 
> a
> human being in code, including the connectomes, and neurotransmitters,
> along with a simulated nerve that was connected to a button on the desk we
> could press which would simulate the signal sent when a biological pain
> receptor is triggered) would feel pain that is just as real as the pain 
> you
> and I feel as biological organisms.
>

 This follows from the physicalist no-zombies-possible stance. But it
 still runs into the hard problem, basically. How does stuff give rise to
 experience.


>>> I would say stuff doesn't give rise to conscious experience. Conscious
>>> experience is the logically necessary and required state of knowledge that
>>> is present in any consciousness-necessitating behaviors. If you design a
>>> simple robot with a camera and robot arm that is able to reliably catch a
>>> ball thrown in its general direction, then something in that system *must*
>>> contain knowledge of the ball's relative position and trajectory. It simply
>>> isn't logically possible to have a system that behaves in all situations as
>>> if it knows where the ball is, without knowing where the ball is.
>>> Consciousness is simply the state of being with knowledge.
>>>
>>> Con- "Latin for with"
>>> -Scious- "Latin for knowledge"
>>> -ness "English suffix meaning the state of being X"
>>>
>>> Consciousness -> The state of being with knowledge.
>>>
>>> There is an infinite variety of potential states and levels of
>>> knowledge, and this contributes to much of the confusion, but boiled down
>>> to the simplest essence of what is or isn't conscious, it is all about
>>> knowledge states. Knowledge states require activity/reactivity to the
>>> presence of information, and counterfactual behaviors (if/then, greater
>>> than less than, discriminations and comparisons that lead to different
>>> downstream consequences in a system's behavior). At least, this is my
>>> theory of consciousness.
>>>
>>> Jason
>>>
>>
>> This still runs into the valence problem though. Why does some
>> "knowledge" correspond with a positive *feeling* and other knowledge
>> with a negative feeling?
>>
>
> That is a great question. Though I'm not sure it's fundamentally insoluble
> within a model where every conscious state is a particular state of knowledge.
>
> I would propose that having positive and negative experiences, i.e. pain
> or pleasure, requires knowledge states with a certain minimum degree of
> sophistication. For example, knowing:
>
> Pain being associated with knowledge states such as: "I don't like this,
> this is bad, I'm in pain, I want to change my situation."
>
> Pleasure being associated with knowledge states such as: "This is good for
> me, I could use more of this, I don't want this to end.'
>
> Such knowledge states require a degree of reflexive awareness, to have a
> notion of a self where some outcomes may be either positive or negative to
> that self, and perhaps some notion of time or a sufficient agency to be
> able to change one's situation.
>
> Some have argued that plants can't feel pain because there's little they
> can do to change their situation (though I'm agnostic on this).
>
>   I'm not talking about the functional accounts of positive and negative
>> experiences. I'm talking about phenomenology. The functional aspect of it
>> is not irrelevant, but to focus *only* on that is to sweep the feeling
>> under the rug. So many dialogs on this topic basically terminate here,
>> where it's just a clash of belief about the relative importance of
>> consciousness and phenomenology as the mediator of all experience and
>> knowledge.
>>
>
> You raise important questions which no complete theory of consciousness
> should ignore. I think one reason things break down here is because there's
> such incredible complexity behind and underlying the states of
> consciousness we humans perceive and no easy way to communicate all the
> salient properties of those experiences.
>
> Jason
>

Thanks for that. These kinds of questions are rarely acknowledged in the
mainstream. The problem is that we take valence so much as a given, or
conflate it so much with its function, that most people aren't aware of how
strange it is if you're coming from a physicalist metaphysics.  "Evolution
did it" is the common refrain, but it begs the question.

With your proposal, would a bacterium potentially possess the knowledge states
required?

And the idea that plants cannot influence their 

Re: what chatGPT is and is not

2023-05-25 Thread Terren Suydam
On Tue, May 23, 2023 at 5:47 PM Jason Resch  wrote:

>
>
> On Tue, May 23, 2023, 3:50 PM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 1:46 PM Jason Resch  wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023, 9:34 AM Terren Suydam 
>>> wrote:
>>>


 On Tue, May 23, 2023 at 7:09 AM Jason Resch 
 wrote:

> As I see this thread, Terren and Stathis are both talking past each
> other. Please either of you correct me if i am wrong, but in an effort to
> clarify and perhaps resolve this situation:
>
> I believe Stathis is saying the functional substitution having the
> same fine-grained causal organization *would* have the same phenomenology,
> the same experience, and the same qualia as the brain with the same
> fine-grained causal organization.
>
> Therefore, there is no disagreement between your positions with
> regards to symbols groundings, mappings, etc.
>
> When you both discuss the problem of symbology, or bits, etc. I
> believe this is partly responsible for why you are both talking past each
> other, because there are many levels involved in brains (and computational
> systems). I believe you were discussing completely different levels in the
> hierarchical organization.
>
> There are high-level parts of minds, such as ideas, thoughts,
> feelings, quale, etc. and there are low-level, be they neurons,
> neurotransmitters, atoms, quantum fields, and laws of physics as in human
> brains, or circuits, logic gates, bits, and instructions as in computers.
>
> I think when Terren mentions a "symbol for the smell of grandmother's
> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
> quale
> or idea or memory of the smell of GMK is a very high-level feature of a
> mind. When Terren asks for or discusses a symbol for it, a complete
> answer/description for it can only be supplied in terms of a vast amount 
> of
> information concerning low level structures, be they patterns of neuron
> firings, or patterns of bits being processed. When we consider things down
> at this low level, however, we lose all context for what the meaning, 
> idea,
> and quale are or where or how they come in. We cannot see or find the idea
> of GMK in any neuron, no more than we can see or find it in any bit.
>
> Of course then it should seem deeply mysterious, if not impossible,
> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
> greater a leap from how we get "it" from a bunch of cells squirting ions
> back and forth. Trying to understand a smartphone by looking at the flows
> of electrons is a similar kind of problem, it would seem just as difficult
> or impossible to explain and understand the high-level features and
> complexity out of the low-level simplicity.
>
> This is why it's crucial to bear in mind and explicitly discuss the
> level one is operating on when one discusses symbols, substrates, or 
> quale.
> In summary, I think a chief reason you have been talking past each other 
> is
> because you are each operating on different assumed levels.
>
> Please correct me if you believe I am mistaken and know I only offer
> my perspective in the hope it might help the conversation.
>

 I appreciate the callout, but it is necessary to talk at both the micro
 and the macro for this discussion. We're talking about symbol grounding. I
 should make it clear that I don't believe symbols can be grounded in other
 symbols (i.e. symbols all the way down as Stathis put it), that leads to
 infinite regress and the illusion of meaning.  Symbols ultimately must
 stand for something. The only thing they can stand *for*, ultimately,
 is something that cannot be communicated by other symbols: conscious
 experience. There is no concept in our brains that is not ultimately
 connected to something we've seen, heard, felt, smelled, or tasted.

>>>
>>> I agree everything you have experienced is rooted in consciousness.
>>>
>>> But at the low level, the only things your brain senses are neural
>>> signals (symbols, on/off, ones and zeros).
>>>
>>> In your arguments you rely on the high-level conscious states of human
>>> brains to establish that they have grounding, but then use the low-level
>>> descriptions of machines to deny their own consciousness, and hence deny
>>> they can ground their processing to anything.
>>>
>>> If you remained in the space of low-level descriptions for both brains
>>> and machine intelligences, however, you would see each struggles to make a
>>> connection to what may exist at the high-level. You would see, the lack of
>>> any apparent grounding in what are just neurons firing or not firing at
>>> certain times. Just as a wire in a circuit either carries or doesn't carry
>>> a charge.
>>>
>>
>> 

Re: what chatGPT is and is not

2023-05-25 Thread Stathis Papaioannou
On Thu, 25 May 2023 at 21:28, Jason Resch  wrote:

>
>
> On Thu, May 25, 2023, 12:30 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:
>>
>>>
>>>
>>> On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
>>> wrote:
>>>


 On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:

 >An RNG would be a bad design choice because it would be extremely
> unreliable. However, as a thought experiment, it could work. If the visual
> cortex were removed and replaced with an RNG which for five minutes
> replicated the interactions with the remaining brain, the subject would
> behave as if they had normal vision and report that they had normal 
> vision,
> then after five minutes behave as if they were blind and report that they
> were blind. It is perhaps contrary to intuition that the subject would
> really have visual experiences in that five minute period, but I don't
> think there is any other plausible explanation.
>

> I think they would be a visual zombie in that five minute period,
> though as described they would not be able to report any difference.
>
> I think if one's entire brain were replaced by an RNG, they would be a
> total zombie who would fool us into thinking they were conscious and we
> would not notice a difference. So by extension a brain partially replaced
> by an RNG would be a partial zombie that fooled the other parts of the
> brain into thinking nothing was amiss.
>

 I think the concept of a partial zombie makes consciousness nonsensical.

>>>
>>> It borders on the nonsensical, but between the two bad alternatives I
>>> find the idea of a RNG instantiating human consciousness somewhat less
>>> sensical than the idea of partial zombies.
>>>
>>
>> If consciousness persists no matter what the brain is replaced with as
>> long as the output remains the same this is consistent with the idea that
>> consciousness does not reside in a particular substance (even a magical
>> substance) or in a particular process.
>>
>
> Yes but this is a somewhat crude 1960s version of functionalism, which as
> I described and as you recognized, is vulnerable to all kinds of attacks.
> Modern functionalism is about more than high level inputs and outputs, and
> includes causal organization and implementation details at some level (the
> functional substitution level).
>
> Don't read too deeply into the mathematical definition of function as
> simply inputs and outputs, think of it more in terms of what a mind does,
> rather than what a mind is, this is the thinking that led to functionalism
> and an acceptance of multiple realizability.
>
>
>
> This is a strange idea, but it is akin to the existence of platonic
>> objects. The number three can be implemented by arranging three objects in
>> a row but it does not depend on those three objects unless it is being used
>> for a particular purpose, such as three beads on an abacus.
>>
>
> Bubble sort and merge sort both compute the same thing and both have the
> same inputs and outputs, but they are different mathematical objects, with
> different behaviors, steps, subroutines and runtime efficiency.
>
>
>
>>
>>> How would I know that I am not a visual zombie now, or a visual zombie
 every Tuesday, Thursday and Saturday?

>>>
>>> Here, we have to be careful what we mean by "I". Our own brains have
>>> various spheres of consciousness as demonstrated by the Wada Test: we can
>>> shut down one hemisphere of the brain and lose partial awareness and
>>> functionality such as the ability to form words and yet one remains
>>> conscious. I think being a partial zombie would be like that, having one's
>>> sphere of awareness shrink.
>>>
>>
>> But the subject's sphere of awareness would not shrink in the thought
>> experiment,
>>
>
> Have you ever wondered what delineates the mind from its environment? Why
> it is that you are not aware of my thoughts but you see me as an object
> that only affects your senses, even though we could represent the whole
> earth as one big functional system?
>
> I don't have a good answer to this question but it seems it might be a
> factor here. The randomly generated outputs from the RNG would seem an
> environmental noise/sensation coming from the outside, rather than a
> recursively linked and connected loop of processing as would exist in a
> genuinely functioning brain of two hemispheres.
>
>
> since by assumption their behaviour stays the same, while if their sphere
>> of awareness shrank they would notice that something was different and say so.
>>
>
> But here (almost by magic), the RNG outputs have forced the physical
> behavior of the remaining hemisphere to remain the same while fundamentally
> altering the definition of the computation that underlies the mind.
>
> If this does not alter the consciousness, if neurons don't need to
> interact in a computationally meaningful way with other 

Re: what chatGPT is and is not

2023-05-25 Thread Jason Resch
On Thu, May 25, 2023, 12:30 AM Stathis Papaioannou 
wrote:

>
>
> On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:
>
>>
>>
>> On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:
>>>
>>> >An RNG would be a bad design choice because it would be extremely
 unreliable. However, as a thought experiment, it could work. If the visual
 cortex were removed and replaced with an RNG which for five minutes
 replicated the interactions with the remaining brain, the subject would
 behave as if they had normal vision and report that they had normal vision,
 then after five minutes behave as if they were blind and report that they
 were blind. It is perhaps contrary to intuition that the subject would
 really have visual experiences in that five minute period, but I don't
 think there is any other plausible explanation.

>>>
 I think they would be a visual zombie in that five minute period,
 though as described they would not be able to report any difference.

 I think if one's entire brain were replaced by an RNG, they would be a
 total zombie who would fool us into thinking they were conscious and we
 would not notice a difference. So by extension a brain partially replaced
 by an RNG would be a partial zombie that fooled the other parts of the
 brain into thinking nothing was amiss.

>>>
>>> I think the concept of a partial zombie makes consciousness nonsensical.
>>>
>>
>> It borders on the nonsensical, but between the two bad alternatives I
>> find the idea of a RNG instantiating human consciousness somewhat less
>> sensical than the idea of partial zombies.
>>
>
> If consciousness persists no matter what the brain is replaced with as
> long as the output remains the same this is consistent with the idea that
> consciousness does not reside in a particular substance (even a magical
> substance) or in a particular process.
>

Yes but this is a somewhat crude 1960s version of functionalism, which as I
described and as you recognized, is vulnerable to all kinds of attacks.
Modern functionalism is about more than high level inputs and outputs, and
includes causal organization and implementation details at some level (the
functional substitution level).

Don't read too deeply into the mathematical definition of function as
simply inputs and outputs, think of it more in terms of what a mind does,
rather than what a mind is, this is the thinking that led to functionalism
and an acceptance of multiple realizability.



This is a strange idea, but it is akin to the existence of platonic
> objects. The number three can be implemented by arranging three objects in
> a row but it does not depend on those three objects unless it is being used
> for a particular purpose, such as three beads on an abacus.
>

Bubble sort and merge sort both compute the same thing and both have the
same inputs and outputs, but they are different mathematical objects, with
different behaviors, steps, subroutines and runtime efficiency.
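
To make that concrete, here is a minimal sketch: both functions below compute
the same input-output mapping, yet as processes they differ in their steps,
structure and running time.

def bubble_sort(xs):
    # O(n^2) comparisons: repeatedly swap adjacent out-of-order pairs.
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def merge_sort(xs):
    # O(n log n): recursively sort the halves, then merge them.
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [5, 2, 9, 1, 5, 6]
assert bubble_sort(data) == merge_sort(data) == sorted(data)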



>
>> How would I know that I am not a visual zombie now, or a visual zombie
>>> every Tuesday, Thursday and Saturday?
>>>
>>
>> Here, we have to be careful what we mean by "I". Our own brains have
>> various spheres of consciousness as demonstrated by the Wada Test: we can
>> shut down one hemisphere of the brain and lose partial awareness and
>> functionality such as the ability to form words and yet one remains
>> conscious. I think being a partial zombie would be like that, having one's
>> sphere of awareness shrink.
>>
>
> But the subject's sphere of awareness would not shrink in the thought
> experiment,
>

Have you ever wondered what delineates the mind from its environment? Why
it is that you are not aware of my thoughts but you see me as an object
that only affects your senses, even though we could represent the whole
earth as one big functional system?

I don't have a good answer to this question but it seems it might be a
factor here. The randomly generated outputs from the RNG would seem an
environmental noise/sensation coming from the outside, rather than a
recursively linked and connected loop of processing as would exist in a
genuinely functioning brain of two hemispheres.


since by assumption their behaviour stays the same, while if their sphere
> of awareness shrank they would notice that something was different and say so.
>

But here (almost by magic), the RNG outputs have forced the physical
behavior of the remaining hemisphere to remain the same while fundamentally
altering the definition of the computation that underlies the mind.

If this does not alter the consciousness, if neurons don't need to interact
in a computationally meaningful way with other neurons, then in principle
all we need is one neuron to fire once, and this can stand for all possible
consciousness invoked by all possible minds.

Arnold Zuboff has written a thought experiment to this effect.

Re: what chatGPT is and is not

2023-05-25 Thread John Clark
On Wed, May 24, 2023 at 7:56 AM Jason Resch  wrote:

*> Can I ask you what you would believe would happen to the consciousness of
> the individual if you replaced the right hemisphere of the brain with a
> black box that interfaced identically with the left hemisphere, but
> internal to this black box is nothing but a random number generator, and it
> is only by fantastic luck that the output of the RNG happens to have caused
> it's interfacing with the left hemisphere to remain unchanged?*


If that were to happen, absolutely positively nothing would happen to the
consciousness of the individual, except that such a thing would be
astronomically unlikely (that's far too wimpy a word but it's the best I could
come up with) to occur, and even if it did, it would be astronomically squared
unlikely that such "fantastic luck" would continue and the individual would
remain conscious for another nanosecond. But I want to compete with you in
figuring out a thought experiment that is even more ridiculous than yours,
in fact I want to find one that is almost as ridiculous as the Chinese
Room. Here is my modest proposal:

Have ALL the neurons and not just half behave randomly,  and let them
produce exactly the same output that Albert Einstein's brain did, and let
them continue doing that for all of the 76 years of Albert Einstein's life.

In anticipation of your inevitable questions: Yes, that would be a
reincarnation of Albert Einstein. And yes, he would be conscious, assuming
that the original Albert Einstein was conscious and that I am not the only
conscious being in the universe. And yes, randomness producing consciousness
would be bizarre, but the bizarre is always to be expected if the starting
conditions are bizarre, and in this case your starting conditions are
bizarre to the bazaar power.  And no, something like that is not going to
happen, not for 76 years, not even for a picosecond.

John K Clark    See what's on my new list at Extropolis






Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Thu, 25 May 2023 at 14:47, Brent Meeker  wrote:

>
>
> On 5/24/2023 9:29 PM, Stathis Papaioannou wrote:
>
>
>
> On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:
>
>>
>>
>> On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:
>>>
>>> >An RNG would be a bad design choice because it would be extremely
 unreliable. However, as a thought experiment, it could work. If the visual
 cortex were removed and replaced with an RNG which for five minutes
 replicated the interactions with the remaining brain, the subject would
 behave as if they had normal vision and report that they had normal vision,
 then after five minutes behave as if they were blind and report that they
 were blind. It is perhaps contrary to intuition that the subject would
 really have visual experiences in that five minute period, but I don't
 think there is any other plausible explanation.

>>>
 I think they would be a visual zombie in that five minute period,
 though as described they would not be able to report any difference.

 I think if one's entire brain were replaced by an RNG, they would be a
 total zombie who would fool us into thinking they were conscious and we
 would not notice a difference. So by extension a brain partially replaced
 by an RNG would be a partial zombie that fooled the other parts of the
 brain into thinking nothing was amiss.

>>>
>>> I think the concept of a partial zombie makes consciousness nonsensical.
>>>
>>
>> It borders on the nonsensical, but between the two bad alternatives I
>> find the idea of a RNG instantiating human consciousness somewhat less
>> sensical than the idea of partial zombies.
>>
>
> If consciousness persists no matter what the brain is replaced with as
> long as the output remains the same this is consistent with the idea that
> consciousness does not reside in a particular substance (even a magical
> substance) or in a particular process. This is a strange idea, but it is
> akin to the existence of platonic objects. The number three can be
> implemented by arranging three objects in a row but it does not depend on
> those three objects unless it is being used for a particular purpose, such
> as three beads on an abacus.
>
>
>> How would I know that I am not a visual zombie now, or a visual zombie
>>> every Tuesday, Thursday and Saturday?
>>>
>>
>> Here, we have to be careful what we mean by "I". Our own brains have
>> various spheres of consciousness as demonstrated by the Wada Test: we can
>> shut down one hemisphere of the brain and lose partial awareness and
>> functionality such as the ability to form words and yet one remains
>> conscious. I think being a partial zombie would be like that, having one's
>> sphere of awareness shrink.
>>
>
> But the subject's sphere of awareness would not shrink in the thought
> experiment, since by assumption their behaviour stays the same, while if
> their sphere of awareness shrank they would notice that something was different
> and say so.
>
>
> Why do you think they would notice?  Color blind people don't notice they
> are color blind...until somebody tells them about it and even then they
> don't "notice" it.
>

There would be either objective or subjective evidence of a change due to
the substitution. If there is neither objective nor subjective evidence of
a change, then there is no change.


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAH%3D2ypWc%3D1gzg1LQ%3DLYdtJTUYH3anjzOSFNvP9CTeqUC8x3KQg%40mail.gmail.com.


Re: what chatGPT is and is not

2023-05-24 Thread Brent Meeker



On 5/24/2023 9:29 PM, Stathis Papaioannou wrote:



On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:



On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou
 wrote:



On Thu, 25 May 2023 at 11:48, Jason Resch
 wrote:

>An RNG would be a bad design choice because it would be
extremely unreliable. However, as a thought experiment, it
could work. If the visual cortex were removed and replaced
with an RNG which for five minutes replicated the
interactions with the remaining brain, the subject would
behave as if they had normal vision and report that they
had normal vision, then after five minutes behave as if
they were blind and report that they were blind. It is
perhaps contrary to intuition that the subject would
really have visual experiences in that five minute period,
but I don't think there is any other plausible explanation.


I think they would be a visual zombie in that five minute
period, though as described they would not be able to
report any difference.

I think if one's entire brain were replaced by an RNG,
they would be a total zombie who would fool us into
thinking they were conscious and we would not notice a
difference. So by extension a brain partially replaced by
an RNG would be a partial zombie that fooled the other
parts of the brain into thinking nothing was amiss.


I think the concept of a partial zombie makes consciousness
nonsensical.


It borders on the nonsensical, but between the two bad
alternatives I find the idea of an RNG instantiating human
consciousness somewhat less sensical than the idea of partial zombies.


If consciousness persists no matter what the brain is replaced with as 
long as the output remains the same this is consistent with the idea 
that consciousness does not reside in a particular substance (even a 
magical substance) or in a particular process. This is a strange idea, 
but it is akin to the existence of platonic objects. The number three 
can be implemented by arranging three objects in a row but it does not 
depend on those three objects unless it is being used for a particular 
purpose, such as three beads on an abacus.


How would I know that I am not a visual zombie now, or a
visual zombie every Tuesday, Thursday and Saturday?


Here, we have to be careful what we mean by "I". Our own brains
have various spheres of consciousness as demonstrated by the Wada
Test: we can shut down one hemisphere of the brain and lose
partial awareness and functionality such as the ability to form
words and yet one remains conscious. I think being a partial
zombie would be like that, having one's sphere of awareness shrink.


But the subject's sphere of awareness would not shrink in the thought 
experiment, since by assumption their behaviour stays the same, while 
if their sphere of awareness shrank they would notice that something was 
different and say so.


Why do you think they would notice?  Color blind people don't notice 
they are color blind...until somebody tells them about it and even then 
they don't "notice" it.


Brent



What is the advantage of having "real" visual experiences if
they make no objective difference and no subjective difference
either?


The advantage of real computations (which imply having real
awareness/experiences) is that real computations are more reliable
than RNGs for producing intelligent behavioral responses.


Yes, so an RNG would be a bad design choice. But the point remains 
that if the output of the system remains the same, the consciousness 
remains the same, regardless of how the system functions. The 
reasonable-sounding belief that somehow the consciousness resides in 
the brain, in particular in biochemical reactions or even in electronic 
circuits simulating the brain, is wrong.



--
Stathis Papaioannou
--
You received this message because you are subscribed to the Google 
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send 
an email to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAH%3D2ypUcTU%3D1P3bkeoki894AQ7PrLXSFH6zFXwGPVhrqwaKoYA%40mail.gmail.com 
.


--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 

Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:

>
>
> On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:
>>
>> >An RNG would be a bad design choice because it would be extremely
>>> unreliable. However, as a thought experiment, it could work. If the visual
>>> cortex were removed and replaced with an RNG which for five minutes
>>> replicated the interactions with the remaining brain, the subject would
>>> behave as if they had normal vision and report that they had normal vision,
>>> then after five minutes behave as if they were blind and report that they
>>> were blind. It is perhaps contrary to intuition that the subject would
>>> really have visual experiences in that five minute period, but I don't
>>> think there is any other plausible explanation.
>>>
>>
>>> I think they would be a visual zombie in that five minute period, though
>>> as described they would not be able to report any difference.
>>>
>>> I think if one's entire brain were replaced by an RNG, they would be a
>>> total zombie who would fool us into thinking they were conscious and we
>>> would not notice a difference. So by extension a brain partially replaced
>>> by an RNG would be a partial zombie that fooled the other parts of the
>>> brain into thinking nothing was amiss.
>>>
>>
>> I think the concept of a partial zombie makes consciousness nonsensical.
>>
>
> It borders on the nonsensical, but between the two bad alternatives I find
> the idea of an RNG instantiating human consciousness somewhat less sensical
> than the idea of partial zombies.
>

If consciousness persists no matter what the brain is replaced with, as long
as the output remains the same, this is consistent with the idea that
consciousness does not reside in a particular substance (even a magical
substance) or in a particular process. This is a strange idea, but it is
akin to the existence of platonic objects. The number three can be
implemented by arranging three objects in a row, but it does not depend on
those three objects unless it is being used for a particular purpose, such
as three beads on an abacus.


> How would I know that I am not a visual zombie now, or a visual zombie
>> every Tuesday, Thursday and Saturday?
>>
>
> Here, we have to be careful what we mean by "I". Our own brains have
> various spheres of consciousness as demonstrated by the Wada Test: we can
> shut down one hemisphere of the brain and lose partial awareness and
> functionality such as the ability to form words and yet one remains
> conscious. I think being a partial zombie would be like that, having one's
> sphere of awareness shrink.
>

But the subject's sphere of awareness would not shrink in the thought
experiment, since by assumption their behaviour stays the same, while if
their sphere of awareness shrank they would notice that something was
different and say so.


> What is the advantage of having "real" visual experiences if they make no
>> objective difference and no subjective difference either?
>>
>
> The advantage of real computations (which imply having real
> awareness/experiences) is that real computations are more reliable than
> RNGs for producing intelligent behavioral responses.
>

Yes, so an RNG would be a bad design choice. But the point remains that if
the output of the system remains the same, the consciousness remains the
same, regardless of how the system functions. The reasonable-sounding
belief that somehow the consciousness resides in the brain, in particular
in biochemical reactions or even in electronic circuits simulating the
brain, is wrong.


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAH%3D2ypUcTU%3D1P3bkeoki894AQ7PrLXSFH6zFXwGPVhrqwaKoYA%40mail.gmail.com.


Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
wrote:

>
>
> On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:
>
> >An RNG would be a bad design choice because it would be extremely
>> unreliable. However, as a thought experiment, it could work. If the visual
>> cortex were removed and replaced with an RNG which for five minutes
>> replicated the interactions with the remaining brain, the subject would
>> behave as if they had normal vision and report that they had normal vision,
>> then after five minutes behave as if they were blind and report that they
>> were blind. It is perhaps contrary to intuition that the subject would
>> really have visual experiences in that five minute period, but I don't
>> think there is any other plausible explanation.
>>
>
>> I think they would be a visual zombie in that five minute period, though
>> as described they would not be able to report any difference.
>>
>> I think if one's entire brain were replaced by an RNG, they would be a
>> total zombie who would fool us into thinking they were conscious and we
>> would not notice a difference. So by extension a brain partially replaced
>> by an RNG would be a partial zombie that fooled the other parts of the
>> brain into thinking nothing was amiss.
>>
>
> I think the concept of a partial zombie makes consciousness nonsensical.
>

It borders on the nonsensical, but between the two bad alternatives I find
the idea of an RNG instantiating human consciousness somewhat less sensical
than the idea of partial zombies.


How would I know that I am not a visual zombie now, or a visual zombie
> every Tuesday, Thursday and Saturday?
>

Here, we have to be careful what we mean by "I". Our own brains have
various spheres of consciousness as demonstrated by the Wada Test: we can
shut down one hemisphere of the brain and lose partial awareness and
functionality such as the ability to form words and yet one remains
conscious. I think being a partial zombie would be like that, having one's
sphere of awareness shrink.


What is the advantage of having "real" visual experiences if they make no
> objective difference and no subjective difference either?
>

The advantage of real computations (which imply having real
awareness/experiences) is that real computations are more reliable than
RNGs for producing intelligent behavioral responses.
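
A rough back-of-the-envelope sketch of that reliability gap, in Python; the
interface width and interaction rate below are made-up illustrative numbers,
not figures from the thought experiment:

import math

# Chance that a uniform random number generator happens to emit exactly the
# signals the replaced visual cortex would have sent, step after step.
bits_per_step = 1000          # assumed width of the module's output per step
steps = 5 * 60 * 100          # five minutes at an assumed 100 interactions/second

log10_p = -(bits_per_step * steps) * math.log10(2)
print(f"P(RNG stays correct for 5 minutes) ~ 10^{log10_p:.0f}")
# roughly 10^-9,000,000: possible in principle, which is all the thought
# experiment needs, but hopeless as an engineering choice.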

Jason

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUj5O4vwxKjOC60cORyU7qHxM5HsQo-9xDFogiYwBZ9mtA%40mail.gmail.com.


Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:

>
>
> On Wed, May 24, 2023, 9:32 PM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Thu, 25 May 2023 at 06:46, Jason Resch  wrote:
>>
>>>
>>>
>>> On Wed, May 24, 2023 at 12:20 PM Stathis Papaioannou 
>>> wrote:
>>>


 On Wed, 24 May 2023 at 21:56, Jason Resch  wrote:

>
>
> On Wed, May 24, 2023, 3:20 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Wed, 24 May 2023 at 15:37, Jason Resch 
>> wrote:
>>
>>>
>>>
>>> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou <
>>> stath...@gmail.com> wrote:
>>>


 On Wed, 24 May 2023 at 04:03, Jason Resch 
 wrote:

>
>
> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <
> stath...@gmail.com> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 21:09, Jason Resch 
>> wrote:
>>
>>> As I see this thread, Terren and Stathis are both talking past
>>> each other. Please either of you correct me if i am wrong, but in 
>>> an effort
>>> to clarify and perhaps resolve this situation:
>>>
>>> I believe Stathis is saying the functional substitution having
>>> the same fine-grained causal organization *would* have the same
>>> phenomenology, the same experience, and the same qualia as the 
>>> brain with
>>> the same fine-grained causal organization.
>>>
>>> Therefore, there is no disagreement between your positions with
>>> regards to symbols groundings, mappings, etc.
>>>
>>> When you both discuss the problem of symbology, or bits, etc. I
>>> believe this is partly responsible for why you are both talking 
>>> past each
>>> other, because there are many levels involved in brains (and 
>>> computational
>>> systems). I believe you were discussing completely different levels 
>>> in the
>>> hierarchical organization.
>>>
>>> There are high-level parts of minds, such as ideas, thoughts,
>>> feelings, quale, etc. and there are low-level, be they neurons,
>>> neurotransmitters, atoms, quantum fields, and laws of physics as in 
>>> human
>>> brains, or circuits, logic gates, bits, and instructions as in 
>>> computers.
>>>
>>> I think when Terren mentions a "symbol for the smell of
>>> grandmother's kitchen" (GMK) the trouble is we are crossing a 
>>> myriad of
>>> levels. The quale or idea or memory of the smell of GMK is a very
>>> high-level feature of a mind. When Terren asks for or discusses a 
>>> symbol
>>> for it, a complete answer/description for it can only be supplied 
>>> in terms
>>> of a vast amount of information concerning low level structures, be 
>>> they
>>> patterns of neuron firings, or patterns of bits being processed. 
>>> When we
>>> consider things down at this low level, however, we lose all 
>>> context for
>>> what the meaning, idea, and quale are or where or how they come in. 
>>> We
>>> cannot see or find the idea of GMK in any neuron, no more than we 
>>> can see
>>> or find it in any bit.
>>>
>>> Of course then it should seem deeply mysterious, if not
>>> impossible, how we get "it" (GMK or otherwise) from "bit", but to 
>>> me, this
>>> is no greater a leap from how we get "it" from a bunch of cells 
>>> squirting
>>> ions back and forth. Trying to understand a smartphone by looking 
>>> at the
>>> flows of electrons is a similar kind of problem, it would seem just 
>>> as
>>> difficult or impossible to explain and understand the high-level 
>>> features
>>> and complexity out of the low-level simplicity.
>>>
>>> This is why it's crucial to bear in mind and explicitly discuss
>>> the level one is operating on when one discusses symbols, 
>>> substrates, or
>>> quale. In summary, I think a chief reason you have been talking 
>>> past each
>>> other is because you are each operating on different assumed levels.
>>>
>>> Please correct me if you believe I am mistaken and know I only
>>> offer my perspective in the hope it might help the conversation.
>>>
>>
>> I think you’ve captured my position. But in addition I think
>> replicating the fine-grained causal organisation is not necessary in 
>> order
>> to replicate higher level phenomena such as GMK. By extension of 
>> Chalmers’
>> substitution experiment,
>>
>
> Note that Chalmers's argument is based on assuming the 

Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:

>An RNG would be a bad design choice because it would be extremely
> unreliable. However, as a thought experiment, it could work. If the visual
> cortex were removed and replaced with an RNG which for five minutes
> replicated the interactions with the remaining brain, the subject would
> behave as if they had normal vision and report that they had normal vision,
> then after five minutes behave as if they were blind and report that they
> were blind. It is perhaps contrary to intuition that the subject would
> really have visual experiences in that five minute period, but I don't
> think there is any other plausible explanation.
>

> I think they would be a visual zombie in that five minute period, though
> as described they would not be able to report any difference.
>
> I think if one's entire brain were replaced by an RNG, they would be a
> total zombie who would fool us into thinking they were conscious and we
> would not notice a difference. So by extension a brain partially replaced
> by an RNG would be a partial zombie that fooled the other parts of the
> brain into thinking nothing was amiss.
>

I think the concept of a partial zombie makes consciousness nonsensical.
How would I know that I am not a visual zombie now, or a visual zombie
every Tuesday, Thursday and Saturday? What is the advantage of having
"real" visual experiences if they make no objective difference and no
subjective difference either?


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAH%3D2ypXogfdS6mi9%3Df60U5QNcbnLaEYyp6Honrt-u8CcNWpsVw%40mail.gmail.com.


Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023, 9:32 PM Stathis Papaioannou 
wrote:

>
>
> On Thu, 25 May 2023 at 06:46, Jason Resch  wrote:
>
>>
>>
>> On Wed, May 24, 2023 at 12:20 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Wed, 24 May 2023 at 21:56, Jason Resch  wrote:
>>>


 On Wed, May 24, 2023, 3:20 AM Stathis Papaioannou 
 wrote:

>
>
> On Wed, 24 May 2023 at 15:37, Jason Resch 
> wrote:
>
>>
>>
>> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Wed, 24 May 2023 at 04:03, Jason Resch 
>>> wrote:
>>>


 On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <
 stath...@gmail.com> wrote:

>
>
> On Tue, 23 May 2023 at 21:09, Jason Resch 
> wrote:
>
>> As I see this thread, Terren and Stathis are both talking past
>> each other. Please either of you correct me if i am wrong, but in an 
>> effort
>> to clarify and perhaps resolve this situation:
>>
>> I believe Stathis is saying the functional substitution having
>> the same fine-grained causal organization *would* have the same
>> phenomenology, the same experience, and the same qualia as the brain 
>> with
>> the same fine-grained causal organization.
>>
>> Therefore, there is no disagreement between your positions with
>> regards to symbols groundings, mappings, etc.
>>
>> When you both discuss the problem of symbology, or bits, etc. I
>> believe this is partly responsible for why you are both talking past 
>> each
>> other, because there are many levels involved in brains (and 
>> computational
>> systems). I believe you were discussing completely different levels 
>> in the
>> hierarchical organization.
>>
>> There are high-level parts of minds, such as ideas, thoughts,
>> feelings, quale, etc. and there are low-level, be they neurons,
>> neurotransmitters, atoms, quantum fields, and laws of physics as in 
>> human
>> brains, or circuits, logic gates, bits, and instructions as in 
>> computers.
>>
>> I think when Terren mentions a "symbol for the smell of
>> grandmother's kitchen" (GMK) the trouble is we are crossing a myriad 
>> of
>> levels. The quale or idea or memory of the smell of GMK is a very
>> high-level feature of a mind. When Terren asks for or discusses a 
>> symbol
>> for it, a complete answer/description for it can only be supplied in 
>> terms
>> of a vast amount of information concerning low level structures, be 
>> they
>> patterns of neuron firings, or patterns of bits being processed. 
>> When we
>> consider things down at this low level, however, we lose all context 
>> for
>> what the meaning, idea, and quale are or where or how they come in. 
>> We
>> cannot see or find the idea of GMK in any neuron, no more than we 
>> can see
>> or find it in any bit.
>>
>> Of course then it should seem deeply mysterious, if not
>> impossible, how we get "it" (GMK or otherwise) from "bit", but to 
>> me, this
>> is no greater a leap from how we get "it" from a bunch of cells 
>> squirting
>> ions back and forth. Trying to understand a smartphone by looking at 
>> the
>> flows of electrons is a similar kind of problem, it would seem just 
>> as
>> difficult or impossible to explain and understand the high-level 
>> features
>> and complexity out of the low-level simplicity.
>>
>> This is why it's crucial to bear in mind and explicitly discuss
>> the level one is operating on when one discusses symbols, 
>> substrates, or
>> quale. In summary, I think a chief reason you have been talking past 
>> each
>> other is because you are each operating on different assumed levels.
>>
>> Please correct me if you believe I am mistaken and know I only
>> offer my perspective in the hope it might help the conversation.
>>
>
> I think you’ve captured my position. But in addition I think
> replicating the fine-grained causal organisation is not necessary in 
> order
> to replicate higher level phenomena such as GMK. By extension of 
> Chalmers’
> substitution experiment,
>

 Note that Chalmers's argument is based on assuming the functional
 substitution occurs at a certain level of fine-grained-ness. If you 
 lose
 this step, and look at only the top-most input-output of the mind as 
 black
 box, 

Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Thu, 25 May 2023 at 06:46, Jason Resch  wrote:

>
>
> On Wed, May 24, 2023 at 12:20 PM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Wed, 24 May 2023 at 21:56, Jason Resch  wrote:
>>
>>>
>>>
>>> On Wed, May 24, 2023, 3:20 AM Stathis Papaioannou 
>>> wrote:
>>>


 On Wed, 24 May 2023 at 15:37, Jason Resch  wrote:

>
>
> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Wed, 24 May 2023 at 04:03, Jason Resch 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <
>>> stath...@gmail.com> wrote:
>>>


 On Tue, 23 May 2023 at 21:09, Jason Resch 
 wrote:

> As I see this thread, Terren and Stathis are both talking past
> each other. Please either of you correct me if i am wrong, but in an 
> effort
> to clarify and perhaps resolve this situation:
>
> I believe Stathis is saying the functional substitution having the
> same fine-grained causal organization *would* have the same 
> phenomenology,
> the same experience, and the same qualia as the brain with the same
> fine-grained causal organization.
>
> Therefore, there is no disagreement between your positions with
> regards to symbols groundings, mappings, etc.
>
> When you both discuss the problem of symbology, or bits, etc. I
> believe this is partly responsible for why you are both talking past 
> each
> other, because there are many levels involved in brains (and 
> computational
> systems). I believe you were discussing completely different levels 
> in the
> hierarchical organization.
>
> There are high-level parts of minds, such as ideas, thoughts,
> feelings, quale, etc. and there are low-level, be they neurons,
> neurotransmitters, atoms, quantum fields, and laws of physics as in 
> human
> brains, or circuits, logic gates, bits, and instructions as in 
> computers.
>
> I think when Terren mentions a "symbol for the smell of
> grandmother's kitchen" (GMK) the trouble is we are crossing a myriad 
> of
> levels. The quale or idea or memory of the smell of GMK is a very
> high-level feature of a mind. When Terren asks for or discusses a 
> symbol
> for it, a complete answer/description for it can only be supplied in 
> terms
> of a vast amount of information concerning low level structures, be 
> they
> patterns of neuron firings, or patterns of bits being processed. When 
> we
> consider things down at this low level, however, we lose all context 
> for
> what the meaning, idea, and quale are or where or how they come in. We
> cannot see or find the idea of GMK in any neuron, no more than we can 
> see
> or find it in any bit.
>
> Of course then it should seem deeply mysterious, if not
> impossible, how we get "it" (GMK or otherwise) from "bit", but to me, 
> this
> is no greater a leap from how we get "it" from a bunch of cells 
> squirting
> ions back and forth. Trying to understand a smartphone by looking at 
> the
> flows of electrons is a similar kind of problem, it would seem just as
> difficult or impossible to explain and understand the high-level 
> features
> and complexity out of the low-level simplicity.
>
> This is why it's crucial to bear in mind and explicitly discuss
> the level one is operating on when one discusses symbols, substrates, 
> or
> quale. In summary, I think a chief reason you have been talking past 
> each
> other is because you are each operating on different assumed levels.
>
> Please correct me if you believe I am mistaken and know I only
> offer my perspective in the hope it might help the conversation.
>

 I think you’ve captured my position. But in addition I think
 replicating the fine-grained causal organisation is not necessary in 
 order
 to replicate higher level phenomena such as GMK. By extension of 
 Chalmers’
 substitution experiment,

>>>
>>> Note that Chalmers's argument is based on assuming the functional
>>> substitution occurs at a certain level of fine-grained-ness. If you lose
>>> this step, and look at only the top-most input-output of the mind as 
>>> black
>>> box, then you can no longer distinguish a rock from a dreaming person, 
>>> nor
>>> a calculator computing 2+3 and a human computing 2+3, and one also runs
>>> into the Blockhead "lookup table" argument against 

Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023 at 12:20 PM Stathis Papaioannou 
wrote:

>
>
> On Wed, 24 May 2023 at 21:56, Jason Resch  wrote:
>
>>
>>
>> On Wed, May 24, 2023, 3:20 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Wed, 24 May 2023 at 15:37, Jason Resch  wrote:
>>>


 On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
 wrote:

>
>
> On Wed, 24 May 2023 at 04:03, Jason Resch 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <
>> stath...@gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 21:09, Jason Resch 
>>> wrote:
>>>
 As I see this thread, Terren and Stathis are both talking past each
 other. Please either of you correct me if i am wrong, but in an effort 
 to
 clarify and perhaps resolve this situation:

 I believe Stathis is saying the functional substitution having the
 same fine-grained causal organization *would* have the same 
 phenomenology,
 the same experience, and the same qualia as the brain with the same
 fine-grained causal organization.

 Therefore, there is no disagreement between your positions with
 regards to symbols groundings, mappings, etc.

 When you both discuss the problem of symbology, or bits, etc. I
 believe this is partly responsible for why you are both talking past 
 each
 other, because there are many levels involved in brains (and 
 computational
 systems). I believe you were discussing completely different levels in 
 the
 hierarchical organization.

 There are high-level parts of minds, such as ideas, thoughts,
 feelings, quale, etc. and there are low-level, be they neurons,
 neurotransmitters, atoms, quantum fields, and laws of physics as in 
 human
 brains, or circuits, logic gates, bits, and instructions as in 
 computers.

 I think when Terren mentions a "symbol for the smell of
 grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of
 levels. The quale or idea or memory of the smell of GMK is a very
 high-level feature of a mind. When Terren asks for or discusses a 
 symbol
 for it, a complete answer/description for it can only be supplied in 
 terms
 of a vast amount of information concerning low level structures, be 
 they
 patterns of neuron firings, or patterns of bits being processed. When 
 we
 consider things down at this low level, however, we lose all context 
 for
 what the meaning, idea, and quale are or where or how they come in. We
 cannot see or find the idea of GMK in any neuron, no more than we can 
 see
 or find it in any bit.

 Of course then it should seem deeply mysterious, if not impossible,
 how we get "it" (GMK or otherwise) from "bit", but to me, this is no
 greater a leap from how we get "it" from a bunch of cells squirting 
 ions
 back and forth. Trying to understand a smartphone by looking at the 
 flows
 of electrons is a similar kind of problem, it would seem just as 
 difficult
 or impossible to explain and understand the high-level features and
 complexity out of the low-level simplicity.

 This is why it's crucial to bear in mind and explicitly discuss the
 level one is operating on when one discusses symbols, substrates, or 
 quale.
 In summary, I think a chief reason you have been talking past each 
 other is
 because you are each operating on different assumed levels.

 Please correct me if you believe I am mistaken and know I only
 offer my perspective in the hope it might help the conversation.

>>>
>>> I think you’ve captured my position. But in addition I think
>>> replicating the fine-grained causal organisation is not necessary in 
>>> order
>>> to replicate higher level phenomena such as GMK. By extension of 
>>> Chalmers’
>>> substitution experiment,
>>>
>>
>> Note that Chalmers's argument is based on assuming the functional
>> substitution occurs at a certain level of fine-grained-ness. If you lose
>> this step, and look at only the top-most input-output of the mind as 
>> black
>> box, then you can no longer distinguish a rock from a dreaming person, 
>> nor
>> a calculator computing 2+3 and a human computing 2+3, and one also runs
>> into the Blockhead "lookup table" argument against functionalism.
>>
>
> Yes, those are perhaps problems with functionalism. But a major point
> in Chalmers' argument is that if qualia were substrate-specific (hence,
> functionalism false) it 

Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023 at 11:12 AM Brent Meeker  wrote:

>
>
>
> On 5/23/2023 10:37 PM, Jason Resch wrote:
>
>
>
> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou 
>>> wrote:
>>>


 On Tue, 23 May 2023 at 21:09, Jason Resch  wrote:

> As I see this thread, Terren and Stathis are both talking past each
> other. Please either of you correct me if i am wrong, but in an effort to
> clarify and perhaps resolve this situation:
>
> I believe Stathis is saying the functional substitution having the
> same fine-grained causal organization *would* have the same phenomenology,
> the same experience, and the same qualia as the brain with the same
> fine-grained causal organization.
>
> Therefore, there is no disagreement between your positions with
> regards to symbols groundings, mappings, etc.
>
> When you both discuss the problem of symbology, or bits, etc. I
> believe this is partly responsible for why you are both talking past each
> other, because there are many levels involved in brains (and computational
> systems). I believe you were discussing completely different levels in the
> hierarchical organization.
>
> There are high-level parts of minds, such as ideas, thoughts,
> feelings, quale, etc. and there are low-level, be they neurons,
> neurotransmitters, atoms, quantum fields, and laws of physics as in human
> brains, or circuits, logic gates, bits, and instructions as in computers.
>
> I think when Terren mentions a "symbol for the smell of grandmother's
> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
> quale
> or idea or memory of the smell of GMK is a very high-level feature of a
> mind. When Terren asks for or discusses a symbol for it, a complete
> answer/description for it can only be supplied in terms of a vast amount 
> of
> information concerning low level structures, be they patterns of neuron
> firings, or patterns of bits being processed. When we consider things down
> at this low level, however, we lose all context for what the meaning, 
> idea,
> and quale are or where or how they come in. We cannot see or find the idea
> of GMK in any neuron, no more than we can see or find it in any bit.
>
> Of course then it should seem deeply mysterious, if not impossible,
> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
> greater a leap from how we get "it" from a bunch of cells squirting ions
> back and forth. Trying to understand a smartphone by looking at the flows
> of electrons is a similar kind of problem, it would seem just as difficult
> or impossible to explain and understand the high-level features and
> complexity out of the low-level simplicity.
>
> This is why it's crucial to bear in mind and explicitly discuss the
> level one is operating on when one discusses symbols, substrates, or 
> quale.
> In summary, I think a chief reason you have been talking past each other 
> is
> because you are each operating on different assumed levels.
>
> Please correct me if you believe I am mistaken and know I only offer
> my perspective in the hope it might help the conversation.
>

 I think you’ve captured my position. But in addition I think
 replicating the fine-grained causal organisation is not necessary in order
 to replicate higher level phenomena such as GMK. By extension of Chalmers’
 substitution experiment,

>>>
>>> Note that Chalmers's argument is based on assuming the functional
>>> substitution occurs at a certain level of fine-grained-ness. If you lose
>>> this step, and look at only the top-most input-output of the mind as black
>>> box, then you can no longer distinguish a rock from a dreaming person, nor
>>> a calculator computing 2+3 and a human computing 2+3, and one also runs
>>> into the Blockhead "lookup table" argument against functionalism.
>>>
>>
>> Yes, those are perhaps problems with functionalism. But a major point in
>> Chalmers' argument is that if qualia were substrate-specific (hence,
>> functionalism false) it would be possible to make a partial zombie or an
>> entity whose consciousness and behaviour diverged from the point the
>> substitution was made. And this argument works not just by replacing the
>> neurons with silicon chips, but by replacing any part of the human with
>> anything that reproduces the interactions with the remaining parts.
>>
>
>
> How deeply do you have to go when you consider or define those "other
> parts" though? That seems to be a critical but unstated assumption, and
> something that depends on how finely grained you consider the
> relevant/important parts of a brain to be.
>
> For reference, this is 

Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Thu, 25 May 2023 at 04:08, Brent Meeker  wrote:

>
>
> On 5/24/2023 10:41 AM, Stathis Papaioannou wrote:
>
>
>
> On Thu, 25 May 2023 at 02:14, Brent Meeker  wrote:
>
>>
>>
>> On 5/24/2023 12:19 AM, Stathis Papaioannou wrote:
>>
>>
>>
>> On Wed, 24 May 2023 at 15:37, Jason Resch  wrote:
>>
>>>
>>>
>>> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
>>> wrote:
>>>


 On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:

>
>
> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <
> stath...@gmail.com> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 21:09, Jason Resch 
>> wrote:
>>
>>> As I see this thread, Terren and Stathis are both talking past each
>>> other. Please either of you correct me if i am wrong, but in an effort 
>>> to
>>> clarify and perhaps resolve this situation:
>>>
>>> I believe Stathis is saying the functional substitution having the
>>> same fine-grained causal organization *would* have the same 
>>> phenomenology,
>>> the same experience, and the same qualia as the brain with the same
>>> fine-grained causal organization.
>>>
>>> Therefore, there is no disagreement between your positions with
>>> regards to symbols groundings, mappings, etc.
>>>
>>> When you both discuss the problem of symbology, or bits, etc. I
>>> believe this is partly responsible for why you are both talking past 
>>> each
>>> other, because there are many levels involved in brains (and 
>>> computational
>>> systems). I believe you were discussing completely different levels in 
>>> the
>>> hierarchical organization.
>>>
>>> There are high-level parts of minds, such as ideas, thoughts,
>>> feelings, quale, etc. and there are low-level, be they neurons,
>>> neurotransmitters, atoms, quantum fields, and laws of physics as in 
>>> human
>>> brains, or circuits, logic gates, bits, and instructions as in 
>>> computers.
>>>
>>> I think when Terren mentions a "symbol for the smell of
>>> grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of
>>> levels. The quale or idea or memory of the smell of GMK is a very
>>> high-level feature of a mind. When Terren asks for or discusses a symbol
>>> for it, a complete answer/description for it can only be supplied in 
>>> terms
>>> of a vast amount of information concerning low level structures, be they
>>> patterns of neuron firings, or patterns of bits being processed. When we
>>> consider things down at this low level, however, we lose all context for
>>> what the meaning, idea, and quale are or where or how they come in. We
>>> cannot see or find the idea of GMK in any neuron, no more than we can 
>>> see
>>> or find it in any bit.
>>>
>>> Of course then it should seem deeply mysterious, if not impossible,
>>> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
>>> greater a leap from how we get "it" from a bunch of cells squirting ions
>>> back and forth. Trying to understand a smartphone by looking at the 
>>> flows
>>> of electrons is a similar kind of problem, it would seem just as 
>>> difficult
>>> or impossible to explain and understand the high-level features and
>>> complexity out of the low-level simplicity.
>>>
>>> This is why it's crucial to bear in mind and explicitly discuss the
>>> level one is operating on when one discusses symbols, substrates, or 
>>> quale.
>>> In summary, I think a chief reason you have been talking past each 
>>> other is
>>> because you are each operating on different assumed levels.
>>>
>>> Please correct me if you believe I am mistaken and know I only offer
>>> my perspective in the hope it might help the conversation.
>>>
>>
>> I think you’ve captured my position. But in addition I think
>> replicating the fine-grained causal organisation is not necessary in 
>> order
>> to replicate higher level phenomena such as GMK. By extension of 
>> Chalmers’
>> substitution experiment,
>>
>
> Note that Chalmers's argument is based on assuming the functional
> substitution occurs at a certain level of fine-grained-ness. If you lose
> this step, and look at only the top-most input-output of the mind as black
> box, then you can no longer distinguish a rock from a dreaming person, nor
> a calculator computing 2+3 and a human computing 2+3, and one also runs
> into the Blockhead "lookup table" argument against functionalism.
>

 Yes, those are perhaps problems with functionalism. But a major point
 in Chalmers' argument is that if qualia were substrate-specific (hence,
 functionalism false) it would be possible to make a partial zombie or an
 entity whose consciousness and behaviour diverged from the point the
 substitution 

Re: what chatGPT is and is not

2023-05-24 Thread John Clark
On Wed, May 24, 2023 at 8:07 AM Jason Resch  wrote:

>> But you'd still need a computation to find the particular tape recording
>> that you need, and the larger your library of recordings the more complex
>> the computation you'd need to do would be. And in that very silly
>> thought experiment your library needs to contain every sentence that is
>> syntactically and grammatically correct. And there are an astronomical
>> number to an astronomical power of those. Even if every electron, proton,
>> neutron, photon and neutrino in the observable universe could record
>> 1000 million billion trillion sentences there would still be well over a
>> googolplex number of sentences that remained unrecorded.  Blockhead is just
>> a slight variation on Searle's idiotic Chinese room.
>>
>
>
> *> It's very different. Note that you don't need to realize or store every
> possible input for the central point of Block's argument to work.*
> *For example, let's say that AlphaZero was conscious for the purposes of
> this argument. We record the response AlphaZero produces to each of the 361
> possible opening moves on a Go board and store the results in a lookup
> table. This table would be only a few kilobytes.*
>
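
A quick sanity check of the few-kilobyte figure in the quoted paragraph, as a
Python sketch; the two-bytes-per-move encoding and the stand-in replies are
assumptions for illustration only, not anything specified in the thread:

# One stored reply for each of the 361 possible opening moves on a 19x19 board.
openings = [(x, y) for x in range(19) for y in range(19)]   # 361 opening moves
lookup = {move: ("reply", move) for move in openings}       # stand-in for AlphaZero's replies

bytes_per_entry = 2 + 2   # assumed: 2 bytes for the key move + 2 bytes for the stored reply
print(len(lookup), "entries, about", len(lookup) * bytes_per_entry, "bytes")
# 361 entries, about 1444 bytes: comfortably within "a few kilobytes".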

Nobody in their right mind would conclude that AlphaZero is intelligent or
conscious after just watching the opening move, but after watching an
entire game it is another matter: a typical game of Go has about 150 moves,
there are roughly 10^360 different 150-move Go games, and there are only
about 10^78 atoms in the observable universe. And the number of possible
responses that GPT-4 can produce is *VASTLY* greater than 10^360.
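
For what it's worth, the 10^360 figure above can be reproduced with a one-line
estimate in Python; the average branching factor of about 250 legal moves per
position is an assumed round number, not something given in the thread:

import math

moves = 150        # length of a typical game, per the paragraph above
branching = 250    # assumed average number of legal moves per position
log10_games = moves * math.log10(branching)
print(f"about 10^{log10_games:.0f} distinct {moves}-move Go games")     # ~10^360
print(f"about 10^{log10_games - 78:.0f} games per atom in the observable universe")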



> * > Then we can ask, what has happened to the consciousness of AlphaZero?*
>

I'm not saying intelligent behavior creates consciousness; I'm just saying
intelligent behavior is a TEST for consciousness, and it's an imperfect one
too, but it's the only test for consciousness that we've got. I'm saying
that if something displays intelligent behavior then it's intelligent and
conscious, but if something does NOT display intelligent behavior then it
may or may not be intelligent or conscious.

John K Clark    See what's on my new list at Extropolis

nic

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv0fbSeBx2EkA-%2BfOvuxqTnS9yn-1ZJ3K8ixtQesRCNQ6w%40mail.gmail.com.


Re: what chatGPT is and is not

2023-05-24 Thread Brent Meeker



On 5/24/2023 10:41 AM, Stathis Papaioannou wrote:



On Thu, 25 May 2023 at 02:14, Brent Meeker  wrote:



On 5/24/2023 12:19 AM, Stathis Papaioannou wrote:



On Wed, 24 May 2023 at 15:37, Jason Resch 
wrote:



On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou
 wrote:



On Wed, 24 May 2023 at 04:03, Jason Resch
 wrote:



On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou
 wrote:



On Tue, 23 May 2023 at 21:09, Jason Resch
 wrote:

As I see this thread, Terren and Stathis are
both talking past each other. Please either
of you correct me if i am wrong, but in an
effort to clarify and perhaps resolve this
situation:

I believe Stathis is saying the functional
substitution having the same fine-grained
causal organization *would* have the same
phenomenology, the same experience, and the
same qualia as the brain with the same
fine-grained causal organization.

Therefore, there is no disagreement between
your positions with regards to symbols
groundings, mappings, etc.

When you both discuss the problem of
symbology, or bits, etc. I believe this is
partly responsible for why you are both
talking past each other, because there are
many levels involved in brains (and
computational systems). I believe you were
discussing completely different levels in the
hierarchical organization.

There are high-level parts of minds, such as
ideas, thoughts, feelings, quale, etc. and
there are low-level, be they neurons,
neurotransmitters, atoms, quantum fields, and
laws of physics as in human brains, or
circuits, logic gates, bits, and instructions
as in computers.

I think when Terren mentions a "symbol for
the smell of grandmother's kitchen" (GMK) the
trouble is we are crossing a myriad of
levels. The quale or idea or memory of the
smell of GMK is a very high-level feature of
a mind. When Terren asks for or discusses a
symbol for it, a complete answer/description
for it can only be supplied in terms of a
vast amount of information concerning low
level structures, be they patterns of neuron
firings, or patterns of bits being processed.
When we consider things down at this low
level, however, we lose all context for what
the meaning, idea, and quale are or where or
how they come in. We cannot see or find the
idea of GMK in any neuron, no more than we
can see or find it in any bit.

Of course then it should seem deeply
mysterious, if not impossible, how we get
"it" (GMK or otherwise) from "bit", but to
me, this is no greater a leap from how we get
"it" from a bunch of cells squirting ions
back and forth. Trying to understand a
smartphone by looking at the flows of
electrons is a similar kind of problem, it
would seem just as difficult or impossible to
explain and understand the high-level
features and complexity out of the low-level
simplicity.

This is why it's crucial to bear in mind and
explicitly discuss the level one is operating
on when one discusses symbols, substrates, or
quale. In summary, I think a chief reason you
have been talking past each other is because
you are each operating on different assumed
levels.

Please correct me if you believe I am
mistaken and know I only offer my perspective
in the hope it might help the conversation.


I think you’ve captured my 

Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Thu, 25 May 2023 at 02:14, Brent Meeker  wrote:

>
>
> On 5/24/2023 12:19 AM, Stathis Papaioannou wrote:
>
>
>
> On Wed, 24 May 2023 at 15:37, Jason Resch  wrote:
>
>>
>>
>> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:
>>>


 On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou 
 wrote:

>
>
> On Tue, 23 May 2023 at 21:09, Jason Resch 
> wrote:
>
>> As I see this thread, Terren and Stathis are both talking past each
>> other. Please either of you correct me if i am wrong, but in an effort to
>> clarify and perhaps resolve this situation:
>>
>> I believe Stathis is saying the functional substitution having the
>> same fine-grained causal organization *would* have the same 
>> phenomenology,
>> the same experience, and the same qualia as the brain with the same
>> fine-grained causal organization.
>>
>> Therefore, there is no disagreement between your positions with
>> regards to symbols groundings, mappings, etc.
>>
>> When you both discuss the problem of symbology, or bits, etc. I
>> believe this is partly responsible for why you are both talking past each
>> other, because there are many levels involved in brains (and 
>> computational
>> systems). I believe you were discussing completely different levels in 
>> the
>> hierarchical organization.
>>
>> There are high-level parts of minds, such as ideas, thoughts,
>> feelings, quale, etc. and there are low-level, be they neurons,
>> neurotransmitters, atoms, quantum fields, and laws of physics as in human
>> brains, or circuits, logic gates, bits, and instructions as in computers.
>>
>> I think when Terren mentions a "symbol for the smell of grandmother's
>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
>> quale
>> or idea or memory of the smell of GMK is a very high-level feature of a
>> mind. When Terren asks for or discusses a symbol for it, a complete
>> answer/description for it can only be supplied in terms of a vast amount 
>> of
>> information concerning low level structures, be they patterns of neuron
>> firings, or patterns of bits being processed. When we consider things 
>> down
>> at this low level, however, we lose all context for what the meaning, 
>> idea,
>> and quale are or where or how they come in. We cannot see or find the 
>> idea
>> of GMK in any neuron, no more than we can see or find it in any neuron.
>>
>> Of course then it should seem deeply mysterious, if not impossible,
>> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
>> greater a leap from how we get "it" from a bunch of cells squirting ions
>> back and forth. Trying to understand a smartphone by looking at the flows
>> of electrons is a similar kind of problem, it would seem just as 
>> difficult
>> or impossible to explain and understand the high-level features and
>> complexity out of the low-level simplicity.
>>
>> This is why it's crucial to bear in mind and explicitly discuss the
>> level one is operation on when one discusses symbols, substrates, or 
>> quale.
>> In summary, I think a chief reason you have been talking past each other 
>> is
>> because you are each operating on different assumed levels.
>>
>> Please correct me if you believe I am mistaken and know I only offer
>> my perspective in the hope it might help the conversation.
>>
>
> I think you’ve captured my position. But in addition I think
> replicating the fine-grained causal organisation is not necessary in order
> to replicate higher level phenomena such as GMK. By extension of Chalmers’
> substitution experiment,
>

 Note that Chalmers's argument is based on assuming the functional
 substitution occurs at a certain level of fine-grained-ness. If you lose
 this step, and look at only the top-most input-output of the mind as black
 box, then you can no longer distinguish a rock from a dreaming person, nor
 a calculator computing 2+3 and a human computing 2+3, and one also runs
 into the Blockhead "lookup table" argument against functionalism.

>>>
>>> Yes, those are perhaps problems with functionalism. But a major point in
>>> Chalmers' argument is that if qualia were substrate-specific (hence,
>>> functionalism false) it would be possible to make a partial zombie or an
>>> entity whose consciousness and behaviour diverged from the point the
>>> substitution was made. And this argument works not just by replacing the
>>> neurons with silicon chips, but by replacing any part of the human with
>>> anything that reproduces the interactions with the remaining parts.
>>>
>>
>>
>> How deeply do you have to go when you consider or define those 

Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Wed, 24 May 2023 at 21:56, Jason Resch  wrote:

>
>
> On Wed, May 24, 2023, 3:20 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Wed, 24 May 2023 at 15:37, Jason Resch  wrote:
>>
>>>
>>>
>>> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
>>> wrote:
>>>


 On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:

>
>
> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <
> stath...@gmail.com> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 21:09, Jason Resch 
>> wrote:
>>
>>> As I see this thread, Terren and Stathis are both talking past each
>>> other. Please either of you correct me if i am wrong, but in an effort 
>>> to
>>> clarify and perhaps resolve this situation:
>>>
>>> I believe Stathis is saying the functional substitution having the
>>> same fine-grained causal organization *would* have the same 
>>> phenomenology,
>>> the same experience, and the same qualia as the brain with the same
>>> fine-grained causal organization.
>>>
>>> Therefore, there is no disagreement between your positions with
>>> regards to symbols groundings, mappings, etc.
>>>
>>> When you both discuss the problem of symbology, or bits, etc. I
>>> believe this is partly responsible for why you are both talking past 
>>> each
>>> other, because there are many levels involved in brains (and 
>>> computational
>>> systems). I believe you were discussing completely different levels in 
>>> the
>>> hierarchical organization.
>>>
>>> There are high-level parts of minds, such as ideas, thoughts,
>>> feelings, quale, etc. and there are low-level, be they neurons,
>>> neurotransmitters, atoms, quantum fields, and laws of physics as in 
>>> human
>>> brains, or circuits, logic gates, bits, and instructions as in 
>>> computers.
>>>
>>> I think when Terren mentions a "symbol for the smell of
>>> grandmother's kitchen" (GMK) the trouble is we are crossing a myriad of
>>> levels. The quale or idea or memory of the smell of GMK is a very
>>> high-level feature of a mind. When Terren asks for or discusses a symbol
>>> for it, a complete answer/description for it can only be supplied in 
>>> terms
>>> of a vast amount of information concerning low level structures, be they
>>> patterns of neuron firings, or patterns of bits being processed. When we
>>> consider things down at this low level, however, we lose all context for
>>> what the meaning, idea, and quale are or where or how they come in. We
>>> cannot see or find the idea of GMK in any neuron, no more than we can 
>>> see
>>> or find it in any bit.
>>>
>>> Of course then it should seem deeply mysterious, if not impossible,
>>> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
>>> greater a leap from how we get "it" from a bunch of cells squirting ions
>>> back and forth. Trying to understand a smartphone by looking at the 
>>> flows
>>> of electrons is a similar kind of problem, it would seem just as 
>>> difficult
>>> or impossible to explain and understand the high-level features and
>>> complexity out of the low-level simplicity.
>>>
>>> This is why it's crucial to bear in mind and explicitly discuss the
>>> level one is operating on when one discusses symbols, substrates, or 
>>> quale.
>>> In summary, I think a chief reason you have been talking past each 
>>> other is
>>> because you are each operating on different assumed levels.
>>>
>>> Please correct me if you believe I am mistaken and know I only offer
>>> my perspective in the hope it might help the conversation.
>>>
>>
>> I think you’ve captured my position. But in addition I think
>> replicating the fine-grained causal organisation is not necessary in 
>> order
>> to replicate higher level phenomena such as GMK. By extension of 
>> Chalmers’
>> substitution experiment,
>>
>
> Note that Chalmers's argument is based on assuming the functional
> substitution occurs at a certain level of fine-grained-ness. If you lose
> this step, and look at only the top-most input-output of the mind as black
> box, then you can no longer distinguish a rock from a dreaming person, nor
> a calculator computing 2+3 and a human computing 2+3, and one also runs
> into the Blockhead "lookup table" argument against functionalism.
>

 Yes, those are perhaps problems with functionalism. But a major point
 in Chalmers' argument is that if qualia were substrate-specific (hence,
 functionalism false) it would be possible to make a partial zombie or an
 entity whose consciousness and behaviour diverged from the point the
 substitution was made. And this argument works not just by replacing the
 neurons with silicon chips, but by replacing 

Re: what chatGPT is and is not

2023-05-24 Thread Brent Meeker



On 5/24/2023 12:19 AM, Stathis Papaioannou wrote:



On Wed, 24 May 2023 at 15:37, Jason Resch  wrote:



On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou
 wrote:



On Wed, 24 May 2023 at 04:03, Jason Resch
 wrote:



On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou
 wrote:



On Tue, 23 May 2023 at 21:09, Jason Resch
 wrote:

As I see this thread, Terren and Stathis are both
talking past each other. Please either of you
correct me if i am wrong, but in an effort to
clarify and perhaps resolve this situation:

I believe Stathis is saying the functional
substitution having the same fine-grained causal
organization *would* have the same phenomenology,
the same experience, and the same qualia as the
brain with the same fine-grained causal organization.

Therefore, there is no disagreement between your
positions with regards to symbols groundings,
mappings, etc.

When you both discuss the problem of symbology, or
bits, etc. I believe this is partly responsible
for why you are both talking past each other,
because there are many levels involved in brains
(and computational systems). I believe you were
discussing completely different levels in the
hierarchical organization.

There are high-level parts of minds, such as
ideas, thoughts, feelings, quale, etc. and there
are low-level, be they neurons, neurotransmitters,
atoms, quantum fields, and laws of physics as in
human brains, or circuits, logic gates, bits, and
instructions as in computers.

I think when Terren mentions a "symbol for the
smell of grandmother's kitchen" (GMK) the trouble
is we are crossing a myriad of levels. The quale
or idea or memory of the smell of GMK is a very
high-level feature of a mind. When Terren asks for
or discusses a symbol for it, a complete
answer/description for it can only be supplied in
terms of a vast amount of information concerning
low level structures, be they patterns of neuron
firings, or patterns of bits being processed. When
we consider things down at this low level,
however, we lose all context for what the meaning,
idea, and quale are or where or how they come in.
We cannot see or find the idea of GMK in any
neuron, any more than we can see or find it in
any bit.

Of course then it should seem deeply mysterious,
if not impossible, how we get "it" (GMK or
otherwise) from "bit", but to me, this is no
greater a leap from how we get "it" from a bunch
of cells squirting ions back and forth. Trying to
understand a smartphone by looking at the flows of
electrons is a similar kind of problem, it would
seem just as difficult or impossible to explain
and understand the high-level features and
complexity out of the low-level simplicity.

This is why it's crucial to bear in mind and
explicitly discuss the level one is operating on
when one discusses symbols, substrates, or quale.
In summary, I think a chief reason you have been
talking past each other is because you are each
operating on different assumed levels.

Please correct me if you believe I am mistaken and
know I only offer my perspective in the hope it
might help the conversation.


I think you’ve captured my position. But in addition I
think replicating the fine-grained causal organisation
is not necessary in order to replicate higher level
phenomena such as GMK. By extension of Chalmers’
substitution experiment,


Note that Chalmers's argument is based on assuming the
functional substitution occurs at a certain level of
fine-grained-ness. If you lose this step, and look at only
the top-most input-output of the mind as black box, then
you can no longer distinguish a rock from a dreaming person, nor a
calculator computing 2+3 from a human computing 2+3, and one also
runs into the Blockhead "lookup table" argument against functionalism.

Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023, 5:35 AM John Clark  wrote:

>
> On Wed, May 24, 2023 at 1:37 AM Jason Resch  wrote:
>
> *> By substituting a recording of a computation for a computation, you
>> replace a conscious mind with a tape recording of the prior behavior of a
>> conscious mind. *
>
>
> But you'd still need a computation to find the particular tape recording
> that you need, and the larger your library of recordings the more complex
> the computation you'd need to do would be.
>
> *> This is what happens in the Blockhead thought experiment*
>
>
> And in that very silly thought experiment your library needs to contain
> every sentence that is syntactically and grammatically correct. And there
> are an astronomical number to an astronomical power of those. Even if every
> electron, proton, neutron, photon and neutrino in the observable universe
> could record 1000 million billion trillion sentences there would still be
> well over a googolplex number of sentences that remained unrecorded.
> Blockhead is just a slight variation on Searle's idiotic Chinese room.
>


It's very different.

Note that you don't need to realize or store every possible input for the
central point of Block's argument to work.

For example, let's say that AlphaZero is conscious for the purposes of this
argument. We record the response AlphaZero produces to each of the 361
possible opening moves on a Go board and store the results in a lookup table.
This table would be only a few kilobytes. Then we can ask: what has happened
to the consciousness of AlphaZero? Here we have a functionally equivalent
response to every possible opening move, but we've done away with all the
complexity of the prior computation.
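
To make the lookup-table idea concrete, here is a minimal Python sketch of the
sort of table being described. The names and the stand-in reply function are
invented for illustration; this is not any real AlphaZero interface:

def alphazero_reply(opening_move: str) -> str:
    """Stand-in for the real engine's expensive network-guided search."""
    return "D4"  # placeholder reply; imagine thousands of network evaluations here

# The 361 legal opening moves on a 19x19 board (Go coordinates skip the letter I).
COLUMNS = "ABCDEFGHJKLMNOPQRST"
OPENING_MOVES = [f"{col}{row}" for col in COLUMNS for row in range(1, 20)]

# Precompute and store every reply once: a table of only a few kilobytes.
REPLY_TABLE = {move: alphazero_reply(move) for move in OPENING_MOVES}

def tabled_reply(opening_move: str) -> str:
    """Behaviourally equivalent for the second move, with no computation left inside."""
    return REPLY_TABLE[opening_move]

print(len(REPLY_TABLE))      # 361
print(tabled_reply("Q16"))   # same answer the engine would have given

For the second move, the table is behaviourally indistinguishable from the
engine while containing none of its computation.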

What the substitution level argument really asks is how far up in the
subroutines of a mind's program we can implement memoization
(https://en.m.wikipedia.org/wiki/Memoization) before the result is some kind
of altered consciousness, or at least some diminished contribution to the
measure of a conscious experience (under duplicationist conceptions of
measure).
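
As a rough sketch of what memoizing one subroutine of a mind's program could
look like (purely illustrative; the edge_detect routine is made up), Python's
standard functools.lru_cache does exactly this kind of replacement: the first
call computes, every later call with the same input replays the stored answer:

from functools import lru_cache

@lru_cache(maxsize=None)
def edge_detect(retina_patch: tuple) -> tuple:
    """A made-up low-level subroutine: computed once per distinct input,
    then served from a lookup table on every repeat."""
    return tuple(1 if abs(a - b) > 10 else 0
                 for a, b in zip(retina_patch, retina_patch[1:]))

patch = (0, 0, 50, 50, 0)
print(edge_detect(patch))        # computed the slow way
print(edge_detect(patch))        # identical output, replayed from the cache
print(edge_detect.cache_info())  # hits=1 confirms the second call was memoized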


Jason

>



Re: what chatGPT is and is not

2023-05-24 Thread Jason Resch
On Wed, May 24, 2023, 3:20 AM Stathis Papaioannou 
wrote:

>
>
> On Wed, 24 May 2023 at 15:37, Jason Resch  wrote:
>
>>
>>
>> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:
>>>


 On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou 
 wrote:

>
>
> On Tue, 23 May 2023 at 21:09, Jason Resch 
> wrote:
>
>> As I see this thread, Terren and Stathis are both talking past each
>> other. Please either of you correct me if I am wrong, but in an effort to
>> clarify and perhaps resolve this situation:
>>
>> I believe Stathis is saying the functional substitution having the
>> same fine-grained causal organization *would* have the same 
>> phenomenology,
>> the same experience, and the same qualia as the brain with the same
>> fine-grained causal organization.
>>
>> Therefore, there is no disagreement between your positions with
>> regards to symbols groundings, mappings, etc.
>>
>> When you both discuss the problem of symbology, or bits, etc. I
>> believe this is partly responsible for why you are both talking past each
>> other, because there are many levels involved in brains (and 
>> computational
>> systems). I believe you were discussing completely different levels in 
>> the
>> hierarchical organization.
>>
>> There are high-level parts of minds, such as ideas, thoughts,
>> feelings, quale, etc. and there are low-level, be they neurons,
>> neurotransmitters, atoms, quantum fields, and laws of physics as in human
>> brains, or circuits, logic gates, bits, and instructions as in computers.
>>
>> I think when Terren mentions a "symbol for the smell of grandmother's
>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
>> quale
>> or idea or memory of the smell of GMK is a very high-level feature of a
>> mind. When Terren asks for or discusses a symbol for it, a complete
>> answer/description for it can only be supplied in terms of a vast amount 
>> of
>> information concerning low level structures, be they patterns of neuron
>> firings, or patterns of bits being processed. When we consider things 
>> down
>> at this low level, however, we lose all context for what the meaning, 
>> idea,
>> and quale are or where or how they come in. We cannot see or find the 
>> idea
>> of GMK in any neuron, any more than we can see or find it in any bit.
>>
>> Of course then it should seem deeply mysterious, if not impossible,
>> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
>> greater a leap from how we get "it" from a bunch of cells squirting ions
>> back and forth. Trying to understand a smartphone by looking at the flows
>> of electrons is a similar kind of problem, it would seem just as 
>> difficult
>> or impossible to explain and understand the high-level features and
>> complexity out of the low-level simplicity.
>>
>> This is why it's crucial to bear in mind and explicitly discuss the
>> level one is operating on when one discusses symbols, substrates, or 
>> quale.
>> In summary, I think a chief reason you have been talking past each other 
>> is
>> because you are each operating on different assumed levels.
>>
>> Please correct me if you believe I am mistaken and know I only offer
>> my perspective in the hope it might help the conversation.
>>
>
> I think you’ve captured my position. But in addition I think
> replicating the fine-grained causal organisation is not necessary in order
> to replicate higher level phenomena such as GMK. By extension of Chalmers’
> substitution experiment,
>

 Note that Chalmers's argument is based on assuming the functional
 substitution occurs at a certain level of fine-grained-ness. If you lose
 this step, and look at only the top-most input-output of the mind as black
 box, then you can no longer distinguish a rock from a dreaming person, nor
 a calculator computing 2+3 and a human computing 2+3, and one also runs
 into the Blockhead "lookup table" argument against functionalism.

>>>
>>> Yes, those are perhaps problems with functionalism. But a major point in
>>> Chalmers' argument is that if qualia were substrate-specific (hence,
>>> functionalism false) it would be possible to make a partial zombie or an
>>> entity whose consciousness and behaviour diverged from the point the
>>> substitution was made. And this argument works not just by replacing the
>>> neurons with silicon chips, but by replacing any part of the human with
>>> anything that reproduces the interactions with the remaining parts.
>>>
>>
>>
>> How deeply do you have to go when you consider or define those "other
>> parts" though? That seems to be a critical but unstated assumption, and
>> something that depends on how finely grained you consider the
>> relevant/important parts of a brain to be.

Re: what chatGPT is and is not

2023-05-24 Thread John Clark
On Wed, May 24, 2023 at 1:37 AM Jason Resch  wrote:

*> By substituting a recording of a computation for a computation, you
> replace a conscious mind with a tape recording of the prior behavior of a
> conscious mind. *


But you'd still need a computation to find the particular tape recording
that you need, and the larger your library of recordings the more complex
the computation you'd need to do would be.

*> This is what happens in the Blockhead thought experiment*


And in that very silly thought experiment your library needs to contain
every sentence that is syntactically and grammatically correct. And there
are an astronomical number to an astronomical power of those. Even if every
electron, proton, neutron, photon and neutrino in the observable universe
could record 1000 million billion trillion sentences there would still be
well over a googolplex number of sentences that remained unrecorded.
Blockhead is just a slight variation on Searle's idiotic Chinese room.
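
A back-of-the-envelope version of that counting argument, with figures chosen
only for illustration (a 20,000-word vocabulary and sentences of at most 20
words), already gives roughly 10^86 word sequences, against the commonly cited
rough figure of 10^80 particles in the observable universe; even if only a tiny
fraction of those sequences were grammatical, the library still would not fit:

# Illustrative figures only: a 20,000-word vocabulary, sentences up to 20 words.
vocabulary_size = 20_000
max_sentence_length = 20

word_sequences = sum(vocabulary_size ** n for n in range(1, max_sentence_length + 1))
particles_in_observable_universe = 10 ** 80  # rough commonly cited figure

print(f"possible word sequences: about 10^{len(str(word_sequences)) - 1}")
print("more sequences than particles:",
      word_sequences > particles_in_observable_universe)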

John K Clark
See what's on my new list at Extropolis



Re: what chatGPT is and is not

2023-05-24 Thread Stathis Papaioannou
On Wed, 24 May 2023 at 15:37, Jason Resch  wrote:

>
>
> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou 
>>> wrote:
>>>


 On Tue, 23 May 2023 at 21:09, Jason Resch  wrote:

> As I see this thread, Terren and Stathis are both talking past each
> other. Please either of you correct me if I am wrong, but in an effort to
> clarify and perhaps resolve this situation:
>
> I believe Stathis is saying the functional substitution having the
> same fine-grained causal organization *would* have the same phenomenology,
> the same experience, and the same qualia as the brain with the same
> fine-grained causal organization.
>
> Therefore, there is no disagreement between your positions with
> regards to symbols groundings, mappings, etc.
>
> When you both discuss the problem of symbology, or bits, etc. I
> believe this is partly responsible for why you are both talking past each
> other, because there are many levels involved in brains (and computational
> systems). I believe you were discussing completely different levels in the
> hierarchical organization.
>
> There are high-level parts of minds, such as ideas, thoughts,
> feelings, quale, etc. and there are low-level, be they neurons,
> neurotransmitters, atoms, quantum fields, and laws of physics as in human
> brains, or circuits, logic gates, bits, and instructions as in computers.
>
> I think when Terren mentions a "symbol for the smell of grandmother's
> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
> quale
> or idea or memory of the smell of GMK is a very high-level feature of a
> mind. When Terren asks for or discusses a symbol for it, a complete
> answer/description for it can only be supplied in terms of a vast amount 
> of
> information concerning low level structures, be they patterns of neuron
> firings, or patterns of bits being processed. When we consider things down
> at this low level, however, we lose all context for what the meaning, 
> idea,
> and quale are or where or how they come in. We cannot see or find the idea
> of GMK in any neuron, any more than we can see or find it in any bit.
>
> Of course then it should seem deeply mysterious, if not impossible,
> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
> greater a leap from how we get "it" from a bunch of cells squirting ions
> back and forth. Trying to understand a smartphone by looking at the flows
> of electrons is a similar kind of problem, it would seem just as difficult
> or impossible to explain and understand the high-level features and
> complexity out of the low-level simplicity.
>
> This is why it's crucial to bear in mind and explicitly discuss the
> level one is operating on when one discusses symbols, substrates, or 
> quale.
> In summary, I think a chief reason you have been talking past each other 
> is
> because you are each operating on different assumed levels.
>
> Please correct me if you believe I am mistaken and know I only offer
> my perspective in the hope it might help the conversation.
>

 I think you’ve captured my position. But in addition I think
 replicating the fine-grained causal organisation is not necessary in order
 to replicate higher level phenomena such as GMK. By extension of Chalmers’
 substitution experiment,

>>>
>>> Note that Chalmers's argument is based on assuming the functional
>>> substitution occurs at a certain level of fine-grained-ness. If you lose
>>> this step, and look at only the top-most input-output of the mind as black
>>> box, then you can no longer distinguish a rock from a dreaming person, nor
>>> a calculator computing 2+3 and a human computing 2+3, and one also runs
>>> into the Blockhead "lookup table" argument against functionalism.
>>>
>>
>> Yes, those are perhaps problems with functionalism. But a major point in
>> Chalmers' argument is that if qualia were substrate-specific (hence,
>> functionalism false) it would be possible to make a partial zombie or an
>> entity whose consciousness and behaviour diverged from the point the
>> substitution was made. And this argument works not just by replacing the
>> neurons with silicon chips, but by replacing any part of the human with
>> anything that reproduces the interactions with the remaining parts.
>>
>
>
> How deeply do you have to go when you consider or define those "other
> parts" though? That seems to be a critical but unstated assumption, and
> something that depends on how finely grained you consider the
> relevant/important parts of a brain to be.
>
> For reference, this is what Chalmers says:
>
>
> "In this paper I defend this view. Specifically, I defend a principle of
> organizational invariance, holding that experience is invariant across
> systems with the same fine-grained functional organization."

Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou 
wrote:

>
>
> On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 21:09, Jason Resch  wrote:
>>>
 As I see this thread, Terren and Stathis are both talking past each
 other. Please either of you correct me if I am wrong, but in an effort to
 clarify and perhaps resolve this situation:

 I believe Stathis is saying the functional substitution having the same
 fine-grained causal organization *would* have the same phenomenology, the
 same experience, and the same qualia as the brain with the same
 fine-grained causal organization.

 Therefore, there is no disagreement between your positions with regards
 to symbols groundings, mappings, etc.

 When you both discuss the problem of symbology, or bits, etc. I believe
 this is partly responsible for why you are both talking past each other,
 because there are many levels involved in brains (and computational
 systems). I believe you were discussing completely different levels in the
 hierarchical organization.

 There are high-level parts of minds, such as ideas, thoughts, feelings,
 quale, etc. and there are low-level, be they neurons, neurotransmitters,
 atoms, quantum fields, and laws of physics as in human brains, or circuits,
 logic gates, bits, and instructions as in computers.

 I think when Terren mentions a "symbol for the smell of grandmother's
 kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
 or idea or memory of the smell of GMK is a very high-level feature of a
 mind. When Terren asks for or discusses a symbol for it, a complete
 answer/description for it can only be supplied in terms of a vast amount of
 information concerning low level structures, be they patterns of neuron
 firings, or patterns of bits being processed. When we consider things down
 at this low level, however, we lose all context for what the meaning, idea,
 and quale are or where or how they come in. We cannot see or find the idea
 of GMK in any neuron, any more than we can see or find it in any bit.

 Of course then it should seem deeply mysterious, if not impossible, how
 we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
 leap from how we get "it" from a bunch of cells squirting ions back and
 forth. Trying to understand a smartphone by looking at the flows of
 electrons is a similar kind of problem, it would seem just as difficult or
 impossible to explain and understand the high-level features and complexity
 out of the low-level simplicity.

 This is why it's crucial to bear in mind and explicitly discuss the
 level one is operating on when one discusses symbols, substrates, or quale.
 In summary, I think a chief reason you have been talking past each other is
 because you are each operating on different assumed levels.

 Please correct me if you believe I am mistaken and know I only offer my
 perspective in the hope it might help the conversation.

>>>
>>> I think you’ve captured my position. But in addition I think replicating
>>> the fine-grained causal organisation is not necessary in order to replicate
>>> higher level phenomena such as GMK. By extension of Chalmers’ substitution
>>> experiment,
>>>
>>
>> Note that Chalmers's argument is based on assuming the functional
>> substitution occurs at a certain level of fine-grained-ness. If you lose
>> this step, and look at only the top-most input-output of the mind as black
>> box, then you can no longer distinguish a rock from a dreaming person, nor
>> a calculator computing 2+3 and a human computing 2+3, and one also runs
>> into the Blockhead "lookup table" argument against functionalism.
>>
>
> Yes, those are perhaps problems with functionalism. But a major point in
> Chalmers' argument is that if qualia were substrate-specific (hence,
> functionalism false) it would be possible to make a partial zombie or an
> entity whose consciousness and behaviour diverged from the point the
> substitution was made. And this argument works not just by replacing the
> neurons with silicon chips, but by replacing any part of the human with
> anything that reproduces the interactions with the remaining parts.
>


How deeply do you have to go when you consider or define those "other
parts" though? That seems to be a critical but unstated assumption, and
something that depends on how finely grained you consider the
relevant/important parts of a brain to be.

For reference, this is what Chalmers says:


"In this paper I defend this view. Specifically, I defend a principle of
organizational invariance, holding that experience is invariant across
systems with the same fine-grained functional organization. More precisely,
the 

Re: what chatGPT is and is not

2023-05-23 Thread Stathis Papaioannou
On Wed, 24 May 2023 at 04:03, Jason Resch  wrote:

>
>
> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 21:09, Jason Resch  wrote:
>>
>>> As I see this thread, Terren and Stathis are both talking past each
>>> other. Please either of you correct me if I am wrong, but in an effort to
>>> clarify and perhaps resolve this situation:
>>>
>>> I believe Stathis is saying the functional substitution having the same
>>> fine-grained causal organization *would* have the same phenomenology, the
>>> same experience, and the same qualia as the brain with the same
>>> fine-grained causal organization.
>>>
>>> Therefore, there is no disagreement between your positions with regards
>>> to symbols groundings, mappings, etc.
>>>
>>> When you both discuss the problem of symbology, or bits, etc. I believe
>>> this is partly responsible for why you are both talking past each other,
>>> because there are many levels involved in brains (and computational
>>> systems). I believe you were discussing completely different levels in the
>>> hierarchical organization.
>>>
>>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>>> quale, etc. and there are low-level, be they neurons, neurotransmitters,
>>> atoms, quantum fields, and laws of physics as in human brains, or circuits,
>>> logic gates, bits, and instructions as in computers.
>>>
>>> I think when Terren mentions a "symbol for the smell of grandmother's
>>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>>> or idea or memory of the smell of GMK is a very high-level feature of a
>>> mind. When Terren asks for or discusses a symbol for it, a complete
>>> answer/description for it can only be supplied in terms of a vast amount of
>>> information concerning low level structures, be they patterns of neuron
>>> firings, or patterns of bits being processed. When we consider things down
>>> at this low level, however, we lose all context for what the meaning, idea,
>>> and quale are or where or how they come in. We cannot see or find the idea
>>> of GMK in any neuron, any more than we can see or find it in any bit.
>>>
>>> Of course then it should seem deeply mysterious, if not impossible, how
>>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>>> leap from how we get "it" from a bunch of cells squirting ions back and
>>> forth. Trying to understand a smartphone by looking at the flows of
>>> electrons is a similar kind of problem, it would seem just as difficult or
>>> impossible to explain and understand the high-level features and complexity
>>> out of the low-level simplicity.
>>>
>>> This is why it's crucial to bear in mind and explicitly discuss the
>>> level one is operating on when one discusses symbols, substrates, or quale.
>>> In summary, I think a chief reason you have been talking past each other is
>>> because you are each operating on different assumed levels.
>>>
>>> Please correct me if you believe I am mistaken and know I only offer my
>>> perspective in the hope it might help the conversation.
>>>
>>
>> I think you’ve captured my position. But in addition I think replicating
>> the fine-grained causal organisation is not necessary in order to replicate
>> higher level phenomena such as GMK. By extension of Chalmers’ substitution
>> experiment,
>>
>
> Note that Chalmers's argument is based on assuming the functional
> substitution occurs at a certain level of fine-grained-ness. If you lose
> this step, and look at only the top-most input-output of the mind as black
> box, then you can no longer distinguish a rock from a dreaming person, nor
> a calculator computing 2+3 and a human computing 2+3, and one also runs
> into the Blockhead "lookup table" argument against functionalism.
>

Yes, those are perhaps problems with functionalism. But a major point in
Chalmers' argument is that if qualia were substrate-specific (hence,
functionalism false) it would be possible to make a partial zombie or an
entity whose consciousness and behaviour diverged from the point the
substitution was made. And this argument works not just by replacing the
neurons with silicon chips, but by replacing any part of the human with
anything that reproduces the interactions with the remaining parts.


> Accordingly, I think intermediate steps and the fine-grained organization
> are important (to some minimum level of fidelity) but as Bruno would say,
> we can never be certain what this necessary substitution level is. Is it
> neocortical columns, is it the connectome, is it the proteome, is it the
> molecules and atoms, is it QFT? Chalmers argues that at least at the level
> where noise introduces deviations in a brain simulation, simulating lower
> levels should not be necessary, as human consciousness appears robust to
> such noise at low levels (photon strikes, brownian motion, quantum
> uncertainties, etc.)
>

-- 
Stathis Papaioannou


Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
On Tue, May 23, 2023, 4:14 PM Terren Suydam  wrote:

>
>
> On Tue, May 23, 2023 at 2:27 PM Jason Resch  wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 1:15 PM Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 11:08 AM Dylan Distasio 
>>> wrote:
>>>
>>>
 And yes, I'm arguing that a true simulation (let's say for the sake of
 a thought experiment we were able to replicate every neural connection of a
 human being in code, including the connectomes, and neurotransmitters,
 along with a simulated nerve that was connected to a button on the desk we
 could press which would simulate the signal sent when a biological pain
 receptor is triggered) would feel pain that is just as real as the pain you
 and I feel as biological organisms.

>>>
>>> This follows from the physicalist no-zombies-possible stance. But it
>>> still runs into the hard problem, basically. How does stuff give rise to
>>> experience.
>>>
>>>
>> I would say stuff doesn't give rise to conscious experience. Conscious
>> experience is the logically necessary and required state of knowledge that
>> is present in any consciousness-necessitating behaviors. If you design a
>> simple robot with a camera and robot arm that is able to reliably catch a
>> ball thrown in its general direction, then something in that system *must*
>> contain knowledge of the ball's relative position and trajectory. It simply
>> isn't logically possible to have a system that behaves in all situations as
>> if it knows where the ball is, without knowing where the ball is.
>> Consciousness is simply the state of being with knowledge.
>>
>> Con- "Latin for with"
>> -Scious- "Latin for knowledge"
>> -ness "English suffix meaning the state of being X"
>>
>> Consciousness -> The state of being with knowledge.
>>
>> There is an infinite variety of potential states and levels of knowledge,
>> and this contributes to much of the confusion, but boiled down to the
>> simplest essence of what is or isn't conscious, it is all about knowledge
>> states. Knowledge states require activity/reactivity to the presence of
>> information, and counterfactual behaviors (if/then, greater than less than,
>> discriminations and comparisons that lead to different downstream
>> consequences in a system's behavior). At least, this is my theory of
>> consciousness.
>>
>> Jason
>>
>
> This still runs into the valence problem though. Why does some "knowledge"
> correspond with a positive *feeling* and other knowledge with a negative
> feeling?
>

That is a great question. Though I'm not sure it's fundamentally insoluble
within model where every conscious state is a particular state of knowledge.

I would propose that having positive and negative experiences, i.e. pain or
pleasure, requires knowledge states with a certain minium degree of
sophistication. For example, knowing:

Pain being associated with knowledge states such as: "I don't like this,
this is bad, I'm in pain, I want to change my situation."

Pleasure being associated with knowledge states such as: "This is good for
me, I could use more of this, I don't want this to end.'

Such knowledge states require a degree of reflexive awareness, to have a
notion of a self where some outcomes may be either positive or negative to
that self, and perhaps some notion of time or a sufficient agency to be
able to change one's situation.

Some have argued that plants can't feel pain because there's little they
can do to change their situation (though I'm agnostic on this).

> I'm not talking about the functional accounts of positive and negative
> experiences. I'm talking about phenomenology. The functional aspect of it
> is not irrelevant, but to focus *only* on that is to sweep the feeling
> under the rug. So many dialogs on this topic basically terminate here,
> where it's just a clash of belief about the relative importance of
> consciousness and phenomenology as the mediator of all experience and
> knowledge.
>

You raise important questions which no complete theory of consciousness
should ignore. I think one reason things break down here is because there's
such incredible complexity behind and underlying the states of
consciousness we humans perceive and no easy way to communicate all the
salient properties of those experiences.

Jason



Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
On Tue, May 23, 2023, 3:50 PM Terren Suydam  wrote:

>
>
> On Tue, May 23, 2023 at 1:46 PM Jason Resch  wrote:
>
>>
>>
>> On Tue, May 23, 2023, 9:34 AM Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 7:09 AM Jason Resch 
>>> wrote:
>>>
 As I see this thread, Terren and Stathis are both talking past each
 other. Please either of you correct me if I am wrong, but in an effort to
 clarify and perhaps resolve this situation:

 I believe Stathis is saying the functional substitution having the same
 fine-grained causal organization *would* have the same phenomenology, the
 same experience, and the same qualia as the brain with the same
 fine-grained causal organization.

 Therefore, there is no disagreement between your positions with regards
 to symbols groundings, mappings, etc.

 When you both discuss the problem of symbology, or bits, etc. I believe
 this is partly responsible for why you are both talking past each other,
 because there are many levels involved in brains (and computational
 systems). I believe you were discussing completely different levels in the
 hierarchical organization.

 There are high-level parts of minds, such as ideas, thoughts, feelings,
 quale, etc. and there are low-level, be they neurons, neurotransmitters,
 atoms, quantum fields, and laws of physics as in human brains, or circuits,
 logic gates, bits, and instructions as in computers.

 I think when Terren mentions a "symbol for the smell of grandmother's
 kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
 or idea or memory of the smell of GMK is a very high-level feature of a
 mind. When Terren asks for or discusses a symbol for it, a complete
 answer/description for it can only be supplied in terms of a vast amount of
 information concerning low level structures, be they patterns of neuron
 firings, or patterns of bits being processed. When we consider things down
 at this low level, however, we lose all context for what the meaning, idea,
 and quale are or where or how they come in. We cannot see or find the idea
 of GMK in any neuron, any more than we can see or find it in any bit.

 Of course then it should seem deeply mysterious, if not impossible, how
 we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
 leap from how we get "it" from a bunch of cells squirting ions back and
 forth. Trying to understand a smartphone by looking at the flows of
 electrons is a similar kind of problem, it would seem just as difficult or
 impossible to explain and understand the high-level features and complexity
 out of the low-level simplicity.

 This is why it's crucial to bear in mind and explicitly discuss the
 level one is operating on when one discusses symbols, substrates, or quale.
 In summary, I think a chief reason you have been talking past each other is
 because you are each operating on different assumed levels.

 Please correct me if you believe I am mistaken and know I only offer my
 perspective in the hope it might help the conversation.

>>>
>>> I appreciate the callout, but it is necessary to talk at both the micro
>>> and the macro for this discussion. We're talking about symbol grounding. I
>>> should make it clear that I don't believe symbols can be grounded in other
>>> symbols (i.e. symbols all the way down as Stathis put it), that leads to
>>> infinite regress and the illusion of meaning.  Symbols ultimately must
>>> stand for something. The only thing they can stand *for*, ultimately,
>>> is something that cannot be communicated by other symbols: conscious
>>> experience. There is no concept in our brains that is not ultimately
>>> connected to something we've seen, heard, felt, smelled, or tasted.
>>>
>>
>> I agree everything you have experienced is rooted in consciousness.
>>
>> But at the low level, that only thing your brain senses are neural
>> signals (symbols, on/off, ones and zeros).
>>
>> In your arguments you rely on the high-level conscious states of human
>> brains to establish that they have grounding, but then use the low-level
>> descriptions of machines to deny their own consciousness, and hence deny
>> they can ground their processing to anything.
>>
>> If you remained in the space of low-level descriptions for both brains
>> and machine intelligences, however, you would see each struggles to make a
>> connection to what may exist at the high-level. You would see, the lack of
>> any apparent grounding in what are just neurons firing or not firing at
>> certain times. Just as a wire in a circuit either carries or doesn't carry
>> a charge.
>>
>
> Ah, I see your point now. That's valid, thanks for raising it and let me
> clarify.
>

I appreciate that thank you.


> Bringing this back to LLMs, it's clear to me that LLMs do not have
> 

Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
If I had confidence that my answers to your questions would be met with
anything but a "defend/destroy" mentality I'd go there with you. It's gotta
be fun for me, and you're not someone I enjoy getting into it with. Not
trying to be insulting, but it's the truth.

On Tue, May 23, 2023 at 4:33 PM John Clark  wrote:

> On Tue, May 23, 2023  Terren Suydam  wrote:
>
> *> reality is fundamentally consciousness. *
>
>
> Then why does a simple physical molecule like N₂O stop consciousness
> temporarily and another simple physical molecule like CN⁻ do so permanently?
>
>
>> *> Why does some "knowledge" correspond with a positive feeling and other
>> knowledge with a negative feeling?*
>
>
> Because sometimes new knowledge requires you to re-organize hundreds of
> other important concepts you already had in your brain and that could be
> difficult and depending on circumstances may endanger or benefit your
> mental health.
>
> John K Clark
> See what's on my new list at Extropolis
> 



Re: what chatGPT is and is not

2023-05-23 Thread John Clark
On Tue, May 23, 2023 at 4:30 PM Terren Suydam 
wrote:

>> If I could instantly stop all physical processes that are going on
>> inside your head for one year and then start them up again, to an
>> outside objective observer you would appear to lose consciousness for one
>> year, but to you your consciousness would still feel continuous but the
>> outside world would appear to have discontinuously jumped to something
>> new.
>>
>
> *> I meant continuous in terms of the flow of state from one moment to the
> next. What you're describing is continuous because it's not the passage of
> time that needs to be continuous, but the state of information in the model
> as the physical processes evolve*.
>

Sorry but it's not at all clear to me what you're talking about. If the
state of information is not evolving in time then what in the world is it
evolving in?!  If nothing changes then nothing can evolve, and the very
definition of time stopping is that nothing changes and nothing evolves.

  John K Clark
  See what's on my new list at Extropolis



Re: what chatGPT is and is not

2023-05-23 Thread John Clark
On Tue, May 23, 2023  Terren Suydam  wrote:

*> reality is fundamentally consciousness. *


Then why does a simple physical molecule like N₂O stop consciousness
temporarily and another simple physical molecule like CN⁻ do so permanently?


> *> Why does some "knowledge" correspond with a positive feeling and other
> knowledge with a negative feeling?*


Because sometimes new knowledge requires you to re-organize hundreds of
other important concepts you already had in your brain and that could be
difficult and depending on circumstances may endanger or benefit your
mental health.

John K Clark
See what's on my new list at Extropolis



Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 4:17 PM John Clark  wrote:

> On Tue, May 23, 2023 at 3:50 PM Terren Suydam 
> wrote:
>
>
>> * > in my view, consciousness entails a continuous flow of experience.*
>>
>
> If I could instantly stop all physical processes that are going on inside
> your head for one year and then start them up again, to an outside
> objective observer you would appear to lose consciousness for one year, but
> to you your consciousness would still feel continuous but the outside world
> would appear to have discontinuously jumped to something new.
>

I meant continuous in terms of the flow of state from one moment to the
next. What you're describing *is* continuous because it's not the passage
of time that needs to be continuous, but the state of information in the
model as the physical processes evolve. And my understanding is that in an
LLM, each new query starts from the same state... it does not evolve in
time.
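
A toy sketch of that point about state (the respond function below is a
deliberately crude stand-in, not any real LLM API): if the model is a fixed
function of the prompt it is handed, nothing carries over between queries
unless the caller feeds the earlier exchange back in:

def respond(prompt: str) -> str:
    """Stand-in for a frozen language model: the weights never change between
    calls, so the output depends only on the prompt text passed in."""
    return f"reply-{sum(map(ord, prompt)) % 1000}"

# Two separate queries start from exactly the same state: identical outputs,
# no memory of the earlier call.
print(respond("What is my name?"))
print(respond("What is my name?"))

# Any appearance of an ongoing conversation comes from re-sending the transcript.
transcript = ("User: My name is Ada.\n"
              "Assistant: Nice to meet you, Ada.\n"
              "User: What is my name?")
print(respond(transcript))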

Terren


>
> John K Clark
> See what's on my new list at Extropolis
> 



Re: what chatGPT is and is not

2023-05-23 Thread John Clark
On Tue, May 23, 2023 at 3:50 PM Terren Suydam 
wrote:


> * > in my view, consciousness entails a continuous flow of experience.*
>

If I could instantly stop all physical processes that are going on inside
your head for one year and then start them up again, to an outside
objective observer you would appear to lose consciousness for one year, but
to you your consciousness would still feel continuous but the outside world
would appear to have discontinuously jumped to something new.

John K Clark
See what's on my new list at Extropolis



Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 2:27 PM Jason Resch  wrote:

>
>
> On Tue, May 23, 2023 at 1:15 PM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 11:08 AM Dylan Distasio 
>> wrote:
>>
>>
>>> And yes, I'm arguing that a true simulation (let's say for the sake of a
>>> thought experiment we were able to replicate every neural connection of a
>>> human being in code, including the connectomes, and neurotransmitters,
>>> along with a simulated nerve that was connected to a button on the desk we
>>> could press which would simulate the signal sent when a biological pain
>>> receptor is triggered) would feel pain that is just as real as the pain you
>>> and I feel as biological organisms.
>>>
>>
>> This follows from the physicalist no-zombies-possible stance. But it
>> still runs into the hard problem, basically. How does stuff give rise to
>> experience.
>>
>>
> I would say stuff doesn't give rise to conscious experience. Conscious
> experience is the logically necessary and required state of knowledge that
> is present in any consciousness-necessitating behaviors. If you design a
> simple robot with a camera and robot arm that is able to reliably catch a
> ball thrown in its general direction, then something in that system *must*
> contain knowledge of the ball's relative position and trajectory. It simply
> isn't logically possible to have a system that behaves in all situations as
> if it knows where the ball is, without knowing where the ball is.
> Consciousness is simply the state of being with knowledge.
>
> Con- "Latin for with"
> -Scious- "Latin for knowledge"
> -ness "English suffix meaning the state of being X"
>
> Consciousness -> The state of being with knowledge.
>
> There is an infinite variety of potential states and levels of knowledge,
> and this contributes to much of the confusion, but boiled down to the
> simplest essence of what is or isn't conscious, it is all about knowledge
> states. Knowledge states require activity/reactivity to the presence of
> information, and counterfactual behaviors (if/then, greater than less than,
> discriminations and comparisons that lead to different downstream
> consequences in a system's behavior). At least, this is my theory of
> consciousness.
>
> Jason
>

This still runs into the valence problem though. Why does some "knowledge"
correspond with a positive *feeling* and other knowledge with a negative
feeling?  I'm not talking about the functional accounts of positive and
negative experiences. I'm talking about phenomenology. The functional
aspect of it is not irrelevant, but to focus *only* on that is to sweep the
feeling under the rug. So many dialogs on this topic basically terminate
here, where it's just a clash of belief about the relative importance of
consciousness and phenomenology as the mediator of all experience and
knowledge.

Terren





Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 2:05 PM Jesse Mazer  wrote:

>
>
> On Tue, May 23, 2023 at 9:34 AM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 7:09 AM Jason Resch  wrote:
>>
>>> As I see this thread, Terren and Stathis are both talking past each
>>> other. Please either of you correct me if I am wrong, but in an effort to
>>> clarify and perhaps resolve this situation:
>>>
>>> I believe Stathis is saying the functional substitution having the same
>>> fine-grained causal organization *would* have the same phenomenology, the
>>> same experience, and the same qualia as the brain with the same
>>> fine-grained causal organization.
>>>
>>> Therefore, there is no disagreement between your positions with regards
>>> to symbols groundings, mappings, etc.
>>>
>>> When you both discuss the problem of symbology, or bits, etc. I believe
>>> this is partly responsible for why you are both talking past each other,
>>> because there are many levels involved in brains (and computational
>>> systems). I believe you were discussing completely different levels in the
>>> hierarchical organization.
>>>
>>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>>> quale, etc. and there are low-level, be they neurons, neurotransmitters,
>>> atoms, quantum fields, and laws of physics as in human brains, or circuits,
>>> logic gates, bits, and instructions as in computers.
>>>
>>> I think when Terren mentions a "symbol for the smell of grandmother's
>>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>>> or idea or memory of the smell of GMK is a very high-level feature of a
>>> mind. When Terren asks for or discusses a symbol for it, a complete
>>> answer/description for it can only be supplied in terms of a vast amount of
>>> information concerning low level structures, be they patterns of neuron
>>> firings, or patterns of bits being processed. When we consider things down
>>> at this low level, however, we lose all context for what the meaning, idea,
>>> and quale are or where or how they come in. We cannot see or find the idea
>>> of GMK in any neuron, any more than we can see or find it in any bit.
>>>
>>> Of course then it should seem deeply mysterious, if not impossible, how
>>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>>> leap from how we get "it" from a bunch of cells squirting ions back and
>>> forth. Trying to understand a smartphone by looking at the flows of
>>> electrons is a similar kind of problem, it would seem just as difficult or
>>> impossible to explain and understand the high-level features and complexity
>>> out of the low-level simplicity.
>>>
>>> This is why it's crucial to bear in mind and explicitly discuss the
>>> level one is operating on when one discusses symbols, substrates, or quale.
>>> In summary, I think a chief reason you have been talking past each other is
>>> because you are each operating on different assumed levels.
>>>
>>> Please correct me if you believe I am mistaken and know I only offer my
>>> perspective in the hope it might help the conversation.
>>>
>>
>> I appreciate the callout, but it is necessary to talk at both the micro
>> and the macro for this discussion. We're talking about symbol grounding. I
>> should make it clear that I don't believe symbols can be grounded in other
>> symbols (i.e. symbols all the way down as Stathis put it), that leads to
>> infinite regress and the illusion of meaning.  Symbols ultimately must
>> stand for something. The only thing they can stand *for*, ultimately, is
>> something that cannot be communicated by other symbols: conscious
>> experience. There is no concept in our brains that is not ultimately
>> connected to something we've seen, heard, felt, smelled, or tasted.
>>
>> In my experience with conversations like this, you usually have people on
>> one side who take consciousness seriously as the only thing that is
>> actually undeniable, and you have people who'd rather not talk about it,
>> hand-wave it away, or outright deny it. That's the talking-past that
>> usually happens, and that's what's happening here.
>>
>> Terren
>>
>
> But are you talking specifically about symbols with high-level meaning
> like the words humans use in ordinary language, which large language models
> like ChatGPT are trained on? Or are you talking more generally about any
> kinds of symbols, including something like the 1s and 0s in a giant
> computer that was performing an extremely detailed simulation of a physical
> world, perhaps down to the level of particle physics, where that simulation
> could include things like detailed physical simulations of things in
> external environment (a flower, say) and components of a simulated
> biological organism with a nervous system (with particle-level simulations
> of neurons etc.)? Would you say that even in the case of the detailed
> physics simulation, nothing in there could ever give rise to conscious
> experience like 

Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 1:46 PM Jason Resch  wrote:

>
>
> On Tue, May 23, 2023, 9:34 AM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 7:09 AM Jason Resch  wrote:
>>
>>> As I see this thread, Terren and Stathis are both talking past each
>>> other. Please either of you correct me if I am wrong, but in an effort to
>>> clarify and perhaps resolve this situation:
>>>
>>> I believe Stathis is saying the functional substitution having the same
>>> fine-grained causal organization *would* have the same phenomenology, the
>>> same experience, and the same qualia as the brain with the same
>>> fine-grained causal organization.
>>>
>>> Therefore, there is no disagreement between your positions with regards
>>> to symbols groundings, mappings, etc.
>>>
>>> When you both discuss the problem of symbology, or bits, etc. I believe
>>> this is partly responsible for why you are both talking past each other,
>>> because there are many levels involved in brains (and computational
>>> systems). I believe you were discussing completely different levels in the
>>> hierarchical organization.
>>>
>>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>>> quale, etc. and there are low-level, be they neurons, neurotransmitters,
>>> atoms, quantum fields, and laws of physics as in human brains, or circuits,
>>> logic gates, bits, and instructions as in computers.
>>>
>>> I think when Terren mentions a "symbol for the smell of grandmother's
>>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>>> or idea or memory of the smell of GMK is a very high-level feature of a
>>> mind. When Terren asks for or discusses a symbol for it, a complete
>>> answer/description for it can only be supplied in terms of a vast amount of
>>> information concerning low level structures, be they patterns of neuron
>>> firings, or patterns of bits being processed. When we consider things down
>>> at this low level, however, we lose all context for what the meaning, idea,
>>> and quale are or where or how they come in. We cannot see or find the idea
>>> of GMK in any neuron, any more than we can see or find it in any bit.
>>>
>>> Of course then it should seem deeply mysterious, if not impossible, how
>>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>>> leap from how we get "it" from a bunch of cells squirting ions back and
>>> forth. Trying to understand a smartphone by looking at the flows of
>>> electrons is a similar kind of problem, it would seem just as difficult or
>>> impossible to explain and understand the high-level features and complexity
>>> out of the low-level simplicity.
>>>
>>> This is why it's crucial to bear in mind and explicitly discuss the
>>> level one is operating on when one discusses symbols, substrates, or quale.
>>> In summary, I think a chief reason you have been talking past each other is
>>> because you are each operating on different assumed levels.
>>>
>>> Please correct me if you believe I am mistaken and know I only offer my
>>> perspective in the hope it might help the conversation.
>>>
>>
>> I appreciate the callout, but it is necessary to talk at both the micro
>> and the macro for this discussion. We're talking about symbol grounding. I
>> should make it clear that I don't believe symbols can be grounded in other
>> symbols (i.e. symbols all the way down as Stathis put it), that leads to
>> infinite regress and the illusion of meaning.  Symbols ultimately must
>> stand for something. The only thing they can stand *for*, ultimately, is
>> something that cannot be communicated by other symbols: conscious
>> experience. There is no concept in our brains that is not ultimately
>> connected to something we've seen, heard, felt, smelled, or tasted.
>>
>
> I agree everything you have experienced is rooted in consciousness.
>
> But at the low level, that only thing your brain senses are neural signals
> (symbols, on/off, ones and zeros).
>
> In your arguments you rely on the high-level conscious states of human
> brains to establish that they have grounding, but then use the low-level
> descriptions of machines to deny their own consciousness, and hence deny
> they can ground their processing to anything.
>
> If you remained in the space of low-level descriptions for both brains and
> machine intelligences, however, you would see each struggles to make a
> connection to what may exist at the high-level. You would see, the lack of
> any apparent grounding in what are just neurons firing or not firing at
> certain times. Just as a wire in a circuit either carries or doesn't carry
> a charge.
>

Ah, I see your point now. That's valid, thanks for raising it and let me
clarify.

Bringing this back to LLMs, it's clear to me that LLMs do not have
phenomenal experience, but you're right to insist that I explain why I
think so. I don't know if this amounts to a theory of consciousness, but
the reason I believe that LLMs are not conscious is 

Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
On Tue, May 23, 2023 at 1:15 PM Terren Suydam 
wrote:

>
>
> On Tue, May 23, 2023 at 11:08 AM Dylan Distasio 
> wrote:
>
>
>> And yes, I'm arguing that a true simulation (let's say for the sake of a
>> thought experiment we were able to replicate every neural connection of a
>> human being in code, including the connectomes, and neurotransmitters,
>> along with a simulated nerve that was connected to a button on the desk we
>> could press which would simulate the signal sent when a biological pain
>> receptor is triggered) would feel pain that is just as real as the pain you
>> and I feel as biological organisms.
>>
>
> This follows from the physicalist no-zombies-possible stance. But it still
> runs into the hard problem, basically. How does stuff give rise to
> experience.
>
>
I would say stuff doesn't give rise to conscious experience. Conscious
experience is the logically necessary and required state of knowledge that
is present in any consciousness-necessitating behaviors. If you design a
simple robot with a camera and robot arm that is able to reliably catch a
ball thrown in its general direction, then something in that system *must*
contain knowledge of the ball's relative position and trajectory. It simply
isn't logically possible to have a system that behaves in all situations as
if it knows where the ball is, without knowing where the ball is.
Consciousness is simply the state of being with knowledge.

Con- "Latin for with"
-Scious- "Latin for knowledge"
-ness "English suffix meaning the state of being X"

Consciousness -> The state of being with knowledge.

There is an infinite variety of potential states and levels of knowledge,
and this contributes to much of the confusion, but boiled down to the
simplest essence of what is or isn't conscious, it is all about knowledge
states. Knowledge states require activity/reactivity to the presence of
information, and counterfactual behaviors (if/then, greater-than/less-than,
discriminations and comparisons that lead to different downstream
consequences in a system's behavior). At least, this is my theory of
consciousness.

Jason



Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
On Tue, May 23, 2023 at 1:12 PM John Clark  wrote:

> On Tue, May 23, 2023  Terren Suydam  wrote:
>
> *> What was the biochemical or neural change that suddenly birthed the
>> feeling of pain? *
>
>
> It would not be difficult to make a circuit such that whenever a
> specific binary sequence of zeros and ones is in a register the circuit
> stops doing everything else and changes that sequence to something else
> as fast as possible. As I've said before, intelligence is hard but
> emotion is easy.
>

I believe I have made simple neural networks that are conscious and can
experience both pleasure and displeasure, insofar as they have evolved to
learn and apply multiple and various strategies for both attraction and
avoidance behaviors. They can achieve this even with just 16
artificial neurons and within only a dozen generations of simulated
evolution:

https://github.com/jasonkresch/bots

https://www.youtube.com/watch?v=InBsqlWQTts&list=PLq_mdJjNRPT11IF4NFyLcIWJ1C0Z3hTAX&index=2

I am of course interested in hearing any arguments for why these bots are,
or are not, capable of some primitive sensation.
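
For the flavour of it without opening the repo, here is a toy sketch in the same spirit (my own simplification, not the code behind the links above): tiny weight vectors, mutated and selected over a dozen generations, until they steer toward a "food" signal and away from a "poison" signal.

import random

def behaviour(weights, food_dx, poison_dx):
    # One linear unit per input; the sign of the drive picks the movement.
    drive = weights[0] * food_dx + weights[1] * poison_dx
    return 1 if drive > 0 else -1  # move right or left

def fitness(weights, trials=200):
    score = 0.0
    for _ in range(trials):
        food_dx = random.uniform(-1, 1)    # positive: food lies to the right
        poison_dx = random.uniform(-1, 1)  # positive: poison lies to the right
        move = behaviour(weights, food_dx, poison_dx)
        # Reward moving toward food, punish moving toward poison.
        score += move * food_dx - move * poison_dx
    return score

population = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(20)]
for generation in range(12):  # "a dozen generations"
    population.sort(key=fitness, reverse=True)
    parents = population[:5]
    population = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                  for _ in range(20)]

best = max(population, key=fitness)
print(best)  # typically ends up with a positive "food" weight and a negative "poison" weight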

Jason



Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 11:08 AM Dylan Distasio  wrote:

> Let me start out by saying I don't believe in zombies.   We are
> biophysical systems with a long history of building on and repurposing
> earlier systems of genes and associated proteins.   I saw you don't believe
> it is symbols all the way down.   I agree with you, but I am arguing the
> beginning of that chain of symbols for many things begins with sensory
> input and ends with a higher level symbol/abstraction, particularly in
> fully conscious animals like human beings that are self aware and capable
> of an inner dialogue.
>

Of course, and hopefully I was clear that I meant symbols are *ultimately*
grounded in phenomenal experience, but yes, there are surely many layers to
get there depending on the concept.

>
> An earlier example I gave of someone born blind results in someone with no
> concept of red or any color for that matter, or images, and so on.   I
> don't believe redness is hiding in some molecule in the brain like Brent
> does.   It's only created via pruned neural networks based on someone who
> has sensory inputs working properly.   That's the beginning of the chain of
> symbols, but it starts with an electrical impulse sent down nerves from a
> sensory organ.
>

I agree, the idea of a certain kind of physical molecule transducing a quale
*by virtue of the properties of that physical molecule* is super
problematic. Like, if redness were inherent to glutamate, what about all
the other colors?  And sounds? And smells?  And textures?  Just how many
molecules would we need to represent the vast pantheon of possible qualia?


> It's the same thing with pain.  If a certain gene is screwed up related to
> a subset of sodium channels (which are critical for proper transmission of
> signals propagating along certain nerves), a human being is incapable of
> feeling pain.   I'd argue they don't know what pain is, just like a
> congenital blind person doesn't know what red is.   It's the same thing
> with hearing and music.   If a brain is missing that initial sensory input,
> your consciousness does not have the ability to feel the related subjective
> sensation.
>

No problem there.


> And yes, I'm arguing that a true simulation (let's say for the sake of a
> thought experiment we were able to replicate every neural connection of a
> human being in code, including the connectomes, and neurotransmitters,
> along with a simulated nerve that was connected to a button on the desk we
> could press which would simulate the signal sent when a biological pain
> receptor is triggered) would feel pain that is just as real as the pain you
> and I feel as biological organisms.
>

This follows from the physicalist no-zombies-possible stance. But it still
runs into the hard problem, basically. How does stuff give rise to
experience.


> You asked me for the principle behind how a critter could start having a
> negative feeling that didn't exist in its progenitors.   Again, I believe
> the answer is as simple as it happened when pain receptors evolved that may
> have started as a random mutation where the behavior they induced in lower
> organisms resulted in increased survival.
>

Before you said that you don't believe redness is hiding in a molecule. But
here, you're saying pain is hiding in a pain receptor, which is nothing
more or less than a protein molecule.


>   I'm not claiming to have solved the hard problem of consciousness.   I
> don't claim to have the answer for why pain subjectively feels the way it
> does, or why pleasure does, but I do know that reward systems that evolved
> much earlier are involved (like dopamine based ones), and that pleasure can
> be directly triggered via various recreational drugs.   That doesn't mean I
> think the dopamine molecule is where the pleasure qualia is hiding.
>

> Even lower forms of life like bacteria move towards what their limited
> sensory systems tell them is a reward and away from what it tells them is a
> danger.   I believe our subjective experiences are layered onto these much
> earlier evolutionary artifacts, although as eukaryotes I am not claiming
> that much of this is inherited from LUCA.   I think it blossomed once
> predator/prey dynamics were possible in the Cambrian explosion and was
> built on from there over many many years.
>

Bacteria can move towards or away from certain stimuli, but it doesn't
follow that they feel pain or pleasure as they do so. That is using
functionalism to sweep the hard problem under the rug.

Terren


> Getting slightly off topic, I don't think substrate likely matters as far
> as producing consciousness.   The only possible way I could see that it
> would is if quantum effects are actually involved in generating it that we
> can't reasonably replicate.   That said, I think Penrose and others do not
> have the odds on their side there for a number of reasons.
>
> Like I said though, I don't believe in zombies.
>
> On Tue, May 23, 2023 at 9:12 AM Terren 

Re: what chatGPT is and is not

2023-05-23 Thread John Clark
On Tue, May 23, 2023  Terren Suydam  wrote:

*> What was the biochemical or neural change that suddenly birthed the
> feeling of pain? *


It would not be difficult to make a circuit such that whenever a
specific binary sequence of zeros and ones is in a register the circuit
stops doing everything else and changes that sequence to something else as
fast as possible. As I've said before, intelligence is hard but emotion is
easy.
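
A rough software analogue of such a circuit might look like this (a sketch only, assuming a made-up 4-bit "pain" pattern, not anyone's actual design):

PAIN_PATTERN = 0b1011  # the "specific binary sequence" being watched for

def normal_work():
    pass  # stands in for whatever the system would otherwise be doing

def step(register: int, work) -> int:
    if register == PAIN_PATTERN:
        # Drop everything else and change that sequence to something else
        # as fast as possible.
        return register ^ 0b1111
    work()
    return register

reg = 0b1011
reg = step(reg, normal_work)  # pattern present: work is preempted, bits flipped
reg = step(reg, normal_work)  # pattern gone: normal work resumes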

*> I don't believe symbols can be grounded in other symbols*
>

But it would be easy to ground symbols with examples, such as the symbol
"2" with the number of shoes most people wear and the number of arms most
people have, and the symbol "greenness" with the thing that leaves and
emeralds and Harry Potter's eyes have in common.


* > There is no concept in our brains that is not ultimately connected to
> something we've seen, heard, felt, smelled, or tasted.*
>

And that's why examples are important but definitions are not.

John K Clark    See what's on my new list at Extropolis




Re: what chatGPT is and is not

2023-05-23 Thread Jesse Mazer
On Tue, May 23, 2023 at 9:34 AM Terren Suydam 
wrote:

>
>
> On Tue, May 23, 2023 at 7:09 AM Jason Resch  wrote:
>
>> As I see this thread, Terren and Stathis are both talking past each
>> other. Please either of you correct me if i am wrong, but in an effort to
>> clarify and perhaps resolve this situation:
>>
>> I believe Stathis is saying the functional substitution having the same
>> fine-grained causal organization *would* have the same phenomenology, the
>> same experience, and the same qualia as the brain with the same
>> fine-grained causal organization.
>>
>> Therefore, there is no disagreement between your positions with regards
>> to symbols groundings, mappings, etc.
>>
>> When you both discuss the problem of symbology, or bits, etc. I believe
>> this is partly responsible for why you are both talking past each other,
>> because there are many levels involved in brains (and computational
>> systems). I believe you were discussing completely different levels in the
>> hierarchical organization.
>>
>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>> quale, etc. and there are low-level, be they neurons, neurotransmitters,
>> atoms, quantum fields, and laws of physics as in human brains, or circuits,
>> logic gates, bits, and instructions as in computers.
>>
>> I think when Terren mentions a "symbol for the smell of grandmother's
>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>> or idea or memory of the smell of GMK is a very high-level feature of a
>> mind. When Terren asks for or discusses a symbol for it, a complete
>> answer/description for it can only be supplied in terms of a vast amount of
>> information concerning low level structures, be they patterns of neuron
>> firings, or patterns of bits being processed. When we consider things down
>> at this low level, however, we lose all context for what the meaning, idea,
>> and quale are or where or how they come in. We cannot see or find the idea
>> of GMK in any neuron, no more than we can see or find it in any bit.
>>
>> Of course then it should seem deeply mysterious, if not impossible, how
>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>> leap from how we get "it" from a bunch of cells squirting ions back and
>> forth. Trying to understand a smartphone by looking at the flows of
>> electrons is a similar kind of problem, it would seem just as difficult or
>> impossible to explain and understand the high-level features and complexity
>> out of the low-level simplicity.
>>
>> This is why it's crucial to bear in mind and explicitly discuss the level
>> one is operating on when one discusses symbols, substrates, or quale. In
>> summary, I think a chief reason you have been talking past each other is
>> because you are each operating on different assumed levels.
>>
>> Please correct me if you believe I am mistaken and know I only offer my
>> perspective in the hope it might help the conversation.
>>
>
> I appreciate the callout, but it is necessary to talk at both the micro
> and the macro for this discussion. We're talking about symbol grounding. I
> should make it clear that I don't believe symbols can be grounded in other
> symbols (i.e. symbols all the way down as Stathis put it), that leads to
> infinite regress and the illusion of meaning.  Symbols ultimately must
> stand for something. The only thing they can stand *for*, ultimately, is
> something that cannot be communicated by other symbols: conscious
> experience. There is no concept in our brains that is not ultimately
> connected to something we've seen, heard, felt, smelled, or tasted.
>
> In my experience with conversations like this, you usually have people on
> one side who take consciousness seriously as the only thing that is
> actually undeniable, and you have people who'd rather not talk about it,
> hand-wave it away, or outright deny it. That's the talking-past that
> usually happens, and that's what's happening here.
>
> Terren
>

But are you talking specifically about symbols with high-level meaning like
the words humans use in ordinary language, which large language models like
ChatGPT are trained on? Or are you talking more generally about any kinds
of symbols, including something like the 1s and 0s in a giant computer that
was performing an extremely detailed simulation of a physical world,
perhaps down to the level of particle physics, where that simulation could
include things like detailed physical simulations of things in external
environment (a flower, say) and components of a simulated biological
organism with a nervous system (with particle-level simulations of neurons
etc.)? Would you say that even in the case of the detailed physics
simulation, nothing in there could ever give rise to conscious experience
like our own?

Jesse





>
>
>>
>> Jason
>>
>> On Tue, May 23, 2023, 2:47 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 15:58, Terren Suydam 

Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou 
wrote:

>
>
> On Tue, 23 May 2023 at 21:09, Jason Resch  wrote:
>
>> As I see this thread, Terren and Stathis are both talking past each
>> other. Please either of you correct me if i am wrong, but in an effort to
>> clarify and perhaps resolve this situation:
>>
>> I believe Stathis is saying the functional substitution having the same
>> fine-grained causal organization *would* have the same phenomenology, the
>> same experience, and the same qualia as the brain with the same
>> fine-grained causal organization.
>>
>> Therefore, there is no disagreement between your positions with regards
>> to symbols groundings, mappings, etc.
>>
>> When you both discuss the problem of symbology, or bits, etc. I believe
>> this is partly responsible for why you are both talking past each other,
>> because there are many levels involved in brains (and computational
>> systems). I believe you were discussing completely different levels in the
>> hierarchical organization.
>>
>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>> quale, etc. and there are low-level, be they neurons, neurotransmitters,
>> atoms, quantum fields, and laws of physics as in human brains, or circuits,
>> logic gates, bits, and instructions as in computers.
>>
>> I think when Terren mentions a "symbol for the smell of grandmother's
>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>> or idea or memory of the smell of GMK is a very high-level feature of a
>> mind. When Terren asks for or discusses a symbol for it, a complete
>> answer/description for it can only be supplied in terms of a vast amount of
>> information concerning low level structures, be they patterns of neuron
>> firings, or patterns of bits being processed. When we consider things down
>> at this low level, however, we lose all context for what the meaning, idea,
>> and quale are or where or how they come in. We cannot see or find the idea
>> of GMK in any neuron, no more than we can see or find it in any bit.
>>
>> Of course then it should seem deeply mysterious, if not impossible, how
>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>> leap from how we get "it" from a bunch of cells squirting ions back and
>> forth. Trying to understand a smartphone by looking at the flows of
>> electrons is a similar kind of problem, it would seem just as difficult or
>> impossible to explain and understand the high-level features and complexity
>> out of the low-level simplicity.
>>
>> This is why it's crucial to bear in mind and explicitly discuss the level
>> one is operating on when one discusses symbols, substrates, or quale. In
>> summary, I think a chief reason you have been talking past each other is
>> because you are each operating on different assumed levels.
>>
>> Please correct me if you believe I am mistaken and know I only offer my
>> perspective in the hope it might help the conversation.
>>
>
> I think you’ve captured my position. But in addition I think replicating
> the fine-grained causal organisation is not necessary in order to replicate
> higher level phenomena such as GMK. By extension of Chalmers’ substitution
> experiment,
>

Note that Chalmers's argument is based on assuming the functional
substitution occurs at a certain level of fine-grained-ness. If you lose
this step, and look at only the top-most input-output of the mind as a black
box, then you can no longer distinguish a rock from a dreaming person, nor
a calculator computing 2+3 from a human computing 2+3, and one also runs
into the Blockhead "lookup table" argument against functionalism.
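
A toy illustration of why the level matters (my own sketch, in the spirit of the Blockhead point rather than a quotation of it): judged purely by their top-level input-output, the two adders below are indistinguishable, yet their internal organization is completely different.

# Precomputed table: all the "thinking" happened before run time.
LOOKUP = {(a, b): a + b for a in range(10) for b in range(10)}

def add_by_lookup(a: int, b: int) -> int:
    return LOOKUP[(a, b)]

def add_by_counting(a: int, b: int) -> int:
    total = a
    for _ in range(b):  # genuine step-by-step computation at run time
        total += 1
    return total

assert add_by_lookup(2, 3) == add_by_counting(2, 3) == 5

A functionalism that only inspects the outermost mapping cannot tell these apart; one pitched at a finer grain can.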

Accordingly, I think intermediate steps and the fine-grained organization
are important (to some minimum level of fidelity) but as Bruno would say,
we can never be certain what this necessary substitution level is. Is it
neocortical columns, is it the connectome, is it the proteome, is it the
molecules and atoms, is it QFT? Chalmers argues that at least at the level
where noise introduces deviations in a brain simulation, simulating lower
levels should not be necessary, as human consciousness appears robust to
such noise at low levels (photon strikes, Brownian motion, quantum
uncertainties, etc.)


> replicating the behaviour of the human through any means, such as training
> an AI not only on language but also movement, would also preserve
> consciousness, even though it does not simulate any physiological
> processes. Another way to say this is that it is not possible to make a
> philosophical zombie.
>

I agree zombies are impossible. I think they are even logically impossible.

Jason

Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
On Tue, May 23, 2023, 9:34 AM Terren Suydam  wrote:

>
>
> On Tue, May 23, 2023 at 7:09 AM Jason Resch  wrote:
>
>> As I see this thread, Terren and Stathis are both talking past each
>> other. Please either of you correct me if i am wrong, but in an effort to
>> clarify and perhaps resolve this situation:
>>
>> I believe Stathis is saying the functional substitution having the same
>> fine-grained causal organization *would* have the same phenomenology, the
>> same experience, and the same qualia as the brain with the same
>> fine-grained causal organization.
>>
>> Therefore, there is no disagreement between your positions with regards
>> to symbols groundings, mappings, etc.
>>
>> When you both discuss the problem of symbology, or bits, etc. I believe
>> this is partly responsible for why you are both talking past each other,
>> because there are many levels involved in brains (and computational
>> systems). I believe you were discussing completely different levels in the
>> hierarchical organization.
>>
>> There are high-level parts of minds, such as ideas, thoughts, feelings,
>> quale, etc. and there are low-level, be they neurons, neurotransmitters,
>> atoms, quantum fields, and laws of physics as in human brains, or circuits,
>> logic gates, bits, and instructions as in computers.
>>
>> I think when Terren mentions a "symbol for the smell of grandmother's
>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
>> or idea or memory of the smell of GMK is a very high-level feature of a
>> mind. When Terren asks for or discusses a symbol for it, a complete
>> answer/description for it can only be supplied in terms of a vast amount of
>> information concerning low level structures, be they patterns of neuron
>> firings, or patterns of bits being processed. When we consider things down
>> at this low level, however, we lose all context for what the meaning, idea,
>> and quale are or where or how they come in. We cannot see or find the idea
>> of GMK in any neuron, no more than we can see or find it in any bit.
>>
>> Of course then it should seem deeply mysterious, if not impossible, how
>> we get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
>> leap from how we get "it" from a bunch of cells squirting ions back and
>> forth. Trying to understand a smartphone by looking at the flows of
>> electrons is a similar kind of problem, it would seem just as difficult or
>> impossible to explain and understand the high-level features and complexity
>> out of the low-level simplicity.
>>
>> This is why it's crucial to bear in mind and explicitly discuss the level
>> one is operating on when one discusses symbols, substrates, or quale. In
>> summary, I think a chief reason you have been talking past each other is
>> because you are each operating on different assumed levels.
>>
>> Please correct me if you believe I am mistaken and know I only offer my
>> perspective in the hope it might help the conversation.
>>
>
> I appreciate the callout, but it is necessary to talk at both the micro
> and the macro for this discussion. We're talking about symbol grounding. I
> should make it clear that I don't believe symbols can be grounded in other
> symbols (i.e. symbols all the way down as Stathis put it), that leads to
> infinite regress and the illusion of meaning.  Symbols ultimately must
> stand for something. The only thing they can stand *for*, ultimately, is
> something that cannot be communicated by other symbols: conscious
> experience. There is no concept in our brains that is not ultimately
> connected to something we've seen, heard, felt, smelled, or tasted.
>

I agree everything you have experienced is rooted in consciousness.

But at the low level, the only thing your brain senses is neural signals
(symbols, on/off, ones and zeros).

In your arguments you rely on the high-level conscious states of human
brains to establish that they have grounding, but then use the low-level
descriptions of machines to deny their own consciousness, and hence deny
they can ground their processing to anything.

If you remained in the space of low-level descriptions for both brains and
machine intelligences, however, you would see each struggles to make a
connection to what may exist at the high level. You would see the lack of
any apparent grounding in what are just neurons firing or not firing at
certain times. Just as a wire in a circuit either carries or doesn't carry
a charge.

Conversely, if you stay in the high-level realm of consciousness ideas,
well then you must face the problem of other minds. You know you are
conscious, but you cannot prove or disprove the conscious of others, at
least not with first defining a theory of consciousness and explaining why
some minds satisfy the definition of not. Until you present a theory of
consciousness then this conversation is, I am afraid, doomed to continue in
this circle forever.

This same conversation and outcome played out 

Re: what chatGPT is and is not

2023-05-23 Thread Dylan Distasio
Let me start out by saying I don't believe in zombies.   We are biophysical
systems with a long history of building on and repurposing earlier systems
of genes and associated proteins.   I saw you don't believe it is symbols
all the way down.   I agree with you, but I am arguing the beginning of
that chain of symbols for many things begins with sensory input and ends
with a higher level symbol/abstraction, particularly in fully conscious
animals like human beings that are self aware and capable of an inner
dialogue.

An earlier example I gave of someone born blind results in someone with no
concept of red or any color for that matter, or images, and so on.   I
don't believe redness is hiding in some molecule in the brain like Brent
does.   It's only created via pruned neural networks based on someone who
has sensory inputs working properly.   That's the beginning of the chain of
symbols, but it starts with an electrical impulse sent down nerves from a
sensory organ.

It's the same thing with pain.  If a certain gene is screwed up related to
a subset of sodium channels (which are critical for proper transmission of
signals propagating along certain nerves), a human being is incapable of
feeling pain.   I'd argue they don't know what pain is, just like a
congenital blind person doesn't know what red is.   It's the same thing
with hearing and music.   If a brain is missing that initial sensory input,
your consciousness does not have the ability to feel the related subjective
sensation.

And yes, I'm arguing that a true simulation (let's say for the sake of a
thought experiment we were able to replicate every neural connection of a
human being in code, including the connectomes, and neurotransmitters,
along with a simulated nerve that was connected to a button on the desk we
could press which would simulate the signal sent when a biological pain
receptor is triggered) would feel pain that is just as real as the pain you
and I feel as biological organisms.

You asked me for the principle behind how a critter could start having a
negative feeling that didn't exist in its progenitors.   Again, I believe
the answer is as simple as it happened when pain receptors evolved that may
have started as a random mutation where the behavior they induced in lower
organisms resulted in increased survival. I'm not claiming to have
solved the hard problem of consciousness.   I don't claim to have the
answer for why pain subjectively feels the way it does, or why pleasure
does, but I do know that reward systems that evolved much earlier are
involved (like dopamine based ones), and that pleasure can be directly
triggered via various recreational drugs.   That doesn't mean I think the
dopamine molecule is where the pleasure qualia is hiding.

Even lower forms of life like bacteria move towards what their limited
sensory systems tell them is a reward and away from what it tells them is a
danger.   I believe our subjective experiences are layered onto these much
earlier evolutionary artifacts, although as eukaryotes I am not claiming
that much of this is inherited from LUCA.   I think it blossomed once
predator/prey dynamics were possible in the Cambrian explosion and was
built on from there over many many years.
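
That kind of gradient-following is easy to caricature in a few lines (a sketch of my own, loosely in the spirit of run-and-tumble chemotaxis, with no claim that anything in it feels anything):

import math
import random

def field(x: float) -> float:
    # Attractant centred at +10, repellent centred at -10.
    return math.exp(-(x - 10) ** 2 / 50.0) - math.exp(-(x + 10) ** 2 / 50.0)

x, heading, last = 0.0, 1, None
for _ in range(500):
    now = field(x)
    if last is not None and now < last:
        heading = random.choice([-1, 1])  # things got worse: tumble
    last = now
    x += heading * 0.3                    # otherwise keep running

print(round(x, 1))  # typically ends up near the attractant, far from the repellent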

Getting slightly off topic, I don't think substrate likely matters as far
as producing consciousness.   The only possible way I could see that it
would is if quantum effects are actually involved in generating it that we
can't reasonably replicate.   That said, I think Penrose and others do not
have the odds on their side there for a number of reasons.

Like I said though, I don't believe in zombies.

On Tue, May 23, 2023 at 9:12 AM Terren Suydam 
wrote:

>
>
> On Tue, May 23, 2023 at 2:25 AM Dylan Distasio 
> wrote:
>
>> While we may not know everything about explaining it, pain doesn't seem
>> to be that much of a mystery to me, and I don't consider it a symbol per
>> se.   It seems obvious to me anyways that pain arose out of a very early
>> neural circuit as a survival mechanism.
>>
>
> But how?  What was the biochemical or neural change that suddenly birthed
> the feeling of pain?  I'm not asking you to know the details, just the
> principle - by what principle can a critter that comes into being with some
> modification of its organization start having a negative feeling when it
> didn't exist in its progenitors?  This doesn't seem mysterious to you?
>
> Very early neural circuits are relatively easy to simulate, and I'm
> guessing some team has done this for the level of organization you're
> talking about. What you're saying, if I'm reading you correctly, is that
> that simulation feels pain. If so, how do you get that feeling of pain out
> of code?
>
> Terren
>
>
>
>> Pain is the feeling you experience when pain receptors detect an area of
>> the body is being damaged.   It is ultimately based on a sensory input that
>> transmits to the brain via nerves where it is translated into a sensation
>> that tells you 

Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 9:15 AM John Clark  wrote:

> On Mon, May 22, 2023 at 5:56 PM Terren Suydam 
> wrote:
>
> *> Many, myself included, are captivated by the amazing capabilities of
>> chatGPT and other LLMs. They are, truly, incredible. Depending on your
>> definition of Turing Test, it passes with flying colors in many, many
>> contexts. It would take a much stricter Turing Test than we might have
>> imagined this time last year,*
>>
>
> The trouble with having a much tougher Turing Test  is that although it
> would correctly conclude that it was talking to a computer it would also
> incorrectly conclude that it was talking with a computer when in reality it
> was talking to a human being who had an IQ of 200. Yes GPT can occasionally
> do something that is very stupid, but if you had not also at one time or
> another in your life done something that is very stupid then you are a VERY
> remarkable person.
>
> > One way to improve chatGPT's performance on an actual Turing Test would
>> be to slow it down, because it is too fast to be human.
>>
>
> It would be easy to make GPT dumber, but what would that prove ? We could
> also mass-produce Olympic gold medals so everybody on earth could get one,
> but what would be the point?
>
>>
> *> All that said, is chatGPT actually intelligent?*
>>
>
> Obviously.
>
>
>> * > There's no question that it behaves in a way that we would all agree
>> is intelligent. The answers it gives, and the speed it gives them in,
>> reflect an intelligence that often far exceeds most if not all humans. I
>> know some here say intelligence is as intelligence does. Full stop, *
>>
>
> All I'm saying is you should play fair, whatever test you decide to use
> to measure the intelligence of a human you should use exactly the same test
> on an AI. Full stop.
>
> > *But this is an oversimplified view! *
>>
>
> Maybe so, but it's the only view we're ever going to get so we're just
> gonna have to make the best of it.  But I know there are some people who
> will continue to disagree with me about that until the day they die.
>
>  and so just five seconds before he was vaporized the last surviving
> human being turned to Mr. Jupiter Brain and said "*I still think I'm
> smarter than you*".
>
> *< If ChatGPT was trained on gibberish, that's what you'd get out of it.*
>
>
> And if you were trained on gibberish what sort of post do you imagine
> you'd be writing right now?
>
> * > the Chinese Room thought experiment proposed by John Searle.*
>>
>
> You mean the silliest thought experiment ever devised by the mind of man?
>
> *> ChatGPT, therefore, is more like a search engine*
>
>
> Oh for heaven sake, not that canard again!  I'm not young but since my
> early teens I've been hearing people say you only get out of a computer
> what you put in. I thought that was silly when I was 13 and I still do.
> John K Clark    See what's on my new list at Extropolis
> 
>

I'm just going to say up front that I'm not going to engage with you on
this particular topic, because I'm already well aware of your position,
that you do not take consciousness seriously, and that your mind won't be
changed on that. So anything we argue about will be about that fundamental
difference, and that's just not interesting or productive, not to mention
we've already had that pointless argument.

Terren


>



Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 7:09 AM Jason Resch  wrote:

> As I see this thread, Terren and Stathis are both talking past each other.
> Please either of you correct me if i am wrong, but in an effort to clarify
> and perhaps resolve this situation:
>
> I believe Stathis is saying the functional substitution having the same
> fine-grained causal organization *would* have the same phenomenology, the
> same experience, and the same qualia as the brain with the same
> fine-grained causal organization.
>
> Therefore, there is no disagreement between your positions with regards to
> symbols groundings, mappings, etc.
>
> When you both discuss the problem of symbology, or bits, etc. I believe
> this is partly responsible for why you are both talking past each other,
> because there are many levels involved in brains (and computational
> systems). I believe you were discussing completely different levels in the
> hierarchical organization.
>
> There are high-level parts of minds, such as ideas, thoughts, feelings,
> quale, etc. and there are low-level, be they neurons, neurotransmitters,
> atoms, quantum fields, and laws of physics as in human brains, or circuits,
> logic gates, bits, and instructions as in computers.
>
> I think when Terren mentions a "symbol for the smell of grandmother's
> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
> or idea or memory of the smell of GMK is a very high-level feature of a
> mind. When Terren asks for or discusses a symbol for it, a complete
> answer/description for it can only be supplied in terms of a vast amount of
> information concerning low level structures, be they patterns of neuron
> firings, or patterns of bits being processed. When we consider things down
> at this low level, however, we lose all context for what the meaning, idea,
> and quale are or where or how they come in. We cannot see or find the idea
> of GMK in any neuron, no more than we can see or find it in any bit.
>
> Of course then it should seem deeply mysterious, if not impossible, how we
> get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
> leap from how we get "it" from a bunch of cells squirting ions back and
> forth. Trying to understand a smartphone by looking at the flows of
> electrons is a similar kind of problem, it would seem just as difficult or
> impossible to explain and understand the high-level features and complexity
> out of the low-level simplicity.
>
> This is why it's crucial to bear in mind and explicitly discuss the level
> one is operating on when one discusses symbols, substrates, or quale. In
> summary, I think a chief reason you have been talking past each other is
> because you are each operating on different assumed levels.
>
> Please correct me if you believe I am mistaken and know I only offer my
> perspective in the hope it might help the conversation.
>

I appreciate the callout, but it is necessary to talk at both the micro and
the macro for this discussion. We're talking about symbol grounding. I
should make it clear that I don't believe symbols can be grounded in other
symbols (i.e. symbols all the way down as Stathis put it), that leads to
infinite regress and the illusion of meaning.  Symbols ultimately must
stand for something. The only thing they can stand *for*, ultimately, is
something that cannot be communicated by other symbols: conscious
experience. There is no concept in our brains that is not ultimately
connected to something we've seen, heard, felt, smelled, or tasted.

In my experience with conversations like this, you usually have people on
one side who take consciousness seriously as the only thing that is
actually undeniable, and you have people who'd rather not talk about it,
hand-wave it away, or outright deny it. That's the talking-past that
usually happens, and that's what's happening here.

Terren


>
> Jason
>
> On Tue, May 23, 2023, 2:47 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 15:58, Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 12:32 AM Stathis Papaioannou 
>>> wrote:
>>>


 On Tue, 23 May 2023 at 14:23, Terren Suydam 
 wrote:

>
>
> On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou <
> stath...@gmail.com> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 13:37, Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <
>>> stath...@gmail.com> wrote:
>>>


 On Tue, 23 May 2023 at 10:48, Terren Suydam <
 terren.suy...@gmail.com> wrote:

>
>
> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <
> stath...@gmail.com> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 10:03, Terren Suydam <
>> terren.suy...@gmail.com> wrote:
>>
>>>
>>> it is true that my brain has been trained on a large amount of
>>> data - 

Re: what chatGPT is and is not

2023-05-23 Thread John Clark
On Mon, May 22, 2023 at 5:56 PM Terren Suydam 
wrote:

*> Many, myself included, are captivated by the amazing capabilities of
> chatGPT and other LLMs. They are, truly, incredible. Depending on your
> definition of Turing Test, it passes with flying colors in many, many
> contexts. It would take a much stricter Turing Test than we might have
> imagined this time last year,*
>

The trouble with having a much tougher Turing Test  is that although it
would correctly conclude that it was talking to a computer it would also
incorrectly conclude that it was talking with a computer when in reality it
was talking to a human being who had an IQ of 200. Yes GPT can occasionally
do something that is very stupid, but if you had not also at one time or
another in your life done something that is very stupid then you are a VERY
remarkable person.

> One way to improve chatGPT's performance on an actual Turing Test would
> be to slow it down, because it is too fast to be human.
>

It would be easy to make GPT dumber, but what would that prove ? We could
also mass-produce Olympic gold medals so everybody on earth could get one,
but what would be the point?

>
*> All that said, is chatGPT actually intelligent?*
>

Obviously.


> * > There's no question that it behaves in a way that we would all agree
> is intelligent. The answers it gives, and the speed it gives them in,
> reflect an intelligence that often far exceeds most if not all humans. I
> know some here say intelligence is as intelligence does. Full stop, *
>

All I'm saying is you should play fair, whatever test you decide to use to
measure the intelligence of a human you should use exactly the same test on an
AI. Full stop.

> *But this is an oversimplified view! *
>

Maybe so, but it's the only view we're ever going to get so we're just
gonna have to make the best of it.  But I know there are some people who
will continue to disagree with me about that until the day they die.

 and so just five seconds before he was vaporized the last surviving
human being turned to Mr. Jupiter Brain and said "*I still think I'm
smarter than you*".

*< If ChatGPT was trained on gibberish, that's what you'd get out of it.*


And if you were trained on gibberish what sort of post do you imagine you'd
be writing right now?

* > the Chinese Room thought experiment proposed by John Searle.*
>

You mean the silliest thought experiment ever devised by the mind of man?

*> ChatGPT, therefore, is more like a search engine*


Oh for heaven sake, not that canard again!  I'm not young but since my
early teens I've been hearing people say you only get out of a computer
what you put in. I thought that was silly when I was 13 and I still do.
John K Clark    See what's on my new list at Extropolis





>
>



Re: what chatGPT is and is not

2023-05-23 Thread Terren Suydam
On Tue, May 23, 2023 at 2:25 AM Dylan Distasio  wrote:

> While we may not know everything about explaining it, pain doesn't seem to
> be that much of a mystery to me, and I don't consider it a symbol per se.
>  It seems obvious to me anyways that pain arose out of a very early neural
> circuit as a survival mechanism.
>

But how?  What was the biochemical or neural change that suddenly birthed
the feeling of pain?  I'm not asking you to know the details, just the
principle - by what principle can a critter that comes into being with some
modification of its organization start having a negative feeling when it
didn't exist in its progenitors?  This doesn't seem mysterious to you?

Very early neural circuits are relatively easy to simulate, and I'm
guessing some team has done this for the level of organization you're
talking about. What you're saying, if I'm reading you correctly, is that
that simulation feels pain. If so, how do you get that feeling of pain out
of code?

Terren



> Pain is the feeling you experience when pain receptors detect an area of
> the body is being damaged.   It is ultimately based on a sensory input that
> transmits to the brain via nerves where it is translated into a sensation
> that tells you to avoid whatever is causing the pain if possible, or lets
> you know you otherwise have a problem with your hardware.
>
> That said, I agree with you on LLMs for the most part, although I think
> they are showing some potentially emergent, interesting behaviors.
>
> On Tue, May 23, 2023 at 1:58 AM Terren Suydam 
> wrote:
>
>>
>> Take a migraine headache - if that's just a symbol, then why does that
>> symbol *feel* *bad* while others feel *good*?  Why does any symbol feel
>> like anything? If you say evolution did it, that doesn't actually answer
>> the question, because evolution doesn't do anything except select for
>> traits, roughly speaking. So it just pushes the question to: how did the
>> subjective feeling of pain or pleasure emerge from some genetic mutation,
>> when it wasn't there before?
>>
>>



Re: what chatGPT is and is not

2023-05-23 Thread Stathis Papaioannou
On Tue, 23 May 2023 at 21:09, Jason Resch  wrote:

> As I see this thread, Terren and Stathis are both talking past each other.
> Please either of you correct me if i am wrong, but in an effort to clarify
> and perhaps resolve this situation:
>
> I believe Stathis is saying the functional substitution having the same
> fine-grained causal organization *would* have the same phenomenology, the
> same experience, and the same qualia as the brain with the same
> fine-grained causal organization.
>
> Therefore, there is no disagreement between your positions with regards to
> symbols groundings, mappings, etc.
>
> When you both discuss the problem of symbology, or bits, etc. I believe
> this is partly responsible for why you are both talking past each other,
> because there are many levels involved in brains (and computational
> systems). I believe you were discussing completely different levels in the
> hierarchical organization.
>
> There are high-level parts of minds, such as ideas, thoughts, feelings,
> quale, etc. and there are low-level, be they neurons, neurotransmitters,
> atoms, quantum fields, and laws of physics as in human brains, or circuits,
> logic gates, bits, and instructions as in computers.
>
> I think when Terren mentions a "symbol for the smell of grandmother's
> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
> or idea or memory of the smell of GMK is a very high-level feature of a
> mind. When Terren asks for or discusses a symbol for it, a complete
> answer/description for it can only be supplied in terms of a vast amount of
> information concerning low level structures, be they patterns of neuron
> firings, or patterns of bits being processed. When we consider things down
> at this low level, however, we lose all context for what the meaning, idea,
> and quale are or where or how they come in. We cannot see or find the idea
> of GMK in any neuron, no more than we can see or find it in any bit.
>
> Of course then it should seem deeply mysterious, if not impossible, how we
> get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
> leap from how we get "it" from a bunch of cells squirting ions back and
> forth. Trying to understand a smartphone by looking at the flows of
> electrons is a similar kind of problem, it would seem just as difficult or
> impossible to explain and understand the high-level features and complexity
> out of the low-level simplicity.
>
> This is why it's crucial to bear in mind and explicitly discuss the level
> one is operating on when one discusses symbols, substrates, or quale. In
> summary, I think a chief reason you have been talking past each other is
> because you are each operating on different assumed levels.
>
> Please correct me if you believe I am mistaken and know I only offer my
> perspective in the hope it might help the conversation.
>

I think you’ve captured my position. But in addition I think replicating
the fine-grained causal organisation is not necessary in order to replicate
higher level phenomena such as GMK. By extension of Chalmers’ substitution
experiment, replicating the behaviour of the human through any means, such
as training an AI not only on language but also movement, would also
preserve consciousness, even though it does not simulate any physiological
processes. Another way to say this is that it is not possible to make a
philosophical zombie.

-- 
Stathis Papaioannou



Re: what chatGPT is and is not

2023-05-23 Thread Jason Resch
As I see this thread, Terren and Stathis are both talking past each other.
Please either of you correct me if i am wrong, but in an effort to clarify
and perhaps resolve this situation:

I believe Stathis is saying the functional substitution having the same
fine-grained causal organization *would* have the same phenomenology, the
same experience, and the same qualia as the brain with the same
fine-grained causal organization.

Therefore, there is no disagreement between your positions with regards to
symbols groundings, mappings, etc.

When you both discuss the problem of symbology, or bits, etc. I believe
this is partly responsible for why you are both talking past each other,
because there are many levels involved in brains (and computational
systems). I believe you were discussing completely different levels in the
hierarchical organization.

There are high-level parts of minds, such as ideas, thoughts, feelings,
quale, etc. and there are low-level, be they neurons, neurotransmitters,
atoms, quantum fields, and laws of physics as in human brains, or circuits,
logic gates, bits, and instructions as in computers.

I think when Terren mentions a "symbol for the smell of grandmother's
kitchen" (GMK) the trouble is we are crossing a myriad of levels. The quale
or idea or memory of the smell of GMK is a very high-level feature of a
mind. When Terren asks for or discusses a symbol for it, a complete
answer/description for it can only be supplied in terms of a vast amount of
information concerning low level structures, be they patterns of neuron
firings, or patterns of bits being processed. When we consider things down
at this low level, however, we lose all context for what the meaning, idea,
and quale are or where or how they come in. We cannot see or find the idea
of GMK in any neuron, no more than we can see or find it in any bit.

Of course then it should seem deeply mysterious, if not impossible, how we
get "it" (GMK or otherwise) from "bit", but to me, this is no greater a
leap from how we get "it" from a bunch of cells squirting ions back and
forth. Trying to understand a smartphone by looking at the flows of
electrons is a similar kind of problem, it would seem just as difficult or
impossible to explain and understand the high-level features and complexity
out of the low-level simplicity.

This is why it's crucial to bear in mind and explicitly discuss the level
one is operating on when one discusses symbols, substrates, or quale. In
summary, I think a chief reason you have been talking past each other is
because you are each operating on different assumed levels.

Please correct me if you believe I am mistaken and know I only offer my
perspective in the hope it might help the conversation.

Jason

On Tue, May 23, 2023, 2:47 AM Stathis Papaioannou 
wrote:

>
>
> On Tue, 23 May 2023 at 15:58, Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 12:32 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 14:23, Terren Suydam 
>>> wrote:
>>>


 On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou <
 stath...@gmail.com> wrote:

>
>
> On Tue, 23 May 2023 at 13:37, Terren Suydam 
> wrote:
>
>>
>>
>> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <
>> stath...@gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 10:48, Terren Suydam 
>>> wrote:
>>>


 On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <
 stath...@gmail.com> wrote:

>
>
> On Tue, 23 May 2023 at 10:03, Terren Suydam <
> terren.suy...@gmail.com> wrote:
>
>>
>> it is true that my brain has been trained on a large amount of
>> data - data that contains intelligence outside of my own. But when I
>> introspect, I notice that my understanding of things is ultimately
>> rooted/grounded in my phenomenal experience. Ultimately, everything 
>> we
>> know, we know either by our experience, or by analogy to experiences 
>> we've
>> had. This is in opposition to how LLMs train on data, which is 
>> strictly
>> about how words/symbols relate to one another.
>>
>
> The functionalist position is that phenomenal experience
> supervenes on behaviour, such that if the behaviour is replicated 
> (same
> output for same input) the phenomenal experience will also be 
> replicated.
> This is what philosophers like Searle (and many laypeople) can’t 
> stomach.
>

 I think the kind of phenomenal supervenience you're talking about
 is typically asserted for behavior at the level of the neuron, not the
 level of the whole agent. Is that what you're saying?  That chatGPT 
 must be
 having a phenomenal experience if it talks like a human?   If so, that 
 is

Re: what chatGPT is and is not

2023-05-23 Thread Stathis Papaioannou
On Tue, 23 May 2023 at 15:58, Terren Suydam  wrote:

>
>
> On Tue, May 23, 2023 at 12:32 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 14:23, Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou 
>>> wrote:
>>>


 On Tue, 23 May 2023 at 13:37, Terren Suydam 
 wrote:

>
>
> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <
> stath...@gmail.com> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 10:48, Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <
>>> stath...@gmail.com> wrote:
>>>


 On Tue, 23 May 2023 at 10:03, Terren Suydam <
 terren.suy...@gmail.com> wrote:

>
> it is true that my brain has been trained on a large amount of
> data - data that contains intelligence outside of my own. But when I
> introspect, I notice that my understanding of things is ultimately
> rooted/grounded in my phenomenal experience. Ultimately, everything we
> know, we know either by our experience, or by analogy to experiences 
> we've
> had. This is in opposition to how LLMs train on data, which is 
> strictly
> about how words/symbols relate to one another.
>

 The functionalist position is that phenomenal experience supervenes
 on behaviour, such that if the behaviour is replicated (same output for
 same input) the phenomenal experience will also be replicated. This is 
 what
 philosophers like Searle (and many laypeople) can’t stomach.

>>>
>>> I think the kind of phenomenal supervenience you're talking about is
>>> typically asserted for behavior at the level of the neuron, not the 
>>> level
>>> of the whole agent. Is that what you're saying?  That chatGPT must be
>>> having a phenomenal experience if it talks like a human?   If so, that 
>>> is
>>> stretching the explanatory domain of functionalism past its breaking 
>>> point.
>>>
>>
>> The best justification for functionalism is David Chalmers' "Fading
>> Qualia" argument. The paper considers replacing neurons with functionally
>> equivalent silicon chips, but it could be generalised to replacing any 
>> part
>> of the brain with a functionally equivalent black box, the whole brain, 
>> the
>> whole person.
>>
>
> You're saying that an algorithm that provably does not have
> experiences of rabbits and lollipops - but can still talk about them in a
> way that's indistinguishable from a human - essentially has the same
> phenomenology as a human talking about rabbits and lollipops. That's just
> absurd on its face. You're essentially hand-waving away the grounding
> problem. Is that your position? That symbols don't need to be grounded in
> any sort of phenomenal experience?
>

 It's not just talking about them in a way that is indistinguishable
 from a human, in order to have human-like consciousness the entire I/O
 behaviour of the human would need to be replicated. But in principle, I
 don't see why a LLM could not have some other type of phenomenal
 experience. And I don't think the grounding problem is a problem: I was
 never grounded in anything, I just grew up associating one symbol with
 another symbol, it's symbols all the way down.

>>>
>>> Is the smell of your grandmother's kitchen a symbol?
>>>
>>
>> Yes, I can't pull away the facade to check that there was a real
>> grandmother and a real kitchen against which I can check that the sense
>> data matches.
>>
>
> The grounding problem is about associating symbols with a phenomenal
> experience, or the memory of one - which is not the same thing as the
> functional equivalent or the neural correlate. It's the feeling, what it's
> like to experience the thing the symbol stands for. The experience of
> redness. The shock of plunging into cold water. The smell of coffee. etc.
>
> Take a migraine headache - if that's just a symbol, then why does that
> symbol *feel* *bad* while others feel *good*?  Why does any symbol feel
> like anything? If you say evolution did it, that doesn't actually answer
> the question, because evolution doesn't do anything except select for
> traits, roughly speaking. So it just pushes the question to: how did the
> subjective feeling of pain or pleasure emerge from some genetic mutation,
> when it wasn't there before?
>
> Without a functionalist explanation of the *origin* of aesthetic valence,
> I don't think you can "get it from bit".
>

That seems more like the hard problem of consciousness. There is no
solution to it.

-- 
Stathis Papaioannou


Re: what chatGPT is and is not

2023-05-23 Thread Dylan Distasio
While we may not know everything about explaining it, pain doesn't seem to
be that much of a mystery to me, and I don't consider it a symbol per se.
It seems obvious to me anyway that pain arose out of a very early neural
circuit as a survival mechanism. Pain is the feeling you experience when
pain receptors detect that an area of the body is being damaged. It is
ultimately based on a sensory input that is transmitted to the brain via
nerves, where it is translated into a sensation that tells you to avoid
whatever is causing the pain if possible, or lets you know you otherwise
have a problem with your hardware.
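
As a caricature of that loop, here is some purely illustrative pseudo-physiology
in Python (the thresholds and function names are invented, and it deliberately
says nothing about why the signal feels like anything):

def nociceptor(tissue_damage):
    # Fires in proportion to the damage it detects, clipped to [0, 1].
    return max(0.0, min(1.0, tissue_damage))

def brain_response(signal):
    # The afferent signal is translated into an avoidance policy.
    if signal > 0.7:
        return "withdraw immediately"
    if signal > 0.2:
        return "protect the area and investigate"
    return "carry on"

print(brain_response(nociceptor(0.9)))  # withdraw immediately
print(brain_response(nociceptor(0.3)))  # protect the area and investigate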

That said, I agree with you on LLMs for the most part, although I think
they are showing some potentially emergent, interesting behaviors.

On Tue, May 23, 2023 at 1:58 AM Terren Suydam 
wrote:

>
> Take a migraine headache - if that's just a symbol, then why does that
> symbol *feel* *bad* while others feel *good*?  Why does any symbol feel
> like anything? If you say evolution did it, that doesn't actually answer
> the question, because evolution doesn't do anything except select for
> traits, roughly speaking. So it just pushes the question to: how did the
> subjective feeling of pain or pleasure emerge from some genetic mutation,
> when it wasn't there before?
>
>
>



Re: what chatGPT is and is not

2023-05-22 Thread Terren Suydam
On Tue, May 23, 2023 at 12:32 AM Stathis Papaioannou 
wrote:

>
>
> On Tue, 23 May 2023 at 14:23, Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 13:37, Terren Suydam 
>>> wrote:
>>>


 On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <
 stath...@gmail.com> wrote:

>
>
> On Tue, 23 May 2023 at 10:48, Terren Suydam 
> wrote:
>
>>
>>
>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <
>> stath...@gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 10:03, Terren Suydam 
>>> wrote:
>>>

 it is true that my brain has been trained on a large amount of data
 - data that contains intelligence outside of my own. But when I 
 introspect,
 I notice that my understanding of things is ultimately rooted/grounded 
 in
 my phenomenal experience. Ultimately, everything we know, we know 
 either by
 our experience, or by analogy to experiences we've had. This is in
 opposition to how LLMs train on data, which is strictly about how
 words/symbols relate to one another.

>>>
>>> The functionalist position is that phenomenal experience supervenes
>>> on behaviour, such that if the behaviour is replicated (same output for
>>> same input) the phenomenal experience will also be replicated. This is 
>>> what
>>> philosophers like Searle (and many laypeople) can’t stomach.
>>>
>>
>> I think the kind of phenomenal supervenience you're talking about is
>> typically asserted for behavior at the level of the neuron, not the level
>> of the whole agent. Is that what you're saying?  That chatGPT must be
>> having a phenomenal experience if it talks like a human?   If so, that is
>> stretching the explanatory domain of functionalism past its breaking 
>> point.
>>
>
> The best justification for functionalism is David Chalmers' "Fading
> Qualia" argument. The paper considers replacing neurons with functionally
> equivalent silicon chips, but it could be generalised to replacing any 
> part
> of the brain with a functionally equivalent black box, the whole brain, 
> the
> whole person.
>

 You're saying that an algorithm that provably does not have experiences
 of rabbits and lollipops - but can still talk about them in a way that's
 indistinguishable from a human - essentially has the same phenomenology as
 a human talking about rabbits and lollipops. That's just absurd on its
 face. You're essentially hand-waving away the grounding problem. Is that
 your position? That symbols don't need to be grounded in any sort of
 phenomenal experience?

>>>
>>> It's not just talking about them in a way that is indistinguishable from
>>> a human, in order to have human-like consciousness the entire I/O behaviour
>>> of the human would need to be replicated. But in principle, I don't see why
>>> a LLM could not have some other type of phenomenal experience. And I don't
>>> think the grounding problem is a problem: I was never grounded in anything,
>>> I just grew up associating one symbol with another symbol, it's symbols all
>>> the way down.
>>>
>>
>> Is the smell of your grandmother's kitchen a symbol?
>>
>
> Yes, I can't pull away the facade to check that there was a real
> grandmother and a real kitchen against which I can check that the sense
> data matches.
>

The grounding problem is about associating symbols with a phenomenal
experience, or the memory of one - which is not the same thing as the
functional equivalent or the neural correlate. It's the feeling, what it's
like to experience the thing the symbol stands for. The experience of
redness. The shock of plunging into cold water. The smell of coffee. etc.

Take a migraine headache - if that's just a symbol, then why does that
symbol *feel* *bad* while others feel *good*?  Why does any symbol feel
like anything? If you say evolution did it, that doesn't actually answer
the question, because evolution doesn't do anything except select for
traits, roughly speaking. So it just pushes the question to: how did the
subjective feeling of pain or pleasure emerge from some genetic mutation,
when it wasn't there before?

Without a functionalist explanation of the *origin* of aesthetic valence,
I don't think you can "get it from bit".

Terren


>
> --
> Stathis Papaioannou
>

Re: what chatGPT is and is not

2023-05-22 Thread Stathis Papaioannou
On Tue, 23 May 2023 at 14:40, Jesse Mazer  wrote:

>
>
> On Mon, May 22, 2023 at 11:37 PM Terren Suydam 
> wrote:
>
>>
>>
>> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 10:48, Terren Suydam 
>>> wrote:
>>>


 On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou 
 wrote:

>
>
> On Tue, 23 May 2023 at 10:03, Terren Suydam 
> wrote:
>
>>
>> it is true that my brain has been trained on a large amount of data -
>> data that contains intelligence outside of my own. But when I 
>> introspect, I
>> notice that my understanding of things is ultimately rooted/grounded in 
>> my
>> phenomenal experience. Ultimately, everything we know, we know either by
>> our experience, or by analogy to experiences we've had. This is in
>> opposition to how LLMs train on data, which is strictly about how
>> words/symbols relate to one another.
>>
>
> The functionalist position is that phenomenal experience supervenes on
> behaviour, such that if the behaviour is replicated (same output for same
> input) the phenomenal experience will also be replicated. This is what
> philosophers like Searle (and many laypeople) can’t stomach.
>

 I think the kind of phenomenal supervenience you're talking about is
 typically asserted for behavior at the level of the neuron, not the level
 of the whole agent. Is that what you're saying?  That chatGPT must be
 having a phenomenal experience if it talks like a human?   If so, that is
 stretching the explanatory domain of functionalism past its breaking point.

>>>
>>> The best justification for functionalism is David Chalmers' "Fading
>>> Qualia" argument. The paper considers replacing neurons with functionally
>>> equivalent silicon chips, but it could be generalised to replacing any part
>>> of the brain with a functionally equivalent black box, the whole brain, the
>>> whole person.
>>>
>>
>> You're saying that an algorithm that provably does not have experiences
>> of rabbits and lollipops - but can still talk about them in a way that's
>> indistinguishable from a human - essentially has the same phenomenology as
>> a human talking about rabbits and lollipops. That's just absurd on its
>> face. You're essentially hand-waving away the grounding problem. Is that
>> your position? That symbols don't need to be grounded in any sort of
>> phenomenal experience?
>>
>> Terren
>>
>
> Are you talking here about Chalmers' thought experiment in which each
> neuron is replaced by a functional duplicate, or about an algorithm like
> ChatGPT that has no detailed resemblance to the structure of a human
> being's brain? I think in the former case the case for identical experience
> is very strong, though note Chalmers is not really a functionalist, he
> postulates "psychophysical laws" which map physical patterns to
> experiences, and uses the replacement argument to argue that such laws
> would have the property of "functional invariance".
>
> If you are just talking about ChatGPT-style programs, I would agree with
> you: a system trained only on the high-level symbols of human language (as
> opposed to symbols representing neural impulses or other low-level events
> on the microscopic level) is not likely to have experiences anything like
> those of a human being using the same symbols. If Stathis' black box
> argument is meant to suggest otherwise, I don't see the logic, since it's
> not like a ChatGPT-style program would replicate the detailed output of a
> composite group of neurons either, or even the exact verbal output of a
> specific person, so there is no equivalent to gradual replacement of parts
> of a real human. If we are just talking about qualitatively behaving in a
> "human-like" way without replicating the behavior of a specific person or
> a sub-component of a person like a group of neurons in their brain,
> Chalmers' thought experiment doesn't apply. And even in a qualitative
> sense, count me as very skeptical that an LLM trained only on human
> writing will ever pass any really rigorous Turing test.
>

Chalmers considers replacing individual neurons and then extending this to
groups of neurons with silicon chips. My variation on this is to replace
any part of a human with a black box that replicates the interactions of
that part with the surrounding tissue. This preserves the behaviour of the
human and also the consciousness; otherwise, the argument
goes, we could make a partial zombie, which is absurd. We could extend the
replacement to any arbitrarily large proportion of the human, say all but a
few cells on the tip of his nose, and the argument still holds. Once those
cells are replaced, the entire human is replaced, and his consciousness
remains unchanged. It is not necessary that inside the black box is
anything resembling or even simulating human physiological processes: that
would be one way 

Re: what chatGPT is and is not

2023-05-22 Thread Jesse Mazer
On Mon, May 22, 2023 at 11:37 PM Terren Suydam 
wrote:

>
>
> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 10:48, Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou 
>>> wrote:
>>>


 On Tue, 23 May 2023 at 10:03, Terren Suydam 
 wrote:

>
> it is true that my brain has been trained on a large amount of data -
> data that contains intelligence outside of my own. But when I introspect, 
> I
> notice that my understanding of things is ultimately rooted/grounded in my
> phenomenal experience. Ultimately, everything we know, we know either by
> our experience, or by analogy to experiences we've had. This is in
> opposition to how LLMs train on data, which is strictly about how
> words/symbols relate to one another.
>

 The functionalist position is that phenomenal experience supervenes on
 behaviour, such that if the behaviour is replicated (same output for same
 input) the phenomenal experience will also be replicated. This is what
 philosophers like Searle (and many laypeople) can’t stomach.

>>>
>>> I think the kind of phenomenal supervenience you're talking about is
>>> typically asserted for behavior at the level of the neuron, not the level
>>> of the whole agent. Is that what you're saying?  That chatGPT must be
>>> having a phenomenal experience if it talks like a human?   If so, that is
>>> stretching the explanatory domain of functionalism past its breaking point.
>>>
>>
>> The best justification for functionalism is David Chalmers' "Fading
>> Qualia" argument. The paper considers replacing neurons with functionally
>> equivalent silicon chips, but it could be generalised to replacing any part
>> of the brain with a functionally equivalent black box, the whole brain, the
>> whole person.
>>
>
> You're saying that an algorithm that provably does not have experiences of
> rabbits and lollipops - but can still talk about them in a way that's
> indistinguishable from a human - essentially has the same phenomenology as
> a human talking about rabbits and lollipops. That's just absurd on its
> face. You're essentially hand-waving away the grounding problem. Is that
> your position? That symbols don't need to be grounded in any sort of
> phenomenal experience?
>
> Terren
>

Are you talking here about Chalmers' thought experiment in which each
neuron is replaced by a functional duplicate, or about an algorithm like
ChatGPT that has no detailed resemblance to the structure of a human
being's brain? I think in the former case the case for identical experience
is very strong, though note Chalmers is not really a functionalist, he
postulates "psychophysical laws" which map physical patterns to
experiences, and uses the replacement argument to argue that such laws
would have the property of "functional invariance".

If you are just talking about ChatGPT-style programs, I would agree with
you: a system trained only on the high-level symbols of human language (as
opposed to symbols representing neural impulses or other low-level events
on the microscopic level) is not likely to have experiences anything like
those of a human being using the same symbols. If Stathis' black box
argument is meant to suggest otherwise, I don't see the logic, since it's
not like a ChatGPT-style program would replicate the detailed output of a
composite group of neurons either, or even the exact verbal output of a
specific person, so there is no equivalent to gradual replacement of parts
of a real human. If we are just talking about qualitatively behaving in a
"human-like" way without replicating the behavior of a specific person or a
sub-component of a person like a group of neurons in their brain, Chalmers'
thought experiment doesn't apply. And even in a qualitative sense, count me
as very skeptical that an LLM trained only on human writing will ever pass
any really rigorous Turing test.

Jesse





Re: what chatGPT is and is not

2023-05-22 Thread Stathis Papaioannou
On Tue, 23 May 2023 at 14:23, Terren Suydam  wrote:

>
>
> On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 13:37, Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou 
>>> wrote:
>>>


 On Tue, 23 May 2023 at 10:48, Terren Suydam 
 wrote:

>
>
> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <
> stath...@gmail.com> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 10:03, Terren Suydam 
>> wrote:
>>
>>>
>>> it is true that my brain has been trained on a large amount of data
>>> - data that contains intelligence outside of my own. But when I 
>>> introspect,
>>> I notice that my understanding of things is ultimately rooted/grounded 
>>> in
>>> my phenomenal experience. Ultimately, everything we know, we know 
>>> either by
>>> our experience, or by analogy to experiences we've had. This is in
>>> opposition to how LLMs train on data, which is strictly about how
>>> words/symbols relate to one another.
>>>
>>
>> The functionalist position is that phenomenal experience supervenes
>> on behaviour, such that if the behaviour is replicated (same output for
>> same input) the phenomenal experience will also be replicated. This is 
>> what
>> philosophers like Searle (and many laypeople) can’t stomach.
>>
>
> I think the kind of phenomenal supervenience you're talking about is
> typically asserted for behavior at the level of the neuron, not the level
> of the whole agent. Is that what you're saying?  That chatGPT must be
> having a phenomenal experience if it talks like a human?   If so, that is
> stretching the explanatory domain of functionalism past its breaking 
> point.
>

 The best justification for functionalism is David Chalmers' "Fading
 Qualia" argument. The paper considers replacing neurons with functionally
 equivalent silicon chips, but it could be generalised to replacing any part
 of the brain with a functionally equivalent black box, the whole brain, the
 whole person.

>>>
>>> You're saying that an algorithm that provably does not have experiences
>>> of rabbits and lollipops - but can still talk about them in a way that's
>>> indistinguishable from a human - essentially has the same phenomenology as
>>> a human talking about rabbits and lollipops. That's just absurd on its
>>> face. You're essentially hand-waving away the grounding problem. Is that
>>> your position? That symbols don't need to be grounded in any sort of
>>> phenomenal experience?
>>>
>>
>> It's not just talking about them in a way that is indistinguishable from
>> a human, in order to have human-like consciousness the entire I/O behaviour
>> of the human would need to be replicated. But in principle, I don't see why
>> a LLM could not have some other type of phenomenal experience. And I don't
>> think the grounding problem is a problem: I was never grounded in anything,
>> I just grew up associating one symbol with another symbol, it's symbols all
>> the way down.
>>
>
> Is the smell of your grandmother's kitchen a symbol?
>

Yes, I can't pull away the facade to check that there was a real
grandmother and a real kitchen against which I can check that the sense
data matches.

-- 
Stathis Papaioannou



Re: what chatGPT is and is not

2023-05-22 Thread Terren Suydam
On Tue, May 23, 2023 at 12:14 AM Stathis Papaioannou 
wrote:

>
>
> On Tue, 23 May 2023 at 13:37, Terren Suydam 
> wrote:
>
>>
>>
>> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 10:48, Terren Suydam 
>>> wrote:
>>>


 On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou 
 wrote:

>
>
> On Tue, 23 May 2023 at 10:03, Terren Suydam 
> wrote:
>
>>
>> it is true that my brain has been trained on a large amount of data -
>> data that contains intelligence outside of my own. But when I 
>> introspect, I
>> notice that my understanding of things is ultimately rooted/grounded in 
>> my
>> phenomenal experience. Ultimately, everything we know, we know either by
>> our experience, or by analogy to experiences we've had. This is in
>> opposition to how LLMs train on data, which is strictly about how
>> words/symbols relate to one another.
>>
>
> The functionalist position is that phenomenal experience supervenes on
> behaviour, such that if the behaviour is replicated (same output for same
> input) the phenomenal experience will also be replicated. This is what
> philosophers like Searle (and many laypeople) can’t stomach.
>

 I think the kind of phenomenal supervenience you're talking about is
 typically asserted for behavior at the level of the neuron, not the level
 of the whole agent. Is that what you're saying?  That chatGPT must be
 having a phenomenal experience if it talks like a human?   If so, that is
 stretching the explanatory domain of functionalism past its breaking point.

>>>
>>> The best justification for functionalism is David Chalmers' "Fading
>>> Qualia" argument. The paper considers replacing neurons with functionally
>>> equivalent silicon chips, but it could be generalised to replacing any part
>>> of the brain with a functionally equivalent black box, the whole brain, the
>>> whole person.
>>>
>>
>> You're saying that an algorithm that provably does not have experiences
>> of rabbits and lollipops - but can still talk about them in a way that's
>> indistinguishable from a human - essentially has the same phenomenology as
>> a human talking about rabbits and lollipops. That's just absurd on its
>> face. You're essentially hand-waving away the grounding problem. Is that
>> your position? That symbols don't need to be grounded in any sort of
>> phenomenal experience?
>>
>
> It's not just talking about them in a way that is indistinguishable from a
> human, in order to have human-like consciousness the entire I/O behaviour
> of the human would need to be replicated. But in principle, I don't see why
> a LLM could not have some other type of phenomenal experience. And I don't
> think the grounding problem is a problem: I was never grounded in anything,
> I just grew up associating one symbol with another symbol, it's symbols all
> the way down.
>

Is the smell of your grandmother's kitchen a symbol?


>
> --
> Stathis Papaioannou
>


Re: what chatGPT is and is not

2023-05-22 Thread Stathis Papaioannou
On Tue, 23 May 2023 at 13:37, Terren Suydam  wrote:

>
>
> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 10:48, Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou 
>>> wrote:
>>>


 On Tue, 23 May 2023 at 10:03, Terren Suydam 
 wrote:

>
> it is true that my brain has been trained on a large amount of data -
> data that contains intelligence outside of my own. But when I introspect, 
> I
> notice that my understanding of things is ultimately rooted/grounded in my
> phenomenal experience. Ultimately, everything we know, we know either by
> our experience, or by analogy to experiences we've had. This is in
> opposition to how LLMs train on data, which is strictly about how
> words/symbols relate to one another.
>

 The functionalist position is that phenomenal experience supervenes on
 behaviour, such that if the behaviour is replicated (same output for same
 input) the phenomenal experience will also be replicated. This is what
 philosophers like Searle (and many laypeople) can’t stomach.

>>>
>>> I think the kind of phenomenal supervenience you're talking about is
>>> typically asserted for behavior at the level of the neuron, not the level
>>> of the whole agent. Is that what you're saying?  That chatGPT must be
>>> having a phenomenal experience if it talks like a human?   If so, that is
>>> stretching the explanatory domain of functionalism past its breaking point.
>>>
>>
>> The best justification for functionalism is David Chalmers' "Fading
>> Qualia" argument. The paper considers replacing neurons with functionally
>> equivalent silicon chips, but it could be generalised to replacing any part
>> of the brain with a functionally equivalent black box, the whole brain, the
>> whole person.
>>
>
> You're saying that an algorithm that provably does not have experiences of
> rabbits and lollipops - but can still talk about them in a way that's
> indistinguishable from a human - essentially has the same phenomenology as
> a human talking about rabbits and lollipops. That's just absurd on its
> face. You're essentially hand-waving away the grounding problem. Is that
> your position? That symbols don't need to be grounded in any sort of
> phenomenal experience?
>

It's not just talking about them in a way that is indistinguishable from a
human, in order to have human-like consciousness the entire I/O behaviour
of the human would need to be replicated. But in principle, I don't see why
a LLM could not have some other type of phenomenal experience. And I don't
think the grounding problem is a problem: I was never grounded in anything,
I just grew up associating one symbol with another symbol, it's symbols all
the way down.

-- 
Stathis Papaioannou



Re: what chatGPT is and is not

2023-05-22 Thread Terren Suydam
On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou 
wrote:

>
>
> On Tue, 23 May 2023 at 10:48, Terren Suydam 
> wrote:
>
>>
>>
>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 10:03, Terren Suydam 
>>> wrote:
>>>

 it is true that my brain has been trained on a large amount of data -
 data that contains intelligence outside of my own. But when I introspect, I
 notice that my understanding of things is ultimately rooted/grounded in my
 phenomenal experience. Ultimately, everything we know, we know either by
 our experience, or by analogy to experiences we've had. This is in
 opposition to how LLMs train on data, which is strictly about how
 words/symbols relate to one another.

>>>
>>> The functionalist position is that phenomenal experience supervenes on
>>> behaviour, such that if the behaviour is replicated (same output for same
>>> input) the phenomenal experience will also be replicated. This is what
>>> philosophers like Searle (and many laypeople) can’t stomach.
>>>
>>
>> I think the kind of phenomenal supervenience you're talking about is
>> typically asserted for behavior at the level of the neuron, not the level
>> of the whole agent. Is that what you're saying?  That chatGPT must be
>> having a phenomenal experience if it talks like a human?   If so, that is
>> stretching the explanatory domain of functionalism past its breaking point.
>>
>
> The best justification for functionalism is David Chalmers' "Fading
> Qualia" argument. The paper considers replacing neurons with functionally
> equivalent silicon chips, but it could be generalised to replacing any part
> of the brain with a functionally equivalent black box, the whole brain, the
> whole person.
>

You're saying that an algorithm that provably does not have experiences of
rabbits and lollipops - but can still talk about them in a way that's
indistinguishable from a human - essentially has the same phenomenology as
a human talking about rabbits and lollipops. That's just absurd on its
face. You're essentially hand-waving away the grounding problem. Is that
your position? That symbols don't need to be grounded in any sort of
phenomenal experience?

Terren



Re: what chatGPT is and is not

2023-05-22 Thread Stathis Papaioannou
On Tue, 23 May 2023 at 10:48, Terren Suydam  wrote:

>
>
> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 10:03, Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Mon, May 22, 2023 at 7:34 PM Stathis Papaioannou 
>>> wrote:
>>>


 On Tue, 23 May 2023 at 07:56, Terren Suydam 
 wrote:

> Many, myself included, are captivated by the amazing capabilities of
> chatGPT and other LLMs. They are, truly, incredible. Depending on your
> definition of Turing Test, it passes with flying colors in many, many
> contexts. It would take a much stricter Turing Test than we might have
> imagined this time last year, before we could confidently say that we're
> not talking to a human. One way to improve chatGPT's performance on an
> actual Turing Test would be to slow it down, because it is too fast to be
> human.
>
> All that said, is chatGPT actually intelligent?  There's no question
> that it behaves in a way that we would all agree is intelligent. The
> answers it gives, and the speed it gives them in, reflect an intelligence
> that often far exceeds most if not all humans.
>
> I know some here say intelligence is as intelligence does. Full stop,
> conversation over. ChatGPT is intelligent, because it acts intelligently.
>
> But this is an oversimplified view!  The reason it's over-simple is
> that it ignores what the source of the intelligence is. The source of the
> intelligence is in the texts it's trained on. If ChatGPT was trained on
> gibberish, that's what you'd get out of it. It is amazingly similar to the
> Chinese Room thought experiment proposed by John Searle. It is 
> manipulating
> symbols without having any understanding of what those symbols are. As a
> result, it does not and can not know if what it's saying is correct or 
> not.
> This is a well known caveat of using LLMs.
>
> ChatGPT, therefore, is more like a search engine that can extract the
> intelligence that is already structured within the data it's trained on.
> Think of it as a semantic google. It's a huge achievement in the sense 
> that
> training on the data in the way it does, it encodes the *context*
> that words appear in with sufficiently high resolution that it's usually
> indistinguishable from humans who actually understand context in a way
> that's *grounded in experience*. LLMs don't experience anything. They
> are feed-forward machines. The algorithms that implement chatGPT are
> useless without enormous amounts of text that expresses actual 
> intelligence.
>
> Cal Newport does a good job of explaining this here.
>

 It could be argued that the human brain is just a complex machine that
 has been trained on vast amounts of data to produce a certain output given
 a certain input, and doesn’t really understand anything. This is a response
 to the Chinese room argument. How would I know if I really understand
 something or just think I understand something?

> --
 Stathis Papaioannou

>>>
>>> it is true that my brain has been trained on a large amount of data -
>>> data that contains intelligence outside of my own. But when I introspect, I
>>> notice that my understanding of things is ultimately rooted/grounded in my
>>> phenomenal experience. Ultimately, everything we know, we know either by
>>> our experience, or by analogy to experiences we've had. This is in
>>> opposition to how LLMs train on data, which is strictly about how
>>> words/symbols relate to one another.
>>>
>>
>> The functionalist position is that phenomenal experience supervenes on
>> behaviour, such that if the behaviour is replicated (same output for same
>> input) the phenomenal experience will also be replicated. This is what
>> philosophers like Searle (and many laypeople) can’t stomach.
>>
>
> I think the kind of phenomenal supervenience you're talking about is
> typically asserted for behavior at the level of the neuron, not the level
> of the whole agent. Is that what you're saying?  That chatGPT must be
> having a phenomenal experience if it talks like a human?   If so, that is
> stretching the explanatory domain of functionalism past its breaking point.
>

The best justification for functionalism is David Chalmers' "Fading Qualia"
argument. The paper considers replacing neurons with functionally
equivalent silicon chips, but it could be generalised to replacing any part
of the brain with a functionally equivalent black box, the whole brain, the
whole person.
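
To make "functionally equivalent" and "same output for same input" concrete,
here is a minimal sketch in Python, under the toy assumption that a brain
component can be reduced to an input/output mapping; the function names are
invented for illustration, and nothing in the code bears on what, if anything,
the replacement feels:

def biological_unit(inputs):
    # Stand-in for some brain component: fires iff the summed input exceeds a threshold.
    return sum(inputs) > 1.0

def black_box_unit(inputs):
    # A differently implemented replacement with identical input/output behaviour.
    return not (sum(inputs) <= 1.0)

def overall_behaviour(units, signals):
    # The agent's behaviour is taken to be just the composed I/O of its parts.
    return [unit(signals) for unit in units]

original = [biological_unit] * 5
for k in range(6):
    # Gradually replace parts with functionally equivalent black boxes.
    hybrid = [black_box_unit] * k + [biological_unit] * (5 - k)
    assert overall_behaviour(hybrid, [0.6, 0.7]) == overall_behaviour(original, [0.6, 0.7])

Externally observable behaviour is identical at every stage of the replacement;
the dispute is over whether the phenomenal experience is carried along with it.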


Re: what chatGPT is and is not

2023-05-22 Thread Terren Suydam
On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou 
wrote:

>
>
> On Tue, 23 May 2023 at 10:03, Terren Suydam 
> wrote:
>
>>
>>
>> On Mon, May 22, 2023 at 7:34 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 07:56, Terren Suydam 
>>> wrote:
>>>
 Many, myself included, are captivated by the amazing capabilities of
 chatGPT and other LLMs. They are, truly, incredible. Depending on your
 definition of Turing Test, it passes with flying colors in many, many
 contexts. It would take a much stricter Turing Test than we might have
 imagined this time last year, before we could confidently say that we're
 not talking to a human. One way to improve chatGPT's performance on an
 actual Turing Test would be to slow it down, because it is too fast to be
 human.

 All that said, is chatGPT actually intelligent?  There's no question
 that it behaves in a way that we would all agree is intelligent. The
 answers it gives, and the speed it gives them in, reflect an intelligence
 that often far exceeds most if not all humans.

 I know some here say intelligence is as intelligence does. Full stop,
 conversation over. ChatGPT is intelligent, because it acts intelligently.

 But this is an oversimplified view!  The reason it's over-simple is
 that it ignores what the source of the intelligence is. The source of the
 intelligence is in the texts it's trained on. If ChatGPT was trained on
 gibberish, that's what you'd get out of it. It is amazingly similar to the
 Chinese Room thought experiment proposed by John Searle. It is manipulating
 symbols without having any understanding of what those symbols are. As a
 result, it does not and can not know if what it's saying is correct or not.
 This is a well known caveat of using LLMs.

 ChatGPT, therefore, is more like a search engine that can extract the
 intelligence that is already structured within the data it's trained on.
 Think of it as a semantic google. It's a huge achievement in the sense that
 training on the data in the way it does, it encodes the *context* that
 words appear in with sufficiently high resolution that it's usually
 indistinguishable from humans who actually understand context in a way
 that's *grounded in experience*. LLMs don't experience anything. They
 are feed-forward machines. The algorithms that implement chatGPT are
 useless without enormous amounts of text that expresses actual 
 intelligence.

 Cal Newport does a good job of explaining this here.

>>>
>>> It could be argued that the human brain is just a complex machine that
>>> has been trained on vast amounts of data to produce a certain output given
>>> a certain input, and doesn’t really understand anything. This is a response
>>> to the Chinese room argument. How would I know if I really understand
>>> something or just think I understand something?
>>>
 --
>>> Stathis Papaioannou
>>>
>>
>> it is true that my brain has been trained on a large amount of data -
>> data that contains intelligence outside of my own. But when I introspect, I
>> notice that my understanding of things is ultimately rooted/grounded in my
>> phenomenal experience. Ultimately, everything we know, we know either by
>> our experience, or by analogy to experiences we've had. This is in
>> opposition to how LLMs train on data, which is strictly about how
>> words/symbols relate to one another.
>>
>
> The functionalist position is that phenomenal experience supervenes on
> behaviour, such that if the behaviour is replicated (same output for same
> input) the phenomenal experience will also be replicated. This is what
> philosophers like Searle (and many laypeople) can’t stomach.
>

I think the kind of phenomenal supervenience you're talking about is
typically asserted for behavior at the level of the neuron, not the level
of the whole agent. Is that what you're saying?  That chatGPT must be
having a phenomenal experience if it talks like a human?   If so, that is
stretching the explanatory domain of functionalism past its breaking point.

Terren


> --
> Stathis Papaioannou
>

Re: what chatGPT is and is not

2023-05-22 Thread Stathis Papaioannou
On Tue, 23 May 2023 at 10:03, Terren Suydam  wrote:

>
>
> On Mon, May 22, 2023 at 7:34 PM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Tue, 23 May 2023 at 07:56, Terren Suydam 
>> wrote:
>>
>>> Many, myself included, are captivated by the amazing capabilities of
>>> chatGPT and other LLMs. They are, truly, incredible. Depending on your
>>> definition of Turing Test, it passes with flying colors in many, many
>>> contexts. It would take a much stricter Turing Test than we might have
>>> imagined this time last year, before we could confidently say that we're
>>> not talking to a human. One way to improve chatGPT's performance on an
>>> actual Turing Test would be to slow it down, because it is too fast to be
>>> human.
>>>
>>> All that said, is chatGPT actually intelligent?  There's no question
>>> that it behaves in a way that we would all agree is intelligent. The
>>> answers it gives, and the speed it gives them in, reflect an intelligence
>>> that often far exceeds most if not all humans.
>>>
>>> I know some here say intelligence is as intelligence does. Full stop,
>>> conversation over. ChatGPT is intelligent, because it acts intelligently.
>>>
>>> But this is an oversimplified view!  The reason it's over-simple is that
>>> it ignores what the source of the intelligence is. The source of the
>>> intelligence is in the texts it's trained on. If ChatGPT was trained on
>>> gibberish, that's what you'd get out of it. It is amazingly similar to the
>>> Chinese Room thought experiment proposed by John Searle. It is manipulating
>>> symbols without having any understanding of what those symbols are. As a
>>> result, it does not and can not know if what it's saying is correct or not.
>>> This is a well known caveat of using LLMs.
>>>
>>> ChatGPT, therefore, is more like a search engine that can extract the
>>> intelligence that is already structured within the data it's trained on.
>>> Think of it as a semantic google. It's a huge achievement in the sense that
>>> training on the data in the way it does, it encodes the *context* that
>>> words appear in with sufficiently high resolution that it's usually
>>> indistinguishable from humans who actually understand context in a way
>>> that's *grounded in experience*. LLMs don't experience anything. They
>>> are feed-forward machines. The algorithms that implement chatGPT are
>>> useless without enormous amounts of text that expresses actual intelligence.
>>>
>>> Cal Newport does a good job of explaining this here.
>>>
>>
>> It could be argued that the human brain is just a complex machine that
>> has been trained on vast amounts of data to produce a certain output given
>> a certain input, and doesn’t really understand anything. This is a response
>> to the Chinese room argument. How would I know if I really understand
>> something or just think I understand something?
>>
>>> --
>> Stathis Papaioannou
>>
>
> it is true that my brain has been trained on a large amount of data - data
> that contains intelligence outside of my own. But when I introspect, I
> notice that my understanding of things is ultimately rooted/grounded in my
> phenomenal experience. Ultimately, everything we know, we know either by
> our experience, or by analogy to experiences we've had. This is in
> opposition to how LLMs train on data, which is strictly about how
> words/symbols relate to one another.
>

The functionalist position is that phenomenal experience supervenes on
behaviour, such that if the behaviour is replicated (same output for same
input) the phenomenal experience will also be replicated. This is what
philosophers like Searle (and many laypeople) can’t stomach.

> --
Stathis Papaioannou



Re: what chatGPT is and is not

2023-05-22 Thread Terren Suydam
On Mon, May 22, 2023 at 7:34 PM Stathis Papaioannou 
wrote:

>
>
> On Tue, 23 May 2023 at 07:56, Terren Suydam 
> wrote:
>
>> Many, myself included, are captivated by the amazing capabilities of
>> chatGPT and other LLMs. They are, truly, incredible. Depending on your
>> definition of Turing Test, it passes with flying colors in many, many
>> contexts. It would take a much stricter Turing Test than we might have
>> imagined this time last year, before we could confidently say that we're
>> not talking to a human. One way to improve chatGPT's performance on an
>> actual Turing Test would be to slow it down, because it is too fast to be
>> human.
>>
>> All that said, is chatGPT actually intelligent?  There's no question that
>> it behaves in a way that we would all agree is intelligent. The answers it
>> gives, and the speed it gives them in, reflect an intelligence that often
>> far exceeds most if not all humans.
>>
>> I know some here say intelligence is as intelligence does. Full stop,
>> conversation over. ChatGPT is intelligent, because it acts intelligently.
>>
>> But this is an oversimplified view!  The reason it's over-simple is that
>> it ignores what the source of the intelligence is. The source of the
>> intelligence is in the texts it's trained on. If ChatGPT was trained on
>> gibberish, that's what you'd get out of it. It is amazingly similar to the
>> Chinese Room thought experiment proposed by John Searle. It is manipulating
>> symbols without having any understanding of what those symbols are. As a
>> result, it does not and can not know if what it's saying is correct or not.
>> This is a well known caveat of using LLMs.
>>
>> ChatGPT, therefore, is more like a search engine that can extract the
>> intelligence that is already structured within the data it's trained on.
>> Think of it as a semantic google. It's a huge achievement in the sense that
>> training on the data in the way it does, it encodes the *context* that
>> words appear in with sufficiently high resolution that it's usually
>> indistinguishable from humans who actually understand context in a way
>> that's *grounded in experience*. LLMs don't experience anything. They
>> are feed-forward machines. The algorithms that implement chatGPT are
>> useless without enormous amounts of text that expresses actual intelligence.
>>
>> Cal Newport does a good job of explaining this here.
>>
>
> It could be argued that the human brain is just a complex machine that has
> been trained on vast amounts of data to produce a certain output given a
> certain input, and doesn’t really understand anything. This is a response
> to the Chinese room argument. How would I know if I really understand
> something or just think I understand something?
>
>> --
> Stathis Papaioannou
>

it is true that my brain has been trained on a large amount of data - data
that contains intelligence outside of my own. But when I introspect, I
notice that my understanding of things is ultimately rooted/grounded in my
phenomenal experience. Ultimately, everything we know, we know either by
our experience, or by analogy to experiences we've had. This is in
opposition to how LLMs train on data, which is strictly about how
words/symbols relate to one another.
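
To make concrete what "strictly about how words/symbols relate to one another"
means, here is a toy sketch in Python: a bigram counter, which is nothing like
the transformer architecture and gradient-descent training actually used for
ChatGPT, and the tiny corpus is invented, but the point carries over: the
training signal never refers to anything outside the text itself.

from collections import defaultdict, Counter
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": record how often each symbol follows each other symbol.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=6):
    # Generate by repeatedly sampling a statistically likely successor symbol.
    out = [start]
    for _ in range(n):
        options = follows[out[-1]]
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug ."

At no point does the model touch anything but co-occurrence statistics of
symbols; whatever grounding the output appears to have is inherited from the
humans who wrote the corpus.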

Terren



Re: what chatGPT is and is not

2023-05-22 Thread Stathis Papaioannou
On Tue, 23 May 2023 at 07:56, Terren Suydam  wrote:

> Many, myself included, are captivated by the amazing capabilities of
> chatGPT and other LLMs. They are, truly, incredible. Depending on your
> definition of Turing Test, it passes with flying colors in many, many
> contexts. It would take a much stricter Turing Test than we might have
> imagined this time last year, before we could confidently say that we're
> not talking to a human. One way to improve chatGPT's performance on an
> actual Turing Test would be to slow it down, because it is too fast to be
> human.
>
> All that said, is chatGPT actually intelligent?  There's no question that
> it behaves in a way that we would all agree is intelligent. The answers it
> gives, and the speed it gives them in, reflect an intelligence that often
> far exceeds most if not all humans.
>
> I know some here say intelligence is as intelligence does. Full stop,
> conversation over. ChatGPT is intelligent, because it acts intelligently.
>
> But this is an oversimplified view!  The reason it's over-simple is that
> it ignores what the source of the intelligence is. The source of the
> intelligence is in the texts it's trained on. If ChatGPT was trained on
> gibberish, that's what you'd get out of it. It is amazingly similar to the
> Chinese Room thought experiment proposed by John Searle. It is manipulating
> symbols without having any understanding of what those symbols are. As a
> result, it does not and can not know if what it's saying is correct or not.
> This is a well known caveat of using LLMs.
>
> ChatGPT, therefore, is more like a search engine that can extract the
> intelligence that is already structured within the data it's trained on.
> Think of it as a semantic google. It's a huge achievement in the sense that
> training on the data in the way it does, it encodes the *context* that
> words appear in with sufficiently high resolution that it's usually
> indistinguishable from humans who actually understand context in a way
> that's *grounded in experience*. LLMs don't experience anything. They are
> feed-forward machines. The algorithms that implement chatGPT are useless
> without enormous amounts of text that expresses actual intelligence.
>
> Cal Newport does a good job of explaining this here.
>

It could be argued that the human brain is just a complex machine that has
been trained on vast amounts of data to produce a certain output given a
certain input, and doesn’t really understand anything. This is a response
to the Chinese room argument. How would I know if I really understand
something or just think I understand something?

> --
Stathis Papaioannou
