Re: Quantum computers: what are they good for?

2023-05-25 Thread Brent Meeker
Protein folding was one of the original motivations for QC. But 
ironically, fast classical algorithms for protein folding have since 
been developed. This illustrates that there is no proof that QC is 
necessarily faster than the fastest of all possible classical algorithms.


Brent

On 5/25/2023 12:03 PM, John Clark wrote:

Some interesting quotations from this Nature article:

/"//The short-term hype is a bit high, but the long-term hype is 
nowhere near enough.”//


 “If anything is going to give something useful in the next five 
years, it will be chemistry calculations,That’s because of the 
relatively low resource requirements. This would be possible using 
quantum computers with a relatively small number of qubits” /


Quantum computers: what are they good for? 



John K Clark    See what's on my new list at Extropolis 





Gravitational-wave detector LIGO is back and can spot more colliding black holes than ever

2023-05-25 Thread John Clark
The new and improved LIGO detector has already found something
interesting. It's too recent to have been mentioned in the article, but
yesterday my phone gave me an alert that, exactly 17.8 seconds after 10:38 PM
Eastern time, LIGO detected something to which they preliminarily assigned a
72% probability of being caused by a collision between two black holes. That's
not high enough certainty to claim a discovery, but the machine had only been
turned on a few hours previously, so it certainly seems to be operating
very well indeed.

Gravitational-wave detector LIGO is back and can spot more colliding black
holes than ever



John K Clark    See what's on my new list at Extropolis




Re: what chatGPT is and is not

2023-05-25 Thread Brent Meeker



On 5/25/2023 7:04 AM, Terren Suydam wrote:


Do you have a theory for why neurology supports consciousness but
silicon circuitry cannot?


I'm agnostic about this, but that's because I no longer assume 
physicalism. For me, the hard problem signals that physicalism is 
impossible. I've argued on this list many times as a physicalist, as 
one who believes in the possibility of artificial consciousness, 
uploading, etc. I've argued that there is something it is like to be a 
cybernetic system. But at the end of it all, I just couldn't overcome 
the problem of aesthetic valence


Why would aesthetic valence be a problem for physicalism? Even bacteria 
know enough to swim away from some chemical gradients and toward others.
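
A minimal sketch of that point in Python (illustrative only, not from the
post; the run-and-tumble rule is loosely modeled on E. coli chemotaxis, and
every parameter below is an assumption). Gradient-seeking "preference" falls
out of a two-line rule with no inner state beyond the last sensor reading:

import random

def concentration(x):
    """Attractant level: peaks at x = 0 and falls off with distance."""
    return 1.0 / (1.0 + x * x)

def run_and_tumble(steps=10_000, start=50.0):
    x, direction = start, random.choice([-1, 1])
    last = concentration(x)
    for _ in range(steps):
        x += 0.1 * direction
        now = concentration(x)
        # Tumble (pick a fresh random direction) rarely while the reading
        # improves, often while it worsens.
        if random.random() < (0.02 if now > last else 0.3):
            direction = random.choice([-1, 1])
        last = now
    return x

random.seed(1)
walkers = [run_and_tumble() for _ in range(20)]
print(sum(abs(p) for p in walkers) / len(walkers))
# Starting 50 units from the peak, the walkers finish clustered near it:
# "preference" as observable behavior, with nothing resembling feeling inside.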


Brent



Quantum computers: what are they good for?

2023-05-25 Thread John Clark
Some interesting quotations from this Nature article:

*"**The short-term hype is a bit high, but the long-term hype is nowhere
near enough.”*

* “If anything is going to give something useful in the next five years, it
will be chemistry calculations, That’s because of the relatively low
resource requirements. This would be possible using quantum computers with
a relatively small number of qubits” *

Quantum computers: what are they good for?


John K Clark    See what's on my new list at Extropolis




Re: what chatGPT is and is not

2023-05-25 Thread Stathis Papaioannou
On Fri, 26 May 2023 at 00:21, Jason Resch  wrote:

>
>
> On Thu, May 25, 2023, 9:43 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Thu, 25 May 2023 at 21:28, Jason Resch  wrote:
>>
>>>
>>>
>>> On Thu, May 25, 2023, 12:30 AM Stathis Papaioannou 
>>> wrote:
>>>


 On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:

>
>
> On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Thu, 25 May 2023 at 11:48, Jason Resch 
>> wrote:
>>
>> >An RNG would be a bad design choice because it would be extremely
>>> unreliable. However, as a thought experiment, it could work. If the 
>>> visual
>>> cortex were removed and replaced with an RNG which for five minutes
>>> replicated the interactions with the remaining brain, the subject would
>>> behave as if they had normal vision and report that they had normal 
>>> vision,
>>> then after five minutes behave as if they were blind and report that 
>>> they
>>> were blind. It is perhaps contrary to intuition that the subject would
>>> really have visual experiences in that five minute period, but I don't
>>> think there is any other plausible explanation.
>>>
>>
>>> I think they would be a visual zombie in that five minute period,
>>> though as described they would not be able to report any difference.
>>>
>>> I think if one's entire brain were replaced by an RNG, they would be
>>> a total zombie who would fool us into thinking they were conscious and 
>>> we
>>> would not notice a difference. So by extension a brain partially 
>>> replaced
>>> by an RNG would be a partial zombie that fooled the other parts of the
>>> brain into thinking nothing was amiss.
>>>
>>
>> I think the concept of a partial zombie makes consciousness
>> nonsensical.
>>
>
> It borders on the nonsensical, but between the two bad alternatives I
> find the idea of a RNG instantiating human consciousness somewhat less
> sensical than the idea of partial zombies.
>

 If consciousness persists no matter what the brain is replaced with as
 long as the output remains the same this is consistent with the idea that
 consciousness does not reside in a particular substance (even a magical
 substance) or in a particular process.

>>>
>>> Yes but this is a somewhat crude 1960s version of functionalism, which
>>> as I described and as you recognized, is vulnerable to all kinds of
>>> attacks. Modern functionalism is about more than high level inputs and
>>> outputs, and includes causal organization and implementation details at
>>> some level (the functional substitution level).
>>>
>>> Don't read too deeply into the mathematical definition of function as
>>> simply inputs and outputs, think of it more in terms of what a mind does,
>>> rather than what a mind is, this is the thinking that led to functionalism
>>> and an acceptance of multiple realizability.
>>>
>>>
>>>
>>> This is a strange idea, but it is akin to the existence of platonic
 objects. The number three can be implemented by arranging three objects in
 a row but it does not depend on those three objects unless it is being used
 for a particular purpose, such as three beads on an abacus.

>>>
>>> Bubble sort and merge sort both compute the same thing and both have the
>>> same inputs and outputs, but they are different mathematical objects, with
>>> different behaviors, steps, subroutines and runtime efficiency.
>>>
>>>
>>>

> How would I know that I am not a visual zombie now, or a visual zombie
>> every Tuesday, Thursday and Saturday?
>>
>
> Here, we have to be careful what we mean by "I". Our own brains have
> various spheres of consciousness as demonstrated by the Wada Test: we can
> shut down one hemisphere of the brain and lose partial awareness and
> functionality such as the ability to form words and yet one remains
> conscious. I think being a partial zombie would be like that, having one's
> sphere of awareness shrink.
>

 But the subject's sphere of awareness would not shrink in the thought
 experiment,

>>>
>>> Have you ever wondered what delineates the mind from its environment?
>>> Why it is that you are not aware of my thoughts but you see me as an object
>>> that only affects your senses, even though we could represent the whole
>>> earth as one big functional system?
>>>
>>> I don't have a good answer to this question but it seems it might be a
>>> factor here. The randomly generated outputs from the RNG would seem an
>>> environmental noise/sensation coming from the outside, rather than a
>>> recursively linked and connected loop of processing as would exist in a
>>> genuinely functioning brain of two hemispheres.
>>>
>>>
>>> since by assumption their behaviour stays the same, while if their
 sphere of awareness shrank they 

Re: what chatGPT is and is not

2023-05-25 Thread Brent Meeker




On 5/25/2023 4:28 AM, Jason Resch wrote:
Have you ever wondered what delineates the mind from its environment? 
Why it is that you are not aware of my thoughts but you see me as an 
object that only affects your senses, even though we could represent 
the whole earth as one big functional system?


I don't have a good answer to this question but it seems it might be a 
factor here. The randomly generated outputs from the RNG would seem an 
environmental noise/sensation coming from the outside, rather than a 
recursively linked and connected loop of processing as would exist in 
a genuinely functioning brain of two hemispheres.


I would reject this radical output=function. The brain evolved as 
support for the sensory systems. It is inherently and sensitively 
engaged with the environment. The RNG thought experiment is based on 
the idea that the brain can function in isolation, an idea supported by 
concentrating on consciousness as the main function of the brain, which 
I also reject. Consciousness is a relatively small part of the brain's 
function, mainly concerned with communication to others. Remember that 
the Poincaré effect was described by a great mathematician.


Brent



Re: what chatGPT is and is not

2023-05-25 Thread John Clark
On Thu, May 25, 2023 at 7:28 AM Jason Resch  wrote:

> Have you ever wondered what delineates the mind from its environment?

No.


> Why it is that you are not aware of my thoughts but you see me as an
> object that only affects your senses, even though we could represent the
> whole earth as one big functional system?

The reason is lack of information and lack of computational resources; it's
the same reason you're not aware of the velocity of every molecule of the air
in the room you're in right now, nor can you predict what all the molecules
will be doing one hour from now, but you are aware of the air's temperature
now and you can make a pretty good guess about what the temperature will be in
one hour.
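
As a toy illustration of that asymmetry (illustrative only; the 500 m/s
scale and the Gaussian are arbitrary stand-ins, not a real Maxwell-Boltzmann
gas): the per-molecule values are unpredictable, the aggregate is not.

import random

def mean_square_speed(n):
    # Sample n one-dimensional molecular speeds and average their squares;
    # temperature is proportional to exactly this kind of aggregate.
    speeds = [random.gauss(0.0, 500.0) for _ in range(n)]
    return sum(v * v for v in speeds) / n

random.seed(0)
for _ in range(3):
    one = random.gauss(0.0, 500.0)      # one molecule: anyone's guess
    bulk = mean_square_speed(100_000)   # the aggregate: stable every time
    print(f"one molecule: {one:8.1f} m/s   bulk mean square: {bulk:9.0f}")
# The single-molecule samples scatter over hundreds of m/s; the bulk average
# stays within about half a percent of its expected 250,000 (m/s)^2.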


> I don't have a good answer to this question

Then how fortunate it is for you to be able to talk to me.

> The randomly generated outputs from the RNG would seem an environmental
> noise/sensation coming from the outside, rather than a recursively linked
> and connected loop of processing


In your ridiculous example, the cause of the neuron acting the way it does
comes neither from the inside nor from the outside, because you claim the
neuron is acting randomly, and the very definition of "random" is an event
without a cause.

> But here (almost by magic), the RNG outputs have forced the physical
> behavior of the remaining hemisphere to remain the same

That is incorrect. The neuron is not behaving "ALMOST" magically, it IS
magical; but you were the one who dreamed up this magical thought
experiment, not me.

> Arnold Zuboff has written a thought experiment to this effect.

I'm not going to bother looking it up because you and I have very different
ideas about what constitutes a good thought experiment.

> But if a theory cannot acknowledge a difference in the consciousness
> between an electron and a dreaming brain inside a skull, then the theory
> is (in my opinion) operationally useless.

Correct. Unless you make the unprovable assumption that intelligent
behavior implies consciousness, then EVERY consciousness theory is
operationally useless. And useless for the study of Ontology and
Epistemology too. In other words just plain useless. That's why I'm vastly
more interested in intelligence theories than consciousness theories;
consciousness is easy to fake and intelligence is impossible to fake.

John K Clark    See what's on my new list at Extropolis




Re: what chatGPT is and is not

2023-05-25 Thread Jason Resch
On Thu, May 25, 2023 at 9:16 AM Terren Suydam 
wrote:

>
>
> On Tue, May 23, 2023 at 6:00 PM Jason Resch  wrote:
>
>>
>>
>> On Tue, May 23, 2023, 4:14 PM Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 2:27 PM Jason Resch 
>>> wrote:
>>>


 On Tue, May 23, 2023 at 1:15 PM Terren Suydam 
 wrote:

>
>
> On Tue, May 23, 2023 at 11:08 AM Dylan Distasio 
> wrote:
>
>
>> And yes, I'm arguing that a true simulation (let's say for the sake
>> of a thought experiment we were able to replicate every neural connection
>> of a human being in code, including the connectomes, and 
>> neurotransmitters,
>> along with a simulated nerve that was connected to a button on the desk 
>> we
>> could press which would simulate the signal sent when a biological pain
>> receptor is triggered) would feel pain that is just as real as the pain 
>> you
>> and I feel as biological organisms.
>>
>
> This follows from the physicalist no-zombies-possible stance. But it
> still runs into the hard problem, basically. How does stuff give rise to
> experience?
>
>
 I would say stuff doesn't give rise to conscious experience. Conscious
 experience is the logically necessary and required state of knowledge that
 is present in any consciousness-necessitating behaviors. If you design a
 simple robot with a camera and robot arm that is able to reliably catch a
 ball thrown in its general direction, then something in that system *must*
 contain knowledge of the ball's relative position and trajectory. It simply
 isn't logically possible to have a system that behaves in all situations as
 if it knows where the ball is, without knowing where the ball is.
 Consciousness is simply the state of being with knowledge.

 Con- "Latin for with"
 -Scious- "Latin for knowledge"
 -ness "English suffix meaning the state of being X"

 Consciousness -> The state of being with knowledge.

 There is an infinite variety of potential states and levels of
 knowledge, and this contributes to much of the confusion, but boiled down
 to the simplest essence of what is or isn't conscious, it is all about
 knowledge states. Knowledge states require activity/reactivity to the
 presence of information, and counterfactual behaviors (if/then, greater
 than less than, discriminations and comparisons that lead to different
 downstream consequences in a system's behavior). At least, this is my
 theory of consciousness.

 Jason

>>>
>>> This still runs into the valence problem though. Why does some
>>> "knowledge" correspond with a positive *feeling* and other knowledge
>>> with a negative feeling?
>>>
>>
>> That is a great question. Though I'm not sure it's fundamentally
>> insoluble within a model where every conscious state is a particular state of
>> knowledge.
>>
>> I would propose that having positive and negative experiences, i.e. pain
>> or pleasure, requires knowledge states with a certain minium degree of
>> sophistication. For example, knowing:
>>
>> Pain being associated with knowledge states such as: "I don't like this,
>> this is bad, I'm in pain, I want to change my situation."
>>
>> Pleasure being associated with knowledge states such as: "This is good
>> for me, I could use more of this, I don't want this to end."
>>
>> Such knowledge states require a degree of reflexive awareness, to have a
>> notion of a self where some outcomes may be either positive or negative to
>> that self, and perhaps some notion of time or a sufficient agency to be
>> able to change one's situation.
>>
>> Some have argued that plants can't feel pain because there's little they
>> can do to change their situation (though I'm agnostic on this).
>>
>>   I'm not talking about the functional accounts of positive and negative
>>> experiences. I'm talking about phenomenology. The functional aspect of it
>>> is not irrelevant, but to focus *only* on that is to sweep the feeling
>>> under the rug. So many dialogs on this topic basically terminate here,
>>> where it's just a clash of belief about the relative importance of
>>> consciousness and phenomenology as the mediator of all experience and
>>> knowledge.
>>>
>>
>> You raise important questions which no complete theory of consciousness
>> should ignore. I think one reason things break down here is because there's
>> such incredible complexity behind and underlying the states of
>> consciousness we humans perceive and no easy way to communicate all the
>> salient properties of those experiences.
>>
>> Jason
>>
>
> Thanks for that. These kinds of questions are rarely acknowledged in the
> mainstream. The problem is how much we take valence as a given, or how much
> it's conflated with its function, that most people aren't aware of how
> strange it is if you're coming from a physicalist metaphysics.  "Evolution
> did 

Re: what chatGPT is and is not

2023-05-25 Thread Jason Resch
On Thu, May 25, 2023 at 9:05 AM Terren Suydam 
wrote:

>
>
> On Tue, May 23, 2023 at 5:47 PM Jason Resch  wrote:
>
>>
>>
>> On Tue, May 23, 2023, 3:50 PM Terren Suydam 
>> wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 1:46 PM Jason Resch 
>>> wrote:
>>>


 On Tue, May 23, 2023, 9:34 AM Terren Suydam 
 wrote:

>
>
> On Tue, May 23, 2023 at 7:09 AM Jason Resch 
> wrote:
>
>> As I see this thread, Terren and Stathis are both talking past each
>> other. Please either of you correct me if I am wrong, but in an effort to
>> clarify and perhaps resolve this situation:
>>
>> I believe Stathis is saying the functional substitution having the
>> same fine-grained causal organization *would* have the same 
>> phenomenology,
>> the same experience, and the same qualia as the brain with the same
>> fine-grained causal organization.
>>
>> Therefore, there is no disagreement between your positions with
>> regards to symbols groundings, mappings, etc.
>>
>> When you both discuss the problem of symbology, or bits, etc. I
>> believe this is partly responsible for why you are both talking past each
>> other, because there are many levels involved in brains (and 
>> computational
>> systems). I believe you were discussing completely different levels in 
>> the
>> hierarchical organization.
>>
>> There are high-level parts of minds, such as ideas, thoughts,
>> feelings, quale, etc. and there are low-level, be they neurons,
>> neurotransmitters, atoms, quantum fields, and laws of physics as in human
>> brains, or circuits, logic gates, bits, and instructions as in computers.
>>
>> I think when Terren mentions a "symbol for the smell of grandmother's
>> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
>> quale
>> or idea or memory of the smell of GMK is a very high-level feature of a
>> mind. When Terren asks for or discusses a symbol for it, a complete
>> answer/description for it can only be supplied in terms of a vast amount 
>> of
>> information concerning low level structures, be they patterns of neuron
>> firings, or patterns of bits being processed. When we consider things 
>> down
>> at this low level, however, we lose all context for what the meaning, 
>> idea,
>> and quale are or where or how they come in. We cannot see or find the 
>> idea
>> of GMK in any neuron, no more than we can see or find it in any bit.
>>
>> Of course then it should seem deeply mysterious, if not impossible,
>> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
>> greater a leap from how we get "it" from a bunch of cells squirting ions
>> back and forth. Trying to understand a smartphone by looking at the flows
>> of electrons is a similar kind of problem, it would seem just as 
>> difficult
>> or impossible to explain and understand the high-level features and
>> complexity out of the low-level simplicity.
>>
>> This is why it's crucial to bear in mind and explicitly discuss the
>> level one is operating on when one discusses symbols, substrates, or 
>> quale.
>> In summary, I think a chief reason you have been talking past each other 
>> is
>> because you are each operating on different assumed levels.
>>
>> Please correct me if you believe I am mistaken and know I only offer
>> my perspective in the hope it might help the conversation.
>>
>
> I appreciate the callout, but it is necessary to talk at both the
> micro and the macro for this discussion. We're talking about symbol
> grounding. I should make it clear that I don't believe symbols can be
> grounded in other symbols (i.e. symbols all the way down as Stathis put
> it), that leads to infinite regress and the illusion of meaning.  Symbols
> ultimately must stand for something. The only thing they can stand
> *for*, ultimately, is something that cannot be communicated by other
> symbols: conscious experience. There is no concept in our brains that is
> not ultimately connected to something we've seen, heard, felt, smelled, or
> tasted.
>

 I agree everything you have experienced is rooted in consciousness.

 But at the low level, the only thing your brain senses is neural
 signals (symbols, on/off, ones and zeros).

 In your arguments you rely on the high-level conscious states of human
 brains to establish that they have grounding, but then use the low-level
 descriptions of machines to deny their own consciousness, and hence deny
 they can ground their processing to anything.

 If you remained in the space of low-level descriptions for both brains
 and machine intelligences, however, you would see each struggles to make a
 connection to what may exist at the 

Re: what chatGPT is and is not

2023-05-25 Thread Jason Resch
On Thu, May 25, 2023, 9:43 AM Stathis Papaioannou 
wrote:

>
>
> On Thu, 25 May 2023 at 21:28, Jason Resch  wrote:
>
>>
>>
>> On Thu, May 25, 2023, 12:30 AM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:
>>>


 On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
 wrote:

>
>
> On Thu, 25 May 2023 at 11:48, Jason Resch 
> wrote:
>
> >An RNG would be a bad design choice because it would be extremely
>> unreliable. However, as a thought experiment, it could work. If the 
>> visual
>> cortex were removed and replaced with an RNG which for five minutes
>> replicated the interactions with the remaining brain, the subject would
>> behave as if they had normal vision and report that they had normal 
>> vision,
>> then after five minutes behave as if they were blind and report that they
>> were blind. It is perhaps contrary to intuition that the subject would
>> really have visual experiences in that five minute period, but I don't
>> think there is any other plausible explanation.
>>
>
>> I think they would be a visual zombie in that five minute period,
>> though as described they would not be able to report any difference.
>>
>> I think if one's entire brain were replaced by an RNG, they would be
>> a total zombie who would fool us into thinking they were conscious and we
>> would not notice a difference. So by extension a brain partially replaced
>> by an RNG would be a partial zombie that fooled the other parts of the
>> brain into thinking nothing was amiss.
>>
>
> I think the concept of a partial zombie makes consciousness
> nonsensical.
>

 It borders on the nonsensical, but between the two bad alternatives I
 find the idea of a RNG instantiating human consciousness somewhat less
 sensical than the idea of partial zombies.

>>>
>>> If consciousness persists no matter what the brain is replaced with as
>>> long as the output remains the same this is consistent with the idea that
>>> consciousness does not reside in a particular substance (even a magical
>>> substance) or in a particular process.
>>>
>>
>> Yes but this is a somewhat crude 1960s version of functionalism, which as
>> I described and as you recognized, is vulnerable to all kinds of attacks.
>> Modern functionalism is about more than high level inputs and outputs, and
>> includes causal organization and implementation details at some level (the
>> functional substitution level).
>>
>> Don't read too deeply into the mathematical definition of function as
>> simply inputs and outputs, think of it more in terms of what a mind does,
>> rather than what a mind is, this is the thinking that led to functionalism
>> and an acceptance of multiple realizability.
>>
>>
>>
>> This is a strange idea, but it is akin to the existence of platonic
>>> objects. The number three can be implemented by arranging three objects in
>>> a row but it does not depend on those three objects unless it is being used
>>> for a particular purpose, such as three beads on an abacus.
>>>
>>
>> Bubble sort and merge sort both compute the same thing and both have the
>> same inputs and outputs, but they are different mathematical objects, with
>> different behaviors, steps, subroutines and runtime efficiency.
>>
>>
>>
>>>
 How would I know that I am not a visual zombie now, or a visual zombie
> every Tuesday, Thursday and Saturday?
>

 Here, we have to be careful what we mean by "I". Our own brains have
 various spheres of consciousness as demonstrated by the Wada Test: we can
 shut down one hemisphere of the brain and lose partial awareness and
 functionality such as the ability to form words and yet one remains
 conscious. I think being a partial zombie would be like that, having one's
 sphere of awareness shrink.

>>>
>>> But the subject's sphere of awareness would not shrink in the thought
>>> experiment,
>>>
>>
>> Have you ever wondered what delineates the mind from its environment? Why
>> it is that you are not aware of my thoughts but you see me as an object
>> that only affects your senses, even though we could represent the whole
>> earth as one big functional system?
>>
>> I don't have a good answer to this question but it seems it might be a
>> factor here. The randomly generated outputs from the RNG would seem an
>> environmental noise/sensation coming from the outside, rather than a
>> recursively linked and connected loop of processing as would exist in a
>> genuinely functioning brain of two hemispheres.
>>
>>
>> since by assumption their behaviour stays the same, while if their sphere
>>> of awareness shrank they would notice that something was different and say so.
>>>
>>
>> But here (almost by magic), the RNG outputs have forced the physical
>> behavior of the remaining hemisphere to remain the same while fundamentally
>> 

Re: what chatGPT is and is not

2023-05-25 Thread Terren Suydam
On Tue, May 23, 2023 at 6:00 PM Jason Resch  wrote:

>
>
> On Tue, May 23, 2023, 4:14 PM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 2:27 PM Jason Resch  wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023 at 1:15 PM Terren Suydam 
>>> wrote:
>>>


 On Tue, May 23, 2023 at 11:08 AM Dylan Distasio 
 wrote:


> And yes, I'm arguing that a true simulation (let's say for the sake of
> a thought experiment we were able to replicate every neural connection of 
> a
> human being in code, including the connectomes, and neurotransmitters,
> along with a simulated nerve that was connected to a button on the desk we
> could press which would simulate the signal sent when a biological pain
> receptor is triggered) would feel pain that is just as real as the pain 
> you
> and I feel as biological organisms.
>

 This follows from the physicalist no-zombies-possible stance. But it
 still runs into the hard problem, basically. How does stuff give rise to
 experience?


>>> I would say stuff doesn't give rise to conscious experience. Conscious
>>> experience is the logically necessary and required state of knowledge that
>>> is present in any consciousness-necessitating behaviors. If you design a
>>> simple robot with a camera and robot arm that is able to reliably catch a
>>> ball thrown in its general direction, then something in that system *must*
>>> contain knowledge of the ball's relative position and trajectory. It simply
>>> isn't logically possible to have a system that behaves in all situations as
>>> if it knows where the ball is, without knowing where the ball is.
>>> Consciousness is simply the state of being with knowledge.
>>>
>>> Con- "Latin for with"
>>> -Scious- "Latin for knowledge"
>>> -ness "English suffix meaning the state of being X"
>>>
>>> Consciousness -> The state of being with knowledge.
>>>
>>> There is an infinite variety of potential states and levels of
>>> knowledge, and this contributes to much of the confusion, but boiled down
>>> to the simplest essence of what is or isn't conscious, it is all about
>>> knowledge states. Knowledge states require activity/reactivity to the
>>> presence of information, and counterfactual behaviors (if/then, greater
>>> than less than, discriminations and comparisons that lead to different
>>> downstream consequences in a system's behavior). At least, this is my
>>> theory of consciousness.
>>>
>>> Jason
>>>
>>
>> This still runs into the valence problem though. Why does some
>> "knowledge" correspond with a positive *feeling* and other knowledge
>> with a negative feeling?
>>
>
> That is a great question. Though I'm not sure it's fundamentally insoluble
> within a model where every conscious state is a particular state of knowledge.
>
> I would propose that having positive and negative experiences, i.e. pain
> or pleasure, requires knowledge states with a certain minimum degree of
> sophistication. For example, knowing:
>
> Pain being associated with knowledge states such as: "I don't like this,
> this is bad, I'm in pain, I want to change my situation."
>
> Pleasure being associated with knowledge states such as: "This is good for
> me, I could use more of this, I don't want this to end."
>
> Such knowledge states require a degree of reflexive awareness, to have a
> notion of a self where some outcomes may be either positive or negative to
> that self, and perhaps some notion of time or a sufficient agency to be
> able to change one's situation.
>
> Some have argued that plants can't feel pain because there's little they
> can do to change their situation (though I'm agnostic on this).
>
>   I'm not talking about the functional accounts of positive and negative
>> experiences. I'm talking about phenomenology. The functional aspect of it
>> is not irrelevant, but to focus *only* on that is to sweep the feeling
>> under the rug. So many dialogs on this topic basically terminate here,
>> where it's just a clash of belief about the relative importance of
>> consciousness and phenomenology as the mediator of all experience and
>> knowledge.
>>
>
> You raise important questions which no complete theory of consciousness
> should ignore. I think one reason things break down here is because there's
> such incredible complexity behind and underlying the states of
> consciousness we humans perceive and no easy way to communicate all the
> salient properties of those experiences.
>
> Jason
>

Thanks for that. These kinds of questions are rarely acknowledged in the
mainstream. The problem is that we take valence so much as a given, or
conflate it so much with its function, that most people aren't aware of how
strange it is if you're coming from a physicalist metaphysics. "Evolution
did it" is the common refrain, but it begs the question.

With your proposal, would a bacterium potentially possess the knowledge
states required?

And the idea that plants cannot influence their 

Re: what chatGPT is and is not

2023-05-25 Thread Terren Suydam
On Tue, May 23, 2023 at 5:47 PM Jason Resch  wrote:

>
>
> On Tue, May 23, 2023, 3:50 PM Terren Suydam 
> wrote:
>
>>
>>
>> On Tue, May 23, 2023 at 1:46 PM Jason Resch  wrote:
>>
>>>
>>>
>>> On Tue, May 23, 2023, 9:34 AM Terren Suydam 
>>> wrote:
>>>


 On Tue, May 23, 2023 at 7:09 AM Jason Resch 
 wrote:

> As I see this thread, Terren and Stathis are both talking past each
> other. Please either of you correct me if I am wrong, but in an effort to
> clarify and perhaps resolve this situation:
>
> I believe Stathis is saying the functional substitution having the
> same fine-grained causal organization *would* have the same phenomenology,
> the same experience, and the same qualia as the brain with the same
> fine-grained causal organization.
>
> Therefore, there is no disagreement between your positions with
> regards to symbols groundings, mappings, etc.
>
> When you both discuss the problem of symbology, or bits, etc. I
> believe this is partly responsible for why you are both talking past each
> other, because there are many levels involved in brains (and computational
> systems). I believe you were discussing completely different levels in the
> hierarchical organization.
>
> There are high-level parts of minds, such as ideas, thoughts,
> feelings, quale, etc. and there are low-level, be they neurons,
> neurotransmitters, atoms, quantum fields, and laws of physics as in human
> brains, or circuits, logic gates, bits, and instructions as in computers.
>
> I think when Terren mentions a "symbol for the smell of grandmother's
> kitchen" (GMK) the trouble is we are crossing a myriad of levels. The 
> quale
> or idea or memory of the smell of GMK is a very high-level feature of a
> mind. When Terren asks for or discusses a symbol for it, a complete
> answer/description for it can only be supplied in terms of a vast amount 
> of
> information concerning low level structures, be they patterns of neuron
> firings, or patterns of bits being processed. When we consider things down
> at this low level, however, we lose all context for what the meaning, 
> idea,
> and quale are or where or how they come in. We cannot see or find the idea
> of GMK in any neuron, no more than we can see or find it in any neuron.
>
> Of course then it should seem deeply mysterious, if not impossible,
> how we get "it" (GMK or otherwise) from "bit", but to me, this is no
> greater a leap from how we get "it" from a bunch of cells squirting ions
> back and forth. Trying to understand a smartphone by looking at the flows
> of electrons is a similar kind of problem, it would seem just as difficult
> or impossible to explain and understand the high-level features and
> complexity out of the low-level simplicity.
>
> This is why it's crucial to bear in mind and explicitly discuss the
> level one is operating on when one discusses symbols, substrates, or 
> quale.
> In summary, I think a chief reason you have been talking past each other 
> is
> because you are each operating on different assumed levels.
>
> Please correct me if you believe I am mistaken and know I only offer
> my perspective in the hope it might help the conversation.
>

 I appreciate the callout, but it is necessary to talk at both the micro
 and the macro for this discussion. We're talking about symbol grounding. I
 should make it clear that I don't believe symbols can be grounded in other
 symbols (i.e. symbols all the way down as Stathis put it), that leads to
 infinite regress and the illusion of meaning.  Symbols ultimately must
 stand for something. The only thing they can stand *for*, ultimately,
 is something that cannot be communicated by other symbols: conscious
 experience. There is no concept in our brains that is not ultimately
 connected to something we've seen, heard, felt, smelled, or tasted.

>>>
>>> I agree everything you have experienced is rooted in consciousness.
>>>
>>> But at the low level, the only thing your brain senses is neural
>>> signals (symbols, on/off, ones and zeros).
>>>
>>> In your arguments you rely on the high-level conscious states of human
>>> brains to establish that they have grounding, but then use the low-level
>>> descriptions of machines to deny their own consciousness, and hence deny
>>> they can ground their processing to anything.
>>>
>>> If you remained in the space of low-level descriptions for both brains
>>> and machine intelligences, however, you would see each struggles to make a
>>> connection to what may exist at the high-level. You would see, the lack of
>>> any apparent grounding in what are just neurons firing or not firing at
>>> certain times. Just as a wire in a circuit either carries or doesn't carry
>>> a charge.
>>>
>>
>> 

Re: what chatGPT is and is not

2023-05-25 Thread Stathis Papaioannou
On Thu, 25 May 2023 at 21:28, Jason Resch  wrote:

>
>
> On Thu, May 25, 2023, 12:30 AM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:
>>
>>>
>>>
>>> On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
>>> wrote:
>>>


 On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:

 >An RNG would be a bad design choice because it would be extremely
> unreliable. However, as a thought experiment, it could work. If the visual
> cortex were removed and replaced with an RNG which for five minutes
> replicated the interactions with the remaining brain, the subject would
> behave as if they had normal vision and report that they had normal 
> vision,
> then after five minutes behave as if they were blind and report that they
> were blind. It is perhaps contrary to intuition that the subject would
> really have visual experiences in that five minute period, but I don't
> think there is any other plausible explanation.
>

> I think they would be a visual zombie in that five minute period,
> though as described they would not be able to report any difference.
>
> I think if one's entire brain were replaced by an RNG, they would be a
> total zombie who would fool us into thinking they were conscious and we
> would not notice a difference. So by extension a brain partially replaced
> by an RNG would be a partial zombie that fooled the other parts of the
> brain into thinking nothing was amiss.
>

 I think the concept of a partial zombie makes consciousness nonsensical.

>>>
>>> It borders on the nonsensical, but between the two bad alternatives I
>>> find the idea of a RNG instantiating human consciousness somewhat less
>>> sensical than the idea of partial zombies.
>>>
>>
>> If consciousness persists no matter what the brain is replaced with as
>> long as the output remains the same this is consistent with the idea that
>> consciousness does not reside in a particular substance (even a magical
>> substance) or in a particular process.
>>
>
> Yes but this is a somewhat crude 1960s version of functionalism, which as
> I described and as you recognized, is vulnerable to all kinds of attacks.
> Modern functionalism is about more than high level inputs and outputs, and
> includes causal organization and implementation details at some level (the
> functional substitution level).
>
> Don't read too deeply into the mathematical definition of function as
> simply inputs and outputs, think of it more in terms of what a mind does,
> rather than what a mind is, this is the thinking that led to functionalism
> and an acceptance of multiple realizability.
>
>
>
> This is a strange idea, but it is akin to the existence of platonic
>> objects. The number three can be implemented by arranging three objects in
>> a row but it does not depend on those three objects unless it is being used
>> for a particular purpose, such as three beads on an abacus.
>>
>
> Bubble sort and merge sort both compute the same thing and both have the
> same inputs and outputs, but they are different mathematical objects, with
> different behaviors, steps, subroutines and runtime efficiency.
>
>
>
>>
>>> How would I know that I am not a visual zombie now, or a visual zombie
 every Tuesday, Thursday and Saturday?

>>>
>>> Here, we have to be careful what we mean by "I". Our own brains have
>>> various spheres of consciousness as demonstrated by the Wada Test: we can
>>> shut down one hemisphere of the brain and lose partial awareness and
>>> functionality such as the ability to form words and yet one remains
>>> conscious. I think being a partial zombie would be like that, having one's
>>> sphere of awareness shrink.
>>>
>>
>> But the subject's sphere of awareness would not shrink in the thought
>> experiment,
>>
>
> Have you ever wondered what delineates the mind from its environment? Why
> it is that you are not aware of my thoughts but you see me as an object
> that only affects your senses, even though we could represent the whole
> earth as one big functional system?
>
> I don't have a good answer to this question but it seems it might be a
> factor here. The randomly generated outputs from the RNG would seem an
> environmental noise/sensation coming from the outside, rather than a
> recursively linked and connected loop of processing as would exist in a
> genuinely functioning brain of two hemispheres.
>
>
> since by assumption their behaviour stays the same, while if their sphere
>> of awareness shrank they would notice that something was different and say so.
>>
>
> But here (almost by magic), the RNG outputs have forced the physical
> behavior of the remaining hemisphere to remain the same while fundamentally
> altering the definition of the computation that underlies the mind.
>
> If this does not alter the consciousness, if neurons don't need to
> interact in a computationally meaningful way with other 

Re: what chatGPT is and is not

2023-05-25 Thread Jason Resch
On Thu, May 25, 2023, 12:30 AM Stathis Papaioannou 
wrote:

>
>
> On Thu, 25 May 2023 at 13:59, Jason Resch  wrote:
>
>>
>>
>> On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Thu, 25 May 2023 at 11:48, Jason Resch  wrote:
>>>
>>> >An RNG would be a bad design choice because it would be extremely
 unreliable. However, as a thought experiment, it could work. If the visual
 cortex were removed and replaced with an RNG which for five minutes
 replicated the interactions with the remaining brain, the subject would
 behave as if they had normal vision and report that they had normal vision,
 then after five minutes behave as if they were blind and report that they
 were blind. It is perhaps contrary to intuition that the subject would
 really have visual experiences in that five minute period, but I don't
 think there is any other plausible explanation.

>>>
 I think they would be a visual zombie in that five minute period,
 though as described they would not be able to report any difference.

 I think if one's entire brain were replaced by an RNG, they would be a
 total zombie who would fool us into thinking they were conscious and we
 would not notice a difference. So by extension a brain partially replaced
 by an RNG would be a partial zombie that fooled the other parts of the
 brain into thinking nothing was amiss.

>>>
>>> I think the concept of a partial zombie makes consciousness nonsensical.
>>>
>>
>> It borders on the nonsensical, but between the two bad alternatives I
>> find the idea of a RNG instantiating human consciousness somewhat less
>> sensical than the idea of partial zombies.
>>
>
> If consciousness persists no matter what the brain is replaced with as
> long as the output remains the same this is consistent with the idea that
> consciousness does not reside in a particular substance (even a magical
> substance) or in a particular process.
>

Yes, but this is a somewhat crude 1960s version of functionalism, which, as
I described and as you recognized, is vulnerable to all kinds of attacks.
Modern functionalism is about more than high-level inputs and outputs, and
includes causal organization and implementation details at some level (the
functional substitution level).

Don't read too deeply into the mathematical definition of function as
simply inputs and outputs; think of it more in terms of what a mind does,
rather than what a mind is. This is the thinking that led to functionalism
and an acceptance of multiple realizability.



This is a strange idea, but it is akin to the existence of platonic
> objects. The number three can be implemented by arranging three objects in
> a row but it does not depend on those three objects unless it is being used
> for a particular purpose, such as three beads on an abacus.
>

Bubble sort and merge sort both compute the same thing and both have the
same inputs and outputs, but they are different mathematical objects, with
different behaviors, steps, subroutines and runtime efficiency.
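
A minimal sketch of that point (the comparison counters are instrumentation
added for illustration; nothing here is from the thread):

def bubble_sort(xs):
    xs, comparisons = list(xs), 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, comparisons

def merge_sort(xs):
    comparisons = 0
    def merge(a, b):
        nonlocal comparisons
        out = []
        while a and b:
            comparisons += 1
            out.append(a.pop(0) if a[0] <= b[0] else b.pop(0))
        return out + a + b
    def sort(seq):
        if len(seq) <= 1:
            return list(seq)
        mid = len(seq) // 2
        return merge(sort(seq[:mid]), sort(seq[mid:]))
    return sort(list(xs)), comparisons

data = [7, 3, 9, 1, 8, 2, 6, 5, 4, 0] * 10
b_out, b_cmp = bubble_sort(data)
m_out, m_cmp = merge_sort(data)
assert b_out == m_out   # extensionally the same function...
print(b_cmp, m_cmp)     # ...intensionally very different processes

On this 100-element list the outputs are identical, but bubble sort performs
roughly ten times as many comparisons: same input-output mapping, different
mathematical object.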



>
>> How would I know that I am not a visual zombie now, or a visual zombie
>>> every Tuesday, Thursday and Saturday?
>>>
>>
>> Here, we have to be careful what we mean by "I". Our own brains have
>> various spheres of consciousness as demonstrated by the Wada Test: we can
>> shut down one hemisphere of the brain and lose partial awareness and
>> functionality such as the ability to form words and yet one remains
>> conscious. I think being a partial zombie would be like that, having one's
>> sphere of awareness shrink.
>>
>
> But the subject's sphere of awareness would not shrink in the thought
> experiment,
>

Have you ever wondered what delineates the mind from its environment? Why
it is that you are not aware of my thoughts but you see me as an object
that only affects your senses, even though we could represent the whole
earth as one big functional system?

I don't have a good answer to this question but it seems it might be a
factor here. The randomly generated outputs from the RNG would seem an
environmental noise/sensation coming from the outside, rather than a
recursively linked and connected loop of processing as would exist in a
genuinely functioning brain of two hemispheres.


since by assumption their behaviour stays the same, while if their sphere
> of awareness shrank they would notice that something was different and say so.
>

But here (almost by magic), the RNG outputs have forced the physical
behavior of the remaining hemisphere to remain the same while fundamentally
altering the definition of the computation that underlies the mind.

If this does not alter the consciousness, if neurons don't need to interact
in a computationally meaningful way with other neurons, then in principle
all we need is one neuron to fire once, and this can stand for all possible
consciousness invoked by all possible minds.

Arnold Zuboff has written a thought experiment to this effect.

Re: what chatGPT is and is not

2023-05-25 Thread John Clark
On Wed, May 24, 2023 at 7:56 AM Jason Resch  wrote:

> Can I ask you what you would believe would happen to the consciousness of
> the individual if you replaced the right hemisphere of the brain with a
> black box that interfaced identically with the left hemisphere, but
> internal to this black box is nothing but a random number generator, and it
> is only by fantastic luck that the output of the RNG happens to have caused
> its interfacing with the left hemisphere to remain unchanged?


If that were to happen absolutely positively nothing would happen to the
consciousness of the individual, except that such a thing would be
astronomically unlikely (that's far too wimpy a word but it's the best I
could come up with) to occur, and even if it did it would be astronomically
squared unlikely that such "fantastic luck" would continue and the
individual would remain conscious for another nanosecond. But I want to
compete with you in figuring out a thought experiment that is even more
ridiculous than yours, in fact I want to find one that is almost as
ridiculous as the Chinese Room. Here is my modest proposal:

Have ALL the neurons and not just half behave randomly,  and let them
produce exactly the same output that Albert Einstein's brain did, and let
them continue doing that for all of the 76 years of Albert Einstein's life.

In anticipation of your inevitable questions: Yes, that would be a
reincarnation of Albert Einstein. And yes he would be conscious, assuming
that the original Albert Einstein was conscious and that I am not the only
conscious being in the universe. And yes, randomness producing consciousness
would be bizarre, but the bizarre is always to be expected if the starting
conditions are bizarre, and in this case your starting conditions are
bizarre to the bazaar power. And no, something like that is not going to
happen, not for 76 years, not even for a picosecond.
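
A back-of-envelope gloss on "astronomically squared" (the figures are rough
assumptions for illustration, not from the post: ~8.6e10 neurons, one binary
fire/don't-fire outcome per neuron per millisecond that the RNG must match):

import math

neurons = 8.6e10
decisions_per_second = neurons * 1_000
log10_odds_per_second = decisions_per_second * math.log10(2)

print(f"one second: 1 in 10^{log10_odds_per_second:.3g}")
seconds = 76 * 365.25 * 86_400
print(f"76 years  : 1 in 10^{log10_odds_per_second * seconds:.3g}")
# one second: 1 in 10^2.59e+13 -- the exponent alone dwarfs the ~10^80 atoms
# in the observable universe, and 76 years multiplies it by another ~2.4e9.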

John K Clark    See what's on my new list at Extropolis
