RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-24 Thread Ed Porter
Eric,

Without knowing the scientifically measurable effects on the brain of the
substance your post mentioned, I am hypothesizing that the subjective
experience you described could be caused, for example, by a greatly increased
activation of neurons, or by a great decrease in the operation of the brain's
control and tuning mechanisms, such as those in the basal
ganglia/thalamic/cortical feedback loop.  The result could be that the large
part of the brain that receives and perceives sensation and emotion is no
longer well modulated and gain-controlled, and no longer has the normal
higher-level attention-focusing processes select which, relatively small,
parts of it get high degrees of activation.  Those attention-focusing
processes are driven by the parts of your brain that normally control your
mind --- often the parts most closely associated with self-control, and thus
with the self --- a scheme selected by evolution so that you as an organism
can respond to the aspects of the environment most relevant to serving your
own purposes, as has generally been necessary for the survival of our
ancestors, from a Darwinian standpoint.
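
To make the gain-control part of this hypothesis concrete, here is a minimal
sketch (purely illustrative; the function, the numbers, and the use of a
softmax are a toy stand-in, not a claim about actual neural circuitry).  A
single "gain" parameter plays the role of the control and tuning mechanism:
with high gain a few inputs dominate the activation, and with gain near zero
activation spreads almost uniformly over everything.

import numpy as np

def select_activation(inputs, gain):
    # Toy attention selector: a softmax with a "gain" (inverse temperature).
    # High gain -> a few strong inputs capture most of the activation
    # (well-modulated, focused attention).  Gain near zero -> activation
    # spreads almost evenly over all inputs (the controls are lost).
    x = np.asarray(inputs, dtype=float)
    scores = np.exp(gain * (x - x.max()))      # subtract max for stability
    return scores / scores.sum()

raw = [0.2, 0.3, 0.9, 0.4, 0.35]               # hypothetical sensory drives
print(select_activation(raw, gain=10.0))       # sharply peaked on one input
print(select_activation(raw, gain=0.1))        # nearly uniform: everything active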

To use a sociological analogy, it may be a temporary revolution, in which the
elites --- the portions of the prefrontal lobe that normally control the
brain's focus of attention through their domination of the basal ganglia and
the thalamus --- lose their ability to keep the mob, the majority of the
brain's neurons, in its place.  The result is that the senses and emotions run
wild, and the part of the brain dedicated to representing the self --- instead
of being able to control things --- is overwhelmed and greatly outnumbered by
the large portion of the brain dedicated to emotion, sensation, and patterns
within them, so that consciousness is much more directly felt, without any
significant interference from the self.

Being overwhelmed by this sensation, and by its awareness of the being and
computation (i.e., a sense of life) of the reality around us --- uninterrupted
by the control and voices of the self --- generates a strong sensation that
such sensed being is all, and thus that we are one with it.

If anyone could give me a concise explanation, or a link to one, of the
scientifically studied effects on the brain of the chemicals that produce such
experiences, I would be interested in reading it, to see to what extent it
agrees with the above hypothesis.

Ed Porter


-Original Message-
From: Eric Burton [mailto:[EMAIL PROTECTED] 
Sent: Sunday, November 23, 2008 10:50 PM
To: agi@v2.listbox.com
Subject: Re: [agi] A paper that actually does solve the problem of
consciousness

Ego death! This is not as pernicious as it sounds. The death/rebirth
trial is a standby of the psilocybin excursion. One realizes one's
self has vanished and is reincarnated into all the strangeness of life
on earth as if being born. Very much an experience of the physical
vessel being re-filled with new spirit stuff, some new soul overly
given to wonder at it all. A sensation at the heart of most tryptamine
raptures, I think... certainly more overlaid with alien imagery when
induced by, say, psilocin than by, say, 5-methoxy-DMT. But with almost
all the tryptamine/indole hallucinogens this experience of user
reboot is often there.

As if the user, not the machine, is rebooting.

Worthy, but outside list scope ._.


On 11/23/08, Ed Porter [EMAIL PROTECTED] wrote:
 Ben,



 I googled ego loss and found a lot of first-person accounts of various
 experiences.  From an AGI/brain-science standpoint they were quite
 interesting, but I can see why you might not want such accounts to be on this
 list, other than perhaps if they were copied from other sites, and
 accompanied by third-party deconstruction from a brain science or AGI
 standpoint.



 In fact, some of the accounts were disturbing, and were actually written as
 cautionary tales.  Some of these accounts described ego death.  Ego
 death appears to me to be quite distinct from what I had thought of as ego
 loss, because it appears to be associated with a sense of fearing death
 (which presumably one would not do if one had lost one's ego), which in some
 instances occurred after, or intermittently with, periods of having sensed a
 loss of ego, and was associated with a fear that one was permanently losing
 that sense of self that would be necessary for normal human existence.
 Several people reported having disturbing repercussions from such trips for
 months or longer.



 But some of the people who reported ego loss said they felt it was a
 valuable experience.



 I forget exactly what various entheogens are supposed to do to the brain,
 from a measurable brain-science standpoint, but several of the subjective
 accounts by people claiming to have taken very strong dosages of entheogens
 described experiences that would be compatible with the loss of the normal
 brain control mechanisms' ability to maintain their normal control, or perhaps 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-24 Thread Eric Burton
I remember reading that LSD caused a desegregation of brain faculties,
so that patterns of activity produced by normal operation in one
region can spill over into adjacent ones, where they're interpreted
bizarrely. However, the brain does not go to soup or static, but
rather explodes with novel noise or intense satori. So indeed,
something else is happening.

I think your idea that ego loss is induced by a swelling of abstract
senses, squeezing out the structures that deal with your self in an
identificatory way, rings true. It's a phenomenon one usually realizes
has occurred, rather than going through acutely -- that is, it's in
the midst of some other trial that one realizes the conventional self
has evaporated, or become thin and transparent like tissue.

The signal-to-noise ratio on content-heavy tryptamines is very high.
5-MeO-DMT, which I mentioned, is actually light on content but does
reliably induce a sense of transcendence and universal oneness. I
don't know if 5-MeO-DMT satori is an ideal example of the bare ego-death
experience. It is certainly also found in stranger substances.

Eric B


Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-24 Thread Mike Tintner


Eric: I think your idea that ego loss is induced by a swelling of abstract
senses, squeezing out the structures that deal with your self in an
identificatory way, rings true.




I haven't followed this thread closely, but there is an aspect to it, I 
would argue, which is AGI-relevant. It's not so much ego-loss as 
ego-abandonment - letting your self go - which is central to mental 
illness. We are all capable of doing that under pressure - being highly 
conscious is painful, especially under difficult circumstances. We also all 
continually diminish (and heighten) our consciousness - diminish rather than 
abandon our self - by some form of substance abuse, from hard drugs to 
mild stimulants like coffee and comfort food.


How is that AGI-relevant? Because a true AGI that is continually dealing 
with creative problems is, and has to be, continually afraid (along with 
other unpleasant emotions) - i.e. alert to the risks of things going wrong, 
which they always can: those problems may not be solved. And there is, and 
has to be, an issue of how much attention the self should pay to those fears 
(all part of the area of emotional (general) intelligence).


In extreme situations, of course, there will be an issue of 
self-extinction - suicide. When *should* an AGI commit suicide?







Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-23 Thread Eric Burton
Hey, ego loss is attendant on even modest doses of LSD or
psilocybin. At ~700 mics I found that effect to be very much in the
background.



RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-23 Thread Ed Porter
Eric,

 

If, as your post below implies, you have experienced ego loss --- please
tell me --- how, if at all, was it different from the sense of oneness with
the surrounding world that I described in my post of Fri 11/21/2008 8:02 PM,
which started this named thread?

That is, how was it different from merely having, for an extended period of
time, a oneness with the sensual experience of the computational richness of
the external reality around one (or perhaps of just one's breathing and the
feelings it engenders) --- a oneness uninterrupted by awareness of oneself as
something separate from such sensations, or by the chattering of the chatbot
most of us have inside our heads --- other than for the standard effects on
sensations and emotions one would routinely associate with being
entheogenned?

 

Ed Porter 

 



Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-23 Thread Ben Goertzel
 I don't feel motivated to kill this thread in my role as list
moderator, and I agree that what's on or off topic is fairly fuzzy ...
but I just have a sense that discussions of various varieties of
drug-induced (or otherwise induced) states of exalted consciousness are
a bit off-topic for an AGI list ... anyway I don't feel it quite right
to share my own experiences in this regard in this forum ;-)

Ben G

On Sun, Nov 23, 2008 at 5:21 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Ben,

 It's your list, so you get to decide what is off topic.

 Are you implying that all discussion of subjectively describable aspects of
 human conscious experience is off topic?

 At least in my own experience, thinking about introspective subjective
 experiences has played a major role in my thinking about AGI.  Thus, I tend
 to have a bias toward thinking discussions of such thinking are relevant to
 AGI.

 If p-consciousness, such as discussed in Richard's paper, is relevant to
 AGI, then why isn't a-consciousness?

 Or, perhaps, your implication about what is off topic was more narrow?

 That is what I assumed, and that is why, in the post you are responding to
 below, I was asking whether there were any describable non-entheogenic
 aspects of the ego-loss experience, other than what I had already described.

 Ed Porter


 -Original Message-
 From: Ben Goertzel [mailto:[EMAIL PROTECTED]
 Sent: Sunday, November 23, 2008 4:04 PM
 To: agi@v2.listbox.com
 Subject: Re: [agi] A paper that actually does solve the problem of
 consciousness

 Goodness.. I feel like

 a) it is mighty hard to draw distinctions about these kinds of
 experiences in ordinary, informal language...

 b) this is kinda off topic for the list ;-)

 ben



RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-22 Thread Ed Porter
Wannabe,

If you read my post of Fri 11/21/2008 8:02 PM in this thread, you will see
that I said the sense of oneness with the external world that many of us feel
may just be sensory experience and perception of the external world,
uninterrupted by thoughts of oneself or our brain's chatbot.

This would tend to agree with what you say in your post below.

Ed Porter

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Saturday, November 22, 2008 2:57 PM
To: agi@v2.listbox.com
Subject: RE: [agi] A paper that actually does solve the problem of
consciousness

You guys and your experiments.  Well the whole experience of  
oneness could also just be the disruption of the orientation  
association cortex.  Jill Bolte Taylor, a neuroscientist, describes  
this in her book, _My Stroke of Insight_.  She had a stroke that  
affected much of her left hemisphere, including this area that creates  
awareness of personal boundaries.  So she had the whole feeling of  
oneness with the universe.  And also now that she has recovered she is  
able to shift her consciousness more to her right brain and get back  
to it.  She has a TED talk about it:   
http://www.ted.com/index.php/talks/jill_bolte_taylor_s_powerful_stroke_of_insight.html
andi






RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-21 Thread Ed Porter
Ben,

Entheogens!  

What a great word/euphemism.  

Is it pronounced like Inns (where travelers sleep) + Theo (short for
Theodore) + gins (a subset of liquors I normally avoid like the plague,
except in the occasional summer gin and tonic with lime)?

What is the respective emphasis given to each of these three parts in the
proper pronunciation?

It is a word that would be deeply appreciated by many at my local Unitarian
Church.

Ed Porter



Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Richard Loosemore

Ed Porter wrote:

Richard,

 

In response to your below copied email, I have the following response to 
the below quoted portions:


 


### My prior post ###

 That aspects of consciousness seem real does not provide much of an
 “explanation for consciousness.”  It says something, but not much.  It
 adds little to Descartes’ “I think therefore I am.”  I don’t think it
 provides much of an answer to any of the multiple questions Wikipedia
 associates with Chalmers’ hard problem of consciousness.


 


### Richard said ###

I would respond as follows.  When I make statements about consciousness
deserving to be called real, I am only saying this as a summary of a
long argument that has gone before.  So it would not really be fair to
declare that this statement of mine says something, but not much
without taking account of the reasons that have been building up toward
that statement earlier in the paper.

 


## My response ##

Perhaps --- but this prior work, which you claim explains so much, is not 
in the paper being discussed.  Without it, it is not clear how much your 
paper itself contributes.  And Ben, who is much more knowledgeable than 
I am on these things, seemed similarly unimpressed.


I would say that it does.  I believe that the situation is that you do 
not yet understand it.  Ben has had similar trouble, but seems to be 
comprehending more of the issue as I respond to his questions.


(I owe him one response right now:  I am working on it)



 


### Richard said ###

I am arguing that when we probe
the meaning of real we find that the best criterion of realness is the
way that the system builds a population of concept-atoms that are (a)
mutually consistent with one another,
 


## My response ##

I don’t know what mutually consistent means in this context, and from my 
memory of reading your paper multiple times I don’t think it explains it, 
other than perhaps implying that the framework of atoms represents 
experiential generalizations and associations, which would presumably 
tend to represent the regularities of experienced reality.


I'll grant you that one:  I did not explain in detail this idea of 
mutual consistency.


However, the reason I did not is that I really had to assume some 
background, and I was hoping that the reader would already be aware of 
the general idea that cognitive systems build their knowledge in the 
form of concepts that are (largely) consistent with one another, and 
that it is this global consistency that lends strength to the whole.  In 
other words, all the bits of our knowledge work together.


A piece of knowledge like “The Loch Ness Monster lives in Loch Ness” is 
NOT a piece of knowledge that fits well with all of the rest of our 
knowledge, because we have little or no evidence that such a thing as 
the Loch Ness Monster has been photographed, observed by independent 
people, observed by several people at the same time, caught in a trap 
and taken to a museum, been found as skeletal remains, bumped into a 
boat, etc. etc.  There are no links from the rest of our knowledge to 
the LNM “fact,” so we actually do not credit the LNM as being real.


By contrast, facts about Coelacanths are very well connected to the rest 
of our knowledge, and we believe that they do exist.
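
As a toy illustration of this connectedness criterion (purely a sketch for
this discussion, not anything from the paper; the knowledge base and link
counts below are invented), one can score a claim by how many independent
pieces of evidence tie it into the rest of what we know:

evidence_links = {
    "coelacanths exist": [
        "specimens caught and preserved in museums",
        "photographed alive off the Comoros",
        "independent sightings by many observers",
        "fossil record of related species",
    ],
    "the Loch Ness Monster lives in Loch Ness": [],   # no corroborating links
}

def realness_score(claim):
    # Crude proxy: number of independent links into the rest of our knowledge.
    return len(evidence_links.get(claim, []))

for claim in evidence_links:
    print(claim, "-> score", realness_score(claim))

On this caricature the coelacanth claim scores high and the LNM claim scores
zero, which is the sense in which one is credited as real and the other is not.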





 


### Richard said ###

and (b) strongly supported by
sensory evidence (there are other criteria, but those are the main
ones).  If you think hard enough about these criteria, you notice that
the qualia-atoms (those concept-atoms that cause the analysis mechanism
to bottom out) score very high indeed.  This is in dramatic contrast to
other concept-atoms like hallucinations, which we consider 'artifacts'
precisely because they score so low.  The difference between these two
is so dramatic that I think we need to allow the qualia-atoms to be
called real by all our usual criteria, BUT with the added feature that
they cannot be understood in any more basic terms.

 


## My response ##

You seem to be defining “real” here to mean believed to exist in what is 
perceived as objective reality.  I personally believe a sense of 
subjective reality is much more central to the concept of consciousness. 

 

Personal computers of today, which most people don’t think have anything 
approaching a human-like consciousness, could in many tasks make 
estimations of whether some signal was “real” in the sense of 
representing something in objective reality without being conscious.  
But a powerful hallucination, combined with a human level of sense of 
being conscious of it, does not appear to be something any current 
computer can achieve. 

 

So if you are looking for the hard problems of consciousness, focus more 
on the human subjective sense of awareness, not on whether there is 
evidence that something is real in what we perceive as objective reality.



Alas, you have 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Ben Goertzel
Hmmm...

I don't agree w/ you that the hard problem of consciousness is
unimportant or non-critical in a philosophical sense.  Far from it.

However, from the point of view of this list, I really don't think it
needs to be solved (whatever that might mean) in order to build AGI.

Of course, I think that because I think the hard problem of
consciousness is actually easy: I'm a panpsychist ... I think
everything is conscious, and different kinds of structures just focus
and amplify this universal consciousness in different ways...

Interestingly, this panpsychist perspective is seen as obviously right
by most folks deeply involved with meditation or yoga whom I've talked
to, and seen as obviously wrong by most scientists I talk to...

-- Ben G

On Thu, Nov 20, 2008 at 5:26 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Richard,



 Thank you for your reply.



 I started to write a point-by-point response to your reply, copied below,
 but after 45 minutes I said stop.  As interesting as it is, from a
 philosophical and argumentative-writing standpoint, to play whack-a-mole with
 your constantly shifting and often contradictory arguments --- right now, I
 have much more pressing things to do.



 And I think I have already stated many of my positions on the subject of
 this thread sufficiently clearly that intelligent people who have a little
 imagination and really want to can understand them.  Since few others besides
 you have responded to my posts, I don't think there is any community demand
 that I spend further time on such replies.



 What little I can add to what I have already said is that I basically
 think the hard problem/easy problem dichotomy is largely, although not
 totally, pointless.



 I do not think the hard problem is central to understanding consciousness,
 because so much of consciousness is excluded from being part of the hard
 problem.  It is excluded either because it can be described verbally by
 introspection by the mind itself, or because it affects external behavior,
 and, thus, at least according to Wikipedia's definition of p-consciousness,
 is part of the easy problem.



 It should be noted that not affecting external behavior excludes one hell of
 a lot of consciousness, because emotions, which clearly affect external
 behavior, are so closely associated with much of our sensing of experience.



 Thus, it seems a large part of what we humans consider to be our subjective
 sense of experience of consciousness is rejected by hard problem purists
 as being part of the easy problem.



 Richard, you in particular seem to be much more of a hard problem purist
 than those who wrote the Wikipedia definition of p-consciousness.   This is
 because in your responses to me you have even excluded as not part of the
 hard problem any lateral or higher-level associations that one of your
 bottom-level red detector nodes might have.  This, for example, would arguably
 exclude from the p-consciousness of the color red the associations between
 the lowest-level, local red-sensing nodes that are necessary so the
 activation of such nodes can be recognized as a common color red no matter
 where they occur in different parts of the visual field.



 Thus according to such a definition, qualia for red would have to be
 different for each location of V1 in which red is sensed --- even when
 different portions of V1 get mapped into the same portions of the
 semi-stationary representation your brain builds out of stationary surroundings
 as your eyes saccade and pan across them.  Thus, your concept of the qualia
 for the color red does not cover a unified color red, and necessarily
 includes thousands of separate red qualia, each associated with a different
 portion of V1.



 Aspects of consciousness that (a) cannot be verbally described by
 introspection; (b) have no effect on behavior, and (c) cannot involve any
 associations with the activation of other nodes (which is an exclusion you,
 Richard, seem to have added to Wikipedia's description of p-consciousness)
 --- define the hard problem so narrowly as to make it of relatively little,
 or no, importance.  It certainly is not the central question of
 consciousness, because a sense of experiencing something has no meaning
 unless it has grounding, and that requires associations in large numbers,
 and, thus, according to your definition could not be part of the hard
 problem.



 Plus, Richard, you have not even come close to addressing my statement that
 just because certain aspects of consciousness cannot be verbally described
 by the introspection of the brain, or revealed by their effects on the
 external behavior of the body itself, does not mean they cannot be subject
 to further analysis through scientific research --- such as by brain science,
 brain scanning, brain simulations, and advances in the understanding of AGIs.



 I have already spent way, way too much time on this response, so I will
 leave it at that.  If you want to think you have won the argument 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Mike Tintner

Ben: I'm a panpsychist ...

You think that all things are sentient/conscious?

(I argue that consciousness depends on having a nervous system and being 
able to feel - and if we could understand the mechanics of that, we would 
probably have solved the hard problem and be able to give something similar 
to a machine (which might have to be organic) ).


So I'm interested in any alternative/panpsychist views. If you do think that 
inorganic things like stones, say, are conscious, then surely it would 
follow that we should ultimately be able to explain their consciousness, 
and make even inanimate metallic computers conscious?


Care to expand a little on your views? 







Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Ben Goertzel
well, what does feel mean to you ... what is feeling that a slug can
do but a rock or an atom cannot ... are you sure this is an absolute
distinction rather than a matter of degree?





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders,
give orders, cooperate, act alone, solve equations, analyze a new
problem, pitch manure, program a computer, cook a tasty meal, fight
efficiently, die gallantly. Specialization is for insects.  -- Robert
Heinlein




RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Ed Porter
Ben, 

If you place the limitations on what is part of the hard problem that
Richard has, most of what you consider part of the hard problem would
probably cease to be part of the hard problem.  In one argument he
eliminated things relating to lateral or upward associative connections from
being consider part of the hard problem of consciousness.  That would
eliminate the majority of sources of grounding from any notion of
consciousness.

I, like you, tend to think that all of reality is conscious, but I think there
are vastly different degrees and types of consciousness, and I think there
are many meaningful types of consciousness that humans have that most of
reality does not have.

When I was in college and LSD was the rage, one of the main goals of the
heavy-duty heads was ego loss, which was to achieve a sense of cosmic
oneness with all of the universe.  It was commonly stated that 1000
micrograms was the ticket to ego loss.  I never went there.  Nor have I
ever achieved cosmic oneness through meditation, although I have achieved a
temporary (say fifteen or thirty seconds) feeling of deep peaceful bliss.

Perhaps you have been braver (acid-wise), or luckier or more disciplined
(meditation-wise), and have achieved a sense of oneness with the cosmic
consciousness.  If so, I tip my hat (and Colbert wag of the finger) to you.

Ed Porter



Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Mike Tintner

Ben,

I suspect you're being evasive. You and I know what feel means. When I feel 
the wind, I feel cold. When I feel tea poured on my hand, I/it feel/s 
scalding hot. And we can trace the line of feeling to a considerable 
extent - no? - through the nervous system and brain. Not only do I feel it 
internally, but there are normally external signs of my feeling. You see me 
shivering/wincing, etc. And we - science - can interfere with those feelings 
and anaesthetise or heighten them.


Now when the rock is exposed to the same wind or hot tea, if it does feel 
anything, it stoically and heroically refuses to display any signs 
whatsoever. It appears to be magnificently indifferent. And if it really is 
suffering, we wouldn't know what to do to alleviate its suffering.


So what do you (or others) mean by inanimate things feeling?

I'm mainly seeking enlightenment not an argument here  -  and to see whether 
your or others' panpsychism has been at all thought through, and is more 
than an abstract conjunction of concepts. I assume there is some substance 
to the philosophy - I'd like to know what it is.



Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 2:23 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 well, what does feel mean to you ... what is feeling that a slug can
 do but a rock or an atom cannot ... are you sure this is an absolute
 distinction rather than a matter of degree?


Does a rock compute Fibonacci numbers just to a lesser degree than
this program? A concept, like any other. Also, some shades of gray are
so thin you'd run out of matter in the Universe to track all the
things that light.
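
(For concreteness, a minimal example of the kind of program being contrasted
with the rock here; this particular snippet is only an illustration and is not
the program the message originally referred to.)

def fib(n):
    # Return the n-th Fibonacci number (F(0)=0, F(1)=1) by simple iteration.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(i) for i in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]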

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Ben Goertzel
 When I was in college and LSD was the rage, one of the main goals of the
 heavy-duty heads was ego loss, which was to achieve a sense of cosmic
 oneness with all of the universe.  It was commonly stated that 1000
 micrograms was the ticket to ego loss.  I never went there.  Nor have I
 ever achieved cosmic oneness through meditation, although I have achieved a
 temporary (say fifteen or thirty seconds) feeling of deep peaceful bliss.

 Perhaps you have been braver (acid-wise), or luckier or more disciplined
 (meditation-wise), and have achieved a sense of oneness with the cosmic
 consciousness.  If so, I tip my hat (and Colbert wag of the finger) to you.

Not a great topic for public mailing list discussion but ... uh ... yah ...

But it's not really so much about the dosage ... entheogens are tools
and it's all about what you do with them ;-)

ben




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Eric Baum

 I completed the first draft of a technical paper on consciousness
 the other day.  It is intended for the AGI-09 conference, and it
 can be found at:


Ben Hi Richard,

Ben I don't have any comments yet about what you have written,
Ben because I'm not sure I fully understand what you're trying to
Ben say... I hope your answers to these questions will help clarify
Ben things.

Ben It seems to me that your core argument goes something like this:

Ben That there are many concepts for which an introspective analysis
Ben can only return the concept itself.  That this recursion blocks
Ben any possible explanation.  That consciousness is one of these
Ben concepts because self is inherently recursive.  Therefore,
Ben consciousness is explicitly blocked from having any kind of
Ben explanation.

Haven't read the paper yet, but the situation with introspection 
is the following:

Introspection accesses a meaning level, at which you can summon and
use concepts (subroutines) by name, but you are protected essentially 
by information hiding from looking at the code that implements them.

Consider for example summoning Microsoft Word to perform some task.
You know what you are doing, why you are doing it, how you intend to
use it, but you have no idea of the code within Microsoft Word. The
same is true for internal concepts within your mind.

Your mind is no more built to be able to look inside subroutines than
my laptop is built to output its internal transistor values. Partial
results within subroutines are not meaningful; your conscious
processing is in terms of meaningful quantities.

What is Thought? (MIT Press, 2004) discusses this, in Chap. 14, which
answers most questions about consciousness IMO.
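
(A minimal sketch of the information-hiding analogy, purely for illustration;
the function names and the toy 'redness' computation are invented and are not
taken from What is Thought?.)

def _recognize_red(pixel):
    # "Inside the subroutine": intermediate values like 'redness' are never
    # visible to the caller; only the final, meaningful answer is returned.
    r, g, b = pixel
    redness = r - 0.5 * (g + b)
    return redness > 60

def perceive(pixel):
    # The "meaning level": summon the concept by name and get a usable result,
    # with no access to the computation that produced it.
    return "red" if _recognize_red(pixel) else "not red"

print(perceive((200, 30, 40)))   # -> red; the caller never sees 'redness'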




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,

I re-read your paper and I'm afraid I really don't grok why you think it 
solves Chalmers' hard problem of consciousness...


It really seems to me like what you're suggesting is a cognitive 
correlate of consciousness, to morph the common phrase neural 
correlate of consciousness ...


You seem to be stating that when X is an unanalyzable, pure atomic 
sensation from the perspective of cognitive system C, then C will 
perceive X as a raw quale ... unanalyzable and not explicable by 
ordinary methods of explication, yet, still subjectively real...


But, I don't see how the hypothesis

Conscious experience is **identified with** unanalyzable mind-atoms

could be distinguished empirically from

Conscious experience is **correlated with** unanalyzable mind-atoms

I think finding cognitive correlates of consciousness is interesting, 
but I don't think it constitutes solving the hard problem in Chalmers' 
sense...


I grok that you're saying consciousness feels inexplicable because it 
has to do with atoms that the system can't explain, due to their role as 
its primitive atoms ... and this is a good idea, but, I don't see how 
it bridges the gap btw subjective experience and empirical data ...


What it does is explain why, even if there *were* no hard problem, 
cognitive systems might feel like there is one, in regard to their 
unanalyzable atoms


Another worry I have is: I feel like I can be conscious of my son, even 
though he is not an unanalyzable atom.  I feel like I can be conscious 
of the unique impression he makes ... in the same way that I'm conscious 
of redness ... and, yeah, I feel like I can't fully explain the 
conscious impression he makes on me, even though I can explain a lot of 
things about him...


So I'm not convinced that atomic sensor input is the only source of raw, 
unanalyzable consciousness...


My first response to this is that you still don't seem to have taken 
account of what was said in the second part of the paper  -  and, at the 
same time, I can find many places where you make statements that are 
undermined by that second part.


To take the most significant example:  when you say:

 But, I don't see how the hypothesis

 Conscious experience is **identified with** unanalyzable mind-atoms

 could be distinguished empirically from

 Conscious experience is **correlated with** unanalyzable mind-atoms

... there are several concepts buried in there, like [identified with], 
[distinguished empirically from] and [correlated with] that are 
theory-laden.  In other words, when you use those terms you are 
implicitly applying some standards that have to do with semantics and 
ontology, and it is precisely those standards that I attacked in part 2 
of the paper.


However, there is also another thing I can say about this statement, 
based on the argument in part one of the paper.


It looks like you are also falling victim to the argument in part 1, at 
the same time that you are questioning its validity:  one of the 
consequences of that initial argument was that *because* those 
concept-atoms are unanalyzable, you can never do any such thing as talk 
about their being only correlated with a particular cognitive event 
versus actually being identified with that cognitive event!


So when you point out that the above distinction seems impossible to 
make, I say:  Yes, of course:  the theory itself just *said* that!.


So far, all of the serious questions that people have placed at the door 
of this theory have proved susceptible to that argument.


That was essentially what I did when talking to Chalmers.  He came up 
with an objection very like the one you gave above, so I said: Okay, 
the answer is that the theory itself predicts that you *must* find that 
question to be a stumbling block . AND, more importantly, you should 
be able to see that the strategy I am using here is a strategy that I 
can flexibly deploy to wipe out a whole class of objections, so the only 
way around that strategy (if you want to bring down this theory) is to 
come up with a counter-strategy that demonstrably has the structure to 
undermine my strategy and I don't believe you can do that.


His only response, IIRC, was Huh!  This looks like it might be new. 
Send me a copy.


To make further progress in this discussion it is important, I think, to 
understand both the fact that I have that strategy, and also to 
appreciate that the second part of the paper went far beyond that.



Lastly, about your question re. consciousness of extended objects that 
are not concept-atoms.


I think there is some confusion here about what I was trying to say (my 
fault perhaps).  It is not just the fact of those concept-atoms being at 
the end of the line, it is actually about what happens to the analysis 
mechanism.  So, what I did was point to the clearest cases where people 
feel that a subjective experience is in need of explanation - the qualia 
- and I showed that in 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Ben Goertzel

 Lastly, about your question re. consciousness of extended objects that are
 not concept-atoms.

 I think there is some confusion here about what I was trying to say (my
 fault perhaps).  It is not just the fact of those concept-atoms being at the
 end of the line, it is actually about what happens to the analysis
 mechanism.  So, what I did was point to the clearest cases where people feel
 that a subjective experience is in need of explanation - the qualia - and I
 showed that in that case the explanation is a failure of the analysis
 mechanism because it bottoms out.

 However, just because I picked that example for the sake of clarity, that
 does not mean that the *only* place where the analysis mechanism can get
 into trouble must be just when it bumps into those peripheral atoms.  I
 tried to explain this in a previous reply to someone (perhaps it was you):
  it would be entirely possible that higher level atoms could get built to
 represent [a sum of all the qualia-atoms that are part of one object], and
 if that happened we might find that this higher level atom was partly
 analyzable (it is composed of lower level qualia) and partly not (any
 analysis hits the brick wall after one successful unpacking step).



OK, I think I get that... I think that's the easy part ;-)

Indeed, the analysis  mechanism can get into trouble just due to its limited
capacity

Other aspects of the mind can pack together complex mental structures, which
the analysis mechanism perceives as tokens with some evocative power, but
which the analysis mechanism lacks the capacity to decompose into parts.
So, these can appear to it as indecomposable too, in a related but slightly
different sense from peripheral atoms...
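
To pin down the mechanism being discussed, here is a hypothetical Python
sketch (the class name Atom, the parts field, and the analyze function are my
own illustration, not notation from the paper): analysis unpacks a composite
atom one level and then bottoms out on the peripheral, unanalyzable atoms.

    # Hypothetical sketch only -- names and structure are my own assumptions.
    class Atom:
        def __init__(self, name, parts=None):
            self.name = name
            self.parts = parts or []   # empty for peripheral (unanalyzable) atoms

    def analyze(atom):
        """Unpack an atom into constituents; return None when analysis bottoms out."""
        if not atom.parts:
            return None                # the brick wall
        return [analyze(p) or p.name for p in atom.parts]

    red = Atom("red")                                            # peripheral qualia-atom
    apple = Atom("apple-impression", [red, Atom("round-shape")])

    print(analyze(red))    # None: analysis bottoms out immediately
    print(analyze(apple))  # ['red', 'round-shape']: one unpacking step, then brick walls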

ben





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Ben Goertzel
Richard,

My first response to this is that you still don't seem to have taken account
 of what was said in the second part of the paper  -  and, at the same time,
 I can find many places where you make statements that are undermined by that
 second part.

 To take the most significant example:  when you say:

  But, I don't see how the hypothesis
 
  Conscious experience is **identified with** unanalyzable mind-atoms
 
  could be distinguished empirically from
 
  Conscious experience is **correlated with** unanalyzable mind-atoms

 ... there are several concepts buried in there, like [identified with],
 [distinguished empirically from] and [correlated with] that are
 theory-laden.  In other words, when you use those terms you are implicitly
 applying some standards that have to do with semantics and ontology, and it
 is precisely those standards that I attacked in part 2 of the paper.

 However, there is also another thing I can say about this statement, based
 on the argument in part one of the paper.

 It looks like you are also falling victim to the argument in part 1, at the
 same time that you are questioning its validity:  one of the consequences of
 that initial argument was that *because* those concept-atoms are
 unanalyzable, you can never do any such thing as talk about their being
 only correlated with a particular cognitive event versus actually being
 identified with that cognitive event!

 So when you point out that the above distinction seems impossible to make,
 I say:  Yes, of course:  the theory itself just *said* that!.

 So far, all of the serious questions that people have placed at the door of
 this theory have proved susceptible to that argument.



Well, suppose I am studying your brain with a super-advanced
brain-monitoring device ...

Then, suppose that I, using the brain-monitoring device, identify the brain
response pattern that uniquely occurs when you look at something red ...

I can then pose the question: Is your experience of red *identical* to this
brain-response pattern ... or is it correlated with this brain-response
pattern?

I can pose this question even though the cognitive atoms corresponding to
this brain-response pattern are unanalyzable from your perspective...

Next, note that I can also turn the same brain-monitoring device on
myself...

So I don't see why the question is unaskable ... it seems askable, because
these concept-atoms in question are experience-able even if not
analyzable... that is, they still form mental content even though they
aren't susceptible to explanation as you describe it...

I agree that, subjectively or empirically, there is no way to distinguish

Conscious experience is **identified with** unanalyzable mind-atoms

from

Conscious experience is **correlated with** unanalyzable mind-atoms

and it seems to me that this indicates you have NOT solved the hard problem,
but only restated it in a different (possibly useful) way

-- Ben G





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,

My first response to this is that you still don't seem to have taken
account of what was said in the second part of the paper  -  and, at
the same time, I can find many places where you make statements that
are undermined by that second part.

To take the most significant example:  when you say:


  But, I don't see how the hypothesis
 
  Conscious experience is **identified with** unanalyzable mind-atoms
 
  could be distinguished empirically from
 
  Conscious experience is **correlated with** unanalyzable mind-atoms

... there are several concepts buried in there, like [identified
with], [distinguished empirically from] and [correlated with] that
are theory-laden.  In other words, when you use those terms you are
implicitly applying some standards that have to do with semantics and
ontology, and it is precisely those standards that I attacked in
part 2 of the paper.

However, there is also another thing I can say about this statement,
based on the argument in part one of the paper.

It looks like you are also falling victim to the argument in part 1,
at the same time that you are questioning its validity:  one of the
consequences of that initial argument was that *because* those
concept-atoms are unanalyzable, you can never do any such thing as
talk about their being only correlated with a particular cognitive
event versus actually being identified with that cognitive event!

So when you point out that the above distinction seems impossible to
make, I say:  Yes, of course:  the theory itself just *said* that!.

So far, all of the serious questions that people have placed at the
door of this theory have proved susceptible to that argument.



Well, suppose I am studying your brain with a super-advanced 
brain-monitoring device ...


Then, suppose that I, using the brain-monitoring device, identify the 
brain response pattern that uniquely occurs when you look at something 
red ...


I can then pose the question: Is your experience of red *identical* to 
this brain-response pattern ... or is it correlated with this 
brain-response pattern?


I can pose this question even though the cognitive atoms corresponding 
to this brain-response pattern are unanalyzable from your perspective...


Next, note that I can also turn the same brain-monitoring device on 
myself...


So I don't see why the question is unaskable ... it seems askable, 
because these concept-atoms in question are experience-able even if not 
analyzable... that is, they still form mental content even though they 
aren't susceptible to explanation as you describe it...


I agree that, subjectively or empirically, there is no way to distinguish

Conscious experience is **identified with** unanalyzable mind-atoms

from

Conscious experience is **correlated with** unanalyzable mind-atoms

and it seems to me that this indicates you have NOT solved the hard 
problem, but only restated it in a different (possibly useful) way


There are several different approaches and comments that I could take 
with what you just wrote, but let me focus on just one;  the last one.


When you make a statement such as ... it seems to me that .. you have 
NOT solved the hard problem, but only restated it, you are implicitly 
bringing to the table a set of ideas about what it means to solve this 
problem, or explain consciousness.


Fine so far:  everyone uses the rules of explanation that they have 
acquired over a lifetime - and of course in science we all roughly agree 
on a set of ideas about what it means to explain things.


But what I am trying to point out in this paper is that because of the 
nature of intelligent systems and how they must do their job, the very 
concept of *explanation* is undermined by the topic that in this case we 
are trying to explain.  You cannot just go right ahead and apply a 
standard of explanation right out of the box (so to speak) because 
unlike explaining atoms and explaining stars, in this case you are 
trying to explain something that interferes with the notion of 
explanation.


So when you imply that the theory I propose is weak *because* it 
provides no way to distinguish:


Conscious experience is **identified with** unanalyzable mind-atoms

from

Conscious experience is **correlated with** unanalyzable mind-atoms

You are missing the main claim that the theory tries to make:  that such 
distinctions are broken precisely *because* of what is going on with the 
explanandum.


You have got to get this point to be able to understand the paper.

I mean, it is okay to disagree with the point and say why (to talk about 
what it means to 'explain things'; to talk about the connection between 
the explanandum and the methods and basic terms of the thing that we 
call 'explaining things').  That would be fine.


But at the moment it seems to me that you have been through several 
passes 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Ben Goertzel
Richard,

So are you saying that: According to the ordinary scientific standards of
'explanation', the subjective experience of consciousness cannot be
explained ... and as a consequence, the relationship between subjective
consciousness and physical data (as required to be elucidated by any
solution to Chalmers' hard problem as normally conceived) also cannot be
explained.

If so, then: according to the ordinary scientific standards of explanation,
you are not explaining consciousness, nor explaining the relation btw
consciousness and the physical ... but are rather **explaining why, due to
the particular nature of consciousness and its relationship to the ordinary
scientific standards of explanation, this kind of explanation is not
possible**

??

ben g




On Wed, Nov 19, 2008 at 4:05 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Ben Goertzel wrote:

 Richard,

My first response to this is that you still don't seem to have taken
account of what was said in the second part of the paper  -  and, at
the same time, I can find many places where you make statements that
are undermined by that second part.

To take the most significant example:  when you say:


  But, I don't see how the hypothesis
 
  Conscious experience is **identified with** unanalyzable
 mind-atoms
 
  could be distinguished empirically from
 
  Conscious experience is **correlated with** unanalyzable
 mind-atoms

... there are several concepts buried in there, like [identified
with], [distinguished empirically from] and [correlated with] that
are theory-laden.  In other words, when you use those terms you are
implictly applying some standards that have to do with semantics and
ontology, and it is precisely those standards that I attacked in
part 2 of the paper.

However, there is also another thing I can say about this statement,
based on the argument in part one of the paper.

It looks like you are also falling victim to the argument in part 1,
at the same time that you are questioning its validity:  one of the
consequences of that initial argument was that *because* those
concept-atoms are unanalyzable, you can never do any such thing as
talk about their being only correlated with a particular cognitive
event versus actually being identified with that cognitive event!

So when you point out that the above distinction seems impossible to
make, I say:  Yes, of course:  the theory itself just *said* that!.

So far, all of the serious questions that people have placed at the
door of this theory have proved susceptible to that argument.



 Well, suppose I am studying your brain with a super-advanced
 brain-monitoring device ...

 Then, suppose that I, using the brain-monitoring device, identify the
 brain response pattern that uniquely occurs when you look at something red
 ...

 I can then pose the question: Is your experience of red *identical* to
 this brain-response pattern ... or is it correlated with this brain-response
 pattern?

 I can pose this question even though the cognitive atoms corresponding
 to this brain-response pattern are unanalyzable from your perspective...

 Next, note that I can also turn the same brain-monitoring device on
 myself...

 So I don't see why the question is unaskable ... it seems askable, because
 these concept-atoms in question are experience-able even if not
 analyzable... that is, they still form mental content even though they
 aren't susceptible to explanation as you describe it...

 I agree that, subjectively or empirically, there is no way to distinguish

 Conscious experience is **identified with** unanalyzable mind-atoms

 from

 Conscious experience is **correlated with** unanalyzable mind-atoms

 and it seems to me that this indicates you have NOT solved the hard
 problem, but only restated it in a different (possibly useful) way


 There are several different approaches and comments that I could take with
 what you just wrote, but let me focus on just one;  the last one.

 When you make a statement such as ... it seems to me that .. you have NOT
 solved the hard problem, but only restated it, you are implicitly bringing
 to the table a set of ideas about what it means to solve this problem, or
 explain consciousness.

 Fine so far:  everyone uses the rules of explanation that they have
 acquired over a lifetime - and of course in science we all roughly agree on
 a set of ideas about what it means to explain things.

 But what I am trying to point out in this paper is that because of the
 nature of intelligent systems and how they must do their job, the very
 concept of *explanation* is undermined by the topic that in this case we are
 trying to explain.  You cannot just go right ahead and apply a standard of
 explanation right out of the box (so to speak) because unlike explaining
 atoms and explaining stars, in this case you are trying to explain something
 that 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Ben Goertzel
Ed,

I'd be curious for your reaction to

http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-for.html

which explores the limits of scientific and linguistic explanation, in
a different but possibly related way to Richard's argument.

Science and language are powerful tools for explanation but there is
no reason to assume they are all-powerful.  We should push them as far
as we can, but no further...

I agree with Richard that according to standard scientific notions of
explanation, consciousness and its relation to the physical world are
inexplicable.  My intuition and reasoning are probably not exactly the
same as his, but there seems some similarity btw our views...

-- Ben G


On Wed, Nov 19, 2008 at 5:27 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Richard,



 (the second half of this post, the part starting with the all-capitalized
 heading, is the most important)



 I agree with your extreme cognitive semantics discussion.



 I agree with your statement that one criterion for realness is the
 directness and immediateness of something's phenomenology.



 I agree with your statement that, based on this criterion for realness,
 many conscious phenomena, such as qualia, which have traditionally fallen
 under the hard problem of consciousness seem to be real.



 But I have problems with some of the conclusions you draw from these things,
 particularly in your Implications section at the top of the second column
 on Page 5 of your paper.



 There you state



 …the correct explanation for consciousness is that all of its various
 phenomenological facets deserve to be called as real as any other concept
 we have, because there are no meaningful objective standards that we could
 apply to judge them otherwise.



 That aspects of consciousness seem real does not provide much of an
 explanation for consciousness.  It says something, but not much.  It adds
 little to Descartes' I think therefore I am.  I don't think it provides
 much of an answer to any of the multiple questions Wikipedia associates with
 Chalmers' hard problem of consciousness.



 You further state that some aspects of consciousness have a unique status of
 being beyond the reach of scientific inquiry and give a purported reason why
 they are beyond such a reach. Similarly you say:



 …although we can never say exactly what the phenomena of consciousness are,
 in the way that we give scientific explanations for other things, we can
 nevertheless say exactly why we cannot say anything: so in the end, we can
 explain it.



 First, I would point out as I have in my prior papers that, given the
 advances that are expected to be made in AGI, brain scanning and brain
 science in the next fifty years, it is not clear that consciousness is
 necessarily any less explainable than are many other aspects of physical
 reality.  You admit there are easy problems of consciousness that can be
 explained, just as there are easy parts of physical reality that can be
 explained. But it is not clear that the percent of consciousness that will
 remain a mystery in fifty years is any larger than the percent of basic
 physical reality that will remain a mystery in that time frame.



 But even if we accept as true your statement that certain phenomena of
 consciousness are beyond analysis, that does little to explain
 consciousness.  In fact, it does not appear to answer any of the hard
 problems of consciousness.  For example, just because (a) we are conscious
 of the distinction used in our own mind's internal representation between
 sensation of the colors red and blue, (b) we allegedly cannot analyze that
 difference further, and (c) that distinction seems subjectively real to us
 --- that does not shed much light on whether or not a p-zombie would be
 capable of acting just like a human without having consciousness of red and
 blue color qualia.



 It is not even clear to me that your paper shows consciousness is not an
 artifact,  as your abstract implies.  Just because something is real
 does not mean it is not an artifact, in many senses of the word, such as
 an unintended, secondary, or unessential, aspect of something.





 THE REAL WEAKNESS OF YOUR PAPER IS THAT IS PUTS WAY TOO MUCH EMPHASIS ON THE
 PART OF YOUR MOLECULAR FRAMEWORK THAT ALLEGEDLY BOTTOMS OUT, AND NOT ENOUGH
 ON THE PART OF THE FRAMEWORK YOU SAY REPORTS A SENSE OF REALNESS DESPITE
 SUCH BOTTOMING OUT  -- THE SENSE OF REALNESS THAT IS MOST ESSENTIAL TO
 CONSCIOUSNESS.



 It is my belief that if you want to understand consciousness in the context
 of the types of things discussed in your paper, you should focus on the part of
 the molecular framework (which you imply is largely in the foreground)
 that prevents the system from returning with no answer, even when trying to
 analyze a node such as a lowest level input node for the color red in a
 given portion of the visual field.



 This is the part of your molecular framework that



 …because of 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Richard Loosemore

Ben Goertzel wrote:


Richard,

So are you saying that: According to the ordinary scientific standards 
of 'explanation', the subjective experience of consciousness cannot be 
explained ... and as a consequence, the relationship between subjective 
consciousness and physical data (as required to be elucidated by any 
solution to Chalmers' hard problem as normally conceived) also cannot 
be explained.


If so, then: according to the ordinary scientific standards of 
explanation, you are not explaining consciousness, nor explaining the 
relation btw consciousness and the physical ... but are rather 
**explaining why, due to the particular nature of consciousness and its 
relationship to the ordinary scientific standards of explanation, this 
kind of explanation is not possible**


??


No!

If you write the above, then you are summarizing the question that I 
pose at the half-way point of the paper, just before the second part 
gets underway.


The ordinary scientific standards of explanation are undermined by 
questions about consciousness.  They break.  You cannot use them.  They 
become internally inconsistent.  You cannot say I hereby apply the 
standard mechanism of 'explanation' to Problem X, but then admit that 
Problem X IS the very mechanism that is responsible for determining the 
 'explanation' method you are using, AND the one thing you know about 
that mechanism is that you can see a gaping hole in the mechanism!


You have to find a way to mend that broken standard of explanation.

I do that in part 2.

So far we have not discussed the whole paper, only part 1.



Richard Loosemore




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Ben Goertzel
Ok, well I read part 2 three times and I seem not to be getting the
importance or the crux of it.

I hate to ask this, but could you possibly summarize it in some
different way, in the hopes of getting through to me??

I agree that the standard scientific approach to explanation breaks
when presented with consciousness.

I do not (yet) understand your proposed alternative approach to explanation.

If anyone on this list *does* understand it, feel free to chip in with
your own attempted summary...

thx
ben

On Wed, Nov 19, 2008 at 5:47 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Ben Goertzel wrote:

 Richard,

 So are you saying that: According to the ordinary scientific standards of
 'explanation', the subjective experience of consciousness cannot be
 explained ... and as a consequence, the relationship between subjective
 consciousness and physical data (as required to be elucidated by any
 solution to Chalmers' hard problem as normally conceived) also cannot be
 explained.

 If so, then: according to the ordinary scientific standards of
 explanation, you are not explaining consciousness, nor explaining the
 relation btw consciousness and the physical ... but are rather **explaining
 why, due to the particular nature of consciousness and its relationship to
 the ordinary scientific standards of explanation, this kind of explanation
 is not possible**

 ??

 No!

 If you write the above, then you are summarizing the question that I pose at
 the half-way point of the paper, just before the second part gets underway.

 The ordinary scientific standards of explanation are undermined by
 questions about consciousness.  They break.  You cannot use them.  They
 become internally inconsistent.  You cannot say I hereby apply the standard
 mechanism of 'explanation' to Problem X, but then admit that Problem X IS
 the very mechanism that is responsible for determining the  'explanation'
 method you are using, AND the one thing you know about that mechanism is
 that you can see a gaping hole in the mechanism!

 You have to find a way to mend that broken standard of explanation.

 I do that in part 2.

 So far we have not discussed the whole paper, only part 1.



 Richard Loosemore






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders,
give orders, cooperate, act alone, solve equations, analyze a new
problem, pitch manure, program a computer, cook a tasty meal, fight
efficiently, die gallantly. Specialization is for insects.  -- Robert
Heinlein




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-19 Thread Richard Loosemore

Ed Porter wrote:


  Richard,

(the second half of this post, the part starting with the all-capitalized 
heading, is the most important)


I agree with your extreme cognitive semantics discussion. 

I agree with your statement that one criterion for “realness” is the 
directness and immediateness of something’s phenomenology.


I agree with your statement that, based on this criterion for 
“realness,” many conscious phenomena, such as qualia, which have 
traditionally fallen under the hard problem of consciousness seem to be 
“real.”


But I have problems with some of the conclusions you draw from these 
things, particularly in your “Implications” section at the top of the 
second column on Page 5 of your paper.


There you state

“…the correct explanation for consciousness is that all of its various 
phenomenological facets deserve to be called as “real” as any other 
concept we have, because there are no meaningful /objective /standards 
that we could apply to judge them otherwise.”


That aspects of consciousness seem real does not provide much of an 
“explanation for consciousness.”  It says something, but not much.  It 
adds little to Descartes’ “I think therefore I am.”  I don’t think it 
provides much of an answer to any of the multiple questions Wikipedia 
associates with Chalmer’s hard problem of consciousness.


I would respond as follows.  When I make statements about consciousness 
deserving to be called real, I am only saying this as a summary of a 
long argument that has gone before.  So it would not really be fair to 
declare that this statement of mine says something, but not much 
without taking account of the reasons that have been building up toward 
that statement earlier in the paper.  I am arguing that when we probe 
the meaning of real we find that the best criterion of realness is the 
way that the system builds a population of concept-atoms that are (a) 
mutually consistent with one another, and (b) strongly supported by 
sensory evidence (there are other criteria, but those are the main 
ones).  If you think hard enough about these criteria, you notice that 
the qualia-atoms (those concept-atoms that cause the analysis mechanism 
to bottom out) score very high indeed.  This is in dramatic contrast to 
other concept-atoms like hallucinations, which we consider 'artifacts' 
precisely because they score so low.  The difference between these two 
is so dramatic that I think we need to allow the qualia-atoms to be 
called real by all our usual criteria, BUT with the added feature that 
they cannot be understood in any more basic terms.
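
As a toy illustration of the two criteria just named (this is my own sketch; 
the numbers and the averaging rule are placeholders, not anything from the 
paper), a qualia-atom scores high on both mutual consistency and sensory 
support, while a hallucination scores low on both:

    # Hypothetical scoring sketch; both inputs assumed to lie in [0, 1].
    def realness(consistency, sensory_support):
        return (consistency + sensory_support) / 2   # placeholder combination rule

    qualia_atom = realness(consistency=0.95, sensory_support=0.98)
    hallucination = realness(consistency=0.20, sensory_support=0.15)
    print(qualia_atom, hallucination)   # high vs. low "realness"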


Now, all of that (and more) lies behind the simple statement that they 
should be called real.  It wouldn't make much sense to judge that 
statement by itself.  Only judge the argument behind it.



You further state that some aspects of consciousness have a unique 
status of being beyond the reach of scientific inquiry and give a 
purported reason why they are beyond such a reach. Similarly you say:


”…although we can never say exactly what the phenomena of consciousness 
are, in the way that we give scientific explanations for other things, 
we can nevertheless say exactly why we cannot say anything: so in the 
end, we can explain it.”


First, I would point out as I have in my prior papers that, given the 
advances that are expected to be made in AGI, brain scanning and brain 
science in the next fifty years, it is not clear that consciousness is 
necessarily any less explainable than are many other aspects of physical 
reality.  You admit there are easy problems of consciousness that can be 
explained, just as there are easy parts of physical reality that can be 
explained. But it is not clear that the percent of consciousness that 
will remain a mystery in fifty years is any larger than the percent of 
basic physical reality that will remain a mystery in that time frame.



The paper gives a clear argument for *why* it cannot be explained.

So to contradict that argument (to say it is not clear that consciousness 
is necessarily any less explainable than are many other aspects of 
physical reality), you have to say why the argument does not work.  It 
would make no sense for a person to simply assert the opposite of the 
argument's conclusion, without justification.


The argument goes into plenty of specific details, so there are many 
kinds of attack that you could make.



But even if we accept as true your statement that certain phenomena of 
consciousness are beyond analysis, that does little to explain 
consciousness.  In fact, it does not appear to answer any of the hard 
problems of consciousness.  For example, just because (a) we are 
conscious of the distinction used in our own mind’s internal 
representation between sensation of the colors red and blue, (b) we 
allegedly cannot analyze that difference further, and (c) that 
distinction seems subjectively real to us --- that does not shed much 
light on whether or not a p-zombie would be 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Harry Chesley
Richard Loosemore wrote:
 Harry Chesley wrote:
 Richard Loosemore wrote:
 I completed the first draft of a technical paper on consciousness
 the other day.   It is intended for the AGI-09 conference, and it
 can be found at:

 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


 One other point: Although this is a possible explanation for our
 subjective experience of qualia like red or soft, I don't see
 it explaining pain or happy quite so easily. You can
 hypothesize a sort of mechanism-level explanation of those by
 relegating them to the older or lower parts of the brain (i.e.,
 they're atomic at the conscious level, but have more effects at the
 physiological level (like releasing chemicals into the system)),
 but that doesn't satisfactorily cover the subjective side for me.

 I do have a quick answer to that one.

 Remember that the core of the model is the *scope* of the analysis
 mechanism.  If there is a sharp boundary (as well there might be),
 then this defines the point where the qualia kick in.  Pain receptors
 are fairly easy:  they are primitive signal lines.  Emotions are, I
 believe, caused by clusters of lower brain structures, so the
 interface between lower brain and foreground is the place where
 the foreground sees a limit to the analysis mechanisms.

 More generally, the significance of the foreground is that it sets
 a boundary on how far the analysis mechanisms can reach.

 I am not sure why that would seem less satisfactory as an explanation
 of the subjectivity.  It is a raw feel, and that is the key idea,
 no?

My problem is if qualia are atomic, with no differentiable details, why
do some feel different than others -- shouldn't they all be separate
but equal? Red is relatively neutral, while searing hot is not. Part
of that is certainly lower brain function, below the level of
consciousness, but that doesn't explain to me why it feels
qualitatively different. If it was just something like increased
activity (franticness) in response to searing hot, then fine, that
could just be something like adrenaline being pumped into the system,
but there is a subjective feeling that goes beyond that.





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Mark Waser

My problem is if qualia are atomic, with no differentiable details, why
do some feel different than others -- shouldn't they all be separate
but equal? Red is relatively neutral, while searing hot is not. Part
of that is certainly lower brain function, below the level of
consciousness, but that doesn't explain to me why it feels
qualitatively different. If it was just something like increased
activity (franticness) in response to searing hot, then fine, that
could just be something like adrenaline being pumped into the system,
but there is a subjective feeling that goes beyond that.


Maybe I missed it but why do you assume that because qualia are atomic that 
they have no differentiable details?  Evolution is, quite correctly, going 
to give pain qualia higher priority and less ability to be shut down than 
red qualia.  In a good representation system, that means that searing hot is 
going to be *very* whatever and very tough to ignore.




- Original Message - 
From: Harry Chesley [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, November 18, 2008 1:57 PM
Subject: Re: [agi] A paper that actually does solve the problem of 
consciousness




Richard Loosemore wrote:

Harry Chesley wrote:

Richard Loosemore wrote:

I completed the first draft of a technical paper on consciousness
the other day.   It is intended for the AGI-09 conference, and it
can be found at:

http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf



One other point: Although this is a possible explanation for our
subjective experience of qualia like red or soft, I don't see
it explaining pain or happy quite so easily. You can
hypothesize a sort of mechanism-level explanation of those by
relegating them to the older or lower parts of the brain (i.e.,
they're atomic at the conscious level, but have more effects at the
physiological level (like releasing chemicals into the system)),
but that doesn't satisfactorily cover the subjective side for me.


I do have a quick answer to that one.

Remember that the core of the model is the *scope* of the analysis
mechanism.  If there is a sharp boundary (as well there might be),
then this defines the point where the qualia kick in.  Pain receptors
are fairly easy:  they are primitive signal lines.  Emotions are, I
believe, caused by clusters of lower brain structures, so the
interface between lower brain and foreground is the place where
the foreground sees a limit to the analysis mechanisms.

More generally, the significance of the foreground is that it sets
a boundary on how far the analysis mechanisms can reach.

I am not sure why that would seem less satisfactory as an explanation
of the subjectivity.  It is a raw feel, and that is the key idea,
no?


My problem is if qualia are atomic, with no differentiable details, why
do some feel different than others -- shouldn't they all be separate
but equal? Red is relatively neutral, while searing hot is not. Part
of that is certainly lower brain function, below the level of
consciousness, but that doesn't explain to me why it feels
qualitatively different. If it was just something like increased
activity (franticness) in response to searing hot, then fine, that
could just be something like adrenaline being pumped into the system,
but there is a subjective feeling that goes beyond that.











Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Harry Chesley
Mark Waser wrote:
 My problem is if qualia are atomic, with no differentiable details,
 why do some feel different than others -- shouldn't they all be
 separate but equal? Red is relatively neutral, while searing
 hot is not. Part of that is certainly lower brain function, below
 the level of consciousness, but that doesn't explain to me why it
 feels qualitatively different. If it was just something like
 increased activity (franticness) in response to searing hot, then
 fine, that could just be something like adrenaline being pumped
 into the system, but there is a subjective feeling that goes beyond
 that.

 Maybe I missed it but why do you assume that because qualia are
 atomic that they have no differentiable details?  Evolution is, quite
 correctly, going to give pain qualia higher priority and less ability
 to be shut down than red qualia.  In a good representation system,
 that means that searing hot is going to be *very* whatever and very
 tough to ignore.

I thought that was the meaning of atomic as used in the paper. Maybe I
got it wrong.





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Richard Loosemore

Harry Chesley wrote:

Richard Loosemore wrote:

Harry Chesley wrote:

Richard Loosemore wrote:

I completed the first draft of a technical paper on consciousness
the other day.   It is intended for the AGI-09 conference, and it
can be found at:

http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


One other point: Although this is a possible explanation for our
subjective experience of qualia like red or soft, I don't see
it explaining pain or happy quite so easily. You can
hypothesize a sort of mechanism-level explanation of those by
relegating them to the older or lower parts of the brain (i.e.,
they're atomic at the conscious level, but have more effects at the
physiological level (like releasing chemicals into the system)),
but that doesn't satisfactorily cover the subjective side for me.

I do have a quick answer to that one.

Remember that the core of the model is the *scope* of the analysis
mechanism.  If there is a sharp boundary (as well there might be),
then this defines the point where the qualia kick in.  Pain receptors
are fairly easy:  they are primitive signal lines.  Emotions are, I
believe, caused by clusters of lower brain structures, so the
interface between lower brain and foreground is the place where
the foreground sees a limit to the analysis mechanisms.

More generally, the significance of the foreground is that it sets
a boundary on how far the analysis mechanisms can reach.

I am not sure why that would seem less satisfactory as an explanation
of the subjectivity.  It is a raw feel, and that is the key idea,
no?


My problem is if qualia are atomic, with no differentiable details, why
do some feel different than others -- shouldn't they all be separate
but equal? Red is relatively neutral, while searing hot is not. Part
of that is certainly lower brain function, below the level of
consciousness, but that doesn't explain to me why it feels
qualitatively different. If it was just something like increased
activity (franticness) in response to searing hot, then fine, that
could just be something like adrenaline being pumped into the system,
but there is a subjective feeling that goes beyond that.


There is more than one question wrapped up inside this question, I think.

First:  all qualia feel different, of course.  You seem to be pointing 
to a sense in which pain is more different than most?  But is 
that really a valid idea?


Does pain have differentiable details?  Well, there are different 
types of pain, but that is to be expected, like different colors. 
But that is a relatively trivial point.  Within one single pain there can 
be several *effects* of that pain, including some strange ones that do 
not have counterparts in the vision-color case.


For example, suppose that a searing hot pain caused a simultaneous 
triggering of the motivational system, forcing you to suddenly want to 
do something (like pulling your body part away from the pain).  The 
feeling of wanting (wanting to pull away) is a quale of its own, in a 
sense, so it would not be impossible for one quale (searing hot) to 
always be associated with another (wanting to pull away).  If those 
always occurred together, it might seem that there was structure to the 
pain experience, where in fact there is a pair of things happening.


It is probably more than a pair of things, but perhaps you get my drift.
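
A minimal sketch of that point, under my own assumed representation (a plain 
dictionary of co-occurrence links and made-up atom labels, not anything from 
the paper): triggering one atom always drags in its partner, so the felt 
episode looks structured even though it is just two atoms firing together.

    # Hypothetical co-occurrence table; the atom names are my own labels.
    always_with = {"searing-hot": ["want-to-withdraw"]}

    def experience(atom):
        """Return the full set of atoms felt when 'atom' fires."""
        return [atom] + always_with.get(atom, [])

    print(experience("searing-hot"))  # ['searing-hot', 'want-to-withdraw'] -- a pair, not structure
    print(experience("red"))          # ['red'] -- fires alone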

Remember that having associations to a pain is not part of what we 
consider to be the essence of the subjective experience;  the bit that 
is most mysterious and needs to be explained.


Another thing we have to keep in mind here is that the exact details of 
how each subjective experience feels are certainly going to seem 
different, and some can seem like each other and not like others: 
colors are like other colors, but not like pains.


That is to be expected:  we can say that colors happen in a certain 
place in our sensorium (vision) while pains are associated with the body 
(usually), but these differences are not inconsistent with the account I 
have given.  If concept-atoms encoding [red] always attach to all the 
other concept-atoms involving visual experiences, that would make them 
very different than pains like [searing hot], but all of this could be 
true at the same time that [red] would do what it does to the analysis 
mechanism (when we try to think the thought What is the essence of 
redness?).  So the problem with the analysis mechanism would happen 
with both pains and colors, even though the two different atom types 
played games with different sets of other concept-atoms.




Richard Loosemore







Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Ben Goertzel
Richard,

I re-read your paper and I'm afraid I really don't grok why you think it
solves Chalmers' hard problem of consciousness...

It really seems to me like what you're suggesting is a cognitive correlate
of consciousness, to morph the common phrase neural correlate of
consciousness ...

You seem to be stating that when X is an unanalyzable, pure atomic sensation
from the perspective of cognitive system C, then C will perceive X as a raw
quale ... unanalyzable and not explicable by ordinary methods of
explication, yet, still subjectively real...

But, I don't see how the hypothesis

Conscious experience is **identified with** unanalyzable mind-atoms

could be distinguished empirically from

Conscious experience is **correlated with** unanalyzable mind-atoms

I think finding cognitive correlates of consciousness is interesting, but I
don't think it constitutes solving the hard problem in Chalmers' sense...

I grok that you're saying consciousness feels inexplicable because it has
to do with atoms that the system can't explain, due to their role as its
primitive atoms ... and this is a good idea, but, I don't see how it
bridges the gap btw subjective experience and empirical data ...

What it does is explain why, even if there *were* no hard problem, cognitive
systems might feel like there is one, in regard to their unanalyzable atoms

Another worry I have is: I feel like I can be conscious of my son, even
though he is not an unanalyzable atom.  I feel like I can be conscious of
the unique impression he makes ... in the same way that I'm conscious of
redness ... and, yeah, I feel like I can't fully explain the conscious
impression he makes on me, even though I can explain a lot of things about
him...

So I'm not convinced that atomic sensor input is the only source of raw,
unanalyzable consciousness...

-- Ben G

On Tue, Nov 18, 2008 at 5:14 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Harry Chesley wrote:

 Richard Loosemore wrote:

 Harry Chesley wrote:

 Richard Loosemore wrote:

 I completed the first draft of a technical paper on consciousness
 the other day.   It is intended for the AGI-09 conference, and it
 can be found at:


 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

  One other point: Although this is a possible explanation for our
 subjective experience of qualia like red or soft, I don't see
 it explaining pain or happy quite so easily. You can
 hypothesize a sort of mechanism-level explanation of those by
 relegating them to the older or lower parts of the brain (i.e.,
 they're atomic at the conscious level, but have more effects at the
 physiological level (like releasing chemicals into the system)),
 but that doesn't satisfactorily cover the subjective side for me.

 I do have a quick answer to that one.

 Remember that the core of the model is the *scope* of the analysis
 mechanism.  If there is a sharp boundary (as well there might be),
 then this defines the point where the qualia kick in.  Pain receptors
 are fairly easy:  they are primitive signal lines.  Emotions are, I
 believe, caused by clusters of lower brain structures, so the
 interface between lower brain and foreground is the place where
 the foreground sees a limit to the analysis mechanisms.

 More generally, the significance of the foreground is that it sets
 a boundary on how far the analysis mechanisms can reach.

 I am not sure why that would seem less satisfactory as an explanation
 of the subjectivity.  It is a raw feel, and that is the key idea,
 no?


 My problem is if qualia are atomic, with no differentiable details, why
 do some feel different than others -- shouldn't they all be separate
 but equal? Red is relatively neutral, while searing hot is not. Part
 of that is certainly lower brain function, below the level of
 consciousness, but that doesn't explain to me why it feels
 qualitatively different. If it was just something like increased
 activity (franticness) in response to searing hot, then fine, that
 could just be something like adrenaline being pumped into the system,
 but there is a subjective feeling that goes beyond that.


 There is more than one question wrapped up inside this question, I think.

 First:  all qualia feel different, of course.  You seem to be pointing to
 a sense in which pain is more different than most?  But is that
 really a valid idea?

 Does pain have differentiable details?  Well, there are different types
 of pain, but that is to be expected, like different colors. But that is
 a relatively trivial point.  Within one single pain there can be several
 *effects* of that pain, including some strange ones that do not have
 counterparts in the vision-color case.

 For example, suppose that a searing hot pain caused a simultaneous
 triggering of the motivational system, forcing you to suddenly want to do
 something (like pulling your body part away from the pain).  The feeling of
 wanting (wanting to pull away) is a quale of its own, 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mike Tintner

Colin: right or wrong...I have a working physical model for
consciousness.

Just so. Serious scientific study of consciousness entails *models* not 
verbal definitions.  The latter are quite hopeless. Richard opined that 
there is a precise definition of the hard problem of consciousness. 
There is no precise definition of any term AFAIK in philosophy, or 
language...consciousness,mind,problem-solving, senses, 
intelligence etc... Every term is massively contested in philosophy - and 
often by the individual philosopher himself. See studies of how many ways 
Kuhn used the term paradigm. 







Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Matt Mahoney
--- On Sun, 11/16/08, Mark Waser [EMAIL PROTECTED] wrote:

I wrote:
  I think the reason that the hard question is
 interesting at all is that it would presumably be OK to
 torture a zombie because it doesn't actually experience
 pain, even though it would react exactly like a human being
 tortured. That's an ethical question. Ethics is a belief
 system that exists in our minds about what we should or
 should not do. There is no objective experiment you can do
 that will tell you whether any act, such as inflicting pain
 on a human, animal, or machine, is ethical or not. The only
 thing you can measure is belief, for example, by taking a
 poll.
 
 What is the point to ethics?  The reason why you can't
 do objective experiments is because *YOU* don't have a
 grounded concept of ethics.  The second that you ground your
 concepts in effects that can be seen in the real
 world, there are numerous possible experiments.

How do you propose grounding ethics? I have a complex model that says some 
things are right and others are wrong. So does everyone else. These models 
don't agree. How do you propose testing whether a model is correct or not? If 
everyone agreed that torturing people was wrong, then torture wouldn't exist.

 The same is true of consciousness.  The hard problem of
 consciousness is hard because the question is ungrounded. 
 Define all of the arguments in terms of things that appear
 and matter in the real world and the question goes away. 
 It's only because you invent ungrounded unprovable
 distinctions that the so-called hard problem appears.

How do you prove that Richard's definition of consciousness is correct and 
Colin's is wrong, or vice versa? All you can say about either definition is 
that some entities are conscious and others are not, according to whichever 
definition you accept. But so what?

 Torturing a p-zombie is unethical because whether it feels
 pain or not is 100% irrelevant in the real
 world.  If it 100% acts as if it feels pain, then for
 all purposes that matter it does feel pain.  Why invent this
 mystical situation where it doesn't feel pain yet acts
 as if it does?

Because people nevertheless make this arbitrary distinction in order to make 
ethical decisions. Torturing a p-zombie is only wrong according to some ethical 
models but not others. The same is true about doing animal experiments, or 
running autobliss with two negative arguments. If you ask people why they think 
so, a common response is that the things that it is not ethical to torture are 
conscious.

 Richard's paper attempts to solve the hard problem by
 grounding some of the silliness.  It's the best possible
 effort short of just ignoring the silliness and going on to
 something else that is actually relevant to the real world.

I agree. This whole irrelevant discussion of consciousness is getting tedious.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser

How do you propose grounding ethics?


Ethics is building and maintaining healthy relationships for the betterment 
of all.  Evolution has equipped us all with a good solid moral sense that 
frequently we don't/can't even override with our short-sighted selfish 
desires (that, more frequently than not, eventually end up screwing us over 
when we follow them).  It's pretty easy to ground ethics as long as you 
realize that there are some cases that are just too close to call with the 
information that you possess at the time you need to make a decision.  But 
then again, that's precisely what intelligence is -- making effective 
decisions under uncertainty.


I have a complex model that says some things are right and others are 
wrong.


That's nice -- but you've already pointed out that your model has numerous 
shortcomings such that you won't even stand behind it.  Why do you keep 
bringing it up?  It's like saying I have an economic theory when you 
clearly don't have the expertise to form a competent one.



So does everyone else. These models don't agree.


And lots of people have theories of creationism.  Do you want to use that to 
argue that evolution is incorrect?



How do you propose testing whether a model is correct or not?


By determining whether it is useful and predictive -- just like what we 
always do when we're practicing science (as opposed to spouting BS).


If everyone agreed that torturing people was wrong, then torture wouldn't 
exist.


Wrong.  People agree that things are wrong and then they go and do them 
anyway because they believe that it is beneficial for them.  Why do you 
spout obviously untrue BS?


How do you prove that Richard's definition of consciousness is correct and 
Colin's is wrong, or vice versa? All you can say about either definition 
is that some entities are conscious and others are not, according to 
whichever definition you accept. But so what?


Wow!  You really do practice useless sophistry.  For definitions, "correct" 
simply means useful and predictive.  I'll go with whichever definition most 
accurately reflects the world.  Are you trying to propose that there is an 
absolute truth out there as far as definitions go?


Because people nevertheless make this arbitrary distinction in order to 
make ethical decisions.


So when lemmings go into the river you believe that they are correct and you 
should follow them?



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, November 17, 2008 9:35 AM
Subject: Re: [agi] A paper that actually does solve the problem of 
consciousness




--- On Sun, 11/16/08, Mark Waser [EMAIL PROTECTED] wrote:

I wrote:

 I think the reason that the hard question is
interesting at all is that it would presumably be OK to
torture a zombie because it doesn't actually experience
pain, even though it would react exactly like a human being
tortured. That's an ethical question. Ethics is a belief
system that exists in our minds about what we should or
should not do. There is no objective experiment you can do
that will tell you whether any act, such as inflicting pain
on a human, animal, or machine, is ethical or not. The only
thing you can measure is belief, for example, by taking a
poll.

What is the point to ethics?  The reason why you can't
do objective experiments is because *YOU* don't have a
grounded concept of ethics.  The second that you ground your
concepts in effects that can be seen in the real
world, there are numerous possible experiments.


How do you propose grounding ethics? I have a complex model that says some 
things are right and others are wrong. So does everyone else. These models 
don't agree. How do you propose testing whether a model is correct or not? 
If everyone agreed that torturing people was wrong, then torture wouldn't 
exist.



The same is true of consciousness.  The hard problem of
consciousness is hard because the question is ungrounded.
Define all of the arguments in terms of things that appear
and matter in the real world and the question goes away.
It's only because you invent ungrounded unprovable
distinctions that the so-called hard problem appears.


How do you prove that Richard's definition of consciousness is correct and 
Colin's is wrong, or vice versa? All you can say about either definition 
is that some entities are conscious and others are not, according to 
whichever definition you accept. But so what?



Torturing a p-zombie is unethical because whether it feels
pain or not is 100% irrelevant in the real
world.  If it 100% acts as if it feels pain, then for
all purposes that matter it does feel pain.  Why invent this
mystical situation where it doesn't feel pain yet acts
as if it does?


Because people nevertheless make this arbitrary distinction in order to 
make ethical decisions. Torturing a p-zombie is only wrong according to 
some ethical models but not others. The same is true about doing animal 
experiments, or running 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

John G. Rose wrote:

From: Richard Loosemore [mailto:[EMAIL PROTECTED]

Three things.


First, David Chalmers is considered one of the world's foremost
researchers in the consciousness field (he is certainly now the most
celebrated).  He has read the argument presented in my paper, and he
has
discussed it with me.  He understood all of it, and he does not share
any of your concerns, nor anything remotely like your concerns.  He had
one single reservation, on a technical point, but when I explained my
answer, he thought it interesting and novel, and possibly quite valid.

Second, the remainder of your comments below are not coherent enough to
be answerable, and it is not my job to walk you through the basics of
this field.

Third, about your digression:  gravity does not escape from black
holes, because gravity is just the curvature of spacetime.  The other
things that cannot escape from black holes are not forces.

I will not be replying to any further messages from you because you are
wasting my time.




I read this paper several times and still have trouble holding the model
that you describe in my head, as it fades quickly and then there is just a
memory of it (recursive ADD?). I'm not up on the latest consciousness
research but still somewhat understand what is going on there. Your paper is
a nice and terse description, but getting others to understand the highlighted
entity that you are trying to describe may be easier with more diagrams. When
I kind of got it for a second it did appear quantitative, like something
mathematically describable. I find it hard to believe, though, that others
have not put it this way; I mean, doesn't Hofstadter talk about this in his
books, in an unacademic fashion?



Hofstadter does talk about loopiness and recursion in ways that are 
similar, but the central idea is not the same.  FWIW I did have a brief 
discussion with him about this at the same conference where I talked to 
Chalmers, and he agreed that his latest ideas about consciousness and 
the one I was suggesting did not seem to overlap.





Richard Loosemore





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Ben Goertzel wrote:


Sorry to be negative, but no, my proposal is not in any way a
modernization of Peirce's metaphysical analysis of awareness.



Could you elaborate the difference?  It seems very similar to me.   
You're saying that consciousness has to do with the bottoming-out of 
mental hierarchies in raw percepts that are unanalyzable by the mind ... 
and Peirce's Firsts are precisely raw percepts that are unanalyzable by 
the mind...


It is partly the stance (I arrive at my position from a cognitivist 
point of view, with specific mechanisms that must be causing the 
problem), whereas Peirce appears to suggest the Firsts idea as a purely 
metaphysical proposal.


So, what I am saying is that the resemblance between his position and 
mine is so superficial that it makes no sense to describe the latter as 
a modernization of the former.


A good analogy would be Galilean Relativity and Einstein's Relativity. 
Although there is a superficial resemblance, nobody would really say 
that Einstein was just a modernization of Galileo.





***
The standard meaning of Hard Problem issues was described very well by 
Chalmers, and I am addressing the hard problem of consciousness, not 
the other problems.

***

Hmmm  I don't really understand why you think your argument is a 
solution to the hard problem  It seems like you explicitly 
acknowledge in your paper that it's *not*, actually  It's more like 
a philosophical argument as to why the hard problem is unsolvable, IMO.


No, that is only part one of the paper, and as you pointed out before, 
the first part of the proposal ends with a question, not a statement 
that this was a failure to explain the problem.  That question was 
important.


The important part is the analysis of explanation and meaning.  This 
can also be taken to be about your use of the word unsolvable in the 
above sentence.


What I am claiming (and I will make this explicit in a revision of the 
paper) is that these notions of explanation, meaning, solution to 
the problem, etc., are pushed to their breaking point by the problem of 
consciousness.  So it is not that there is a problem with understanding 
consciousness itself, so much as there is a problem with what it means 
to *explain* things.


Other things are easy to explain, but when we ask for an explanation 
of something like consciousness, the actual notion of explanation 
breaks down in a drastic way.  This is very closely related to the idea 
of an objective observer in physics: in the quantum realm that notion 
breaks down.


What I gave in my paper was (a) a detailed description of how the 
confusion about consciousness arises [peculiar behavior of the analysis 
mechanism], but then (b) I went on to point out this peculiar behavior 
infects much more than just our ability to explain consciousness, 
because it casts doubt on the fundamental meaning of explanation and 
semantics and ontology.


The conclusion that I then tried to draw was that it would be wrong to 
say that consciousness was just an artifact or (ordinarily) inexplicable 
thing, because this would be to tacitly assume that the sense of 
explain that we are using in these statements is the same one we have 
always used.  Anyone who continued to use explain and mean (etc.) in 
their old context would be stuck in what I have called Level 0, and in 
that level the old meanings [sic] of those terms are just not able to 
address the issue of consciousness.


Go back to the quantum mechanics analogy again:  it is not right to 
cling to old ideas of position and momentum, etc., and say that we 
simply do not know the position of an electron.  The real truth - the 
new truth about how we should understand position and momentum - is 
that the position of the electron is fundamentally not even determined 
(without observation).


This analogy is not just an analogy, as I think you might begin to 
guess:  there is a deep relationship between these two domains, and I am 
still working on a way to link them.





Richard Loosemore.

















Zombies, Autism and Consciousness {WAS Re: [agi] A paper that actually does solve the problem of consciousness]

2008-11-17 Thread Richard Loosemore

Trent Waddington wrote:

Richard,

  After reading your paper and contemplating the implications, I
believe you have done a good job at describing the intuitive notion of
consciousness that many lay-people use the word to refer to.  I
don't think your explanation is fleshed out enough for those
lay-people, but it's certainly sufficient for most of the people on this
list.  I would recommend that anyone who hasn't read the paper, and
has an interest in this whole consciousness business, give it a read.

I especially liked the bit where you describe how the model of self
can't be defined in terms of anything else.. as it is inherently
recursive.  I wonder whether the dynamic updating of the model of self
may well be exactly the subjective experience of consciousness that
people describe.  If so, the notion of a p-zombie is not impossible,
as you suggest in your conclusions, but simply an AGI without a
self-model.


This is something that does intrigue me (the different kinds of 
self-model that could be in there), but I come to slightly different 
conclusions.


I think someone (Putnam, IIRC) pointed out that you could still have 
consciousness without the equivalent of any references to self and 
others, because such a creature would still be experiencing qualia.


But, that aside, do you not think that a creature with absolutely no 
self model at all would have some troubles?  It would not be able to 
represent itself in the context of the world, so it would be purely 
reactive.  But wait:  come to think of it, could it actually control any 
limbs if it did not have some kind of model of itself?


Now, suppose you grant me that all AGIs would have at least some model 
of self (if only to control a single robot arm):  then, if the rest of 
the cognitive mechanism allows it to think in a powerful and recursive 
way about the contents of its own thought processes (which I have 
suggested is one of the main preconditions for being conscious, or even 
being AG-Intelligent), would it not be difficult to stop it from 
developing a more general model of itself than just the simple self 
model needed to control the robot arm?  We might find that any kind of 
self model would be a slippery slope toward a bigger self model.


Finally, consider the case of humans with severe Autism.  One suggestion 
is that they have a very poorly developed, or suppressed self model.  I 
would be *extremely* reluctant to think that these humans are p-zombies, 
just because of that.  I know that is a gut feeling, but even so.






Finally, the introduction says:

  Given the strength of feeling on these matters - for example, the widespread
 belief that AGIs would be dangerous because, as conscious beings, they would
 inevitably rebel against their lack of freedom - it is incumbent upon the AGI
 community to resolve these questions as soon as possible.

I was really looking forward to seeing you address this widespread
belief, but unfortunately you declined.  Seems a bit of a tease.

Trent


Oh, I apologize. :-(

I started out with the intention of squeezing into the paper a 
description of the consciousness proposal PLUS my parallel proposal 
about AGI motivation and emotion.


It became obvious toward the end that I would not be able to say 
anything about the latter (I barely had enough room for a terse 
description of the former).  But then I explained instead that this was 
part of a larger research program to cover issues of motivation, emotion 
and friendliness.  I guess that wording did not really make up for the 
initial tease, so I'll try to rephrase that in the edited version


And I will also try to get the motivation and friendliness paper written 
asap, to complement this one.





Richard Loosemore











Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Benjamin Johnston wrote:


I completed the first draft of a technical paper on consciousness the 
other day.   It is intended for the AGI-09 conference, and it can be 
found at:



Hi Richard,

I don't have any comments yet about what you have written, because I'm 
not sure I fully understand what you're trying to say... I hope your 
answers to these questions will help clarify things.


It seems to me that your core argument goes something like this:

That there are many concepts for which an introspective analysis can 
only return the concept itself.

That this recursion blocks any possible explanation.
That consciousness is one of these concepts because self is inherently 
recursive.
Therefore, consciousness is explicitly blocked from having any kind of 
explanation.


Is this correct? If not, how have I misinterpreted you?



This is pretty much accurate, but only up to the end of the first phase 
of the paper, where I asked the question: "Is explaining why we cannot 
explain something the same as explaining it?"


The next phase is crucial, because (as I explained a little more in my 
parallel reply to Ben) the conclusion of part 1 is really that the whole 
notion of 'explanation' is stretched to breaking point by the concept of 
consciousness.


So in the end what I do is argue that the whole concept of explanation 
(and meaning, etc) has to be replaced in order to deal with 
consciousness.  Eventually I come to a rather strange-looking 
conclusion, which is that we are obliged to say that consciousness is 
a real thing like any other in the universe, but the exact content of it 
(the subjective core) is truly inexplicable.





I have a thought experiment that might help me understand your ideas:

If we have a robot designed according to your molecular model, and we 
then ask the robot "what exactly is the nature of red" or "what is it 
like to experience the subjective essence of red", the robot may analyze 
this concept, ultimately bottoming out on an incoming signal line.
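
To make that bottoming-out concrete, here is a minimal sketch of the sort of 
introspective analysis I have in mind (a toy of my own, purely illustrative: 
the concept graph, the names, and the analyze() function below are invented 
for this example, not taken from your paper):

# Toy model: introspective analysis unpacks concepts until it hits a
# primitive signal line, which has nothing below it.
CONCEPTS = {
    "apple": ["red", "round", "sweet"],
    "red":   ["signal_line_7"],
    "round": ["signal_line_3"],
    "sweet": ["signal_line_9"],
}
PRIMITIVES = {"signal_line_3", "signal_line_7", "signal_line_9"}

def analyze(concept):
    """Decompose a concept; anything with no decomposition returns only itself."""
    if concept in PRIMITIVES or concept not in CONCEPTS:
        return [concept]              # analysis bottoms out here
    parts = []
    for part in CONCEPTS[concept]:
        parts.extend(analyze(part))
    return parts

print(analyze("apple"))  # ['signal_line_7', 'signal_line_3', 'signal_line_9']
print(analyze("red"))    # ['signal_line_7']

The point is only that the robot can keep unpacking "apple" into parts, but 
when it asks about "red" the analysis has nowhere further to go than the 
signal line itself.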


But what if this robot is intelligent and can study other robots? It 
might then examine other robots and see that when their analysis bottoms 
out on an incoming signal line, what actually happens is that the 
incoming signal line is activated by electromagnetic energy of a certain 
frequency, and that the object recognition routines identify patterns in 
signal lines and that when an object is identified it gets annotated 
with texture and color information from its sensations, and that a 
particular software module injects all that information into the 
foreground memory. It might conclude that the experience of 
"experiencing red" in the other robot is to have sensors inject atoms 
into foreground memory, and it could then explain how the current 
context of that robot's foreground memory interacts with the changing 
sensations (that have been injected into foreground memory) to make that 
experience 'meaningful' to the robot.


What if this robot then turns its inspection abilities onto itself? Can 
it therefore further analyze red? How does your theory interpret that 
situation?


-Ben


Ahh, but that *is* the way that my theory analyzes the situation, no? 
:-)  What I mean is, I would use a human (me) in place of the first robot.


Bear in mind that we must first separate out the hard problem (the 
pure subjective experience of red) from any easy problems (mere 
radiation sensitivity, etc).  From the point of view of that first 
robot, what will she get from studying the second robot (other robots in 
general), if the question she really wants to answer is "What is the 
explanation for *my* subjective experience of redness?"


She could talk all about the foreground and the way the analysis 
mechanism works in other robots (and humans), but the question is, what 
would that avail her if she wanted to answer the hard problem of where 
her subjective conscious experience comes from?


After reading the first part of my paper, she would say (I hope!):  "Ah, 
now I see how all my questions about the subjective experience of things 
are actually caused by my analysis mechanism doing something weird."


But then (again, I hope) she would say:  "Hmmm, does it meta-explain my 
subjective experiences if I know why I cannot explain these experiences?"


And thence to part two of the paper




Richard Loosemore






Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Colin Hales wrote:

Dear Richard,
I have an issue with the 'falsifiable predictions' being used as 
evidence of your theory.


The problem is that right or wrong...I have a working physical model for 
consciousness. Predictions 1-3 are something that my hardware can do 
easily. In fact that kind of experimentation is in my downstream 
implementation plan. These predictions have nothing whatsoever to do 
with your theory or mine or anyones. I'm not sure about prediction 4. 
It's not something I have thought about, so I'll leave it aside for now. 
In my case, in the second stage of testing of my chips, one of the 
things I want to do is literally 'Mind Meld', forming a bridge of 4 sets 
of compared, independently generated qualia. Ultimately the chips may be 
implantable, which means a human could experience what they generate in 
the first person...but I digress


Your statement "This theory of consciousness can be used to make some 
falsifiable predictions" could be replaced by "ANY theory of 
consciousness can be used to make falsifiable predictions 1..4 as 
follows...", which basically says they are not predictions that falsify 
anything at all. In which case the predictions cannot be claimed to 
support your theory. The problem is that the evidence of predictions 1-4 
acts merely as a correlate. It does not test any particular critical 
dependency (causality origins). The predictions are merely correlates of 
any theory of consciousness. They do not test the causal necessities. In 
any empirical science paper the evidence could not be held in support of 
the claim, and it would be discounted as evidence of your 
mechanism. I could cite 10 different computationalist AGI knowledge 
metaphors in the sections preceding the 'predictions' and the result 
would be the same.


So... if I were a reviewer I'd be unable to accept the claim that your 
'predictions' actually said anything about the theory preceding them. 
This would seem to be the problematic issue of the paper. You might want 
to take a deeper look at this issue and try to isolate something unique 
to your particular solution - which has a real critical dependency in 
it. Then you'll have an evidence base of your own that people can use 
independently. In this way your proposal could be seen to be scientific 
in the dry empirical sense.


By way of example... a computer program is  not scientific evidence of 
anything. The computer materials, as configured by the program, actually 
causally necessitate the behaviour. The program is a correlate. A 
correlate has the formal evidentiary status of 'hearsay'. This is the 
sense in which I invoke the term 'correlate' above.


BTW I have fallen foul of this problem myself...I had to look elsewhere 
for a real critical dependency, like I suggested above. You never know, 
you might find one in there someplace! I found one after a lot of 
investigation. You might, too.


Regards,

Colin Hales


Okay, let me phrase it like this:  I specifically say (or rather I 
should have done... this is another thing I need to make more explicit!) 
that the predictions are about making alterations at EXACTLY the 
boundary of the analysis mechanisms.


So, when we test the predictions, we must first understand the mechanics 
of human (or AGI) cognition well enough to be able to locate the exact 
scope of the analysis mechanisms.


Then, we make the tests by changing things around just outside the reach 
of those mechanisms.


Then we ask subjects (human or AGI) what happened to their subjective 
experiences.  If the subjects are ourselves - which I strongly suggest 
must be the case - then we can ask ourselves what happened to our 
subjective experiences.


My prediction is that if the swaps are made at that boundary, then 
things will be as I state.  But if changes are made within the scope of 
the analysis mechanisms, then we will not see those changes in the qualia.


So the theory could be falsified if changes in the qualia are NOT 
consistent with the theory, when changes are made at different points in 
the system.  The theory is all about the analysis mechanisms being the 
culprit, so in that sense it is extremely falsifiable.
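
To caricature that logic in toy form (a sketch I am inventing here purely for 
illustration; the names and data structures are made up, and this is not the 
actual protocol or mechanism described in the paper): suppose the only thing a 
subject can report about a stimulus is the raw feel of the primitive signal 
line that introspective analysis bottoms out on.

# Everything below is invented purely for illustration.
RAW_FEEL = {"line_red": "raw_feel_A", "line_blue": "raw_feel_B"}   # primitive signal lines

def bottom_out(node, internal_graph):
    """Introspective analysis: unpack internal nodes until a primitive line is reached."""
    while node in internal_graph:          # these nodes are within the analysis scope
        node = internal_graph[node]
    return node                            # analysis can go no further than the line

def report(stimulus, wiring, internal_graph):
    """What the subject reports: the raw feel of whichever line the stimulus drives."""
    line = wiring[stimulus]                # wiring sits below the boundary, out of reach
    percept = "percept_of_" + line         # the internal node that the line feeds
    return RAW_FEEL[bottom_out(percept, internal_graph)]

wiring   = {"650nm": "line_red", "470nm": "line_blue"}
internal = {"percept_of_line_red": "line_red", "percept_of_line_blue": "line_blue"}
baseline = {s: report(s, wiring, internal) for s in wiring}

# 1. A swap made at the boundary, outside the reach of the analysis mechanisms:
#    the external inputs now drive the opposite lines.
crossed = {"650nm": "line_blue", "470nm": "line_red"}
print({s: report(s, crossed, internal) for s in wiring} != baseline)   # True

# 2. A change made within the analysis scope: extra internal structure that
#    analysis can unpack down to the same primitives as before.
reshuffled = {"percept_of_line_red": "red_detail", "red_detail": "line_red",
              "percept_of_line_blue": "line_blue"}
print({s: report(s, wiring, reshuffled) for s in wiring} == baseline)  # True

In the toy, an alteration below the reach of the analysis mechanisms changes 
which raw feel a stimulus carries, while a rearrangement that analysis can 
fully unpack leaves every report untouched. That is only the shape of the 
prediction, not a demonstration of it.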


Now, correct me if I am wrong, but is there anywhere else in the 
literature where you have seen anyone make a prediction that the 
qualia will be changed by the alteration of a specific mechanism, but 
not by other, fairly similar alterations?





Richard Loosemore




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Mark Waser [EMAIL PROTECTED] wrote:

  How do you propose testing whether a model is correct or not?
 
 By determining whether it is useful and predictive -- just
 like what we always do when we're practicing science (as
 opposed to spouting BS).

An ethical model tells you what is good or bad. It does not make useful 
predictions.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 What I am claiming (and I will make this explicit in a
 revision of the paper) is that these notions of
 explanation, meaning, solution
 to the problem, etc., are pushed to their breaking
 point by the problem of consciousness.  So it is not that
 there is a problem with understanding consciousness itself,
 so much as there is a problem with what it means to
 *explain* things.

Yes, that is because we are asking the wrong questions. For example:

Not: should we do experiments on animals?
Instead: will we do experiments on animals?

Not: can computers think?
Instead: can computers behave in a way that is indistinguishable from human?

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 Okay, let me phrase it like this:  I specifically say (or
 rather I should have done... this is another thing I need to
 make more explicit!) that the predictions are about making
 alterations at EXACTLY the boundary of the analysis
 mechanisms.
 
 So, when we test the predictions, we must first understand
 the mechanics of human (or AGI) cognition well enough to be
 able to locate the exact scope of the analysis mechanisms.
 
 Then, we make the tests by changing things around just
 outside the reach of those mechanisms.
 
 Then we ask subjects (human or AGI) what happened to their
 subjective experiences.  If the subjects are ourselves -
 which I strongly suggest must be the case - then we can ask
 ourselves what happened to our subjective experiences.
 
 My prediction is that if the swaps are made at that
 boundary, then things will be as I state.  But if changes
 are made within the scope of the analysis mechanisms, then
 we will not see those changes in the qualia.
 
 So the theory could be falsified if changes in the qualia
 are NOT consistent with the theory, when changes are made at
 different points in the system.  The theory is all about the
 analysis mechanisms being the culprit, so in that sense it
 is extremely falsifiable.
 
 Now, correct me if I am wrong, but is there anywhere else
 in the literature where you have seen anyone make a
 prediction that the qualia will be changed by the alteration
 of a specific mechanism, but not by other, fairly similar
 alterations?

Your predictions are not testable. How do you know if another person has 
experienced a change in qualia, or is simply saying that they do? If you do the 
experiment on yourself, how do you know if you really experience a change in 
qualia, or only believe that you do?

There is a difference, you know. Belief is only a rearrangement of your 
neurons. I have no doubt that if you did the experiments you describe, the 
brains would be rearranged consistently with your predictions. But what does 
that say about consciousness?

-- Matt Mahoney, [EMAIL PROTECTED]





Dan Dennett [WAS Re: [agi] A paper that actually does solve the problem of consciousness]

2008-11-17 Thread Richard Loosemore

Ben Goertzel wrote:


Ed,

BTW on this topic my view seems closer to Richard's than yours, though 
not anywhere near identical to his either.  Maybe I'll write a blog post 
on consciousness to clarify, it's too much for an email...


I am very familiar with Dennett's position on consciousness, as I'm sure 
Richard is, but I consider it a really absurd and silly argument.  I'll 
clarify in a blog post sometime soon, but I don't have time for it now.


Anyway, arguing that experience basically doesn't exist, which is what 
Dennett does, certainly doesn't solve the hard problem as posed by 
Chalmers ... it just claims that the hard problem doesn't exist...


ben


Agreed.

I like Dennett's analytical style in many ways, but I was disappointed 
when I realized where he was going with the multiple drafts account.


He falls into a classic trap.  Chalmers says: "Whooaa!  There is a big, 
3-part problem here:  (1) We can barely even define what we mean by 
consciousness, (2) That fact of its indefinability seems almost 
intrinsic to the definition of it!, and then (3) Nevertheless, most of 
us are convinced that there is something significant that needs to be 
explained here."


So Chalmers is *pointing* at the dramatic conjunction of the three 
things (inexplicability, an inexplicability that seems intrinsic to the 
definition, and something that nevertheless needs to be explained) ... and he is saying that these 
three combined make a very, very hard problem.


But then what Dennett does is walk right up and say "Whooaa!  There is a 
big problem here:  (1) You can barely even define what you mean by 
consciousness, so you folks are just confused."


Chalmers is trying to get Dennett to go upstairs and look at the problem 
from a higher perspective, but Dennett digs in his heels and insists on 
looking at the problem *only* from the ground-floor level.  He can only 
see the fact that there is a problem with defining it; he cannot see the fact 
that this problem is itself interesting.


What I have tried to do is take it one step further and say that if we 
understand the nature of the confusion we can actually resolve it 
(albeit in a weird kind of way).






Richard Loosemore




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser
I have no doubt that if you did the experiments you describe, the 
brains would be rearranged consistently with your predictions. But what 
does that say about consciousness?


What are you asking about consciousness?


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, November 17, 2008 1:11 PM
Subject: Re: [agi] A paper that actually does solve the problem of 
consciousness




--- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote:

Okay, let me phrase it like this:  I specifically say (or
rather I should have done... this is another thing I need to
make more explicit!) that the predictions are about making
alterations at EXACTLY the boundary of the analysis
mechanisms.

So, when we test the predictions, we must first understand
the mechanics of human (or AGI) cognition well enough to be
able to locate the exact scope of the analysis mechanisms.

Then, we make the tests by changing things around just
outside the reach of those mechanisms.

Then we ask subjects (human or AGI) what happened to their
subjective experiences.  If the subjects are ourselves -
which I strongly suggest must be the case - then we can ask
ourselves what happened to our subjective experiences.

My prediction is that if the swaps are made at that
boundary, then things will be as I state.  But if changes
are made within the scope of the analysis mechanisms, then
we will not see those changes in the qualia.

So the theory could be falsified if changes in the qualia
are NOT consistent with the theory, when changes are made at
different points in the system.  The theory is all about the
analysis mechanisms being the culprit, so in that sense it
is extremely falsifiable.

Now, correct me if I am wrong, but is there anywhere else
in the literature where you have seen anyone make a
prediction that the qualia will be changed by the alteration
of a specific mechanism, but not by other, fairly similar
alterations?


Your predictions are not testable. How do you know if another person has 
experienced a change in qualia, or is simply saying that they do? If you 
do the experiment on yourself, how do you know if you really experience a 
change in qualia, or only believe that you do?


There is a difference, you know. Belief is only a rearrangement of your 
neurons. I have no doubt that if you did the experiments you describe, 
the brains would be rearranged consistently with your predictions. 
But what does that say about consciousness?


-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Harry Chesley

On 11/14/2008 9:27 AM, Richard Loosemore wrote:


 I completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can be
 found at:

 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


Good paper.

A related question: How do you explain the fact that we sometimes are 
aware of qualia and sometimes not? You can perform the same actions 
paying attention or on auto pilot. In one case, qualia manifest, 
while in the other they do not. Why is that?






Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Matt Mahoney wrote:

--- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote:

Okay, let me phrase it like this:  I specifically say (or rather I
should have done... this is another thing I need to make more
explicit!) that the predictions are about making alterations at
EXACTLY the boundary of the analysis mechanisms.

So, when we test the predictions, we must first understand the
mechanics of human (or AGI) cognition well enough to be able to
locate the exact scope of the analysis mechanisms.

Then, we make the tests by changing things around just outside the
reach of those mechanisms.

Then we ask subjects (human or AGI) what happened to their 
subjective experiences.  If the subjects are ourselves - which I

strongly suggest must be the case - then we can ask ourselves what
happened to our subjective experiences.

My prediction is that if the swaps are made at that boundary, then
things will be as I state.  But if changes are made within the
scope of the analysis mechanisms, then we will not see those
changes in the qualia.

So the theory could be falsified if changes in the qualia are NOT
consistent with the theory, when changes are made at different
points in the system.  The theory is all about the analysis
mechanisms being the culprit, so in that sense it is extremely
falsifiable.

Now, correct me if I am wrong, but is there anywhere else in the
literature where you have seen anyone make a prediction that
the qualia will be changed by the alteration of a specific
mechanism, but not by other, fairly similar alterations?


Your predictions are not testable. How do you know if another person
has experienced a change in qualia, or is simply saying that they do?
If you do the experiment on yourself, how do you know if you really
experience a change in qualia, or only believe that you do?

There is a difference, you know. Belief is only a rearrangement of
your neurons. I have no doubt that if you did the experiments you
describe, the brains would be rearranged consistently with your
predictions. But what does that say about consciousness?


Yikes, whatever happened to the incorrigibility of belief?!

You seem to have a bone or two to pick with Descartes:  please don't ask me!



Richard Loosemore






Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Harry Chesley wrote:

On 11/14/2008 9:27 AM, Richard Loosemore wrote:


 I completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can be
 found at:

 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf 



Good paper.

A related question: How do you explain the fact that we sometimes are 
aware of qualia and sometimes not? You can perform the same actions 
paying attention or on auto pilot. In one case, qualia manifest, 
while in the other they do not. Why is that?


I actually *really* like this question:  I was trying to compose an 
answer to it while lying in bed this morning.


This is what I started referring to (in a longer version of the paper) 
as a Consciousness Holiday.


In fact, if we start unpacking the idea of what we mean by conscious 
experience, we start to realize that it only really exists when we look 
at it.  It is not even logically possible to think about consciousness - 
any form of it, including *memories* of the consciousness that I had a 
few minutes ago, when I was driving along the road and talking to my 
companion without bothering to look at several large towns that we drove 
through - without applying the analysis mechanism to the consciousness 
episode.


So when I don't remember anything about those towns, from a few minutes 
ago on my road trip, is it because (a) the attentional mechanism did not 
bother to lay down any episodic memory traces, so I cannot bring back 
the memories and analyze them, or (b) that I was actually not 
experiencing any qualia during that time when I was on autopilot?


I believe that the answer is (a), and that IF I had stopped at any point 
during the observation period and thought about the experience I just 
had, I would be able to appreciate the last few seconds of subjective 
experience.
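
One toy way to picture option (a), as opposed to (b) (again, a sketch I am 
making up purely for illustration; none of the names below come from the 
paper): the moment gets processed either way, but unless the attentional 
mechanism lays down an episodic trace at the time, there is nothing left for 
later analysis to retrieve.

# Illustrative toy only: attention gates whether an episodic trace is stored,
# not whether the moment was processed when it happened.
episodic_memory = []

def live_moment(scene, attending):
    processed = "perceived " + scene        # processing happens on autopilot too
    if attending:
        episodic_memory.append(processed)   # only attended moments leave a trace
    return processed

def recall():
    """Later introspection can only analyze what was actually laid down."""
    return list(episodic_memory)

for scene, attending in [("town A", False), ("town B", False), ("roadside stop", True)]:
    live_moment(scene, attending)

print(recall())   # ['perceived roadside stop']: the unattended towns are absent

The absence of a trace afterwards is then a fact about memory, not direct 
evidence about what was or was not experienced at the time.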


The real reply to your question goes much much deeper, and it is 
fascinating because we need to get a handle on creatures that probably 
do not do any reflective, language-based philosophical thinking (like 
guinea pigs and crocodiles).  I want to say more, but will have to set 
it down in a longer form.


Does this seem to make sense so far, though?




Richard Loosemore




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser

An excellent question from Harry . . . .

So when I don't remember anything about those towns, from a few minutes 
ago on my road trip, is it because (a) the attentional mechanism did not 
bother to lay down any episodic memory traces, so I cannot bring back the 
memories and analyze them, or (b) that I was actually not experiencing any 
qualia during that time when I was on autopilot?


I believe that the answer is (a), and that IF I had stopped at any point 
during the observation period and thought about the experience I just had, 
I would be able to appreciate the last few seconds of subjective 
experience.


So . . . . what if the *you* that you/we speak of is simply the attentional 
mechanism?  What if qualia are simply the way that other brain processes 
appear to you/the attentional mechanism?


Why would you be experiencing qualia when you were on autopilot?  It's 
quite clear from experiments that humans don't see things in their visual 
field when they are concentrating on other things in their visual field (for 
example, when you are told to concentrate on counting something that someone 
is doing in the foreground while a man in an ape suit walks by in the 
background).  Do you really have qualia from stuff that you don't sense 
(even though your sensory apparatus picked it up, it was clearly discarded 
at some level below the conscious/attentional level)?




- Original Message - 
From: Richard Loosemore [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, November 17, 2008 1:46 PM
Subject: **SPAM** Re: [agi] A paper that actually does solve the problem of 
consciousness




Harry Chesley wrote:

On 11/14/2008 9:27 AM, Richard Loosemore wrote:


 I completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can be
 found at:


http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


Good paper.

A related question: How do you explain the fact that we sometimes are 
aware of qualia and sometimes not? You can perform the same actions 
paying attention or on auto pilot. In one case, qualia manifest, 
while in the other they do not. Why is that?


I actually *really* like this question:  I was trying to compose an answer 
to it while lying in bed this morning.


This is what I started referring to (in a longer version of the paper) as 
a Consciousness Holiday.


In fact, if we start unpacking the idea of what we mean by conscious 
experience, we start to realize that it only really exists when we look at 
it.  It is not even logically possible to think about consciousness - any 
form of it, including *memories* of the consciousness that I had a few 
minutes ago, when I was driving along the road and talking to my companion 
without bothering to look at several large towns that we drove through - 
without applying the analysis mechanism to the consciousness episode.


So when I don't remember anything about those towns, from a few minutes 
ago on my road trip, is it because (a) the attentional mechanism did not 
bother to lay down any episodic memory traces, so I cannot bring back the 
memories and analyze them, or (b) that I was actually not experiencing any 
qualia during that time when I was on autopilot?


I believe that the answer is (a), and that IF I had stopped at any point 
during the observation period and thought about the experience I just had, 
I would be able to appreciate the last few seconds of subjective 
experience.


The real reply to your question goes much much deeper, and it is 
fascinating because we need to get a handle on creatures that probably do 
not do any reflective, language-based philosophical thinking (like guinea 
pigs and crocodiles).  I want to say more, but will have to set it down in 
a longer form.


Does this seem to make sense so far, though?




Richard Loosemore










Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Ben Goertzel
Thanks Richard ... I will re-read the paper with this clarification in mind.
On the face of it, I tend to agree that the concept of explanation is
fuzzy and messy and probably is not, in its standard form, useful for
dealing with consciousness

However, I'm still uncertain as to whether your deconstruction and
reconstruction of the notion of explanation counts as

a) a solution of Chalmers' hard problem

b) an explanation of why Chalmers' hard problem is ill-posed

I'll reflect on this more as I re-read the paper...

ben


On Mon, Nov 17, 2008 at 8:38 AM, Richard Loosemore [EMAIL PROTECTED]wrote:

 Ben Goertzel wrote:


Sorry to be negative, but no, my proposal is not in any way a
modernization of Peirce's metaphysical analysis of awareness.



 Could you elaborate the difference?  It seems very similar to me.   You're
 saying that consciousness has to do with the bottoming-out of mental
 hierarchies in raw percepts that are unanalyzable by the mind ... and
 Peirce's Firsts are precisely raw percepts that are unanalyzable by the
 mind...


 It is partly the stance (I arrive at my position from a cognitivist point
 of view, with specific mechanisms that must be causing the problem), whereas
 Peirce appears to suggest the Firsts idea as a purely metaphysical proposal.

 So, what I am saying is that the resemblance between his position and mine
 is so superficial that it makes no sense to describe the latter as a
 modernization of the former.

 A good analogy would be Galilean Relativity and Einstein's Relativity.
 Although there is a superficial resemblance, nobody would really say that
 Einstein was just a modernization of Galileo.



 ***
 The standard meaning of Hard Problem issues was described very well by
 Chalmers, and I am addressing the hard problem of consciousness, not the
 other problems.
 ***

 Hmmm  I don't really understand why you think your argument is a
 solution to the hard problem  It seems like you explicitly acknowledge
 in your paper that it's *not*, actually  It's more like a philosophical
 argument as to why the hard problem is unsolvable, IMO.


 No, that is only part one of the paper, and as you pointed out before, the
 first part of the proposal ends with a question, not a statement that this
 was a failure to explain the problem.  That question was important.

 The important part is the analysis of explanation and meaning.  This
 can also be taken to be about your use of the word unsolvable in the above
 sentence.

 What I am claiming (and I will make this explicit in a revision of the
 paper) is that these notions of explanation, meaning, solution to the
 problem, etc., are pushed to their breaking point by the problem of
 consciousness.  So it is not that there is a problem with understanding
 consciousness itself, so much as there is a problem with what it means to
 *explain* things.

 Other things are easy to explain, but when we ask for an explanation of
 something like consciousness, the actual notion of explanation breaks down
 in a drastic way.  This is very closely related to the idea of an objective
 observer in physics: in the quantum realm that notion breaks down.

 What I gave in my paper was (a) a detailed description of how the confusion
 about consciousness arises [peculiar behavior of the analysis mechanism],
 but then (b) I went on to point out this peculiar behavior infects much more
 than just our ability to explain consciousness, because it casts doubt on
 the fundamental meaning of explanation and semantics and ontology.

 The conclusion that I then tried to draw was that it would be wrong to say
 that consciousness was just an artifact or (ordinarily) inexplicable thing,
 because this would be to tacitly assume that the sense of explain that we
 are using in these statements is the same one we have always used.  Anyone
 who continued to use explain and mean (etc.) in their old context would
 be stuck in what I have called Level 0, and in that level the old meanings
 [sic] of those terms are just not able to address the issue of
 consciousness.

 Go back to the quantum mechanics analogy again:  it is not right to cling
 to old ideas of position and momentum, etc., and say that we simply do not
 know the position of an electron.  The real truth - the new truth about
 how we should understand position and momentum - is that the position of
 the electron is fundamentally not even determined (without observation).

 This analogy is not just an analogy, as I think you might begin to guess:
  there is a deep relationship between these two domains, and I am still
 working on a way to link them.





 Richard Loosemore.



















-- 
Ben Goertzel, 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Harry Chesley
Richard Loosemore wrote:
 Harry Chesley wrote:
 A related question: How do you explain the fact that we sometimes
 are aware of qualia and sometimes not? You can perform the same
 actions paying attention or on auto pilot. In one case, qualia
 manifest, while in the other they do not. Why is that?

 I actually *really* like this question:  I was trying to compose an
 answer to it while lying in bed this morning.

 ...

 So when I don't remember anything about those towns, from a few
 minutes ago on my road trip, is it because (a) the attentional
 mechanism did not bother to lay down any episodic memory traces, so I
 cannot bring back the memories and analyze them, or (b) that I was
 actually not experiencing any qualia during that time when I was on
 autopilot?

 I believe that the answer is (a), and that IF I had stopped at any
 point during the observation period and thought about the experience
 I just had, I would be able to appreciate the last few seconds of
 subjective experience.

 ...

 Does this seem to make sense so far, though?

It sounds reasonable. I would suspect (a) also, and that the reason is
that these are circumstances where remembering is a waste of resources,
either because the task being done on auto-pilot is so well understood
that it won't need to be analyzed later, and/or because there is another
task in the works at the same time that has more need for the memory
resources.

Note that your supposition about remembering the last few seconds if
interrupted during an auto-pilot task is experimentally verifiable
fairly easily.





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Harry Chesley
Richard Loosemore wrote:

 I completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can be
 found at:

 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

One other point: Although this is a possible explanation for our
subjective experience of qualia like "red" or "soft", I don't see it
explaining "pain" or "happy" quite so easily. You can hypothesize a sort
of mechanism-level explanation of those by relegating them to the older
or lower parts of the brain (i.e., they're atomic at the conscious
level, but have more effects at the physiological level (like releasing
chemicals into the system)), but that doesn't satisfactorily cover the
subjective side for me.





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Harry Chesley wrote:

Richard Loosemore wrote:

I completed the first draft of a technical paper on consciousness the
other day.   It is intended for the AGI-09 conference, and it can be
found at:

http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


One other point: Although this is a possible explanation for our
subjective experience of qualia like "red" or "soft", I don't see it
explaining "pain" or "happy" quite so easily. You can hypothesize a sort
of mechanism-level explanation of those by relegating them to the older
or lower parts of the brain (i.e., they're atomic at the conscious
level, but have more effects at the physiological level (like releasing
chemicals into the system)), but that doesn't satisfactorily cover the
subjective side for me.


I do have a quick answer to that one.

Remember that the core of the model is the *scope* of the analysis 
mechanism.  If there is a sharp boundary (as well there might be), then 
this defines the point where the qualia kick in.  Pain receptors are 
fairly easy:  they are primitive signal lines.  Emotions are, I believe, 
caused by clusters of lower brain structures, so the interface between 
lower brain and foreground is the place where the foreground sees a 
limit to the analysis mechanisms.


More generally, the significance of the foreground is that it sets a 
boundary on how far the analysis mechanisms can reach.


I am not sure why that would seem less satisfactory as an explanation of 
the subjectivity.  It is a raw feel, and that is the key idea, no?




Richard Loosemore




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Mark Waser wrote:

An excellent question from Harry . . . .

So when I don't remember anything about those towns, from a few 
minutes ago on my road trip, is it because (a) the attentional 
mechanism did not bother to lay down any episodic memory traces, so I 
cannot bring back the memories and analyze them, or (b) that I was 
actually not experiencing any qualia during that time when I was on 
autopilot?


I believe that the answer is (a), and that IF I had stopped at any 
point during the observation period and thought about the experience I 
just had, I would be able to appreciate the last few seconds of 
subjective experience.


So . . . . what if the *you* that you/we speak of is simply the 
attentional mechanism?  What if qualia are simply the way that other 
brain processes appear to you/the attentional mechanism?


Why would you be experiencing qualia when you were on autopilot?  It's 
quite clear from experiments that humans don't see things in their 
visual field when they are concentrating on other things in their visual 
field (for example, when you are told to concentrate on counting 
something that someone is doing in the foreground while a man in an ape 
suit walks by in the background).  Do you really have qualia from stuff 
that you don't sense (even though your sensory apparatus picked it up, 
it was clearly discarded at some level below the conscious/attentional 
level)?


Yes, I did not mean to imply that all unattended stimuli register in 
consciousness.  Clearly there are things that are simply not seen, even 
when they are in the visual field.


But I would distinguish between that and a situation where you drive for 
50 miles and do not have a memory afterwards of the places you went 
through.  I do not think that we fail to see the road, the towns, and the 
other traffic in the same sense that we fail to see an unattended 
stimulus in a dual-task experiment, for example.


But then, there are probably intermediate cases.

Some of the recent neural imaging work is relevant in this respect.  I 
will think some more about this whole issue.




Richard Loosemore









- Original Message - From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, November 17, 2008 1:46 PM
Subject: **SPAM** Re: [agi] A paper that actually does solve the problem 
of consciousness




Harry Chesley wrote:

On 11/14/2008 9:27 AM, Richard Loosemore wrote:


 I completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can be
 found at:


http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf 



Good paper.

A related question: How do you explain the fact that we sometimes are 
aware of qualia and sometimes not? You can perform the same actions 
paying attention or on auto pilot. In one case, qualia manifest, 
while in the other they do not. Why is that?


I actually *really* like this question:  I was trying to compose an 
answer to it while lying in bed this morning.


This is what I started referring to (in a longer version of the paper) 
as a Consciousness Holiday.


In fact, if we start unpacking the idea of what we mean by conscious 
experience, we start to realize that it only really exists when we 
look at it.  It is not even logically possible to think about 
consciousness - any form of it, including *memories* of the 
consciousness that I had a few minutes ago, when I was driving along 
the road and talking to my companion without bothering to look at 
several large towns that we drove through - without applying the 
analysis mechanism to the consciousness episode.


So when I don't remember anything about those towns, from a few 
minutes ago on my road trip, is it because (a) the attentional 
mechanism did not bother to lay down any episodic memory traces, so I 
cannot bring back the memories and analyze them, or (b) that I was 
actually not experiencing any qualia during that time when I was on 
autopilot?


I believe that the answer is (a), and that IF I had stopped at any 
point during the observation period and thought about the experience I 
just had, I would have been able to appreciate the last few seconds of 
subjective experience.


The real reply to your question goes much much deeper, and it is 
fascinating because we need to get a handle on creatures that probably 
do not do any reflective, language-based philosophical thinking (like 
guinea pigs and crocodiles).  I want to say more, but will have to set 
it down in a longer form.


Does this seem to make sense so far, though?




Richard Loosemore









Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Colin Hales



Richard Loosemore wrote:

Colin Hales wrote:

Dear Richard,
I have an issue with the 'falsifiable predictions' being used as 
evidence of your theory.


The problem is that right or wrong...I have a working physical model 
for consciousness. Predictions 1-3 are something that my hardware can 
do easily. In fact that kind of experimentation is in my downstream 
implementation plan. These predictions have nothing whatsoever to do 
with your theory or mine or anyone's. I'm not sure about prediction 4. 
It's not something I have thought about, so I'll leave it aside for 
now. In my case, in the second stage of testing of my chips, one of 
the things I want to do is literally 'Mind Meld', forming a bridge of 
4 sets of compared, independently generated qualia. Ultimately the 
chips may be implantable, which means a human could experience what 
they generate in the first person...but I digress


Your statement "This theory of consciousness can be used to make some 
falsifiable predictions" could be replaced by "ANY theory of 
consciousness can be used to make falsifiable predictions 1..4 as 
follows..."  Which basically says they are not predictions that falsify 
anything at all. In which case the predictions cannot be claimed to 
support your theory. The problem is that the evidence of predictions 
1-4 acts merely as a correlate. It does not test any particular 
critical dependency (causality origins). The predictions are merely 
correlates of any theory of consciousness. They do not test the 
causal necessities. In any empirical science paper the evidence could 
not be held in support of the claim and would be 
discounted as evidence of your mechanism. I could cite 10 different 
computationalist AGI knowledge metaphors in the sections preceding 
the 'predictions' and the result would be the same.


So... if I were a reviewer I'd be unable to accept the claim that your 
'predictions' actually said anything about the theory preceding them. 
This would seem to be the problematic issue of the paper. You might 
want to take a deeper look at this issue and try to isolate something 
unique to your particular solution - which has  a real critical 
dependency in it. Then you'll  have an evidence base of your own that 
people can use independently. In this way your proposal  could be 
seen to be scientific in the dry empirical sense.


By way of example... a computer program is  not scientific evidence 
of anything. The computer materials, as configured by the program, 
actually causally necessitate the behaviour. The program is a 
correlate. A correlate has the formal evidentiary status of 
'hearsay'. This is the sense in which I invoke the term 'correlate' 
above.


BTW I have fallen foul of this problem myself...I had to look 
elsewhere for real critical dependency, like I suggested above. You 
never know, you might find one in there someplace! I found one after 
a lot of investigation. You might, too.


Regards,

Colin Hales


Okay, let me phrase it like this:  I specifically say (or rather I 
should have done... this is another thing I need to make more 
explicit!) that the predictions are about making alterations at 
EXACTLY the boundary of the analysis mechanisms.


So, when we test the predictions, we must first understand the 
mechanics of human (or AGI) cognition well enough to be able to locate 
the exact scope of the analysis mechanisms.


Then, we make the tests by changing things around just outside the 
reach of those mechanisms.


Then we ask subjects (human or AGI) what happened to their subjective 
experiences.  If the subjects are ourselves - which I strongly suggest 
must be the case - then we can ask ourselves what happened to our 
subjective experiences.


My prediction is that if the swaps are made at that boundary, then 
things will be as I state.  But if changes are made within the scope 
of the analysis mechanisms, then we will not see those changes in the 
qualia.


So the theory could be falsified if changes in the qualia are NOT 
consistent with the theory, when changes are made at different points 
in the system.  The theory is all about the analysis mechanisms being 
the culprit, so in that sense it is extremely falsifiable.


Now, correct me if I am wrong, but is there anywhere else in the 
literature where you have seen anyone make a prediction that the 
qualia will be changed by the alteration of a specific mechanism, but 
not by other, fairly similar alterations?





Richard Loosemore

At the risk of lecturing the already-informed --- Qualia generation has 
been highly localised into specific regions in *cranial* brain material 
already. Qualia are not in the periphery. Qualia are not in the spinal 
CNS. Qualia are not in the cranial periphery, e.g. eyes or lips. Qualia are 
generated in specific CNS cortex and basal regions. So anyone who thinks 
they have a mechanism consistent with physiological knowledge could 
conceive of alterations reconnecting periphery and 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore





Sorry for the late reply.  Got interrupted.


Vladimir Nesov wrote:

(I'm sorry that I make some unclear statements on semantics/meaning,
I'll probably get to the description of this perspective later on the
blog (or maybe it'll become obsolete before that), but it's a long
story, and writing it up on the spot isn't an option.)

On Sat, Nov 15, 2008 at 2:18 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

Taking the position that consciousness is an epiphenomenon and is therefore
meaningless has difficulties.


Rather, p-zombieness in an atom-by-atom identical environment is an epiphenomenon.


By saying that it is an epiphenomenon, you actually do not answer the
questions about intrinsic qualities and how they relate to other things in
the universe.  The key point is that we do have other examples of
epiphenomena (e.g. smoke from a steam train),


What do you mean by smoke being epiphenomenal?


The standard philosophical term, no?  A phenomenon that is associated
with something, but which plays no causal role in the functioning of
that something.

Thus:  smoke coming from a steam train is always there when it is running,
but the smoke does not cause the steam train to do anything.  It is just
a byproduct.






but their ontological status
is very clear:  they are things in the world.  We do not know of other
things with such puzzling ontology (like consciousness), that we can use as
a clear analogy, to explain what consciousness is.

Also, it raises the question of *why* there should be an epiphenomenon.
 Calling it an E does not tell us why such a thing should happen.  And it
leaves us in the dark about whether or not to believe that other systems
that are not atom-for-atom identical with us, should also have this
epiphenomenon.


I don't know how to parse the word epiphenomenon in this context. I
use it to describe reference-free, meaningless concepts, so you can't
say that some epiphenomenon is present here or there, that would be
meaningless.


I think the problem is that you are confusing epiphenomenon with 
something else.


Where did you get the idea that an epiphenomenon was a reference-free, 
meaningless concept?  Not from Eliezer's reference-free, meaningless 
ramblings on his blog, I hope?  ;-)





Jumping into molecular framework as describing human cognition is
unwarranted. It could be a description of AGI design, or it could be a
theoretical description of more general epistemology, but as presented
it's not general enough to automatically correspond to the brain.
Also, semantics of atoms is tricky business, for all I know it keeps
shifting with the focus of attention, often dramatically. Saying that
self is a cluster of atoms doesn't cut it.

I'm not sure of what you are saying, exactly.

The framework is general in this sense:  its components have *clear*
counterparts in all models of cognition, both human and machine.  So, for
example, if you look at a system that uses logical reasoning and bare
symbols, that formalism will differentiate between the symbols that are
currently active, and playing a role in the system's analysis of the world,
and those that are not active.  That is the distinction between foreground
and background.
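As a purely illustrative sketch of that distinction (in Python; the symbol 
names, activation levels, and threshold are assumed here for illustration 
and are not part of the framework), a symbol store with activation levels 
divides naturally into a foreground of currently active atoms and a 
background of inactive ones:

# Toy illustration: splitting a symbol store into "foreground"
# (currently active atoms, playing a role in the system's analysis of
# the world) and "background" (stored but inactive atoms).
ACTIVATION_THRESHOLD = 0.5   # assumed cutoff, purely illustrative

atoms = {
    "self": 0.9,          # active right now
    "cello-sound": 0.7,   # active right now
    "red": 0.1,           # stored, but not currently in use
    "grandmother": 0.0,   # stored, but not currently in use
}

foreground = [name for name, level in atoms.items() if level >= ACTIVATION_THRESHOLD]
background = [name for name, level in atoms.items() if level < ACTIVATION_THRESHOLD]

print("foreground:", foreground)
print("background:", background)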


Without a working, functional theory of cognition, this high-level
descriptive picture has little explanatory power. It might be a step
towards developing a useful theory, but it doesn't explain anything.
There is a set of states of mind that correlates with experience of
apples, etc. So what? You can't build a detailed edifice on general
principles and claim that far-reaching conclusions apply to actual
brain. They might, but you need a semantic link from theory to
described functionality.


Sorry, I don't follow you here.

If you think that there was some aspect of the framework that might NOT 
show up in some architecture for a thinking system, you should probably 
point to it.


I think that the architecture was general, but it referred to a specific 
component (the analysis mechanism) that was well-specified enough to be 
usable in the theory.  And that was all I needed.


If there is some specific way that it doesn't work, you will probably 
have to pin it down and tell me, because I don't see it.







As for the self symbol, there was no time to go into detail.  But there
clearly is an atom that represents the self.


*shrug*
It only stands as definition, there is no self-neuron, or something
easily identifiable as self, it's a complex thing. I'm not sure I
even understand what self refers to subjectively, I don't feel any
clear focus of self-perception, my experience is filled with thoughts
on many things, some of them involving management of thought process,
some of external concepts, but no unified center to speak of...


No, no:  what I meant by self was that somewhere in the system it must 
have a representation for its own self, or it will have a missing 
concept.  Also, in any system there is a basic source of action --- 
some place that is the 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore

Colin Hales wrote:



[snip]

At the risk of lecturing the already-informed --- Qualia generation has 
been highly localised into specific regions in *cranial* brain material 
already. Qualia are not in the periphery. Qualia are not in the spinal 
CNS. Qualia are not in the cranial periphery, e.g. eyes or lips. Qualia are 
generated in specific CNS cortex and basal regions. 


You are assuming that my references to the *foreground* periphery 
correspond to the physical brain's periphery.


That is 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mike Tintner
Colin: Qualia generation has been highly localised into specific regions in 
cranial brain material already. Qualia are not in the periphery. Qualia are not 
in the spinal CNS. Qualia are not in the cranial periphery, e.g. eyes or lips

Colin,

This is to a great extent nonsense. Which sensation/emotion (qualia is a word 
strictly for philosophers, not scientists, I suggest) is not located in the 
body? When you are angry, do you never frown or bite or tense your lips? The brain 
helps to generate the emotion (and note: *helps*). But emotions are bodily 
events - and *felt* bodily.

This whole discussion ignores the primary paradox about consciousness (which 
is first and foremost sentience): *the brain doesn't feel a thing* - 
sentience/feeling is located in the body outside the brain. When a surgeon cuts 
your brain, you feel nothing. You feel and are conscious of your emotions in 
and with your whole body. 

Consciousness is a *whole body* affair. Mere computers have no way of copying 
it. Robots perhaps.

Brains in a vat, or a black computer box, are strictly fantasies of 
philosophers and AI-ers.






Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Colin Hales



[snip]

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore


This commentary represents a fundamental misunderstanding of both the 
paper I wrote and the background literature on the hard problem of 
consciousness.




Richard Loosemore



Ed Porter wrote:
  I respect the amount of thought that went into Richard’s paper 
“Consciousness in Human and Machine: A Theory and Some Falsifiable 
Predictions” --- but I do not think it provides a good explanation of 
consciousness. 

 

  It seems to spend more time explaining the limitations on what we 
can know about consciousness than explaining consciousness, itself.  
What little the paper says about consciousness can be summed up roughly 
as follows: that consciousness is created by a system that can analyze 
and seek explanations from some, presumably experientially-learned, 
knowledgebase, based on associations between nodes in that 
knowledgebase, and that it can determine when it cannot describe a given 
node further, in terms of relations to other nodes, but nevertheless 
senses the given node is real (such as the way it is difficult for a 
human to explain what it is like to sense the color red).


 

  First, I disagree with the paper’s allegation that “analysis” of 
conscious phenomena necessarily “bottoms out” more than analyses of many 
other aspects of reality.  Second, I disagree that conscious phenomena 
are beyond any scientific explanation. 

 

  With regard to the first, I feel our minds contain substantial 
memories of various conscious states, and thus there is actually 
substantial experiential grounding of many aspects of consciousness 
recorded in our brains.  This is particularly true for the consciousness 
of emotional states (for example, brain scans on very young infants 
indicate a high percent of their mental activity is in emotional centers 
of the brain).  I developed many of my concepts of how to design an AGI 
based on reading brain science and performing introspection into my own 
conscious and subconscious thought processes, and I found it quite easy 
to draw many generalities from the behavior of my own conscious mind.  
Since I view the subconscious to be at the same time both a staging area 
for, and a reactive audience for, conscious thoughts, I think one has to 
view the subconscious and consciousness as part of a functioning whole. 

 

  When I think of the color red, I don’t bottom out.  Instead I have 
many associations with my experiences of redness that provide it with 
deep grounding.  As with the description of any other concept, it is 
hard to explain how I experience red to others, other than through 
experiences we share relating to that concept.  This would include 
things we see in common to be red, or perhaps common emotional 
experiences to seeing the red of blood that has been spilled in 
violence, or the way the sensation of red seems to fill a 2 dimensional 
portion of an image that we perceive as a two dimensional distribution 
of differently colored areas.   But I can communicate within my own mind 
across time what it is like to sense red, such as in dreams when my eyes 
are closed.  Yes, the experience of sensing red does not decompose into 
parts, such as the way the sensed image of a human body can be 
decomposed into the seeing of subordinate parts, but that does not 
necessarily mean that my sensing of something that is a certain color of 
red is somehow more mysterious than my sensing of seeing a human body.


 

  With regard to the second notion, that conscious phenomena are not 
subject to scientific explanation, there is extensive evidence to the 
contrary.  The prescient psychological writings of William James, and 
Dr. Alexander Luria’s famous studies of the effects of variously located 
bullet wounds on the minds of Russian soldiers after World War II, both 
illustrate that human consciousness can be scientifically studied.  The 
effects of various drugs on consciousness have been scientifically 
studied.  Multiple experiments have shown that the presence or absence 
of synchrony between neural firings in various parts of the brain have 
been strongly correlated with human subjects reporting the presence or 
absence, respectively, of conscious experience of various thoughts or 
sensory inputs.  Multiple studies have shown that electrode stimulation 
to different parts of the brain tend to make the human consciousness 
aware of different thoughts.  Our own personal experiences with our own 
individual consciousnesses, the current scientific levels of knowledge 
about commonly reported conscious experiences, and increasingly more 
sophisticated ways to correlate objectively observable brain states with 
various reports of human conscious experience, all indicate that 
consciousness already is subject to scientific explanation.  In the 
future, particularly with the advent of much more sophisticated brain 
scanning tools, and with the development of AGI, consciousness will be 
much more subject to scientific explanation.


 


RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 
 I completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can be
 found at:
 
 http://susaro.com/wp-
 content/uploads/2008/11/draft_consciousness_rpwl.pdf
 


Um... this is a model of consciousness. One way of looking at it.
Whether or not it is comprehensive enough, not sure, this irreducible
indeterminacy. But after reading the paper a couple times I get what you are
trying to describe. It's part of an essence of consciousness but not sure if
it is enough.

Kind of reminds me of Curly's view of consciousness - I'm trying to think
but nothing happens!

John





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

John G. Rose wrote:

From: Richard Loosemore [mailto:[EMAIL PROTECTED]

I completed the first draft of a technical paper on consciousness the
other day.   It is intended for the AGI-09 conference, and it can be
found at:

http://susaro.com/wp-
content/uploads/2008/11/draft_consciousness_rpwl.pdf




Um... this is a model of consciousness. One way of looking at it.
Whether or not it is comprehensive enough, not sure, this irreducible
indeterminacy. But after reading the paper a couple times I get what you are
trying to describe. It's part of an essence of consciousness but not sure if
it is enough.


But did you notice that the paper argued that if you think on the base 
level, you would have to have that feeling that, as you put it, "...It's 
part of an essence of consciousness but not sure if it is enough"?


The question is:  does the explanation seem consistent with an 
explanation of your feeling that it might not be enough of an explanation?






Richard Loosemore




RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Ed Porter
Matt, 

 

Although Richard's paper places considerable focus on the zombie/non-zombie
distinction, its pronouncements do not appear to be so limited.  For
example, its discussion of the analysis of qualia bottoming out is not so
limited, since presumably qualia and their associated conscious experience
only occur in Non-Zombies.  

 

The paper states in the last sentence of its section on page 5, entitled
“Implications”, that 

 

“...we can never say exactly what the phenomena of consciousness are, in the
way we give scientific explanation for other things.”  

 

As I said in my prior post, this is one of the major points with which I
disagree, and it does not seem to be limited to the zombie/non-zombie
distinction, since all the zombie/non-zombie distinction has to do is
provide a basis for distinguishing between zombies and non-zombies, and has
no relevance to what the phenomena of consciousness are beyond that.

 

I disagree with the above quote, because although our current technical
capabilities limit the extent to which we can make explanations about the
phenomena of consciousness, I believe we already can give initial
explanations for many aspects of consciousness and I believe that within the
next 20 to 40 years we will be able to give much greater explanations.  

 

I admit that currently there are problems in making the Zombie/non-zombie
distinction.  But this same limitation arguably applies to making the
zombie/non-zombie distinction for humans as well as AGI's.  

 

Based on my own subjective experience, I believe I have a consciousness, and
as Richard points out, it is reasonable to consider that subjective experience
as real as anything else, some would say even more real than anything else.
Since I assume other humans have brainware similar to my own --- and since
I observe outward manifestations of substantial similarities between the way the
minds and emotions of other humans appear to work, and the way my mind
appears to me to work --- I assume most other humans are not Zombies.  

 

But after serious brain damage, we are told by doctors such as Antonio
Damasio, humans can become zombies.  And we have to face medical and moral
decisions about when to pull the plug on such humans, as in the famous case
of Terri Schiavo.  The current medical and political community bases its
zombie/non-zombie decisions for humans on a partial understanding of
what human consciousness is, and on current measurements they can make
indicating whether or not such a consciousness exists.

 

When it comes to determining whether machines have consciousness of a type
that warrants better treatment than Terri Schiavo, such decisions will
probably be based on the advanced understanding of consciousness that we
will develop in the coming decades.

 

Like Richard, I do not believe the attributes of human consciousness we hold
so dear are a mere artifact.  But I don't put much faith in his definition
of consciousness as the ability to sense something is real even though
analysis of it bottoms out.

 

I believe the sense of awareness humans call consciousness is essential to
the power of the computation we call the human mind.  I believe a human-like
consciousness arises from the massively self-aware computation --- having an
internal bandwidth of over 1 million DVD channels/second --- inherent in a
massively parallel spreading activation system like the human brain --- when
a proper mechanism is available for rapidly successively selecting certain
items for broad activation in a relatively coherent manner based on the
competitive relevance or match to current goals or drives of the system of
competing assemblies of activation, and/or based on the current importance
and valence of the emotional associations of such assemblies.  

 

The activations that are most conscious are sufficiently broad that they
dynamically activate experiential memories and patterns representing the
grounded meaning of the conscious concept.  The effect of prior activations
on the brain state tends to favor the activations of those aspects of a
currently conscious concept's meaning that are most relevant to the current
context.  This contextually relevant grounding and the massively parallel
dynamic state of activation and its retention of various degrees and
patterns of activation over time, allows the consciousness to have a sense
of being aware of many things at once, and of extending between points in
time and space.
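A minimal spreading-activation sketch along these lines (in Python; the 
node names, link weights, and decay constant are invented purely for 
illustration and are not taken from the message) shows activation flowing 
along associative links while the most active assembly wins the 
competition for attention on each cycle:

# Toy spreading activation: activation flows along weighted associative
# links, decays each cycle, and the most active node "wins" attention.
import collections

links = {                              # assumed associative weights
    "cello-sound": {"music": 0.8, "red": 0.3},
    "music": {"emotion": 0.6},
    "red": {"emotion": 0.2},
    "emotion": {},
}

activation = collections.defaultdict(float)
activation["cello-sound"] = 1.0        # an initial sensory activation
DECAY = 0.5                            # assumed per-cycle decay

for cycle in range(3):
    spread = collections.defaultdict(float)
    for node, level in activation.items():
        spread[node] += level * DECAY                  # residual activation
        for neighbour, weight in links.get(node, {}).items():
            spread[neighbour] += level * weight        # spread along links
    activation = spread
    winner = max(activation, key=activation.get)       # competitive selection
    print("cycle", cycle, "most active:", winner, dict(activation))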

 

People have asked for centuries, what is it inside our mind that seems to
watch the show provided by our senses.  The answer is the tens of billions
of neurons and trillions of synapses that respond to the flood of sensory
information, and store selected portions of it in short, mid, and then long
term memory, to weave a story out of it which is labeled with recognized
patterns, and patterns of explanation.

 

Thus, I believe that the conscious/subconscious theater of the mind, with
its reactive audience of billions of neurons, 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

Matt Mahoney wrote:

--- On Sat, 11/15/08, Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- On Sat, 11/15/08, Richard Loosemore [EMAIL PROTECTED]
wrote:


This is equivalent to your prediction #2 where connecting the
output of neurons that respond to the sound of a cello to the
input of neurons that respond to red would cause a cello to
sound red. We should expect the effect to be temporary.

I'm not sure how this demonstrates consciousness. How do you
test that the subject actually experiences redness at the
sound of a cello, rather than just behaving as if
experiencing redness, for example, claiming to hear red?

You misunderstand the experiment in a very interesting way!

This experiment has to be done on the *skeptic* herself!

The prediction is that if *you* get your brain rewired, *you*
will experience this.

How do you know what I experience, as opposed to what I claim to
experience?

That is exactly the question you started with, so you

haven't gotten anywhere. I don't need proof that I experience
things. I already have that belief programmed into my brain.

Huh?

Now what are we talking about... I am confused:  I was talking
about proving my prediction.  I simply replied to your doubt about
whether a subject would be experiencing the predicted effects, or
just producing language consistent with it.  I gave you a solution
by pointing out that anyone who had an interest in the prediction
could themselves join in and be a subject.  That seemed to answer
your original question.


You are confusing truth and belief. I am not asking you to make me
believe that consciousness (that which distinguishes you from a
philosophical zombie) exists. I already believe that. I am asking you
to prove it. You haven't done that. I don't believe you can prove the
existence of anything that is both detectable and not detectable.


You are stuck in Level 0.

I showed something a great deal more sophisticated.  In fact, I 
explicitly agreed with you on a Level 0 version of what you just said: 
I actually said in the paper that I (and anyone else) cannot explain 
these phenomena qua the (Level 0) things that they appear to be.


But I went far beyond that:  I explained why people have difficulty 
defining these terms, and I explained a self-consistent understanding of 
the nature of consciousness that involves it being classified as a novel 
type of thing.


You cannot define it properly.

I can explain why you cannot define it properly.

I can both define and explain it, and part of that explanation is that 
the very nature of explanation is bound up in the solution.


But instead of understanding that the nature of explanation has to 
change to deal with the problem, you remain stuck with the old, broken 
idea of explanation, and keep trying to beat the argument with it!




Richard Loosemore




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

Ed Porter wrote:

Richard,

You have provided no basis for your argument that I have misunderstood 
your paper and the literature upon which it is based.


[snip]

My position is that we can actually describe a fairly large number of 
characteristics of our subjective experience of consciousness that most 
other intelligent people agree with.  Although we cannot know that 
others experience the color red exactly the same way we do, we can 
determine that there are multiple shared describable characteristics 
that most people claim to have with regard to their subjective 
experiences of the color red.


This is what I meant when I said that you had completely misunderstood 
both my paper and the background literature:  the statement in the above 
paragraph could only be written by a person who does not understand the 
distinction between the Hard Problem of consciousness (this being 
David Chalmers' term for it) and the Easy problems.


The precise definition of qualia, which everyone agrees on, and which 
you are flatly contradicting here, is that these things do not involve 
anything that can be compared across individuals.


Since this is an utterly fundamental concept, if you do not get this then 
it is almost impossible to discuss the topic.


Matt just tried to explain it to you.  You did not get it even then.




Richard Loosemore
















Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Mark Waser
I think the reason that the hard question is interesting at all is that 
it would presumably be OK to torture a zombie because it doesn't actually 
experience pain, even though it would react exactly like a human being 
tortured. That's an ethical question. Ethics is a belief system that 
exists in our minds about what we should or should not do. There is no 
objective experiment you can do that will tell you whether any act, such 
as inflicting pain on a human, animal, or machine, is ethical or not. The 
only thing you can measure is belief, for example, by taking a poll.


What is the point to ethics?  The reason why you can't do objective 
experiments is because *YOU* don't have a grounded concept of ethics.  The 
second that you ground your concepts in effects that can be seen in the 
real world, there are numerous possible experiments.


The same is true of consciousness.  The hard problem of consciousness is 
hard because the question is ungrounded.  Define all of the arguments in 
terms of things that appear and matter in the real world and the question 
goes away.  It's only because you invent ungrounded unprovable distinctions 
that the so-called hard problem appears.


Torturing a p-zombie is unethical because whether it feels pain or not is 
100% irrelevant in the real world.  If it 100% acts as if it feels pain, 
then for all purposes that matter it does feel pain.  Why invent this 
mystical situation where it doesn't feel pain yet acts as if it does?


Richard's paper attempts to solve the hard problem by grounding some of the 
silliness.  It's the best possible effort short of just ignoring the 
silliness and going on to something else that is actually relevant to the 
real world.


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, November 15, 2008 10:02 PM
Subject: RE: [agi] A paper that actually does solve the problem of 
consciousness



--- On Sat, 11/15/08, Ed Porter [EMAIL PROTECTED] wrote:

With regard to the second notion, that conscious phenomena are not subject 
to scientific explanation, there is extensive evidence to the contrary. The 
prescient psychological writings of William James, and Dr. Alexander Luria’s 
famous studies of the effects of variously located bullet wounds on the minds 
of Russian soldiers after World War II, both illustrate that human 
consciousness can be scientifically studied. The effects of various drugs on 
consciousness have been scientifically studied.


Richard's paper is only about the hard question of consciousness, that 
which distinguishes you from a P-zombie, not the easy question about mental 
states that distinguish between being awake or asleep.


I think the reason that the hard question is interesting at all is that it 
would presumably be OK to torture a zombie because it doesn't actually 
experience pain, even though it would react exactly like a human being 
tortured. That's an ethical question. Ethics is a belief system that exists 
in our minds about what we should or should not do. There is no objective 
experiment you can do that will tell you whether any act, such as inflicting 
pain on a human, animal, or machine, is ethical or not. The only thing you 
can measure is belief, for example, by taking a poll.


-- Matt Mahoney, [EMAIL PROTECTED]










Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Ben Goertzel
Ed / Richard,

It seems to me that Richard's proposal is in large part a modernization of
Peirce's metaphysical analysis of awareness.

Peirce introduced foundational metaphysical categories of First, Second and
Third ... where First is defined as raw unanalyzable awareness/being ...

http://www.helsinki.fi/science/commens/terms/firstness.html

To me, Richard's analysis sounds a lot like Peirce's statement that
consciousness is First...

And Ed's refutation sounds like a rejection of First as a meaningful
category, and an attempt to redirect the conversation to the level of
Third...

-- Ben G



On Sun, Nov 16, 2008 at 7:04 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

[snip]




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Mike Tintner
Richard: The precise definition of qualia, which everyone agrees on, and which 
you are flatly contradicting here, is that these things do not involve 
anything that can be compared across individuals.

Actually, we don't do a bad job of comparing our emotions/sensations - not 
remotely perfect, but not remotely as bad as the above philosophy would 
suggest. We do share each other's pains and joys to a remarkable extent. 
That's because our emotions are very much materially based and we share 
basically the same bodies and nervous systems.


The hard problem of consciousness is primarily not about 
qualia/emotions/sensations but about *sentience* - not about what a red bus or a 
warm hand stroking your face feel like to you, but about your capacity to 
feel anything at all - about your capacity not for particular types of 
emotions/sensations, but for emotion generally.


Sentience resides to a great extent in the nervous system, and whatever 
proto-nervous system preceded it in evolution. When we solve how that works 
we may solve the hard problem. Unless you believe that everything, including 
inanimate objects, feels, then the capacity of sentience clearly evolved and 
has an explanation.


(Bear in mind that AGI-ers' approaches to the problem of consciousness are 
bound to be limited by their disembodied and anti-evolutionary prejudices).








Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

Ben Goertzel wrote:



Ed / Richard,

It seems to me that Richard's proposal is in large part a modernization 
of Peirce's metaphysical analysis of awareness.


Peirce introduced foundational metaphysical categories of First, Second 
and Third ... where First is defined as raw unanalyzable awareness/being ...


http://www.helsinki.fi/science/commens/terms/firstness.html

To me, Richard's analysis sounds a lot like Peirce's statement that 
consciousness is First...


And Ed's refutation sounds like a rejection of First as a meaningful 
category, and an attempt to redirect the conversation to the level of 
Third...


Sorry to be negative, but no, my proposal is not in any way a 
modernization of Peirce's metaphysical analysis of awareness.


The standard meaning of Hard Problem issues was described very well by 
Chalmers, and I am addressing the hard problem of consciousness, not 
the other problems.


Ed is talking about consciousness in a way that plainly wanders back and 
forth between Hard Problem issues and Easy Problem issues, and as such he has 
misunderstood the entirety of what I wrote in the paper.


It might be arguable that my position relates to Feigl, but even that is 
significantly different.






Richard Loosemore




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore

Mike Tintner wrote:
[snip]

Hard Problem is a technical term.

It was invented by David Chalmers, and it has a very specific meaning.

See the Chalmers reference in my paper.




Richard Loosemore




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Richard Loosemore


Three things.


First, David Chalmers is considered one of the world's foremost 
researchers in the consciousness field (he is certainly now the most 
celebrated).  He has read the argument presented in my paper, and he has 
discussed it with me.  He understood all of it, and he does not share 
any of your concerns, nor anything remotely like your concerns.  He had 
one single reservation, on a technical point, but when I explained my 
answer, he thought it interesting and novel, and possibly quite valid.


Second, the remainder of your comments below are not coherent enough to 
be answerable, and it is not my job to walk you through the basics of 
this field.


Third, about your digression:  gravity does not escape from black 
holes, because gravity is just the curvature of spacetime.  The other 
things that cannot escape from black holes are not forces.


I will not be replying to any further messages from you because you are 
wasting my time.




Richard Loosemore





Ed Porter wrote:

Richard,

 

Thank you for your reply. 

 

It implies your article was not as clearly worded as I would have liked 
it to have been, given the interpretation you say it is limited to.  
When you said


 

subjective phenomena associated with consciousness ... have the special 
status of being unanalyzable. (last paragraph in the first column of 
page 4 of your paper.) 



  you apparently meant something much more narrow, such as

 

subjective phenomena associated with consciousness [of the type that 
cannot be communicated between people --- and/or --- of the type that 
are unanalyzable] ... have the special status of being unanalyzable.


 

If you always intended that all your statements about the limited 
ability to analyze conscious phenomena be so limited --- then you were 
right --- I misunderstood your article, at least partially. 

 

We could argue about whether a reader should have understood this narrow interpretation.  But it should be noted that Wikipedia, that unquestionable font of human knowledge, states “qualia” has multiple definitions, only some of which match the meaning you claim “everyone agrees upon,” i.e., subjective experiences that “do not involve anything that can be compared across individuals.”

 

And in Wikipedia’s description of Chalmers’ hard problem of 
consciousness, it lists questions that arguably would be covered by my 
interpretation.


 

It is your paper, and it is up to you to decide how you define things, 
and how clearly you make your definitions known.  But even given your 
narrow interpretation of conscious phenomena in your paper, I think 
there are important additional statements that can be made concerning it.


 

First, given some of the definitions of Chalmers' hard problem, it is not clear how much your definition adds.


 

Second, and more importantly, I do not think there is a totally clear 
distinction between Chalmers’ “hard problem of consciousness” and what 
he classifies as the easy problems of consciousness.  For example, the 
first two paragraphs on the second page of your paper seem to discuss the unanalyzable nature of the hard problem.  This includes
the following statement:


 

“…for every “objective” definition that has ever been proposed [for the 
hard problem], it seems, someone has countered that the real mystery has 
been side-stepped by the definition.”


 

If you define the hard problem of consciousness as being those aspects 
of consciousness that cannot be physically explained, it is like the 
hard problems concerning physical reality.  It would seem that many key 
aspects of physical reality are equally


 

“intrinsically beyond the reach of objective definition, while at the 
same time being as deserving of explanation as anything else in the 
universe” (Second paragraph on page 2 of your paper).


 

Over time we have explained more and more about concepts at the heart of 
physical reality such as time, space, existence, but always some mystery 
remains.  I think the same will be true about consciousness.  In the 
coming decades we will be able to explain more and more about 
consciousness, and what is covered by the “hard problem” (i.e., that 
which is unexplainable) will shrink, but there will always remain some 
mystery.  I believe that within two to six decades we will


 

--be able to examine the physical manifestations of aspects of qualia that cannot now be communicated between people (and thus now fit within your definition of qualia);


 

--have an explanation for most of the major types of subjectively 
perceived properties and behaviors of consciousness; and


 

--be able to posit reasonable theories about why we experience 
consciousness as a sense of awareness and how the various properties of 
that sense of awareness are created.


 

But I believe there will always remain some mysteries, such as why there 
is any existence of anything, why there is any separation of anything, 
why there is any 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Trent Waddington
On Mon, Nov 17, 2008 at 10:47 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
 I will not be replying to any further messages from you because you are
 wasting my time.

Welcome to the Internet.

Trent




RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 
 Three things.
 
 
 First, David Chalmers is considered one of the world's foremost
 researchers in the consciousness field (he is certainly now the most
 celebrated).  He has read the argument presented in my paper, and he
 has
 discussed it with me.  He understood all of it, and he does not share
 any of your concerns, nor anything remotely like your concerns.  He had
 one single reservation, on a technical point, but when I explained my
 answer, he thought it interesting and novel, and possibly quite valid.
 
 Second, the remainder of your comments below are not coherent enough to
 be answerable, and it is not my job to walk you through the basics of
 this field.
 
 Third, about your digression:  gravity does not escape from black
 holes, because gravity is just the curvature of spacetime.  The other
 things that cannot escape from black holes are not forces.
 
 I will not be replying to any further messages from you because you are
 wasting my time.
 
 

I read this paper several times and still have trouble holding the model that you describe in my head, as it fades quickly and then there is just a memory of it (recursive ADD?). I'm not up on the latest consciousness research but still somewhat understand what is going on there. Your paper is a nice and terse description, but getting others to understand the highlighted entity that you are trying to describe might be easier with more diagrams. When I kind of got it for a second it did appear quantitative, like mathematically describable. I find it hard to believe, though, that others have not put it this way; I mean, doesn't Hofstadter talk about this in his books, in a non-academic fashion?
 
Also, Edward's critique is very well expressed and thoughtful. Just blowing him off like that is undeserved.

John





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Ben Goertzel
 Sorry to be negative, but no, my proposal is not in any way a modernization
 of Peirce's metaphysical analysis of awareness.



Could you elaborate the difference?  It seems very similar to me.   You're
saying that consciousness has to do with the bottoming-out of mental
hierarchies in raw percepts that are unanalyzable by the mind ... and
Peirce's Firsts are precisely raw percepts that are unanalyzable by the
mind...


***
The standard meaning of Hard Problem issues was described very well by
Chalmers, and I am addressing the hard problem of consciousness, not the
other problems.
***

Hmmm... I don't really understand why you think your argument is a solution to the hard problem... It seems like you explicitly acknowledge in your paper that it's *not*, actually... It's more like a philosophical argument as to why the hard problem is unsolvable, IMO.


ben g





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Trent Waddington
Richard,

  After reading your paper and contemplating the implications, I believe you have done a good job of describing the intuitive notion of consciousness that many lay-people use the word to refer to.  I don't think your explanation is fleshed out enough for those lay-people, but it's certainly sufficient for most of the people on this list.  I would recommend that anyone who hasn't read the paper, and has an interest in this whole consciousness business, give it a read.

I especially liked the bit where you describe how the model of self
can't be defined in terms of anything else... as it is inherently
recursive.  I wonder whether the dynamic updating of the model of self
may well be exactly the subjective experience of consciousness that
people describe.  If so, the notion of a p-zombie is not impossible,
as you suggest in your conclusions, but simply an AGI without a
self-model.

Finally, the introduction says:

  Given the strength of feeling on these matters - for example, the widespread belief that AGIs would be dangerous because, as conscious beings, they would inevitably rebel against their lack of freedom - it is incumbent upon the AGI community to resolve these questions as soon as possible.

I was really looking forward to seeing you address this widespread
belief, but unfortunately you declined.  Seems a bit of a tease.

Trent




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Benjamin Johnston


I completed the first draft of a technical paper on consciousness the 
other day.   It is intended for the AGI-09 conference, and it can be 
found at:



Hi Richard,

I don't have any comments yet about what you have written, because I'm 
not sure I fully understand what you're trying to say... I hope your 
answers to these questions will help clarify things.


It seems to me that your core argument goes something like this:

That there are many concepts for which an introspective analysis can 
only return the concept itself.

That this recursion blocks any possible explanation.
That consciousness is one of these concepts because self is inherently 
recursive.
Therefore, consciousness is explicitly blocked from having any kind of 
explanation.


Is this correct? If not, how have I misinterpreted you?


I have a thought experiment that might help me understand your ideas:

If we have a robot designed according to your molecular model, and we then ask the robot "what exactly is the nature of red" or "what is it like to experience the subjective essence of red", the robot may analyze this concept, ultimately bottoming out on an incoming signal line.
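
To make sure I am reading "bottoming out" correctly, here is a minimal Python sketch of what I have in mind (the CONCEPTS table and the analyze() routine are made up for this message, not taken from your paper):

# Illustrative toy only: each concept is either a list of sub-concepts
# or the name of a raw incoming signal line with nothing below it.
CONCEPTS = {
    "chair":   ["seat", "legs", "back"],
    "seat":    ["flat", "support"],
    "legs":    "SIGNAL_LINE_4",
    "back":    "SIGNAL_LINE_5",
    "flat":    "SIGNAL_LINE_6",
    "support": "SIGNAL_LINE_7",
    "red":     "SIGNAL_LINE_17",
}

def analyze(concept, depth=0):
    """Unpack a concept recursively until only raw signal lines remain."""
    parts = CONCEPTS[concept]
    pad = "  " * depth
    if isinstance(parts, list):
        print(pad + concept + " -> " + ", ".join(parts))
        for p in parts:
            analyze(p, depth + 1)
    else:
        print(pad + concept + " -> bottoms out on " + parts)

analyze("chair")   # unpacks a couple of levels before hitting signal lines
analyze("red")     # nothing to unpack: it bottoms out immediately

On this reading, asking the robot about "red" amounts to running analyze("red"), and the only thing it can report back is the raw signal line.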


But what if this robot is intelligent and can study other robots? It 
might then examine other robots and see that when their analysis bottoms 
out on an incoming signal line, what actually happens is that the 
incoming signal line is activated by electromagnetic energy of a certain 
frequency, and that the object recognition routines identify patterns in 
signal lines and that when an object is identified it gets annotated 
with texture and color information from its sensations, and that a 
particular software module injects all that information into the 
foreground memory. It might conclude that the experience of 
experiencing red in the other robot is to have sensors inject atoms 
into foreground memory, and it could then explain how the current 
context of that robot's foreground memory interacts with the changing 
sensations (that have been injected into foreground memory) to make that 
experience 'meaningful' to the robot.


What if this robot then turns its inspection abilities onto itself? Can 
it therefore further analyze red? How does your theory interpret that 
situation?


-Ben





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-15 Thread Charles Hixson

Robert Swaine wrote:

Consciousness is akin to the phlogiston theory in chemistry.  It is likely a shadow concept, similar to how the bodily reactions make us feel that the heart is the seat of emotions.  Gladly, cardiologists and heart surgeons do not look for a spirit, a soul, or kindness in the heart muscle.  The brain organ need not contain anything beyond the means to effect physical behavior... and feedback as to that behavior.
  
This isn't clear.  Certainly some definitions of consciousness fit this 
analysis, but the term is generally so loosely defined that unlike 
phlogiston it probably can't be disproven. 

OTOH, it seems to me quite likely that there are, or at least can be, 
definitions of consciousness which fit within the common definition of 
consciousness and are also reasonably accurate.  And testable.  (I 
haven't reviewed Richard Loosemore's recent paper.  Perhaps it is one of 
these.)

A finite degree of sensory awareness serves as a suitable replacement for consciousness; in other words, just feedback.
  
To an extent I agree with you.  I have in the past argued that a 
thermostat is minimally conscious.  But please note the *minimally*.  
Feedback cannot, by itself, progress beyond that minimal state.  Just 
what else is required is very interesting.  (The people who refuse to 
call thermostats minimally conscious merely have stricter minimal 
requirements for consciousness.  We don't disagree about how a 
thermostat behaves.)
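
To make the *minimally* concrete, a thermostat is essentially nothing more than the following loop (an illustrative Python sketch written for this message, not a description of any particular device):

def thermostat(temp, setpoint=20.0):
    # Pure feedback: sense one value, compare it to a setpoint, act.
    # Nothing in this loop can reflect on the comparison itself, which
    # is why I would only ever call it *minimally* conscious.
    return "heat_on" if temp < setpoint else "heat_off"

for reading in (17.5, 19.0, 21.3):
    print(reading, "->", thermostat(reading))

Feedback of this sort is all the thermostat has; the interesting question is what has to be added before "minimally" stops being the right qualifier.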

Would it really make a difference if we were all biological machines, and our perceptions 
were the same as other animals, or other designed minds; more so if we were 
in a simulated existence.  The search for consciousness is a misleading (though not 
entirely fruitless) path to AGI.

  
??? We *are* biological machines.  So what?  And our perceptions are 
basically the same as those of other animals.  This doesn't make sense 
as an argument, unless you are presuming that other animals aren't 
conscious, which flies in the face of most recent research on the 
subject.  (I'm not sure that they've demonstrated consciousness in 
bacteria, but they have demonstrated that they are trainable.  Whether 
they are conscious, then, is probably an artifact of your definition.)



--- On Fri, 11/14/08, Richard Loosemore [EMAIL PROTECTED] wrote:

  

From: Richard Loosemore [EMAIL PROTECTED]
Subject: [agi] A paper that actually does solve the problem of consciousness
To: agi@v2.listbox.com
Date: Friday, November 14, 2008, 12:27 PM
I completed the first draft of a technical paper on
consciousness the 
other day.   It is intended for the AGI-09 conference, and
it can be 
found at:


http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

The title is Consciousness in Human and Machine: A Theory and Some
Falsifiable Predictions, and it does solve the problem, believe it or not.

But I have no illusions:  it will be misunderstood, at the
very least. 
I expect there will be plenty of people who argue that it
does not solve 
the problem, but I don't really care, because I think
history will 
eventually show that this is indeed the right answer.  It
gives a 
satisfying answer to all the outstanding questions and it feels right.

Oh, and it does make some testable predictions.  Alas, we
do not yet 
have the technology to perform the tests yet, but the
predictions are on 
the table, anyhow.


In a longer version I would go into a lot more detail,
introducing  the 
background material at more length, analyzing the other
proposals that 
have been made and fleshing out the technical aspects along
several 
dimensions.  But the size limit for the conference was 6
pages, so that 
was all I could cram in.






Richard Loosemore










Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-15 Thread Trent Waddington
On Sat, Nov 15, 2008 at 6:42 PM, Charles Hixson
[EMAIL PROTECTED] wrote:
 To an extent I agree with you.  I have in the past argued that a thermostat
 is minimally conscious.  But please note the *minimally*.

I invite you then to consider the horrors being inflicted upon my CPU
by Microsoft software.

Trent




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-15 Thread Matt Mahoney
--- On Fri, 11/14/08, Ben Goertzel [EMAIL PROTECTED] wrote:
The problem is that there may be many possible explanations for why we
can't explain consciousness.

There may be many reasons why we can't explain why 2 + 2 = 5. Suppose I 
identified all the neurons in your brain that respond to 2 + 2 and all the 
neurons that respond to 5, and connected them so that whenever one set fires, 
the other set fires. If you entered 2 + 2 into a calculator and it said 4 you 
would insist it was broken. If you put 2 pebbles into a bucket and then 2 more 
and saw that there were 4, and if everyone else's brain was wired like yours, 
then philosophers would write books about the mystery of the missing pebble.

All machine learning algorithms must have biases, an assumed a-priori distribution over hypothesis space. Otherwise they couldn't learn. What would really be surprising would be if the human brain were somehow different.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-15 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,

In your paper you say


The argument does not
say anything about the nature of conscious experience, qua
subjective experience, but the argument does say why it
cannot supply an explanation of subjective experience. Is
explaining why we cannot explain something the same as
explaining it?


I think it isn't the same...

The problem is that there may be many possible explanations for why we 
can't explain consciousness.  And it seems there is no empirical way to 
decide among these explanations.  So we need to decide among them via 
some sort of metatheoretical criteria -- Occam's Razor, or conceptual 
consistency with our scientific ideas, or some such.  The question for 
you then is, why is yours the best explanation of why we can't explain 
consciousness?


Hmmm... that question of mine, which you quote above, was the
introduction to part 2 of the paper, which then specifically supplied an
answer to your above question.

In other words, I accept your question, but the words that came
immediately after the above quote did actually answer it in detail. ;-)

Short summary of that later answer:  we do indeed need an
Occam's-razor-like reason for believing the solution I propose, but
there are different versions of how you understand Occam's razor, and I
argue that you decide among *those* things by having a fundamental
theory of semantics (not a superficial theory, but a fundamental,
ontologically deep theory).

What I then effectively do is to point to a spectrum of semantic/
ontological theories, ranging from something as extremely formalist as
Hutter (though I do not mention him by name) to something as extremely
empirical and emergentist as Loosemore (the idea of Extreme Cognitive
Semantics) ... and I argue that the only self-consistent position is the
Extreme Cognitive Semantics position.

The implication of that argument is, then, that the very best we can do
to decide between my theory and any other in the same vein, is to apply
the rules of science to it:  this will then be a mixture of all the
usual processes, and among those processes will be the main criterion,
which is:  Does a majority of people find that this theory makes more
sense than any other, and does it make novel predictions that can be
falsified?

I am happy to be judged on those criteria.






But I have another confusion about your argument.  I understand the idea 
that a mind's analysis process has eventually got to bottom out 
somewhere, so that it will describe some entities using descriptions 
that are (from its perspective) arbitrary and can't be decomposed any 
further.  These bottom-level entities could be sensations or they could 
be sort-of arbitrary internal tokens out of which internal patterns are 
constructed


But what do you say about the experience of being conscious of a chair, 
then?  Are you saying that the consciousness I have of the chair is the 
*set* of all the bottom-level unanalyzables into which the chair is 
decomposed by my mind?


ben



Well, let us distinguish two kinds of answer to your question

When a philosopher says that there is a mystery about what the
conscious experience of red actually is, and that they could imagine a
machine that was able to talk about red, etc etc, without actually
having that mysterious experience, then we have the beginning of a
philosophical quandary that demands explanation.

But when you say that you are conscious of a chair, I don't know of
any philosophers who would say that there is a profound mystery there,
which is over and above the mystery of the qualia of all the
chair-parts.  Philosophers don't ever say (at least, I don't recall)
that chairs and other objects contain a deep mystery that seems to be
unanalyzable.  From that point of view, I would have to ask for extra 
information about what you wanted explained:  do you feel that [chair] 
has a conscious phenomenology that is independent of the sum of its 
parts-qualia?


Second answer:

Now, I could give a much deeper answer to your question, which would
start talking about our general awareness of the things around us ... 
and this may have been what you meant by your consciousness of the chair.


This is a little tricky, because now what I think is happening is that 
you first have to think about the idea of your consciousness, and what 
happens then, I believe, would be a kind of mental summing of the qualia 
- forming a new concept-atom to encode [all of the component qualia of 
[chair]].  You can then see how this summed concept would still be 
fairly unanalyzable, because it was just one step removed from a host 
of others that were dead ends.


This needs more thinking, but I believe that it can be worked out 
properly, in a way consistent with the original argument.


I am especially interested in the fact that there are some vague 
consciousness feelings we get:  things that are kinda mysterious. 
Perhaps they are just these atoms that are one step removed from 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-15 Thread Matt Mahoney
--- On Sat, 11/15/08, Richard Loosemore [EMAIL PROTECTED] wrote:

  This is equivalent to your prediction #2 where connecting the output
  of neurons that respond to the sound of a cello to the input of
  neurons that respond to red would cause a cello to sound red. We
  should expect the effect to be temporary.
  
  I'm not sure how this demonstrates consciousness. How do you test
  that the subject actually experiences redness at the sound of a
  cello, rather than just behaving as if experiencing redness, for
  example, claiming to hear red?
 
 You misunderstand the experiment in a very interesting way!
 
 This experiment has to be done on the *skeptic* herself!
 
 The prediction is that if *you* get your brain rewired,
 *you* will experience this.

How do you know what I experience, as opposed to what I claim to experience?

That is exactly the question you started with, so you haven't gotten anywhere. 
I don't need proof that I experience things. I already have that belief 
programmed into my brain.

-- Matt Mahoney, [EMAIL PROTECTED]







Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-15 Thread Richard Loosemore

Matt Mahoney wrote:

--- On Sat, 11/15/08, Richard Loosemore [EMAIL PROTECTED] wrote:


This is equivalent to your prediction #2 where connecting the output
of neurons that respond to the sound of a cello to the input of
neurons that respond to red would cause a cello to sound red. We
should expect the effect to be temporary.

I'm not sure how this demonstrates consciousness. How do you test
that the subject actually experiences redness at the sound of a
cello, rather than just behaving as if experiencing redness, for
example, claiming to hear red?

You misunderstand the experiment in a very interesting way!

This experiment has to be done on the *skeptic* herself!

The prediction is that if *you* get your brain rewired,
*you* will experience this.


How do you know what I experience, as opposed to what I claim to experience?

That is exactly the question you started with, so you haven't gotten anywhere. 
I don't need proof that I experience things. I already have that belief 
programmed into my brain.


Huh?

Now what are we talking about... I am confused:  I was talking about 
proving my prediction.  I simply replied to your doubt about whether a 
subject would be experiencing the predicted effects, or just producing 
language consistent with it.  I gave you a solution by pointing out that 
anyone who had an interest in the prediction could themselves join in 
and be a subject.  That seemed to answer your original question.




Richard Loosemore




RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-15 Thread Ed Porter
	I respect the amount of thought that went into Richard's paper
Consciousness in Human and Machine: A Theory and Some Falsifiable
Predictions --- but I do not think it provides a good explanation of
consciousness.  

 

  It seems to spend more time explaining the limitations on what we can
know about consciousness than explaining consciousness itself.  What little
the paper says about consciousness can be summed up roughly as follows: that
consciousness is created by a system that can analyze and seek explanations
from some, presumably experientially-learned, knowledgebase, based on
associations between nodes in that knowledgebase, and that it can determine
when it cannot describe a given node further, in terms of relations to other
nodes, but nevertheless senses the given node is real (such as the way it is
difficult for a human to explain what it is like to sense the color red).

 

	First, I disagree with the paper's allegation that analyses of
conscious phenomena necessarily bottom out more than analyses of many
other aspects of reality.  Second, I disagree that conscious phenomena are
beyond any scientific explanation.  

 

  With regard to the first, I feel our minds contain substantial
memories of various conscious states, and thus there is actually substantial
experiential grounding of many aspects of consciousness recorded in our
brains.  This is particularly true for the consciousness of emotional states
(for example, brain scans on very young infants indicate a high percent of
their mental activity is in emotional centers of the brain).  I developed
many of my concepts of how to design an AGI based on reading brain science
and performing introspection into my own conscious and subconscious thought
processes, and I found it quite easy to draw many generalities from the
behavior of my own conscious mind.  Since I view the subconscious to be at
the same time both a staging area for, and a reactive audience for,
conscious thoughts, I think one has to view the subconscious and
consciousness as part of a functioning whole.  

 

  When I think of the color red, I don't bottom out.  Instead I have
many associations with my experiences of redness that provide it with deep
grounding.  As with the description of any other concept, it is hard to
explain how I experience red to others, other than through experiences we
share relating to that concept.  This would include things we see in common
to be red, or perhaps common emotional experiences to seeing the red of
blood that has been spilled in violence, or the way the sensation of red
seems to fill a 2 dimensional portion of an image that we perceive as a two
dimensional distribution of differently colored areas.   But I can
communicate within my own mind across time what it is like to sense red,
such as in dreams when my eyes are closed.  Yes, the experience of sensing
red does not decompose into parts, such as the way the sensed image of a
human body can be decomposed into the seeing of subordinate parts, but that
does not necessarily mean that my sensing of something that is a certain
color of red, is somehow more mysterious than my sensing of seeing a human
body.

 

  With regard to the second notion, that conscious phenomena are not
subject to scientific explanation, there is extensive evidence to the
contrary.  The prescient psychological writings of William James, and Dr.
Alexander Luria's famous studies of the effects of variously located bullet
wounds on the minds of Russian soldiers after World War II, both illustrate
that human consciousness can be scientifically studied.  The effects of
various drugs on consciousness have been scientifically studied.  Multiple
experiments have shown that the presence or absence of synchrony between
neural firings in various parts of the brain have been strongly correlated
with human subjects reporting the presence or absence, respectively, of
conscious experience of various thoughts or sensory inputs.  Multiple
studies have shown that electrode stimulation to different parts of the
brain tends to make the human consciousness aware of different thoughts.  Our
own personal experiences with our own individual consciousnesses, the
current scientific levels of knowledge about commonly reported conscious
experiences, and increasingly more sophisticated ways to correlate
objectively observable brain states with various reports of human conscious
experience, all indicate that consciousness already is subject to scientific
explanation.  In the future, particularly with the advent of much more
sophisticated brain scanning tools, and with the development of AGI,
consciousness will be much more subject to scientific explanation.

 

  Does this mean we will ever be able to ultimately explain what it
means to be conscious?  The answer is probably no more than we will ever be
able to fully explain many of the other big existential questions of
science, such as what is time and space and 

RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-15 Thread Matt Mahoney
--- On Sat, 11/15/08, Ed Porter [EMAIL PROTECTED] wrote:
  With regard to the second notion,
that conscious phenomena are not subject to scientific explanation, there is
extensive evidence to the contrary.  The prescient psychological writings of
William James, and Dr. Alexander Luria’s famous studies of the effects of
variously located bullet wounds on the minds of Russian soldiers after World
War II, both illustrate that human consciousness can be scientifically
studied.  The effects of various drugs on consciousness have been
scientifically studied.

Richard's paper is only about the hard question of consciousness, that which 
distinguishes you from a P-zombie, not the easy question about mental states 
that distinguish between being awake or asleep.

I think the reason that the hard question is interesting at all is that it 
would presumably be OK to torture a zombie because it doesn't actually 
experience pain, even though it would react exactly like a human being 
tortured. That's an ethical question. Ethics is a belief system that exists in 
our minds about what we should or should not do. There is no objective 
experiment you can do that will tell you whether any act, such as inflicting 
pain on a human, animal, or machine, is ethical or not. The only thing you can 
measure is belief, for example, by taking a poll.

-- Matt Mahoney, [EMAIL PROTECTED]






RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Derek Zahn

Richard,
 
As a general rule, I find discussions about consciousness, qualia, and so forth 
to be unhelpful, frustrating, and unnecessary.  However, I enjoyed this paper a 
great deal.  Thanks for writing it.  Because of my inclinations on these 
matters, I am not an expert on the history of thought on the topic, or its 
current status among philosophers, but I find your account to be credible and 
reasonably clear.  I'm not particularly repulsed by the idea that ... our most 
immediate, subjective experience of the world is, in some sense, an artifact produced by the operation of the brain, so searching for a more satisfying 
conclusion is not really high up on my priority list.  Still, I don't see 
anything immediately objectionable in your analysis.
 
I am not certain about the distinguishing power of your falsifiable 
predictions, but only because I would need to give that considerably more 
thought.
 
I look forward to being in the audience when you present the paper at AGI-09.
 
Derek Zahn
agiblog.net




RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Derek Zahn

Oh, one other thing I forgot to mention.  To reach my cheerful conclusion about 
your paper, I have to be willing to accept your model of cognition.  I'm pretty 
easy on that premise-granting, by which I mean that I'm normally willing to 
go along with architectural suggestions to see where they lead.  But I will be 
curious to see whether others are also willing to go along with you on your 
generic  cognitive system model.




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Robert Swaine
Consciousness is akin to the phlogiston theory in chemistry.  It is likely a shadow concept, similar to how the bodily reactions make us feel that the heart is the seat of emotions.  Gladly, cardiologists and heart surgeons do not look for a spirit, a soul, or kindness in the heart muscle.  The brain organ need not contain anything beyond the means to effect physical behavior... and feedback as to that behavior.

A finite degree of sensory awareness serves as a suitable replacement for consciousness; in other words, just feedback.

Would it really make a difference if we were all biological machines, and our 
perceptions were the same as other animals, or other designed minds; more so 
if we were in a simulated existence.  The search for consciousness is a 
misleading (though not entirely fruitless) path to AGI.


--- On Fri, 11/14/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 From: Richard Loosemore [EMAIL PROTECTED]
 Subject: [agi] A paper that actually does solve the problem of consciousness
 To: agi@v2.listbox.com
 Date: Friday, November 14, 2008, 12:27 PM
 I completed the first draft of a technical paper on
 consciousness the 
 other day.   It is intended for the AGI-09 conference, and
 it can be 
 found at:
 
 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf
 
 The title is Consciousness in Human and Machine: A
 Theory and Some 
 Falsifiable Predictions, and it does solve the
 problem, believe it or not.
 
 But I have no illusions:  it will be misunderstood, at the
 very least. 
 I expect there will be plenty of people who argue that it
 does not solve 
 the problem, but I don't really care, because I think
 history will 
 eventually show that this is indeed the right answer.  It
 gives a 
 satisfying answer to all the outstanding questions and it
 feels right.
 
 Oh, and it does make some testable predictions.  Alas, we
 do not yet 
 have the technology to perform the tests yet, but the
 predictions are on 
 the table, anyhow.
 
 In a longer version I would go into a lot more detail,
 introducing  the 
 background material at more length, analyzing the other
 proposals that 
 have been made and fleshing out the technical aspects along
 several 
 dimensions.  But the size limit for the conference was 6
 pages, so that 
 was all I could cram in.
 
 
 
 
 
 Richard Loosemore
 
 





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Vladimir Nesov
Some notes/review.

Whether an AGI is conscious is independent of whether it'll rebel or be dangerous. Answering any kind of question about consciousness doesn't answer a question about safety.

How is the situation with p-zombies that are atom-by-atom identical to conscious beings not resolved by saying that in this case consciousness is an epiphenomenon, a meaningless notion?
http://www.overcomingbias.com/2008/04/zombies.html
http://www.overcomingbias.com/2008/04/zombies-ii.html
http://www.overcomingbias.com/2008/04/anti-zombie-pri.html

Jumping into the molecular framework as a description of human cognition is unwarranted. It could be a description of an AGI design, or it could be a theoretical description of more general epistemology, but as presented it's not general enough to automatically correspond to the brain. Also, the semantics of atoms is a tricky business; for all I know it keeps shifting with the focus of attention, often dramatically. Saying that the self is a cluster of atoms doesn't cut it.

Bottoming out of explanation of experience is a good answer, but you
don't need to point to specific moving parts of a specific cognitive
architecture to give it (I don't see how it helps with the argument).
If you have a belief (generally, a state of mind), it may indicate
that the world has a certain property, that world having that property
caused you to have this belief, or it can indicate that you have a
certain cognitive quirk that caused this belief, a loophole in
cognition. There is always a cause, the trick is in correctly
dereferencing the belief.
http://www.overcomingbias.com/2008/03/righting-a-wron.html

Subjective phenomena might be unreachable for meta-introspection, but that doesn't place them on a different level or make them unanalyzable; you can in principle inspect them from outside, using tools other than one's mind itself. You yourself just presented a model of what's happening.

Meaning/information is relative, it can be represented within a basis,
for example within a mind, and communicated to another mind. Like
speed, it has no absolute, but the laws of relativity, of conversion
between frames of reference, between minds, are precise and not
arbitrary. Possible-worlds semantics is one way to establish a basis,
allowing to communicate concepts, but maybe not a very good one.
Grounding in common cognitive architecture is probably a good move,
but it doesn't have fundamental significance.

The predictions are not described carefully enough to appear to follow from your theory. They use some terminology, but on a level
that allows literal translation to a language of perceptual wiring,
with correspondence between qualia and areas implementing
modalities/receiving perceptual input.

You didn't argue about a general case of AGI, so how does it follow
that any AGI is bound to be conscious?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Richard Loosemore

Derek Zahn wrote:
Oh, one other thing I forgot to mention.  To reach my cheerful 
conclusion about your paper, I have to be willing to accept your model 
of cognition.  I'm pretty easy on that premise-granting, by which I 
mean that I'm normally willing to go along with architectural 
suggestions to see where they lead.  But I will be curious to see 
whether others are also willing to go along with you on your generic  
cognitive system model.




That's an interesting point.

In fact, the argument doesn't change too much if we go to other models 
of cognition, it just looks different ... and more complicated, which is 
partly why I wanted to stick with my own formalism.


The crucial part is that there has to be a very powerful mechanism that 
lets the system analyze its own concepts - it has to be able to reflect 
on its own knowledge in a very recursive kind of way.  Now, I think that 
Novamente, OpenCog and other systems will eventually have that sort of 
capability because it is such a crucial part of the general bit in 
artificial general intelligence.
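
Just to illustrate the kind of reflection I mean (a throwaway Python toy with made-up atom names, not the formalism from the paper):

# Each atom is listed with the constituents its own analysis returns.
ATOMS = {
    "chair": ["seat", "legs", "back"],
    "seat":  ["flat", "support"],
    "self":  ["self"],          # analysis of [self] returns only [self]
}

def analyze(atom):
    """The system's own view of what an atom decomposes into."""
    return ATOMS.get(atom, [])  # unknown atoms have nothing below them

def reflect(atom, seen=None):
    """Recursively apply the system's analyze() to its own atoms."""
    seen = set() if seen is None else seen
    if atom in seen:
        return atom + ": analysis returns only itself"
    seen.add(atom)
    parts = analyze(atom)
    if not parts:
        return atom + ": bottoms out"
    return {atom: [reflect(p, seen) for p in parts]}

print(reflect("chair"))  # unpacks into parts, then bottoms out
print(reflect("self"))   # immediately circles back on itself

The point is only that the same analysis mechanism, turned on an atom like [self], can do nothing but return the atom itself.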


Once a system has that mechanism, I can use it to take the line I took 
in the paper.


Also, the generic model of cognition was useful to me in the later part 
of the paper where I want to analyze semantics.  Other AGI architectures 
(logical ones for example) implicitly stick with the very strict kinds 
of semantics (possible worlds, e.g.) that I actually think cannot be 
made to work for all of cognition.


Anyhow, thanks for your positive comments.



Richard Loosemore




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Richard Loosemore

Robert Swaine wrote:

 Consciousness is akin to the phlogiston theory in chemistry.  It is likely a shadow concept, similar to how the bodily reactions make us feel that the heart is the seat of emotions.  Gladly, cardiologists and heart surgeons do not look for a spirit, a soul, or kindness in the heart muscle.  The brain organ need not contain anything beyond the means to effect physical behavior... and feedback as to that behavior.

 A finite degree of sensory awareness serves as a suitable replacement for consciousness; in other words, just feedback.

Would it really make a difference if we were all biological machines,
and our perceptions were the same as other animals, or other
designed minds; more so if we were in a simulated existence.  The
search for consciousness is a misleading (though not entirely
fruitless) path to AGI.


Well, with respect, it does sound as though you did not read the paper
itself, or any of the other books like Chalmers' Conscious Mind.

I say this because there are lengthy (and standard) replies to the 
points that you make, both in the paper and in the literature.


And, please don't misunderstand: this is not a path to AGI.  Just an important side issue that the general public cares about enormously.




Richard Loosemore



--- On Fri, 11/14/08, Richard Loosemore [EMAIL PROTECTED] wrote:


From: Richard Loosemore [EMAIL PROTECTED] Subject: [agi] A paper
that actually does solve the problem of consciousness To:
agi@v2.listbox.com Date: Friday, November 14, 2008, 12:27 PM I
completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can
be found at:

http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


The title is Consciousness in Human and Machine: A Theory and Some
 Falsifiable Predictions, and it does solve the problem, believe
it or not.

But I have no illusions:  it will be misunderstood, at the very
least. I expect there will be plenty of people who argue that it 
does not solve the problem, but I don't really care, because I
think history will eventually show that this is indeed the right
answer.  It gives a satisfying answer to all the outstanding
questions and it feels right.

Oh, and it does make some testable predictions.  Alas, we do not
yet have the technology to perform the tests yet, but the 
predictions are on the table, anyhow.


In a longer version I would go into a lot more detail, introducing
the background material at more length, analyzing the other 
proposals that have been made and fleshing out the technical
aspects along several dimensions.  But the size limit for the
conference was 6 pages, so that was all I could cram in.





Richard Loosemore










Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Vladimir Nesov
(I'm sorry that I make some unclear statements on semantics/meaning,
I'll probably get to the description of this perspective later on the
blog (or maybe it'll become obsolete before that), but it's a long
story, and writing it up on the spot isn't an option.)

On Sat, Nov 15, 2008 at 2:18 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Taking the position that consciousness is an epiphenomenon and is therefore
 meaningless has difficulties.

Rather, p-zombieness in an atom-by-atom identical environment is an epiphenomenon.


 By saying that it is an epiphenomenon, you actually do not answer the
 questions about instrinsic qualities and how they relate to other things in
 the universe.  The key point is that we do have other examples of
 epiphenomena (e.g. smoke from a steam train),

What do you mean by smoke being epiphenomenal?

 but their ontological status
 is very clear:  they are things in the world.  We do not know of other
 things with such puzzling ontology (like consciousness), that we can use as
 a clear analogy, to explain what consciousness is.

 Also, it raises the question of *why* there should be an epiphenomenon.
  Calling it an E does not tell us why such a thing should happen.  And it
 leaves us in the dark about whether or not to believe that other systems
 that are not atom-for-atom identical with us, should also have this
 epiphenomenon.

I don't know how to parse the word epiphenomenon in this context. I use it to describe reference-free, meaningless concepts, so you can't say that some epiphenomenon is present here or there; that would be meaningless.


 Jumping into the molecular framework as a description of human cognition is unwarranted. It could be a description of an AGI design, or it could be a theoretical description of more general epistemology, but as presented it's not general enough to automatically correspond to the brain. Also, the semantics of atoms is a tricky business; for all I know it keeps shifting with the focus of attention, often dramatically. Saying that the self is a cluster of atoms doesn't cut it.

 I'm not sure of what you are saying, exactly.

 The framework is general in this sense:  its components have *clear*
 counterparts in all models of cognition, both human and machine.  So, for
 example, if you look at a system that uses logical reasoning and bare
 symbols, that formalism will differentiate between the symbols that are
 currently active, and playing a role in the system's analysis of the world,
 and those that are not active.  That is the distinction between foreground
 and background.

Without a working, functional theory of cognition, this high-level
descriptive picture has little explanatory power. It might be a step
towards developing a useful theory, but it doesn't explain anything.
There is a set of states of mind that correlates with experience of
apples, etc. So what? You can't build a detailed edifice on general
principles and claim that far-reaching conclusions apply to the actual
brain. They might, but you need a semantic link from theory to
described functionality.


 As for the self symbol, there was no time to go into detail.  But there
 clearly is an atom that represents the self.

*shrug*
It only stands as a definition; there is no self-neuron, or anything easily identifiable as self, it's a complex thing. I'm not sure I even understand what self refers to subjectively. I don't feel any clear focus of self-perception; my experience is filled with thoughts on many things, some of them involving management of the thought process, some of external concepts, but no unified center to speak of...


 Bottoming out of explanation of experience is a good answer, but you
 don't need to point to specific moving parts of a specific cognitive
 architecture to give it (I don't see how it helps with the argument).
 If you have a belief (generally, a state of mind), it may indicate
 that the world has a certain property, that world having that property
 caused you to have this belief, or it can indicate that you have a
 certain cognitive quirk that caused this belief, a loophole in
 cognition. There is always a cause, the trick is in correctly
 dereferencing the belief.
 http://www.overcomingbias.com/2008/03/righting-a-wron.html

 Not so fast.  There are many different types of mistaken beliefs. Most of
 these are so shallow that they could not possibly explain the
 characteristics of consciousness that need to be explained.

 And, as I point out in the second part, it is not at all clear that this
 particular issue can be given the status of mistaken or failure.  It
 simply does not fit with all the other known examples of failures of the
 cognitive system, such as hallucinations, etc.

 I think it would be intellectually dishonest to try to sweep it under the rug
 with those other things, because those are clearly breakdowns that, with a
 little care, could all be avoided.  But this issue is utterly different:  by
 making the argument that I did, I think I showed that it was 

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Matt Mahoney
--- On Fri, 11/14/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

Interesting that some of your predictions have already been tested, in 
particular, synaesthetic qualia was described by George Stratton in 1896. When 
people wear glasses that turn images upside down, they adapt after several days 
and begin to see the world normally.

http://www.cns.nyu.edu/~nava/courses/psych_and_brain/pdfs/Stratton_1896.pdf
http://wearcam.org/tetherless/node4.html

This is equivalent to your prediction #2 where connecting the output of neurons 
that respond to the sound of a cello to the input of neurons that respond to 
red would cause a cello to sound red. We should expect the effect to be 
temporary.

I'm not sure how this demonstrates consciousness. How do you test that the 
subject actually experiences redness at the sound of a cello, rather than just 
behaving as if experiencing redness, for example, claiming to hear red?

I can do a similar experiment with autobliss (a program that learns a 2 input 
logic function by reinforcement). If I swapped the inputs, the program would 
make mistakes at first, but adapt after a few dozen training sessions. So 
autobliss meets one of the requirements for qualia. The other is that it be 
advanced enough to introspect on itself, and that which it cannot analyze 
(describe in terms of simpler phenomena) is qualia. What you describe as 
elements are neurons in a connectionist model, and the atoms are the set of 
active neurons. Analysis means describing a neuron in terms of its inputs. 
Then qualia is the first layer of a feedforward network. In this respect, 
autobliss is a single neuron with 4 inputs, and those inputs are therefore its 
qualia.
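
For concreteness, here is a minimal Python sketch of the kind of learner I mean (an illustrative toy written for this message, not the actual autobliss source; the names are made up):

import random

class TinyLogicLearner:
    # Illustrative toy, not the real autobliss: one preference value per
    # input combination, i.e. a single "neuron" with 4 inputs.
    def __init__(self):
        self.value = {(a, b): 0.0 for a in (0, 1) for b in (0, 1)}

    def act(self, a, b):
        # Mostly greedy, with a little exploration so it can re-adapt
        # after its inputs are swapped.
        if random.random() < 0.1:
            return random.randint(0, 1)
        return 1 if self.value[(a, b)] > 0 else 0

    def reinforce(self, a, b, out, reward):
        # Push the preference for this input pair toward the rewarded output.
        self.value[(a, b)] += reward if out == 1 else -reward

def run(target, swap_at=2000, steps=4000):
    learner, mistakes = TinyLogicLearner(), [0, 0]
    for t in range(steps):
        a, b = random.randint(0, 1), random.randint(0, 1)
        x, y = (b, a) if t >= swap_at else (a, b)  # swap the inputs mid-run
        out = learner.act(x, y)
        reward = 1.0 if out == target(a, b) else -1.0
        learner.reinforce(x, y, out, reward)
        if reward < 0:
            mistakes[1 if t >= swap_at else 0] += 1
    return mistakes

# An asymmetric target function, so that swapping the inputs matters.
before, after = run(lambda a, b: int(a == 1 and b == 0))
print("mistakes before swap:", before, "mistakes after swap:", after)

After the swap it makes mistakes again for a while and then re-adapts, which is all the adaptation in the experiment above amounts to.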

You might object that autobliss is not advanced enough to ponder its own self 
existence. Perhaps you define advanced to mean it is capable of language 
(pass the Turing test), but I don't think that's what you meant. In that case, 
you need to define more carefully what qualifies as sufficiently powerful.


-- Matt Mahoney, [EMAIL PROTECTED]







Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Ben Goertzel
Richard,

In your paper you say


The argument does not
say anything about the nature of conscious experience, qua
subjective experience, but the argument does say why it
cannot supply an explanation of subjective experience. Is
explaining why we cannot explain something the same as
explaining it?


I think it isn't the same...

The problem is that there may be many possible explanations for why we can't
explain consciousness.  And it seems there is no empirical way to decide
among these explanations.  So we need to decide among them via some sort of
metatheoretical criteria -- Occam's Razor, or conceptual consistency with
our scientific ideas, or some such.  The question for you then is, why is
yours the best explanation of why we can't explain consciousness?

But I have another confusion about your argument.  I understand the idea
that a mind's analysis process has eventually got to bottom out somewhere,
so that it will describe some entities using descriptions that are (from its
perspective) arbitrary and can't be decomposed any further.  These
bottom-level entities could be sensations or they could be sort-of arbitrary
internal tokens out of which internal patterns are constructed

But what do you say about the experience of being conscious of a chair,
then?  Are you saying that the consciousness I have of the chair is the
*set* of all the bottom-level unanalyzables into which the chair is
decomposed by my mind?

ben


On Fri, Nov 14, 2008 at 11:44 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Fri, 11/14/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 
 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

 Interesting that some of your predictions have already been tested, in
 particular, synaesthetic qualia was described by George Stratton in 1896.
 When people wear glasses that turn images upside down, they adapt after
 several days and begin to see the world normally.

 http://www.cns.nyu.edu/~nava/courses/psych_and_brain/pdfs/Stratton_1896.pdf
 http://wearcam.org/tetherless/node4.html

 This is equivalent to your prediction #2 where connecting the output of
 neurons that respond to the sound of a cello to the input of neurons that
 respond to red would cause a cello to sound red. We should expect the effect
 to be temporary.

 I'm not sure how this demonstrates consciousness. How do you test that the
 subject actually experiences redness at the sound of a cello, rather than
 just behaving as if experiencing redness, for example, claiming to hear red?

 I can do a similar experiment with autobliss (a program that learns a 2
 input logic function by reinforcement). If I swapped the inputs, the program
 would make mistakes at first, but adapt after a few dozen training sessions.
 So autobliss meets one of the requirements for qualia. The other is that it
 be advanced enough to introspect on itself, and that which it cannot analyze
 (describe in terms of simpler phenomena) is qualia. What you describe as
 elements are neurons in a connectionist model, and the atoms are the set
 of active neurons. Analysis means describing a neuron in terms of its
 inputs. Then qualia is the first layer of a feedforward network. In this
 respect, autobliss is a single neuron with 4 inputs, and those inputs are
 therefore its qualia.

 You might object that autobliss is not advanced enough to ponder its own
 self existence. Perhaps you define advanced to mean it is capable of
 language (pass the Turing test), but I don't think that's what you meant. In
 that case, you need to define more carefully what qualifies as sufficiently
 powerful.


 -- Matt Mahoney, [EMAIL PROTECTED]









-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein


