Ed Porter wrote:
Richard,

In response to your email, copied below, I have the following responses to the quoted portions:

############### My prior post ################>>>>

 That aspects of consciousness seem real does not provide much of an “explanation for consciousness.”  It says something, but not much.  It adds little to Descartes’ “I think therefore I am.”  I don’t think it provides much of an answer to any of the multiple questions Wikipedia associates with Chalmers’ hard problem of consciousness.

########### Richard said ############>>>>

I would respond as follows.  When I make statements about consciousness deserving to be called "real", I am only saying this as a summary of a long argument that has gone before.  So it would not really be fair to declare that this statement of mine "says something, but not much" without taking account of the reasons that have been building up toward that statement earlier in the paper.
###### My response ######>>>>

Perhaps --- but this prior work, which you claim explains so much, is not in the paper being discussed. Without it, it is not clear how much your paper itself contributes. And Ben, who is much more knowledgeable than I am about these things, seemed similarly unimpressed.

I would say that it does. I believe that the situation is that you do not yet understand it. Ben has had similar trouble, but seems to be comprehending more of the issue as I respond to his questions.

(I owe him one response right now:  I am working on it)



########### Richard said ############>>>>

I am arguing that when we probe the meaning of "real" we find that the best criterion of realness is the way that the system builds a population of concept-atoms that are (a) mutually consistent with one another,

###### My response ######>>>>

I don’t know what mutually consistent means in this context, and from my memory of reading your paper multiple times I don’t think it explains it, other than perhaps implying that the framework of atoms represents experiential generalizations and associations, which would presumably tend to represent the regularities of experienced reality.

I'll grant you that one: I did not explain in detail this idea of mutual consistency.

However, the reason I did not is that I really had to assume some background, and I was hoping that the reader would already be aware of the general idea that cognitive systems build their knowledge in the form of concepts that are (largely) consistent with one another, and that it is this global consistency that lends strength to the whole. In other words, all the bits of our knowledge work together.

A piece of knowledge like "The Loch Ness Monster lives in Loch Ness" does NOT fit well with the rest of our knowledge, because we have little or no evidence that any such creature has been photographed, observed by independent people, observed by several people at the same time, caught in a trap and taken to a museum, found as skeletal remains, bumped into a boat, etc. There are no links from the rest of our knowledge to the LNM fact, so we actually do not credit the LNM as being "real".

By contrast, facts about Coelacanths are very well connected to the rest of our knowledge, and we believe that they do exist.
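To make that idea concrete, here is a toy sketch (mine alone, not anything from the paper: every concept name, link, and weight in it is an invented assumption) of how "realness" might be scored from mutual consistency plus sensory support:

    # Toy model only: "realness" scored from (a) consistency links into the
    # rest of the knowledge base and (b) direct sensory support. The concept
    # names, link sets, and 50/50 weighting are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class ConceptAtom:
        name: str
        sensory_support: float                    # 0..1, strength of direct evidence
        links: set = field(default_factory=set)   # names of mutually consistent concepts

    def realness(atom, kb):
        # Connectivity: what fraction of the other concepts this one coheres with.
        connectivity = len(atom.links) / max(len(kb) - 1, 1)
        return 0.5 * connectivity + 0.5 * atom.sensory_support

    kb = {}
    for name, support, links in [
        ("coelacanth", 0.9, {"fish", "museum-specimen", "photograph"}),
        ("loch-ness-monster", 0.05, set()),   # no corroborating links
        ("fish", 0.9, {"coelacanth"}),
        ("museum-specimen", 0.9, {"coelacanth"}),
        ("photograph", 0.9, {"coelacanth"}),
    ]:
        kb[name] = ConceptAtom(name, support, links)

    for name in ("coelacanth", "loch-ness-monster"):
        print(name, round(realness(kb[name], kb), 2))

In this toy scoring the well-connected, well-evidenced concept comes out far more "real" than the isolated one, which is all the example is meant to show.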




########### Richard said ############>>>>

and (b) strongly supported by sensory evidence (there are other criteria, but those are the main ones).  If you think hard enough about these criteria, you notice that the qualia-atoms (those concept-atoms that cause the analysis mechanism to bottom out) score very high indeed.  This is in dramatic contrast to other concept-atoms like hallucinations, which we consider 'artifacts' precisely because they score so low.  The difference between these two is so dramatic that I think we need to allow the qualia-atoms to be called "real" by all our usual criteria, BUT with the added feature that they cannot be understood in any more basic terms.

###### My response ######>>>>

You seem to be defining “real” here to mean believed to exist in what is perceived as objective reality. I personally believe a sense of subjective reality is much more central to the concept of consciousness. Today’s personal computers, which most people don’t think have anything approaching a human-like consciousness, could in many tasks estimate whether some signal was “real” in the sense of representing something in objective reality, without being conscious. But a powerful hallucination, combined with a human-level sense of being conscious of it, does not appear to be something any current computer can achieve. So if you are looking for the hard problems of consciousness, focus on the human subjective sense of awareness, not on whether there is evidence that something is real in what we perceive as objective reality.


Alas, you have perhaps forgotten, or missed, the reason why "real" was being discussed in the paper, so you are discussing it out of its original context.

So what you say in the above response does not relate to the paper.







########### Richard said ############>>>>

So to contradict that argument (to say "it is not clear that consciousness is necessarily any less explainable than are many other aspects of physical reality") you have to say why the argument does not work.  It would make no sense for a person to simply assert the opposite of the argument's conclusion, without justification.

The argument goes into plenty of specific details, so there are many kinds of attack that you could make.

###### My response ######>>>>

First, I am not claiming that all aspects of consciousness can ever be understood by science, since I do not believe all aspects of physical reality can be understood by science. I am saying that, just as science has greatly reduced the number of things about physical reality that were once unexplainable, I think brain scanning, brain science, computer neural simulations, and AGI will greatly reduce the number of things about consciousness that cannot be explained. Even you yourself implied as much when I gave examples of such learning about consciousness, which you dismissed as the easy problems of consciousness.

All questions about the "Easy" problems of consciousness are completely outside this discussion, because the paper ONLY addressed the hard problem of consciousness.

You may say things about those "Easy" problems, but I must ignore them because they do not relate in any way to my argument.

It would help if you could avoid mentioning them, because otherwise the discussion gets confused if you start an argument appearing to talk about the Hard Problem, but then slip into one of the Easy problems.

Second, with regard to the bottoming out of analysis in your molecular framework, I have two comments. (A) In the human brain, even the lowest-level nodes have some associations with lateral or higher-level nodes. So it is not as if they are totally devoid of grounding and, thus, of some source of further explanation. Explanation would not bottom out at such nodes, though it could lead to circular activation. I certainly admit there are limits to the extent to which a subjective consciousness can obtain information about itself. There is no way a consciousness can model all of the computation that gives rise to it.
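To illustrate the point (a purely hypothetical sketch, not a claim about your framework: the node names, links, and traversal are my own invention): even a leaf-level node can have outgoing associations, so activation spread from it circulates through lateral and higher-level nodes instead of dead-ending.

    # Hypothetical sketch: a "lowest-level" red-receptor node still has
    # lateral and upward associations, so activation spread from it reaches
    # other nodes (and can even loop back) rather than bottoming out.
    from collections import deque

    associations = {
        "red-receptor-17": ["red", "visual-field-region-3"],  # leaf, yet linked
        "red": ["color", "stop-sign", "blood"],
        "visual-field-region-3": ["visual-field"],
        "color": ["red"],                                      # circular path back
        "stop-sign": [], "blood": [], "visual-field": [],
    }

    def spread(start, hops):
        """Breadth-first spread of activation for a fixed number of hops."""
        active, frontier = {start}, deque([(start, 0)])
        while frontier:
            node, depth = frontier.popleft()
            if depth == hops:
                continue
            for nbr in associations.get(node, []):
                if nbr not in active:
                    active.add(nbr)
                    frontier.append((nbr, depth + 1))
        return active

    print(sorted(spread("red-receptor-17", 2)))
    # Activation has reached lateral and higher-level nodes, not a dead end.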

But this statement misses the actual point that I was trying to make.

I postulated the "analysis mechanism" to be specifically concerned with delivering answers to the question "What exactly is the nature of my subjective experience of [x], as opposed to the nature of all the extraneous connections and associations that [x] has with other concepts".

This question defines the Hard Problem of consciousness. Therefore, I am ONLY interested, in my paper, in addressing the issue of what happens when the analysis mechanism tries to answer that question.

You keep referring to all the other questions that a person could ask about (e.g.) the color red. My paper makes no reference to any of those other questions that can be asked, and in fact the paper specifically and deliberately excludes all of those questions as being irrelevant.

But in spite of that, you keep repeating that these questions have some relevance. They do not.

This is exactly the same as confusing the distinction between the easy and hard problems: by mentioning these other senses of the "explanation" of redness, you have skipped right back into Easy problems that have nothing to do with the issue at hand.



(B) But the mere fact that your molecular framework is limited in what it can understand of its own computation and representation through self-analysis does not mean that scientific inquiry is equally limited. Just as scientific measurements, instruments, tests, and computing have enabled humans to learn things about physical reality that are far beyond the natural capabilities of our senses and minds to perceive and understand, there is reason to believe the aids and methods of science can enable us to understand much about the brain that is not available to us through the type of introspective analysis to which your paper is limited.

The paper says precisely why the above statement is not to be believed: it gives a mechanism to explain why there is an absolute barrier beyond which no explanation can go.

What you are doing is saying "Oh, but future science must not be underestimated.....", but you are ignoring the way in which my argument addresses ALL of future science, regardless of how clever it might get.

You are still just asserting that science may eventually crack the problem, and ignoring my request that you say exactly why the argument fails.

Until you address that argument, this attack is a dead end.


Third, I am not making any claim about what percentage of consciousness can be understood by science compared with what percentage of physical reality can be understood by science. Instead, I am saying that I think great strides can be made in the understanding of consciousness, and that much of what we currently consider unknowable about consciousness, very possibly including many things that now fall under Chalmers’ hard problem of consciousness, we will either know about, or have reasonable theories about, within fifty years, if not sooner.

See above comment.

Please explain why my argument, which demonstrates that this statement of yours cannot be true, is somehow wrong.

You do not address my argument, only repeat that it is wrong without saying why.


########### Richard said ############>>>>

One of the things that we can explain is that when someone (e.g. Ed Porter) looks at a theory of consciousness, he will *have* to make the statement that the theory does not address the hard problem of consciousness.

So the truth is that the argument (at that stage of the paper) is all about WHY people have trouble being specific about what consciousness is and what is the explanation of consciousness.  It does this by an "in principle" argument:  in principle, if the analysis mechanism hits a dead end when it tries to analyze certain concepts, the net result will be that the system comes to conclusions like "There is something real here, but we can say nothing about it".
###### My response ######>>>>

Again, your paper relates to limits on the understanding of consciousness that can be reached by introspection. It does not prove that similar limits will constrain obtaining information about the operation of the mind from other techniques such as brain scanning, brain science, and computer brain simulation.

On the contrary, it does exactly that.

IMO, you have simply not understood *how* it does that.

In fact, you appear not to have understood the above statement that I made, just by itself, so it is difficult to reply without repeating it.


########### Richard said ############>>>>

Notice that the argument is not (at this stage) saying that "consciousness is a bunch of dead end concept atoms", it is simply saying that those concept atoms cause the system to make a whole variety of statements that exactly coincide with all the statements that we see philosophers (amateur and professional) making about consciousness.
###### My response ######>>>>

Your paper is clearest in its attempt to explain why there are limits to what an introspective use of consciousness can explain about certain aspects of itself. It makes a much less convincing argument for why, despite these limitations, the system has any sense of subjective consciousness at all.

You claim the concept atoms of your system

“cause the system to make a whole variety of statements that exactly coincide with all the statements that we see philosophers…making about consciousness.”

But there is very little in your paper that explains how they accomplish anything at all, other than an inability to introspectively answer certain questions, and that the system somehow senses these inexplicable things to be real.

So it actually explains very little about consciousness.


Alas, other philosophers (knowledgeable about the whole field) have given an exactly opposite reaction, saying that the argument clearly does account, in principle, for many of the problematic questions.

You appear to be unable to see what they see.

I am having difficulty giving extra explanation to you that allows you to see what they see.

I am close to giving up.




############### My prior post ################>>>>

 THE REAL WEAKNESS OF YOUR PAPER IS THAT IT PUTS WAY TOO MUCH EMPHASIS ON THE PART OF YOUR MOLECULAR FRAMEWORK THAT ALLEGEDLY BOTTOMS OUT, AND NOT ENOUGH ON THE PART OF THE FRAMEWORK YOU SAY REPORTS A SENSE OF REALNESS DESPITE SUCH BOTTOMING OUT -- THE SENSE OF REALNESS THAT IS MOST ESSENTIAL TO CONSCIOUSNESS.

########### Richard said ############>>>>

I could say more.  But why is this a weakness?  Does it break down, become inconsistent, or leave something out?  I think you have to be more specific about why this is a weak point.

Every part of the paper could do with some expansion.  Alas, the limit is six pages...

###### My response ######>>>>

It is a weakness because the real mystery of consciousness, the real hard problem of consciousness --- whether Chalmers recognizes it or not --- is not that there are certain things it cannot introspectively explain, but rather that the human mind has a sense of subjective reality and self-awareness. Your paper spends much more time on the easier, less important problem, and much less on the harder, much more important one. And without the sense of awareness and realness --- whose source you never convincingly explain, other than to say it exists and comes from the operation of the framework --- even the major conclusion you do draw would be meaningless.

So a “weakness” it is.

Sorry, but the above paragraph is deeply confused.

Chalmers DOES say something equivalent to "the real hard problem of consciousness [is] that the human mind has a sense of subjective reality and self-awareness".

I also say the same thing.

You appear not to recognize that we say this, nor that I address this explicitly.







############### My prior post ################>>>>

 This is the part of your system that is providing the subjective experience that you say is providing the “realness” to your conscious experience.  This is where your paper should focus.  How does it provide this sense of realness?

########### Richard said ############>>>>

Well, no, this is the feature of the system that explains why we end up convinced that there is something, and that it is inexplicable.

###### My response ######>>>>

Exactly. That’s what I was trying to say. Without the explanations you just attributed to these features of the system, your paper amounts to virtually nothing, because there would be no feeling that something exists and is inexplicable, according to your own argument. You fail to explain how such feelings rise to a level that is conscious, or subjectively real, as you describe in your paper. That explanation is what any paper claiming to explain consciousness should focus on, i.e., what gives rise to the subjective sense of experience.

The paper does exactly what you ask here.

You have not shown any sign that you understand *how* it shows that.





########### Richard said ############>>>>

Remember:  this is a hypothesis in which I say "IF this mechanism exists, then we can see that all the things people say about consciousness would be said by an intelligent system that had this mechanism."  Then I am inviting you to conclude, along with me:  "It seems that this mechanism is so basically plausible, and it also captures the expected locutions of philosophers so well, that we should conclude (Ockham's Razor) that this mechanism is very likely to both exist and be the culprit."

###### My response ######>>>>

You have lost me here. If, when you say “IF this mechanism exists,” the mechanism you are referring to is a part of your molecular framework that actually and EXPLAINABLY gives rise to the subjective experience we humans call consciousness, then, YES, you would have really said a lot. Unfortunately your paper gives extremely minimal explanation of how this sense of subjective consciousness arises, other than through what is presumably a form of spreading activation in a network of patterns presumably learned experientially, together with their associative connections.

These statements do not in any way summarize or relate to what was said in the paper.



Now that is exactly the type of network that I believe is most likely to give rise to human-like consciousness in humans and AGIs, but your discussion sheds little light on how conscious awareness is derived from such a system, and on what additional features such a system would have to have to be conscious.

On the contrary, it does exactly that.



########### Richard said ############>>>>

The "realness" issue is separate.

Concepts are judged "real" by the system to the extent that they play a

very strongly anchored and consistent role in the foreground.  Color

concepts are anchored more strongly than any other, hence they are very

real.

I could say more about this, for sure, but most of the philosophers I

have talked to have gotten this point fairly quickly.

###### My response ######>>>>

It is my belief that such anchoring, if it had certain properties, and occurred in a system with certain properties --- properties which you do not describe --- may well give rise to a subjective experience of realness. I believe grounding plays a major role in consciousness. Of course, there are two senses of “real” here. The first is a belief or mental experience that something has subjective reality, which a dream or a really strong hallucination might have. The second is a belief that something exists in what our minds perceive to be objective reality. As I have indicated above, I think the more important of these two, when trying to understand consciousness, is the first.

########### Richard said ############>>>>

The data produced by neuroscience, at this point, is extremely confusing.  It is also obscured by people who are themselves confused about the distinction between the Hard and Easy problems.  I do not believe you can deduce anything meaningful from the neural research yet. See Loosemore and Harley (forthcoming).

###### My response ######>>>>

Maybe you can’t deduce anything meaningful from the neural research that has been done to date, but I and a lot of other people can --- including brain surgeons who prove the value of their understanding with successful operations that are probably performed many times a day around the world.

This has got nothing whatever to do with the question under discussion.






############### My prior post ################>>>>

 P.S. (With regard to the alleged bottoming out reported in your paper: as I have pointed out in previous threads, even the lowest-level nodes in any system would normally have associations that would give them a type and degree of grounding and, thus, further meaning.  So spreading activation would normally not bottom out when it reaches the lowest-level nodes.  But it would be subject to circularity, or to a lack of information about the lowest nodes other than what could be learned from their associations with other nodes in the system.)



########### Richard said ############>>>>

The spreading activation you are talking about is not the same as the operation of the analysis mechanism.  You are talking about things other than the analysis mechanism that I have posited.  Hence not relevant.

###### My response ######>>>>

Richard, you said above that

“Concepts are judged "real" by the system to the extent that they play a very strongly anchored and consistent role in the foreground.  Color concepts are anchored more strongly than any other, hence they are very real.”

This means that you yourself consider low-level color inputs to have a type of grounding in the foreground of your molecular framework, which is one aspect of what I was talking about and what you are denying immediately above. The very sense of realness which you claim the system associates with these color input nodes is part of the total analysis the system makes of them. That colors play a major role in visual sensation, and the nature of the role they play in that sensation (such as appearing to fill areas in a roughly 2D perception of the world), are analyzable attributes of them. You are free to define “analysis” narrowly to avoid my argument, but that would serve no purpose other than trying to protect your vanity.


Okay, this is where the discussion ends, at the words "your vanity".

You have no idea that the "analysis mechanism" I use MUST be defined that narrowly if it is to address the Hard Problem.

Furthermore, you have now done what you always do at this point in my attempts to respond to your points: you start making comments about me personally. Thus: "but that would serve no purpose other than trying to protect your vanity".



I now regret wasting so much time attempting to respond to what seemed to be a politely worded set of questions about the paper. I noticed your post because Ben quoted it, and I noticed that it was not abusive. So I made the mistake of engaging you again.

In all of the above discussion I find myself trying to explain that you must not confuse the hard problem and the non-hard problems of consciousness, because the non-hard problems have nothing whatever to do with the argument. I have now done this - what? - a dozen times at least. I have been doing this from the beginning, but instead of listening to my repeated attempts to get you to understand the distinction, you only repeat the same mistake over and over again.



This is what I get for trying to engage in debate with someone who picks up a technical distinction (e.g. the Hard/Non-Hard problem of consciousness) from Wikipedia and then, a couple of days later, misapplies the concept left, right, and center.


Sorry: I did make one last effort, but there is a limit to how many times I can say the same thing and be ignored every time.




Richard Loosemore




If one is talking about the sense of experience and the mental associations a normal human mind attaches to the color red, one is talking about a complex of activations that involves much more than the activation of a single lowest-level color-sensing node, or even of a contiguous group of them. For example, the system has to have a higher-level concept to let it know that a given red in one part of the visual field is the same color as that red in another part of the visual field. This higher-level concept is vital to any analysis of the meaning of the activation of a lowest-level red receptor node.

So the things you are rejecting as irrelevant are not only clearly relevant to your own argument; they are relevant to any honest attempt to understand the subjective experience of the activation of the types of lower-level nodes your paper places such an emphasis on.

Ed Porter

=============================

P.S. I have gotten sufficiently busy that I should not have taken the time to write this response, but because of the thoughtfulness of your email below, I felt obligated to respond. Unfortunately, if you choose to respond to this email, I will only have time to read it, not to prepare a response.

Ed Porter
-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2008 8:21 PM
To: [email protected]
Subject: Re: [agi] A paper that actually does solve the problem of consciousness

Ed Porter wrote:



  Richard,



 (the second half of this post, the part starting with the all-capitalized heading, is the most important)



 I agree with your extreme cognitive semantics discussion.



 I agree with your statement that one criterion for “realness” is the directness and immediateness of something’s phenomenology.



 I agree with your statement that, based on this criterion for “realness,” many conscious phenomena, such as qualia, which have traditionally fallen under the hard problem of consciousness, seem to be “real.”



 But I have problems with some of the conclusions you draw from these things, particularly in your “Implications” section at the top of the second column on Page 5 of your paper.



 There you state



 “…the correct explanation for consciousness is that all of its various phenomenological facets deserve to be called as “real” as any other concept we have, because there are no meaningful objective standards that we could apply to judge them otherwise.”



 That aspects of consciousness seem real does not provide much of an “explanation for consciousness.”  It says something, but not much.  It adds little to Descartes’ “I think therefore I am.”  I don’t think it provides much of an answer to any of the multiple questions Wikipedia associates with Chalmers’ hard problem of consciousness.

I would respond as follows.  When I make statements about consciousness deserving to be called "real", I am only saying this as a summary of a long argument that has gone before.  So it would not really be fair to declare that this statement of mine "says something, but not much" without taking account of the reasons that have been building up toward that statement earlier in the paper.  I am arguing that when we probe the meaning of "real" we find that the best criterion of realness is the way that the system builds a population of concept-atoms that are (a) mutually consistent with one another, and (b) strongly supported by sensory evidence (there are other criteria, but those are the main ones).  If you think hard enough about these criteria, you notice that the qualia-atoms (those concept-atoms that cause the analysis mechanism to bottom out) score very high indeed.  This is in dramatic contrast to other concept-atoms like hallucinations, which we consider 'artifacts' precisely because they score so low.  The difference between these two is so dramatic that I think we need to allow the qualia-atoms to be called "real" by all our usual criteria, BUT with the added feature that they cannot be understood in any more basic terms.

Now, all of that (and more) lies behind the simple statement that they should be called real.  It wouldn't make much sense to judge that statement by itself.  Only judge the argument behind it.

 You further state that some aspects of consciousness have a unique status of being beyond the reach of scientific inquiry, and give a purported reason why they are beyond such a reach. Similarly you say:

 ”…although we can never say exactly what the phenomena of consciousness are, in the way that we give scientific explanations for other things, we can nevertheless say exactly why we cannot say anything: so in the end, we can explain it.”

 First, I would point out, as I have in my prior papers, that given the advances that are expected to be made in AGI, brain scanning and brain science in the next fifty years, it is not clear that consciousness is necessarily any less explainable than are many other aspects of physical reality.  You admit there are easy problems of consciousness that can be explained, just as there are easy parts of physical reality that can be explained. But it is not clear that the percentage of consciousness that will remain a mystery in fifty years is any larger than the percentage of basic physical reality that will remain a mystery in that time frame.

The paper gives a clear argument for *why* it cannot be explained.

So to contradict that argument (to say "it is not clear that consciousness is necessarily any less explainable than are many other aspects of physical reality") you have to say why the argument does not work.  It would make no sense for a person to simply assert the opposite of the argument's conclusion, without justification.

The argument goes into plenty of specific details, so there are many kinds of attack that you could make.

 But even if we accept as true your statement that certain phenomena of consciousness are beyond analysis, that does little to explain consciousness.  In fact, it does not appear to answer any of the hard problems of consciousness.  For example, just because (a) we are conscious of the distinction used in our own mind’s internal representation between sensation of the colors red and blue, (b) we allegedly cannot analyze that difference further, and (c) that distinction seems subjectively real to us --- that does not shed much light on whether or not a p-zombie would be capable of acting just like a human without having consciousness of red and blue color qualia.

I think that the actual argument has not been summarized correctly here. At that point in the paper, the claim is that WE CAN UNDERSTAND WHY THINKING SYSTEMS *MUST* MAKE STATEMENTS ABOUT HOW THERE IS THIS THING CALLED "CONSCIOUSNESS" THAT SEEMS INEXPLICABLE.  One of the things that we can explain is that when someone (e.g. Ed Porter) looks at a theory of consciousness, he will *have* to make the statement that the theory does not address the hard problem of consciousness.

So the truth is that the argument (at that stage of the paper) is all about WHY people have trouble being specific about what consciousness is and what is the explanation of consciousness.  It does this by an "in principle" argument:  in principle, if the analysis mechanism hits a dead end when it tries to analyze certain concepts, the net result will be that the system comes to conclusions like "There is something real here, but we can say nothing about it".  Notice that the argument is not (at this stage) saying that "consciousness is a bunch of dead end concept atoms", it is simply saying that those concept atoms cause the system to make a whole variety of statements that exactly coincide with all the statements that we see philosophers (amateur and professional) making about consciousness.





 It is not even clear to me that your paper shows consciousness is not an “artifact,” as your abstract implies.  Just because something is “real” does not mean it is not an “artifact” in many senses of the word, such as an unintended, secondary, or unessential aspect of something.

Artifacts are explainable as due to something else that has a physical explanation: they are a malfunction. That is not the case with the situation I am proposing.



 THE REAL WEAKNESS OF YOUR PAPER IS THAT IT PUTS WAY TOO MUCH EMPHASIS ON THE PART OF YOUR MOLECULAR FRAMEWORK THAT ALLEGEDLY BOTTOMS OUT, AND NOT ENOUGH ON THE PART OF THE FRAMEWORK YOU SAY REPORTS A SENSE OF REALNESS DESPITE SUCH BOTTOMING OUT -- THE SENSE OF REALNESS THAT IS MOST ESSENTIAL TO CONSCIOUSNESS.

I could say more.  But why is this a weakness?  Does it break down, become inconsistent, or leave something out?  I think you have to be more specific about why this is a weak point.

Every part of the paper could do with some expansion.  Alas, the limit is six pages...





 It is my belief that if you want to understand consciousness in the context of the types of things discussed in your paper, you should focus on the part of the molecular framework, which you imply is largely in the foreground, that prevents the system from returning with no answer, even when trying to analyze a node such as a lowest-level input node for the color red in a given portion of the visual field.



 This is the part of your molecular framework of which you say that

 “…because of the nature of the representations used in the foreground, there is no way for the analysis mechanism to fail to return some kind of answer, because a non-existent answer would be the same as representing the color of red as “nothing,” and in that case all colors would be the same.” (Page 3, Col. 2, first full paragraph.)



 It is also presumably the part of your molecular framework that will

 “…report that ‘There is definitely something that it is like to be experiencing the subjective essence of red, but that thing is ineffable and inexplicable.’ ” (Page 3, Col. 2, 2nd full paragraph.)



 This is the part of your system that is providing the subjective experience that you say is providing the “realness” to your conscious experience.  This is where your paper should focus.  How does it provide this sense of realness?

Well, no, this is the feature of the system that explains why we end up convinced that there is something, and that it is inexplicable.

Remember:  this is a hypothesis in which I say "IF this mechanism exists, then we can see that all the things people say about consciousness would be said by an intelligent system that had this mechanism."  Then I am inviting you to conclude, along with me:  "It seems that this mechanism is so basically plausible, and it also captures the expected locutions of philosophers so well, that we should conclude (Ockham's Razor) that this mechanism is very likely to both exist and be the culprit."

The "realness" issue is separate.

Concepts are judged "real" by the system to the extent that they play a

very strongly anchored and consistent role in the foreground.  Color

concepts are anchored more strongly than any other, hence they are very

real.

I could say more about this, for sure, but most of the philosophers I

have talked to have gotten this point fairly quickly.





 Unfortunately, your description of the molecular framework provides some, but very little, insight into what might be providing this subjective sense of experience, which is so key to the conclusions of your paper.







 In multiple prior posts on this thread I have said I believe the real source of consciousness appears to lie in such a molecular framework, but that to have anything approaching a human level of such consciousness, this framework, and the computations within it that give rise to consciousness, have to be extremely complex.  I have also emphasized that brain scientists who have done research on the neural correlates of consciousness tend to indicate humans usually only report consciousness of things associated with fairly broadly spread neural activation, which would normally involve many billions or trillions of inter-neuron messages per second.

The data produced by neuroscience, at this point, is extremely confusing.  It is also obscured by people who are themselves confused about the distinction between the Hard and Easy problems.  I do not believe you can deduce anything meaningful from the neural research yet. See Loosemore and Harley (forthcoming).

 I have posited that widespread activation of the nodes directly and indirectly associated with a given “conscious” node provides dynamic grounding for the meaning of the conscious node.







 As I have pointed out, we know of nothing about physical reality that is anything other than computation (if you consider representation to be part of computation).  Similarly, there is nothing our subjective experience can tell us about our own consciousness that is other than computation.  One of the key words we humans use to describe our consciousness is “awareness.”  Awareness is created by computation.  It is my belief that this awareness comes from the complex, dynamically focused, and meaningful way in which our thought processes compute in interaction with themselves.







 Ed Porter







 P.S. (With regard to the alleged bottoming out reported in your paper: as I have pointed out in previous threads, even the lowest-level nodes in any system would normally have associations that would give them a type and degree of grounding and, thus, further meaning.  So spreading activation would normally not bottom out when it reaches the lowest-level nodes.  But it would be subject to circularity, or to a lack of information about the lowest nodes other than what could be learned from their associations with other nodes in the system.)



The spreading activation you are talking about is not the same as the operation of the analysis mechanism.  You are talking about things other than the analysis mechanism that I have posited.  Hence not relevant.

Regards

Richard Loosemore













 -----Original Message-----
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, November 19, 2008 1:57 PM
 To: [email protected]
 Subject: Re: [agi] A paper that actually does solve the problem of consciousness







 Ben Goertzel wrote:

> Richard,
>
> I re-read your paper and I'm afraid I really don't grok why you think it solves Chalmers' hard problem of consciousness...
>
> It really seems to me like what you're suggesting is a "cognitive correlate of consciousness", to morph the common phrase "neural correlate of consciousness" ..
>
> You seem to be stating that when X is an unanalyzable, pure atomic sensation from the perspective of cognitive system C, then C will perceive X as a raw quale ... unanalyzable and not explicable by ordinary methods of explication, yet, still subjectively real...
>
> But, I don't see how the hypothesis
>
> "Conscious experience is **identified with** unanalyzable mind-atoms"
>
> could be distinguished empirically from
>
> "Conscious experience is **correlated with** unanalyzable mind-atoms"
>
> I think finding cognitive correlates of consciousness is interesting, but I don't think it constitutes solving the hard problem in Chalmers' sense...
>
> I grok that you're saying "consciousness feels inexplicable because it has to do with atoms that the system can't explain, due to their role as its primitive atoms" ... and this is a good idea, but, I don't see how it bridges the gap btw subjective experience and empirical data ..
>
> What it does is explain why, even if there *were* no hard problem, cognitive systems might feel like there is one, in regard to their unanalyzable atoms
>
> Another worry I have is: I feel like I can be conscious of my son, even though he is not an unanalyzable atom.  I feel like I can be conscious of the unique impression he makes ... in the same way that I'm conscious of redness ... and, yeah, I feel like I can't fully explain the conscious impression he makes on me, even though I can explain a lot of things about him...
>
> So I'm not convinced that atomic sensor input is the only source of raw, unanalyzable consciousness...







 My first response to this is that you still don't seem to have taken account of what was said in the second part of the paper  -  and, at the same time, I can find many places where you make statements that are undermined by that second part.







 To take the most significant example:  when you say:

 > But, I don't see how the hypothesis
 >
 > "Conscious experience is **identified with** unanalyzable mind-atoms"
 >
 > could be distinguished empirically from
 >
 > "Conscious experience is **correlated with** unanalyzable mind-atoms"







 ... there are several concepts buried in there, like [identified with], [distinguished empirically from] and [correlated with], that are theory-laden.  In other words, when you use those terms you are implicitly applying some standards that have to do with semantics and ontology, and it is precisely those standards that I attacked in part 2 of the paper.







 However, there is also another thing I can say about this statement, based on the argument in part one of the paper.







 It looks like you are also falling victim to the argument in part 1, at the same time that you are questioning its validity:  one of the consequences of that initial argument was that *because* those concept-atoms are unanalyzable, you can never do any such thing as talk about their being "only correlated with a particular cognitive event" versus "actually being identified with that cognitive event"!







 So when you point out that the above distinction seems impossible to make, I say:  "Yes, of course:  the theory itself just *said* that!".







 So far, all of the serious questions that people have placed at the door of this theory have proved susceptible to that argument.







 That was essentially what I did when talking to Chalmers.  He came up with an objection very like the one you gave above, so I said: "Okay, the answer is that the theory itself predicts that you *must* find that question to be a stumbling block ..... AND, more importantly, you should be able to see that the strategy I am using here is a strategy that I can flexibly deploy to wipe out a whole class of objections, so the only way around that strategy (if you want to bring down this theory) is to come up with a counter-strategy that demonstrably has the structure to undermine my strategy.... and I don't believe you can do that."







 His only response, IIRC, was "Huh!  This looks like it might be new. Send me a copy."







 To make further progress in this discussion it is important, I think, to understand both the fact that I have that strategy, and also to appreciate that the second part of the paper went far beyond that.











 Lastly, about your question re. consciousness of extended objects that are not concept-atoms.







 I think there is some confusion here about what I was trying to say (my fault perhaps).  It is not just the fact of those concept-atoms being at the end of the line, it is actually about what happens to the analysis mechanism.  So, what I did was point to the clearest cases where people feel that a subjective experience is in need of explanation - the qualia - and I showed that in that case the explanation is a failure of the analysis mechanism because it bottoms out.







 However, just because I picked that example for the sake of clarity, that does not mean that the *only* place where the analysis mechanism can get into trouble must be just when it bumps into those peripheral atoms.  I tried to explain this in a previous reply to someone (perhaps it was you):  it would be entirely possible that higher level atoms could get built to represent [a sum of all the qualia-atoms that are part of one object], and if that happened we might find that this higher level atom was partly analyzable (it is composed of lower level qualia) and partly not (any analysis hits the brick wall after one successful unpacking step).
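 As a purely illustrative sketch of that partial bottoming-out (the atoms and their structure below are invented for the example, not taken from the paper):

    # Illustration only: analysis unpacks composite atoms into constituents,
    # but primitive (peripheral) atoms return no further structure.
    constituents = {
        "apple-percept": ["red-quale", "round-shape", "apple-smell-quale"],
        "round-shape": ["edge-curvature"],  # one more analyzable level
        # primitive qualia-atoms have no entry: analysis bottoms out on them
    }

    def analyze(atom):
        parts = constituents.get(atom)
        if parts is None:
            # The mechanism must return *something*, but that something has
            # no internal structure: "real, but nothing can be said about it".
            return atom + ": <no further analysis possible>"
        return {atom: [analyze(p) for p in parts]}

    from pprint import pprint
    pprint(analyze("apple-percept"))

 The composite unpacks for a step or two, and then every branch returns an answer with no internal structure, which is the partial bottoming-out behavior described above.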







 So when you raise the example of being conscious of your son, it can be partly a matter of the consciousness that comes from just consciousness of his parts.







 But there are other things that could be at work in this case, too.  How much is that "consciousness" of a whole object an awareness of an internal visual image?  How much is it due to the fact that we can represent the concept of [myself having a concept of object x] ... in which case the unanalyzability is deriving not from the large object, but from the fact that [self having a concept of...] is a representation of something your *self* is doing .... and we know already that that is a bottoming-out concept.







 Overall, you can see that there are multiple ways to get the analysis mechanism to bottom out, and it may be able to bottom out partially rather than completely.  Just because I used a particular example of bottoming-out does not mean that I claimed this was the only way it could happen.







 And, of course, all those other claims of "conscious experiences" are widely agreed to be more dilute (less mysterious) than such things as qualia.



















 Richard Loosemore
