This commentary represents a fundamental misunderstanding of both the
paper I wrote and the background literature on the hard problem of
consciousness.
Richard Loosemore
Ed Porter wrote:
I respect the amount of thought that went into Richard’s paper
“Consciousness in Human and Machine: A Theory and Some Falsifiable
Predictions” --- but I do not think it provides a good explanation of
consciousness.
It seems to spend more time explaining the limitations on what we
can know about consciousness than explaining consciousness, itself.
What little the paper says about consciousness can be summed up roughly
as follows: consciousness is created by a system that can analyze, and
seek explanations from, some presumably experientially learned knowledge
base, based on associations between nodes in that knowledge base; the
system can determine when it cannot describe a given node further in
terms of relations to other nodes, but nevertheless senses that the
given node is real (such as the way it is difficult for a human to
explain what it is like to sense the color red).
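The mechanism summarized above can be caricatured in a few lines of
code. This is a minimal sketch under my own assumptions — the node
names, the graph structure, and the bottoming-out test are all
hypothetical illustrations, not drawn from the paper: a system explains
a concept by following its association links, and flags a node as
bottomed out when it has no further decomposition yet still registers
as real.

```python
# Hypothetical sketch of the "explanation" process described above.
# Node names, the graph structure, and the bottoming-out test are my
# own illustrative assumptions, not taken from the paper.

knowledge_base = {
    "body": {"decomposes_to": ["head", "torso", "arm", "leg"]},
    "head": {"decomposes_to": ["eye", "nose", "mouth"]},
    "red":  {"decomposes_to": []},   # a sensory primitive: no sub-parts
}

def explain(concept):
    """Describe a node in terms of related nodes, or report that the
    analysis bottoms out on a node sensed as real but unanalyzable."""
    parts = knowledge_base[concept]["decomposes_to"]
    if parts:
        return concept + " is composed of: " + ", ".join(parts)
    return concept + " bottoms out: sensed as real, but not further describable"

print(explain("body"))  # decomposes into named sub-parts
print(explain("red"))   # analysis bottoms out
```

On this sketch, "red" and "body" differ only in whether their
decomposition list is empty — which is exactly the distinction the
paper's summary turns on.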
First, I disagree with the paper’s claim that “analysis” of
conscious phenomena necessarily “bottoms” out more than analyses of many
other aspects of reality. Second, I disagree that conscious phenomena
are beyond any scientific explanation.
With regard to the first, I feel our minds contain substantial
memories of various conscious states, and thus there is actually
substantial experiential grounding of many aspects of consciousness
recorded in our brains. This is particularly true for the consciousness
of emotional states (for example, brain scans of very young infants
indicate that a high percentage of their mental activity occurs in the
emotional centers of the brain). I developed many of my concepts of how
to design an AGI
based on reading brain science and performing introspection into my own
conscious and subconscious thought processes, and I found it quite easy
to draw many generalities from the behavior of my own conscious mind.
Since I view the subconscious as being at once both a staging area
for, and a reactive audience for, conscious thoughts, I think one has to
view the subconscious and consciousness as parts of a functioning whole.
When I think of the color red, I don’t bottom out. Instead I have
many associations with my experiences of redness that provide it with
deep grounding. As with the description of any other concept, it is
hard to explain how I experience red to others, other than through
experiences we share relating to that concept. This would include
things we see in common to be red, or perhaps common emotional
experiences to seeing the red of blood that has been spilled in
violence, or the way the sensation of red seems to fill a two-dimensional
portion of an image that we perceive as a two-dimensional distribution
of differently colored areas. But I can communicate within my own mind
across time what it is like to sense red, such as in dreams when my eyes
are closed. Yes, the experience of sensing red does not decompose into
parts, the way the sensed image of a human body can be decomposed into
the seeing of subordinate parts, but that does not necessarily mean that
my sensing of something of a certain color of red is somehow more
mysterious than my sensing of a human body.
With regard to the second notion, that conscious phenomena are not
subject to scientific explanation, there is extensive evidence to the
contrary. The prescient psychological writings of William James, and
Dr. Alexander Luria’s famous studies of the effects of variously located
bullet wounds on the minds of Russian soldiers after World War II, both
illustrate that human consciousness can be scientifically studied. The
effects of various drugs on consciousness have been scientifically
studied. Multiple experiments have shown that the presence or absence
of synchrony between neural firings in various parts of the brain is
strongly correlated with human subjects reporting the presence or
absence, respectively, of conscious experience of various thoughts or
sensory inputs. Multiple studies have shown that electrode stimulation
of different parts of the brain tends to make the human consciousness
aware of different thoughts. Our own personal experiences with our own
individual consciousnesses, the current scientific levels of knowledge
about commonly reported conscious experiences, and increasingly more
sophisticated ways to correlate objectively observable brain states with
various reports of human conscious experience, all indicate that
consciousness already is subject to scientific explanation. In the
future, particularly with the advent of much more sophisticated brain
scanning tools, and with the development of AGI, consciousness will be
much more subject to scientific explanation.
Does this mean we will ever be able to ultimately explain what it
means to be conscious? Probably not, any more than we will ever be
able to fully explain many of the other big existential questions of
science, such as what time, space, and existence are. Just as we
humans have developed, from the grounding of experience, common-sense
notions of time, space, and existence, we also have common-sense notions
of consciousness and of its various states and behaviors. The only
difference is that, until recently, the tools necessary to objectively
measure consciousness have been much more primitive than our tools for
measuring many other aspects of physical reality. But that is starting
to change rapidly. If people like Kurzweil are right,
we will soon be able to measure brain states with amazing accuracy, and,
thus, we will soon be able to measure consciousness more completely than
many aspects of physical reality.
So what can we currently say, or guess, about consciousness, based
on introspection, brain science, and AGI?
First, just as there is no aspect of physical reality that can be
described that is anything other than representation and computation,
there is no aspect of consciousness that is anything other than
representation and computation.
Second, it follows from the first point that it should be possible
to create consciousness from a computer, but it is not clear exactly
what type and scale of computer would be required.
Third, there may well be different degrees of consciousness.
Arguably all computation, and thus all physical reality, is conscious,
but perhaps the particular type of computations we humans describe as
consciousness is an extremely complex computation that has multiple
characteristics that appear to distinguish it from most of the
computation that takes place in physical reality. For example, it is a
computation that can have many millions, billions, or arguably trillions
of rapidly changing states, in which various nodes in that state space
can respond with relative crispness to the states of a large number of
other nodes, including the history of its own state and that of other
nodes over various time scales, and in which computational focus can be
rapidly switched through competition between competing assemblies of
activated states. Experiments on the correlation of
neural synchrony and conscious experience indicate that conscious
awareness of a thought or sensation involves fairly widespread
coordinated behavior in the brain, which probably results in a
corresponding flood of activations related to a conscious concept that
sufficiently grounds that concept to give the brain an awareness of its
meaning.
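The competition-for-focus idea in the paragraph above can be sketched
as a toy winner-take-all model. This is purely illustrative — the decay
constant, the update rule, and the assembly names are my own assumptions,
not claims about actual neural dynamics: each assembly of activated
nodes carries an activation level that decays over time and is boosted
by incoming stimuli, and whichever assembly is most active at a given
step wins the computational focus.

```python
# Toy winner-take-all model of competing assemblies of activated nodes.
# The decay constant and update rule are illustrative assumptions, not
# brain data.

DECAY = 0.5  # fraction of activation each assembly retains per step

def step(assemblies, stimuli):
    """Decay every assembly, add any new stimulation, and return the
    name of the winning (most active) assembly -- the current focus."""
    for name in assemblies:
        assemblies[name] = assemblies[name] * DECAY + stimuli.get(name, 0.0)
    return max(assemblies, key=assemblies.get)

assemblies = {"loud_noise": 0.0, "ongoing_thought": 1.0}

print(step(assemblies, {}))                   # ongoing_thought keeps focus
print(step(assemblies, {"loud_noise": 2.0}))  # a strong stimulus seizes focus
```

Even this caricature exhibits the rapid focus-switching described above:
a sufficiently strong new stimulus outcompetes a decaying ongoing
activation and captures the focus in a single step.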
I could go on listing what I believe to be the probable
computational aspects of human consciousness, but I think those on this
list who understand some of the possible correlations between the
operation of a large (i.e., human-level) Novamente-like AGI and the
operation of their own consciousness --- as derived from a study of
their own subjective experience --- already understand much of what else
I would say.
An AGI billions of times less powerful and complex than the
self-aware computation supported by the human brain could meet the
definition of consciousness used in Richard’s paper. But it is doubtful
that such a minuscule computation would have much meaningful similarity
to the rich, full sense of consciousness in the human mind, and, thus, I
think
Richard’s paper sheds little light on the miracle that is a human
consciousness.
So although I appreciate the serious, careful, respectful tone of
Richard’s paper, I disagree strongly with about two thirds of its basic
conclusions.
Ed Porter
-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Friday, November 14, 2008 12:28 PM
To: [email protected]
Subject: [agi] A paper that actually does solve the problem of consciousness
I completed the first draft of a technical paper on consciousness the
other day. It is intended for the AGI-09 conference, and it can be
found at:
http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf
The title is "Consciousness in Human and Machine: A Theory and Some
Falsifiable Predictions", and it does solve the problem, believe it or not.
But I have no illusions: it will be misunderstood, at the very least.
I expect there will be plenty of people who argue that it does not solve
the problem, but I don't really care, because I think history will
eventually show that this is indeed the right answer. It gives a
satisfying answer to all the outstanding questions and it feels right.
Oh, and it does make some testable predictions. Alas, we do not yet
have the technology to perform the tests, but the predictions are on the
table, anyhow.
In a longer version I would go into a lot more detail, introducing the
background material at more length, analyzing the other proposals that
have been made and fleshing out the technical aspects along several
dimensions. But the size limit for the conference was 6 pages, so that
was all I could cram in.
Richard Loosemore
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com