On 5/9/2011 1:34 AM, Bruno Marchal wrote:
On 07 May 2011, at 19:36, meekerdb wrote:
On 5/7/2011 8:19 AM, John Mikes wrote:
I am gladly standing corrected about our fellow smart animals.
We speak about "self-awareness" as we humans identify it, in our
human terms and views.
Maybe other animals have different mental capabilities we cannot
pursue or understand, adjusted to the level of complexity
usable in their mentality. It may - or may not - scale simply
with their number of neurons, as our conventional sciences
teach. Or some may use senses we are deficient in, maybe totally
ignorant about. (We have a deficient sense of smell compared to a
dog, and lack the orientation senses of some birds, fish, and turtles.)
In our anthropocentric boasting we believe that only our human
observations are 'real'.
Thanks for setting me straight.
Not only do other species have different perceptual modalities; even
within "self-awareness" there are different kinds. Referring to
my favorite example of the AI Mars rover: such a rover has awareness
of its position on the planet. It has awareness of its battery
charge and the functionality of various subsystems. It has awareness
of its immediate goal (climb over that hill) and of some longer
mission (proceed to the gully and take a soil sample). It's not
aware of where these goals arise (as humans are not aware of why they
fall in love). It's not aware of its origins or construction. It's
not a social creature, so it's not aware of its position in a
society or of what others may think of it.
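The kinds of rover "awareness" listed above amount to a self-model the software maintains and can report on. As a minimal sketch (the class and field names here are hypothetical illustrations, not taken from any real rover codebase):

```python
# Hypothetical sketch of a rover's self-model: each field is one of the
# kinds of "awareness" described above. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class RoverSelfModel:
    # awareness of position on the planet (e.g. local x, y coordinates)
    position: tuple = (0.0, 0.0)
    # awareness of battery charge, as a fraction from 0.0 to 1.0
    battery_charge: float = 1.0
    # awareness of the functionality of various subsystems
    subsystem_ok: dict = field(
        default_factory=lambda: {"drive": True, "camera": True}
    )
    # awareness of the immediate goal and the longer mission
    immediate_goal: str = "climb over that hill"
    mission: str = "proceed to the gully and take a soil sample"

    def status_report(self) -> dict:
        """The rover can report on its own state -- a limited self-awareness.

        Note what is absent: nothing here represents where the goals came
        from, how the rover was built, or its standing among other agents.
        """
        faults = [name for name, ok in self.subsystem_ok.items() if not ok]
        return {
            "position": self.position,
            "battery": self.battery_charge,
            "faults": faults,
            "goal": self.immediate_goal,
        }


rover = RoverSelfModel()
rover.subsystem_ok["camera"] = False
print(rover.status_report()["faults"])  # -> ['camera']
```

The point of the sketch is that each "kind" of awareness is just a separately maintained piece of state, which is why a system can have some of them (position, battery, goals) while entirely lacking others (origins, social standing).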
I expect that when we have understood consciousness we will see that
it is a complex of many things, just as when we came to understand
life we found that it is a complex of many different processes.
Life and consciousness are different notions with respect to the kind
of explanation we can find for them. In the case of life, we can reduce a
third-person describable phenomenon to another one (for example, we can
argue that biology is in principle reducible to chemistry, which is
reducible to physics). For consciousness there is a hard problem, which
is the mind-body problem, and most people working on the subject agree
that it needs another sort of explanation. Then comp shows that,
indeed, part of that problem is that if we use the "traditional"
mechanistic rationale, we inherit the need to reduce physics to
number theory and intensional number theory, with a need to explicitly
distinguish the first-person and third-person views. In a sense, the
"hard problem" of consciousness leads to a "hard problem of matter"
(the first-person measure problem). Of course, I do think that
mathematical logic sheds much light on all of this, especially the
self-reference logics. Indeed, it makes the problem a purely
mathematical problem, and it shows quanta to be a particular case of
qualia. So we can say that comp has already solved the conceptual
problem of the origin of the consciousness/matter coupling, unless
someone can show that too many white rabbits remain predictable and
that normalization of them is impossible, in which case comp is refuted.
I don't see that reducing consciousness to mathematics is any different
from reducing it to physics. Aren't you still left with "the hard
problem," which now becomes "Why do these number relations produce
consciousness?" I don't think this "hard problem" is soluble. Rather,
what can be solved is how to make devices, like intelligent Mars rovers
and parts of brains the doctor can insert, which act conscious. And
further, to understand which computations correspond to different kinds
of thoughts, such as "awareness of self as a part of society,"
"feeling of guilt," or "I'm in Moscow." When we have that kind of
engineering mastery of AI, the "hard problem" will be seen as a
simplistic, archaic wrong question.
You received this message because you are subscribed to the Google Groups
"Everything List" group.