On 24 Jun 2011, at 17:49, Rex Allen wrote:
On Thu, Jun 23, 2011 at 3:24 PM, meekerdb <meeke...@verizon.net>
On 6/23/2011 10:29 AM, Rex Allen wrote:
On Tue, Jun 21, 2011 at 1:44 PM, meekerdb<meeke...@verizon.net>
On 6/21/2011 8:17 AM, Bruno Marchal wrote:
But comp denies that "we can prove that a machine can think". Of course we can prove that some machine has this or that competence. But for intelligence/consciousness, this is not possible. (Unless we are not machines. Some non-machine can prove that some machines are conscious, but this is purely academical until we find something which is both
But of course we can prove that a machine can think to the same degree that we can prove other people think. That we cannot prove it from some self-evident set of axioms is completely unsurprising. This is consistent with my idea that with the development of AI the "question of consciousness" will come to be seen as archaic, like "What is life?".
Actually, I think you may have a point. The question of "what is life" is really not a scientific question. Yet, nevertheless, I am alive.
In the same way, the question of "what is consciousness" is not a scientific question either. And yet, I am conscious. Consciousness exists.
Science is just not applicable to these questions, because these
questions have nothing to do with the core purposes of science:
"Instrumentalism is the view that a scientific theory is a useful
instrument in understanding the world. A concept or theory should be
evaluated by how effectively it explains and predicts phenomena, as
opposed to how accurately it describes objective reality."
So science is about formulating frameworks for understanding observations in a way that allows for accurate prediction. To attribute "metaphysical truth" to any of these frameworks is to take a leap of faith *beyond* science.
Taking the view of "instrumentalism with a pinch of common sense",
there is no reason to believe that every question that can be asked
can be answered scientifically. Is there?
Unless and until consciousness can be made useful for prediction, it
will remain invisible and irrelevant to science.
But I think it will be useful for prediction, as it is already useful in predicting what other people will do; what Dennett calls "the intentional stance".
The "intentional stance" is no more a scientific theory than the
"physical stance" is.
It's just a way of thinking, an attitude that one can take towards
something, which may or may not be useful.
We can only observe our own conscious experience. For everyone else
we just see behaviors. Since science is based on observation, science
deals with behaviors, not conscious experience.
Dennett's paper "Personal and Sub-Personal Levels of Explanation" has
a good discussion of this, I think:
"When we have said that a person has a sensation of pain, locates it
and is prompted to react in a certain way, we have said all there is
to say within the scope of this vocabulary. We *can* demand further
explanation of how a person happens to withdraw his hand from the hot
stove, but we cannot demand further explanations in terms of 'mental
processes'. Since the introduction of unanalysable mental qualities
leads to a premature end to explanation, we may decide that such
introduction is wrong, and look for alternative modes of explanation.
If we do this we must abandon the explanatory level of people and
their sensations and activities and turn to the sub-personal level of
brains and events in the nervous system. But when we abandon the
personal level in a very real sense we abandon the subject matter of
pains as well. When we abandon mental process talk for physical
process talk we cannot say that the mental process analysis of *pain*
is wrong, for our alternative analysis cannot be an analysis of pain
at all, but rather of something else - the motions of human bodies or
the organization of the nervous system. Indeed, the mental process
analysis of pain is correct. Pains are feelings, felt by people, and
they hurt. People can discriminate their pains and they do this not
by applying any tests, or in virtue of any describable qualities in
their sensations. Yet we do talk about the qualities of sensations
and we act, react, and make decisions in virtue of these qualities
that we find in our sensations.
Abandoning the personal level of explanation is just that:
*abandoning* the pains and not bringing them along to identify with
some physical event. The only sort of explanation in which 'pain'
belongs is non-mechanistic; hence no identification of pains or
painful sensations with brain processes makes sense, and the physical,
mechanistic explanation can proceed with no worries about the absence
in the explanation of any talk about the discrimination of unanalysable
qualities.
The philosophy of mind initiated by Ryle and Wittgenstein is in large
measure an analysis of the concepts we use at the personal level, and
the lesson to be learned from Ryle's attacks on 'para-mechanical
hypotheses' and Wittgenstein's often startling insistence that
explanations come to an end rather earlier than we had thought is
that the personal and sub-personal levels must not be confused."
We already do it for machines, we know what their sensors detect and so we attribute "awareness" of some signals to them. We anthropomorphize them and this has some value in predicting their behavior.
It's a useful "shortcut", yes. A calculational device, with no
metaphysical significance. Right?
As AI becomes an engineering discipline we will develop finer distinctions between different kinds of awareness and how they can be implemented.
Awareness in the sense of the Chalmersian "easy problems", sure. But
this isn't related to the question of consciousness.
"Where is its consciousness?" will seems as archaic a question as
at an automobile and asking "Where is its animation?"
If it's seen as archaic, it will only be in the sense that "metaphysical interpretations" of scientific theories are archaic. If science becomes explicit in its embrace of instrumentalism, then sure, the "bad old days" of metaphysical realism will be considered archaic.
As Bruno points out, one will never be able to prove, in the mathematical sense of proof, that a given entity is conscious. But as we do with other people, we will prove, in the scientific sense, that they are.
It seems unlikely that we will "prove" any such thing. Maybe we will prove that it is useful to take the intentional stance towards such machines, just as it's sometimes useful to take that stance towards people.
But, sometimes it's useful to take the physical stance towards
machines and towards people.
And sometimes it's useful to take the design stance towards machines
and towards people.
But nothing is "proved", except the usefulness of doing so in some
circumstances but not in others.
In Bruno's logical hierarchy, as I
understand it, there are not many different kinds of consciousness,
awareness and self-awareness. But I think there are other distinctions that Bruno would lump together as mere competences.
Awareness and self-awareness aren't related to the question of
consciousness. They fall well within the realm of the easy problems.
I have deduced this from some posts. You and Dennett are begging the question. Why should science be based only on observation? What would that mean? Science, and already observation itself, are based on many layers of theories, some innate in our brain, some developed through symbolic reasoning, reflection and imagination. The computationalist theory illustrates well that we *can* explain the third person description of the first person discourses. So we can make progress.
To abandon the scientific study of consciousness is like abandoning the notion of God to the authorities. As I said: it is a form of "shut up and calculate". Instrumentalism amounts to abandoning the fundamental questions to the engineering sciences. Many engineers do understand that this will lead to less genuine engineering in the long run, so that eventually, even instrumentalists with long term goals can defend a non-instrumentalist philosophy here and now.
And then, if we assume like Dennett the comp hypothesis, we have no choice but to recover the physical relations from the number relations (unless there is a flaw ...). Even an instrumentalist cannot ignore that. Comp, among other possible everything-like ideas, leads to a real, concrete and terribly complex mathematical measure problem.
Consciousness is not like life. We can say that molecular biology has solved the conceptual problem of life, and this has evacuated vitalism.
But comp, per se, does not solve the consciousness problem: it
transforms it into a conceptual matter problem, which can be solved
only by evacuating materialism, by reducing the origin of matter to a
machine psychological self-perception problem.
The reduction of the mind-body problem to the arithmetical bodies appearance problem *has* been done. It is not well known because philosophers of mind, especially the computationalists, for some reason, ignore everything about (theoretical) computer science. That is just a contingent fact which slows down progress in the field. The philosophy curriculum should be revised.
You received this message because you are subscribed to the Google Groups
"Everything List" group.