On Thu, Jun 23, 2011 at 3:24 PM, meekerdb <meeke...@verizon.net> wrote:
> On 6/23/2011 10:29 AM, Rex Allen wrote:
>>
>> On Tue, Jun 21, 2011 at 1:44 PM, meekerdb<meeke...@verizon.net>  wrote:
>>
>>>
>>> On 6/21/2011 8:17 AM, Bruno Marchal wrote:
>>>
>>>>
>>>> But comp denies that "we can prove that a machine can think". Of course
>>>> we can prove that some machine has this or that competence. But for
>>>> intelligence/consciousness, this is not possible. (Unless we are not
>>>> machines. Some non-machine could prove that some machines are
>>>> intelligent, but this is purely academic until we find something which
>>>> is both a person and a non-machine.)
>>>>
>>>
>>> But of course we can prove that a machine can think to the same degree
>>> we can prove other people think.  That we cannot prove it from some
>>> self-evident set of axioms is completely unsurprising.  This comports
>>> with my idea that with the development of AI the "question of
>>> consciousness" will come to be seen as archaic, like "What is life?".
>>>
>>
>> Actually, I think you may have a point.  The question of "what is
>> life" is really not a scientific question.  And yet, I am alive.
>>
>> In the same way, the question of "what is consciousness" is not a
>> scientific question either.  And yet, I am conscious.  Consciousness
>> exists.
>>
>> Science is just not applicable to these questions, because they have
>> nothing to do with the core purpose of science: prediction.
>>
>> "Instrumentalism is the view that a scientific theory is a useful
>> instrument in understanding the world. A concept or theory should be
>> evaluated by how effectively it explains and predicts phenomena, as
>> opposed to how accurately it describes objective reality."
>>
>> So science is about formulating frameworks for understanding
>> observations in a way that allows for accurate prediction.  To ascribe
>> "metaphysical truth" to any of these frameworks is to take a leap of
>> faith *beyond* science.
>>
>> Taking the view of "instrumentalism with a pinch of common sense",
>> there is no reason to believe that every question that can be asked
>> can be answered scientifically.  Is there?
>>
>> Unless and until consciousness can be made useful for prediction, it
>> will remain invisible and irrelevant to science.
>>
>>
>> Rex
>>
>
> But I think it will be useful for prediction, as it is already useful in
> predicting what other people will do; what Dennett calls "the intentional
> stance".

The "intentional stance" is no more a scientific theory than the
"physical stance" is.

It's just a way of thinking, an attitude that one can take towards
something, which may or may not be useful.

We can only observe our own conscious experience.  For everyone else
we just see behaviors.  Since science is based on observation, science
deals with behaviors, not conscious experience.


Dennett's paper "Personal and Sub-Personal Levels of Explanation" has
a good discussion of this, I think:

"When we have said that a person has a sensation of pain, locates it
and is prompted to react in a certain way, we have said all there is
to say within the scope of this vocabulary.  We *can* demand further
explanation of how a person happens to withdraw his hand from the hot
stove, but we cannot demand further explanations in terms of 'mental
processes'.  Since the introduction of unanalysable mental qualities
leads to a premature end to explanation, we may decide that such
introduction is wrong, and look for alternative modes of explanation.
If we do this we must abandon the explanatory level of people and
their sensations and activities and turn to the sub-personal level of
brains and events in the nervous system.  But when we abandon the
personal level in a very real sense we abandon the subject matter of
pains as well.  When we abandon mental process talk for physical
process talk we cannot say that the mental process analysis of *pain*
is wrong, for our alternative analysis cannot be an analysis of pain
at all, but rather of something else - the motions of human bodies or
the organization of the nervous system.  Indeed, the mental process
analysis of pain is correct.  Pains are feelings, felt by people, and
they hurt.  People can discriminate their pains and they do this not
by applying any tests, or in virtue of any describable qualities in
their sensations.  Yet we do talk about the qualities of sensations
and we act, react, and make decisions in virtue of these qualities
that we find in our sensations.

Abandoning the personal level of explanation is just that:
*abandoning* the pains and not bringing them along to identify with
some physical event.  The only sort of explanation in which 'pain'
belongs is non-mechanistic; hence no identification of pains or
painful sensations with brain processes makes sense, and the physical,
mechanistic explanation can proceed with no worries about the absence
in the explanation of any talk about the discrimination of unanalysable
qualities.

[...]

The philosophy of mind initiated by Ryle and Wittgenstein is in large
measure an analysis of the concepts we use at the personal level, and
the lesson to be learned from Ryle's attacks on 'para-mechanical
hypotheses' and Wittgenstein's often startling insistence that
explanations come to an end rather earlier than we had thought is
that the personal and sub-personal levels must not be confused."



> We already do it for machines, we know what their sensors detect
> and so we attribute "awareness" of some signals to them.  We
> anthropomorphize them and this has some value in predicting their behavior.

It's a useful "shortcut", yes.  A calculational device, with no
metaphysical significance.  Right?
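
To make the "calculational device" point concrete, here's a toy sketch
in Python (my own illustration; the thermostat and all the names in it
are hypothetical, not anything from Dennett): we predict the machine's
behavior from an attributed "belief" and "desire" alone, never touching
its internals.  The attribution earns its keep in prediction, and
nothing more.

# A minimal sketch of the intentional stance as a predictive shortcut.
# We attribute a "belief" (the sensed temperature) and a "desire" (the
# setpoint) to a thermostat and predict its behavior from those
# attributions alone, without modeling any circuitry.

class Thermostat:
    """The machine itself: a trivial controller with one sensor input."""

    def __init__(self, setpoint):
        self.setpoint = setpoint

    def act(self, sensed_temp):
        if sensed_temp < self.setpoint:
            return "heat on"
        return "heat off"


def intentional_stance_prediction(believes_temp, wants_temp):
    """Predict behavior from attributed belief/desire, not from physics.

    The attribution carries no metaphysical weight; it is just a
    compact model that happens to track what the device will do.
    """
    return "heat on" if believes_temp < wants_temp else "heat off"


if __name__ == "__main__":
    t = Thermostat(setpoint=20.0)
    for temp in (15.0, 25.0):
        predicted = intentional_stance_prediction(temp, wants_temp=20.0)
        actual = t.act(sensed_temp=temp)
        assert predicted == actual
        print(f"sensed {temp}: predicted {predicted!r}, actual {actual!r}")

The predictions come out right, but nothing about the attribution of
"belief" and "desire" follows from that; it is just a cheap model.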



>  As AI becomes an engineering discipline we will develop fine distinctions
> between different kinds of awareness and how they can be implemented.

Awareness in the sense of the Chalmersian "easy problems", sure.  But
this doesn't touch the hard problem of consciousness.


>  "Where is its consciousness?" will seems as archaic a question as looking
> at an automobile and asking "Where is its animation?"

If it's seen as archaic, it will only be in the sense that
"metaphysical interpretations" of scientific theories are archaic.

If science becomes explicit in its embrace of instrumentalism, then
sure, the "bad old days" of metaphysical realism will be considered
archaic.


> As Bruno points out,
> one will never be able to prove, in the mathematical sense of proof, that a
> given entity is conscious.  But as we do with other people, we will prove,
> in the scientific sense, that they are.

It seems unlikely that we will "prove" any such thing.  Maybe we will
prove that it is useful to take the intentional stance towards such
machines, just as it's sometimes useful to take that stance towards
people.

But sometimes it's useful to take the physical stance towards
machines and towards people.

And sometimes it's useful to take the design stance towards machines
and towards people.

But nothing is "proved", except the usefulness of doing so in some
circumstances but not in others.


> In Bruno's logical hierarchy, as I
> understand it, there are not many different kinds of consciousness, only
> awareness and self-awareness.  But I think there are other useful
> distinctions that Bruno would lump together as mere competences.

Awareness and self-awareness aren't related to the question of
consciousness.  They fall well within the realm of the easy problems.


Rex
