Thanks, Eric, for this assistance.
It will be interesting to see how the list responds.
Nick
Nicholas S. Thompson
Emeritus Professor of Psychology and Ethology,
Clark University ([email protected])
http://home.earthlink.net/~nickthompson/naturaldesigns/
----- Original Message -----
From: ERIC P. CHARLES
To: Russ Abbott
Cc: [email protected]; [email protected]
Sent: 6/18/2009 1:34:35 PM
Subject: Do robots dream of electric illusions? or Bladerunner, the Realist's Cut
Greetings all,
Let me begin by seconding Steve's point that Nick's perspective is indeed a fun
one to try on. In that vein, I will try to give the response Nick is not giving
(or is giving more obliquely):
The word "experience," especially when combined with "conscious," is a perennial
problem. People think those words add much more to the conversation than they
actually do. This can be seen if we use words that pretty much mean the same
thing. Let us rephrase the question to be this: Do robots and computers suffer
from illusions? This simplifies things because the answer is obviously "Yes!"
If "suffers from" is still too much for you, substitute: Do robots and
computers fall prey to illusory effects? If that is still too much for you,
ask: Do robots and computers act exactly the way we act when we think we are
having illusory experiences?
There are many, many examples possible, but we can give an obvious
robot-applicable example as follows: Keep both eyes open and press under one of
them with your finger until you start to see double. For example, I now see two
lamps in front of me, whereas when I am not pressing on my eye there is only
one. I'm experiencing an illusion, right? Good! Now that we are agreed upon
that, imagine a well-calibrated robot that has two optical sensors and is
programmed to identify objects in its environment. Imagine that if the robot
were sitting where I am now sitting, it would see one lamp. Now imagine that I
press on one of the sensors so that they are no longer in alignment. It should
not be hard to additionally imagine that the robot now experiences two lamps.
What is there left to discuss?
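The thought experiment above can be made concrete with a toy sketch (my own illustration, not anything from the original message): a "robot" that fuses readings from two optical sensors. Each sensor reports an apparent bearing to the lamp; if the two fused estimates agree within a tolerance, the robot counts one lamp, and if a sensor is pushed out of alignment, the images no longer fuse and it reports two. The function name, angles, and tolerance are all invented for illustration.

```python
def count_lamps(left_bearing, right_bearing, tolerance=0.05):
    """Return how many lamps the robot 'experiences'.

    Bearings are in radians. When the two sensor images fall within
    the fusion tolerance, they are matched as one object; otherwise
    they are treated as two distinct objects.
    """
    if abs(left_bearing - right_bearing) <= tolerance:
        return 1  # the two images fuse into a single lamp
    return 2      # misaligned images: the robot "sees double"

# Well calibrated: both sensors roughly agree, so one lamp is reported.
print(count_lamps(0.30, 0.31))        # -> 1

# "Press on" one sensor by adding a misalignment offset: two lamps.
print(count_lamps(0.30, 0.31 + 0.2))  # -> 2
```

Nothing in the sketch appeals to anything beyond alignment and matching, which is the point of the argument: the robot's "illusion" is fully described by its sensors and its object-identification rule.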
Continuing to speak as Nick, I can assure you that if you think there is
something else left to discuss, I will have trouble understanding what it is,
and you will have trouble trying to explain it. Any talk of granting the
"normal meaning" of words will be completely lost on me, because WHAT I HAVE
SAID ABOVE USES THE NORMAL MEANING OF THE WORDS. The normal meaning of "I
experience that lamp" is that there is a lamp over there, and the lamp I
experience IS the one over there. The lay meaning of the term, and normal usage
of it by anyone who is not having an intentionally contrived conversation,
involves no dualism whatsoever. --- I experience the lamp that is on the table
in front of me, and when I press my eye I experience two lamps on the table in
front of me. The same goes for our particular robot. If you want to know
whether the robot consciously experiences two lamps, you will need to explain
to me how "consciously experiencing" two lamps, or worse, "subjectively
experiencing" two lamps, differs significantly from experiencing two lamps.
Unless you can tell me the difference, I have answered the bloody question.
--------------
A second issue seems to be the metaphor of feeling. Surely it is a metaphor: To
feel something is to touch it. We say "I feel anger" to describe a situation as
similar to "I feel the keyboard." We say "I can no longer feel love" to
describe a situation as similar to "I can no longer feel anything from the
waist down." Unfortunately, trickery is involved, as something weird happens in
our minds when we change from saying "I am angry" to "I feel anger." In the
first case, it is clear that I = a body that is in a given state. In the
second case, there seems to be a second "I" that is not the body, but is
commenting on the state of the body. This just leads into silly confusion. If
your question is "Can we make a robot such that it acts angrily?" the answer
is, or soon will be, an obvious yes. Can we make a robot that notices when it
is acting angrily? Yes. Can we make a robot that tells us when it is acting
angrily? Yes. Does that mean we have a robot that knows when it is angry? So
far as I can tell, yes. What else do you think people are doing?
To make the above point more clear: Sometimes I know that I am acting angrily.
At those times, I may say (to you or to myself) "I am angry," "I feel
angry," or "I feel anger." In any case, all I am doing is commenting on my
current state. I have noticed it, I am conscious of it, I am responding to it,
whatever you like. It is NOT that talking is important; it is merely that
talking often takes the shape of behavior in reference to other behavior (or
some other experience, if you don't want to go behaviorist); it is a
meta-behavior (or a meta-experience). The experiencing of the anger is an
experiencing of some-thing, just like the experiencing of the lamp. Nick wants
anger to be in behavioral terms, most on the list would probably prefer some
specified internal state, but no matter for this point: once anger is
happening, experiencing the anger is merely being responsive to that particular
variety of happening. This is more obvious when it fails to occur: Sometimes
I act angrily without knowing it. In those cases, the anger is just as much
there; it is just as real whether or not I am conscious of it. In these cases I
would not tell you that I am angry; I may even vehemently deny it. The meta
part of the equation is missing. Same exact situation when we talk about robots
or computers.
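The anger/meta-behavior distinction can also be sketched in code (again, my own toy framing, not Eric's): "anger" is a first-order state of the machine, and the self-report is a second-order behavior that responds to that state. Crucially, the state is exactly the same whether or not the meta-behavior is present. The class name, the `arousal` variable, and the threshold are all invented for illustration.

```python
class Robot:
    """A machine with a first-order state and an optional meta-behavior."""

    def __init__(self, self_monitoring=True):
        self.arousal = 0.0                    # first-order bodily state
        self.self_monitoring = self_monitoring

    def provoke(self):
        self.arousal = 0.9                    # the robot now "acts angrily"

    def acting_angrily(self):
        return self.arousal > 0.5             # the anger itself: just a state

    def report(self):
        # Meta-behavior: behavior in reference to other behavior.
        if self.self_monitoring and self.acting_angrily():
            return "I am angry"
        return "I am fine"                    # the anger may go unnoticed

aware = Robot(self_monitoring=True)
aware.provoke()
print(aware.report())            # -> "I am angry"

unaware = Robot(self_monitoring=False)
unaware.provoke()
print(unaware.acting_angrily())  # -> True: the anger is just as real
print(unaware.report())          # -> "I am fine": the meta part is missing
```

The second robot acts angrily while vehemently denying it, which is exactly the failure case described above: the first-order state is present and the meta-behavior is absent.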
Again, I AM using the normal meaning of the words; you people are using strange
meanings that ONLY appear in weird conversations like this one.
----
Did any of that help?
Eric
P.S. I think the problem with "I feel nauseous" is handled in the above
conversation. The problem is merely that it seems like more than "I am
nauseous," but it really isn't. There is no inner "I" that is feeling
the inner nausea; there is only a body in a state. The language is
misleading; it is just a self-report that the room is spinning and my lunch is
likely to return.
P.P.S. Yes, Nick has been talking this way, and meaning it seriously for at
least 40 years now. You should all insist that he write a book on the subject
(with my aid) so that he will pester you less and produce something to clarify
these issues.
P.P.P.S. If the people involved in this conversation are anything like my
students, I suspect that anytime I, or Nick, rephrase your question into
something that seems answerable, you will quickly say "But that doesn't speak
to my question at all!" This leads me to the sneaking suspicion that the game
we are playing is not a question-answering game, but a game intended merely to
phrase a question in an unanswerable way and gaze in wonderment at our
linguistic ability. The question-answering game is fun; the
rephrase-to-be-unanswerable game is boring... minus the Irish Whiskey.
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org