On Tue, 17 Jul 2001, Joshua Bell wrote:

> "Marvin Long, Jr." <[EMAIL PROTECTED]> wrote:
> >
> >Compare David and Joe.  It's easy to imagine how to program something that
> >would act like Joe.  Not easy to accomplish, mind you, but we can easily
> >imagine programming a set of behaviors that would enable a machine to act
> >like a sexually compulsive rent-boy.  That's because it's a fairly limited
> >and shallow set of behaviors.
> 
> Harumph - I disagree.
> 
> I think it's easy to imagine a machine that can understand a natural 
> language and cultural context, in the same way that it's easy to imagine a 
> flying pig. We have small words to describe it and we can draw pictures of 
> it, and we have concepts that, when put together right, give us that.

Ok, that's true, I'll admit.  It's easy to think that if we put a
sufficiently large number of pieces together in just the right way, we'll
have a thinking machine--without having the slightest idea of how to
actually go about doing it.
 
> Getting a machine to comprehend and converse in a natural language and have 
> a very deep cultural context - like being able to bum a ride to Rouge City 
> from a bunch of teenagers with a flat tire - but yet not be capable of 
> higher emotions, that's difficult to imagine.

I'm afraid I just don't feel that way.  I have a very hard time
imagining a man-made device that has an inner emotional world.  On the
other hand, human beings have spent so much effort teaching themselves
how to act--programming themselves through culture so that, whether you
like it or not, in context A you behave one way and in context B
another--that I find it easy to imagine programming a robot with a large
number of rules of thumb that allow it to generate appropriate responses
to human behavior in order to put them at ease.  And yet I would have a
hard time believing that those expressions of "emotion" reflect the needs
and desires of a being with an ego, aspirations, fear of death and
failure, and so on.  Joe's bumming of a ride to Rouge City requires an
understanding of humans, but does it require any kind of empathy?  Maybe
it's enough for Joe, as a sexbot, to know that male humans of a certain
age are awash with hormones, and that to get a ride all he has to do is
offer the likelihood of satisfaction.

> Heck, some of our disposable toys are capable of basic emotions. The love 
> that David portrayed wasn't even a very complex sort of love, almost a 
> Tamagotchi parody of love. 

Well, we really have two issues:  a) how well does David express himself,
and b) what does he feel?  As the movie's tag line says, the robot's love
is real, but he is not.  Making a machine that can integrate well with a
human family may not be the same thing as a machine that really does
*need* a human family.  As for our disposable toys, we have there an
example of something designed to push well-known human emotional
buttons--but you don't seriously think they feel, do you?

> My dog has more compelling interactions with me - 
> she waits anxiously for me to get home, then runs and tells my wife because 
> she's so ecstatic; she also tries to dominate me by bringing me toys for her 
> to play with and assuming dominant poses, but she's happy when I dominate 
> her instead. She has particular moods in which she prefers to interact with 
> me or my wife, and would rather go off by herself than interact with the 
> other. In contrast, David seemed much more limited in his expressions of 
> love - but not all of the other stuff which is far beyond my puppy.

Mmm hmmm.  This example highlights one of the reasons I think that
emotions and even very advanced problem-solving a.i. are not necessarily
going to be naturally linked.

When I was in college I read a book called "The Passions" by Robert C.
Solomon, a philosophy prof at the University of Texas.  He argues that
emotions are judgements, assertions that events and situations are good or
bad, with different kinds of emotions reflecting different kinds of
subject matter (in a dizzying variety of possible ways). Such definitions
seem to imply that if you make something smart enough, it will start
making these kinds of judgements, i.e. have emotions.

But if we look at our "lower" mammal buddies, it seems clear that emotions
precede the evolution of language, cognition, and so on.  In a way they
provide a basic feedback mechanism that allows the dog to know if it's
healthy or not, if its standing with the pack is appropriate, and so on,
in order to trigger the appropriate instincts for whatever happens next.
I don't doubt for a moment that my dog feels, but I don't think my dog
thinks much about what it feels either.

So where in the design of robots will emotions have a chance to evolve?
They won't, unless it's by design.  Or unless their brains are designed in
a way that takes advantage of some kind of simulated evolution which
causes their intelligence to develop in a way that parallels human
development.  That's certainly plausible.  It's also plausible that we'll
simply create, in a brute-force kind of way, a very detailed model of
human behavior and expectations, and give robots sufficient instructions
to integrate with the model and perform their specialized tasks.  When
actual human behavior deviates from the model, the robot will be a trifle
befuddled and wait for a human cue on how to act next.  Though such
robots might be convincing actors, I don't think anyone will really
believe that they have morally compelling analogues to feelings which can
be hurt or soothed.

On the other hand, it's possible that emotions would simply emerge once
sufficient intelligence is obtained on a robot's part--but is there any
reason to think those emotions would in any way resemble the emotions of
human beings?  I don't think there is.  I think human emotions are deeply
entwined with the ways our bodies are regulated by the brain; robots, not
having human bodies, may have a totally different natural "palette" of
emotional responses.  Perhaps it's a mistake to be anthropocentric here.
Maybe Gigolo Joe has emotions, but they're something we wouldn't
recognize as such, and something that he himself wouldn't be able to
express in terms of his human-centered programming.  Perhaps he wouldn't
even recognize them in a way analogous to human feelings.


Marvin Long
Austin, Texas

Nuke the straight capitalist wildebeests for Buddha!
