SPOILER WARNING

On Tue, 17 Jul 2001, Erik Reuter wrote:

> >
> > Obviously they've made robots that can act like they love you--the new
> > challenge, both technical and moral, is in creating robots that really
> > do.
> 
> They are one in the same. Its not such a big challenge, given what else
> they can do.

a) You and I both know that "they are one and the same" is a
philosophical assertion, not a scientific one. b) Not knowing how they do
what they do in this fictional world, assuming they can do another,
possibly very difficult, thing is a matter of pure supposition.
 
> You are assuming there is something almost magical about emotions and
> love. I believe if we can ever do the intelligence part, the emotions
> will be a snap. Intelligence is hard; emotions are not.

Again, a purely philosophical assumption.  I think there is something very
organic about emotions and love, and that creating robots designed to do
specialized things--nannybots, sexbots, comedybots, etc.--can very likely
be done without any desire or plan to attempt to recreate emotions and
reflexive awareness of emotions.  Such specialized bots as Joe are very
complex problem-solving machines (one definition of a.i.), but the ability
to create them doesn't automatically imply the ability to make something
that approximates being human, or equivalently human, with respect to an
interior emotional landscape.

One can assert that there's no difference, of course, or that there is
no such thing as "an interior emotional landscape" and I think that
such assumptions are among the things the movie examines, but as far as I
know such an assertion has no grounding in data, theory, or science.  It's
an assertion that sounds scientific due to its lack of overt sentiment,
but it's really nothing more than one philosophical prejudice among many.
 
> > The difference between these two states of being is a titanic chasm,
> > in my opinion, and to blithely suppose that achieving David's state is
> > just a matter of the machinery itself strikes me as absurd.  It's like
> > the difference between software and soul.
> 
> But this is not science. As I said before, it is a fairy tale, not
> science fiction.

? If science fiction is not allowed to touch matters of the soul (loosely
construed--no implications of an afterlife or such) then you're going to
have to throw out a lot of stuff, I think.

Is it science fiction if it examines the assertion that the thing we
loosely call soul can be reduced to hardware and software?  No, it's not
science, but no form of science fiction is "science."  Can a scientific
understanding of humans be had if it doesn't also explain our propensity
for fairy tales?


> In case it is not clear, I totally disagree with your supposition that
> there is something special about emotions. I'm sure it would be easier
> to create an emotion simulation program than an intelligence simulation
> program.

I can program my computer to tell me it loves me, but somehow that
wouldn't be nearly as satisfying as having my wife tell me the same thing.
Let me ask this:  at the beginning of the movie, when Prof. Hobby asks
the female robot what love is, and she answers with a series of nothing
but physiological responses (rapid breathing, etc.)--is she right?  Was
making David, then, beside the point, because they had already
accomplished their goals?
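
The claim about programming a computer to say it loves you is literal;
the whole "declaration" takes only a few lines.  A minimal illustrative
sketch (Python, names invented here, not anything from the movie or this
thread):

```python
# A trivial "affection simulator": producing the words is easy,
# which is exactly why the words alone are unsatisfying.
def declare_love(name):
    # No interior state, no feeling -- just string formatting.
    return f"I love you, {name}."

print(declare_love("Marvin"))
```

That this is trivial is the point: the hard question is whether anything
beyond the output is going on.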

> > To use the obvious example, an Asimovian positronic brain is nothing
> > like a modern computer.
> 
> That was fiction. For plausibility, it is better to extrapolate from
> current SCIENCE, not from FICTION.

Gee, there goes 90% of the SF from my bookshelf.  Henceforth let it be
decreed that no author of science fiction may posit a technology not
easy to extrapolate directly from something that already exists!  Let it
never be written, let it never be done!  :-(

More to the point:  a David-style artificial mind is extrapolated from
two things, not one--modern computers, which can be rebooted, and the
human brain, which cannot.  If the hypothetical imaginary technology falls
a little more on one side than the other, in a matter that is essentially
moot because either alternative destroys the personality in question, I
can't see why that should make much difference in terms of plausibility.
 
> > If the writer wants to posit a whole new kind of hardware, he can,
> > IMO.
> 
> Sure. But if it is implausible enough, I don't consider it science
> fiction.

Surely you'll grant that your own subjective evaluation of "implausible"
is not definitive? <blinks innocently>  You're entitled to it, of course.

> It does not have to be downloading (meaning some sort of
> hardware<->brain connection). There are many other possibilities. One
> simple example would be to have the robot follow you around and learn to
> mimic you. This could be combined with direct programming of important
> traits and knowledge.

Which would benefit me how?  The superbots clearly reflect their human
origin, but if they have the ability to modify themselves and improve
their designs over time, which seems to me to be implied, what motive
would they have to retain superficial mimicry of my personal traits?  That
data might go into an encyclopedia somewhere, but that's about it.
 
> As I said in my original post, the technology was just all out of whack.
> If they could make such sophisticated robots, with the intelligence that
> they had, and for them to be so life-like and long-lived, then they
> should have also been able to create a pretty good simulacrum of a given
> person. Since it probably wouldn't feel to the person who was simulated
> that they had any continuity with the robot, it would not be any form of
> immortality. But I think there would be plenty of people who would try
> it, nevertheless.

Why?  If I'm gonna die anyway, I might want my robots to remember me in
some way, but what could I get from spending my waning time and resources
to train a robot to do an impression of me, however good it is?  It might
do my ego some good to think the robots that remain will take something of
my knowledge or insight into the future, but what do I care if a robot can
walk and pick my nose the way I do?

> I have different standards for science fiction and for fairy-tales
> (fantasy). I wouldn't be complaining if this movie were not billed as
> science fiction.

Forgive me, but it sounds as though any science fiction that doesn't
reinforce a particular philosophical assumption of yours ceases to
qualify.  What A.I. does is examine the assumption that digitally
simulated emotions are the same as real ones. I can't imagine a more
science fictional theme.  It doesn't give a clear answer.  It employs
elements of fairy tales (which could include fantasy and religion and the
humanities generally) because those are the ways in which humans, as a
rule, approach issues of emotion and genuineness of humanity.  Without using
these kinds of tests of David's "realness" there would be no story, only a
naked assertion of yes, no, or maybe.

Marvin Long
Austin, Texas

Nuke the straight capitalist wildebeests for Buddha!
