On Tue, Jul 17, 2001 at 01:56:59PM -0500, Marvin Long, Jr. wrote:
> 
> SPOILER WARNING
> 
> On Tue, 17 Jul 2001, Erik Reuter wrote:
> 
> 
> I can program my computer to tell me it loves me, but somehow that
> wouldn't be nearly as satisfying as having my wife tell me the same thing.
> Let me ask this:  at the beginning of the movie, when Prof. Hobby asks
> the female robot what love is, and she answers with a series of nothing
> but physiological responses (rapid breathing, etc.)--is she right?

Her answer only covered the physiological aspect.

> Gee, there goes 90% of the SF from my bookshelf.  Henceforth let it be
> decreed that no author of science fiction may posit a technology not
> easy to extrapolate directly from something that already exists!  Let it
> never be written, let it never be done!  :-(

Cute. 90% of "science fiction" IS bad. One quality of good science
fiction is that it posits technologies, or more broadly a technological
environment, that is plausible. Some authors manage it, but most fail
to create a plausible technological environment. Some of my favorite SF
does not merely extrapolate current technology, but doing that
believably is difficult! Most who try it fail. It is easier, a
higher-probability play, to extrapolate from current technology.

> More to the point:  a David-style artificial mind is extrapolated from
> two things, not one--modern computers, which can be rebooted, and the
> human brain, which cannot.  If the hypothetical imaginary technology falls
> a little more on one side than the other, in a matter that is essentially
> moot because either alternative destroys the personality in question, I
> can't see why that should make much difference in terms of plausibility.

So you wouldn't mind if you had to throw away your computer rather than
sell or give it away when you decided you didn't want it anymore?

>
> > As I said in my original post, the technology was just all out of
> > whack.  If they could make such sophisticated robots, with the
> > intelligence that they had, and for them to be so life-like and
> > long-lived, then they should have also been able to create a pretty
> > good simulacrum of a given person. Since it probably wouldn't feel
> > to the person who was simulated that they had any continuity with
> > the robot, it would not be any form of immortality. But I think
> > there would be plenty of people who would try it, nevertheless.
>
> Why?  If I'm gonna die anyway, I might want my robots to remember me
> in some way, but what could I get from spending my waning time and
> resources to train a robot to do an impression of me, however good it
> is?  It might do my ego some good to think the robots that remain will
> take something of my knowledge or insight into the future, but what do
> I care if a robot can walk and pick my nose the way I do?

Ever heard of time capsules? Or of parents who wish their children to
follow in their footsteps? How about autobiographies? Memorials?

> > I have different standards for science fiction and for fairy-tales
> > (fantasy). I wouldn't be complaining if this movie were not billed
> > as science fiction.
>
> Forgive me, but it sounds as though any science fiction that doesn't
> reinforce a particular philosophical assumption of yours ceases to
> qualify.  What A.I. does is examine the assumption that digitally
> simulated emotions are the same as real ones. I can't imagine a more
> science fictional theme.

An SF theme is not enough to make good SF. Implementation is critical.
The setting, the technological environment, must be consistent and
plausible. That is a large part of what makes it science fiction and not
just fiction.

-- 
"Erik Reuter" <[EMAIL PROTECTED]>       http://www.erikreuter.com/
