SPOILER WARNING

On Tue, 17 Jul 2001, Erik Reuter wrote:

> That's a poor analogy. Our computers aren't anywhere near as
> sophisticated as the robots they could make. And what's-his-name
> robot scientist at the beginning implied that they had been making
> robots almost as sophisticated for 50 years. The sophistication of
> their robots and no one having explored this before (after all, it's
> the first thing the movie makers thought of, so of course someone
> would try it!) is what I was questioning.

The movie doesn't really tell us it's never been tried, IIRC--maybe somebody tried it early on, and the best anybody's been able to do is to create robots like Joe Gigolo, who does a reasonable simulation of lust but doesn't actually *feel* it.

The point you're missing--and which every review I've read to date misses--is that teaching a robot to feel isn't just a matter of making sufficiently sophisticated hardware. It's a matter of understanding (or redefining) emotions and consciousness themselves on some highly reductive level that allows them to be recreated in a way that's as real as, say, Dolly the cloned sheep. The issue isn't making robots that can model or exhibit emotion, it's making robots that can *feel* emotion. Since we have absolutely *no* idea how to do this today, it's perfectly reasonable to suppose that even a more technically advanced society would find this quite a tough nut to crack. Obviously they've made robots that can act like they love you--the new challenge, both technical and moral, is in creating robots that really do.

Compare David and Joe. It's easy to imagine how to program something that would act like Joe. Not easy to accomplish, mind you, but we can easily imagine programming a set of behaviors that would enable a machine to act like a sexually compulsive rent-boy. That's because it's a fairly limited and shallow set of behaviors. (Of course, Joe appears to exceed his programming from time to time, suggesting there's more going on than his creators really intend, but that's a slightly different matter.)

David's trickier, since the point is not just to create a robot boy that can act like he loves his mommy, but to create a robot boy who really does, and who has all the vulnerabilities such dependence implies. Joe can't be hurt if a customer changes her mind about having sex with him. David *can* be hurt if Monica refuses to love him (unless we grant this premise, there's no story). The difference between these two states of being is a titanic chasm, in my opinion, and to blithely suppose that achieving David's state is just a matter of the machinery itself strikes me as absurd. It's like the difference between software and soul.

> I don't think it is reasonable. By far the most likely type of
> hardware that we will use to create a robot brain will be
> sufficiently like current computers that a reset function will be
> quite feasible. And desirable, unless you are trying to write a cute
> plot device into a story.

The truth is that we simply have no reason to assume this. To use the obvious example, an Asimovian positronic brain is nothing like a modern computer. If the writer wants to posit a whole new kind of hardware, he can, IMO.

Spielberg could, of course, have posited that the boy can be reset, and then written a plot in which the boy struggles not to be reset because it's equivalent to death--imagine if somebody reset your personality so that you had no memory of life since day one--but he didn't.
Being reset or being destroyed and replaced with another robot makes no difference in this context except one: being destroyed is more viscerally horrifying than being reset, but the difference to the personality being erased isn't significant.

> No, but the sophistication of the robots they built, combined with
> the fact that it could have been hundreds of years before the humans
> were wiped out, suggests that something could be done.

Since we have no knowledge of the technical details of the robots, there is nothing at all to suggest that human personality transfer should be considered possible or even likely in this particular fictional world. I agree, however, that the inability to survive an ice age, and the ice age itself, really need to be explained somehow.

> You don't have to have perfect transfer either, at least not
> preserving consciousness per se, but I think it is reasonable to
> believe that they could transfer quite a bit of knowledge and
> personality traits given the sophistication of their technology.

Knowledge, yes: just write it down. As for personality, why should we believe this? What about the creation of robots implies that they should also be able to "download" a personality from a living brain?

Marvin Long
Austin, Texas

Nuke the straight capitalist wildebeests for Buddha!
