Arthur T. Murray wrote:
>
> [snippage]
> why should we creators of Strong AI have to take any
> more precautions with our Moravecian "Mind Children"
> than human parents do with their human babies?
>

Here are three reasons I can think of, Arthur:

1) Because we know in advance that 'Strong AI', as you put it, will be
very much smarter and very much more capable than we are; that is not
true in the human scenario.

2) If we don't get AI morality right the first time (or very close to
it), it's "game over" for the human race.

3) Attempting to develop 'Strong AI' without spending time getting the
morality bit correct may cause a governmental agency to squash you like
a bug.

And I didn't even have to think very hard to come up with those... I'm
sure there are other reasons.  Could you articulate the reasons why you
think the 'quest' is hopeless?

Michael Roy Ames

