[agi] AI Morality -- a hopeless quest

2003-02-12 Thread Arthur T. Murray
Alois Schicklgruber and his wife Klara probably did not give much thought to possible future aberrations when unser kleiner Adi was born to them on 20 April 1889. Our little Adolf Hitler was probably cute and cuddly like any other baby. No one could be expected to know whether he would grow into

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Bill Hibbard
Hi Arthur, On Wed, 12 Feb 2003, Arthur T. Murray wrote: . . . Since the George and Barbara Bushes of this world are constantly releasing their little monsters onto the planet, why should we creators of Strong AI have to take any more precautions with our Moravecian Mind Children than human

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
I don't think any human alive has the moral and ethical underpinnings to allow them to resist the corruption of absolute power in the long run. We are all kept in check by our lack of power, the competition of our fellow humans, the laws of society, and the instructions of our peers. Remove

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Arthur T. Murray wrote: [snippage] why should we creators of Strong AI have to take any more precautions with our Moravecian Mind Children than human parents do with their human babies? Here are three reasons I can think of, Arthur: 1) Because we know in advance that 'Strong AI', as you

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Brad Wyble wrote: I don't think any human alive has the moral and ethical underpinnings to allow them to resist the corruption of absolute power in the long run. I am exceedingly glad that I do not share your opinion on this. Human altruism *is* possible, and indeed I observe myself

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
I am exceedingly glad that I do not share your opinion on this. Human altruism *is* possible, and indeed I observe myself possessing a significant measure of it. Anyone doubting their ability to 'resist corruption' should not IMO be working in AGI, but should be doing some serious

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Bill Hibbard
On Wed, 12 Feb 2003, Arthur T. Murray wrote: The quest is as hopeless as it is with human children. Although Bill Hibbard singles out the power of super-intelligence as the reason why we ought to try to instill morality and friendliness in our AI offspring, such offspring are made in our own

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
I can't imagine the military would be interested in AGI, by its very definition. The military would want specialized AIs, constructed around a specific purpose and under their strict control. An AGI goes against everything the military wants from its weapons and agents. They train soldiers

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Eliezer S. Yudkowsky
Brad Wyble wrote: Tell me this, have you ever killed an insect because it bothered you? In other words, posthumanity doesn't change the goal posts. Being human should still confer human rights, including the right not to be enslaved, eaten, etc. But perhaps being posthuman will confer

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Kevin
Brad Wyble wrote: Tell me this, have you ever killed an insect because it bothered you? In other words, posthumanity doesn't change the goal posts. Being human should still confer human rights, including the right not to be enslaved, eaten, etc. But perhaps being

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread C. David Noziglia
Quoting Philip Sutton (to [EMAIL PROTECTED], Wednesday, February 12, 2003 2:55 PM, Re: [agi] AI Morality -- a hopeless quest): Brad, Maybe what you said below is the key to friendly GAI. I don't think any human

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Stephen Reed
On Wed, 12 Feb 2003, Brad Wyble wrote: I can't imagine the military would be interested in AGI, by its very definition. The military would want specialized AIs, constructed around a specific purpose and under their strict control. An AGI goes against everything the military wants from its

RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Ben Goertzel
As has been pointed out on this list before, the military IS interested in AGI, and primarily for information integration rather than directly weapons-related purposes. See http://www.darpa.mil/body/NewsItems/pdf/iptorelease.pdf for example. -- Ben G I can't imagine the military would be

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Brad Wyble wrote: Under the ethical code you describe, the AGI would swat them like a bug with no more concern than you swatting a mosquito. I did not describe an ethical code; I described two scenarios about a human (myself), then suggested the non-bug-swatting scenario was possible,

RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Stephen Reed
Daniel, For a start look at the IPTO web page and links from: http://www.darpa.mil/ipto/research/index.html DARPA has a variety of offices which sponsor AI related work, but IPTO is now being run by Ron Brachman, the former president of the AAAI. When I listened to the talk he gave at Cycorp in

RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Ben Goertzel
Steve, Ben, do you have any gauge as to what kind of grants are hot right now or what kind of narrow AI projects with AGI implications have recently been funded through military agencies? The list would be very long. Just look at the DARPA IPTO website for starters...