Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Bill Hibbard
Hi Arthur,

On Wed, 12 Feb 2003, Arthur T. Murray wrote:

 . . .
 Since the George and Barbara Bushes of this world
 are constantly releasing their little monsters onto the planet,
 why should we creators of Strong AI have to take any
 more precautions with our Moravecian Mind Children
 than human parents do with their human babies?

Because of the power of super-intelligence.

Cheers,
Bill




Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble

I don't think any human alive has the moral and ethical underpinnings to allow them to 
resist the corruption of absolute power in the long run.  We are all kept in check by 
our lack of power, the competition of our fellow humans, the laws of society, and the 
instructions of our peers.  Remove a human from that support framework and you will 
have a human that will warp and shift over time.  We are designed to exist in a social 
framework, and our fragile ethical code cannot function properly in a vacuum.

This says two things to me.  First, we should try to create friendly AI's.  Second, we 
have no hope of doing it.  

We will forge ahead anyway because progress is inevitable.  We'll do as good a 
job as we can.  At some point humans will be obsolete, but that's no reason to turn 
back.

I'm also a strong proponent of the idea that humans can be made much better with the 
addition of enhancements, first through external add-ons (gargoyle-type apparatuses 
which enhance our minds through UIs that are as intuitively useful as a hammer), and 
later through direct enhancement of our brains.  

In summary, I think we are getting ahead of ourselves in thinking we even have the 
capacity to predict what a friendly AI will be, especially if said AI is 
hyperintelligent and self-modifying.  

-Brad
  





Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Arthur T. Murray wrote:

 [snippage]
 why should we creators of Strong AI have to take any
 more precautions with our Moravecian Mind Children
 than human parents do with their human babies?


Here are three reasons I can think of, Arthur:

1) Because we know in advance that 'Strong AI', as you put it, will be
very much smarter and very much more capable than we are - that is not
true in the human scenario.

2) If we don't get AI morality right the first time (or very close to
it), it's game over for the human race.

3) Attempting to develop 'Strong AI' without spending time getting the
morality-bit correct may cause a governmental agency to squash you like
a bug.

And I didn't even have to think very hard to come up with those... I'm
sure there are other reasons.  Could you articulate the reasons why you
think the 'quest' is hopeless?

Michael Roy Ames





Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Brad Wyble wrote:
 I don't think any human alive has the moral and ethical underpinnings
 to allow them to resist the corruption of absolute power in the long
 run.


I am exceedingly glad that I do not share your opinion on this.  Human
altruism *is* possible, and indeed I observe myself possessing a
significant measure of it.  Anyone doubting their ability to 'resist
corruption' should not IMO be working in AGI, but should be doing some
serious introspection/study of their goals and motivations. (No offence
intended, Brad)

Michael Roy Ames





Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
 
 I am exceedingly glad that I do not share your opinion on this.  Human
 altruism *is* possible, and indeed I observe myself possessing a
 significant measure of it.  Anyone doubting their ability to 'resist
 corruption' should not IMO be working in AGI, but should be doing some
 serious introspection/study of their goals and motivations. (No offence
 intended, Brad)
 
 Michael Roy Ames
 

None taken.  I'm altruistic myself, to a fault oftentimes. 

I have no doubt of my ability to help my fellow man.  I bend over backwards to help 
complete strangers without a care because it makes me feel good.  I am a friendly 
person.

But that word 'fellow' is the key.  It implies peers, relative equals.

I don't think I, or you, or anyone, can expect our personal ethical frameworks to 
function properly in a situation like the one a hyperintelligent AI will face.


Tell me this, have you ever killed an insect because it bothered you?


-Brad




Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Bill Hibbard
On Wed, 12 Feb 2003, Arthur T. Murray wrote:

 The quest is as hopeless as it is with human children.
 Although Bill Hibbard singles out the power of super-intelligence
 as the reason why we ought to try to instill morality and friendliness
 in our AI offspring, such offspring are made in our own image and
 likeness:  receptive to parental ideas, but ultimately on their own.

We had better not make them in our own image. We can make
them with whatever reinforcement values we like, rather
than the ones we humans were born with. Hence my often-repeated
suggestion that they reinforce behaviors according to human happiness.
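
(A minimal sketch of what "reinforce behaviors according to human happiness"
could look like in code. Everything here - the three toy actions and the
happiness_signal stand-in - is an invented assumption for illustration, not
Hibbard's actual design; the point is only that the designer, not evolution,
chooses the reinforcement signal.)

import random

ACTIONS = ["help_human", "ignore_human", "pursue_own_goal"]

def happiness_signal(action):
    # Hypothetical stand-in for a measurement of human happiness;
    # a real system would need an actual measurement, which is the hard part.
    return {"help_human": 1.0, "ignore_human": 0.0, "pursue_own_goal": -0.5}[action]

def train(episodes=1000, lr=0.1, epsilon=0.1):
    """Single-state value learning driven only by the happiness signal."""
    values = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)       # occasionally explore
        else:
            action = max(values, key=values.get)  # otherwise exploit the current estimate
        # Move the value estimate toward the observed reinforcement.
        values[action] += lr * (happiness_signal(action) - values[action])
    return values

if __name__ == "__main__":
    print(train())   # 'help_human' ends up with the highest learned value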

 DISCLAIMERS
 - In less than one hour I will go on a mountain day-trip
   and not be on-line to answer even the most personal queries.

Have fun,
Bill




Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble

I can't imagine the military would be interested in AGI, by its very definition.  The 
military would want specialized AI's, constructed around a specific purpose and under 
their strict control.  An AGI goes against everything the military wants from its 
weapons and agents.  They train soldiers for a long time specifically to beat the GI 
out of them (har har, no pun intended) so that they behave in a predictable manner in 
a dangerous situation.


And while I'm not entirely optimistic about the practicality of building ethics into 
AI's, I think we should certainly try, and that rules military funding right out. 

-Brad




Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Eliezer S. Yudkowsky
Brad Wyble wrote:

 Tell me this, have you ever killed an insect because it bothered you?

In other words, posthumanity doesn't change the goal posts. Being human 
should still confer human rights, including the right not to be enslaved, 
eaten, etc.. But perhaps being posthuman will confer posthuman rights that 
we understand as little as a dog understands the right to vote.
	-- James Hughes

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Kevin
Hello All..

After reading all this wonderful debate on AI morality and Eliezer's
people-eating AGI concerns, I'm left wondering this: Am I the *only* one here who
thinks that the *most* likely scenario is that such a thing as a
universe-devouring AGI is utterly impossible?

Everyone here seems to talk about this as if it were inevitable and probable.
Just because we can dream of something does not mean it can exist anywhere
except our dreams.  For instance, time travel has not been entirely refuted
as of yet, but that doesn't mean it is practically doable in any way.  These
discussions seem especially far-fetched given that this damn computer
doesn't have the slightest idea what I am typing in right now or what it
means ;)

I think an AGI is *very* plausible and probably imminent.  I also think
Eliezer is right in that we have to give strong consideration to the ethics
of such a machine as they could be dangerous, if even just economically
dangerous by crashing financial markets or whole countries' economies.  They
could also potentially use all our own wonderful killing machines against
us.  But the idea that they will manipulate matter and devour the universe
is ludicrous IMO.  I am much more inclined to believe that an AGI of
tremendous utility will emerge that will be a tool for our use in almost any
scientific/engineering/medical/educational domain.

If such a thing as a matter-manipulating machine were possible, it should
have happened already in this universe.  This leads to one of three
conclusions as far as I can tell:

1) matter-manipulating machines of such a grand scale are not possible
2) mmm's are possible, but never actually do such a thing
3) mmm's are possible and they created this current universe as a simulation
a la The Matrix.

My bet is on number 1.  But none of these three are horrible.  Of course, an
AGI could be destructive only on the local level, and that is where we have
to be wary.

I am glad that Ben is working on this and may be closer to succeeding than
anyone else.  I believe he sincerely has altruistic motives and is open-minded
enough to consider others' thoughts/concerns.  That will mean a lot as
this project progresses towards completion.

Kevin



- Original Message -
From: Eliezer S. Yudkowsky [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, February 12, 2003 1:53 PM
Subject: Re: [agi] AI Morality -- a hopeless quest


 Brad Wyble wrote:
  
   Tell me this, have you ever killed an insect because it bothered you?

 In other words, posthumanity doesn't change the goal posts. Being human
 should still confer human rights, including the right not to be enslaved,
 eaten, etc.. But perhaps being posthuman will confer posthuman rights that
 we understand as little as a dog understands the right to vote.
 -- James Hughes

 --
 Eliezer S. Yudkowsky  http://singinst.org/
 Research Fellow, Singularity Institute for Artificial Intelligence







Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread C. David Noziglia




  - Original Message -
  From: Philip Sutton
  To: [EMAIL PROTECTED]
  Sent: Wednesday, February 12, 2003 2:55 PM
  Subject: Re: [agi] AI Morality -- a hopeless quest

  Brad,

  Maybe what you said below is the key to friendly AGI:

   I don't think any human alive has the moral and ethical underpinnings
   to allow them to resist the corruption of absolute power in the long
   run. We are all kept in check by our lack of power, the competition
   of our fellow humans, the laws of society, and the instructions of our
   peers. Remove a human from that support framework and you will have a
   human that will warp and shift over time. We are designed to exist in
   a social framework, and our fragile ethical code cannot function
   properly in a vacuum.

  If we create a *community* of AGIs that have ethics-orientated
  architecture/ethical training then *they* might stand a chance of
  policing themselves.

  The situation is analogous to how we try (so far with not enough
  success, but with improving odds) to protect non-human species.

  Humans are the biggest threat to non-human species (well demonstrated)
  but there are more and more efforts being made by humans to stop that
  and to provide other species a chance to survive and continue evolving.

  I think that we need to structure and train AGIs knowing that the same
  scenario could be played out in relation to us as has happened between
  us and less powerful life - but we have the advantages that:
  - we've seen where WE went wrong
  - we can shape the deep ethical structure of AGIs from the start with
    this meta issue in mind.

  Cheers,
  Philip

I would second this, and note for the record the instance of the defense
of the treatment of women in traditional societies.

In places like Pakistan and Arabia, apologists defend the second-class
status of women by saying that they are being "protected" by their male
relatives.  But without the ability to protect their own safety and
status, such "protection" becomes honor killings and FGM.

The only guarantee of protection and rights is to give women the ability
to protect themselves, and that's a tremendous cultural change,
especially when such cultural traditions are claimed to be mandated by
God.

I guess the relevance here is that Philip has reached the core of this
issue.  There are no guarantees in this business, especially when trying
to predict the behavior of complex adaptive entities with cognitive
abilities that we are assuming will be greater than ours.  Thus, the only
safeguard is the classic one: division of power.


C. David Noziglia
Object Sciences Corporation
6359 Walker Lane, Alexandria, VA
(703) 253-1095

 "What is true and what is not?  Only God knows.  And, maybe, America."
   Dr. Khaled M. Batarfi, Special to Arab News

 "Just because something is obvious doesn't mean it's true."
   --- Esmerelda Weatherwax, witch of Lancre




Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Stephen Reed
On Wed, 12 Feb 2003, Brad Wyble wrote:


 I can't imagine the military would be interested in AGI, by its very
 definition.  The military would want specialized AI's, constructed
 around a specific purpose and under their strict control.  An AGI goes
 against everything the military wants from its weapons and agents.  They
 train soldiers for a long time specifically to beat the GI out of them (har
 har, no pun intended) so that they behave in a predictable manner in a
 dangerous situation.

 And while I'm not entirely optimistic about the practicality of
 building ethics into AI's, I think we should certainly try, and that
 rules military funding right out.

As an employee at Cycorp, a DARPA sub-contractor, and as project manager
for Cycorp's Terrorism Knowledge Base participation in the DARPA GENOA II
program, I believe that the military would be very interested in AGI
*because* of its definition.  A hierarchical military AGI would bottom out
in weapon systems, but the 'General' aspect of it facilitates coordination
at the battlespace level - involving forces from all services and allies.

With the growing military acceptance of Effects Based Operations, national
objectives assigned to our military can be accomplished by means other
than putting metal on a target.  EBO is implemented by Bayesian nets, which
I imagine will be in the toolbox of any AGI group posting here.
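
(For readers unfamiliar with the technique, here is a minimal sketch of
Bayesian-net inference by enumeration.  The three variables and their
probabilities are invented for this illustration and are not taken from any
real Effects Based Operations model.)

# Toy three-node Bayesian network with exact inference by enumeration.
P_disruption = {True: 0.3, False: 0.7}                     # P(Disruption)
P_comms_down = {True: {True: 0.8, False: 0.2},             # P(CommsDown | Disruption)
                False: {True: 0.1, False: 0.9}}
P_objective = {(True, True): {True: 0.9, False: 0.1},      # P(ObjectiveMet | Disruption, CommsDown)
               (True, False): {True: 0.6, False: 0.4},
               (False, True): {True: 0.4, False: 0.6},
               (False, False): {True: 0.1, False: 0.9}}

def joint(d, c, o):
    """Joint probability of one full assignment, from the chain rule."""
    return P_disruption[d] * P_comms_down[d][c] * P_objective[(d, c)][o]

def p_objective_given_comms_down():
    """P(ObjectiveMet = True | CommsDown = True), summing out Disruption."""
    numerator = sum(joint(d, True, True) for d in (True, False))
    denominator = sum(joint(d, True, o) for d in (True, False) for o in (True, False))
    return numerator / denominator

if __name__ == "__main__":
    print(round(p_objective_given_comms_down(), 3))   # about 0.787 with these toy numbers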

In opposition to the military aspect of your second point, I am very
comfortable with the building of ethics into an AI and at the moment
subscribe to the Friendly AI principles, which I believe can be
straightforwardly expressed in the Cyc symbolic logic vocabulary and whose causal
goal structure can be the basis of future Cyc active, self-improving
behavior.  Furthermore, I believe that our culture entrusts the military
with awesome destructive power because of civilian oversight, legal
constraints and the ethical structure developed and taught to military
personnel of all ranks.  For operational ethics, I certainly would accept
the teachings of our military academies and staff schools.  And I can
provide web site links for anyone interested.

Civilian oversight is already a reality for my AI work.  For example, the
GENOA II program is funded by the DARPA Information Awareness Office, whose
activities will be subject to congressional scrutiny and possible
termination if the current funding bill becomes law.

I believe that as evidence of AGI (e.g. software that can learn
from reading) becomes widely known: (1) the military will provide abundant
funding - possibly in excess of what commercial firms could do without a
consortium, and (2) public outcry will assure that military AGI development
has civilian and academic oversight.

-Steve

-- 
===
Stephen L. Reed  phone:  512.342.4036
Cycorp, Suite 100  fax:  512.342.4040
3721 Executive Center Drive  email:  [EMAIL PROTECTED]
Austin, TX 78731   web:  http://www.cyc.com
 download OpenCyc at http://www.opencyc.org
===




RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Ben Goertzel


As has been pointed out on this list before, the military IS interested in
AGI, and primarily for information integration rather than directly
weapons-related purposes.

See

http://www.darpa.mil/body/NewsItems/pdf/iptorelease.pdf

for example.

-- Ben G



 I can't imagine the military would be interested in AGI, by its
 very definition.  The military would want specialized AI's,
 constructed around a specific purpose and under their strict
 control.  An AGI goes against everything the military wants from
 its weapons and agents.  They train soldiers for a long time
 specifically to beat the GI out of them (har har, no pun
 intended) so that they behave in a predictable manner in a
 dangerous situation.


 And while I'm not entirely optimistic about the practicality of
 building ethics into AI's, I think we should certainly try, and
 that rules military funding right out.

 -Brad






Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Brad Wyble wrote:

 Under the ethical code you describe, the AGI would swat
 them like a bug with no more concern than you swatting a mosquito.


I did not describe an ethical code; I described two scenarios about a
human (myself), then suggested the non-bug-swatting scenario was
possible, analogically, for an AGI.



 All I'm trying to do is shift the focus for a few moments to our own
 ethical standards as people.  If we were put into the shoes of an
 AGI, would we behave well towards the inferior species?


I presume from the phrase 'If we were put into the shoes of an AGI' that
human morality and ethics would come along for the ride.  If that is
what you meant, then it depends on which human you pick as to what
happens.  I have observed both altruism and cruelty, obsession and
indifference in human behaviour toward other species.  It bears some
thinking about just exactly what one would do in such a situation... I
know I have often thought about it.



 Philip brings up the point that a community AGI's could possibly
 self-police.  I agree.


I don't.  Policing is only useful/meaningful within a community of
almost equal actors that have very little real power.  If the actors are
not almost equally powerful then you have the 'human and a bug'
scenario.  If the actors have a very large amount of power, then a
single 'transgression' could wipe us all out before any 'policing
action' could be initiated.



 Nor, would one presume, on an AGI's.  They might end up with it
 anyway.


I would not presume that so readily.  Taking it as a given that we are
discussing a Friendly AGI, I would say that there would be significant
utility in obtaining a great deal of power.  Not to 'Lord it over the
petty humans', but to protect them from both internal and external threats.


Michael Roy Ames





RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Stephen Reed
Daniel,

For a start look at the IPTO web page and links from:

http://www.darpa.mil/ipto/research/index.html

DARPA has a variety of offices which sponsor AI-related work, but IPTO is
now being run by Ron Brachman, the former president of the AAAI.  When I
listened to the talk he gave at Cycorp in December, he spoke of his strong
desire to fund AI, if we can tell a compelling story.  His budget could be
as much as $50 - $100 million per year - if he develops good research
programs that withstand the competitive pressure for funds from the other
DARPA offices.

Other government agencies fund work and we submit SBIR proposals when the
research objective sufficiently overlaps our core work.

-Steve

On Wed, 12 Feb 2003, Daniel Colonnese wrote:


 I believe that as evidence of AGI (e. g. software that can learn
 from reading) becomes widely known: (1) the military will provide
 abundant
 funding - possibly in excess of what commercial firms could do without
 a
 consortium  (2) public outcry will assure that military AGI development
 has civilian and academic oversight.

 Steve, Ben, do you have any gauge as to what kind of grants are hot
 right now or what kind of narrow AI projects with AGI implications have
 recently been funded through military agencies?

-- 
===
Stephen L. Reed  phone:  512.342.4036
Cycorp, Suite 100  fax:  512.342.4040
3721 Executive Center Drive  email:  [EMAIL PROTECTED]
Austin, TX 78731   web:  http://www.cyc.com
 download OpenCyc at http://www.opencyc.org
===




RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Ben Goertzel

 Steve, Ben, do you have any gauge as to what kind of grants are hot
 right now or what kind of narrow AI projects with AGI implications have
 recently been funded through military agencies?

The list would be very long.  Just look at the DARPA IPTO website for
starters...

http://www.darpa.mil/ipto/


  And while I'm not entirely optimistic about the practicality of
  building ethics into AI's, I think we should certainly try, and that
  rules military funding right out.

 Yeah, it seems like somewhat of a *moral compromise* to pursue narrow AI
 research funding with the hopes of doing work which may help to
 one day create AGI.  Or as Sartre said:

I don't agree that receiving military funding for specific purposes rules
out creating an ethical AGI, nor that doing narrow AI work is a moral
compromise.

For example, suppose one accepts military funding to create an AI
application aimed at computer network security.

Suppose one creates this application using parts of one's in-development AGI
codebase.  But, suppose one retains ownership of one's AGI codebase.

Where's the ethical dilemma here?  In the fact that, theoretically, the
military could take one's computer security code and repurpose it for
violent intents?  There is a bit of an ethical dilemma here, but it is a
narrow-AI ethical dilemma, not an AGI ethical dilemma.  Because one may
still train one's AGI oneself, using one's own ethical principles...

-- Ben

