RE: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Ben Goertzel

 Ben,

 In reply to my para saying :

  if the one AGI goes feral the rest of us are going to need to access
  the power of some pretty powerful AGIs to contain/manage the feral
  one. Humans have the advantage of numbers but in the end we may not
  have the intellectual power or speed to counter an AGI that is
  actively setting out to threaten humans.

 you said:

  I don't see why multiple superintelligent AGI's are safer than a single
  one

 I thought the previous para gave at least one good reason.  :)

That paragraph gave one possible dynamic in a society of AGI's, but there
are many many other possible social dynamics, including those that lead to
mobs of rampaging violent AGI's.  What's the probability distribution across
the different AGI social dynamic patterns???


  Certainly among human societies, the only analogue we have,

 Human societies are NOT the only analogues we have for
 understanding GI social behaviour.  There are lots of social animals,
 and chimpanzees and bonobos are very closely related to humans
 (they are more closely related to humans than they are to gorillas,
 orangutans and gibbons).

Well, OK, but I reckon ape societies are worse analogues for AGI societies
than human societies are...

 So I think the way to approach the issue is to ask what behavioural
 drivers we need to generate in AGIs so that their collective
 behaviour tends strongly towards peaceful, or better still peace-making,
 behaviour.

Sure... I'm just not so sure that there's any benefit to the society-of-AGIs
approach as opposed to the one-big-AGI approach.

ben



RE: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Philip Sutton



Ben,


 Ben: That paragraph gave one possible dynamic in a society of AGI's,
 but there are many many other possible social dynamics

Of course. What you say is quite true. But so what?

Let's go back to that one possible dynamic. Can't you bring yourself to
agree that if a one-and-only super-AGI went feral then humans would be at
a greater disadvantage relative to it than if there was more than one AGI
around and the humans could call on the help of one or more of the other
AGIs??

Forget about all the other possible hypotheticals. Is my assessment of the
specific scenario above about right or not - doesn't it have some element
of common sense about it?

If there is any benefit in having more than one AGI around in the case
where an AGI does go feral, then your comment "I'm just not so sure that
there's any benefit to the society-of-AGIs approach as opposed to the
one-big-AGI approach" no longer holds as an absolute.

It then gets back to: having a society of AGIs might be an advantage in
certain circumstances, but having more than one AGI might have the
following downsides. At this point a balanced risk/benefit assessment can
be made (not definitive of course, since we haven't seen super-intelligent
AGIs in operation yet). But at least we've got some relevant issues on the
table to think about.

Cheers, Philip





Re: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Philip Sutton



Hi Eliezer,


 This does not follow. If an AI has a P chance of going feral, then a
 society of AIs may have P chance of all simultaneously going feral

I can see your point but I don't agree with it.

If General Motors churns out 100,000 identical cars with all the same
characteristics and potential flaws, they will not all fail at the same
instant in time. Each of them will be placed in a different operating
environment and the failures will probably spread over a bell-curve-style
distribution.

If we apply this logic to AGIs, we have a chance to enlist the support of
most of the AGIs to 'recall' the population and take preventive action to
avoid failure, and we will have their help in dealing with the AGIs that
have already failed.
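
As a rough back-of-the-envelope sketch of that intuition (the numbers are
made up, and it only holds to the extent that the failures really are
independent), in Python:

    # Toy sketch: if each of n AGIs independently has probability p of
    # going feral in some period, the chance that *all* of them go feral
    # at once is p**n, which shrinks very fast as n grows.
    p = 0.05  # purely hypothetical per-AGI chance of going feral

    for n in (1, 2, 5, 10):
        print(n, p ** n)
    # n=1  -> 0.05
    # n=10 -> ~9.8e-14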


Cheers, Philip





RE: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Ben Goertzel


 Ben Goertzel wrote:
 
  Yes, I see your point now.
 
  If an AI has a  percentage p chance of going feral, then in the case of
  a society of AI's, only p percent of them will go feral, and the odds
  are that other AI's will be able to stop it from doing anything bad.
  But in the case of only one AI, then there's just a p% chance of it
  going feral, without much to do about it...
 
  Unknowns are the odds of AI's going feral and supersmart at the same
  time, and the effects of society-size on the probability of ferality...
 
  But you do have a reasonable point, I'll admit, and I'll think about it
  more...

 This does not follow.  If an AI has a P chance of going feral, then a
 society of AIs may have P chance of all simultaneously going feral - it
 depends on how much of the probability is independent among
 different AIs.

yeah, this dependency is what I meant by my awkward and hasty phrase "the
effects of society-size on the probability of going feral".  (I should have
said sociodynamics, not society-size.)

This is why Philip's point is reasonable but not well-demonstrated...
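
To put the dependency issue in rough numbers (a toy sketch with made-up
figures, not a model of any real system): if most of the risk comes from a
flaw shared by every AI, adding more AIs barely changes the odds that they
all go feral together.

    # Toy sketch: probability that ALL of n AIs go feral together when
    # part of the risk is a shared, common-cause flaw and part is
    # independent per-AI noise.  Both figures are made up for illustration.
    p_shared = 0.04  # chance of a flaw common to every AI (e.g. in the morality theory)
    p_indep = 0.01   # additional independent per-AI failure chance

    def p_all_feral(n):
        # all go feral if the shared flaw fires, or if every AI happens
        # to fail independently at the same time
        return p_shared + (1 - p_shared) * p_indep ** n

    for n in (1, 2, 5, 10):
        print(n, p_all_feral(n))
    # n=1  -> 0.0496
    # n=10 -> ~0.04: the floor set by the shared risk never goes away

Under full independence the all-feral probability collapses toward zero as
n grows; with a shared flaw it stays pinned near p_shared, which is
Eliezer's point.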

   Actually, for the most worrisome factors, such as theoretical flaws in
 the theory of AI morality, I would expect the risk factor to be almost
 completely shared among all AIs.  Furthermore, risk factors stemming from
 divergent rates of self-enhancement or critical thresholds in
 self-enhancement may not be at all improved by multiply copying or
 multiply diverging AIs if AI improvement is less than perfectly
 synchronized.

All true.  There are a lot of uncertainties here.

My guess is that I'm gonna want to build one big Novamente mind and not a
society of smaller ones.  But I'll revisit Philip's points very carefully in
a few years when the time comes to think about such things seriously...

Ben G




RE: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Ben Goertzel

Eliezer is certainly correct here -- your analogy ignores probabilistic
dependency, which is crucial.

Ben

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Behalf Of Eliezer S. Yudkowsky
 Sent: Tuesday, March 04, 2003 1:33 AM
 To: [EMAIL PROTECTED]
 Subject: Re: [agi] Why is multiple superintelligent AGI's safer than a
 single AGI?


 Philip Sutton wrote:
  Hi Eliezer,
 
   This does not follow.  If an AI has a P chance of going feral, then a
   society of AIs may have P chance of all simultaneously going feral
 
  I can see your point but I don't agree with it.
 
  If General Motors churns out 100,000 identical cars with all the same
  characteristics and potential flaws, they will *not* all fail at the
  same instant in time.  Each of them will be placed in a different
  operating environment and the failures will probably spread over a
  bell-curve-style distribution.

 That's because your view of this problem has automatically factored out
 all the common variables.  All GM cars fail when dropped off a cliff.
 All GM cars fail when crashed at 120 mph.  All GM cars fail on the moon,
 in space, underwater, in a five-dimensional universe.  All GM cars are,
 under certain circumstances, inferior to telecommuting.

 How much of the risk factor in AI morality is concentrated into such
 universals?  As far as I can tell, practically all of it.  Every AI
 morality failure I have ever spotted has been of a kind where a society
 of such AIs would fail in the same way.

 The bell-curve failures to which you refer stem from GM making a
 cost-performance tradeoff.  The bell-curve distributed failures, like the
 fuel filter being clogged or whatever, are *acceptable* failures, not
 existential risks.  It therefore makes sense to accept a probability X of
 failure, for component Q, which can be repaired at cost C when it fails;
 and when you add up all those probability factors you end up with a bell
 curve.  But if the car absolutely had to work, you would be minimizing X
 like hell, to the greatest degree allowed by your *design ability and
 imagination*.  You'd use a diamondoid fuel filter.  You'd use three of
 them.  You wouldn't design a car that had a single point of failure at
 the fuel filter.  You would start seriously questioning whether what you
 really wanted should be described as a car.  Which in turn would shift
 the most probable cause of catastrophic failure away from bell-curve
 probabilistic failures and into outside-context failures of imagination.

 --
 Eliezer S. Yudkowsky  http://singinst.org/
 Research Fellow, Singularity Institute for Artificial Intelligence





Re: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Philip Sutton
Eliezer,

 That's because your view of this problem has automatically factored
 out all the common variables.  All GM cars fail when dropped off a
 cliff.  All GM cars fail when crashed at 120 mph.  All GM cars fail on
 the moon, in space, underwater, in a five-dimensional universe.  All
 GM cars are, under certain circumstances, inferior to telecommuting. 

Good point.

Not all failures will be of this sort, though, so the group strategy is still
useful for at least a subset of the failure cases.

Seems to me then that safety lies in a combination of all our best safety 
factors:

-   designing all AGIs to be as effectively friendly as possible - as if
we had a one-shot chance of getting it right and we can't afford the
risk of failure; and AS WELL

-   developing quite a few different types of AGI architecture, so that
the risk of sharing the same class of critical error is reduced; and
AS WELL

-   having a society of AGIs with multiples of each different type -
each uniquely trained - so that the degree of sameness, and hence the
risk of correlated failure, is reduced.

Cheers, Philip
