> Ben Goertzel wrote:
> >
> > Yes, I see your point now.
> >
> > If an AI has a probability p of going feral, then in the case of
> > a society of AIs, only a fraction p of them will go feral, and the odds
> > are that the other AIs will be able to stop them from doing anything bad.
> > But in the case of only one AI, there's just a probability p of it
> > going feral, without much to be done about it...
> >
> > Unknowns are the odds of AIs going feral and supersmart at the same
> > time, and the effects of society-size on the probability of ferality...
> >
> > But you do have a reasonable point, I'll admit, and I'll think about it
> > more...
>
> This does not follow.  If an AI has a P chance of going feral, then a
> society of AIs may have P chance of all simultaneously going feral - it
> depends on how much of the probability is independent among
> different AIs.

Yeah, this dependency is what I meant by my awkward and hasty phrase "the
effects of society-size on the probability of ferality."  (I should have
said sociodynamics, not society-size.)

This is why Philip's point is reasonable but not well-demonstrated...
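
Just to make the dependency concrete, here's a quick toy calculation (the
numbers are invented, of course, not estimates of anything):

# Toy comparison: chance that ALL AIs in a society go feral at once,
# under the two extreme assumptions about how correlated the risk is.
p = 0.1    # hypothetical per-AI chance of going feral
n = 10     # hypothetical number of AIs in the society

p_all_if_independent = p ** n   # failures uncorrelated across AIs: ~1e-10
p_all_if_shared = p             # one flaw common to all of them: still 0.1

print(p_all_if_independent, p_all_if_shared)

Somewhere between those two extremes is where the real answer lives, and
that gap is exactly the sociodynamics question.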

>   Actually, for the most worrisome factors, such as theoretical flaws in
> the theory of AI morality, I would expect the risk factor to be almost
> completely shared among all AIs.  Furthermore, risk factors stemming from
> divergent rates of self-enhancement or critical thresholds in
> self-enhancement may not be at all improved by multiply copying or
> multiply diverging AIs if AI improvement is less than perfectly
> synchronized.

All true.  There are a lot of uncertainties here.
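
To illustrate why the shared-flaw worry is the dominant one: even a small
shared risk component puts a floor under the joint probability that no
amount of redundancy removes.  Another toy sketch, again with made-up
numbers:

# Toy mixture model: a flaw shared by all AIs (chance s) plus an
# independent per-AI failure mode (chance r each), for n AIs.
s, r, n = 0.05, 0.05, 100

# Every AI goes feral if the shared flaw fires, or (failing that) if all
# n independent failures happen to coincide.
p_all_feral = s + (1 - s) * r ** n
print(p_all_feral)   # ~0.05 -- floored by the shared term, however big n gets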

My guess is that I'm gonna want to build one big Novamente mind and not a
society of smaller ones.  But I'll revisit Philip's points very carefully in
a few years when the time comes to think about such things seriously...

Ben G

