Ben,
In reply to my para saying :
if the one AGI goes feral the rest of us are going to need to access
the power of some pretty powerful AGIs to contain/manage the feral
one. Humans have the advantage of numbers but in the end we may not
have the intellectual power or speed to counter
Ben,
Ben: That paragraph
gave one possible dynamic in a society of AGI's,
but there are
many many other possible social dynamics
Of course. What you say is quite
true. But so what?
Let's go back to that one possible
dynamic. Can't you bring yourself to
agree that if a one-and-only
Hi Eliezer,
This does not
follow. If an AI has a P chance of going feral, then a
society of AIs
may have P chance of all simultaneously going feral
I can see you point but I don't agree
with it.
If General Motors churns out 100,000
identical cars with all the same
charcteristics
Ben Goertzel wrote:
Yes, I see your point now.
If an AI has a percentage p chance of going feral, then in the case of
a society of AI's, only p percent of them will go feral, and the odds
are that other AI's will be able to stop it from doing anything bad.
But in the case of only
: [agi] Why is multiple superintelligent AGI's safer than a
single AGI?
Philip Sutton wrote:
Hi Eliezer,
This does not follow. If an AI has a P chance of going feral, then a
society of AIs may have P chance of all simultaneously going feral
I can see you point but I don't agree
Eliezer,
That's because your view of this problem has automatically factored
out all the common variables. All GM cars fail when dropped off a
cliff. All GM cars fail when crashed at 120 mph. All GM cars fail on
the moon, in space, underwater, in a five-dimensional universe. All
GM cars