Linas Vepstas wrote:
On Tue, Oct 02, 2007 at 01:20:54PM -0400, Richard Loosemore wrote:
When the first AGI is built, its first actions will be to make sure that nobody is trying to build a dangerous, unfriendly AGI.

Yes, OK, granted, self-preservation is a reasonable character trait.

After that point, the first friendliness of the first one will determine the subsequent motivations of the entire population, because they will monitor each other.

Um, why, exactly, are you assuming that the first one will be friendly?
The desire for self-preservation, by e.g. rooting out and exterminating
all (potentially unfriendly) competing AGI, would not be what I'd call
"friendly" behavior.

There's also a strong sense that winner-takes-all, or
first-one-takes-all, as the first one is strongly motivated,
by instinct for self-preservation, to make sure that no other
AGI comes to exist that could threaten, dominate or terminate it.

In fact, the one single winner, out of sheer loneliness and boredom,
might be reduced to running simulations a la Nick Bostrom's simulation
argument (!)

--linas

This is interesting, because you have put your finger on a common reaction to the "First AGI will take down the others" idea.

When I talk about this first-to-market effect, I *never* mean that the first one will "eliminate" or "exterminate" all the others, and I do not mean to imply that it would do so because it has motives akin to self-preservation, or because it does not want to be personally dominated, threatened (etc.) by some other AGI.

What I mean is that ASSUMING the first one is friendly (an assumption based on a completely separate line of argument), THEN it will be obliged, because of its commitment to friendliness, to immediately search the world for dangerous AGI projects and quietly ensure that none of them become a danger to humanity. There is absolutely no question of it doing this out of a desire for self-preservation, jealousy, feeling threatened, or any of those other motivations, because the most important part of the design of the first friendly AGI will be that it does not have those motivations.

Not only that, but it will not necessarily wipe out those other AGIs, either. If we value intelligent, sentient life, we may decide that the best thing to do with these other AGI designs, if they have reached the point of self-awareness, is to let them keep most of their memories while modifying them slightly, so that they can be transferred into the new, friendly design. To be honest, I do not think it likely that any others will be functioning at the level of self-awareness by that stage, but that is another matter.

So this would be a quiet-but-friendly modification of other systems, to put security mechanisms in place, not an aggressive act. This follows directly from the assumption of friendliness of the first AGI.

Some people have talked about aggressive takeovers. That is a completely different kettle of fish, which assumes the first one will be aggressive.

For reasons I have stated elsewhere, I think that *in* *practice* the first one will not be aggressive.



Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email