Randall Randall wrote:
On Jan 24, 2008, at 10:25 AM, Richard Loosemore wrote:
Matt Mahoney wrote:
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable assumptions. One example is the idea that there will be a situation in which many superintelligent AGIs exist in the world, all competing with each other for power in a souped-up version of today's arms race(s). This is extraordinarily unlikely: the speed of development would be such that one would have an extremely large time advantage (head start) over the others, and during that time it would merge the others with itself, to ensure that there was no destructive competition. Whichever way you try to think about this situation, the same conclusion seems to emerge.
As a counterexample, I offer evolution. There is good evidence that every living thing evolved from a single organism: all DNA is twisted in the same direction.

I don't understand how this relates to the above in any way, never mind how it amounts to a counterexample.

If you're actually arguing against the possibility of more than one individual superintelligent AGI, then you need to either explain how such an individual could maintain coherence over indefinitely long delays (speed of light) or just say up front that you expect magic physics.

If you're arguing that even though individuals will emerge, there will be no evolution, then Matt's counterexample applies directly.

I was talking about early development of AGI on this planet, and I was specifically addressing the idea (frequently repeated) that there will be a phase when all kinds of groups will separately develop AGIs that make it to full human+ intelligence. The assumption attached to this idea is that these AGIs will each obey their own imperatives, working in a competitive way for themselves or their owners, thereby landing us in a situation where these things would be duking it out with one another.

That is not the same as the situation you raise, which is the question of what comes much later when the AGI(s) on this planet (if there are any) start moving outward to other bodies.

1) Considering my scenario first, the argument rests on (A) how fast the AGIs will develop, and (B) whether they will be driven by the same forces that lead to evolutionary pressure.

A) Development curve. In the case of all human-driven arms races, the curve of development is driven by intelligences that are all at approximately the same level (i.e. humans), and feeding on roughly the same pool of knowledge. Because of this, the development curves are very close to one another and have roughly the same slope: nobody ever gets a "killer advantage" that lets them overcome all competition in one sudden coup.

However, in the case of AGI development the situation is completely different, because the intelligences that are the drivers of technological progress are not all at the same level. If country or organization A gets an AGI program operating five years before B gets theirs started, and if the program yields an AGI that starts to go superintelligent over the course of a few months in (say) 2010, then the rival B program is rendered completely irrelevant if its own peak is not due to occur until a few years later: by the time A has gone through its peak, the drivers of its technology will be >1000 times faster than B's, so long before B can catch up, A's AGI system will quietly take over B's program. This is all to do with the shape of the development curve: a sudden spike to 1000x intelligence is something that has NEVER occurred in the history of human arms races.
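To make the arithmetic behind that claim concrete, here is a minimal toy sketch (my own illustrative numbers, not a model of anything real): if the intelligence of the drivers of progress doubles every six months once a program takes off, then a five-year head start turns into a roughly 1000x capability gap that B never closes. The doubling time, the head start, and the capability function are all assumptions made purely for illustration.

def capability(years_since_start, doubling_time=0.5):
    """Relative intelligence of the drivers of progress, doubling every doubling_time years (assumed)."""
    if years_since_start < 0:
        return 1.0  # baseline human-level intelligence before the program takes off
    return 2 ** (years_since_start / doubling_time)

head_start = 5.0  # years by which A's program precedes B's (assumed)
for t in range(0, 11):
    a = capability(t)               # project A starts at year 0
    b = capability(t - head_start)  # project B starts five years later
    print(f"year {t:2d}: A/B capability ratio = {a / b:,.0f}x")

The ratio settles at about 1000x and stays there: B's curve has exactly the same shape, but by the time it begins to climb, the system driving A's progress is already doing the improving.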

There are many other arguments that bear on the question of what the first AGI will look like, so for the sake of brevity I will just sketch what I believe to be the conclusion from all those other arguments: the simplest AGI design will be the one that gets there first, and the simplest design that is actually capable of understanding the design of intelligent systems (an absolute prerequisite for an AGI to recursively self-improve) is one that has the most balanced, human-empathic motivational system. The conclusion is that the first AGI will almost certainly be one that is in tune with human motivations in a broad-based way ... willingly locked into a state in which its morals and desires (and everything else that matters for the friendliness question) are in sync with those of the human species as a whole. As a result, this first AGI will naturally move to ensure that other AGI projects do not yield dangerous AGI systems.

B) Evolution. For evolutionary pressure to manifest itself, there are some prerequisites. The individuals must compete for resources according to some criterion that captures their degree of success in this competition. The individuals must have varying degrees of success, with the less successful ones being weeded out and replaced by new individuals that inherit the design of the more successful parents.

Both of these are completely absent from the AGI scenario. The AGI is not one of a large population; there is no competition; there is no mechanism for weeding out those who fail a competition. This is all trivially obvious: when systems actively understand their own design, and set out to improve at the individual level, evolution simply does not apply. Naturally evolved creatures did not understand their design and make conscious decisions about possible improvements.
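To make those prerequisites concrete, here is a minimal sketch of the selection loop that would have to be operating for evolution to get any purchase; the fitness criterion and the numbers are made up purely for illustration. The point of the argument above is that none of these ingredients (a competing population, differential success on a shared criterion, selective replacement that inherits parental design) exist for a single self-modifying AGI.

import random

def fitness(design):
    """Stand-in success criterion (purely hypothetical)."""
    return -abs(design - 1.0)

# A population is required...
population = [random.uniform(0, 2) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)   # ...competing on a shared criterion...
    survivors = population[:10]                  # ...with the less successful weeded out...
    offspring = [parent + random.gauss(0, 0.05)  # ...and replaced by variants of successful parents.
                 for parent in survivors]
    population = survivors + offspring

print(f"mean design after selection: {sum(population) / len(population):.2f}")

Remove any one of those ingredients (the population, the shared criterion, the selective replacement) and the loop does nothing; that is the sense in which a single self-improving AGI is simply not subject to this process.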

Even if there were eventually a population of thousands or gazillions of AGIs, it still would not follow that they would evolve: they could all be built with no competitive instincts and only a desire to cooperate with one another as if they were separate parts of one entity. In the scenario that I just described, where there is only one AGI at the beginning, cooperation is the obvious outcome: the first AGI would understand the danger of competition and simply never engage in it.

The mistake that people often make is to assume that, just because there could be many AGIs at some time, they would therefore be competing with one another, living and dying and reproducing, and therefore evolving. In fact, this is the most unlikely scenario whatever happens. People who try to think about these issues today, I would argue, are having difficulty stretching their minds to get out of old ways of thinking.

2) Moving on to your question about how to maintain coherence across unconnectable worldlines: it matters hugely what kind of AGIs you are sending to the stars. If you start off assuming that you are sending aggressive, competitive AGIs outward from this planet, then you have prejudged the situation to ensure that there is evolution between them (eventually).

But again, there is no reason to do that. If the AGIs sent out from Earth are built from the cooperative design that (I have just argued) is the most likely one to be dominating the planet by then, then they will continue to be cooperative no matter how long they remain separated from the mother planet. It does depend on the mechanisms used to motivate the AGIs, but I have argued elsewhere that those mechanisms could be made so stable that I doubt there would be enough divergence in the lifetime of the universe for it to matter much.

But whether you are immediately convinced by that earlier argument or not, the important point is that the only way to arrive at a conclusion about competitive vs cooperative interstellar AGIs is to examine the question of what their motivations would be. Nobody can ignore that question and just assert that there *must* be forcefully maintained coherence across the stars.




Richard Loosemore
