Bob Mottram wrote:
On 25/03/2008, Mark Waser <[EMAIL PROTECTED]> wrote:

You're thinking too small. The AGI will distribute itself. And money is likely to be:

* rapidly deflated,
* then replaced with a new, alternate currency that truly values
  talent and effort (rather than just playing with the money supply
  -- aka interest, commissions, inheritances, etc.),
* while everyone's basic needs (most particularly water, food,
  shelter, energy, education, and health care) are provided for free.

So your brilliant arbitrage to become rich is unlikely to be of much value just a few years later.



The arrival of smarter-than-human intelligence will bring about
changes which are hard to anticipate, and somehow I doubt that this
will mean that we all live in some kind of utopia.  The only
historical precedent I can think of is the emergence of Homo sapiens
and the effect it had upon the other human species living at the
time.  That must have been quite a revolution, because the new
species was able to manufacture many different types of tools and
therefore survive in environments which were previously
inaccessible, or perform more efficiently within existing ones.

There may be a period in which proto-AGIs are available and companies
can use them in "get-rich-quick" schemes of various kinds, radically
automating processes and jobs which were previously performed
manually.  But once the real deal arrives, even the captains of
industry are themselves likely to be overthrown.  Ultimately,
evolutionary forces will decide what happens, as has always been the
case.

Bob,

The problem with trying to decide what will happen by looking at
precedents is that none of them apply.

Consider.  The behavior of every species of higher animal is
governed by the design of its brain, and without exception evolution
has made sure that all creatures try to satisfy a set of selfish
goals.  It is
noticeable, of course, that the more selfish, aggressive and intelligent
the species, the more successful it has been.  The reason for this
success is evolutionary pressure:  individuals competing with one
another, and species competing with one another.  The driver of this
process is not a Supreme Designer, but random mutation.

When real AGI systems are built, there is no reason to assume that
their behavior will be determined by evolutionary pressures of this
sort.  Of course it is always *possible* that evolution will play a
role (we can imagine scenarios in which it does), but it is by no
means certain that this is the way things will go.  Unlike in the
rise of biological life, this time there really are Designers
involved.

Also, there has never been a situation in which the intelligence of
a creature was so high that it could rebuild its own intelligence,
thereby increasing its capabilities to an arbitrary degree.

Three factors will govern how the first AGI behaves.  First, there
will be a strong incentive to build the first AGI as a
non-aggressive, non-selfish creature.  Second, the best way to
ensure Friendliness will be to build it with motivations that are
closely sympathetic to our own goals and aspirations - to make it
feel like one of us.  Third, there will also be a strong incentive
to make sure that this type of AGI is the only type, because it
would be pointless to have a Friendly AGI in one place while
allowing anyone and everyone to build whatever other types of AGI
they feel like building.

The net result of these three factors is that the first AGI will
probably be used as the *only* effective AGI.  That does not mean
there will be only one intelligence, but it does mean that the
design will stay the same, that other non-friendly designs will not
be allowed, and that if there are many AGIs they will be closely
connected, working as a family of very close sisters rather than as
a competing species.  In fact, the most accurate way to think of a
situation in which non-proliferation is being enforced is to imagine
one main AGI plus a very large number of drones.

But if this is the way things develop at first, this situation will
become locked in (in the same way that the rotation direction of our
clocks became locked in at an early stage of their development).

If this lock-in really is the most likely course of events, then it
would make the future extremely predictable indeed.  Suppose we were
to set up these first AGIs to be broadly empathic to human beings,
with no preference for empathizing with any one individual human,
but having instead a species-wide feeling of belonging and a desire
to help us achieve our collective aspirations.  In that case, if we
were to sit down today and write out a vision for what we want the
future to be like (modulo some fine details that can be left to
develop by themselves without destabilizing the overall design),
that collective plan is exactly what the AGIs would try to build.

And, as several people have just noted, as soon as real AGIs arrive,
they will have the power to make changes on a stupendous scale.

The bottom line?  Instead of being unpredictable, the arrival of an AGI
system could, for the first time in human history, cause a sequence of
changes that we could specify in precise detail right now.

The "could" in that sentence is important.  I believe that there are
many arguments that imply this "could" is actually an "almost certainly
will".  Not the least of these arguments is that when people understand
that this is a real possibility, they will lobby hard to make it happen,
and this desire will make it more likely.

So I disagree with you when you say "The arrival of
smarter-than-human intelligence will bring about changes which are
hard to anticipate."

I think you are right, though, that there could be an intermediate
period in which "proto-AGI" systems are a nuisance.  But these
proto-AGI systems will really be only souped-up Narrow-AI systems,
so I believe their potential for mischief will be strictly limited.

Richard Loosemore
