Matt Mahoney wrote:
> --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> Matt Mahoney wrote:
>>> --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
>>>> The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable assumptions. One example is the idea that there will be a situation in which many superintelligent AGIs exist in the world, all competing with each other for power in a souped-up version of today's arms race(s). This is extraordinarily unlikely: the speed of development would be such that one would have an extremely large time advantage (head start) over the others, and during that time it would merge the others with itself, to ensure that there was no destructive competition. Whichever way you try to think about this situation, the same conclusion seems to emerge.
>>> As a counterexample, I offer evolution. There is good evidence that every living thing evolved from a single organism: all DNA is twisted in the same direction.
>> I don't understand how this relates to the above in any way, never mind how it amounts to a counterexample.

> Because recursive self improvement is a competitive evolutionary process even if all agents have a common ancestor.

As explained in the parallel post: this is a non sequitur.

> An agent making modified copies of itself cannot be sure that the copies will be better adapted to future environments

Adaptation? What adaptation? See the parallel post.

> because the parent cannot perfectly predict those environments. The process must therefore be experimental. Evolution will favor

Evolution will not apply. See the parallel post.

> agents that are better at acquiring computational resources

Nonsense. Only if 'acquiring more computational resources' conveyed an advantage in a competitive environment. Even if there were some competition (which there would not be), there is no reason to believe that acquiring more computational resources would be the measure of success.

> regardless of what initial goals we give them. Maybe the first million generations will be friendly, but that might only be a few hours.

Everything you say is built on wild and completely unexamined assumptions, all of which (on examination) turn out to be deeply implausible.



Richard Loosemore
