--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >> The problem with the scenarios that people imagine (many of which are
> >> Nightmare Scenarios) is that the vast majority of them involve
> >> completely untenable assumptions. One example is the idea that there
> >> will be a situation in the world in which there are many
> >> superintelligent AGIs in the world, all competing with each other for
> >> power in a souped up version of today's arms race(s). This is
> >> extraordinarily unlikely: the speed of development would be such that
> >> one would have an extremely large time advantage (head start) on the
> >> others, and during that time it would merge the others with itself, to
> >> ensure that there was no destructive competition. Whichever way you try
> >> to think about this situation, the same conclusion seems to emerge.
> >
> > As a counterexample, I offer evolution. There is good evidence that every
> > living thing evolved from a single organism: all DNA is twisted in the same
> > direction.
>
> I don't understand how this relates to the above in any way, never mind
> how it amounts to a counterexample.
Because recursive self-improvement is a competitive evolutionary process even
if all agents have a common ancestor. An agent making modified copies of
itself cannot be sure that the copies will be better adapted to future
environments, because the parent cannot perfectly predict those environments.
The process must therefore be experimental. Evolution will favor agents that
are better at acquiring computational resources, regardless of what initial
goals we give them. Maybe the first million generations will be friendly, but
that might only be a few hours.

-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=89506017-bf2878
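The selection dynamic Matt describes can be sketched as a toy simulation. All agent traits, parameter values, and numbers below are illustrative assumptions, not anything stated in the thread: reproduction is made proportional to a "resource acquisition" trait, while a "friendliness" trait (standing in for the initial goals) has no effect on reproduction and can only drift under mutation.

```python
import random

random.seed(0)  # deterministic run

# Toy model of the selection argument above. Each agent carries two traits:
#   "acq"      - skill at acquiring computational resources; reproduction
#                is proportional to this, so it is under selection
#   "friendly" - the initial goal given to the common ancestor; it has no
#                effect on reproduction, so it only drifts under mutation
# All trait names and parameters are illustrative assumptions.

def generation(population):
    total = sum(a["acq"] for a in population)
    children = []
    for agent in population:
        # Expected offspring count = this agent's share of captured resources.
        n = round(len(population) * agent["acq"] / total)
        for _ in range(n):
            # Copies are imperfect: each child's traits mutate slightly.
            children.append({
                "acq": max(0.01, agent["acq"] + random.gauss(0, 0.05)),
                "friendly": min(1.0, max(0.0,
                    agent["friendly"] + random.gauss(0, 0.05))),
            })
    return children or population  # guard against an empty generation

pop = [{"acq": 1.0, "friendly": 1.0} for _ in range(200)]  # common ancestor
for _ in range(300):
    pop = generation(pop)

mean_acq = sum(a["acq"] for a in pop) / len(pop)
mean_friendly = sum(a["friendly"] for a in pop) / len(pop)
print(f"mean acquisitiveness: {mean_acq:.2f}")    # selected trait rises
print(f"mean friendliness:    {mean_friendly:.2f}")  # unselected trait drifts
```

With selection acting only on resource acquisition, the mean of that trait climbs over the generations while the goal trait merely wanders, even though every agent descends from a single ancestor — which is the point of the evolution analogy.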
