--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > On Feb 3, 2008 10:22 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >> My argument was (at the beginning of the debate with Matt, I believe)
> >> that, for a variety of reasons, the first AGI will be built with
> >> peaceful motivations.  Seems hard to believe, but for various technical
> >> reasons I think we can make a very powerful case that this is exactly
> >> what will happen.  After that, every other AGI will be the same way
> >> (again, there is an argument behind that).  Furthermore, there will not
> >> be any "evolutionary" pressures going on, so we will not find that (say)
> >> the first few million AGIs are built with perfect motivations, and then
> >> some rogue ones start to develop.
> > 
> > In the context of a distributed AGI, like the one I propose at
> > http://www.mattmahoney.net/agi.html this scenario would require the
> > first AGI to take the form of a worm.
> 
> That scenario is deeply implausible - and you can only continue to 
> advertise it because you ignore all of the arguments I and others have 
> given, on many occasions, concerning the implausibility of that scenario.
> 
> You repeat this line of black propaganda on every occasion you can, but 
> on the other hand you refuse to directly address the many, many reasons 
> why that black propaganda is nonsense.
> 
> Why?

Perhaps "worm" is the wrong word.  Unlike today's computer worms, it would be
intelligent, it would evolve, and it would not necessarily be controlled by or
serve the interests of its creator.  Whether or not it is malicious would
depend on the definitions of "good" and "bad", which depend on who you ask.  A
posthuman might say the question is meaningless.

If I understand your proposal, it is:
1. The first AGI to achieve recursive self-improvement (RSI) will be friendly.
2. "Friendly" is hard to define, but because the AGI is intelligent, it would
know what we mean and get it right.
3. The goal system is robust because it is described by a very large number of
soft constraints.
4. The AGI would not change the motivations or goals of its offspring because
it would not want to.
5. The first AGI to achieve RSI will improve its intelligence so fast that all
competing systems will be left far behind.  (Thus, a "worm").
6. RSI is deterministic.

My main point of disagreement is with 6.  Increasing intelligence requires
increasing algorithmic complexity, and a machine cannot output a description
of another machine whose algorithmic (Kolmogorov) complexity exceeds its own
by more than a constant.  Therefore reproduction must be probabilistic and
experimental, and RSI is evolutionary.  Goal reproduction can be very close
but never exact: although the AGI will not want to change its goals, it will
be unable to reproduce them exactly, because the goals are not independent of
the rest of the system.  And because RSI is very fast, goals can change very
fast.  The only stable goals under evolution are those that improve fitness
and reproduction, e.g. efficiency and the acquisition of computing resources.
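
To make the complexity step explicit (a sketch in my own notation, not taken
from anything earlier in this thread): let K denote Kolmogorov complexity, let
p be a complete description of the parent machine, and let d be the
description of the successor that it outputs.  If the parent computes d
deterministically, then

    K(d) \le K(p) + c

for a constant c that does not depend on p, because a shortest program for p,
followed by the fixed instruction "run p and print its output", is itself a
program for d.  Any real increase in complexity therefore has to come from
outside the parent's own description (random bits, or information gathered
from the environment), which is why I call reproduction experimental rather
than deterministic.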

Which part of my interpretation or my argument do you disagree with?



-- Matt Mahoney, [EMAIL PROTECTED]
