Matt Mahoney wrote:
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

Matt Mahoney wrote:
On Feb 3, 2008 10:22 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
My argument was (at the beginning of the debate with Matt, I believe)
that, for a variety of reasons, the first AGI will be built with
peaceful motivations.  Seems hard to believe, but for various technical
reasons I think we can make a very powerful case that this is exactly
what will happen.  After that, every other AGI will be the same way
(again, there is an argument behind that).  Furthermore, there will not
be any "evolutionary" pressures going on, so we will not find that (say)
the first few million AGIs are built with perfect motivations, and then
some rogue ones start to develop.
In the context of a distributed AGI, like the one I propose at
http://www.mattmahoney.net/agi.html, this scenario would require the first
AGI to take the form of a worm.
That scenario is deeply implausible, and you can only continue to advertise it because you ignore all of the arguments that I and others have given against it, on many occasions.

You repeat this line of black propaganda on every occasion you can, yet you refuse to directly address the many, many reasons why it is nonsense.

Why?

Perhaps "worm" is the wrong word.  Unlike today's computer worms, it would be
intelligent, it would evolve, and it would not necessarily be controlled by or
serve the interests of its creator.  Whether or not it is malicious would
depend on the definitions of "good" and "bad", which depend on who you ask.  A
posthuman might say the question is meaningless.

So far, this just repeats the same nonsense: your scenario is based on unsupported assumptions.



If I understand your proposal, it is:
1. The first AGI to achieve recursive self improvement (RSI) will be friendly.

For a variety of converging reasons, yes.


2. "Friendly" is hard to define, but because the AGI is intelligent, it would
know what we mean and get it right.

No, not correct. "Friendly" is not hard to define if you build the AGI with a full-fledged motivation system of the "diffuse" sort I have advocated before. To put it in a nutshell: the AGI can be given a primary motivation that involves empathy with the human species as a whole, and what this does in practice is keep the AGI locked in sync with the general desires of the human race.

The question of "knowing what we mean by 'friendly'" is not relevant, because that kind of "knowing" is explicit declarative knowledge, whereas the friendliness resides in the structure of the motivation system itself, not in any declarative statements the AGI holds.


3. The goal system is robust because it is described by a very large number of
soft constraints.

Correct. The motivation system, to be precise, depends for its stability on a large number of interconnections, so trying to divert it from its main motivation would be like unscrambling an egg.
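
To make that concrete, here is a toy sketch of the kind of stability I mean. It is purely illustrative (a standard Hopfield-style relaxation network, not a description of the actual motivation-system architecture): a single state is held in place by roughly twenty thousand weak pairwise constraints, and even after a third of the units are flipped, relaxation pulls the system straight back.

# Toy sketch only: a Hopfield-style network in which one "motivation"
# pattern is stored as a large number of weak, mutually reinforcing
# pairwise constraints.  Perturb a sizeable fraction of the units and
# let the network relax; it falls back into the stored state.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                    # number of units
pattern = rng.choice([-1, 1], size=n)      # the state to be protected

# Hebbian weights: every pair of units contributes one soft constraint
# (about n*(n-1)/2 = 19,900 of them for n = 200).
W = np.outer(pattern, pattern) / n
np.fill_diagonal(W, 0)

def relax(state, sweeps=20):
    """Asynchronous relaxation: each unit repeatedly settles so as to
    satisfy the net pull of all its soft constraints."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Flip 30% of the units (i.e. try to "divert" the stored state).
corrupted = pattern.copy()
flipped = rng.choice(n, size=int(0.3 * n), replace=False)
corrupted[flipped] *= -1

recovered = relax(corrupted)
print("agreement after perturbation:", np.mean(corrupted == pattern))   # ~0.70
print("agreement after relaxation:  ", np.mean(recovered == pattern))   # ~1.00

No single constraint matters; it is the redundancy of the whole set that makes the state an attractor, which is the sense in which diverting the system is like unscrambling an egg.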


4. The AGI would not change the motivations or goals of its offspring because
it would not want to.

Exactly. Not only would it not change them, it would take active steps to ensure that any other AGI has exactly the same safeguards in its system that it (the mother) has.


5. The first AGI to achieve RSI will improve its intelligence so fast that all
competing systems will be left far behind.  (Thus, a "worm").

No, not thus a worm. It will simply be an AGI. The concept of a computer worm is so far removed from this AGI that it is misleading to recruit the term.

6. RSI is deterministic.

Not correct.

The factors that make a collection of free-floating atoms (in a zero-gravity environment) tend to coalesce into a sphere are not "deterministic" in any relevant sense of the term. A sphere forms because a RELAXATION of all the factors involved ends up in the same shape every time.

If you mean any other sense of "deterministic" then you must clarify.
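
To see the sphere point in miniature, here is a small toy of the same relaxation idea (an illustration only, not a physics simulation): scatter points at random and repeatedly nudge each one toward the mean distance from the centre of mass. Whatever ragged blob you start with, you end up with the same ring every time.

# Toy sketch only: relaxation drives very different starting
# configurations to the same final shape.
import numpy as np

def relax_to_ring(points, steps=500, rate=0.1):
    pts = points.copy()
    for _ in range(steps):
        centre = pts.mean(axis=0)
        offsets = pts - centre
        radii = np.linalg.norm(offsets, axis=1, keepdims=True)
        target = radii.mean()
        # Nudge each point's radius toward the common mean radius.
        pts = centre + offsets * (1 + rate * (target / radii - 1))
    return pts

rng = np.random.default_rng(1)
for trial in range(3):
    # A differently stretched random blob each time.
    blob = rng.normal(size=(300, 2)) * rng.uniform(0.5, 3.0, size=2)
    final = relax_to_ring(blob)
    r = np.linalg.norm(final - final.mean(axis=0), axis=1)
    print(f"trial {trial}: relative spread of radii = {r.std() / r.mean():.4f}")

The final shape is fixed by the relaxation dynamics, not by the initial conditions, and nothing useful is added by calling that process "deterministic".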


My main point of disagreement is 6.  Increasing intelligence requires
increasing algorithmic complexity.  We know that a machine cannot output a
description of another machine with greater complexity.  Therefore
reproduction is probabilistic and experimental, and RSI is evolutionary.  Goal
reproduction can be very close but not exact.  (Although the AGI won't want to
change the goals, it will be unable to reproduce them exactly because goals
are not independent of the rest of the system).  Because RSI is very fast,
goals can change very fast.  The only stable goals in evolution are those that
improve fitness and reproduction, e.g. efficiency and acquisition of computing
resources.

Which part of my interpretation or my argument do you disagree with?

The last paragraph! To my mind, it is a wild, free-wheeling non sequitur that ignores all the parameters laid down in the preceding paragraphs:


"Increasing intelligence requires increasing algorithmic complexity."

If its motivation system is built the way that I describe it, this is of no relevance.


"We know that a machine cannot output a description of another machine with greater complexity."

When would it ever need to do such a thing? This factoid, plucked from computational theory, is not about "description" in the normal scientific and engineering sense; it is about containing a complete copy of the larger system inside the smaller. I, a mere human, can "describe" the sun and its dynamics quite well, even though the sun is a system far larger and more complex than myself. In particular, I can give you some beyond-reasonable-doubt arguments to show that the sun will retain its spherical shape for as long as it is on the Main Sequence, without *ever* changing its shape to resemble Mickey Mouse. Its shape is stable in exactly the same way that an AGI motivation system would be stable, in spite of the fact that I cannot "describe" this large system in the strict, computational sense in which some systems "describe" other systems.
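
For the record, the theorem behind that factoid says only this (with K the Kolmogorov complexity relative to a fixed universal machine U):

    if U(p) halts with output y, then K(y) <= |p| + c,

where the constant c does not depend on p or y. It bounds how short a program can be if it must emit a complete, literal specification of y; it says nothing about whether a system can model, predict or constrain the behaviour of something more complex than itself, which is the only sense of "describe" that matters here.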



"Therefore reproduction is probabilistic and experimental, and RSI is evolutionary."

Completely wrong, because "evolution" presupposes a search for fitness by random mutation in a population of competing individuals, and none of those things (fitness function, random mutation, competition) applies in the terms set out above.


The rest of what you say is dependent on the above, so I don't need to deal with it separately.




Richard Loosemore
