--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > Perhaps "worm" is the wrong word.  Unlike today's computer worms, it
> > would be intelligent, it would evolve, and it would not necessarily be
> > controlled by or serve the interests of its creator.  Whether or not it
> > is malicious would depend on the definitions of "good" and "bad", which
> > depend on who you ask.  A posthuman might say the question is
> > meaningless.
> 
> So far, this just repeats the same nonsense:  your scenario is based on 
> unsupported assumptions.

OK, let me use the term "mass extinction".  The first AGI that implements RSI
is so successful that it kills off all its competition.

> The question of "knowing what we mean by 'friendly'" is not relevant, 
> because this kind of "knowing" is explicit declarative knowledge.

I can accept that an AGI can have empathy toward humans, although no two
people will agree exactly on what this means.

> > 6. RSI is deterministic.
> 
> Not correct.

This is the only point where we disagree, and my whole argument depends on it.

> The factors that make a collection of free-floating atoms (in a
> zero-gravity environment) tend to coalesce into a sphere are not
> "deterministic" in any relevant sense of the term.  A sphere forms 
> because a RELAXATION of all the factors involved ends up in the same 
> shape every time.
> 
> If you mean any other sense of "deterministic" then you must clarify.

I mean it in the sense that if RSI were deterministic, then a parent AGI
could predict its child's behavior in any given situation.  If the parent
already knew as much as the child, or had the capacity to know everything
the child could know, then what would be the point of RSI?
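
To make that concrete, here is a minimal Python sketch (my own
illustration, not anything either of us has written; child_policy and
parent_predicts are hypothetical names).  If the child's policy is a
deterministic program the parent can simulate, the parent can already
compute every response the child would make, so the child adds nothing.

# Minimal sketch: a parent that holds a deterministic child's code can
# "predict" the child perfectly just by running it.

def child_policy(situation: str) -> str:
    # Stands in for whatever deterministic program RSI produced.
    return "act: " + situation.upper()

def parent_predicts(child, situation: str) -> str:
    # A parent with the capacity to execute the child's code already
    # "knows" the child's response to every situation.
    return child(situation)

for s in ["novel problem", "ambiguous request"]:
    assert parent_predicts(child_policy, s) == child_policy(s)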


> > Which part of my interpretation or my argument do you disagree with?
> 
> "Increasing intelligence requires increasing algorithmic complexity."
> 
> If its motivation system is built the way that I describe it, this is of 
> no relevance.

Instead of the fuzzy term "intelligence", let me say "amount of knowledge",
which most people would agree is correlated with intelligence.  Behavior
depends not just on goals but also on what you know.  A child AGI may have
empathy toward humans just like its parent, but may have a slightly different
idea of what it means to be human.

> "We know that a machine cannot output a description of another machine 
> with greater complexity."
> 
> When would it ever need to do such a thing?  This factoid, plucked from 
> computational theory, is not about "description" in the normal 
> scientific and engineering sense, it is about containing a complete copy 
> of the larger system inside the smaller.  I, a mere human, can 
> "describe" the sun and its dynamics quite well, even though the sun is a 
> system far larger and more complex than myself.  In particular, I can 
> give you some beyond-reasonable-doubt arguments to show that the sun 
> will retain its spherical shape for as long as it is on the Main 
> Sequence, without *ever* changing its shape to resemble Mickey Mouse. 
> Its shape is stable in exactly the same way that an AGI motivation 
> system would be stable, in spite of the fact that I cannot "describe" 
> this large system in the strict, computational sense in which some
> systems "describe" other systems.

Your model of the sun does not include the position of every atom.  It has
far less algorithmic complexity than your brain, so how is your example
relevant to describing a system of greater complexity than your own?
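
To state the bound I am relying on (a rough sketch of the standard
Kolmogorov-complexity argument, not a quotation from anyone): if a program
P outputs a complete description x of a machine, then x can be
reconstructed from P alone, so

\[ K(x) \;\le\; |P| + c \]

for a machine-dependent constant c.  A coarse model of the sun fits
comfortably under the bound set by the brain's complexity; an atom-by-atom
description of the sun does not.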



-- Matt Mahoney, [EMAIL PROTECTED]
