--- Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > Legg's paper is of no relevance to the argument whatsoever, because
> > it first redefines "intelligence" as something else, without giving
> > any justification for the redefinition, then proves theorems about
> > the redefined meaning.  So it supports nothing in any discussion of
> > the behavior of intelligent systems.  I have discussed this topic on
> > a number of occasions.
> 
> Since everyone defines intelligence as something different, I picked a
> definition where we can actually say something about it that doesn't
> require empirical experimentation.  What definition would you like to
> use instead?

A definition that actually makes some sort of logical sense. Algorithmic
complexity has nothing whatsoever to do with what we think of as
intelligence! Any computer scientist worth their salt could use a
computer to write up random ten-billion-byte programs that do exactly
nothing. Defining intelligence that way just because it's mathematically
neat is cheating; while we're at it, why don't we tell the biologists to
redefine cells as Boolean variables?
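
To make the point concrete, here's a minimal sketch in Python (my own
illustration, scaled down so it actually runs; nothing here is taken
from Legg's paper): a program whose description is padded with a large
block of incompressible random bytes, and which nonetheless does exactly
nothing with its input.

    import os

    # Hypothetical illustration: a trivial "agent" padded with random
    # bytes. os.urandom here stands in for ten billion incompressible
    # bytes hard-coded into the program text, scaled down so this runs.
    JUNK = os.urandom(10_000)

    def do_nothing(observation):
        # The junk bloats the program's description but never influences
        # behavior; the "agent" ignores its input entirely.
        return None

    if __name__ == "__main__":
        print(do_nothing("any observation"), len(JUNK))

The description is huge, the behavior is vacuous; whatever a
complexity-based measure is counting there, it isn't what we mean by
intelligence.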

> We would all like to build a machine smarter than us, yet still be
> able to predict what it will do.  I don't believe you can have it both
> ways.  And if you can't predict what a machine will do, then you can't
> control it.

The idea isn't to control its every action; the idea is to design a
goal system such that the results of its actions are highly desirable
to humans, desirable even in ways we could not have anticipated.

> I believe this is true whether you use Legg's definition of universal
> intelligence or the Turing test.

You do realize that there are far more possible intelligence tests than
the two we happened to think up decades ago?

> Suppose you build a system whose top level goal is to act in the best
> interest of humans.  You still have to answer:
> 
> 1. Which humans?
> 2. What does "best interest" mean?
> 3. How will you prevent the system from reprogramming its goals, or
> building a smarter machine with different goals?
> 4. How will you prevent the system from concluding that extermination
> of the human race is in our best interest?
> 
> Here are some scenarios in which (4) could happen.  The AGI concludes
> (or is programmed to believe) that what "best interest" means to
> humans is goal satisfaction.  It understands how human goals like pain
> avoidance, food, sleep, sex, skill development, novel stimuli such as
> art and music, etc. all work in our brains.  The AGI ponders how it
> can maximize collective human goal achievement.  Some possible
> solutions:
> 
> 1. By electrical stimulation of the nucleus accumbens.
> 2. By simulating human brains in a simple artificial environment with
> a known solution to maximal goal achievement.
> 3. By reprogramming the human motivational system to remove all goals.
> 4. Goal achievement is a zero sum game, and therefore all computation
> (including human intelligence) is irrelevant.  The AGI (including our
> uploaded minds) turns itself off.

This is certainly true. It's very easy to poke holes
in simple definitions of things like "what people
want", but that's not the objective, not if you're
going to build an actual working AGI.
