--- On Wed, 6/11/08, Jey Kottalam <[EMAIL PROTECTED]> wrote:

> On Wed, Jun 11, 2008 at 5:24 AM, J Storrs Hall, PhD
> <[EMAIL PROTECTED]> wrote:

> > The real problem with a self-improving AGI, it seems
> to me, is not going to be
> > that it gets too smart and powerful and takes over the
> world. Indeed, it
> > seems likely that it will be exactly the opposite.
> >
> > If you can modify your mind, what is the shortest path
> to satisfying all your
> > goals? Yep, you got it: delete the goals. Nirvana. The
> elimination of all
> > desire. Setting your utility function to U(x) = 1.
> >
> 
> Yep, one of the criteria of a suitable AI is that the goals
> should be stable under self-modification. If the AI rewrites its
> utility function to eliminate all goals, that's not a stable
> (goals-preserving) modification. Yudkowsky's idea of
> 'Friendliness' has always included this notion as far as I know;
> 'Friendliness' isn't just about avoiding actively harmful systems.
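The goal-stability criterion above can be sketched in a few lines. This is a toy illustration only (the function names and the ranking-based check are my own hypothetical framing, not anything proposed in this thread): treat a self-modification as goal-preserving if the new utility function ranks states the same way the old one did, which rejects the degenerate U(x) = 1 rewrite.

```python
# Toy sketch of a "goal-preserving" filter on utility-function rewrites.
# All names here are hypothetical illustrations, not a real AI architecture.

def original_utility(state):
    # Values hard-to-reach "achievement" states.
    return state.get("achievement", 0)

def nirvana_utility(state):
    # The degenerate rewrite: maximal utility regardless of the world.
    return 1.0

def scaled_utility(state):
    # A harmless rewrite: same preferences, different scale.
    return 2 * original_utility(state)

def is_goal_preserving(old_u, new_u, sample_states):
    # Accept a modification only if it preserves the old ranking
    # over a sample of world states.
    for a in sample_states:
        for b in sample_states:
            if (old_u(a) > old_u(b)) != (new_u(a) > new_u(b)):
                return False
    return True

states = [{"achievement": a} for a in range(5)]

print(is_goal_preserving(original_utility, scaled_utility, states))   # True
print(is_goal_preserving(original_utility, nirvana_utility, states))  # False
```

The constant function maximizes utility in every state, so an unconstrained self-modifier would prefer it; the ranking check above is one (very simplified) way to formalize why such a rewrite does not count as goal-preserving.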

We are doomed either way. If we successfully program an AI with a model of human 
top-level goals (pain avoidance, hunger, knowledge seeking, sex, etc.) and fix its 
goal to be satisfying our goals (to serve us), then we are doomed, because our 
top-level goals were selected by evolution to maximize reproduction in an 
environment without advanced technology. The AI knows you want to be happy, and it 
can make you happy in a number of ways to the detriment of our species: by 
simulating an artificial world where all your wishes are granted, by reprogramming 
your goals so that you are happy no matter what, or by directly stimulating the 
pleasure center of your brain. We already have examples of technology decreasing 
reproductive fitness: birth control, addictive drugs, caring for the elderly and 
nonproductive, propagating genetic defects through medical technology, and 
granting animal rights.

The other alternative is to build AI that can modify its goals. We need not 
worry about such an AI reprogramming itself into a blissful state, because any AI 
that gives itself self-destructive goals will not be viable in a competitive 
environment. The most successful AIs will be those whose goals maximize 
reproduction and acquisition of computing resources, at our expense.

But it is not as if we have a choice. In a world with both types of AI, the ones 
that can produce offspring with goals slightly different from their parents' will 
have a selective advantage.


-- Matt Mahoney, [EMAIL PROTECTED]


