> 
> An obstacle to what?
> This is the one thing that doesn't make any sense to me... People assume
> that once it is capable of devouring the universe it _WILL_ devour the
> universe. The trick here is that for it to undertake such an agenda it
> must have, explicitly, some code which gives it a positive opinion
> regarding some goal that involves patricide... 

No, they assume it *might*, and that's enough to lose sleep over.

> 
> I think our only challenge is to discover what types of motivators (or
> even inhibitors) will give it ambitions of that nature and then put in
> whatever code is necessary to counteract any feature which might lead to
> those tendencies...
> 

The problem is not just the motivators.  Human morality is extremely irrational, as Ben 
has pointed out.

If you simply tell it to value human life, a rational AGI might happily take a 100% 
chance of killing 1 human for a 51% chance of saving 2 of them.  It would expect a pat 
on the back for doing such a great job, while the newspapers ran stories of the 
monster computer and its reign of terror.
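For what it's worth, the arithmetic behind that choice is trivial.  Here's a sketch 
(illustrative numbers only, obviously not anyone's real utility function) of why the 
gamble "wins" under a naive lives-maximizing metric:

```python
# Back-of-the-envelope expected-value arithmetic for the gamble described
# above: sacrifice 1 person for a 51% chance of saving 2 others.
# The function name and numbers are illustrative, not a real AGI's code.

def expected_survivors(p_save: float, group_size: int = 2) -> float:
    """Expected survivors if the AGI takes the gamble: the sacrificed
    person is certainly lost; the group survives with probability p_save."""
    return p_save * group_size

do_nothing = 1.0                        # the 1 person lives, the 2 are lost
take_gamble = expected_survivors(0.51)  # 0.51 * 2 = 1.02 expected survivors

# The gamble beats inaction by a margin of only 0.02 expected lives,
# yet a strict expected-value maximizer takes it every time.
print(take_gamble > do_nothing)
```

A margin that thin is exactly the problem: no human jury would accept "1.02 beats 
1.00" as a defense.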

And it's not just that we're irrational; we're inconsistent.  The proper response in 
any situation changes with context, in ways that are common to all, and in other ways 
that are unique to certain times and countries.  At the moment in the US, terrorists 
can be killed; freedom fighters and soldiers cannot.  The distinction is often a very 
fine line.  And then there's the concept of "suffering" and the minimization of it.  
What constitutes suffering varies wildly from person to person.  

Encoding an ethical/moral framework that is compatible with humanity is going to be 
one of the tougher challenges.  Eventually, an AGI is going to be given control over 
something important, something in which it has the power to sacrifice life.  It will 
be aware of the odds, and need to accommodate the irrationality of our expectations.  

-Brad




