I didn't say the algorithm needs to be simple, I said the goal of the algorithm 
ought to be simple. What are you trying to compute? 

Your answer is, "what is the right thing to do?"

The obvious next question is, what does "the right thing" mean?  The only way 
that the answer to that is not context-dependent is if there's such a thing as 
objective morality, something you've already dismissed by referring to the 
"there are no universally compelling arguments" post on the Overcoming Bias 
blog.

So you have to concede that Friendliness is not objective. Therefore it 
cannot be expressed formally; it can only be approximated, with error. 


--- On Tue, 8/26/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> From: Vladimir Nesov <[EMAIL PROTECTED]>
> Subject: Re: [agi] The Necessity of Embodiment
> To: agi@v2.listbox.com
> Date: Tuesday, August 26, 2008, 1:21 PM
> On Tue, Aug 26, 2008 at 8:54 PM, Terren Suydam
> <[EMAIL PROTECTED]> wrote:
> >
> > If Friendliness is an algorithm, it ought to be a simple matter to
> > express what the goal of the algorithm is. How would you define
> > Friendliness, Vlad?
> >
> 
> The algorithm doesn't need to be simple. The actual Friendly AI that
> has started to incorporate properties of human morality into itself is
> a very complex algorithm, and so is human morality itself. The original
> implementation of Friendly AI won't be too complex though; it'll only
> need to refer to the complexity outside in the right way, so that it
> converges on a dynamic with the right properties. Figuring out what
> this original algorithm needs to be, not counting the technical
> difficulties of implementing it, is very tricky though. You start from
> the question "what is the right thing to do?" applied in the context of
> unlimited optimization power, and work on extracting a technical
> answer, surfacing the layers of hidden machinery that underlie this
> question when *you* think about it, translating the question into a
> piece of engineering that answers it, and this is Friendly AI.
> 
> -- 
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/
> 
> 
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com


      

