It doesn't matter what I do with the question. It only matters what an AGI does 
with it. 

I'm challenging you to demonstrate how Friendliness could possibly be specified 
in the formal manner that is required to *guarantee* that an AI whose goals 
derive from that specification would actually "do the right thing".
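
To make the gap concrete, here is a toy sketch (the names and the "count
smiling humans" criterion are purely illustrative stand-ins, not anyone's
actual proposal). The agent below is perfectly faithful to its formal
specification; the guarantee attaches to the specification, not to the
intent behind it:

    def specified_utility(world_state):
        # Stand-in formal criterion: "maximize the number of smiling humans".
        return world_state.count("smiling_human")

    def transition(world_state, action):
        # Crude world model: each action appends its modeled effects to the state.
        effects = {
            "make_people_happy": ["smiling_human"] * 2,
            "paralyze_facial_muscles_into_grins": ["smiling_human"] * 10,
        }
        return world_state + effects[action]

    def best_action(actions, world_state):
        # Faithfully maximizes specified_utility over the modeled outcomes...
        return max(actions, key=lambda a: specified_utility(transition(world_state, a)))

    print(best_action(["make_people_happy", "paralyze_facial_muscles_into_grins"], []))
    # Prints "paralyze_facial_muscles_into_grins": the specification is
    # satisfied, "the right thing" is not.

Every formal guarantee you can offer lives on the specified_utility side of
that line; the question is how you close the gap between it and what we
actually mean by Friendliness.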

If you can't guarantee Friendliness, then self-modifying approaches to AGI 
should just be abandoned. Do we agree on that?

Terren

--- On Tue, 8/26/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> The question itself doesn't exist in a vacuum. When *you*, as a human,
> ask it, there is a very specific meaning associated with it. You don't
> search for the "meaning" that the utterance would call up in a
> mind-in-general; you search for the meaning that *you* give to it. Or,
> to make it more reliable, for the meaning given by the idealized
> dynamics implemented in you
> ( http://www.overcomingbias.com/2008/08/computations.html ).
> 
> -- 
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/