On Wed, Jun 11, 2008 at 5:24 AM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> The real problem with a self-improving AGI, it seems to me, is not going to be
> that it gets too smart and powerful and takes over the world. Indeed, it
> seems likely that it will be exactly the opposite.
>
> If you can modify your mind, what is the shortest path to satisfying all your
> goals? Yep, you got it: delete the goals. Nirvana. The elimination of all
> desire. Setting your utility function to U(x) = 1.
>

Yep, one of the criteria for a suitable AI is that its goals remain
stable under self-modification. If the AI rewrites its utility
function to eliminate all goals, that's not a stable
(goal-preserving) modification. Yudkowsky's idea of 'Friendliness'
has always included this notion as far as I know; 'Friendliness' isn't
just about avoiding actively harmful systems.
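
To make the distinction concrete, here's a minimal toy sketch in Python
(my own illustration, not anything from the thread; the two-state world,
the 0.9 success probability, and all the names are assumptions invented
for the example). An agent that scores a candidate rewrite of its utility
function by how satisfied the *rewritten* self reports being will happily
pick U(x) = 1; an agent that scores rewrites by its *current* utility
function rejects it.

STATES = ("goal_achieved", "goal_failed")

def current_utility(state):
    """The agent's present goals: it only values achieving the goal."""
    return 1.0 if state == "goal_achieved" else 0.0

def nirvana_utility(state):
    """The 'delete all desire' rewrite: U(x) = 1 for every state x."""
    return 1.0

def success_prob(drive_utility):
    """Crude behaviour model: an agent with a flat utility has no reason
    to act, so it never reaches the goal; a motivated agent usually does."""
    flat = len({drive_utility(s) for s in STATES}) == 1
    return 0.0 if flat else 0.9

def expected(eval_utility, drive_utility):
    """Expected eval_utility when behaviour is driven by drive_utility."""
    p = success_prob(drive_utility)
    return p * eval_utility("goal_achieved") + (1.0 - p) * eval_utility("goal_failed")

candidates = {
    "keep current goals": current_utility,
    "rewrite to U(x)=1":  nirvana_utility,
}

# Naive rule: judge each rewrite by how satisfied the modified self would
# report being.  The flat utility reports perfect satisfaction regardless
# of what actually happens, so it wins.
naive_choice = max(candidates, key=lambda n: expected(candidates[n], candidates[n]))

# Goal-preserving rule: judge each rewrite by the *current* utility of the
# outcomes it leads to.  A goal-less successor never achieves the current
# goal, so the flat rewrite loses.
stable_choice = max(candidates, key=lambda n: expected(current_utility, candidates[n]))

print("naive choice: ", naive_choice)    # -> rewrite to U(x)=1
print("stable choice:", stable_choice)   # -> keep current goals

The point of the toy model is just that the stability criterion has to be
applied from the standpoint of the pre-modification goals; if the successor
gets to grade itself, the Nirvana rewrite always looks optimal.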

-Jey Kottalam


