For instance, a Novamente-based AGI will have an explicit utility
function, but only a fraction of the system's activity will be directly
oriented toward fulfilling this utility function.

Some of the system's activity will be "spontaneous" ... i.e. only
implicitly goal-oriented ... and as such may involve some imitation
of human motivation, and plenty of radically non-human stuff...
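To make the split concrete, here is a minimal, purely illustrative sketch of an agent whose cycles are divided between explicit utility maximization and spontaneous, only-implicitly-goal-oriented activity. Nothing here is Novamente's actual design; EXPLICIT_FRACTION, utility(), and the toy state are all invented for illustration:

```python
import random

EXPLICIT_FRACTION = 0.6  # assumed (invented) share of explicitly goal-driven cycles

def utility(state):
    """Toy explicit utility function: higher state is better."""
    return state

def step(state, rng):
    """One activity cycle: explicit utility pursuit or spontaneous action."""
    if rng.random() < EXPLICIT_FRACTION:
        # Explicitly goal-oriented: greedily pick the utility-maximizing move.
        return max((state + d for d in (-1, 0, 1)), key=utility), True
    # Spontaneous: an action taken without consulting the utility function.
    return state + rng.choice((-1, 0, 1)), False

def run(steps, seed=0):
    """Run the agent; return final state and how many cycles were explicit."""
    rng = random.Random(seed)
    state, explicit = 0, 0
    for _ in range(steps):
        state, was_explicit = step(state, rng)
        explicit += was_explicit
    return state, explicit
```

The point of the sketch is just that the spontaneous branch never evaluates utility() at all, so only part of the system's behavior is directly shaped by the explicit goal.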

Which, as Eliezer has pointed out, sounds dangerous as all hell unless you have some reason to assume that it wouldn't be (like being sure that the AGI sees and believes that Friendliness is in its own self-interest).

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/