Ben Goertzel wrote:
> Actually, Eliezer said he had two points about AIXItl:
>
> 1) that it could be "broken" in the sense he's described
>
> 2) that it was intrinsically un-Friendly
>
> So far he has only made point 1), and has not gotten to point 2)!!!
>
> As for a general point about the teachability of Friendliness, I don't think
> that an analysis of AIXItl can lead to any such general conclusion.  AIXItl
> is very, very different from Novamente or any other pragmatic AI system.
>
> I think that an analysis of AIXItl's Friendliness or otherwise is going to
> be useful primarily as an exercise in "Friendliness analysis of AGI
> systems," rather than for any pragmatic implications it may have.

Actually, I said AIXI-tl could be broken; AIXI is the one that can be shown to be intrinsically unFriendly (extending the demonstration to AIXI-tl would be significantly harder).
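
For those who haven't read Hutter's papers, a rough sketch of the distinction, simplifying Hutter's notation (a for actions, o for observations, r for rewards, U the universal machine, \ell(q) the length of program q, m the horizon): AIXI chooses each action by a full expectimax over every environment program q, weighted by algorithmic simplicity:

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
         [ r_k + \cdots + r_m ] \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

That mixture over all programs is incomputable; AIXI-tl approximates it by restricting attention to policies computable by programs of length at most l within time t per cycle. So an argument about AIXI's behavior doesn't automatically carry over to AIXI-tl; the restriction to bounded programs has to be handled separately.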

Philip Sutton wrote:
> My recollection was that Eliezer initiated the "Breaking AIXI-tl" discussion as a way of proving that friendliness of AGIs had to be consciously built in at the start and couldn't be assumed to be teachable at a later point. (Or have I totally lost the plot?)
There are at least three foundational differences between the AIXI formalism and a Friendly AI; so far I've covered only the first. "Breaking AIXI-tl" wasn't about Friendliness; it was more of a dry run on a directly demonstrable and emotionally uncharged architectural consequence before tackling the hard stuff.

--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
