Re: [agi] Breaking AIXI-tl - AGI friendliness

2003-02-16 Thread Philip Sutton
Hi Eliezer/Ben, My recollection was that Eliezer initiated the Breaking AIXI-tl discussion as a way of proving that friendliness of AGIs had to be consciously built in at the start and couldn't be assumed to be teachable at a later point. (Or have I totally lost the plot?) Do you feel the ...

RE: [agi] Breaking AIXI-tl - AGI friendliness

2003-02-16 Thread Ben Goertzel
Actually, Eliezer said he had two points about AIXItl: 1) that it could be broken in the sense he's described, and 2) that it was intrinsically un-Friendly. So far he has only made point 1), and has not gotten to point 2)!!! As for a general point about the teachability of Friendliness, I don't ...
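[Editorial note: the AIXItl referenced in this thread is Hutter's time- and length-bounded approximation of AIXI. As background for readers, here is a minimal toy sketch of its decision rule — choose the action with the highest expected reward under a mixture of length-bounded environment models weighted by 2^-length. The model representation and names here are illustrative assumptions, not part of the thread or the formal definition.]

```python
# Toy sketch (editorial illustration, not the formal AIXItl definition):
# pick the action maximizing expected reward under a 2^-length-weighted
# mixture of bounded environment models.

def toy_aixitl_action(history, models, actions):
    """models: list of (length_bits, predict) pairs, where
    predict(history, action) -> expected reward (hypothetical interface)."""
    best, best_val = None, float("-inf")
    for a in actions:
        # Mixture over models, shorter models weighted more heavily.
        val = sum(2.0 ** -length * predict(history, a)
                  for length, predict in models)
        if val > best_val:
            best, best_val = a, val
    return best

# Two hypothetical environment models: a short one rewarding action 1,
# a longer one rewarding action 0. The shorter model dominates the mixture.
models = [(1, lambda h, a: float(a == 1)),
          (3, lambda h, a: float(a == 0))]
print(toy_aixitl_action([], models, [0, 1]))  # -> 1
```

The point being debated in the thread is not this decision rule itself but what its formal structure does and does not imply about Friendliness.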

RE: [agi] Novamente: how critical is self-improvement to getting human parity?

2003-02-16 Thread Philip Sutton
Ben, Thanks for that. Your explanation makes the whole thing a lot clearer. I'll come back to this thread again after Eliezer's discussion on AGI friendliness has progressed a bit further. Cheers, Philip

[agi] The core of the current debate??

2003-02-16 Thread Philip Sutton
I was just thinking, it might be useful to make sure that in pursuing the Breaking AIXI-tl - AGI friendliness debate we are clear about what the starting issue is. I think it is best defined by Eliezer's post on 12 Feb and Ben's reply of the same day. Eliezer's post: ...

RE: [agi] Breaking AIXI-tl - AGI friendliness - how to move on

2003-02-16 Thread Philip Sutton
Hi Ben, From a high-order-implications point of view I'm not sure that we need too much written up from the last discussion. To me it's almost enough to know that both you and Eliezer agree that the AIXItl system can be 'broken' by the challenge he set and that a human digital simulation ...

RE: [agi] Breaking AIXI-tl - AGI friendliness - how to move on

2003-02-16 Thread Ben Goertzel
To me it's almost enough to know that both you and Eliezer agree that the AIXItl system can be 'broken' by the challenge he set and that a human digital simulation might not be. The next step is to ask 'so what?'. What has this got to do with the AGI friendliness issue? This last point of ...