[agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Eliezer S. Yudkowsky
Ben, you and I have a long-standing disagreement on a certain issue which impacts the survival of all life on Earth. I know you're probably bored with it by now, but I hope you can understand why, given my views, I keep returning to it, and find a little tolerance for my doing so. The issue

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
I can spot the problem in AIXI because I have practice looking for silent failures, because I have an underlying theory that makes it immediately obvious which useful properties are formally missing from AIXI, and because I have a specific fleshed-out idea for how to create moral systems
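For readers new to the thread: the AIXI model under discussion is Marcus Hutter's optimal reinforcement learner. As a rough sketch for context (notation follows Hutter's papers), AIXI picks its k-th action by expectimax over all environment programs consistent with its history:

a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over environment programs weighted by their length \ell(q), the a_i are actions, the o_i r_i are observation-reward pairs, and m is the horizon. Nothing in this definition says how the reward signal is generated or where the agent itself sits inside the environment; those are the sorts of formally missing properties at issue in this exchange.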

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
Your intuitions say... I am trying to summarize my impression of your viewpoint; please feel free to correct me... AI morality is a matter of experiential learning, not just for the AI, but for the programmers. Also, we plan to start Novamente off with some initial goals embodying ethical

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
Hi, 2) If you get the deep theory wrong, there is a strong possibility of a silent catastrophic failure: the AI appears to be learning everything just fine, and both you and the AI are apparently making all kinds of fascinating discoveries about AI morality, and everything seems to be

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
Eliezer S. Yudkowsky wrote: 1) AI morality is an extremely deep and nonobvious challenge which has no significant probability of going right by accident. 2) If you get the deep theory wrong, there is a strong possibility of a silent catastrophic failure: the AI appears to be learning

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote: Your intuitions say... I am trying to summarize my impression of your viewpoint; please feel free to correct me... AI morality is a matter of experiential learning, not just for the AI, but for the programmers. To teach an AI morality you must give it the right feedback

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
This is slightly off-topic, but no more so than the rest of the thread... 1) That it is selfishly pragmatic for a superintelligence to deal with humans economically rather than converting them to computronium. For convenience, let's rephrase this: the majority of arbitrarily generated

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
Jonathan Standley wrote: Now here is my question; it's going to sound silly, but there is quite a bit behind it: Of what use is computronium to a superintelligence? If the superintelligence perceives a need for vast computational resources, then computronium would indeed be very useful.