RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
Alan, With comments like this > I want this list to be useful to me and not have to skim through > hundreds of e-mails watching the rabbi drive the conversation into useless > spirals as he works on the implementation details of the real problems. > Really, I'm getting dizzy from all of this. Lets s

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
> Jonathan Standley wrote: > > Now here is my question, it's going to sound silly but there is >> quite a bit behind it: > > "Of what use is computronium to a superintelligence?" > If the superintelligence perceives a need for vast computational > resources, then computronium would indeed be very

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Jonathan Standley
> Now here is my question, it's going to sound silly but there is quite a > bit behind it: > > "Of what use is computronium to a superintelligence?" > If the superintelligence perceives a need for vast computational resources, then computronium would indeed be very useful. Assuming said SI

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
Goertzel the good wrote: > Perhaps living in Washington has made me a little paranoid, but I am > continually aware of the increasing threats posed by technology to > humanity's survival. I often think of humanity's near-term future as a > race between destructive and constructive technologies.

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
This is slightly off-topic but no more so than the rest of the thread... > 1) That it is selfishly pragmatic for a superintelligence to deal with > humans economically rather than converting them to computronium. For convenience, let's rephrase this: "the majority of arbitrarily generated s

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
Alan Grimes wrote: > You have not shown this at all. From everything you've said it seems > that you are trying to trick Ben into having so many misgivings about > his own work that he holds it up while you create your AI first. I hope > Ben will see through this deception and press ahead with no

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote: > >> Your intuitions say... I am trying to summarize my impression of your >> viewpoint, please feel free to correct me... "AI morality is a >> matter of experiential learning, not just for the AI, but for the >> programmers. To teach an AI morality you must give it the right >

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Michael Roy Ames
Alan Grimes wrote: > > You have not shown this at all. From everything you've said it seems > that you are trying to trick Ben into having so many misgivings about > his own work that he holds it up while you create your AI first. I > hope Ben will see through this deception and press ahead with >

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Philip Sutton
Alan, > You have not shown this at all. From everything you've said it seems > that you are trying to trick Ben into having so many misgivings about > his own work that he holds it up while you create your AI first. I > hope Ben will see through this deception and press ahead with > novamente. --

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
Eliezer S. Yudkowsky wrote: > 1) AI morality is an extremely deep and nonobvious challenge which has > no significant probability of going right by accident. > 2) If you get the deep theory wrong, there is a strong possibility of > a silent catastrophic failure: the AI appears to be learning e

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
> So it's not the case that we intend to rely ENTIRELY on experiential > learning; we intend to rely on experiential learning from an engineering > initial condition, not from a complete tabula rasa. > > -- Ben G "engineered" initial condition, I meant, oops [typed in even more of a hurry as I ge

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
Hi, > 2) If you get the deep theory wrong, there is a strong possibility of a > silent catastrophic failure: the AI appears to be learning > everything just > fine, and both you and the AI are apparently making all kinds of > fascinating discoveries about AI morality, and everything seems to be

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
> Your intuitions say... I am trying to summarize my impression of your > viewpoint, please feel free to correct me... "AI morality is a matter of > experiential learning, not just for the AI, but for the programmers. Also, we plan to start Novamente off with some initial goals embodying ethical

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
> I can spot the problem in AIXI because I have practice looking for silent > failures, because I have an underlying theory that makes it immediately > obvious which useful properties are formally missing from AIXI, and > because I have a specific fleshed-out idea for how to create > moral system
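[For reference while reading this exchange: the agent being debated is Hutter's AIXI. A minimal statement of its action-selection rule, taken from Hutter's standard formulation rather than from any message in this thread, is

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where the a_i, o_i, r_i are actions, observations, and rewards, U is a universal Turing machine, \ell(q) is the length of program q, and m is the horizon. The rule is expectimax over all programs consistent with the interaction history, weighted by 2^{-\ell(q)}; nothing in the formula itself specifies any particular goal content beyond the reward signal, which is the backdrop for the "formally missing properties" being argued over here.]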

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
> Your intuitions say... I am trying to summarize my impression of your > viewpoint, please feel free to correct me... "AI morality is a matter of > experiential learning, not just for the AI, but for the programmers. To > teach an AI morality you must give it the right feedback on moral > quest

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Philip Sutton
Eliezer, Thanks for being clear at last about what the deep issue is that you were driving at. Now I can start getting my head around what you are trying to talk about. Cheers, Philip

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Shane Legg
Eliezer, I suppose my position is similar to Ben's in that I'm more worried about working out the theory of AI than about morality, because until I have a reasonable idea of how an AI is going to actually work I don't see how I can productively think about something as abstract as AI morality. I