[agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Philip Sutton
I've just read the first chapter of The Metamorphosis of Prime Intellect (http://www.kuro5hin.org/prime-intellect). It makes you realise that Ben's notion that ethical structures should be based on a hierarchy going from general to specific is very valid - if Prime Intellect had been programmed

Re: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Eliezer S. Yudkowsky
Eliezer S. Yudkowsky wrote: There may be additional rationalization mechanisms I haven't identified yet which are needed to explain anosognosia and similar disorders. Mechanism (4) is the only one deep enough to explain why, for example, the left hemisphere automatically and unconsciously

Re: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote: This is exactly why I keep trying to emphasize that we all should forsake those endlessly fascinating, instinctively attractive political arguments over our favorite moralities, and instead focus on the much harder problem of defining an AI architecture which can understand

RE: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Ben Goertzel
In Novamente, this skeptical attitude has two aspects: 1) very high-level schemata that must be taught, not programmed; 2) some basic parameter settings that will statistically tend to incline the system toward skepticism of its own conclusions [but you can't turn the dial too far in
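
[Editor's note: the preview above only names the idea of a tunable bias toward self-skepticism; Novamente's actual schemata and parameter names are not given. The following is a minimal, purely hypothetical Python sketch of aspect (2): a single "skepticism" parameter that discounts the system's confidence in its own conclusions, including the failure mode of turning the dial too far.]

    # Hypothetical illustration only: not Novamente code. The parameter and
    # class names here are invented to show the general idea of a tunable
    # self-skepticism setting that down-weights the system's own conclusions.

    from dataclasses import dataclass


    @dataclass
    class Conclusion:
        statement: str
        raw_confidence: float  # confidence assigned by the inference process itself


    def discounted_confidence(c: Conclusion, skepticism: float) -> float:
        """Scale the system's own confidence by a skepticism factor in [0, 1).

        skepticism = 0.0 takes conclusions at face value; values near 1.0
        make the system distrust almost everything it infers, which is the
        "dial turned too far" failure mode the message alludes to.
        """
        return c.raw_confidence * (1.0 - skepticism)


    if __name__ == "__main__":
        c = Conclusion("All observed ravens are black, so all ravens are black.", 0.9)
        for s in (0.0, 0.2, 0.8):
            print(f"skepticism={s:.1f} -> effective confidence {discounted_confidence(c, s):.2f}")

[With skepticism at 0.2 the conclusion keeps most of its weight; at 0.8 it is nearly ignored, illustrating why the dial cannot be pushed arbitrarily high.]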

RE: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Billy Brown
Eliezer S. Yudkowsky wrote: I don't think we are the beneficiaries of massive evolutionary debugging. I think we are the victims of massive evolutionary warpage to win arguments in adaptive political contexts. I've identified at least four separate mechanisms of rationalization in human