Eliezer S. Yudkowsky wrote:
> I don't think we are the beneficiaries of massive evolutionary debugging.
> I think we are the victims of massive evolutionary warpage to win
> arguments in adaptive political contexts. I've identified at least four
> separate mechanisms of rationalization in human psychology so far:
Well, yes. Human minds are tuned for fitness in the ancestral environment, not for correspondence with objective reality. But just reaching the point where an AI could implement those rationalizations at all would be a huge leap forward from current systems.

In any case, I think your approach to the problem is a step in the right direction. We need a theory of AI ethics before we can test it, and we need lots of experimental testing before we start building anything that has a chance of taking off. Sometimes I think it is a good thing that AI is still stuck in a mire of wishful thinking, because we aren't ready to build AGI safely.

Billy Brown