On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
>
>>>> An AI implementing CEV doesn't question whether the thing that
>>>> humans express that they ultimately want is 'good' or not. If it is
>>>> what the humans really want, then it is done. No thinking about
>>>> whether it is really 'good' (except the thinking done by the humans
>>>> answering the questions, and the possible
>>>> simulations/modifications/whatever of those humans -- and they
>>>> indeed think that it *is* 'good').
>>>
>>> If that is how CEV is meant to work, then I object to CEV and reject it.
>>
>> If you really had figured out a smart answer to this question, don't
>> you think the vastly smarter and more knowledgeable humans of the
>> future would agree with you (they would check out what is already
>> written on the subject)? And so CEV would automatically converge on
>> whatever it is that you have figured out...
>
> This would require 'goodness' to emerge outside of the CEV dynamic, not
> as a result thereof. I agree with you.

So is there actually anything in CEV that you object to?

If we use your terminology, in the CEV model 'goodness' *does* emerge
"outside" of the dynamic, since 'goodness' is found in the answers the
humans give.

-- 
Aleksei Riikonen - http://www.iki.fi/aleksei
