On 10/26/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> On 10/26/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
>>
>> X0 = me
>> Y0 = what X0 thinks is good for the world
>> X1 = what X0 wants to be
>> Y1 = what X1 would think is good for the world
>> X2 = what X1 would want to be
>> Y2 = what X2 would think is good for the world
>
> Hmm - I may be treading on thin ice here, but wouldn't X0 have to be good to
> begin with for X2 to be good? I still believe that the output CEV aims to
> deliver is required as an initial input in order for it to succeed.

No. For me to think that "what I would want to be" is 'good', I do not
have to think that I am 'good' right now.

An AI implementing CEV doesn't ask whether the thing that humans
express that they ultimately want is 'good' or not. If it is what the
humans really want, then the AI does it. There is no thinking about
whether it is really 'good' (except the thinking done by the humans
answering the questions, and by the possible
simulations/modifications/whatever of those humans -- and they indeed
think that it *is* 'good').

-- 
Aleksei Riikonen - http://www.iki.fi/aleksei
