> How relatively simple? Evolution doesn't do simple. I doubt that any
> human goal system has a simple mathematical formalization.
I guess the question is "how do you define simple?"  What I have in mind has three really simple axioms, plus a fourth that I suspect is provable from the first three (but that I don't want to fight over), and everything else follows from there (although, as always, the devil is in the details -- but those details are always resolvable by referring back to the original four axioms).
 
        Mark
 
P.S.  And, as a side comment, evolution often does do simple when there is a down-gradient path to it, and particularly when complexity exacts a cost.  It's just that human biases make the complicated examples more striking, so you notice them more.
 
 
----- Original Message -----
From: "Peter de Blanc" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, June 07, 2006 4:51 PM
Subject: Re: [agi] Two draft papers: AI and existential risk; heuristics and biases

> On Wed, 2006-06-07 at 16:13 -0400, Mark Waser wrote:
>>     I'm pretty sure that I've got the science and math that I need and, as I
>
> Okay. I supposed the opposite, not because of anything you said, but
> because the base rate is so low.
>
>> said above, I don't feel compelled to listen to everyone.  However, if I
>> can't get a decent consensus out of a pretty bright, educated group (or at
>> least, the open-minded, bright, and educated members of a group like this),
>> then it's a pretty good sign that my ideas aren't where they should be.
>
> I think it's just hopeless. You'll never get a consensus out of a pre-
> selected population this size on a new idea.
>
> Consider that the theory of evolution is not part of the world's
> consensus. Consider that Bayes' Theorem is not part of the
> scientific consensus. It isn't even part of this list's consensus! These
> are ancient ideas - way older than us. The consensus lags *centuries*
> behind people who think.
>
>> It IS my contention that there is a relatively simple,
>> inductively-robust (in a mathematical proof sense) formulation of
>> friendliness that will guarantee that there won't be effects that *I*
>> consider undesirable, horrible, or immoral.  It will, of course/however,
>> produce a number of effects that others will decry as undesirable, horrible,
>> or immoral -- like allowing abortion and assisted suicide in a reasonable
>> number of cases, NOT allowing the killing of infidels, allowing almost any
>> personal modifications (with truly informed consent) that are non-harmful
>> to others, NOT allowing the imposition of personal modifications whether
>> they be physical, mental, or spiritual, etc.
>
> How relatively simple? Evolution doesn't do simple. I doubt that any
> human goal system has a simple mathematical formalization.
>
> -------
> To unsubscribe, change your address, or temporarily deactivate your subscription,
> please go to
http://v2.listbox.com/member/[EMAIL PROTECTED]
>