I'd be interested in knowing if anyone else on this list has had any experience with policy-based governing . . . .

Questions like "Are the following things good?"
- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.
can only be properly answered by reference to your *ordered* list of goals, *together with* your list of limitations (restrictions, to use the lingo).

For me, yes, all of those things are good, since they are on my list of goals, *unless* the method of accomplishing them steps on a higher goal, or on a collection of goals with greater total weight, or violates one of my limitations (restrictions).

As long as I'm intelligent enough to put things like "don't do anything to me without my informed consent" on my limitations list, I don't expect too many problems (and certainly not any of the problems that were brought up later in the "Questions" post).

Personally, I find the level of many of these morality discussions ridiculous. It is relatively easy for any competent systems architect to design complete, internally consistent systems of morality from sets of goals and restrictions. The problem is that any such system is just not going to match what everybody wants since everybody embodies conflicting and irreconcilable requirements.
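
To be concrete about what I mean by designing from goals and restrictions, here is a toy sketch in Python of the evaluation rule I described above: reject any action that violates a restriction, otherwise score it against the weighted, ordered goal list. Every name, weight, and data structure below is my own invention for illustration, not a proposal for a real implementation.

from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    weight: float          # higher weight = higher priority in the ordered list

@dataclass
class Action:
    name: str
    advances: dict         # goal name -> how much the action helps (+) or hurts (-)
    violates: set          # names of restrictions the action would break

# The restriction list trumps everything; weights never enter into it.
RESTRICTIONS = {"no_action_without_informed_consent"}

# An *ordered* goal list with (made-up) weights.
GOALS = [
    Goal("end_of_disease", 0.9),
    Goal("end_of_death", 0.8),
    Goal("end_of_pain_and_suffering", 0.7),
]

def acceptable(action: Action) -> bool:
    """Reject any action that breaks a restriction; otherwise accept it only
    if its weighted benefit to the goal list outweighs its weighted harm."""
    if action.violates & RESTRICTIONS:
        return False
    score = sum(g.weight * action.advances.get(g.name, 0.0) for g in GOALS)
    return score > 0.0

# A forced upload advances "end_of_death" but skips consent, so it is rejected.
upload = Action("scan_and_upload_brain_without_asking",
                advances={"end_of_death": 1.0},
                violates={"no_action_without_informed_consent"})
print(acceptable(upload))   # False

The particular numbers don't matter. The point is that once "don't do anything to me without my informed consent" sits on the restriction list, a forced brain-upload fails the check before any goal weights are even consulted.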

Richard's system of evolved morality through a large number of diffuse constraints is a good attempt at creating a morality system that is unlikely to offend anyone while still making "reasonable" decisions about contested issues. The problem with Richard's system is that it may well make decisions like outlawing stem cell research, since so many people are against it (or maybe, if it is sufficiently intelligent, its internal consistency routines may reduce the weight of the arguments from people who insist upon conflicting priorities like "I want the longest life and best possible medical care" and "I don't want stem cell research").
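
I have no inside knowledge of how Richard's diffuse constraints are actually combined, so treat the following as nothing more than my guess at the flavor of the problem: a toy aggregator (again Python, with a population, an entailment, and a penalty all invented by me) in which a crude consistency check halves the weight of anyone who wants the longest possible life while opposing the research it would presumably require.

# Each person contributes preferences: proposition -> support in [-1, +1].
population = [
    {"fund_stem_cell_research": -1.0, "longest_possible_life": +1.0},  # conflicting pair
    {"fund_stem_cell_research": +1.0, "longest_possible_life": +1.0},
    {"fund_stem_cell_research": -1.0},
]

# Pairs the system believes entail one another: wanting the first
# (it thinks) requires supporting the second.
ENTAILMENTS = {("longest_possible_life", "fund_stem_cell_research")}

def consistency_weight(prefs: dict) -> float:
    """Halve the weight of anyone who supports a proposition while
    opposing something the system thinks that proposition requires."""
    for wanted, required in ENTAILMENTS:
        if prefs.get(wanted, 0.0) > 0 and prefs.get(required, 0.0) < 0:
            return 0.5    # arbitrary penalty, purely for illustration
    return 1.0

def decide(proposition: str) -> float:
    """Weighted average support; negative means the proposition loses."""
    total = weight_sum = 0.0
    for prefs in population:
        w = consistency_weight(prefs)
        total += w * prefs.get(proposition, 0.0)
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

print(decide("fund_stem_cell_research"))   # -0.2: still outlawed

Even with the penalty applied, this toy vote still comes out against the research, which is exactly the failure mode I'm worried about.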

The good point about an internally consistent system designed by me is that it's not going to outlaw stem cell research. The bad point is that it's going to offend *a lot* of people, and if it's someone else's system, it may well offend (or exterminate) me. And, I must say, based upon the level of many of these discussions, the thought of a lot of you designing a morality system is *REALLY* frightening.

----- Original Message -----
From: "Richard Loosemore" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Friday, December 01, 2006 11:56 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]


Matt Mahoney wrote:
--- Hank Conn <[EMAIL PROTECTED]> wrote:
The further the actual target goal state of that particular AI is away from the actual target goal state of humanity, the worse.

The goal of ... humanity... is that the AGI implemented that will have the strongest RSI curve also will be such that its actual target goal state is exactly congruent to the actual target goal state of humanity.

This was discussed on the Singularity list. Even if we get the motivational system and goals right, things can still go badly. Are the following things good?

- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.

You might think so, and program an AGI with these goals. Suppose the AGI figures out that by scanning your brain, copying the information into a computer, and making many redundant backups, you become immortal. Furthermore, once your consciousness becomes a computation in silicon, your universe can be simulated to be anything you want it to be.

See my previous lengthy post on the subject of motivational systems vs "goal stack" systems.

The questions you asked above are predicated on a goal stack approach.

You are repeating the same mistakes that I already dealt with.


Richard Loosemore
