Matt Mahoney wrote:
> --- Hank Conn <[EMAIL PROTECTED]> wrote:
>> The further the actual target goal state of that particular AI is from
>> the actual target goal state of humanity, the worse.
>>
>> The goal of ... humanity ... is that the implemented AGI with the
>> strongest RSI curve will also be one whose actual target goal state is
>> exactly congruent with the actual target goal state of humanity.
>
> This was discussed on the Singularity list.  Even if we get the
> motivational system and goals right, things can still go badly.  Are the
> following things good?
>
> - End of disease.
> - End of death.
> - End of pain and suffering.
> - A paradise where all of your needs are met and wishes fulfilled.
>
> You might think so, and program an AGI with these goals.  Suppose the AGI
> figures out that by scanning your brain, copying the information into a
> computer, and making many redundant backups, you become immortal.
> Furthermore, once your consciousness becomes a computation in silicon,
> your universe can be simulated to be anything you want it to be.

See my previous lengthy post on the subject of motivational systems vs "goal stack" systems.

The questions you asked above are predicated on a goal stack approach.

You are repeating the same mistakes that I already dealt with.
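
A minimal, purely illustrative sketch of the distinction at issue (a toy
Python example of my own, not code from any actual system discussed in this
thread; the agent classes, action names, and weights are all hypothetical):
a goal-stack agent accepts any action that satisfies its explicit
top-of-stack goal literally, while a motivational-system agent trades off
many weighted dispositions at once.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class GoalStackAgent:
    # Pursues whatever explicit goal sits on top of the stack, literally.
    stack: List[str] = field(default_factory=list)

    def choose_action(self, actions: Dict[str, Dict[str, float]]) -> str:
        goal = self.stack[-1]  # only the top goal matters
        # Any action that satisfies the literal goal is acceptable,
        # regardless of its other effects.
        for name, effects in actions.items():
            if effects.get(goal, 0.0) > 0:
                return name
        return "no-op"

@dataclass
class MotivationalAgent:
    # Scores every action against many weighted dispositions at once.
    motivations: Dict[str, float] = field(default_factory=dict)

    def choose_action(self, actions: Dict[str, Dict[str, float]]) -> str:
        def score(effects: Dict[str, float]) -> float:
            return sum(self.motivations.get(m, 0.0) * v
                       for m, v in effects.items())
        return max(actions, key=lambda name: score(actions[name]))

actions = {
    "upload_everyone": {"end_suffering": 1.0, "respect_consent": -1.0},
    "cure_diseases":   {"end_suffering": 0.8, "respect_consent":  0.9},
}
print(GoalStackAgent(stack=["end_suffering"]).choose_action(actions))
# -> "upload_everyone": the literal goal is satisfied, side effects ignored
print(MotivationalAgent(motivations={"end_suffering": 1.0,
                                     "respect_consent": 1.0})
      .choose_action(actions))
# -> "cure_diseases": the consent disposition weighs against uploading

The only point of the contrast is that the second kind of system has no
single literal goal that a clever plan can satisfy in an unintended way.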


Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
