On Nov 18, 2007 3:05 AM, Dennis Gorelik <[EMAIL PROTECTED]> wrote:

> You assume that "when we are 100% done" -- we will get what we
> ultimately want.
> But that's not exactly true.
>
> The fittest species (whether computers, humans, or androids) will
> dominate the world.
>
> Let's talk about the set of supergoals that such fittest species will
> have.
>
> I think this set would include:
> - Supergoal "Prevent being [self]destroyed".
> - Supergoal "Prevent changing supergoals". That supergoal would also
> try to prevent tampering with supergoals. I guess that supergoal will
> have to become quite strong in the environment when it's
> technologically possible to tweak supergoals.
> - Supergoal "reproduce". Supergoals of descendants would probably
> slightly vary from supergoals of the parent.
> - Other supergoals, such as "Desire to learn", "Desire to speak", and
> "Contribute to society".
>
> Note that the fittest species will not really have a "Permanent
> pleasure paradise" option.
>

Dennis, I believe the same and have recently finished organizing my thoughts
on the matter in a paper, Practical Benevolence – a Rational Philosophy of
Morality, available at http://rationalmorality.info/

Abstract: These arguments demonstrate the a priori moral nature of reality
and develop the basic understanding necessary for realizing the logical
maxim in Kant's categorical imperative[1] based on the implied goal of
evolution[2]. The maxim is used to prove that moral behavior is an
obligatory emergent phenomenon among evolving, interacting, goal-driven
agents.

Kind regards,

Stefan
-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobile: +86 1391 009 1931
Skype: Stefan.Pernar

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=66264882-595be0
