Hi Philip,

> I was talking about ethics as being the top level goals because I was
> trying to think about AGI ethics in the context of the Novamente
> structure.
>
> I can imagine values being expressed as value statements:
>
> x is good/bad
> y is desirable/undesirable
>
> But these can be turned into goal statements I think:
>
> pursue/avoid/prevent x
> pursue/avoid/prevent y unless .....
>
> I agree with Eliezer that ethics should form the top layer of any goal
> cluster/hierarchy.

The thing I want to avoid is thinking of ethical goals
as logical statements that enforce ethics via logical
entailment. The fundamental behavior of minds is learning,
and logical reasoning emerges as a necessary helper for
learning. Thus I think of ethics as expressed in the
reinforcement values for learning.
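To make this concrete, here is a minimal sketch (in Python, with
illustrative names; nothing here is Novamente's actual design) of
what I mean: the ethical content lives entirely in the reward
signal, not in logical rules the agent must satisfy before acting.

    import random
    from collections import defaultdict

    def happiness_reward(happiness_reports):
        """Reward = mean reported human happiness, each in [-1, 1].
        Positive reports reinforce a behavior; negative ones suppress it."""
        return sum(happiness_reports) / len(happiness_reports)

    class Agent:
        def __init__(self, actions, lr=0.1, epsilon=0.1):
            self.q = defaultdict(float)  # learned value per (state, action)
            self.actions = actions
            self.lr = lr                 # learning rate
            self.epsilon = epsilon       # exploration probability

        def act(self, state):
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def learn(self, state, action, happiness_reports):
            # Ethics enters here, as reinforcement, rather than as a
            # logical constraint checked before acting.
            r = happiness_reward(happiness_reports)
            self.q[(state, action)] += self.lr * (r - self.q[(state, action)])

Notice that conflict among humans simply shows up as mixed reward,
which the agent learns to navigate, rather than as a contradiction
in a logical system.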

> > In my book I advocate using human happiness for learning values, where
> > behaviors are positively reinforced by human happiness and negatively
> > reinforced by human unhappiness. Of course there will be ambiguity
> > caused by conflicts between humans, and machine minds will learn
> > complex behaviors for dealing with such ambiguities (just as mothers
> > learn complex behaviors for dealing with conflicts among their
> > children). It is much more difficult to deal with conflict and
> ambiguity in a purely reasoning-based system.
>
> I think the happiness/unhappiness of all humans is one good
> stepping-off point for learning values.  But there may be some values
> that are not shared strongly as major motivators by all humans which
> might be important values.
>
> Human values are influenced by:
>
> -   our evolutionary history (there may not have been viable
>     evolutionary/incremental paths linking human value configurations
>     to potentially valuable new values)
>
> -   or human powers of reasoning may not be sufficient to make certain
>     values easy to generate (or to act on, thus limiting their reinforcement)
>
> Both these limitations make it hard for humans to have holistic values.

I agree about human limitations. But for me, any values
other than human happiness are likely to produce results
that humans are unhappy with. I guess for me, the ultimate
value is human happiness.

The bright side is that association with a super-intelligent
mind will educate and elevate humans and their values.

> As a bit of a side speculation, I think there are signs that democracy is
> declining as we move to the creation of mega states with massive
> populations and to global governance.  I think part of the reason is that
> these systems are so complicated that humans are being pushed
> beyond their capacity to cope.  And so there is a drift away from
> democracy at these high levels and a romantic attraction to local self
> management because we feel we can cope holistically at the local
> level.

Perhaps the problems of democracy are not so new. Winston
Churchill's famous line comes to mind: democracy is the worst
form of government, except for all the others that have been
tried. The twentieth century vividly illustrated that
substituting "good intentions" (e.g., the dictatorship of the
proletariat) for democracy turned out very badly.

> Getting back to the derivation of AGI ethics.  If we try to extract values
> from all of humanity we will have a skewed set with the skewing
> reflecting the limitations of humans.  Within the human population there
> would be minority values that might be useful additions to the average
> human set.  But then how do we decide which of the minority set are
> worth adding?  (By this question I'm not implying that we shouldn't try.
> I'm just wondering how you would do it.)
>
> In the final analysis we need a set of values that provide a good
> foundation upon which AGI values development can take place,
> bearing in mind AGIs will in time have the mental grunt to cope with
> bigger and more numerous ethical issues than we humans can
> manage.

As I said above, association with super-intelligent minds will
educate and elevate humans and their values. For example,
one huge cause of human unhappiness is humans' natural
xenophobia. A super-intelligent mind reinforced for human
happiness will learn behaviors to reduce human unhappiness,
including reducing xenophobia. There are lots of examples of
humans overcoming xenophobia, and I am confident that
super-intelligent minds will gently push humans in that
direction in order to promote happiness.

In general, I think a major purpose of super-intelligence
will be to find win-win ways to resolve conflicts between
the happiness of different humans.
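As a toy illustration of what "win-win" might mean computationally
(predict_happiness is an assumed model of each human's reaction, not
anyone's existing API): rather than maximizing total happiness, which
can sacrifice one person for another, prefer the action whose
worst-off human is best off.

    def choose_win_win(actions, humans, predict_happiness):
        """Pick the action maximizing the least-happy human's predicted
        happiness; break ties by total predicted happiness."""
        def score(action):
            outcomes = [predict_happiness(h, action) for h in humans]
            return (min(outcomes), sum(outcomes))
        return max(actions, key=score)

A maximin rule like this is only one possible choice; the interesting
learning problem is discovering actions that raise everyone's
happiness at once.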

Cheers,
Bill
