Bill,
> I think discussing ethics in terms of goals leads to confusion.....
> goals must be grounded in values (i.e., the values used to reinforce
> behaviors in reinforcement learning).
>
> Reinforcement learning is fundamental to the way brains work, so
> expressing ethics in terms of learning values builds those ethics in
> to brain behavior in a fundamental way.
>
> Because reasoning emerges from learning, expressing ethics in terms of
> the goals of a reasoning system can lead to confusion, when the goals
> derived from ethics turn out to be inconsistent with the goals that
> emerge from learning values.
I was talking about ethics as being the top-level goals because I was
trying to think about AGI ethics in the context of the Novamente
structure.
I can imagine values being expressed as value statements:
x is good/bad
y is desirable/undesirable
But these can, I think, be turned into goal statements:
pursue/avoid/prevent x
pursue/avoid/prevent y unless .....
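That translation could be sketched as a simple mapping (all names here are hypothetical illustrations, not part of Novamente or anyone's actual system):

```python
# Hypothetical sketch: turning value statements ("x is good/bad",
# "y is desirable/undesirable") into goal statements ("pursue x",
# "avoid/prevent y"), as described above.

def value_to_goal(item, valence):
    """Map a value judgement about an item to a goal directive."""
    if valence in ("good", "desirable"):
        return f"pursue {item}"
    elif valence in ("bad", "undesirable"):
        return f"avoid/prevent {item}"
    raise ValueError(f"unknown valence: {valence}")

values = [("honesty", "good"), ("suffering", "bad")]
goals = [value_to_goal(item, v) for item, v in values]
# goals == ["pursue honesty", "avoid/prevent suffering"]
```

The conditional "unless ..." clauses would of course need much richer machinery than this; the point is only that value judgements and goal directives seem interconvertible at this level.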
I agree with Eliezer that ethics should form the top layer of any goal
cluster/hierarchy.
> In my book I advocate using human happiness for learning values, where
> behaviors are positively reinforced by human happiness and negatively
> reinforced by human unhappiness. Of course there will be ambiguity
> caused by conflicts between humans, and machine minds will learn
> complex behaviors for dealing with such ambiguities (just as mothers
> learn complex behaviors for dealing with conflicts among their
> children). It is much more difficult to deal with conflict and
> ambiguity in a purely reasoning based system.
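To make sure I understand your proposal: I read it as something like the following toy reward scheme (my own illustration, not from your book), where a behavior's learned value is updated by the net happiness change it produces across all affected humans, and conflicts between humans show up as reward components of mixed sign:

```python
# Toy illustration of happiness-based reinforcement (hypothetical, not
# from Bill's book): each affected human contributes a happiness delta,
# and a behavior's learned value moves with the net signal. Conflicts
# between humans partially cancel rather than producing a clean reward.

def reinforce(value, happiness_deltas, learning_rate=0.1):
    """Update a behavior's learned value from per-human happiness changes."""
    net_reward = sum(happiness_deltas)  # conflicting humans cancel out
    return value + learning_rate * net_reward

v = 0.0
v = reinforce(v, [+1.0, +0.5, -0.3])  # mostly positive -> value rises
```

Even in this toy form, one can see the ambiguity you mention: a behavior that delights one group and upsets another gets a weak, noisy signal, and the learner must discover more complex behaviors to resolve the conflict.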
I think the happiness/unhappiness of all humans is one good stepping-off
point for learning values. But there may be some important values that
are not shared strongly as major motivators by all humans.
Human values are influenced by:
- our evolutionary history (there may not have been viable
evolutionary/incremental paths linking human value configurations to
potentially valuable new values)
- the limits of human reasoning, which may not be sufficient to make certain
values easy to generate (or to act on, thus limiting their reinforcement)
Both these limitations make it hard for humans to have holistic values.
As a bit of a side speculation, I think there are signs that democracy is
declining as we move to the creation of mega-states with massive
populations and to global governance. I think part of the reason is that
these systems are so complicated that humans are being pushed
beyond their capacity to cope. And so there is a drift away from
democracy at these high levels, and a romantic attraction to local self-
management, because we feel we can cope holistically at the local
level.
Getting back to the derivation of AGI ethics: if we try to extract values
from all of humanity, we will have a skewed set, with the skewing
reflecting the limitations of humans. Within the human population there
would be minority values that might be useful additions to the average
human set. But then how do we decide which of the minority values are
worth adding? (By this question I'm not implying that we shouldn't try.
I'm just wondering how you would do it.)
In the final analysis we need a set of values that provide a good
foundation upon which AGI value development can take place,
bearing in mind that AGIs will in time have the mental grunt to cope
with bigger and more numerous ethical issues than we humans can
manage.
Cheers, Philip