Hi Philip,

On Tue, 11 Feb 2003, Philip Sutton wrote:

> Ben,
>
> If in the Novamente configuration the dedicated Ethics Unit is focussed
> on GoalNode refinement, it might be worth using another term to
> describe the whole ethical architecture/machinery which would involve
> aspects of most/all (??) Units plus perhaps even the Mind Operating
> System (??).
>
> Maybe we need to think about an 'ethics system' that is woven into the
> whole Novamente architecture and processes.
> . . .

I think discussing ethics in terms of goals leads to confusion.
As I described in an earlier post at:

  http://www.mail-archive.com/[email protected]/msg00390.html

reasoning must be grounded in learning and goals must be grounded
in values (i.e., the values used to reinforce behaviors in
reinforcement learning).

Reinforcement learning is fundamental to the way brains work, so
expressing ethics in terms of learning values builds those ethics
into brain behavior in a fundamental way.

Because reasoning emerges from learning, expressing ethics in terms
of the goals of a reasoning system can lead to confusion when the
goals derived from ethics turn out to be inconsistent with the goals
that emerge from learning values.

In my book I advocate using human happiness for learning values, where
behaviors are positively reinforced by human happiness and negatively
reinforced by human unhappiness. Of course there will be ambiguity
caused by conflicts between humans, and machine minds will learn
complex behaviors for dealing with such ambiguities (just as mothers
learn complex behaviors for dealing with conflicts among their
children). It is much more difficult to deal with conflict and
ambiguity in a purely reasoning based system.
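To make the distinction concrete, here is a toy sketch (my own illustration, not Novamente or anything from the book) of a tabular reinforcement learner whose only "value" is a scalar happiness signal. The action names and the happiness_reward function are hypothetical stand-ins; the point is that the learner's preferred behavior emerges from reinforcement, with no explicitly coded goal.

```python
# Toy sketch: behaviors reinforced by a happiness signal.
# All names here (ACTIONS, happiness_reward) are hypothetical.
import random

random.seed(0)

ACTIONS = ["help", "ignore", "interrupt"]

def happiness_reward(action):
    # Stand-in for observed human (un)happiness caused by the action.
    return {"help": 1.0, "ignore": 0.0, "interrupt": -1.0}[action]

q = {a: 0.0 for a in ACTIONS}  # learned value estimate per behavior
alpha = 0.1                    # learning rate
epsilon = 0.2                  # exploration rate

for step in range(1000):
    # Epsilon-greedy: mostly exploit the best-valued action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=q.get)
    r = happiness_reward(action)
    q[action] += alpha * (r - q[action])  # incremental value update

# The behavior that produced happiness dominates; no goal was ever stated.
print(max(ACTIONS, key=q.get))
```

In this caricature the "ethics" lives entirely in the reward signal, so there is nothing for derived goals to contradict; a goal-based system would have to state "help humans" explicitly and then hope it stays consistent with what learning reinforces.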

Cheers,
Bill
