I've just read Ben's article "Thoughts on AI Morality", May 2002:
www.goertzel.org/dynapsyc/2002/AIMorality.htm
First a summary and then my comments.
Ben argues that AGIs should desirably have a set of ethics that, among other things, motivates their compassionate treatment of life, including humans.
He feels that an ethical structure should have more general concepts at the top of the hierarchy and less general ones at lower levels.
He feels that ethics should be imparted via a combination of hard-wiring and teaching.
He also feels that moral goals can be categorised into those that are easy to hard-wire and those that are hard to hard-wire.
In general he seems to believe that the ethical goals high in the hierarchy are candidates for hard-wiring, provided the hard-wiring is not too difficult. The other ethical goals (those lower in the hierarchy, or hard to hard-wire) should be imparted through training.
He feels that there is little utility in trying to hard-wire ethical goals that are highly specific, because he thinks they will be more likely to be removed or substantially morphed during an AGI's development through self-reprogramming.
(Ben, I hope I've got the above more or less right.)
The idea of putting the more general concepts at the top of the ethical hierarchy makes great sense.
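To make the idea concrete, here is a minimal sketch of such a hierarchy - purely my own illustration, not a design from Ben's article, and the particular goal names are invented examples:

```python
# Illustrative sketch only: an ethical-goal hierarchy with the most
# general concepts at the root and more specific goals beneath them.
# Goal names and the hard_wired flag are hypothetical examples.

class EthicalGoal:
    def __init__(self, name, hard_wired=False):
        self.name = name
        self.hard_wired = hard_wired  # imparted by hard-wiring vs. training
        self.children = []            # more specific sub-goals

    def add(self, child):
        self.children.append(child)
        return child

    def depth_first(self, level=0):
        # Walk the hierarchy from general (level 0) to specific.
        yield level, self
        for child in self.children:
            yield from child.depth_first(level + 1)

# General goals near the top are candidates for hard-wiring;
# specific goals lower down would be imparted through training.
root = EthicalGoal("preserve life", hard_wired=True)
happiness = root.add(EthicalGoal("value the happiness of sentient beings",
                                 hard_wired=True))
happiness.add(EthicalGoal("be honest with people"))  # specific: trained
root.add(EthicalGoal("avoid physical harm to humans"))

for level, goal in root.depth_first():
    status = "hard-wired" if goal.hard_wired else "trained"
    print("  " * level + f"{goal.name} ({status})")
```

The point of the structure is simply that generality decreases with depth, so a rule like "hard-wire the top levels, train the rest" has somewhere definite to attach.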
But I think that there might be ways to deal with some of the difficulties of hard-wiring the "hard to hard-wire" values.
I think that if an ethical goal is general and highly important then we should make sure we find ways to hard-wire it - ie we shouldn't launch AGIs into the world until we have worked out how to hard-wire the really critical ethical goals/constraints.
Ben lists the following as examples of values that are difficult to hard-wire:

> - preserve life,
> - make other intelligent or living systems happy,
> - value human happiness and existence
>
> For an AI, defining and recognizing life and happiness is a lot
> harder than defining and recognizing my own health, diversity, or
> new patterns.
I think it is possible to lodge very abstract concepts in an entity, and to use hard-wiring to help the AGI rapidly and easily recognise examples of the 'hard' abstract concepts - thus giving some life to each abstract concept.
Making sure that the AGI has the perceptual mechanisms to know and experience the critical early examples would be a key part of the values-development process. Training and self-directed learning would then add many, many more examples to the core abstract concept, allowing it to become more and more general over time, informed by the extensive and subtle experiential database that the AGI builds up.
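The process just described - hard-wired seed examples giving initial life to an abstract concept, with training adding many more examples over time - could be sketched roughly as follows. This is entirely my own toy illustration: the feature-set representation and the similarity rule are invented for the example, not taken from Ben's article or any real AGI design.

```python
# Illustrative sketch: an abstract concept seeded with hard-wired
# examples, then broadened with learned examples, recognising new
# cases by similarity to its nearest stored example.

class AbstractConcept:
    def __init__(self, name, seed_examples):
        self.name = name
        # Hard-wired seed examples give the concept its initial "life".
        self.examples = list(seed_examples)

    def learn(self, example):
        # Training/self-directed learning adds more examples,
        # letting the concept become more general over time.
        self.examples.append(example)

    @staticmethod
    def similarity(a, b):
        # Toy similarity: fraction of shared features (sets of strings).
        return len(a & b) / len(a | b)

    def recognises(self, candidate, threshold=0.5):
        # A candidate counts as an instance if it is close enough
        # to any stored example, hard-wired or learned.
        return any(self.similarity(candidate, e) >= threshold
                   for e in self.examples)

alive = AbstractConcept("life", seed_examples=[
    {"moves", "grows", "reproduces"},  # hard-wired seed example
])
# Close to the seed, so recognised straight away:
print(alive.recognises({"grows", "reproduces", "photosynthesises"}))  # True
# Experience broadens the concept beyond what was hard-wired:
alive.learn({"metabolises", "responds to stimuli"})
print(alive.recognises({"metabolises", "responds to stimuli", "moves"}))  # True
```

The design choice this illustrates is that the hard-wired part is only the seed set plus the recognition machinery; generality then comes from the growing example base, not from ever spelling out a full definition of "life".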
Also, the fact that ethics lower in the general/specific hierarchy are less likely to remain after AGIs self-reprogram shouldn't in itself be an argument against hard-wiring per se. Early in an AGI's development, the hard-wired values will affect the learning/development process, and important emergent properties triggered by the early hard-wired values may persist even after the original values are culled or morphed by the AGI. This approach might be more effective in some circumstances than the teaching/learning approach alone.
Cheers, Philip
Philip Sutton
Director, Strategy
Green Innovations Inc.
195 Wingrove Street
Fairfield (Melbourne) VIC 3078
AUSTRALIA
Tel & fax: +61 3 9486-4799
Email: <[EMAIL PROTECTED]>
http://www.green-innovations.asn.au/
Victorian Registered Association Number:
A0026828M
