I think maximization of negative entropy is a poor goal to have. Although
life perhaps has some intrinsic value, I think the primary thing we care
about is not life per se, but beings that are conscious and capable of
well-being. Under your idea, it seems the "interests" of a large tree
might count for as much as, if not much more than, those of a human being.

I also think being beneficial to humans is a bad criterion. We care about
humans because most humans have particularly rich and complex mental lives,
not because they are biologically human (e.g. have 23 pairs of chromosomes or
descend from a certain evolutionary lineage). You might scrape off a few of
my skin cells and keep them alive in a test tube, but the mere fact that
those cells are human and living doesn't mean we have any reason to promote
their lives. I would hold the same goes for living human organisms that are
brain-dead or otherwise have no capacity for consciousness (e.g. anencephalic
infants). Also, if there were aliens with as rich and complex a mental life
as humans, their interests should count for just as much as a normal
human's. To think otherwise would, I think, be on a par with racism;
philosophers call this speciesism.

The much more important issue at the moment, though, is the treatment of
non-human animals. Our current treatment of animals is truly appalling: we
often subject animals to nothing less than torture for trivial conveniences
such as cheaper prices, better-tasting meat, and frivolous or unnecessary
laboratory testing. While it will be difficult, I think we will probably
manage to avoid Strong AI that acts in ways most people would regard as
morally wrong. The bigger danger, I fear, is that the false moral beliefs
most people hold will end up creating Strong AI that acts in horrible ways
that people refuse to recognize as horrible. Right now, the widely held
beliefs about the moral status of animals are, I think, the most pernicious
beliefs people have.

I think the correct view is that species by itself does not matter; what
matters is the richness and complexity of the mental lives of conscious
beings. If you think it would be wrong to treat, in certain ways,
cognitively impaired humans whose psychological complexity is the same as,
say, a pig's, then you should also think it is just as wrong to treat a pig
in those ways. (Pigs have a pretty rich mental life, probably about the same
as that of a dog or a three-year-old with a language deficit.) I think it is
pretty clear that this simple principle would very strongly condemn most of
our current treatment of animals.

The biggest threat, in my mind, is that Strong AI will not only inherit the
moral beliefs of people who give little weight to the well-being of animals,
but also scale up the already vast mistreatment of animals to unprecedented
levels for trivial efficiency gains in the meat industry and/or animal
experimentation.

The best (though not quite perfect) published work in philosophy on this
issue at the moment is Jeff McMahan's book, *The Ethics of Killing: Problems
at the Margins of Life*. It also lays out the best and most nuanced (though
again not quite perfect) published theory of personal identity (obviously
very relevant to future issues with uploading, etc.). This theory of
personal identity lays the foundation for a theory of when things are good
or bad for creatures and informs his account of the morality of abortion,
euthanasia, and the treatment of animals.


I think people working on Friendly AI generally need a better background in
philosophy than what I have seen so far. I do understand, though, that this
is a difficult undertaking, not least because there is plenty of bad
philosophy out there and not much good systematic philosophical thinking on
these issues, even among professional philosophers.

So far my work in philosophy has been on the fundamental questions of ethics
and of reasons more generally. I think I've basically reached fairly
definitive answers on what reasons are and how an objective (enough)
morality (as well as reasons for actions, beliefs, desires, and emotions)
can be grounded in psychological facts. I've mostly been working with my
coauthor on presenting this work to other academic philosophers, but at some
point I would really like to present this, along with other work on more
applied moral theory, to those thinking about the question of Friendly AI.
There is, of course, a big step from saying what reasons we humans have to
saying what reasons we should program a Strong AI to have, but clearly the
former will greatly influence the latter. If you are interested, I have
tried to condense my view on the fundamental abstract questions of reasons
and ethics into a pamphlet, as well as a somewhat longer paper that will
hopefully be fairly accessible to non-philosophers:

 http://www.umich.edu/~jsku/reasons.html

John Ku
