Harry Chesley wrote:
On 11/4/2008 3:31 PM, Matt Mahoney wrote:
To answer your (modified) question, consciousness is detected by the
activation of a large number of features associated with living
humans. The more of these features that are activated, the greater the
tendency to apply to the target the ethical guidelines that we would
normally apply to humans. For example, monkeys are more like humans
than mice, which are more like humans than insects, which are more
like humans than programs. It does not depend on a single feature.
If I understand correctly, you're saying that there is no such thing as
objective ethics, and that our subjective ethics depend on how much we
identify/empathize with another creature. I grant this as a possibility,
in which case I guess my question should be viewed as subjective. That
is, how do I tell when something is sufficiently close to me that I need
to worry about the (subjective) ethics, without being able to see all of
its features directly?
Let me give an example: If I take a person and put them in a box, so
that I can see none of their features or know how similar they are to
me, I still consider it unethical to conduct certain experiments on
them. This is because I believe those important similar features are
there; I just can't see them.
Similarly, I believe at some point in AGI development, features similar
to my own mind will arise, but since they will be obscured by a very
different (and incomplete) implementation from my own, they may not be
obvious, even though I believe they are there.
So although you've changed the phrasing of the question to a degree, the
question remains.
(Note: You could argue that ethics, being subjective, are irrelevant,
and while that may be true, I'm too squeamish to take that view, which
would also lead to allowing arbitrary experiments on people.)
I can answer your questions about ethics from the perspective of someone
trying to build real AGI systems that are similar to human minds.
In principle, an AGI system could well be in need of ethical protection,
but it depends on the particular system.
At the moment, the design of AGI systems is such that there is no
immediate danger of creating an intelligence self-aware enough to have
anything resembling human consciousness. Simply put, present systems are
almost certainly not capable of feeling pain, and are not in need of
ethical protection. That statement would require quite a lengthy
justification, but I think it is a fairly safe conclusion.
In the future (perhaps the near future) it will be possible to create
systems that will have their own consciousness. However, even then
there will be quite drastic differences between different designs, and
we will have to proceed quite carefully.
For example, it will be possible to create systems that are
fundamentally designed to want to do certain things, like serving
humans, or like living in virtual worlds where they do not have contact
with the real world. Those systems should not be viewed as "enslaved"
because, in point of fact, they would want to do what they do: their
behavior is what makes them happy, and "liberating" them from this
behavior would make them unhappy. It would not be ethical to take such
a system and treat it as if it were a human slave that needed to be
liberated. This could never be true of a human being (no human being
would truly be happy as a slave), but it would be fundamentally true of
this hypothetical AGI system.
The possibility of creating systems that find fulfilment in ways quite
different from the ways humans do must be taken into account when the
ethical considerations are evaluated.
Stepping back for a moment, the entire question of ethics depends
crucially on your theory of how consciousness arises. There is no
consensus on this at the moment, but it is important to understand that
any judgement about ethics, either way, can only be made in the context
of an explicit statement of the theory of consciousness that lies behind
it.
Nobody could simply say, for example, "Let's assume that all AI systems
need ethical protection right now, as a default assumption", because
that kind of default has an *implicit* theory of consciousness behind it
that is pure guesswork, and is not supported by anything we understand
about consciousness at the moment.
For the record, I am treading carefully. As far as what happens in my
own lab is concerned, I will explicitly put in place measures to ensure
that AGI systems that do have a chance of reaching reasonably high
levels of consciousness will have the fullest possible ethical
protections. I cannot speak for anyone else, but that is my policy.
Richard Loosemore