silky wrote:
On Mon, Jan 18, 2010 at 6:08 PM, Brent Meeker <meeke...@dslextreme.com> wrote:
silky wrote:
I'm not sure if this question is appropriate here, nevertheless, the
most direct way to find out is to ask it :)

Clearly, creating AI on a computer is a goal, and generally we'll try
to implement it to the same degree of "computational-ness" as a human.
But what would happen if we simply tried to re-implement the
consciousness of a cat, or some "lesser", but still living, conscious
entity?

It would be my (naive) assumption that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to
find mates, and a desire to otherwise entertain itself. In this way, with some
other properties, we can easily model simple pets.
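Purely as an illustration of what I mean, a toy sketch might look like
the following (every name and number here is invented for the sake of
argument; it's a sketch of the idea, not a serious model of
consciousness):

    import random

    class ToyPet:
        """A toy 'pet' with a few named desires; illustrative only."""
        def __init__(self):
            # Desire strengths in [0, 1]; starting values are arbitrary.
            self.desires = {"live": 1.0, "find_mate": 0.5, "play": 0.7}
            self.happiness = 0.0
            self.alive = True

        def step(self):
            # One tick of the pet's 'life': try to act on the strongest desire.
            if not self.alive:
                return
            strongest = max(self.desires, key=self.desires.get)
            if random.random() < 0.5:   # the environment sometimes cooperates
                self.happiness += 0.1   # satisfying a desire yields pleasure
                self.desires[strongest] = max(0.0, self.desires[strongest] - 0.2)
            else:
                self.desires[strongest] = min(1.0, self.desires[strongest] + 0.1)

        def power_off(self):
            # The morally loaded operation under discussion.
            self.alive = False

    pet = ToyPet()
    for _ in range(100):
        pet.step()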

I then wonder, what moral obligations do we owe these programs? Is it
correct to turn them off? If so, why can't we do the same to a real
life cat? Is it because we think we've not modelled something
correctly, or is it because we feel it's acceptable as we've created
this program, and hence know all its laws? On that basis, does it mean
it's okay to "power off" a real life cat, if we are confident we know
all of its properties? Or is it not the knowing of the properties
that is critical, but the fact that we, specifically, have direct
control over it? Over its internals? (i.e. we can easily remove the
lines of code that give it the desire to 'live'). But wouldn't, then,
the removal of that code be equivalent to killing it? If not, why?

I think the differences are

1) we generally cannot kill an animal without causing it some distress

Is that because our "off" function in real life isn't immediate?
Yes.

Or,
as per below, because it cannot get more pleasure?

No, that's why I made it separate.

2) as
long as it is alive it has a capacity for pleasure (that's why we euthanize
pets when we think they can no longer enjoy any part of life)

This is fair. But what if we were able to model this addition of
pleasure in the program? It's easy to increment a happiness variable
(happiness++), and thus the desire to die decreases.
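Concretely, something like this crude sketch (the variable names are
invented, and the mechanism is deliberately simplistic):

    happiness = 0
    desire_to_die = 10

    def satisfy_a_desire():
        # The 'happiness++' above: each satisfied desire
        # bumps pleasure and weakens the desire to die.
        global happiness, desire_to_die
        happiness += 1
        desire_to_die = max(0, desire_to_die - 1)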

I don't think it's as easy as you suppose. Pleasure comes through satisfying desires, and it has as many dimensions as there are kinds of desires. An animal that has very limited desires, e.g. to eat and reproduce, would not seem to us capable of much pleasure, and we would kill it without much feeling of guilt - as in swatting a fly.
Is this very simple variable enough to
make us care? Clearly not, but why not? Is it because the animal is
more conscious than we think? Is the answer that it's simply
impossible to model even a cat's consciousness completely?

If we model an animal that only exists to eat/live/reproduce, have we
created any moral responsibility? I don't think our moral
responsibility would begin even if we added a very complicated
pleasure-based system to the model.
I think it would - just as we have ethical feelings toward dogs and tigers.

My personal opinion is that it
would be hard to *ever* feel guilty about ending something that you have
created so artificially (i.e. with every action directly predictable
by you, causally).

Even if the AI were strictly causal, its interaction with the environment would very quickly make its actions unpredictable. And I think you are quite wrong about how you would feel. People report feeling guilty about not interacting with the Sony artificial pet.

But then, it may be asked: children are the same.
Humour aside, you can have a pretty good general idea of what
they will do,

You must not have raised any children.

Brent

and we created them, so why do we feel so responsible?
(Clearly, an easy answer is that it's chemical.)


3) if we can create an artificial pet (and Sony did), we can turn it off and
turn it back on.

Let's assume, for the sake of argument, that each instance of the
program is one unique pet, and it will never be re-created or saved.



4) if a pet, artificial or otherwise, has a capacity for pleasure and
suffering, we do have an ethical responsibility toward it.

Brent

