silky wrote:
On Mon, Jan 18, 2010 at 6:57 PM, Brent Meeker <meeke...@dslextreme.com> wrote:
silky wrote:
On Mon, Jan 18, 2010 at 6:08 PM, Brent Meeker <meeke...@dslextreme.com> wrote:

silky wrote:

I'm not sure if this question is appropriate here; nevertheless, the
most direct way to find out is to ask it :)

Clearly, creating AI on a computer is a goal, and generally we'll try
to implement it with the same degree of "computational-ness" as a
human. But what would happen if we simply tried to re-implement the
consciousness of a cat, or some other "lesser", but still living,
entity?

It would be my (naive) assumption that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to
find mates, and a desire to otherwise entertain itself. In this way,
with some other properties, we can easily model simple pets.
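
Something like the following toy sketch is roughly what I have in
mind (the drive names, numbers, and update rule are purely
illustrative assumptions, not a serious model):

class ToyPet:
    """A toy 'pet' driven by a handful of hard-coded desires."""

    def __init__(self):
        # Illustrative drives only; each decays over time and is
        # topped up by acting on it.
        self.drives = {"live": 1.0, "find_mate": 0.5, "play": 0.5}
        self.alive = True

    def step(self):
        """Act on the most depleted drive, then let all drives decay."""
        if not self.alive:
            return None
        urgent = min(self.drives, key=self.drives.get)
        self.drives[urgent] = min(1.0, self.drives[urgent] + 0.3)
        for d in self.drives:
            self.drives[d] = max(0.0, self.drives[d] - 0.1)
        return urgent

pet = ToyPet()
for _ in range(5):
    print(pet.step(), pet.drives)

The point is only that the behavioural surface of a simple pet looks
cheap to imitate; what, if anything, we then owe such a program is
the question below.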

I then wonder, what moral obligations do we owe these programs? Is it
correct to turn them off? If so, why can't we do the same to a real
life cat? Is it because we think we've not modelled something
correctly, or is it because we feel it's acceptable as we've created
this program, and hence know all its laws? On that basis, does it mean
it's okay to "power off" a real life cat, if we are confident we know
all of its properties? Or is it not the knowing of the properties
that is critical, but the fact that we, specifically, have direct
control over it? Over its internals? (i.e. we can easily remove the
lines of code that give it the desire to 'live'.) But wouldn't, then,
the removal of that code be equivalent to killing it? If not, why?


I think the differences are

1) we generally cannot kill an animal without causing it some distress

Is that because our "off" function in real life isn't immediate?
Yes.

So does that mean you would not feel guilty turning off a real cat, if
it could be done immediately?

No, that's only one reason - read the others.

Or,
as per below, because it cannot get more pleasure?

No, that's why I made it separate.
2) as long as it is alive it has a capacity for pleasure (that's why
we euthanize pets when we think they can no longer enjoy any part of
life)

This is fair. But what if we were able to model this addition of
pleasure in the program? It's easy to increment a happiness counter
(happiness++), and thus decrease the desire to die.
I don't think it's as easy as you suppose.  Pleasure comes through
satisfying desires, and it has as many dimensions as there are kinds of
desires.  An animal that has very limited desires, e.g. eating and reproducing,
would not seem to us capable of much pleasure, and we would kill it without
much feeling of guilt - as when swatting a fly.
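
Just to make the contrast concrete (the names and numbers are made
up; this is only a sketch): a single counter versus one satisfaction
level per kind of desire.

# A single scalar, as in "happiness++":
happiness = 0
happiness += 1

# Versus one level per kind of desire, so "pleasure" has as many
# dimensions as there are desires:
satisfaction = {
    "food": 0.2,
    "company": 0.7,
    "play": 0.4,
    "curiosity": 0.1,
}

def satisfy(desire, amount=0.1):
    """Raising one dimension says nothing about the others."""
    satisfaction[desire] = min(1.0, satisfaction[desire] + amount)

satisfy("food")

An animal with only two or three entries in that dictionary is the
fly case; one with many entries starts to look more like the cat.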

Okay, so for you the moral responsibility comes in when we are
depriving the entity of pleasure AND because we can't turn it off
immediately (i.e. it will become aware it's being switched off, and
become upset).


Is this very simple variable enough to
make us care? Clearly not, but why not? Is it because the animal is
more conscious than we think? Is the answer that it's simply
impossible to model even a cat's consciousness completely?

If we model an animal that only exists to eat/live/reproduce, have we
created any moral responsibility? I don't think our moral
responsibility would start even if we added a very complicated
pleasure-based system into the model.
I think it would - just as we have ethical feelings toward dogs and tigers.

So assuming someone can create the appropriate model, and you can
"see" that you will be depriving it of pleasure and/or causing pain, you'd
start to feel guilty about switching the entity off? Probably it would
be as simple as having the cat/dog "whimper" as it senses that the
program is about to terminate (obviously, visual stimuli would help
as a deterrent), but then it must be asked: would the programmer feel
guilt? Or just an average user of the system, who doesn't know the
underlying programming model?


My personal opinion is that it
would be hard to *ever* feel guilty about ending something that you have
created so artificially (i.e. with every action directly predictable
by you, causally).
Even if the AI were strictly causal, its interaction with the environment
would very quickly make its actions unpredictable.  And I think you are
quite wrong about how you would feel.  People report feeling guilty about
not interacting with the Sony artificial pet.
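
A toy illustration of that point (all details invented): the agent
below is strictly deterministic, yet its action at each step depends
on its entire input history, so two copies fed input streams that
differ in a single sensor reading will typically diverge. Predicting
it in practice means replaying its whole environment.

import hashlib

class CausalAgent:
    """Strictly deterministic: each action is a pure function of the input history."""

    def __init__(self):
        self.history = b""

    def act(self, sensor_reading):
        self.history += sensor_reading.encode()
        # Deterministic, but sensitive to everything seen so far.
        digest = hashlib.sha256(self.history).digest()
        return ["eat", "sleep", "play", "hide"][digest[0] % 4]

a, b = CausalAgent(), CausalAgent()
print([a.act(x) for x in ["warm", "food", "noise", "dark"]])
print([b.act(x) for x in ["warm", "food", "noise!", "dark"]])  # one reading differs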

I've clarified my position above; does the programmer ever feel guilt,
or only the users?

The programmer too (though maybe less) because any reasonably high-level AI would have learned things and would no longer appear to just be running through a rote program - even to the programmer.
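
For example (a made-up miniature, not anyone's actual architecture),
even a tiny learner ends up with a policy stored in its accumulated
experience rather than in the lines the programmer wrote:

from collections import defaultdict
import random

class TinyLearner:
    """Comes to prefer whichever actions its own history has rewarded."""

    def __init__(self, actions=("approach", "avoid")):
        self.actions = actions
        self.value = defaultdict(float)  # learned state, not hand-written

    def choose(self):
        # Small random jitter so the learner still explores a little.
        return max(self.actions, key=lambda a: self.value[a] + 0.1 * random.random())

    def learn(self, action, reward):
        self.value[action] += 0.5 * (reward - self.value[action])

pet = TinyLearner()
for _ in range(20):
    action = pet.choose()
    pet.learn(action, reward=1.0 if action == "approach" else 0.0)
print(dict(pet.value))  # depends on the particular run, not just the source

Reading the source tells you the update rule, but not what the agent
will have come to value after weeks of interaction.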

Brent

But then, it may be asked: children are the same.
Humour aside, you can pretty much have a general idea of exactly what
they will do,
You must not have raised any children.

Sadly, I have not.


Brent

