silky wrote:
On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker <meeke...@dslextreme.com> wrote:
silky wrote:
On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou <stath...@gmail.com> wrote:

2010/1/18 silky <michaelsli...@gmail.com>:

It would be my (naive) assumption that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to
find mates, and to otherwise entertain itself. In this way, with some
other properties, we can easily model simple pets.

Brent's reasons are valid,

Where it falls down for me is the idea that the programmer should ever feel
guilt. I don't see how I could feel guilty for ending a program when I
know exactly how it will operate (what paths it will take), even if I
can't be completely sure of the specific decisions (due to some
randomisation or whatever).
It's not just randomisation, it's experience.  If you create an AI at a
fairly high level (cat, dog, rat, human) it will necessarily have the
ability to learn, and after interacting with its environment for a while it
will become a unique individual.  That's why you would feel sad to "kill" it
- all that experience and knowledge that you don't know how to replace.  Of
course it might learn to be "evil" or at least annoying, which would make
you feel less guilty.
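
A toy sketch of that point in Python (the learner, its "update rule", and the sample experiences are all invented purely for illustration): two agents that start from identical code and identical state behave differently once their interaction histories differ.

class TinyLearner:
    """A minimal learning 'pet': its entire state is a tally of what it has experienced."""

    def __init__(self):
        self.memory = {}  # event -> how often it has been encountered

    def experience(self, event):
        # "Learning" here is nothing more than accumulating history.
        self.memory[event] = self.memory.get(event, 0) + 1

    def favourite(self):
        # Behaviour is determined by accumulated experience, not by the code alone.
        return max(self.memory, key=self.memory.get) if self.memory else None


# Two identical pets exposed to different environments diverge.
a, b = TinyLearner(), TinyLearner()
for event in ["ball", "ball", "string"]:
    a.experience(event)
for event in ["string", "nap", "nap"]:
    b.experience(event)

print(a.favourite())  # ball
print(b.favourite())  # nap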

Nevertheless, I know its exact environment,

Not if it interacts with the world. You must be thinking of a virtual cat AI in a virtual world - but even there the program, if at all realistic, is likely to be too complex for you to really comprehend. Of course *in principle* you could spend years going over a few terabytes of data and you could understand, "Oh, that's why the AI cat did that on day 2118 at 10:22:35 - it was because of the interaction of memories of day 1425 at 07:54:28 and ...(long string of stuff)." But you'd be in almost the same position as the neuroscientist who understands what a clump of neurons does but can't get a holistic view of what the organism will do.

Surely you've had the experience of trying to debug a large program you wrote some years ago that now seems to fail on some input you never tried before. Now think how much harder that would be if it were an AI that had been learning and modifying itself for all those years.
so I can recreate
the things that it learned (I can recreate it all; it's all
deterministic: I programmed it). The only thing I can't recreate is
the randomness, assuming I introduced that (but as we know, I can
recreate that anyway, because I'd just use the same "seed" state;
unless the source of randomness is "true").
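
For what it's worth, here is a minimal sketch of that seeding point in Python (assuming the randomness comes from an ordinary pseudo-random generator rather than a true hardware source; the simulate() function and its "decisions" are just made up for the example): re-running with the same seed and the same inputs reproduces the run exactly.

import random

def simulate(seed, steps=5):
    # Re-running with the same seed (and the same inputs) yields the same "decisions".
    rng = random.Random(seed)
    return [rng.choice(["eat", "sleep", "play", "explore"]) for _ in range(steps)]

run1 = simulate(seed=42)
run2 = simulate(seed=42)
assert run1 == run2  # fully reproducible: same seed, same trajectory
print(run1)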



I don't see how I could ever think "No, you
can't harm X". But what I find very interesting is that even if I
knew *exactly* how a cat operated, I could never kill one.



but I don't think making an artificial
animal is as simple as you say.

So is it a complexity issue? That you only start to care about the
entity when it's significantly complex? But exactly how complex? Or is
it about the unknowingness: that the project is so large you only
work on a small part, and thus you don't fully know its workings, and
that is where the guilt comes in?

I think unknowingness plays a big part, but it's because of our experience
with people and animals: we project our own experience of consciousness onto
them, so that when we see them behave in certain ways we impute to them an
inner life that includes pleasure and suffering.

Yes, I agree. So does that mean that, over time, if we continue using
these computer-based cats, we would become attached to them (i.e. your
Sony toys example)?

Hell, I even become attached to my motorcycles.

Indeed, this is something that concerns me as well. If we do create an
AI and force it to do our bidding, are we acting immorally? Or
perhaps we just withhold the desire for the program to do its "own
thing" - but is that in itself wrong?

I don't think so.  We don't worry about the internet's feelings, or the air
traffic control system's.  John McCarthy has written essays on this subject,
and he cautions against creating AI with human-like emotions precisely
because of the ethical implications.  But that means we need to understand
consciousness and emotions, lest we accidentally do something unethical.

Fair enough. But by the same token, what if we discovered a way to
remove emotions from real-born children? Would it be wrong to do that?
Is "emotion" an inherent property that we should never be allowed to
remove, once created?

Certainly it would be fruitless to remove all emotions, because that would be the same as removing all discrimination and motivation - they'd be as dumb as tape recorders. So I suppose you're asking about removing, or providing, specific emotions. Removing empathy, for example, would certainly be a bad idea - that's how you get sociopathic killers. Suppose we could remove all selfishness and create an altruistic being who only wanted to help and serve others (as some religions hold up as an ideal). I think you can immediately see that would be a disaster.

Suppose we could add an emotion that put a positive value on running backwards. Would that add to their overall pleasure in life - being able to enjoy something in addition to all the other things they would have naturally enjoyed? I'd say yes. In which case it would then be wrong to later remove that emotion and deny them the potential pleasure - assuming, of course, there are no contrary ethical considerations.
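
A back-of-the-envelope illustration of that last point, assuming for the sake of the sketch that pleasures simply sum and that there are no contrary ethical considerations (both crude assumptions, and the numbers are arbitrary): adding an extra positive value can only raise, never lower, the total enjoyment available.

# Hypothetical "pleasure values" for activities the being already enjoys.
base_pleasures = {"eating": 5, "company": 8, "sunshine": 3}

def total_pleasure(pleasures):
    # Crude model: overall enjoyment is just the sum of what is available.
    return sum(pleasures.values())

before = total_pleasure(base_pleasures)

# Add the invented emotion: a positive value on running backwards.
with_new_emotion = dict(base_pleasures, running_backwards=2)
after = total_pleasure(with_new_emotion)

print(before, after)    # 16 18
assert after >= before  # an added positive value cannot lower the total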

Brent
