On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker <meeke...@dslextreme.com> wrote:
> silky wrote:
>>
>> On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou <stath...@gmail.com>
>> wrote:
>>
>>>
>>> 2010/1/18 silky <michaelsli...@gmail.com>:
>>>
>>>>
>>>> It would be my (naive) assumption that this is arguably trivial to
>>>> do. We can design a program that has a desire to 'live', a desire
>>>> to find mates, and otherwise entertain itself. In this way, with
>>>> some other properties, we can easily model simple pets.
>>>>
>>>
>>> Brent's reasons are valid,
>>>
>>
>> Where it falls down for me is the idea that the programmer should
>> ever feel guilt. I don't see how I could feel guilty for ending a
>> program when I know exactly how it will operate (what paths it will
>> take), even if I can't be completely sure of the specific decisions
>> (due to some randomisation or whatever).
>
> It's not just randomisation, it's experience.  If you create an AI at a
> fairly high level (cat, dog, rat, human) it will necessarily have the
> ability to learn, and after interacting with its environment for a while
> it will become a unique individual.  That's why you would feel sad to
> "kill" it - all that experience and knowledge that you don't know how to
> replace.  Of course it might learn to be "evil" or at least annoying,
> which would make you feel less guilty.

Nevertheless, I know its exact environment, so I can recreate the
things that it learned (I can recreate it all; it's all deterministic:
I programmed it). The only thing I can't recreate is the randomness,
assuming I introduced any (but as we know, I can recreate even that,
because I'd just use the same "seed" state; unless the source of
randomness is "true").
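
To make this concrete, here is a minimal sketch (in Python; every name
and number in it is mine, purely for illustration) of both points at
once: a "pet" driven by a couple of hard-coded drives, in the spirit
of the "simple pets" claim above, plus a seeded PRNG, so that
re-running it against the same stimuli reproduces it exactly,
accumulated "experience" and all:

import random

class Pet:
    # A toy "pet" with hard-coded drives (hunger, boredom).
    # Entirely illustrative, not a real implementation of anything.
    def __init__(self, seed):
        self.rng = random.Random(seed)  # fixed seed => reproducible run
        self.hunger = 0.5
        self.boredom = 0.5
        self.memory = []  # "experience" accumulated from the environment

    def act(self, stimulus):
        # Drives plus a pseudo-random impulse decide the action.
        impulse = self.rng.random()
        if self.hunger > 0.7:
            action = "seek food"
        elif self.boredom > 0.7 and impulse > 0.5:
            action = "play"
        else:
            action = "rest"
        self.memory.append((stimulus, action))  # learning as logging
        self.hunger = min(1.0, self.hunger + 0.1)
        self.boredom = min(1.0, self.boredom + 0.05 * impulse)
        return action

# Two pets built from the same seed and fed the same stimuli end up
# identical - memory and all - which is the recreation point:
a, b = Pet(seed=42), Pet(seed=42)
stimuli = ["ball", "noise", "food", "dark"]
assert [a.act(s) for s in stimuli] == [b.act(s) for s in stimuli]
assert a.memory == b.memory

The only escape hatch is the caveat I mentioned: if the impulse came
from a true hardware randomness source rather than a seeded generator,
the two runs could diverge.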



>> I don't see how I could ever think "No, you
>> can't harm X". But what I find very interesting is that even if I
>> knew *exactly* how a cat operated, I could never kill one.
>>
>>
>>
>>>
>>> but I don't think making an artificial
>>> animal is as simple as you say.
>>>
>>
>> So is it a complexity issue? That you only start to care about the
>> entity once it's significantly complex? But exactly how complex? Or is
>> it about the unknowingness: that the project is so large you only
>> work on a small part, and thus you don't fully know its workings, and
>> that is where the guilt comes in.
>>
>
> I think unknowingness plays a big part, but it's because of our
> experience with people and animals: we project our own experience of
> consciousness onto them, so that when we see them behave in certain
> ways we impute an inner life to them that includes pleasure and
> suffering.

Yes, I agree. So does that mean that, over time, if we continue using
these computer-based cats, we would become attached to them (i.e. your
Sony toys example)?


> > Indeed, this is something that concerns me as well. If we do create an
> > AI and force it to do our bidding, are we acting immorally? Or perhaps
> > we just withhold the desire for the program to do its "own thing", but
> > is that in itself wrong?
> >
>
> I don't think so.  We don't worry about the internet's feelings, or the
> air traffic control system.  John McCarthy has written essays on this
> subject and he cautions against creating AI with human-like emotions
> precisely because of the ethical implications.  But that means we need
> to understand consciousness and emotions lest we accidentally do
> something unethical.

Fair enough. But by the same token, what if we discovered a way to
remove emotions from real-born children? Would it be wrong to do that?
Is "emotion" an inherent property that we should never be allowed to
remove, once created?


> Brent

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20


