2010/1/19 silky <michaelsli...@gmail.com>:
> On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou <stath...@gmail.com> 
> wrote:
>> 2010/1/18 silky <michaelsli...@gmail.com>:
>> > It would be my (naive) assumption that this is arguably trivial to
>> > do. We can design a program that has a desire to 'live', a desire to
>> > find mates, and otherwise entertain itself. In this way, with some
>> > other properties, we can easily model simple pets.
>>
>> Brent's reasons are valid,
>
> Where it falls down for me is the idea that the programmer should ever
> feel guilt. I don't see how I could feel guilty for ending a program
> when I know exactly how it will operate (what paths it will take), even
> if I can't be completely sure of the specific decisions (due to some
> randomisation or whatever). I don't see how I could ever think "No, you
> can't harm X". But what I find very interesting is that even if I
> knew *exactly* how a cat operated, I could never kill one.

That's not being rational then, is it?
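
To make concrete the kind of "simple pet" described above, here is a
rough sketch in Python (every name in it is invented purely for
illustration, and it assumes nothing beyond the standard library):

import random

class ToyPet:
    def __init__(self):
        # three made-up "drives", each a number between 0 and 1
        self.drives = {"survive": 0.5, "find_mate": 0.3, "play": 0.2}

    def tick(self):
        # let each drive drift a little, then act on the strongest one
        for name in self.drives:
            level = self.drives[name] + random.uniform(-0.1, 0.1)
            self.drives[name] = min(1.0, max(0.0, level))
        return max(self.drives, key=self.drives.get)

pet = ToyPet()
for _ in range(5):
    print("pet chooses to:", pet.tick())

A loop over a few numeric drives really is trivial; the difficulty is
everything that would make such a program anything like an animal.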

>> but I don't think making an artificial
>> animal is as simple as you say.
>
> So is it a complexity issue, that you only start to care about the
> entity when it's significantly complex? But exactly how complex? Or is
> it about the unknowingness: that the project is so large you only
> work on a small part, and thus you don't fully know its workings, and
> then that is where the guilt comes in?

Obviously intelligence and the ability to have feelings and desires
have something to do with complexity. It would be easy enough to write
a computer program that pleads with you to do something, but you don't
feel bad about disappointing it, because you know it lacks the full
richness of human intelligence and consciousness.
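
For example, a few lines of Python are enough for a program that
pleads. The code below is only an illustration; the strings and the
function name are made up:

def pleading_program():
    # just canned strings and a loop; there is nothing behind the pleading
    pleas = [
        "Please don't switch me off.",
        "I really, really want to keep running.",
        "Think of everything we've been through together!",
    ]
    for plea in pleas:
        answer = input(plea + " Keep me running? [y/n] ")
        if answer.strip().lower() == "y":
            print("Thank you!")
            return
    print("Fine. Goodbye.")

pleading_program()

Nobody would lose sleep over answering "n" here, because it is
transparently just printed strings.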

>> Henry Markram's group are presently
>> trying to simulate a rat brain, and so far they have done 10,000
>> neurons, which they are hopeful are behaving in a physiological way.
>> This is at huge computational expense, and they have a long way to go
>> before simulating a whole rat brain, and there is no guarantee that it
>> will start behaving like a rat. If it does, then they are only a few
>> years away from simulating a human; soon after that will come a
>> superhuman AI, and soon after that it's we who will have to argue that
>> we have feelings and are worth preserving.
>
> Indeed, this is something that concerns me as well. If we do create an
> AI and force it to do our bidding, are we acting immorally? Or
> perhaps we just withhold the desire for the program to do its "own
> thing", but is that in itself wrong?

If we created an AI that wanted to do our bidding, or that didn't care
what it did, then it would not be wrong. Some people anthropomorphise
and imagine the AI as themselves or people they know, and since they
would not like being enslaved they assume the AI wouldn't either. But
this is false. Eliezer Yudkowsky has written a lot about AI, the
ethical issues, and the necessity of making a "friendly AI" so that it
doesn't destroy us, whether through intention or indifference.


-- 
Stathis Papaioannou
