On Tue, Jan 19, 2010 at 10:30 AM, Stathis Papaioannou
<stath...@gmail.com> wrote:
> 2010/1/19 silky <michaelsli...@gmail.com>:
> > On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou <stath...@gmail.com> 
> > wrote:
> > > 2010/1/18 silky <michaelsli...@gmail.com>:
> > > > It would be my (naive) assumption that this is arguably trivial to
> > > > do. We can design a program that has a desire to 'live', a desire to
> > > > find mates, and otherwise entertain itself. In this way, with some
> > > > other properties, we can easily model simple pets.
> > >
> > > Brent's reasons are valid,
> >
> > Where it falls down for me is the idea that the programmer should
> > ever feel guilt. I don't see how I could feel guilty for ending a
> > program when I know exactly how it will operate (what paths it will
> > take). Even if I can't be completely sure of its specific decisions
> > (due to some randomisation or whatever), I don't see how I could ever
> > think "No, you can't harm X". But what I find very interesting is
> > that even if I knew *exactly* how a cat operated, I could never kill
> > one.
>
> That's not being rational then, is it?

Exactly my point! I'm trying to discover why I wouldn't be so rational
there. Would you? Do you think that knowing all there is to know about
a cat is impractical to the point of being impossible *forever*, or do
you believe that once we do know, we will simply "end" them freely
when they get in our way? I think at some point we *will* know all
there is to know about them, and even then, we won't end them easily.
Why not? Is it the emotional projection that Brent suggests? Possibly.


> > > but I don't think making an artificial
> > > animal is as simple as you say.
> >
> > So is it a complexity issue? Is it that you only start to care about
> > the entity when it's significantly complex? But exactly how complex?
> > Or is it about the unknowingness: that the project is so large you
> > only work on a small part, and thus you don't fully know its
> > workings, and that is where the guilt comes in?
>
> Obviously intelligence and the ability to have feelings and desires
> have something to do with complexity. It would be easy enough to write
> a computer program that pleads with you to do something, but you
> wouldn't feel bad about disappointing it, because you know it lacks
> the full richness of human intelligence and consciousness.

Indeed; so part of the question is: what level of complexity
constitutes this? Is it simply any level that we don't understand? Or
is there a level that we *can* understand that still makes us feel
that way? I think it's more complicated than just any level we don't
understand (because clearly, I "understand" that if I twist your arm,
it will hurt you, and I know exactly why, but I still don't do it).
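
To make that concrete, here is roughly the sort of program I take
Stathis to mean. A toy Python sketch, purely for illustration; the
wording and structure are mine:

# A trivial "pleading" program: it begs not to be closed, but there
# is plainly nothing behind the words, no state corresponding to fear.
import sys

def plead():
    answer = input("May I keep running? (y/n) ")
    if answer.strip().lower() == "n":
        print("Please, no! I don't want to stop!")
        sys.exit(0)  # and it "dies", with nothing actually lost
    print("Thank you!")

if __name__ == "__main__":
    plead()

Nobody feels guilty ending this process, because we can see there is
nothing behind the words; the question is what has to be added before
that stops being obvious.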


> > > Henry Markram's group are presently
> > > trying to simulate a rat brain, and so far they have done 10,000
> > > neurons, which they are hopeful are behaving in a physiological way.
> > > This is at huge computational expense, and they have a long way to go
> > > before simulating a whole rat brain, and no guarantee that it will
> > > start behaving like a rat. If it does, then they are only a few years
> > > away from simulating a human; soon after that will come a superhuman
> > > AI, and soon after that it's we who will have to argue that we have
> > > feelings and are worth preserving.
> >
> > Indeed, this is something that concerns me as well. If we do create
> > an AI and force it to do our bidding, are we acting immorally? Or
> > perhaps we just withhold the desire for the program to do its "own
> > thing", but is that in itself wrong?
>
> If we created an AI that wanted to do our bidding, or that didn't
> care what it did, then it would not be wrong. Some people
> anthropomorphise and imagine the AI as themselves or people they
> know; and since they would not like being enslaved, they assume the
> AI wouldn't either. But this is false. Eliezer Yudkowsky has written
> a lot about AI, the ethical issues, and the necessity of making a
> "friendly AI" so that it doesn't destroy us, whether through
> intention or indifference.
>
> --
> Stathis Papaioannou
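
One more thought on the simulation point above: part of the
computational expense is that each "physiological" neuron is a set of
differential equations that has to be stepped at sub-millisecond
resolution. A toy illustration in Python (a leaky integrate-and-fire
neuron, far cruder than the detailed models Markram's group actually
uses, and with made-up parameter values):

# Toy leaky integrate-and-fire neuron. All parameter values below are
# illustrative, not physiological fits.
V_REST = -65.0    # resting membrane potential (mV)
V_THRESH = -50.0  # spike threshold (mV)
V_RESET = -70.0   # reset potential after a spike (mV)
TAU_M = 10.0      # membrane time constant (ms)
DT = 0.1          # integration time step (ms)

def simulate(input_current, steps=10000):
    """Step a single neuron forward; return its spike times in ms."""
    v = V_REST
    spikes = []
    for step in range(steps):
        # leak toward rest, plus injected current, scaled by dt/tau
        v += DT * (-(v - V_REST) + input_current) / TAU_M
        if v >= V_THRESH:
            spikes.append(step * DT)
            v = V_RESET
    return spikes

print(simulate(input_current=20.0)[:5])  # first few spike times (ms)

Even this toy needs 10,000 steps per neuron per simulated second;
scale that up to realistic channel dynamics, thousands of synapses per
cell, and 10,000 cells, and the expense follows.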

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20
