silky wrote:


On Tue, Jan 19, 2010 at 2:19 PM, Brent Meeker <meeke...@dslextreme.com> wrote:

    silky wrote:

        On Tue, Jan 19, 2010 at 1:02 PM, Brent Meeker <meeke...@dslextreme.com> wrote:

           silky wrote:

                On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker <meeke...@dslextreme.com> wrote:
                    silky wrote:

                        On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou <stath...@gmail.com> wrote:

                            2010/1/18 silky <michaelsli...@gmail.com>:


                                It would be my (naive) assumption that this
                                is arguably trivial to do. We can design a
                                program that has a desire to 'live', a desire
                                to find mates, and otherwise entertain itself.
                                In this way, with some other properties, we
                                can easily model simple pets.

                            Brent's reasons are valid,

                        Where it falls down for me is the idea that the
                        programmer should ever feel guilt. I don't see how I
                        could feel guilty for ending a program when I know
                        exactly how it will operate (what paths it will take),
                        even if I can't be completely sure of the specific
                        decisions (due to some randomisation or whatever).
                    It's not just randomisation, it's experience. If you
                    create an AI at a fairly high level (cat, dog, rat,
                    human) it will necessarily have the ability to learn,
                    and after interacting with its environment for a while
                    it will become a unique individual.  That's why you
                    would feel sad to "kill" it - all that experience and
                    knowledge that you don't know how to replace.  Of course
                    it might learn to be "evil" or at least annoying, which
                    would make you feel less guilty.
                Nevertheless, though, I know its exact environment,


            Not if it interacts with the world.  You must be thinking of a
            virtual cat AI in a virtual world - but even there the program,
            if at all realistic, is likely to be too complex for you to
            really comprehend.  Of course *in principle* you could spend
            years going over a few terabytes of data and you could
            understand, "Oh, that's why the AI cat did that on day 2118 at
            10:22:35: it was because of the interaction of memories of day
            1425 at 07:54:28 and ...(long string of stuff)."  But you'd be
            in almost the same position as the neuroscientist who
            understands what a clump of neurons does but can't get a
            holistic view of what the organism will do.

           Surely you've had the experience of trying to debug a large
           program you wrote some years ago that now seems to fail on some
           input you never tried before.  Now think how much harder that
           would be if it were an AI that had been learning and modifying
           itself for all those years.


        I don't disagree with you that it would be significantly
        complicated; I suppose my argument is only that, unlike with a
        real cat, I - the programmer - know all there is to know about
        this computer cat.


    But you *don't* know all there is to know about it.  You don't
    know what it has learned - and there's no practical way to find out.


Here we disagree. I don't see (not that I have experience in AI-programming specifically, mind you) how I can write a program and not have the results be deterministic. I wrote it; I know, in general, the type of things it will learn. I know, for example, that it won't learn how to drive a car. There are no cars in the environment, and it doesn't have the capabilities to invent a car, let alone the capabilities to drive it.

You seem to be assuming that your AI will only interact with a virtual world - which you will also create. I was assuming your AI would be in something like a robot cat or dog, which interacted with the world. I think there would be different ethical feelings about these two cases.

If you're suggesting that it will materialise these capabilities out of the general model that I've implemented for it, then clearly I can see this path as a possible one.

Well, it's certainly possible to write programs so complicated that the programmer doesn't foresee what they can do (I do it all the time :-) ).

Is there a fundamental misunderstanding on my part: that in most sufficiently advanced AI systems, not even the programmer has an *idea* of what the entity may learn?

That's certainly the case if it learns from interacting with the world, because the programmer can't practically analyze all those interactions and their effects - except maybe by running another copy of the program on recorded input.
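
To be concrete, here is a toy sketch in Python (the "agent", its observations and its rewards are all invented for the illustration, not a real AI): given the same seed and the same recorded inputs, a second copy ends up with exactly the same learned state as the first, even though nothing in the source code tells you what that state will be without running it.

import random

def run_agent(recorded_inputs, seed=0):
    """Tiny stand-in for a learning agent: it just accumulates
    preference scores for each observation it is fed."""
    rng = random.Random(seed)            # fixed seed -> fully deterministic run
    learned = {}                         # "what it has learned": observation -> score
    for obs, reward in recorded_inputs:
        noise = rng.uniform(-0.1, 0.1)   # stands in for any internal randomisation
        learned[obs] = learned.get(obs, 0.0) + reward + noise
    return learned

# The log of interactions the first copy actually had with its environment.
recorded = [("saw_toy", 1.0), ("loud_noise", -1.0), ("saw_toy", 0.5)]

original = run_agent(recorded, seed=42)
replay   = run_agent(recorded, seed=42)  # second copy, run on the recorded input
assert original == replay                # identical learned state, bit for bit
print(original)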


    [...]

            Suppose we could add an emotion that put a positive value on
            running backwards.  Would that add to their overall pleasure in
            life - being able to enjoy something in addition to all the
            other things they would have naturally enjoyed?  I'd say yes.
            In which case it would then be wrong to later remove that
            emotion and deny them the potential pleasure - assuming of
            course there are no contrary ethical considerations.


        So the only problem you see is if we ever add an emotion and
        then remove it. The problem doesn't lie in not adding it at
        all? Practically, the result is the same.


    No, because if we add it and then remove it after the emotion is
    experienced there will be a memory of it.  Unfortunately nature
    already plays this trick on us.  I can remember that I felt a
    strong emotion the first time I kissed a girl - but I can't
    experience it now.


I don't mean we do it to the same entity, I mean to subsequent entites. (cats or real life babies). If, before the baby experiences anything, I remove an emotion it never used, what difference does it make to the baby? The main problem is that it's not the same as other babies, but that's trivially resolved by performing the same removal on all babies. Same applies to cat-instances; if during one compilation I give it emotion, and then I later decide to delete the lines of code that allow this, and run the program again, have I infringed on it's rights? Does the program even have any rights when it's not running?
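
To make that scenario concrete, here's the sort of thing I have in mind - a toy Python sketch; the "emotion" flag and the reward numbers are invented for the example, not a claim about how a real AI would be built:

# One "compilation" of the pet defines its reward with an extra emotion term;
# for the next build, delete (or disable) these lines and run it again.
ENABLE_PLAY_JOY = True       # hypothetical flag standing in for "the lines of code"

def reward(event):
    r = 0.0
    if event == "fed":
        r += 1.0             # baseline drive present in every build
    if ENABLE_PLAY_JOY and event == "played":
        r += 2.0             # the added "emotion": extra pleasure from play
    return r

print(reward("played"))      # 2.0 in this build; 0.0 once the emotion lines are gone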

I don't think of rights as some abstract thing "out there". They are inventions of society saying: we, as a society, will protect you when you want to do these things that you have a *right* to do. We won't let others use force or coercion to prevent you. So then the question becomes what rights it is in society's interest to enforce for a computer program (probably none) or for an AI robot (maybe some).

From this viewpoint the application to babies and cats is straightforward. What are the consequences for society and what kind of society do we want to live in? Suppose all babies were born completely greedy and self-centered and it was proposed that this value system should be replaced by a balanced concern for self and others and...wait a minute, that IS what we do.


        If a baby is born without the "emotion" of feeling overworked,
        or adjusted so that it enjoys this overworked state, and we
        take advantage of that, are we wrong? If the AI we create is
        modelled on humans anyway, isn't it somewhat "cheating" to not
        re-implement everything, and instead only implement the parts
        that we selfishly consider useful?

        I suppose there is no real obligation to recreate an entire
        human consciousness (after all, if we did, we'd have no more
        control over it than we do other "real" humans), but it's
        interesting that we're able to pick and choose what to create,
        and yet, not able to remove from real children what we
        determine is inappropriate to make *them* more "effective"
        workers.


    We do try to remove emotions that we consider damaging, even
    though they may diminish the life of the subject.  After all,
    serial killers probably get a lot of pleasure from killing people.
    This is the plot of the play "Equus"; ever seen it?


No, I haven't. But we don't do this type of thing via genetic modification (which is what I'm suggesting; or at least whatever type of modification it would be called if we did it before the baby experiences anything; perhaps even if it were changes to the egg/sperm before they meet).

We haven't done it deliberately, but I think cultural selection has modified human nature just as much as selective breeding has modified dogs. So cultural selection has ensured that we have feelings of empathy as well as xenophobia and dozens of other emotions and values. For at least tens of thousands of years, interactions with other humans have been the most important single factor in reproductive success.


        The argument against that sort of thing would be that we are
        depriving the child of a different life; but would it ever
        know? What would it care?


    And who is competent to say which life is better?  We wouldn't
    hesitate to deprive a serial killer of his pleasure in killing,
    because societal concerns outweigh his pleasure.  But what about
    extreme feelings of physical aggressiveness?...we just draft the
    guy into the NFL as a linebacker.


So if society decides it's more appropriate to put "hard-working" into all children, I suppose it becomes acceptable. This seems wrong, somehow, but you're right: if not society (i.e. government), then who?

The only standard by which such a move could be judged wrong would be evolutionary failure - i.e. other societies that didn't do it survived while the "hard workers" didn't. But even that standard isn't what we'd call an ethical one. Incidentally, I try to distinguish between ethics and morals. I like to use "ethics" to mean rules for getting along with other people in society, while I use "morals" to mean the standards by which I judge myself. It's not a sharp division and it doesn't seem to correspond to common usage, where the two are used interchangeably, but I think it's a useful distinction whatever words you use.

Brent


    Brent

        And regardless, doesn't the program we've written deserve the
        same rights? Why not?


Brent
















-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-l...@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.

