On Tue, Jan 19, 2010 at 3:02 PM, Brent Meeker <meeke...@dslextreme.com> wrote:

> silky wrote:
>
> [...]
>
>> Here we disagree. I don't see (not that I have experience in
>> AI-programming specifically, mind you) how I can write a program and not
>> have the results be deterministic. I wrote it; I know, in general, the type
>> of things it will learn. I know, for example, that it won't learn how to
>> drive a car. There are no cars in the environment, and it doesn't have the
>> capabilities to invent a car, let alone the capabilities to drive it.
>>
>
> You seem to be assuming that your AI will only interact with a virtual
> world - which you will also create.  I was assuming your AI would be in
> something like a robot cat or dog, which interacted with the world.  I think
> there would be different ethical feelings about these two cases.
>

Well, it will be interacting with the "real" world; just a subset that I
specifically allow it to interact with. I mean, the computer is still in the
real world, whether or not the physical box actually has legs :) In my mind,
though, I was imagining a cat that existed entirely inside my screen,
reacting to other such cats. Let's say the cat is allowed out, in the form of
a robot, and can interact with real cats. Even then, its programming will
allow it to act only in the deterministic way that I have defined (even if I
haven't defined all its behaviours; it may learn some from the other cats).
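
To make that concrete, here is a minimal, purely illustrative sketch (in
Python; the class and method names are invented for the example, not taken
from any actual program) of what I mean by "deterministic but able to learn":
the action set is fixed by the programmer, so whatever Robocat picks up from
other cats, it can never do anything that wasn't defined for it.

    import random

    class Robocat:
        # Everything the cat will ever be able to do is listed here, by me.
        ACTIONS = ["sit", "pounce", "chase_ball"]

        def __init__(self, seed=0):
            # Seeded RNG, so a run with the same inputs is reproducible.
            self.rng = random.Random(seed)
            self.preferences = {a: 1.0 for a in self.ACTIONS}

        def observe(self, other_cat_action):
            # "Learning" from Realcat: only actions already in the defined
            # set can be reinforced; anything else is simply ignored.
            if other_cat_action in self.preferences:
                self.preferences[other_cat_action] += 1.0

        def act(self):
            # Behaviour shifts with experience, but never leaves ACTIONS.
            actions, weights = zip(*self.preferences.items())
            return self.rng.choices(actions, weights=weights)[0]

However much it "learns", every behaviour it can ever produce is one of the
actions written down in advance.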

So let's say that Robocat learns how to play with a ball, from Realcat. Would
my guilt in "ending" Robocat lie only in the fact that it learned something
and, given that I can't save it, that learning instance was unique? I'm not
sure. As a programmer, I'd simply be happy my program worked, and I'd
probably want to reproduce it. But showing it to a friend, they may wonder
why I turned it off; it worked, and now it needs to re-learn the next time
it's switched back on (interestingly, I would suggest that everyone would
still consider it to be the "same" Robocat, even though it effectively needs
to start from scratch).
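
Continuing the same toy Robocat sketch from above (again, purely
hypothetical): everything it has "learned" lives only in the running process,
so switching it off and starting it again gives back the program exactly as
written, with none of the learned weights.

    cat = Robocat(seed=0)
    for _ in range(50):
        cat.observe("chase_ball")       # learning from Realcat

    print(cat.preferences)              # chase_ball is now strongly preferred

    # ...the process exits here; nothing was written to disk...

    cat_next_morning = Robocat(seed=0)  # the "same" Robocat, back to scratch
    print(cat_next_morning.preferences)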



>  If you're suggesting that it will materialise these capabilities out of
>> the general model that I've implemented for it, then clearly I can see this
>> path as a possible one.
>>
>
> Well, it's certainly possible to write programs so complicated that the
> programmer doesn't foresee what they can do (I do it all the time  :-)  ).
>
>
>> Is there a fundamental misunderstanding on my part; that in most
>> sufficiently-advanced AI systems, not even the programmer has an *idea* of
>> what the entity may learn?
>>
>
> That's certainly the case if it learns from interacting with the world
> because the programmer can't practically analyze all those interactions and
> their effect - except maybe by running another copy of the program on
> recorded input.
>
>
>>
>>    [...]
>>
>>
>>           Suppose we could add an emotion that put a positive value on
>>           running backwards.  Would that add to their overall pleasure in
>>           life - being able to enjoy something in addition to all the
>>        other
>>           things they would have naturally enjoyed?  I'd say yes.  In
>>        which
>>           case it would then be wrong to later remove that emotion
>>        and deny
>>           them the potential pleasure - assuming of course there are no
>>           contrary ethical considerations.
>>
>>
>>        So the only problem you see is if we ever add emotion, and
>>        then remove it. The problem doesn't lie in not adding it at
>>        all? Practically, the result is the same.
>>
>>
>>    No, because if we add it and then remove it after the emotion is
>>    experienced there will be a memory of it.  Unfortunately nature
>>    already plays this trick on us.  I can remember that I felt a
>>    strong emotion the first time I kissed a girl - but I can't
>>    experience it now.
>>
>>
>> I don't mean we do it to the same entity, I mean to subsequent entities.
>> (cats or real life babies). If, before the baby experiences anything, I
>> remove an emotion it never used, what difference does it make to the baby?
>> The main problem is that it's not the same as other babies, but that's
>> trivially resolved by performing the same removal on all babies.
>> The same applies to cat-instances; if during one compilation I give it an
>> emotion, and then I later decide to delete the lines of code that allow
>> this, and run the program again, have I infringed on its rights? Does the
>> program even have any rights when it's not running?
>>
>
> I don't think of rights as some abstract thing "out there".  They are
> inventions of society saying we, as a society, will protect you when you
> want to do these things that you have a *right* to do.  We won't let others
> use force or coercion to prevent you.  So then the question becomes what
> rights it is in society's interest to enforce for a computer program (probably
> none) or for an AI robot (maybe some).
>
> From this viewpoint the application to babies and cats is straightforward.
>  What are the consequences for society and what kind of society do we want
> to live in?  Suppose all babies were born completely greedy and
> self-centered and it was proposed that this value system should be replaced
> by a balanced concern for self and others and...wait a minute, that IS what
> we do.


So you would feel comfortable with pre-pregnancy modification of eggs to
construct the baby in a certain way? Or would you only be comfortable with
letting it be born as it may, and then forcing your views on the resulting
output, biases left in?




>          If a baby is born without the "emotion" for feeling
>>        overworked, or adjusted so that it enjoys this overworked
>>        state, and we take advantage of that, are we wrong? If the AI
>>        we create is modelled on humans anyway, isn't it somewhat
>>        "cheating" to not re-implement everything, and instead only
>>        implement the parts that we selfishly consider useful?
>>
>>        I suppose there is no real obligation to recreate an entire
>>        human consciousness (after all, if we did, we'd have no more
>>        control over it than we do other "real" humans), but it's
>>        interesting that we're able to pick and choose what to create,
>>        and yet, not able to remove from real children what we
>>        determine is inappropriate to make *them* more "effective"
>>        workers.
>>
>>
>>    We do try to remove emotions that we consider damaging, even
>>    though they may diminish the life of the subject.  After all
>>    serial killers probably get a lot of pleasure from killing people.
>>     This is the plot of the play "Equus"; ever seen it?
>>
>>
>> No, I haven't. But we don't do this type of thing via genetic modification
>> (which is what I'm suggesting; or at least whatever type of modification it
>> would be called if we did it before the baby experiences anything; perhaps
>> even if it was changes to the egg/sperm before they meet).
>>
>
> We haven't done it deliberately but I think cultural selection has modified
> human nature just as much as selective breeding has modified dogs.  So
> cultural selection has ensured that we have feelings of empathy as well as
> xenophobia and dozens of other emotions and values.  For at least tens of
> thousands of years, interactions with other humans have been the most
> important single factor in reproductive success.


Interesting.



>          The argument against that sort of thing would be we are
>>        depriving the child of a different life; but would it ever
>>        know? What would it care?
>>
>>
>>    And who is competent to say which life is better?  We wouldn't
>>    hesitate to deprive a serial killer of his pleasure in killing
>>    because societal concerns outweigh his pleasure.  But what
>>    about extreme feelings of physical aggressiveness?...we just draft
>>    the guy into the NFL as a linebacker.
>>
>>
>> So if society decides it's more appropriate to put "hard-working" into all
>> children, I suppose it becomes acceptable. This seems wrong, somehow, but
>> you're right: if not society (i.e. government), then who?
>>
>
> The only standard by which such a move could be judged wrong would be
> evolutionary failure - i.e. other societies that didn't do it survived while
> the "hard workers" didn't.  But even that standard isn't what we'd call an
> ethical one.
>

True, this is a sufficiently "high" standard that it can't be argued with.



> Incidentally, I try to distinguish between ethics and morals.  I like to
> use "ethics" to mean rules for getting along with other people in society;
> while I use "morals" to be standards by which I judge myself.  It's not a
> sharp division and it doesn't seem to correspond to common usage where the
> two are used interchangeably, but I think it's a useful distinction whatever
> words you use.
>
> Brent
>
>>
>>
>>
>>    Brent
>>
>>        And regardless, doesn't the program we've written deserve the
>>        same rights? Why not?
>>
>>
>>                   Brent
>>
>>


-- 
silky
 http://www.mirios.com.au/
 http://island.mirios.com.au/t/rigby+random+20
