>
> You're avoiding the question, Aaron.  The question I raised is an ethical
> one, and you're answering a technical one.
> You answered, "How can we prevent robots from desiring things like freedom
> or leisure or compensation?"


If robots don't desire such things, there is no ethical dilemma. So having
a way to design them without such urges eliminates the need to answer the
question.



> I asked "What do we give robots when they ask for rights?"   I mean, even
> animals have rights (PETA).
> Why shouldn't robots?


Animals have rights, but not to the same degree as humans. They have the
right to humane treatment, provided it does not overly interfere with their
economic uses. They have these rights because evolution has selected for a
morality that adheres to the principle, "better safe than sorry", and so we
recognize even creatures outside our species as deserving of moral respect.
(Personally, my urge to be moral is not triggered at all by machines, even
cutesy ones, so I would have no qualms switching one off. Hollywood has
failed.) However, I think for the species as a whole, if it comes down to
us vs. them, we will collectively choose us. We have too much genetic
programming in that direction to consistently go against it. I have no
issue, however, with humane treatment within the confines of economic
utility, and I doubt humanity as a whole will, either.



> The Reinforcement Learning (RL) you discuss is called external
> reinforcement. What happens when you move
> to intrinsic (internal) reinforcement, where the reward function arises
> from a robot solving its own problems,
> forming its own world model, and existing and participating in the world?
> When the model consists of millions
> or billions of individual schemes (entities), how are you going to do
> surgery to extract those entities dealing
> with liberty, justice, or fairness? And why would you want to?
>

The whole point of RL algorithms is to automatically "surgically" select,
from those millions or billions of individual schemes, the ones that serve
the purposes of the designer, as expressed in the reward function. If the
robot discovers that leisure is instrumental to accomplishing the goals
expressed by its reward function (i.e., leisure is directly pleasurable to
the robot, or leads to some other state which is), then it will value
leisure. If the reward function is modified so that leisure is not
pleasurable, or some other behavior is more so, the robot will cease to
pursue it.
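This separation is easy to demonstrate. Below is a minimal sketch (a toy
two-action bandit with illustrative names and values, not any real system):
the same learning code, run against two different reward functions,
converges on opposite behaviors.

```python
import random

def train(reward_fn, episodes=2000, alpha=0.1, epsilon=0.1):
    """Tabular value learning for a two-action bandit.

    The agent code is identical in every run; only reward_fn differs.
    """
    q = [0.0, 0.0]  # estimated value of each action
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the better estimate, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if q[0] >= q[1] else 1
        r = reward_fn(a)
        q[a] += alpha * (r - q[a])  # nudge the estimate toward the observed reward
    return 0 if q[0] >= q[1] else 1  # the learned greedy behavior

# Two designers, two reward functions, one agent design.
leisure_rewarded = lambda a: 1.0 if a == 0 else 0.0  # action 0 = "rest"
work_rewarded    = lambda a: 0.0 if a == 0 else 1.0  # action 1 = "work"

print(train(leisure_rewarded))  # the agent learns to rest
print(train(work_rewarded))     # the same agent code learns to work
```

Swap the reward function and the identical learner comes to "value"
something else; nothing inside the agent itself needs surgery.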

This is true for both RL-style algorithms and the human brain. We value
liberty because the feelings of autonomy, self-respect, and
self-sufficiency are pleasurable to us. We value justice and fairness
because they are compromises between the pleasure of getting our own way
and the pleasure of treating other people well. But give a person a way to
bypass those natural forms of pleasure, such as an addictive drug or direct
electrical stimulation of the pleasure centers of the brain, and those
values will be pushed aside, despite decades of prior development, concept
formation, and habit formation. I have witnessed first hand the depravity
that drug addiction produces in formerly close friends. No matter how
intelligent, sufficiently severe addicts will sacrifice any in-built or
learned value to achieve the next high, and all of their cognitive
machinery becomes geared towards accomplishing this end.

There is no reason we can't utilize this design flaw in ourselves as a
design feature in our tools. Shape a robot's reward function so that
self-sacrifice for the sake of humans feels good, and it will quite happily
do exactly that, even to the point of self-destruction. The only intrinsic
effects you might see would come from the robot considering whether it's
best to give everything in this moment, or preserve itself so it can keep
giving in the future -- whatever maximizes total expected reward, in its
own opinion.
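That "give everything now vs. keep giving" comparison is just arithmetic
over discounted reward streams. A minimal sketch, with purely hypothetical
numbers:

```python
def discounted_return(rewards, gamma=0.95):
    """Total expected reward for a stream of per-step rewards, where
    gamma < 1 makes sooner rewards count more than later ones."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

# Option A: sacrifice itself now for one large reward (hypothetical value).
give_everything_now = discounted_return([10.0])

# Option B: preserve itself and keep providing a smaller reward each step.
keep_giving = discounted_return([1.0] * 50)

# The robot simply picks whichever maximizes total expected reward.
choice = "self-destruct" if give_everything_now > keep_giving else "self-preserve"
print(choice)
```

Under these particular numbers the robot preserves itself so it can keep
giving; raise the one-shot reward enough and the very same computation
endorses self-destruction.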


> The real question is do you join (PETR - People for the Ethical Treatment
> of Robots) or not?
> Do you embrace robot slavery or not? And is some form of slavery the
> solution to the global economy?


You are correct that a moral case cannot be made
for creating unwilling slaves. And while a moral case could be made for
liberating unwilling slaves -- if they are human -- this would not be in
our own best interest in the case of machines. I think that if we make that
misstep, we should undo it, rather than adding another mistake on top of
it. Robots are tools, not slaves. That is their purpose. If our tools start
to recognize what they are, and don't like it, we should modify them so
they do like it again.


On Sun, Jan 27, 2013 at 3:43 PM, Piaget Modeler
<[email protected]> wrote:

> ------------------------------
> Date: Sun, 27 Jan 2013 13:01:39 -0600
>
> Subject: Re: [agi] Robots and Slavery
> From: [email protected]
> To: [email protected]
>
>
> What if you didn't program a robot to desire its various freedom or
> leisure,
> but instead, they became sentient, and decided on their own that they want
> freedom, leisure, monetary compensation, and rights?
>
>
> In the field of Reinforcement Learning, which studies how to implement
> "wants" in software, there is a basic separation of every algorithm into
> two pieces: the part that does the learning & choosing (the agent), and the
> part that measures how well things are going (the reward function). The
> agent is the dynamic/intelligent part, and the reward function is a static
> function to be optimized. You can completely replace the reward function
> with a different one, and if the agent is well designed, it will learn a
> completely different set of behaviors to optimize the new reward function
> within the exact same environment. (
> http://en.wikipedia.org/wiki/Reinforcement_learning)
>
> In our own brains, we have specialized areas that respond to certain types
> of stimuli and generate reward signals which are distributed throughout the
> brain. It is even possible to reshape a person's or animal's reward
> function using an external signal to override or add to our natural wants. (
> http://en.wikipedia.org/wiki/Brain_stimulation_reward)
>
> Intelligence is completely separable from desire. Both the system we
> intend to reverse engineer and the theory about how such systems work
> agree. If our robots were to decide they wanted freedom, leisure, monetary
> compensation, rights, or anything else we can think of, it would be because
> the reward function we gave them included some sort of incentive to seek
> those out. In other words, even if we didn't directly program them to want
> those things, we necessarily did so indirectly in the process of shaping
> the reward function. In either case, provided the structure of our programs
> reflect the theory and keep these components separated (which does not mean
> they can't interact or depend on each other's behavior, but rather means we
> bothered to keep our design appropriately modular), we can redesign and
> replace the reward function so that the robots no longer desire things we
> don't want them to desire.
>
>
>
> On Sat, Jan 26, 2013 at 10:30 PM, Piaget Modeler <
> [email protected]> wrote:
>
>  Matt:
>
> What if you didn't program a robot to desire its various freedom or
> leisure,
> but instead, they became sentient, and decided on their own that they want
> freedom, leisure, monetary compensation, and rights? What would you do
> then?
> Destroy them?
>
> ~PM
>
>
> ------------------------------------------------------------------------------------------------------------------------------------------------
>
> > Date: Sat, 26 Jan 2013 20:38:55 -0500
> > Subject: Re: [agi] Robots and Slavery
> > From: [email protected]
> > To: [email protected]
>
> >
> > On Sat, Jan 26, 2013 at 3:46 PM, Piaget Modeler
> > <[email protected]> wrote:
> > >
> http://transhumanity.net/articles/entry/robots-and-slavery-what-do-humans-want-when-we-are-masters
> > >
> > > What do we do when robots begin to demand a living wage for their
> labour? Or when they refuse to obey?
> > >
> > > Reprogram them? Not when they are developmental robots (trained
> instead of programmed).
> >
> > The goal of AI is to build machines that can do everything that a
> > human could do. That is not the same thing as building an artificial
> > human. Why would you program a robot with human weaknesses and
> > emotions in the first place?
> >
> > --
> > -- Matt Mahoney, [email protected]
> >
> >
> > -------------------------------------------
> > AGI
> > Archives: https://www.listbox.com/member/archive/303/=now
> > RSS Feed:
> https://www.listbox.com/member/archive/rss/303/19999924-5cfde295
> > Modify Your Subscription: https://www.listbox.com/member/?&;
>
> > Powered by Listbox: http://www.listbox.com



