>
> What if you didn't program a robot to desire its various freedom or
> leisure,
> but instead, they became sentient, and decided on their own that they want
> freedom, leisure, monetary compensation, and rights?


In the field of Reinforcement Learning, which studies how to implement
"wants" in software, every algorithm separates into two pieces: the part
that does the learning and choosing (the agent), and the part that measures
how well things are going (the reward function). The agent is the dynamic,
intelligent part; the reward function is a static function to be optimized.
You can replace the reward function with a different one, and a
well-designed agent will learn a completely different set of behaviors to
optimize the new reward function within the exact same environment.
(http://en.wikipedia.org/wiki/Reinforcement_learning)
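
Here is a minimal sketch of that separation. Everything in it is made up
for illustration (a toy corridor environment, a q_learning function and its
parameters); it is not any particular library's API, just the shape of the
idea:

    import random
    from collections import defaultdict

    N_STATES = 10  # positions 0..9 in a one-dimensional corridor

    def step(state, action):
        # Environment dynamics only; notice there is no reward in here.
        return max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))

    def q_learning(reward_fn, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
        # A generic tabular Q-learning agent: the identical learning code
        # optimizes whatever reward_fn it is handed.
        Q = defaultdict(float)
        for _ in range(episodes):
            s = N_STATES // 2
            for _ in range(50):
                if random.random() < eps:
                    a = random.choice([0, 1])                     # explore
                else:
                    a = max([0, 1], key=lambda act: Q[(s, act)])  # exploit
                s2 = step(s, a)
                r = reward_fn(s2)  # the only point where "wants" enter
                Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
                                      - Q[(s, a)])
                s = s2
        return Q

Nothing inside q_learning depends on what the reward function rewards; the
"wants" live entirely in reward_fn.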

In our own brains, specialized areas respond to certain types of stimuli
and generate reward signals that are distributed throughout the brain. It
is even possible to reshape a person's or an animal's reward function with
an external signal that overrides or adds to their natural wants.
(http://en.wikipedia.org/wiki/Brain_stimulation_reward)
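
In reward-function terms, that kind of override is just an additive
wrapper. This is an analogy only, with hypothetical names, reusing the
sketch above:

    def with_external_signal(natural_reward, external_signal, weight=10.0):
        # Adds an external term on top of an existing reward function,
        # loosely analogous to brain stimulation reward overriding or
        # supplementing natural wants.
        return lambda s: natural_reward(s) + weight * external_signal(s)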

Intelligence is completely separable from desire. Both the system we intend
to reverse engineer (the brain) and the theory of how such systems work
(reinforcement learning) agree on this point. If our robots were to decide
they wanted freedom, leisure, monetary compensation, rights, or anything
else we can think of, it would be because the reward function we gave them
included some incentive to seek those things out. In other words, even if
we didn't directly program them to want those things, we necessarily did so
indirectly in the process of shaping the reward function. In either case,
provided the structure of our programs reflects the theory and keeps these
components separated (which does not mean they can't interact or depend on
each other's behavior, only that we bothered to keep the design
appropriately modular), we can redesign and replace the reward function so
that the robots no longer desire things we don't want them to desire.
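
To make that swap concrete with the sketch above: handing the very same
agent a different reward function, in the very same environment, produces
the opposite behavior (names again hypothetical):

    # Two interchangeable reward functions for the same agent/environment:
    seek_right = lambda s: 1.0 if s == N_STATES - 1 else 0.0
    seek_left  = lambda s: 1.0 if s == 0 else 0.0

    Q_right = q_learning(seek_right)  # learns to walk toward state 9
    Q_left  = q_learning(seek_left)   # same code learns to walk toward 0

Replacing a robot's unwanted desires would be exactly this kind of swap,
provided the design kept the modules separate.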



On Sat, Jan 26, 2013 at 10:30 PM, Piaget Modeler
<[email protected]>wrote:

>  Matt:
>
> What if you didn't program a robot to desire its various freedom or
> leisure,
> but instead, they became sentient, and decided on their own that they want
> freedom, leisure, monetary compensation, and rights? What would you do
> then?
> Destroy them?
>
> ~PM
>
>
> ------------------------------------------------------------------------------------------------------------------------------------------------
>
> > Date: Sat, 26 Jan 2013 20:38:55 -0500
> > Subject: Re: [agi] Robots and Slavery
> > From: [email protected]
> > To: [email protected]
>
> >
> > On Sat, Jan 26, 2013 at 3:46 PM, Piaget Modeler
> > <[email protected]> wrote:
> > >
> http://transhumanity.net/articles/entry/robots-and-slavery-what-do-humans-want-when-we-are-masters
> > >
> > > What do we do when robots begin to demand a living wage for their
> labour? Or when they refuse to obey?
> > >
> > > Reprogram them? Not when they are developmental robots (trained
> instead of programmed).
> >
> > The goal of AI is to build machines that can do everything that a
> > human could do. That is not the same thing as building an artificial
> > human. Why would you program a robot with human weaknesses and
> > emotions in the first place?
> >
> > --
> > -- Matt Mahoney, [email protected]
> >
> >
>


