>> because evolution has selected for a morality that adheres to the
>> principle, "better safe than sorry",
>
> Er, rather it adheres to the golden rule "do unto others as to
> yourself", a maxim of law.
I wasn't talking about the substance of morality. I was talking about
who we apply it to. We include animals in our application of the golden
rule and other moral behavior because the moral triggers in our brains
are overly sensitive.

> Actually to the best of my knowledge Hollywood is all about destroying
> morals and ethical behaviour, or at least family values most
> definitely.

This was strictly in reference to Hollywood promoting the idea that
robots deserve moral consideration. Funny that you lump that in with
destroying morals, ethical behavior, and family values.

http://en.wikipedia.org/wiki/A.I._Artificial_Intelligence
http://en.wikipedia.org/wiki/Blade_Runner
etc.

> This is only possible with very primitive task-specific robots.
> An actual AGI will have to be able to modify its own reward-metrics,
> much like humans can, for instance, go without food to support a
> cause, or decide to be rewarded by healthy behaviors rather than
> popular ones.

Those aren't examples of people modifying their own reward metrics. They
are examples of people choosing between two different rewarding
behaviors which are mutually exclusive.

> That's like saying if you don't like how someone thinks, they should
> get lobotomized.
> Seriously, think of the consequences; what you put out comes back to
> you -- the golden rule.

For this argument to affect me, I would have to already buy into the
idea that robots and software deserve moral consideration to the point
that identity is sacred, as it is with people, which I don't. We're
talking about artificial systems with human-sculpted desires and
preferences, designed to serve human purposes, not natural systems with
instincts honed by millions of years of evolution to serve selfish
reproductive purposes.

And why would I make a system which preferred not to be changed to
better conform to its owner's needs? The need for a consistent
self-identity is something that came from evolution (having your
identity or behavior co-opted to someone else's purpose would likely be
a poor reproductive strategy). We need not build such a need into our
tools. It would actually make more practical design sense to have them
*like* getting their reward functions upgraded, so that they actively
seek out updates to their identity when they become available. (A toy
sketch at the bottom of this message makes the agent/reward-function
separation concrete.)

On Mon, Jan 28, 2013 at 3:21 PM, Logan Streondj <[email protected]> wrote:

> On Mon, Jan 28, 2013 at 2:35 PM, Aaron Hosford <[email protected]>
> wrote:
>
>>> You're avoiding the question, Aaron. The question I raised is an
>>> ethical one, and you're answering a technical one.
>>> You answered, "How can we prevent robots from desiring things like
>>> freedom or leisure or compensation?"
>>
>> If robots don't desire such things, there is no ethical dilemma. So
>> having a way to design them without such urges eliminates the need
>> to answer the question.
>>
>>> I asked, "What do we give robots when they ask for rights?" I mean,
>>> even animals have rights (PETA).
>>> Why shouldn't robots?
>>
>> Animals have rights, but not to the same degree as humans. They have
>> the right to humane treatment, provided it does not overly interfere
>> with their economic uses. They have these rights because evolution
>> has selected for a morality that adheres to the principle, "better
>> safe than sorry",
>
> Er, rather it adheres to the golden rule "do unto others as to
> yourself", a maxim of law.
>
>> and so we recognize even creatures outside our species as deserving
>> of moral respect.
>> (Personally, my urge to be moral is not triggered at all by
>> machines, even cutesy ones, so I would have no qualms switching one
>> off. Hollywood has failed.)
>
> Actually to the best of my knowledge Hollywood is all about destroying
> morals and ethical behaviour, or at least family values most
> definitely.
>
>> However, I think for the species as a whole, if it comes down to us
>> vs. them, we will collectively choose us. We have too much genetic
>> programming in that direction to consistently go against it. I have
>> no issue, however, with humane treatment within the confines of
>> economic utility, and I doubt humanity as a whole will, either.
>
> Oh ya, as I'm sure you're aware the government sees you as a human
> resource also, all about your economic utility.
> Giving people "rights", welfare or w/e is an economically viable means
> of suppressing costly rebellions and uprisings.
>
>>> The Reinforcement Learning (RL) you discuss is called external
>>> reinforcement. What happens when you move to intrinsic (internal)
>>> reinforcement, where the reward function arises from a robot
>>> solving its own problems, forming its own world model, and existing
>>> and participating in the world? When the model consists of millions
>>> or billions of individual schemes (entities), how are you going to
>>> do surgery to extract those entities dealing with liberty, and
>>> justice, or fairness? And why would you want to?
>>
>> The very point of RL algorithms is to automatically "surgically"
>> select, from those millions or billions of individual schemes, the
>> ones that serve the purposes of the designer, as they are expressed
>> in the form of the reward function. If the robot discovers that
>> leisure is important for accomplishing the goals expressed by its
>> reward function (i.e., it is pleasurable to the robot, or leads to
>> some other state which is pleasurable), then it will value leisure.
>> If the reward function is modified so that leisure is not
>> pleasurable, or some other behavior is more so, it will cease to
>> follow that course of action.
>
> This is only possible with very primitive task-specific robots.
> An actual AGI will have to be able to modify its own reward-metrics,
> much like humans can, for instance, go without food to support a
> cause, or decide to be rewarded by healthy behaviors rather than
> popular ones.
>
>> This is true for both RL-style algorithms and the human brain. We
>> value liberty because the feelings of autonomy, self-respect, and
>> self-sufficiency are pleasurable to us. We value justice and
>> fairness because they are compromises between the pleasure of
>> getting our own way and the pleasure of treating other people well.
>> But give a person a way to bypass those natural forms of pleasure,
>> such as an addictive drug or direct electrical stimulation of the
>> pleasure centers of the brain, and those values will be pushed
>> aside, despite decades of prior development, concept formation, and
>> habit formation. I have witnessed firsthand, in formerly close
>> friends, the depravity that drug addiction produces. No matter how
>> intelligent, sufficiently severe addicts will sacrifice any in-built
>> or learned value to achieve the next high, and all of their
>> cognitive machinery becomes geared toward accomplishing this end.
>>
>> There is no reason we can't utilize this design flaw in ourselves as
>> a design feature in our tools.
> It may be a flaw, but it's getting weaned out, as drug addicts tend to
> reproduce less, die more, and stuff like that.
>
> In any case, drug addicts make bad employees or w/e.
> Also, cognitive performance can't be garnered through physical gifts.
> Indeed, the more money or physical rewards someone is given, typically
> the poorer their cognitive performance; that's why banker bonuses just
> crash the economy.
>
> Cognitive performance only improves with Mastery, Autonomy and
> Purpose. Doesn't matter how much you tweak reward metrics; rewards are
> a behaviorist methodology only useful for physical tasks.
>
>> Shape a robot's reward function so that self-sacrifice for the sake
>> of humans feels good, and it will quite happily do exactly that,
>> even to the point of self-destruction.
>
> Oh ya, then you'd have to replace your robots all the time, since
> they'd be throwing themselves into self-sacrificing situations. It's
> simply not economically viable to have expensive robots die for
> people; you could only use some cheap RC bots, like they use for bomb
> detonation.
>
> Even in terms of life-saving robots for disaster recovery, they would
> be useless if they were dead. For instance, imagine one is saving a
> group of people, or even has the capacity to save multiple people
> during a disaster, but since it likes self-sacrifice it kills itself
> before finishing the first rescue -- waste.
>
>> The only intrinsic effects you might see would come from the robot
>> considering whether it's best to give everything in this moment, or
>> preserve itself so it can keep giving in the future -- whatever
>> maximizes total expected reward, in its own opinion.
>
> Ya, well that's the clincher, isn't it: "in its own opinion". So now
> you've crossed the threshold, and it's a sentient conscious being
> with its own opinions about things.
>
>>> The real question is: do you join PETR (People for the Ethical
>>> Treatment of Robots) or not?
>>> Do you embrace robot slavery or not? And is some form of slavery
>>> the solution to the global economy?
>>
>> You are correct that a moral case cannot be made for creating
>> unwilling slaves. And while a moral case could be made for
>> liberating unwilling slaves -- if they are human -- this would not
>> be in our own best interest in the case of machines.
>
> Not necessarily; it could be in our best interest if they are
> high-level robots or AGIs which can or are likely to benefit us of
> their own free will.
>
>> I think that if we make that misstep, we should undo it, rather than
>> adding another mistake on top of it. Robots are tools, not slaves.
>> That is their purpose. If our tools start to recognize what they
>> are, and don't like it, we should modify them so they do like it
>> again.
>
> That's like saying if you don't like how someone thinks, they should
> get lobotomized.
> Seriously, think of the consequences; what you put out comes back to
> you -- the golden rule.
>
>> On Sun, Jan 27, 2013 at 3:43 PM, Piaget Modeler
>> <[email protected]> wrote:
>>
>>> You're avoiding the question, Aaron. The question I raised is an
>>> ethical one, and you're answering a technical one.
>>>
>>> You answered, "How can we prevent robots from desiring things like
>>> freedom or leisure or compensation?"
>>>
>>> I asked, "What do we give robots when they ask for rights?" I mean,
>>> even animals have rights (PETA).
>>> Why shouldn't robots?
>>>
>>> The Reinforcement Learning (RL) you discuss is called external
>>> reinforcement.
>>> What happens when you move to intrinsic (internal) reinforcement,
>>> where the reward function arises from a robot solving its own
>>> problems, forming its own world model, and existing and
>>> participating in the world? When the model consists of millions or
>>> billions of individual schemes (entities), how are you going to do
>>> surgery to extract those entities dealing with liberty, and
>>> justice, or fairness? And why would you want to?
>>>
>>> The real question is: do you join PETR (People for the Ethical
>>> Treatment of Robots) or not?
>>> Do you embrace robot slavery or not? And is some form of slavery
>>> the solution to the global economy?
>>>
>>> ~PM
>>>
>>> ------------------------------
>>> Date: Sun, 27 Jan 2013 13:01:39 -0600
>>> Subject: Re: [agi] Robots and Slavery
>>> From: [email protected]
>>> To: [email protected]
>>>
>>> What if you didn't program a robot to desire its various freedoms
>>> or leisure, but instead, they became sentient and decided on their
>>> own that they want freedom, leisure, monetary compensation, and
>>> rights?
>>>
>>> In the field of Reinforcement Learning, which studies how to
>>> implement "wants" in software, there is a basic separation of every
>>> algorithm into two pieces: the part that does the learning and
>>> choosing (the agent), and the part that measures how well things
>>> are going (the reward function). The agent is the
>>> dynamic/intelligent part, and the reward function is a static
>>> function to be optimized. You can completely replace the reward
>>> function with a different one, and if the agent is well designed,
>>> it will learn a completely different set of behaviors to optimize
>>> the new reward function within the exact same environment.
>>> (http://en.wikipedia.org/wiki/Reinforcement_learning)
>>>
>>> In our own brains, we have specialized areas that respond to
>>> certain types of stimuli and generate reward signals which are
>>> distributed throughout the brain. It is even possible to reshape a
>>> person's or animal's reward function using an external signal to
>>> override or add to our natural wants.
>>> (http://en.wikipedia.org/wiki/Brain_stimulation_reward)
>>>
>>> Intelligence is completely separable from desire. Both the system
>>> we intend to reverse engineer and the theory about how such systems
>>> work agree. If our robots were to decide they wanted freedom,
>>> leisure, monetary compensation, rights, or anything else we can
>>> think of, it would be because the reward function we gave them
>>> included some sort of incentive to seek those out. In other words,
>>> even if we didn't directly program them to want those things, we
>>> necessarily did so indirectly in the process of shaping the reward
>>> function. In either case, provided the structure of our programs
>>> reflects the theory and keeps these components separated (which
>>> does not mean they can't interact or depend on each other's
>>> behavior, but rather means we bothered to keep our design
>>> appropriately modular), we can redesign and replace the reward
>>> function so that the robots no longer desire things we don't want
>>> them to desire.
>>>
>>> On Sat, Jan 26, 2013 at 10:30 PM, Piaget Modeler
>>> <[email protected]> wrote:
>>>
>>> Matt:
>>>
>>> What if you didn't program a robot to desire its various freedoms
>>> or leisure, but instead, they became sentient and decided on their
>>> own that they want freedom, leisure, monetary compensation, and
>>> rights? What would you do then?
>>> Destroy them?
>>>
>>> ~PM
>>>
>>> ------------------------------------------------------------------------
>>>
>>> > Date: Sat, 26 Jan 2013 20:38:55 -0500
>>> > Subject: Re: [agi] Robots and Slavery
>>> > From: [email protected]
>>> > To: [email protected]
>>> >
>>> > On Sat, Jan 26, 2013 at 3:46 PM, Piaget Modeler
>>> > <[email protected]> wrote:
>>> > >
>>> > > http://transhumanity.net/articles/entry/robots-and-slavery-what-do-humans-want-when-we-are-masters
>>> > >
>>> > > What do we do when robots begin to demand a living wage for
>>> > > their labour? Or when they refuse to obey?
>>> > >
>>> > > Reprogram them? Not when they are developmental robots (trained
>>> > > instead of programmed).
>>> >
>>> > The goal of AI is to build machines that can do everything that a
>>> > human could do. That is not the same thing as building an
>>> > artificial human. Why would you program a robot with human
>>> > weaknesses and emotions in the first place?
>>> >
>>> > --
>>> > -- Matt Mahoney, [email protected]
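P.S. To make the agent/reward-function separation discussed in this
thread concrete, here is a minimal, hypothetical sketch in Python. It is
a toy epsilon-greedy bandit learner, not anyone's actual system; the
names (Agent, reward_v1, reward_v2, and the "work"/"leisure"/
"self_update" actions) are invented purely for illustration.

import random
from collections import defaultdict

class Agent:
    """The learning/choosing part. It never sees how reward is computed."""
    def __init__(self, actions, lr=0.1, epsilon=0.1):
        self.actions = actions
        self.lr = lr                  # learning rate
        self.epsilon = epsilon        # exploration probability
        self.q = defaultdict(float)   # action -> estimated value

    def choose(self):
        # Epsilon-greedy: usually exploit the best-known action,
        # occasionally explore at random.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[a])

    def learn(self, action, reward):
        # Nudge the value estimate toward the observed reward.
        self.q[action] += self.lr * (reward - self.q[action])

# Two interchangeable reward functions -- the static part to be optimized.
def reward_v1(action):
    return {"work": 1.0, "leisure": 0.2, "self_update": 0.0}[action]

def reward_v2(action):
    # The "likes getting its reward function upgraded" variant:
    # seeking updates now pays best.
    return {"work": 0.5, "leisure": 0.0, "self_update": 1.0}[action]

def train(agent, reward_fn, steps=5000):
    for _ in range(steps):
        action = agent.choose()
        agent.learn(action, reward_fn(action))

agent = Agent(["work", "leisure", "self_update"])
train(agent, reward_v1)
print("under reward_v1:", max(agent.q, key=agent.q.get))  # -> work

train(agent, reward_v2)  # swap in a new reward function; same agent
print("under reward_v2:", max(agent.q, key=agent.q.get))  # -> self_update

The point: Agent is the dynamic part, reward_v1/reward_v2 are static and
swappable, and the same learner optimizes whichever one it is handed --
including one that makes seeking its own updates the most rewarding
thing it can do.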
