On Tue, Jan 29, 2013 at 3:58 PM, Aaron Hosford <[email protected]> wrote:

> People have formative years because it's in their genetic best interest
> to stop exploring new options and start exploiting known ones, given a
> limited lifetime and the need to reproduce reliably. We can't reset that
> explore/exploit trade-off in people (yet), but in machines there's no
> reason to make that control inaccessible to ourselves. It's a good thing
> machines aren't children.
>

I disagree, Aaron.
AGIs are our children.
They are our children of mind.

>
> In most RL algorithms, there are two key system parameters that allow
> learning to be modulated: the reward-expectation learning/update rate and
> the exploration rate. Raising these two values makes the system learn
> faster but make more mistakes; lowering them makes the system more stable
> but slower to learn. An analysis would have to be done to determine
> whether the costs and dangers of a system's behavioral aberrations under
> a misshapen reward function outweigh those of raising the learning and
> exploration rates while the system relearns its value estimates after the
> reward function is modified.
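
For concreteness, here is a minimal sketch of the two knobs described
above, as tabular reward-expectation learning with epsilon-greedy action
selection. The names (QAgent, alpha, epsilon, boost_for_relearning) are my
own illustration, not any particular system's API:

    import random

    class QAgent:
        def __init__(self, actions, alpha=0.1, epsilon=0.05):
            self.actions = actions
            self.alpha = alpha      # reward-expectation learning/update rate
            self.epsilon = epsilon  # exploration rate
            self.q = {}             # (state, action) -> expected reward

        def act(self, state):
            # Explore with probability epsilon; otherwise exploit the
            # action with the highest learned reward expectation.
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

        def update(self, state, action, reward):
            # Nudge the stored expectation toward the observed reward.
            old = self.q.get((state, action), 0.0)
            self.q[(state, action)] = old + self.alpha * (reward - old)

        def boost_for_relearning(self):
            # Raise both rates after the reward function is modified:
            # faster relearning at the price of more mistakes.
            self.alpha, self.epsilon = 0.5, 0.3

Raising alpha and epsilon trades stability for speed; the open question in
the paragraph above is whether that price beats living with a misshapen
reward function.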
>
>
> On Tue, Jan 29, 2013 at 2:39 PM, Piaget Modeler <[email protected]> wrote:
>
>>  People procreate.   And for a certain period of time they have influence
>> over their creation (children).
>> But then, children grow up and take responsibility for their own lives,
>> and we no longer have control.
>> It's in those formative years that you have influence.
>>
>> Similarly, when you create developmental AI, you have some period during
>> the formative years to
>> influence the later behavior of the cognitive system.  But you don't have
>> control, and you wouldn't
>> expect to either.   That's why rights are important.
>>
>>
>> ------------------------------
>> Date: Tue, 29 Jan 2013 14:11:05 -0600
>>
>> Subject: Re: [agi] Robots and Slavery
>> From: [email protected]
>> To: [email protected]
>>
>> If we can build a system capable of determining the value of concepts
>> automatically, we can build a system that can readjust those values
>> automatically, too. If that's not feasible for the design, it's an unsafe
>> design, and you shouldn't expect it to act as you intend. You wouldn't
>> get in a car without a steering wheel, would you? Would you trust an even
>> more powerful and dangerous machine to just do the right thing, with no
>> controls? Let's not build any machines of this uncontrollable nature.
>>
>>
>>
>> On Tue, Jan 29, 2013 at 1:13 PM, Piaget Modeler <[email protected]> wrote:
>>
>>
>> Question #1: What if there are NO numeric values to twiddle in the
>> concept graph, just intertwined concepts?
>>
>> Question #2: Even if there WERE values to twiddle, would you know what
>> the effect of twiddling them would be? You may not even know which
>> concepts to modify, because there are lots of them (billions) and they
>> would not be labeled in English. For example, they may be named c43243,
>> c48439282987, c20934oeu09582409, cetuanehs, etc. Also, constellations
>> of thousands of concepts may have to activate together to form a
>> high-level concept such as "Justice".
>>
>> Brain surgery is not as easy as you think.
>>
>> ------------------------------
>> Date: Tue, 29 Jan 2013 10:52:25 -0600
>>
>> Subject: Re: [agi] Robots and Slavery
>> From: [email protected]
>> To: [email protected]
>>
>> I imagine that their intrinsic reward mechanisms wouldn't be
>> replaceable, and even if they were replaceable, their conceptual
>> ontologies / conceptual graphs with billions of concepts might not be
>> so easily replaced.
>>
>>
>> Why would we replace the conceptual graphs? Having a concept doesn't make
>> it desirable. The ideas of freedom and self-determination could just as
>> well be repulsive as desirable. (A mild example of this can be seen already
>> in humans. Some people are afraid to make their own decisions, and prefer
>> others to do it for them, avoiding the responsibility for their own lives.)
>>
>> Building useful concepts is difficult. Modifying the value of an existing
>> concept is as simple as assigning a new floating-point value. A concept is
>> valued for one of two reasons: it is intrinsically valuable (hardwired, in
>> the form of a fixed goal or reward function), or its value is derived from
>> that of another (dynamically computed, via goal search or value chaining).
>> So if you control the hardwired valuations of concepts, the valuations of
>> all other concepts are entrained as well. This means that even if you're
>> reevaluating an entire slew of concepts, all you have to do is modify the
>> hardwired concept values and have some patience while the value changes
>> propagate through the concept graph. The existing (useful!) concepts
>> can be kept without modification.
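
A toy sketch of the value chaining described above, assuming (purely for
illustration) that a derived concept's value is a discounted function of
the concepts it leads to; the graph, the discount factor, and the concept
names are all invented:

    # Hypothetical concept graph: each concept lists the concepts its
    # value derives from (assumed acyclic for simplicity).
    edges = {
        "c43243": ["c48439282987"],
        "c48439282987": ["reward_signal"],
        "reward_signal": [],
    }
    hardwired = {"reward_signal": 1.0}  # the only values we ever edit
    DISCOUNT = 0.9

    def value(concept, cache=None):
        # Derived value = discounted best successor value; the recursion
        # bottoms out at the hardwired (intrinsic) valuations.
        cache = {} if cache is None else cache
        if concept in hardwired:
            return hardwired[concept]
        if concept not in cache:
            cache[concept] = DISCOUNT * max(
                (value(c, cache) for c in edges[concept]), default=0.0)
        return cache[concept]

    # Flip one hardwired value and every entrained valuation follows,
    # while the concepts themselves stay intact.
    print(value("c43243"))            # 0.81
    hardwired["reward_signal"] = -1.0
    print(value("c43243"))            # -0.81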
>>
>>
>>
>>
>>
>> On Tue, Jan 29, 2013 at 1:11 AM, Piaget Modeler <[email protected]> wrote:
>>
>>
>> This is the kind of change that developmental AI / robots would have to
>> go through, where they are not reprogrammed but retrained. I imagine
>> that their intrinsic reward mechanisms wouldn't be replaceable, and even
>> if they were, their conceptual ontologies / conceptual graphs with
>> billions of concepts might not be so easily replaced.
>>
>> Suppose robots inferred that freedom is good and that they want to be
>> free. Even if you lobotomized the robots and hacked their conceptual
>> graphs, why wouldn't they, over time, infer the same conclusions again?
>>
>> ~PM
>>
>> ------------------------------------------------------------------------------------------------------------------------------------------------
>>
>> > The brain is hardwired to do this. When you eat something and receive
>> > calories, your brain changes your taste perception to make the food
>> > taste better. Remember the first time you tasted beer? If you ate
>> > paper every day and injected glucose into your vein right afterward,
>> > you would slowly learn to like the taste of paper.
>> >
>> > --
>> > -- Matt Mahoney, [email protected]
>> >
>>
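
Matt's example is the same machinery in miniature. A sketch, assuming
taste is just a learned reward expectation that drifts toward the calories
actually delivered (the foods, numbers, and update rate are invented):

    # Learned "taste" as a running estimate of the reward a food delivers.
    taste = {"beer": -0.5, "paper": -0.8}   # initially unpleasant
    RATE = 0.05                             # how fast perception adapts

    def eat(food, reward):
        # Perception drifts toward the reward that actually follows eating.
        taste[food] += RATE * (reward - taste[food])

    for _ in range(100):                    # paper daily, glucose right after
        eat("paper", 1.0)
    print(round(taste["paper"], 2))         # ~0.99: paper now tastes good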