What I am trying to say is that there are better ways to examine how
behavior can be gradually shaped than Reinforcement Theories can
offer. (The theorizing about human or animal psychology would have to
go beyond Behaviorist methods. For instance, we might use applied
comparative imagination and try to find algorithmic concepts that
might work in AI and see if they might better explain the internal
psychology of learning.) Similarly, I believe that there are more
powerful ways of looking at the gradual shaping of AI behavior than
can be found by looking at Reinforcement Learning as it is currently
used in AI. A contemporary programmer who knows something about the
subject can appreciate why gradual reinforcement learning might be
desirable. As the AI programmer thinks about an idea like
reinforcement learning, it might stimulate his imagination. He can see
how simple actions of an AI program might be guided toward achieving
AI goals that might not be otherwise achievable. It looks like an
idea that might take a programmer from explicit step by step
programmed construction of knowledge (which seems feasible) toward a
less scripted method to 'teach' an AI program (which can be more
problematic). What I am saying is that there is a whole other way of
looking at the issue, and the first step is to realize that the
gradual shaping of knowledge (and behavior) does not have to be
force-fit into Reinforcement theories of learning.
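As a point of comparison for the "simple actions guided toward goals" view discussed above, here is a minimal sketch of reinforcement learning as it is currently used in AI: tabular Q-learning on a hypothetical one-dimensional corridor. Everything in it (the corridor task, the state count, the learning parameters) is an illustrative assumption, not anything proposed in this thread; it only shows the baseline idea that gradual shaping claims to go beyond.

```python
import random

# Illustrative sketch only: a tiny Q-learning agent on a made-up 1-D
# corridor. A reward appears only at the goal state, and repeated
# episodes gradually shape the agent's simple left/right actions
# toward reaching it.

N_STATES = 6          # states 0..5; reaching state 5 ends an episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # assumed learning parameters

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose(s):
    # epsilon-greedy: usually exploit the current estimate, sometimes explore
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(s, a)])

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        a = choose(s)
        s2 = min(max(s + a, 0), N_STATES - 1)   # walls clamp movement
        r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = s2

# The greedy policy after training: the preferred action in each
# non-goal state (typically stepping right, toward the reward).
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The point of the sketch is what it lacks: the only signal shaping behavior is a scalar reward associated with an action, with no internal interplay of related concepts of the kind argued for above.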
Jim Bromer


On Wed, Feb 10, 2016 at 5:13 PM, Mike Archbold <[email protected]> wrote:
> I'd be interested in reading your paper.
>
> Reinforcement is crucial for shaping behavior.  But I also think the
> approaches may oversimplify things, although I can see why.  Also, it
> seems like we hear a lot about positive reinforcement but less about
> negative.  Eg:  a man exercises to lose weight (positive reactions
> from others -- he looks better) and avoid diabetes (he is incented to
> avoid a negative).  So behaviour is shaped by both concurrently.
> Though I can see now that it is not necessarily a simple matter to
> divide the positive and negative reinforcements.
>
> Mike
>
> On 2/10/16, Jim Bromer <[email protected]> wrote:
>> I was thinking about writing a paper and I thought that I should
>> describe a method of learning that I want to use with my AI project as
>> ‘reinforcement’ because I realized that it could win a few readers who
>> liked the AI reinforcement concept. The problem is that I really
>> disliked the Behaviorist concept and I disliked how the Behaviorist
>> term had been applied to AI. I do not think the Behaviorist concept of
>> reinforcement is adequate to describe learning and the AI application
>> of the concept seems even worse. Then I realized that I was really
>> thinking of some kind of gradual ‘shaping’ of behavior which might
>> look like reinforcement but which would be significantly different. It
>> is not just a reinforcement of simple behaviors but a more dynamic
>> process of shaping the behavior by trying to appeal to the
>> internal interplay of different concepts that might be related to the
>> behavior that was being shaped. I do not see AI behavior as a
>> simplistic series of actions that can be shaped by associating two
>> deterministic (deterministic-like) sub programs. I think behavior has
>> to be shaped by encouraging a much richer internal interplay of
>> different concepts that I (as the instructor) think the AI program
>> should know (or be able to discover) are related to the situation.
>>
>>
>> -------------------------------------------
>> AGI
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed: https://www.listbox.com/member/archive/rss/303/11943661-d9279dae
>> Modify Your Subscription:
>> https://www.listbox.com/member/?&;
>> Powered by Listbox: http://www.listbox.com
>>
>
>


