Suppose that an advocate of behaviorism and reinforcement
were able to create a successful general AI program that was clearly far in
advance of any other effort.  At first I
might argue that his advocacy of behaviorism and reinforcement was only an
eccentricity, and that his program must actually be coded with some greater
complexity than simple reinforcement to produce true learning.  Now imagine
that everyone got to
examine his code, and after studying it
I discovered that it was amazingly simple in concept.  For example, suppose the
programmer used only 32 methods to
combine or integrate referenced data objects, and these 32 methods were applied
randomly in the program to combine behaviors that were then reinforced by
training.  At first, I might argue that
the 32 simple internal methods of combining data or references weren't truly
behaviorist, because behaviorism is concerned only with the observable gross
behavior of an animal.  My criticism
would be somewhat valid, but it would quickly be seen as petty, uninstructive
quibbling: in this imagined scenario, the program would be so effective, and
the combination of 32 simple integration methods with the reinforcement of
observable 'behaviors' so simple, that any objection to the programmer's
explanation of the paradigm would look superficial.  I might claim that it
would be clearer to drop the term behaviorist in favor of a more objective
explanation in familiar computational terms, but even this would be a minor
sub-issue compared to the implications of the success of the paradigm.
 
The programmer in my fictional story could claim that the
simplest explanation of the paradigm qualifies as the most objective
description.  While he did write 32
simple internal operations, the program had to be trained through the
reinforcement of its observable 'behavior', so it would qualify as a true
behavioral-reinforcement method.  People
could make the case that they could improve on the program by including more
sophisticated methods in the program, but the simplest paradigm that could
produce the desired effects would still suffice as an apt description of the
underlying method.
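
To make the story concrete, here is a minimal sketch of how such a scheme
might look in code.  Everything in it is invented for illustration (the
arithmetic "integration methods," the weight-based reinforcement loop, the
toy trainer that rewards even-numbered outputs); it is a caricature of the
paradigm, not a workable path to intelligence:

```python
import random

random.seed(0)  # deterministic run for this sketch

N_METHODS = 32

def make_methods(n):
    """Build n trivial pairwise 'integration methods' (placeholders).

    Each method mixes two integer 'data objects' in a slightly different
    arithmetic way; in the story the real methods could be anything simple.
    """
    return [lambda a, b, k=i: (a * (k + 1) + b) % 997 for i in range(n)]

def train(methods, data, reward_fn, steps=1000):
    """Randomly combine data objects and reinforce the methods whose
    observable output the trainer rewards, so rewarded methods are
    chosen more often on later steps."""
    weights = [1.0] * len(methods)
    for _ in range(steps):
        i = random.choices(range(len(methods)), weights=weights)[0]
        a, b = random.sample(data, 2)
        behavior = methods[i](a, b)        # the 'observable behavior'
        weights[i] += reward_fn(behavior)  # the reinforcement step
    return weights

methods = make_methods(N_METHODS)
# Toy trainer: rewards any 'behavior' that happens to be an even number.
weights = train(methods, data=[3, 7, 11, 19, 23],
                reward_fn=lambda out: 1.0 if out % 2 == 0 else 0.0)
```

Note that only the selection weights ever change; the 32 methods themselves
stay fixed, which is exactly what makes the scheme look so simple from the
outside.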
 
Now there are a number of reasons why I do not think that a
simple reinforcement scheme, like the one in my story, will be the
first to produce higher intelligence, or that it is even feasible as a model
for general use.  The most obvious is that the
number of possible combinations of data objects, when strung together, would
be so great that the program would be very unlikely to stumble on insight
through a simplistic reinforcement method as described.  And it would be
equally unlikely that the trainer would have a sufficient grasp of the
complexity of the possible combinations to effectively guide the program
toward that unlikely goal.  To
put it another way, the simplistic
reinforcement paradigm is really only a substitute for insightful
programming.  The paradigm, even if
conceptually possible, only moves the complicated job of acquiring deeper
insight into how a computer can be programmed to exhibit human-like
intelligence from the programmer to the trainer.
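
The scale of that combination space is easy to check with a few lines of
arithmetic.  The chain depths below are arbitrary illustrative choices, and
the count ignores the further choice of operands at each step:

```python
# With 32 methods chained to depth d, there are 32**d possible method
# sequences before even counting which data objects are combined.
n_methods = 32
counts = {d: n_methods ** d for d in (5, 10, 20)}
for d, c in counts.items():
    print(f"depth {d}: {c:.2e} method sequences")
```

Even at a modest depth of 20 the count exceeds 10**30 sequences, far beyond
what undirected reinforcement of random combinations could hope to search.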
 
So I am skeptical when people argue about various simplistic
paradigms without ever going into the deeper problems of conceptual complexity.

Jim Bromer

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/