Re: [agi] Simplistic Paradigms Are Not Substitutes For Insight Into Conceptual Complexity

2008-05-31 Thread Tudor Boloni
Jim, we will eventually stumble upon this conceptual complexity, namely a
few algorithms that exceed the results of the algorithms human intelligence
uses (the ones created through slow evolution and relatively fast learning).
We would then have a smarter machine that exhibits advanced intelligence in
many ways, maybe even capable of self-learning to ever higher levels, and
nothing else would be needed, except that:

today, we do not yet know how to extract sufficient patterns from natural
language without additional training and trainers, because languages reflect
the unique histories of the cultures that speak them. Your
conceptual-complexity-laden program, full of insights, would need to be
trained in these cases anyway, no matter how insightful it became (think of
Wolfram's Principle of Computational Equivalence and the computational
irreducibility it implies: some things cannot be captured by pattern
matching but must be simulated to the last detail to be fully understood,
due to their complex nature). So why start out with something that runs
into the same training issues anyway and is not even available today?
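
An aside to make the Wolfram reference concrete (the example is mine, not
from the post): Rule 30 is Wolfram's standard instance of an irreducible
system; as far as anyone knows, its row at step t can only be obtained by
simulating every step before it.

    # Rule 30: new cell = left XOR (center OR right).
    def rule30_step(cells):
        """One update of the Rule 30 cellular automaton on a ring."""
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    row = [0] * 31
    row[15] = 1                      # a single live cell in the middle
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)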

Alternatively, semantic webs from expert systems will become more available
every year. The permutations of the objects they contain need not be found
by exhaustively searching the truly unrealistic search space that would
result; rather, like Deep Blue's solutions, the search can trade time
against quality of knowledge. Many permutations would never even be
attempted, because the objects belong to different classes, and context
rules would single out and favor the areas with high potential for valuable
insights. Constant self-organization of the program and its database
according to the rule of maximal lossless compression would ensure that a
given set of computational resources becomes more intelligent over time.
Letting such a system read CYC-type databases would further reduce the
search space of interest.
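
A rough sketch of what that pipeline could look like (my own toy; the
objects, the class-compatibility rule, and zlib standing in for maximal
lossless compression are all assumptions made for illustration):

    import itertools
    import zlib

    # Toy semantic-net objects tagged with a class.
    objects = [("aspirin", "drug"), ("ibuprofen", "drug"),
               ("fever", "symptom"), ("headache", "symptom"),
               ("granite", "mineral")]

    # Context rules: only class pairs with high potential for valuable
    # insights are combined; everything else is pruned, never searched.
    compatible = {("drug", "symptom")}

    def candidate_pairs(objs):
        for (a, ca), (b, cb) in itertools.permutations(objs, 2):
            if (ca, cb) in compatible:
                yield a, b

    def marginal_cost(fact, corpus):
        """Extra compressed bytes needed to add a fact to the database.
        Facts that fit existing regularities cost little, so favoring
        them self-organizes the database around its own patterns."""
        size = lambda s: len(zlib.compress(s.encode()))
        return size(corpus + " " + fact) - size(corpus)

    corpus = "aspirin relieves fever. ibuprofen relieves headache."
    facts = [f"{a} relieves {b}." for a, b in candidate_pairs(objects)]
    for fact in sorted(facts, key=lambda f: marginal_cost(f, corpus)):
        print(marginal_cost(fact, corpus), fact)

Here only drug/symptom pairings survive the context rules at all, and the
compression score then ranks the survivors by how well they fit what the
database already knows.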

The benefit is that this can be done sooner, with the knowledge we have today.

t

On Sat, May 31, 2008 at 4:38 PM, Jim Bromer [EMAIL PROTECTED] wrote:

 Suppose that an advocate of behaviorism and reinforcement was able to make
 a successful general AI program that was clearly far in advance of any other
 effort.  At first I might argue that his advocacy of behaviorism and
 reinforcement was only an eccentricity, that his program must be coded with
 some greater complexity than simple reinforcement to produce true learning.
 Now imagine that everyone got to examine his code, and after studying it I
 discovered that it was amazingly simple in concept.  For example, suppose
 the programmer only used 32 methods to combine or integrate referenced data
 objects and these 32 methods were used randomly in the program to combine
 behaviors that were to be reinforced by training.  At first, I might argue
 that the 32 simple internal methods of combining data or references weren't
 truly behaviorist, because behaviorism is concerned only with the observable
 gross behavior of an animal.  My criticism would be somewhat valid, but it
 would quickly be seen as petty, uninstructive quibbling: in this imagined
 scenario the program is so effective, and the use of 32 simple integration
 methods along with reinforcement of observable 'behaviors' so simple, that
 my objection to the programmer's explanation of the paradigm would be
 superficial.  I might claim that it would be more objective to drop the term
 behaviorist in favor of an explanation in familiar computational terms, but
 even this would be a minor sub-issue compared to the implications of the
 paradigm's success.
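
 (To make the thought experiment concrete, here is a toy rendering of the
 fictional architecture in Python. Everything beyond what the story
 specifies, namely 32 combination methods, random selection, and
 reinforcement of observable behavior, is invented purely for illustration.)

     import random

     # The fictional program's 32 simple combination methods. Their content
     # is unspecified in the story; these arithmetic/bitwise toys are
     # placeholders for "methods to combine or integrate data objects".
     METHODS = [lambda a, b, k=k: (a + b + k) % 256 for k in range(16)] \
             + [lambda a, b, k=k: (a ^ b ^ k) for k in range(16)]

     weights = [1.0] * len(METHODS)  # reinforcement weights per method

     def behave(a, b):
         """Combine two data objects with a randomly chosen method,
         biased by whatever reinforcement has been delivered so far."""
         i = random.choices(range(len(METHODS)), weights=weights)[0]
         return i, METHODS[i](a, b)

     def reinforce(i, reward):
         """The trainer sees only the observable output, yet credit lands
         on an internal method, which is the gap the objection below
         turns on."""
         weights[i] = max(0.1, weights[i] + reward)

     for step in range(1000):
         i, out = behave(random.randrange(256), random.randrange(256))
         # An arbitrary trainer preference, standing in for a desired behavior.
         reinforce(i, 0.5 if out % 7 == 0 else -0.01)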



 The programmer in my fictional story could claim that the simplest
 explanation of the paradigm qualifies as the most objective description.
 While he did write 32 simple internal operations, the program had to be
 trained through the reinforcement of its observable 'behavior', so it would
 qualify as a true behavioral-reinforcement method.  People could make the
 case that the program might be improved by including more sophisticated
 methods, but the simplest paradigm that produces the desired effects would
 still suffice as an apt description of the underlying method.



 Now there are a number of reasons why I do not think that a simple
 reinforcement scheme like the one in my story will be the first to produce
 higher intelligence, or will even be feasible as a model for general use.
 The most obvious is that the number of combinations possible when data
 objects are strung together would be so great that the program would be
 very unlikely to stumble on insight through a simplistic reinforcement
 method as described.  And it would be equally unlikely that the trainer
 would have a sufficient grasp of the complexity of possible combinations to
 effectively guide the program toward that improbable goal.  To put it
 another way, the simplistic reinforcement paradigm is really only a
 substitute for insightful ...
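
 (A quick back-of-the-envelope count, with numbers of my own choosing,
 shows how fast the space blows up when data objects are strung together
 by binary combination methods:

     from math import comb

     def catalan(n):
         return comb(2 * n, n) // (n + 1)

     def expressions(objects, methods, leaves):
         """Distinct binary combination trees: tree shapes, times choices
         of leaf objects, times choices of method at each internal node."""
         return catalan(leaves - 1) * objects**leaves * methods**(leaves - 1)

     # With only 100 data objects and the story's 32 methods:
     for k in range(2, 9):
         print(k, "objects combined:", f"{expressions(100, 32, k):.2e}")

 Eight objects deep this already exceeds 10^29 candidate combinations,
 which no trainer could meaningfully sample.)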

Re: [agi] Simplistic Paradigms Are Not Substitutes For Insight Into Conceptual Complexity

2008-05-31 Thread Jim Bromer



----- Original Message -----
From: Tudor Boloni [EMAIL PROTECTED]
Jim, we will eventually stumble upon this conceptual complexity, namely a
few algorithms that exceed the results of the algorithms human intelligence
uses (the ones created through slow evolution and relatively fast learning).
---
That is not what I was talking about.
Jim Bromer


  

