I finally found what I
was looking for. Contemporary AGI
programs suffer from complexity problems. The more information a program
is given or has acquired, the more difficult it can be to find the right
data for the situation. Unfortunately,
this complexity problem seems
to affect all stages of assessment and so existing AGI programs fail at even 
elementary
levels of competency. This is unacceptable. I came to the conclusion that
either animal minds have a sophisticated system for solving complexity
problems, or intelligence itself is the solution to most complexity. The
first does not seem likely, but without even elementary competency I could
not find evidence for the second. I could make an argument for either
case, but now I think I have insight into a possible solution for
elementary competency.

As I wrote my summary I
realized that I was constantly contradicting my own opinions.  For example, I 
do believe that if a problem
is super complicated then you need to study it carefully before you get
enmeshed in it.  Yet I am also critical
of becoming overly preoccupied with purely abstract generalizations.  We should 
not expect too much from an
elaboration of pure conjecture.  The
generalizations and the programming should be derived from the experience of
working with an extensive number of cases. 
These principles can be easily integrated, but there was something that
still bothered me about this. I could not really pin it down until
yesterday.

Complicated problems which do not lend themselves to the discovery of solutions
through trial and error have to be carefully studied.  We need to bring 
rational creativity to those
kinds of problems.  Rational creativity,
where possible solutions are designed according to a better knowledge of the
characteristics of the problems, can enhance the likelihood that incremental
trial and error methods will work.  The
conjectures have to be developed from the study and implementation of
individual cases. But, contrary to an implication of my summary, many of
us have spent a great deal of time thinking about the application of our
theories to real-world problems, so I could not understand why I bobbled
that part of my summary so badly. I now recognize
that human beings and other
animals have methods to deal with ‘ideas’ and ‘concepts’ of the mind and these
need to be considered as well. I think that the ways we deal with ideas
and concepts have to be studied more thoughtfully, and good AGI methods
(subprograms) need to be developed to emulate some of these abilities. So
while the major AI paradigms have been applied to real world problems they have
not been insightfully applied to these hidden systems of how we work with
ideas.  I believe that this issue may be
a part of the best way to differentiate between “narrow AI” and AGI.  Narrow AI 
programs are unable to deal with
‘ideas’ and ‘concepts’ in ways that emulate or approximate how human beings do.

As I have explained, there has been a great deal of bias against the
notion of 'ideas' as a valid psychological theory or as a valid
computational theory. This is why there is not much validity to the
criticism that the summary was just cog.sci stuff. Developing the
programming to work with concepts and ideas is a major part of what will
distinguish AGI from Narrow AI. The lack of recognition that there might
be systems of the mind that deal with the usages of 'ideas' and 'concepts'
reflects a longstanding academic bias. The complaint that my summary did
not describe any computational implementations shows that the bias against
the study of 'ideas' is so well rooted in our scientifically aware society
that it is unconscious. And any theory that goes against the predominant
bias is going to sound like a manifesto of some sort.

One criticism that was made was that my
summary wasn’t even a theory.  This is
relevant to what I am saying here.  The
principle that a ‘theory’ has to be based on ideas which the experts in the
field can test in actual experiments is somewhat like logical positivism.  You 
might call it experimental
positivism.  A theory is not a theory
unless it can be tested through experiment. 
We also have paradigm positivism. 
A theory is not a theory unless it can be tested within our paradigm.
(Experimental positivism and paradigm positivism are perfectly fine, but
they cannot be taken as universal tests of whether a theory is appropriate
for the study of its subject.) Well, anyway, my conclusion is that, yes, I
agree that I did not provide enough details about implementation in my
"Summary of my Theories About AGI," but the summary did provide some
sketchy implementation considerations about important issues.

So to summarize, I
believe that an emulation of how human beings work with ideas and concepts has
been seriously missing in AGI.  While some
methods were implicitly studied in the attempts to develop AGI programs, a
careful examination of the conceptual development has been unconsciously and
consciously suppressed during the last century. 
The difference between “narrow AI” and AGI can be described as an ability
to work with ideas in ways that approximate the creativity and insightfulness
that human beings and animals display. If my theory is right, then my
highlighting of this new area of research should give me an advantage
that other contemporary AGI research lacks.

So, the summary of my best ideas about AGI: An AGI program is going to
collect a lot of data. My guess has always been that about half the
database needs to be dedicated to providing indexing into and across the
other data.
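
To make that a little more concrete, here is a very rough Python sketch of
the kind of split I have in mind. The class, the feature keys, and the
resulting ratio are only illustrative guesses on my part, not a worked-out
design:

import itertools
from collections import defaultdict

class ConceptStore:
    """Toy store where a large share of the entries are indexes into the rest."""
    def __init__(self):
        self.items = {}                # item_id -> set of feature strings
        self.index = defaultdict(set)  # feature -> item_ids (indexing *into* the data)
        self.cross = defaultdict(set)  # feature pair -> item_ids (indexing *across* items)

    def add(self, item_id, features):
        self.items[item_id] = set(features)
        for f in features:
            self.index[f].add(item_id)
        for pair in itertools.combinations(sorted(features), 2):
            self.cross[pair].add(item_id)

    def index_share(self):
        """Fraction of all stored entries that are index entries."""
        total = len(self.index) + len(self.cross) + len(self.items)
        return (len(self.index) + len(self.cross)) / total if total else 0.0

store = ConceptStore()
store.add("dog", ["animal", "barks", "four-legs"])
store.add("cat", ["animal", "meows", "four-legs"])
print(store.index["animal"])                  # items reachable through one feature
print(store.cross[("animal", "four-legs")])   # items sharing a pair of features
print(round(store.index_share(), 2))          # how much of the store is indexing
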
Artificial imagination and creativity are absolutely necessary for AGI.
Typically, the best application of imagination will be done through
rational creativity, where the selection of 'concepts' to be analyzed and
synthesized will be governed by categorical relations.
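
To illustrate the kind of selection I mean, here is a toy Python sketch.
The relation table and the ranking rule are invented for the example; they
are not meant as the actual mechanism:

# Tiny table of categorical ("is-a") relations -- invented for illustration.
IS_A = {
    "hammer": "tool", "screwdriver": "tool", "wrench": "tool",
    "apple": "food", "bread": "food",
    "drill": "tool",
}

def rational_candidates(target, concepts, relations):
    """Rational creativity: propose concepts to analyze or synthesize with
    the target by favoring those that share a categorical relation, rather
    than by blind trial and error."""
    target_cat = relations.get(target)
    same_cat = [c for c in concepts if c != target and relations.get(c) == target_cat]
    others   = [c for c in concepts if c != target and relations.get(c) != target_cat]
    return same_cat + others   # same-category concepts are tried first

concepts = list(IS_A)
print(rational_candidates("hammer", concepts, IS_A))
# -> tool-like concepts first: ['screwdriver', 'wrench', 'drill', 'apple', 'bread']
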
The typical scenario of recognition or analysis is a back-and-forth
process where input is analyzed to find good possible matches with
previously acquired information, and then the selected previously learned
information will be imaginatively projected back onto the input to see if
it can be used to explain the data. Then this process will be refined and
repeated until a fairly good interpretation of the data has been found.
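
A minimal sketch of that back-and-forth loop, with made-up concept names
and a deliberately trivial scoring rule, just to show the shape of the
process:

def interpret(input_features, memory, coverage_goal=0.9):
    """Back-and-forth recognition: match input against stored concepts,
    project the best match back onto the input, keep whatever it explains,
    and repeat on the remainder until the interpretation is good enough."""
    unexplained = set(input_features)
    interpretation = []
    while unexplained:
        # Analysis: find the stored concept that best matches what is left.
        best = max(memory, key=lambda name: len(memory[name] & unexplained))
        explained = memory[best] & unexplained
        if not explained:
            break                      # nothing in memory helps any further
        # Imaginative projection: accept the concept as an explanation of
        # the features it covers, then refine and repeat.
        interpretation.append(best)
        unexplained -= explained
        covered = 1 - len(unexplained) / len(input_features)
        if covered >= coverage_goal:   # "fairly good interpretation" reached
            break
    return interpretation, unexplained

memory = {
    "wheel":  {"round", "rubber"},
    "window": {"glass", "transparent"},
    "car":    {"metal-body", "round", "rubber", "glass"},
}
print(interpret({"metal-body", "round", "rubber", "glass", "transparent"}, memory))
# -> (['car', 'window'], set())  on this toy example
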
One of the most productive forms of learning is accomplished through
structural integration insight, where a component of knowledge can be used
to explain many different things at once. This structural insight can then
be used as a part of recognition, analysis, and reaction.

But the missing tier of contemporary AGI is an emulation of how human
beings (and other animals) work with concepts and ideas.

Jim Bromer

