Computer programs are dependent on symbolic components. The symbolic components can be interpreted or transformed into instructions for the computer. Each symbol either has a one-to-one correspondence with a short series of instructions, or the symbols are functional in the sense that a component can modify some of the other symbols. The functions are componential, but the final symbol string fully specifies all the possible modifications in relation to the transformed instruction string. This fairly simple model is made more flexible for most typical programs by partial run-time specification, where some parts of the symbolic components are specified at run time by user interaction or by interaction with other kinds of data that are input as the program is running. This is done by declaring that some of the symbolic components refer to types, which limit the range of values and the way those values can modify other parts of the symbolic components. If an unexpected value occurs, it can disrupt the expected behavior of the program.
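
To make this concrete, here is a minimal sketch in Python; the sensor-reading scenario and every name in it are hypothetical, chosen only to illustrate the point:

    # The annotations document the type each symbolic component is
    # meant to hold; the int() call below actually enforces it.
    def scale_reading(raw: int, factor: int) -> int:
        return raw * factor

    # Partial run-time specification: the symbols are fixed when the
    # program is written, but this value arrives only at run time.
    user_input = input("Enter a sensor reading: ")
    try:
        reading = int(user_input)           # value must fit the declared type
        print(scale_reading(reading, 10))
    except ValueError:
        # An unexpected value (e.g. "abc") falls outside the type's
        # range of values and disrupts the expected behavior.
        print("Unexpected value; cannot proceed.")
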
In AGI we have to deal with the thoughts and concepts of the mind. I believe that these concepts are relativistic, especially as compared to the traditional symbolic-component languages of a computer program. The symbols that are intended to refer to, or should refer to, concepts of the mind may require unknown methods to transform them into appropriate behavior. Even if a future AGI program has the potential to transform the symbolic conceptual components appropriately, that process is usually dependent on a very specific set of previously learned methods. Because of conceptual relativity and complexity, an AGI program can have a great deal of difficulty selecting the correct methods to act on a particular problem.
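
A toy sketch of that dependence (the method table and the concept labels here are hypothetical, invented only for illustration):

    # Previously learned methods, keyed by the kind of concept
    # the program has been prepared for.
    learned_methods = {
        "arithmetic": lambda operands: sum(operands),
        "comparison": lambda operands: max(operands),
    }

    def act_on(concept, operands):
        method = learned_methods.get(concept)
        if method is None:
            # Conceptual relativity: a concept that does not match any
            # previously learned method leaves the program unable to act.
            raise LookupError("no learned method for concept: " + concept)
        return method(operands)

    print(act_on("arithmetic", [2, 3]))       # works: 5
    try:
        print(act_on("analogy", [2, 3]))      # no method was ever learned
    except LookupError as err:
        print(err)
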
When we think about competency in human thought we usually think of familiar levels of achievement. Children are able to master certain activities and mental skills, but they are not capable of doing many of the things that adults do. However, because the contemporary level of achievement in AGI has been surprisingly low, AGI programs have severe competency problems at very fundamental levels of artificial mentation. At the same time we see computers that can solve amazing mathematical problems, forecast the weather, fly planes and spacecraft, and beat human beings at chess or at extremely challenging encyclopedic memory games like the television program Jeopardy, and it should make us wonder what is going on. If computer programs can do all this, why hasn't the foundational level of competency for AGI gotten any further than it has?
I believe it is because we have not made a thorough study of how ideas and concepts work in terms of computer programming. I realize that many programmers just want me to tell them how to do it or else go away, but it should be obvious that this kind of attitude demonstrates a lack of individual initiative. I am telling you what I think is missing from our current analysis of AGI programming. The reason this situation has occurred is that the contemplation of how ideas and concepts work has been dismissed as pre-science for the last 120 years.
In the last quarter of the nineteenth century the science of psychology emerged, as a result of the discovery that electric current travelling through nerves caused muscles to contract and the growing recognition that processed drugs could cause different kinds of psychological and mental reactions. This gave rise to new theories in psychology, and many of these theories could be tested either through observation, including observation of the effectiveness of treatments, or through creatively contrived experiments. Although these experiments and observations could be controversial, the simple fact was that concepts and ideas could not be observed, and the philosophical methods that generated new insights were dismissed unless some kind of carefully controlled experiment or observation of behavior could be used as confirming evidence.
Then, in the mid twentieth century, when electronic digital computers first appeared, it was thought that logical methods, which were foundational to the principles of the computer, were also the basis of scientific thought, so it was imagined that logical methods would quickly lead to effective AI. When weighted methods were tried more intensively, it was better understood that the alternative to weighted reasoning was not limited to logical reasoning, and so the generalization of non-weighted reasoning was better seen as discrete reasoning.
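
To make the distinction concrete, here is a toy contrast of my own, not drawn from any particular system of that era: weighted reasoning combines evidence numerically, while discrete reasoning manipulates distinct symbols and rules, with logical inference as just one special case.

    # Weighted reasoning: evidence is combined numerically and
    # judged against a threshold.
    def weighted_decision(features, weights, threshold=0.5):
        score = sum(f * w for f, w in zip(features, weights))
        return score > threshold

    # Discrete reasoning: distinct symbols and rules; logical
    # inference is one special case of this broader category.
    def discrete_decision(facts):
        rules = [({"bird", "healthy"}, "can_fly")]
        for conditions, conclusion in rules:
            if conditions <= facts:        # all conditions present
                return conclusion
        return None

    print(weighted_decision([0.9, 0.2], [0.6, 0.4]))   # True (0.62 > 0.5)
    print(discrete_decision({"bird", "healthy"}))      # can_fly
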
Then, when neural networks were better developed, they were seen as another kind of computational reasoning. Other network methods, like Bayesian networks and semantic networks, were also developed. All of these methods could be applied to real-world situations, and the carefully staged demonstrations of achievements on small problems were impressive. As these methods have been applied to big data and the World Wide Web with more recent computers, more impressive achievements have been demonstrated. However, what has been missing from this mix is incremental achievement in the competency of AGI. I feel that the careful study of how ideas and concepts work, using computer programs, has also been seriously neglected.
It has been very difficult for me to understand this problem in just the right way to explain it to other people or to use it in an AGI project. As AI methods were developed, they were applied to real-world problems. But if the way human beings deal with concepts about real and imaginary world situations is not the same as a particular paradigm of AI/AGI, then those applications were never sufficient to fully detail how intelligence works. I believe that we have overwhelming evidence that this is exactly what has happened. If, on the other hand, the methods of intelligence are just too complicated for digital computers, then why haven't we seen more incremental achievement in the foundations of AGI competency? There could be some kind of complexity threshold for foundational AGI competency, but there is no hard evidence of that. I am almost certain that the problem is that for the past 120 years the study of how ideas and concepts work in the mind has been dismissed as trivial. You can even see that in these AGI discussion groups.
These ideas about how concepts work in the mind should be carefully aligned with actual programming experiments, but the kind of complaints I have gotten when I have tried to explain these ideas can be partly explained as a kind of ingrained resistance that came from years of the mind-set that thinking about ideas and concepts belongs to the pre-scientific stage of the philosopher's imagination. Is that presumption going to turn out to be nonscience? Is the imagination a necessary part of the scientific method?

Jim Bromer