My idea that an AGI program has to have an executive function or
process that is itself very simple, yet still capable of AGI, seems
obvious enough.  It has to be lightweight or simple because the more
complicated it gets, the greater its potential to create logjams.  It
might turn out that much of the potential for logjams comes from
programming errors, but just as every little detail can add complexity
to the programming, each detail can also add to the complexity of the
AGI program as it runs.

Secondly, the recognition that the integration of Conceptual Structure
is the key to making it work is also a potential key to keeping the
AGI part relatively simple.  Conceptual Structure is not a blanket
abstraction that the programmer completely details in his program, but
a more creative structure that the program must build for itself.  So,
yes, Conceptual Structure is an abstract system - or more accurately,
it will consist of multiple implementations of abstract systems - but
those systems will be generated by the AGI program as it runs.  This
idea of Conceptual Structure, which is based on the fact that concepts
play roles when integrated with other concepts, has to be kept simple
or else it will be too complicated and too slow for the program to
manage.
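Just to illustrate the role-playing idea in the simplest possible
terms, here is a very rough sketch in Python.  Every name in it is a
placeholder of my own, not part of any working program; it only shows
the shape of a structure in which one concept fills a role inside
another.

class Concept:
    def __init__(self, name):
        self.name = name
        self.roles = []   # (role_name, composite Concept) pairs this concept fills elsewhere
        self.parts = {}   # role_name -> Concept filling that role within this concept

    def integrate(self, role_name, part):
        # Record that 'part' plays the role 'role_name' inside this concept.
        self.parts[role_name] = part
        part.roles.append((role_name, self))

# Example: a "chase" concept integrating "dog" and "ball" in different roles.
chase = Concept("chase-event")
chase.integrate("agent", Concept("dog"))
chase.integrate("object", Concept("ball"))

The point is only that the bookkeeping for roles can be kept this
small; the hard part is deciding which integrations to make.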

Finally, the program has to use rational creativity and some kind of
trial and error method.  But the interesting thing about this theory
is that now that I have an initial conjecture about Conceptual
Structure, I should be able to craft it with as much control as I
need.  Presuming that at first I will need to find a way to input many
of the details of how concepts should be integrated, my first
endeavors would not really be AI or AGI even if my current theory
works.  But at some point I hope to figure out a way for the program
to learn how to determine more of the steps needed to intelligently
integrate conceptual structures.
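Along the same lines, here is a toy sketch of what that early,
hand-fed stage might look like: the integration rules are supplied by
the programmer, and the program just tries them until one produces a
structure that passes some acceptance test.  Again, the function names
and the whole arrangement are hypothetical illustrations, not a
design.

import random

def integrate_by_trial(concepts, rules, is_acceptable, max_tries=100):
    # 'rules' are hand-written functions that each try to build a composite
    # concept out of the given concepts; a rule may return None if it does
    # not apply.  The loop is plain trial and error over those rules.
    for _ in range(max_tries):
        rule = random.choice(rules)
        candidate = rule(concepts)
        if candidate is not None and is_acceptable(candidate):
            return candidate
    return None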

One theory that was never established, even weakly, by experiment was
that once you figured out how to create an AI program, it should
eventually become more adept at learning new things.  I believe the
theory of Conceptual Structures would make that feasible - if the
theory is any good at all.  And this is how you could test the program
against competing AGI programs: it could learn new things and
integrate them for as long as you could teach it.

Jim Bromer

