Meta-logic might be a good theoretical framework to advance AGI a
little.  I don't mean that the program would have to use some sort of
pure logic; I am using the term as an idea or an ideal.  Meta-logic
does not resolve the P=?NP question, but it makes a lot of sense.
It would explain how people can believe that they do one thing even
though, when you look at their actions in slightly different
situations, it seems obvious that they don't.  It also explains how
people can use logic to change the logic of their actions or the logic
of their thoughts.  It explains how knowledge seems relativistic.  And
it explains how we can adapt to a complicated situation even though we
walk around blindered most of the time.

Narrow AI is powerful because a computer can run a long line of narrow
calculations and hold numerous previous results until they are needed.
But when we think of AGI we think of problems like recognition and
search, which are complex: most possible results open up to numerous
further possibilities, and so on.  A system of meta-logic (literal or
effective) allows an AGI program to explore numerous possibilities and
then use the results of those limited explorations to change the
systems of procedural logic that can be used.  I believe that most AGI
theories are effectively designed to act like this.  The reason I am
mentioning it is that I think meta-logic makes so much sense that it
should be emphasized as a simplifying theory.  The theory of
probabilistic reasoning, for example, emphasizes another method of
simplifying AGI problems.
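To make the idea concrete, here is a minimal sketch of that two-level loop: an object-level rule set, a limited exploration over it, and a meta-level step that uses the exploration's results to change which rules are available at all.  Everything here (the `Rule` class, the length threshold, the rule names) is a hypothetical illustration, not an existing system.

```python
# Sketch: object-level rules plus a meta-level step that rewrites the
# rule set based on the results of a limited exploration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    applies: Callable[[str], bool]   # does this rule fire on a state?
    step: Callable[[str], str]       # object-level transformation

def explore(state: str, rules: list[Rule], depth: int) -> list[str]:
    """Limited exploration: apply every applicable rule up to `depth` steps."""
    frontier, seen = [state], []
    for _ in range(depth):
        frontier = [r.step(s) for s in frontier for r in rules if r.applies(s)]
        seen.extend(frontier)
    return seen

def meta_adjust(rules: list[Rule], results: list[str]) -> list[Rule]:
    """Meta-level step: use exploration results to change which
    object-level rules exist, rather than merely filtering states."""
    if any(len(s) > 4 for s in results):
        # exploration showed runaway growth, so remove the rule causing it
        return [r for r in rules if r.name != "grow"]
    return rules

rules = [
    Rule("grow", lambda s: True, lambda s: s + "x"),
    Rule("flip", lambda s: len(s) > 1, lambda s: s[::-1]),
]
results = explore("ab", rules, depth=3)
rules = meta_adjust(rules, results)   # the logic itself has changed
print([r.name for r in rules])        # "grow" has been dropped
```

The point of the sketch is the contrast in the post: the meta-level step does not just narrow the set of reachable states, it alters the dynamics that generate them.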

Thinking about a theory in a new way has some benefits similar to the
formalization of a system of theories.

Our computers already use meta-logic.  Since a program can acquire
another program, the logic that it uses can itself be acquired.  The
rules of the meta-logic, which can be more or less general, can be
acquired as well.  You don't want the program to literally forget
everything it ever learned (unless you want to seriously interfere
with what it is doing), but one thing that is missing in a program
like Cyc is that its effective meta-logic is almost never acquired
through learning.  It never learns to change its logical methods of
reasoning, except in a very narrow way as a carefully introduced
subject reference.  Isn't that the real problem of narrow AI?  The
effects of new ideas have to be carefully vetted or constrained in
order to prevent the program from messing up what it has already
learned or been programmed to do.

So this idea of meta-logic is not that different from what most people
in this group think of using anyway.  The program goes through some
kind of sequential operations, and new ways to analyze the data are
selected as it goes through these sequences.  But rather than seeing
these states just as sub-classes of all possible states (as if the
possibilities were only being filtered out as the meaning of the
situation narrows in), the concept of meta-logic can be used to change
the dynamics of the operations at any level of analysis.

However, I also believe that this kind of system has to have
cross-indexed paths that will allow it to best use the analysis that
has already been done even when it has to change its path of
exploration and analysis.
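One simple way to read "cross-indexed paths" is as a cache of analysis results indexed by subproblem rather than by exploration path, so that abandoning one path does not throw away work that a new path overlaps with.  The sketch below uses segment-summing as a stand-in for the expensive analysis; the indexing idea, not the problem, is the point, and all names are illustrative.

```python
# Sketch: index analysis results by subproblem so a changed exploration
# path can reuse the analysis already done on shared subproblems.

from functools import lru_cache

calls = 0  # counts how many times the expensive analysis actually runs

@lru_cache(maxsize=None)
def analyze(segment: tuple[int, ...]) -> int:
    """Stand-in for an expensive per-segment analysis."""
    global calls
    calls += 1
    return sum(segment)

def explore(path: list[tuple[int, ...]]) -> int:
    """Analyze every segment along one exploration path."""
    return sum(analyze(seg) for seg in path)

a, b, c = (1, 2), (3, 4), (5, 6)
explore([a, b])   # first path: analyzes a and b
explore([b, c])   # changed path: b is reused from the cross-index
print(calls)      # 3 analyses performed, not 4
```

Keying the cache on the subproblem itself, rather than on the path that led to it, is what lets the system change its path of exploration without losing the analysis it has already done.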

Jim Bromer


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now