I just read Abram Demski's comments about Loosemore's
"Complex Systems, Artificial Intelligence and Theoretical
Psychology," at
http://dragonlogic-ai.blogspot.com/2008/03/i-recently-read-article-called-complex.html
I thought Abram's comments were interesting. I just wanted to make a few
criticisms. One
is that a logical or rational approach to AI does not necessarily mean that it
would be a fully constrained logical-mathematical method. My point of view
is that if you use a
logical or a rational method with an unconstrained inductive system (open and
non-monotonic), then the logical system will, for any likely use, act like a
rational-non-rational system no matter what you do. So when I, for example,
start thinking about whether or not I
will be able to use my SAT system (logical satisfiability) for an AGI program,
I am not thinking of an implementation of a pure Aristotelian-Boolean system of
knowledge. The system I am currently
considering would use logic to study theories and theory-like relations that
refer to concepts about the natural universe and the universe of thought, but
without the expectation that those theories could ever constitute a sound
strictly logical or rational model of everything. Such ideas are so beyond the
pale that I do not even consider the
possibility to be worthy of effort. No
one in his right mind would seriously think that he could write a computer
program that could explain everything perfectly without error. If anyone
seriously talked like that I would
take it as an indication of some significant psychological problem.
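The post does not say how the SAT system actually works, so the following is only a generic illustration of what a logical-satisfiability core amounts to, not a reflection of Bromer's design: a minimal DPLL-style checker over DIMACS-style clauses (lists of nonzero integers, where 3 means "variable 3 is true" and -3 means it is false). Every name and detail here is my own sketch.

```python
# Minimal DPLL-style satisfiability checker -- a generic illustration
# only; nothing here reflects the SAT system mentioned in the post.
# Clauses are lists of nonzero ints: 3 = var 3 true, -3 = var 3 false.

def simplify(clauses, assignment):
    """Drop satisfied clauses and falsified literals; None on conflict."""
    out = []
    for clause in clauses:
        new_clause = []
        satisfied = False
        for lit in clause:
            var, want = abs(lit), lit > 0
            if var in assignment:
                if assignment[var] == want:
                    satisfied = True   # clause already true under assignment
                    break
            else:
                new_clause.append(lit)
        if satisfied:
            continue
        if not new_clause:             # every literal falsified: conflict
            return None
        out.append(new_clause)
    return out

def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None."""
    if assignment is None:
        assignment = {}
    clauses = simplify(clauses, assignment)
    if clauses is None:                # conflict under current assignment
        return None
    if not clauses:                    # all clauses satisfied
        return assignment
    for clause in clauses:             # unit propagation: forced literals
        if len(clause) == 1:
            lit = clause[0]
            return dpll(clauses, {**assignment, abs(lit): lit > 0})
    var = abs(clauses[0][0])           # branch on an unassigned variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None
```

For example, `dpll([[1, 2], [-1, 2], [-2]])` returns None (unsatisfiable), while `dpll([[1, -2], [2]])` finds a model. Real SAT systems layer far more machinery (watched literals, clause learning) on this skeleton, which is part of why "using SAT for AGI" is an engineering question and not just a logical one.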
I also take it as a given that AI would suffer from the
problem of computational irreducibility if its design goals were to completely
comprehend all complexity using only logical methods in the strictest sense.
However, many complex ideas may be simplified and these simplifications can be
used wisely in specific circumstances. My belief is that many interrelated
layers of simplification, if they
are used insightfully, can effectively represent complex ideas that may not be
completely understood, just as we use insightful simplifications while trying
to discuss something that is not completely understood, like intelligence. My
problem with developing an AI program is
not that I cannot figure out how to create complex systems of insightful
simplifications, but that I do
not know how to develop a computer program capable of sufficient complexity to
handle the load that the system would produce. So while I agree with Demski's
conclusion that, "there is a way to
salvage Loosemore's position, ...[through] shortcutting an irreducible
computation by compromising, allowing the system to produce less-than-perfect
results," and, "...as we tackle harder problems, the methods must
become increasingly approximate," I do not agree that the contemporary
problem is with logic or with the complexity of human knowledge. I feel that
the major problem I have is that writing a really, really complicated computer
program is really, really difficult.
The problem I have with people who talk about ANNs or
probability nets as if their paradigm of choice were the inevitable solution to
complexity is that they never discuss how their approach might actually handle
complexity. Most advocates of ANNs or probability deal with the problem of
complexity as if it were a problem that either does not exist or has already
been solved by whatever tired paradigm they are advocating. I don't get that.
Again, the major problem I have is that writing a really, really
complicated computer program is really, really difficult. But perhaps Abram's
idea could be useful
here. As the program has to deal with
more complicated collections of simple insights that concern some hard subject
matter, it could tend to rely more on approximations to manage those complexes
of insight.
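The closing idea — lean harder on approximation as the subject matter gets harder — can be made concrete with a toy of my own invention (not anything from the post or from Demski): count the models of a propositional formula exactly while exhaustive enumeration is cheap, and fall back to a Monte Carlo estimate once the variable count makes exact enumeration expensive. The threshold and sample count are arbitrary illustrative choices.

```python
# Toy illustration of "becoming more approximate as problems get harder":
# exact model counting for small formulas, sampled estimate for large ones.
# Clause encoding: lists of nonzero ints, 3 = var 3 true, -3 = var 3 false.
import itertools
import random

def satisfies(clauses, model):
    """True if every clause has at least one literal true under model."""
    return all(any(model[abs(l)] == (l > 0) for l in c) for c in clauses)

def count_models(clauses, variables, exact_limit=20, samples=20000):
    n = len(variables)
    if n <= exact_limit:
        # Cheap enough: enumerate all 2^n assignments exactly.
        return sum(
            satisfies(clauses, dict(zip(variables, bits)))
            for bits in itertools.product([False, True], repeat=n)
        )
    # Too expensive: estimate the satisfying fraction from random samples
    # and scale up, accepting a less-than-perfect answer.
    hits = sum(
        satisfies(clauses, {v: random.random() < 0.5 for v in variables})
        for _ in range(samples)
    )
    return round(2 ** n * hits / samples)
```

The same question gets an exact answer when it is tractable and a compromised one when it is not, which is the shape of the shortcut Demski describes.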
Jim Bromer
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now