Abram

I am pressed for time right now, but now that I am aware of your post I want to let you know that I will reply soon. I think many of your concerns come from seeing a different message in the paper than the one I intended.


Richard Loosemore



Abram Demski wrote:
To be honest, I am not completely satisfied with my conclusion in the
post you refer to. I'm no longer sure that the fundamental split
between logical and messy methods should fall at the line between
perfect and approximate methods. That is one type of messiness, but
only one. I think you are referring to a related but different
messiness: not knowing what kind of environment your AI is dealing
with. Since we don't know which kinds of models will fit the world
best, we should (1) trust our intuitions to some extent, and (2) try
things and see how well they work. This is what Loosemore suggests.

On the other hand, I do not want to agree with Loosemore too strongly.
Mathematics and mathematical proof are very important tools, and I
feel that he wants to reject them. His image of an AGI seems to be a
system built up out of totally dumb pieces, with intelligence emerging
unexpectedly. Mine is a system built out of somewhat smart pieces,
cooperating to build somewhat smarter pieces, and so on. Each piece
has provable smarts.
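
To make that concrete, here is a toy sketch in Python (the example is
mine and purely illustrative, not an actual architecture): each piece
carries a postcondition it can check, and the piece built on top of it
inherits that guarantee.

    # Toy illustration: a small piece with a checked postcondition,
    # and a "smarter" piece composed from it.

    def merge(left, right):
        """Merge two sorted lists; postcondition: output is sorted."""
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:]); out.extend(right[j:])
        assert all(out[k] <= out[k + 1] for k in range(len(out) - 1))
        return out

    def sort(xs):
        """A smarter piece built from the verified merge piece; its
        sortedness follows from merge's postcondition."""
        if len(xs) <= 1:
            return list(xs)
        mid = len(xs) // 2
        return merge(sort(xs[:mid]), sort(xs[mid:]))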

On Sat, Jun 21, 2008 at 6:54 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
I just read Abram Demski's comments about Loosemore's, "Complex Systems,
Artificial Intelligence and Theoretical Psychology," at
http://dragonlogic-ai.blogspot.com/2008/03/i-recently-read-article-called-complex.html

I thought Abram's comments were interesting. I just wanted to make a few
criticisms. One is that a logical or rational approach to AI does not
necessarily mean a fully constrained logical-mathematical method. My
point of view is that if you use a logical or rational method with an
unconstrained inductive system (open and not monotonic), then the
logical system will, for any likely use, act like a rational-non-rational
system no matter what you do. So when I, for example, start thinking about
whether or not I will be able to use my SAT system (logical satisfiability)
for an AGI program, I am not thinking of an implementation of a pure
Aristotelian-Boolean system of knowledge. The system I am currently
considering would use logic to study theories and theory-like relations that
refer to concepts about the natural universe and the universe of thought,
but without the expectation that those theories could ever constitute a
sound, strictly logical or rational model of everything. Such ideas are so
far beyond the pale that I do not even consider the possibility worthy of
effort. No one in his right mind would seriously think that he could write
a computer program that could explain everything perfectly without error.
If anyone seriously talked like that, I would take it as an indication of
some significant psychological problem.
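
For reference, the kind of core procedure I mean by a SAT system is the
standard DPLL search. The sketch below is the generic textbook version
in Python, for illustration only; it is not my actual system.

    # A minimal DPLL-style satisfiability checker. A formula is a list
    # of clauses; each clause is a set of integer literals
    # (positive = variable, negative = its negation).

    def dpll(clauses, assignment=None):
        if assignment is None:
            assignment = {}
        # Unit propagation: a one-literal clause forces that literal.
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        while units:
            lit = units.pop()
            assignment[abs(lit)] = lit > 0
            reduced_clauses = []
            for c in clauses:
                if lit in c:
                    continue                # clause already satisfied
                reduced = c - {-lit}
                if not reduced:
                    return None             # empty clause: contradiction
                reduced_clauses.append(reduced)
            clauses = reduced_clauses
            units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not clauses:
            return assignment               # every clause satisfied
        # Split: branch on some remaining variable, trying both values.
        var = abs(next(iter(clauses[0])))
        for choice in (var, -var):
            result = dpll([set(c) for c in clauses] + [{choice}],
                          dict(assignment))
            if result is not None:
                return result
        return None

    # (x1 OR x2) AND (NOT x1 OR x2) is satisfiable, e.g. with x2 = True.
    print(dpll([{1, 2}, {-1, 2}]))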



I also take it as a given that AI would suffer from the problem of
computational irreducibility if its design goal were to completely
comprehend all complexity using only logical methods in the strictest sense.
However, many complex ideas may be simplified, and these simplifications can
be used wisely in specific circumstances. My belief is that many
interrelated layers of simplification, if they are used insightfully, can
effectively represent complex ideas that may not be completely understood,
just as we use insightful simplifications while trying to discuss something
that is not completely understood, like intelligence. My problem with
developing an AI program is not that I cannot figure out how to create
complex systems of insightful simplifications, but that I do not know how
to develop a computer program capable of sufficient complexity to handle the
load that the system would produce. So while I agree with Demski's
conclusion that "there is a way to salvage Loosemore's position,
...[through] shortcutting an irreducible computation by compromising,
allowing the system to produce less-than-perfect results," and that "...as
we tackle harder problems, the methods must become increasingly
approximate," I do not agree that the contemporary problem is with logic or
with the complexity of human knowledge. I feel that the major problem I
have is that writing a really, really complicated computer program is
really, really difficult.
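
To show what I mean by layers of simplification, here is a trivial
Python cartoon (the numbers and functions are mine, invented purely
for illustration): each layer keeps only a cheap summary of the layer
below it, and a question gets answered at the highest layer that still
carries enough detail.

    # Layer 0: raw observations.
    raw = [2.1, 1.9, 2.0, 8.2, 7.9, 8.1]

    def chunk_means(xs, size):
        """Layer-building step: replace each chunk by its mean."""
        return [sum(xs[i:i + size]) / len(xs[i:i + size])
                for i in range(0, len(xs), size)]

    layer1 = chunk_means(raw, 3)        # regime summaries: [2.0, 8.07]
    layer2 = sum(layer1) / len(layer1)  # one global summary: about 5.03

    # A question like "are there two distinct regimes here?" is
    # answered at layer 1 without touching the raw detail of layer 0.
    print(layer1, layer2)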



The problem I have with people who talk about ANNs or probability nets as if
their paradigm of choice were the inevitable solution to complexity is that
they never discuss how their approach might actually handle complexity. Most
advocates of ANNs or probability deal with the problem of complexity as if
it were a problem that either does not exist or has already been solved by
whatever tired paradigm they are advocating.  I don't get that.



The major problem I have is that writing a really, really complicated
computer program is really, really difficult. But perhaps Abram's idea
could be useful here. As the program has to deal with more complicated
collections of simple insights concerning some hard subject matter, it
could tend to rely more on approximations to manage those complexes of
insight.
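
A rough sketch of how that fallback might look in Python (the
threshold, sample count, and function names are all hypothetical, just
to make the idea concrete): count models exactly while the formula is
small, and switch to a sampled estimate once exact enumeration becomes
too expensive.

    import itertools, random

    def satisfies(clauses, bits):
        """True if the assignment (bits[i] is the value of variable
        i+1) satisfies every clause."""
        return all(any(bits[abs(l) - 1] == (l > 0) for l in c)
                   for c in clauses)

    def model_count(clauses, n_vars, exact_limit=20, samples=10_000):
        if n_vars <= exact_limit:
            # Small enough: enumerate all 2**n_vars assignments exactly.
            return sum(satisfies(clauses, bits) for bits in
                       itertools.product([False, True], repeat=n_vars))
        # Too large: estimate from uniform random samples instead.
        hits = sum(satisfies(clauses,
                             [random.random() < 0.5
                              for _ in range(n_vars)])
                   for _ in range(samples))
        return hits / samples * 2 ** n_vars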

Jim Bromer
