On 1/20/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> A) This is just not true; many commonsense inferences require
> significantly more than 5 applications of rules

OK, I concur.

Long inference chains are built upon short inference steps.  We need a
mechanism to recognize the "interestingness" of sentences, so that we only
keep the interesting/relevant ones and build further deductions upon them.
Granted, it's not easy.
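For what it's worth, the "keep only the interesting ones" idea can be sketched as a beam search over conclusions.  This is just an illustration with a placeholder scoring function, not any particular system's mechanism:

```python
import heapq

def chain(seeds, derive, score, steps=5, beam=10):
    """Forward chaining that keeps only the top-`beam` most
    'interesting' conclusions at each step (a beam search)."""
    frontier = list(seeds)
    for _ in range(steps):
        conclusions = []
        for s in frontier:
            conclusions.extend(derive(s))
        # Prune: retain only the most interesting conclusions.
        frontier = heapq.nlargest(beam, conclusions, key=score)
    return frontier

# Toy example: 'derive' proposes successor conclusions, and 'score'
# is a stand-in for a real interestingness measure.
result = chain(seeds=[1],
               derive=lambda n: [n + 1, n * 2],
               score=lambda n: n,
               steps=3, beam=2)
print(result)  # the two highest-scoring 3-step conclusions
```

The point of the sketch is only that the branching factor per step stays bounded by the beam width, whatever the real scoring function turns out to be.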

The bottom line is that the knowledge acquisition project is *separable* from
any specific inference method.

> B) Even if there are only 5 applications of rules, the combinatorial
> explosion still exists.  If there are 10 rules and 1 billion
> knowledge items, then there may be up to 10 billion possibilities to
> consider in each inference step.  So there are (10 billion)^5 possible
> 5-step inference trajectories, in this scenario ;-)
>
> Of course, some fairly basic pruning mechanisms can prune it down a
> lot, but one is still left with a combinatorial explosion that needs
> to be dealt with via subtle means...


This is really solved.  Use simple hashing of predicates, which is part of
the Rete algorithm.  For example, suppose you want to deduce whether "dead
birds can fly".  There may be 5000 rules/facts about birds, and 10000
rules/facts about dead things.  (Reasonable?)  Only those rules/facts would
be checked; the other parts of the KB (about flowers, cats, etc.) are left
completely untouched (unsearched).
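A minimal sketch of what I mean by predicate hashing, in the spirit of the Rete alpha network (the names here are my own, not from any actual Rete implementation): facts and rules are bucketed by predicate symbol, so a query only touches the buckets for the predicates it mentions.

```python
from collections import defaultdict

class PredicateIndex:
    def __init__(self):
        # predicate symbol -> list of facts/rules mentioning it
        self.buckets = defaultdict(list)

    def add(self, predicate, item):
        self.buckets[predicate].append(item)

    def candidates(self, predicates):
        # Union of the buckets for the query's predicates; everything
        # else in the KB is never examined.
        hits = []
        for p in predicates:
            hits.extend(self.buckets[p])
        return hits

kb = PredicateIndex()
kb.add("bird", "birds can fly")
kb.add("dead", "dead things cannot move")
kb.add("flower", "flowers are plants")  # never touched below

# Query "can dead birds fly?" mentions only 'dead' and 'bird',
# so only those two buckets are searched.
hits = kb.candidates(["dead", "bird"])
print(hits)
```

The cost of a query then scales with the size of the relevant buckets, not with the size of the whole KB.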

> Please bear in mind that we actually have a functional uncertain
> logical reasoning engine within the Novamente system, and have
> experimented with feeding in knowledge from files and doing inference
> on them.  (Though this has been mainly for system testing, as our
> primary focus is on doing inference based on knowledge gained via
> embodied experience in the AGISim world.)


Do you think the problems you encounter in Novamente are really due to
combinatorial explosion, or rather to the *lack* of the right rules/facts?

> The truth is that, if you have a lot of knowledge in your system's
> memory, you need a pretty sophisticated, context-savvy inference
> control mechanism to do commonsense inference.


I'm thinking about simple questions like "can dead birds fly?", etc.  That
shouldn't involve more than 5 steps.  What you're talking about seems to be
chaining many such steps together to solve a detective story, that kind of
thing.  And yes, for that you need sophisticated inference mechanisms.

> Also, temporal inference can be quite tricky, and introduces numerous
> options for combinatorial explosion that you may not be thinking about
> when looking at atemporal examples of commonsense inference.  Various
> conclusions may hold over various time scales; various pieces of
> knowledge may become obsolete at various rates, etc.


Think 4D.  Time is just another dimension.  If you can do spatial reasoning,
you can do temporal reasoning.  It *has* to be the same, thanks to
Einstein.  If your approach uses special tricks to deal with time, then at
least it is not an elegant solution.

I'm not being arrogant, and I admit I have not fully solved this 4D problem.
It is kind of tricky, but I'm optimistic about it.
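To illustrate what I mean by treating time as just another dimension (a toy sketch of my own, not a worked-out solution): represent an event as an axis-aligned box over (x, y, z, t), and then the same overlap test applies uniformly to the spatial and temporal axes.

```python
# An event is a box: ((lo_x, hi_x), (lo_y, hi_y), (lo_z, hi_z), (lo_t, hi_t)).
# The representation and example facts are hypothetical.

def overlaps(box_a, box_b):
    # Closed-interval overlap, applied identically on every axis;
    # nothing about the time axis is treated specially.
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(box_a, box_b))

# A bird alive in some region from t=0 to t=5 ...
alive  = ((0, 10), (0, 10), (0, 3), (0, 5))
# ... and a "flying" episode in the same region from t=6 to t=8.
flying = ((0, 10), (0, 10), (0, 3), (6, 8))

print(overlaps(alive, flying))  # False: disjoint on the time axis only
```

Of course real temporal reasoning needs more than interval overlap, but the sketch shows why I expect the spatial machinery to carry over.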

> I imagine you will have a better sense of these issues once you have
> actually built an uncertain reasoning engine, fed knowledge into it,
> and tried to make it do interesting things....  I certainly think this
> may be a valuable exercise for you to do.  However, until you have
> done it, I think it's kind of silly for you to be speaking so
> confidently about how you can solve all the problems found by others
> in doing this kind of work!!  I ask again, do you have some
> theoretical innovation that seems likely to allow you to circumvent
> all these very familiar problems??


I've given brief answers to your earlier questions; I hope they're
convincing.  The point is that I think a simple inference engine combined
with a good, *densely* populated knowledge base can accomplish a lot.

Let me stress again that this project per se is only a collection of
facts/rules.  Other, more intelligent people may come up with a better AGI
to use this database.

As for myself, I do use some innovative ideas (e.g. uncertain logic, though
not my invention) in my AGI that make it different from GOFAI.  My
knowledge representation is not just a bunch of logic formulae.  The
formulae can reference each other, so they form an intricate network
similar to your graphical representation.  I'd love to talk more about
these things, but I think it's better to actually start a project and do
some damned programming.  So far I still believe the project is worth
doing; and if my inference engine sucks, the database could still be of use
to others...

YKY

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
