YKY,

I thought you were talking about the extraction of information that
is explicitly stated in online text.

Of course, inference is a separate process (though it may also play a
role in direct information extraction).

I don't think the rules of inference per se need to be learned.  In
our book on PLN we outline a complete set of probabilistic logic
inference rules, for example.

What needs to be learned via experience is how to appropriately bias
inference control -- how to sensibly prune the inference tree.

So, one needs an inference engine that can adaptively learn better and
better inference control as it carries out inferences.  We designed
and partially implemented this feature in the NCE but never completed
the work due to other priorities ... but I hope this can get done in
NM or OpenCog sometime in late 2008.
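To make the idea concrete, here is a toy sketch of what I mean by adaptively learned inference control (this is purely illustrative, not the actual NCE/PLN design; the softmax rule selection and the simple reward scheme are assumptions of mine for the example): rules that appear on successful inference paths get their control weights reinforced, so over time the engine prunes its search toward the productive rules.

```python
import math
import random

class AdaptiveInferenceEngine:
    """Toy forward-chaining engine that learns control biases over its rules."""

    def __init__(self, rules):
        # rules: mapping rule_name -> function(facts) -> set of new facts
        self.rules = rules
        self.weights = {name: 0.0 for name in rules}  # learned control biases

    def _pick_rule(self):
        # Softmax selection: higher-weight rules get tried more often.
        names = list(self.rules)
        exps = [math.exp(self.weights[n]) for n in names]
        r = random.random() * sum(exps)
        for name, e in zip(names, exps):
            r -= e
            if r <= 0:
                return name
        return names[-1]

    def prove(self, facts, goal, max_steps=50):
        facts = set(facts)
        used = []  # rules that actually produced new facts on this attempt
        for _ in range(max_steps):
            if goal in facts:
                for name in used:       # reinforce the successful path
                    self.weights[name] += 1.0
                return True
            name = self._pick_rule()
            new = self.rules[name](facts) - facts
            if new:
                used.append(name)
                facts |= new
        for name in used:               # mildly penalize a dead-end path
            self.weights[name] -= 0.1
        return False

random.seed(0)  # deterministic for the illustration
rules = {
    "a_to_b": lambda f: {"B"} if "A" in f else set(),
    "b_to_c": lambda f: {"C"} if "B" in f else set(),
    "noise":  lambda f: set(),  # distractor rule: never produces anything
}
engine = AdaptiveInferenceEngine(rules)
for _ in range(20):
    engine.prove({"A"}, "C")
# After training, the productive rules carry higher control weight than
# the distractor, so the engine's search is biased toward them.
```

Of course the real problem involves uncertain truth values, expensive rule applications, and a vastly larger rule base, but the shape of the learning problem -- credit assignment over inference steps -- is the same.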

-- Ben

On Tue, Feb 26, 2008 at 3:02 PM, YKY (Yan King Yin)
<[EMAIL PROTECTED]> wrote:
>
>
> On 2/26/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > Obviously, extracting knowledge from the Web using a simplistic SAT
> > approach is infeasible
> >
> > However, I don't think it follows from this that extracting rich
> > knowledge from the Web is infeasible
> >
> > It would require a complex system involving at least
> >
> > 1)
> > An NLP engine that maps each sentence into a menu of probabilistically
> > weighted logical interpretations of the sentence (including links into
> > other sentences built using anaphor resolution heuristics).  This
> > involves a dozen conceptually distinct components and is not at all
> > trivial to design, build or tune.
> >
> > 2)
> > Use of probabilistic inference rules to create implication links
> > between the different interpretations of the different sentences
> >
> > 3)
> > Use of an optimization algorithm (which could be a clever use of SAT
> > or SMT, or something else) to utilize the links formed in step 2, to
> > select the right interpretation(s) for each sentence
>
>
> Gosh, I think you've missed something of critical importance...
>
> The problem you stated above is about choosing the correct interpretation of
> a bunch of sentences.  The problem we should tackle instead is learning the
> "rules" that make up the KB.
>
> To see the difference, let's consider this example:
>
> Suppose I solve a problem (e.g. a programming exercise), and to illustrate my
> train of thought I clearly write down all the steps.  So I have, in
> English, a bunch of sentences A, B, C, ..., Z, where Z is the final
> conclusion sentence.
>
> Now the AGI can translate sentences A-Z into logical form.  You claim that
> this problem is hard because of multiple interpretations.  But I think
> that's relatively unimportant compared to the real problem we face.  So
> let's assume that we successfully -- correctly -- translate the NL sentences
> into logic.
>
> Now let's imagine that the AGI is doing the exercise, not me.  Then it
> should have a train of inference that goes from A to B to C ... and so on...
> to Z.  But the AGI would NOT be able to make such a train of thought.  All
> it has is just a bunch of *static* sentences from A-Z.
>
> What is missing?  What would allow the AGI to actually conduct the inference
> from A-Z?
>
> The missing ingredient is a bunch of rules.  These are the "invisible glue"
> that links the thoughts "between the lines".  This is the knowledge that I
> think should be learned, and would be very difficult to learn.
>
> You know what I'm talking about?
>
>
>
> YKY



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller
