On 3/8/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
[re: logical abduction for interpretation of natural language]

> One disadvantage of this approach is that you have to hand code lots of
> language knowledge.  They don't seem to have solved the problem of
> acquiring such knowledge from training data.  How much effort would it
> be to code enough knowledge to pass the Turing test?  Nobody knows.
Using this method, the linguistic rules may be either hand-coded or learned
(via inductive logic programming).  Learning is not easy, but it is still
possible.
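For a flavor of what rule learning via inductive logic programming involves, here is a minimal sketch of one core step, least general generalization: two ground facts with the same predicate are merged into a rule pattern by replacing differing constants with shared variables.  The predicates and constants here are invented for illustration, not taken from the paper.

```python
def lgg(term1, term2, bindings):
    """Least general generalization of two terms: identical terms are
    kept, differing constants are replaced by a shared variable."""
    if term1 == term2:
        return term1
    key = (term1, term2)
    if key not in bindings:
        bindings[key] = f"X{len(bindings)}"   # fresh variable name
    return bindings[key]

def generalize(atom1, atom2):
    """Generalize two atoms (tuples of predicate + arguments) with the
    same predicate into a rule pattern, e.g.
    subject(john, runs) + subject(mary, sleeps) -> subject(X0, X1).
    Repeated constant pairs share one variable."""
    pred1, *args1 = atom1
    pred2, *args2 = atom2
    if pred1 != pred2 or len(args1) != len(args2):
        return None   # incompatible atoms cannot be generalized
    bindings = {}
    return (pred1, *(lgg(a, b, bindings) for a, b in zip(args1, args2)))

print(generalize(("subject", "john", "runs"),
                 ("subject", "mary", "sleeps")))  # ('subject', 'X0', 'X1')
```

A real ILP learner would of course also search over clause bodies and test candidate rules against positive and negative examples; this only shows why the hypothesis space is tractable to enumerate at all.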

Re your method:

1.  Remember, in your NN approach the learning space is even more
fine-grained and the network configuration space is insanely huge.  That
means your system will take insanely long to train.  In ADDITION, you
cannot insert hand-coded rules like I can, because your system is opaque.

2.  Also, training your NN layer by layer would be incorrect, because the
layers depend on each other to function correctly, in some mysterious /
opaque way.  Freezing each layer after training will drive you straight
into a local minimum, which is guaranteed to be useless.  If you backtrack
from the local minimum, then you're exploring the global search space of
all network configurations, i.e. an insanely huge space.
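To make the objection concrete, here is a sketch in NumPy of the greedy layer-by-layer scheme being criticized (the layer sizes, learning rate, and local reconstruction objective are arbitrary illustrations, not anyone's actual system): each layer is trained against its own local objective and then frozen, so nothing downstream can ever revise it.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))   # toy input data

def train_layer(H, width, lr=0.005, steps=1500):
    """Train one tied-weight linear autoencoder layer to reconstruct
    its own input by gradient descent, then return the (now frozen)
    encoder weights W and the encoded representation H @ W."""
    W = rng.normal(scale=0.1, size=(H.shape[1], width))
    n = H.shape[0]
    for _ in range(steps):
        E = H @ W @ W.T - H                          # reconstruction error
        grad = (2.0 / n) * (H.T @ E @ W + E.T @ H @ W)
        W -= lr * grad
    return W, H @ W

# Greedy scheme: layer 1 is trained and frozen before layer 2 ever
# exists; layer 2 can only work with whatever codes layer 1 happened
# to settle on, and no later signal can revise layer 1's weights.
W1, H1 = train_layer(X, 4)    # layer 1: 8 -> 4, then frozen
W2, H2 = train_layer(H1, 2)   # layer 2: 4 -> 2, sees only frozen codes
```

The coupling the text points at is visible in the structure: the local objective each layer optimizes is not the objective the whole stack will be judged on, which is exactly why freezing can lock the stack into a poor configuration.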

All in all, the logic-based approach seems to be the best choice, because
learning can be augmented with hand-coding.  Certainly, adding hand-coded
knowledge helps speed up the learning process.  And if we solicit the
internet community to help with the hand-coding, it helps even more.

> Also, what do you do with the data after you get it into a structured
> format?  I think the problem of converting it back to natural language
> output is going to be at least as hard.  The structured format makes
> use of predicates that don't map neatly to natural language.



The inverse problem can probably be solved automatically if the logic is
reversible, which I believe it is.  In other words, given a logical form,
an inference engine can use search to generate NL sentences from the same
logical knowledge / constraints.
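A toy sketch of this reversibility (the rules and predicates are invented for illustration): each rule pairs a sentence pattern with a logical form, so parsing matches the sentence side, and generation searches the same rule set for a pattern whose logical side matches the given form.

```python
# Each rule pairs a sentence pattern with a logical-form pattern.
# "{x}"-style slots are variables shared between the two sides.
RULES = [
    (("{x}", "chases", "{y}"), ("chase", "{x}", "{y}")),
    (("{x}", "sees", "{y}"),   ("see", "{x}", "{y}")),
]

def _match(pattern, items):
    """Match a pattern against concrete items; return variable
    bindings, or None if a literal element disagrees."""
    if len(pattern) != len(items):
        return None
    bindings = {}
    for p, w in zip(pattern, items):
        if p.startswith("{"):
            bindings[p] = w
        elif p != w:
            return None
    return bindings

def parse(words):
    """Sentence -> logical form, via the sentence side of the rules."""
    for pattern, logic in RULES:
        bindings = _match(pattern, words)
        if bindings is not None:
            return tuple(bindings.get(t, t) for t in logic)
    return None

def generate(logic):
    """Logical form -> sentence: search the SAME rules in reverse."""
    for pattern, logic_tpl in RULES:
        bindings = _match(logic_tpl, logic)
        if bindings is not None:
            return tuple(bindings.get(p, p) for p in pattern)
    return None

lf = parse(("john", "chases", "mary"))   # ('chase', 'john', 'mary')
print(generate(lf))                      # ('john', 'chases', 'mary')
```

With a realistic grammar the generation direction becomes a genuine search problem (many rules can match one logical form), which is where the inference engine's search machinery comes in.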

> The paper is not dated, but there are no references after 1991.  I
> wonder why there has been no real progress using this approach in the
> last 16 years.

It was first published in 1993, but the author is still working on it, as a
book chapter to be out soon.

The whole project is a large-scale one and we'd need a knowledge
representation scheme to go with it.  But this paradigm is by far the
most promising because it addresses the entire NL problem instead of
a narrow facet of it.

YKY

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
