On 7/6/07, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
YKY> I know what you're talking about -- using NL directly as a KR language.

I suspect it's not that. The problems you talk about are specific to a
particular bet on the system bootstrap method. It assumes that you can
code enough capability into the system more or less explicitly, so that
it would then be able to apply some sort of scientific method to churn
through actual real-world data (in essence, it would be able to design
custom peripheral perception modules feeding it real-world data in a
high-level KR language). I don't particularly believe in the feasibility
of that approach (some jokes about how it might look can be found
in [1]), but for the sake of the current discussion it's enough to say
that it's not the only way possible.

The reason I think it could be workable to learn in NL early on is that,
regardless of the I/O KR, the system should be able to manipulate relatively
complex combinations of symbols, forming atomic concepts out of them
depending on context (at least that's how it looks from a high-level
perception point of view). If it can do that with the tag-words of a
high-level KR to perform complex reasoning, why not do the same with the
letters in a text string? No additional algorithms required. You can't code
everything in anyway; you have to stop at some point and start teaching it
(blurry...), and data representing the real world is hardly more
structured than NL.
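To make the "atomic concepts out of letter combinations" idea concrete, here is a toy sketch of one way it could work -- greedy pair-merging over raw characters, in the style of byte-pair encoding. The function name and the merge criterion (raw pair frequency) are my own illustrative choices, not anything from the discussion above:

```python
from collections import Counter

def chunk_pairs(tokens, merges=3):
    """Greedily merge the most frequent adjacent pair of symbols.

    Toy illustration: 'atomic concepts' emerge as frequent letter
    combinations, using nothing beyond counting adjacent pairs.
    """
    for _ in range(merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                merged.append(a + b)  # the frequent pair becomes one new symbol
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens

print(chunk_pairs(list("the cat sat on the mat")))
```

After a few merges, recurring fragments such as "at" get fused into single symbols -- the same chunking move one would apply to tag-words in a high-level KR, just run over letters instead.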



[1] Robert M. French (1997). When Coffee Cups Are Like Old Elephants, or
Why Representation Modules Don't Make Sense.
http://www.u-bourgogne.fr/LEAD/people/french/elephants.pdf

It seems that you're inventing a new, sub-symbolic logic that may be akin
to connectionism.  This approach can work, but it's less effective than
the logic-based one.

RM French's paper is fun to read, but my view is the exact opposite of his.
I also dislike Lakoff's stuff, which I think is time-wasting,
philosophical, and ineffectual (which doesn't mean I dislike him
personally).

French's objection to the PSSH (physical symbol system hypothesis) rests
mainly on two points:
1.  representations sometimes need to be context-dependent
2.  historically, successful systems have relied on hand-crafted
representations

His example is that "credit card" can be likened to "money", "door key", and
an almost infinite number of other concepts.  But this can be dealt with
under logic-based AI, using abductive reasoning or some kind of
similarity-based searching of the KB.  Context-dependency is OK too, with
abductive reasoning -- for example, searching for the *explanation* of why a
credit card can be likened to a rose.

My counter-argument is:  why don't you cite a scenario that will *break* the
logic-based AI model?  If you can't, then you should join the LBAI camp,
because it has many well-established results to build on. =)

YKY

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&id_secret=13517042-6e217d
