Friday, July 6, 2007, YKY (Yan King Yin) wrote:

YYKY> RM French's paper is fun to read, but my view is the exact opposite of his.
YYKY> I also dislike Lakoff's stuff, which I think is time-wasting,
YYKY> philosophical, and ineffectual (which doesn't mean I dislike him
YYKY> personally).

YYKY> French's objection to PSSH (physical symbol system hypothesis) is mainly 2
YYKY> points:
YYKY> 1.  representations sometimes need to be context-dependent
YYKY> 2.  historically, successful systems had relied on hand-crafted
YYKY> representations

His argument is primarily against underestimating the problem of the
multiplicity of representations, a specific incarnation of the lurking
combinatorial-explosion problem. Approaches that don't address this problem
have no direct way to start dealing with the real world. The only option I
see for systems that don't address this issue is what I wrote about in a
previous message: such a system must be coded to be clever enough to do
representation selection 'manually', and I doubt that's doable.

YYKY> His example is that "credit card" can be likened to "money", "doorkey", and
YYKY> an almost infinite number of concepts.  But this can be dealt with under
YYKY> logic-based AI, using abductive reasoning or some kind of similarity-based
YYKY> searching of the KB.  Context-dependency is OK too, with abductive reasoning
YYKY> -- for example, searching for the *explanation* of why a credit card can be
YYKY> likened to a rose.
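The similarity-based KB search YKY mentions can be made concrete with a toy
sketch (my own illustration, not code from this thread): concepts are
hand-picked feature sets, and "likening" a credit card to other concepts is
just retrieval by feature overlap. All names and features here are invented
for the example.

```python
# Toy KB: each concept is a hand-crafted set of features (pure illustration).
kb = {
    "money":   {"payment", "value", "wallet", "exchange"},
    "doorkey": {"access", "unlock", "pocket", "flat-object"},
    "rose":    {"gift", "romance", "thorn"},
    "hammer":  {"tool", "nail", "heavy"},
}

def jaccard(a, b):
    """Similarity of two feature sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def similar_concepts(features, kb, threshold=0.1):
    """Return KB concepts whose feature overlap with `features` exceeds threshold,
    most similar first."""
    scores = {name: jaccard(features, feats) for name, feats in kb.items()}
    return sorted((n for n, s in scores.items() if s > threshold),
                  key=lambda n: -scores[n])

credit_card = {"payment", "value", "access", "unlock", "wallet", "flat-object"}
print(similar_concepts(credit_card, kb))  # "money" and "doorkey" surface; "hammer" doesn't
```

Of course, the hard part French points at is exactly what this sketch assumes
away: where the feature sets come from, and how they shift with context.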

Yes, reasonable approaches are certainly possible; nobody argues with that.
But then again, how 'logic-based' will that system be once it starts using
full-blown probabilistic searches? The search state can be regarded as a
kind of subsymbolic knowledge representation in working memory, so there's
no real dichotomy. The main problem I see at this point of the feature shift
is that logic itself becomes unnecessary in such a system: if the logic
rules to be applied are selected and act probabilistically, they are not
much more than specific instances of more general concept-activation rules.
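The point about rule selection collapsing into concept activation can be
sketched in a few lines (again my own toy framing, not anyone's actual
system): give each "logic rule" a context-dependent activation score, and
the inference-control step becomes indistinguishable from picking the most
activated concept. The rule names and scores are invented for illustration.

```python
# Each "logic rule" gets a context-dependent activation function
# (hand-tuned numbers, purely illustrative).
rules = {
    "modus_ponens": lambda ctx: 0.9 if "implication" in ctx else 0.1,
    "analogy":      lambda ctx: 0.8 if "novel-pairing" in ctx else 0.2,
    "abduction":    lambda ctx: 0.7 if "needs-explanation" in ctx else 0.1,
}

def select_rule(context):
    """Pick the most activated rule -- structurally the same operation
    as activating the most relevant concept."""
    return max(rules, key=lambda name: rules[name](context))

print(select_rule({"novel-pairing"}))       # analogy wins in this context
print(select_rule({"implication"}))         # modus_ponens wins here
```

Once control looks like this, whether the selected items happen to be logic
rules or ordinary concepts is a labeling choice, which is the dichotomy
being questioned above.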

YYKY> My counter argument is:  why don't you cite a scenario that will *break* the
YYKY> logic-based AI model?  If you can't, then you should join the LBAI camp
YYKY> because it has many well-established results to build up on. =)

It's like asking whether I can argue that the mind can't run on quantum
physics or on a TM. Of course it can, but that's beside the point. Such an
argument must be more context-specific to be fruitful :).

-- 
 Vladimir Nesov                            mailto:[EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&id_secret=13841360-2b4455