On 7/7/07, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> His argument is primarily against underestimating the problem of the
> multiplicity of representations, a specific incarnation of the lurking
> combinatorial-explosion problem.
Under the logic-based paradigm, combinatorial explosion is due to the high
"branching factor" of search (inference).  One way to address this is to
use heuristics to prune the search tree, for example by ranking rules by
how useful / popular they are.
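To make that concrete, here's a toy sketch of heuristic rule ranking in forward chaining (the rules, ranks, and facts are all invented for illustration):

```python
# Toy forward-chaining inference with ranked rules.  Each rule is
# (rank, premises, conclusion); lower rank = more useful/popular, so
# highly-ranked rules are always tried first -- a crude heuristic that
# prunes the effective branching factor of the search.
RULES = [
    (1, frozenset({"rain"}), "wet_ground"),        # popular rule
    (2, frozenset({"wet_ground"}), "slippery"),
    (9, frozenset({"rain"}), "rainbow_possible"),  # rarely useful rule
]

def infer(facts, goal):
    """Fire the best-ranked applicable rule until the goal is derived."""
    facts = set(facts)
    changed = True
    while changed and goal not in facts:
        changed = False
        for rank, premises, conclusion in sorted(RULES):
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
                break  # restart from the top-ranked rule
    return goal in facts

print(infer({"rain"}, "slippery"))  # → True
```

In a real prover the ranks would of course be learned, e.g. from how often each rule contributes to successful proofs.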

Under logic-based AI (LBAI) there is no multiplicity of representations --
there is only one representation; all the variations come from searching.
Searching can be sped up by clever indexing.
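A minimal sketch of such indexing, assuming clauses are stored as (predicate, args...) tuples (the class and example clauses are invented; real provers use far more elaborate term indexing):

```python
from collections import defaultdict

class ClauseIndex:
    """Bucket clauses by head predicate so a query touches only
    potentially-matching clauses instead of scanning the whole KB."""

    def __init__(self):
        self.by_predicate = defaultdict(list)

    def add(self, clause):
        self.by_predicate[clause[0]].append(clause)

    def candidates(self, goal_predicate):
        # O(1) bucket lookup instead of an O(n) scan over all clauses
        return self.by_predicate.get(goal_predicate, [])

idx = ClauseIndex()
idx.add(("parent", "tom", "bob"))
idx.add(("parent", "bob", "ann"))
idx.add(("likes", "ann", "logic"))

print(idx.candidates("parent"))   # → [('parent', 'tom', 'bob'), ('parent', 'bob', 'ann')]
print(idx.candidates("studies"))  # → []
```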

> Approaches that don't address this problem have no direct way to start
> dealing with the real world.  The only way I see for systems that don't
> address this issue is what I wrote about in my previous message: such a
> system must be coded to be clever enough to do representation selection
> 'manually', and I doubt that's doable.
Can you give a concrete example of representation switching?  Perhaps then I
can show you how it may be done under LBAI.

> Yes, reasonable approaches are certainly possible; nobody argues with
> that.  But then again, how 'logic-based' will that system be after it
> starts using full-blown probabilistic searches?  Search state can be
> regarded as a kind of subsymbolic knowledge representation in working
> memory, so there's no dichotomy.  The main problem I see with this shift
> is that logic itself becomes unnecessary in such a system: if the logic
> rules to be applied are selected and act probabilistically, they are not
> much more than specific instances of more general concept-activation
> rules.

Exactly.  My LBAI will be augmented with fuzzy-probabilistic logic.  Let's
call it P-Z logic (P = probability, Z = fuzziness).  This P-Z logic can
achieve "soft computing" and has sub-symbolic capabilities.  Ben's and Pei
Wang's approaches can be classified in this category too.

In the new P-Z logic, I'd be doing a kind of inference similar to
resolution in classical logic.  It bears *some* resemblance to the
activation of neurons, in the sense that neural activation values ~= fuzzy
values.

But P-Z logic is *much* more powerful than neural activation in one
respect: the use of variables.  It's a *first-order* P-Z logic.  That means
you can translate NL sentences into this logic easily, whereas you'd have a
very hard time dealing with NL using a neural-network approach.
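Here's a toy illustration of that advantage.  Since the P-Z calculus isn't pinned down here, the combination rule below (conclusion value = rule strength × min of premise values, a common fuzzy-logic choice) is an assumption for illustration, as are all the predicates and numbers; the point is that one rule with variables covers every matching individual:

```python
def unify(pattern, fact, bindings):
    """Match a pattern like ('tall', '?x') against a ground fact,
    extending the variable bindings (variables start with '?')."""
    if len(pattern) != len(fact):
        return None
    bindings = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if p in bindings and bindings[p] != f:
                return None
            bindings[p] = f
        elif p != f:
            return None
    return bindings

def substitute(term, bindings):
    return tuple(bindings.get(t, t) for t in term)

# Graded facts: fuzzy truth values in [0, 1], analogous to activation levels.
facts = {("tall", "john"): 0.8, ("strong", "john"): 0.6}

# One first-order rule: tall(?x) & strong(?x) => good_athlete(?x), strength 0.9
premises = [("tall", "?x"), ("strong", "?x")]
conclusion = ("good_athlete", "?x")
strength = 0.9

def fire(premises, conclusion, strength, facts):
    """Derive graded conclusions for every binding satisfying all premises."""
    results = {}
    def search(i, bindings, value):
        if i == len(premises):
            results[substitute(conclusion, bindings)] = strength * value
            return
        for fact, v in facts.items():
            b = unify(premises[i], fact, bindings)
            if b is not None:
                search(i + 1, b, min(value, v))
    search(0, {}, 1.0)
    return results

print(fire(premises, conclusion, strength, facts))
# → good_athlete(john) with value ≈ 0.54 (0.9 × min(0.8, 0.6))
```

A neural net would need a separate unit (or retraining) for each new individual; the one rule above applies to anyone who matches the premises.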

There are lurking complexities beneath the NN approach that you're not aware
of.  Many people think "the brain is neural, so the NN approach cannot go
wrong", but they're *ignorant* of the brain's immense complexity.  It's like
saying "birds use wings to fly, so the wings-feathers-muscles approach
cannot go wrong".

YKY

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&id_secret=14842719-44fa40
