Pei,

Thanks for your thoughtful comments!  Here are some responses...

---------
*. "S = space of formal synapses, each one of which is identified with a
pair (x,y), with x Î N and y ÎNÈS."

Why not "x ∈ N ∪ S"?
---------

No strong reason -- but, I couldn't see a need for that degree of generality
in a Hebbian Logic context...
of course, there's no reason not to allow it in a formal model.

-------
*. "outgoing: N à S*" and "incoming: N -> S*"

Don't you want them to cover "higher-order" synapses?
-------

Yeah, you're right.

However, I may remove higher-order synapses from the paper entirely,
preferring to deal with higher-order relations via links to "multi-neuron"
paths as discussed later on.
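
For what it's worth, here is a minimal Python sketch of the quoted definitions
(my own; the class and helper names are hypothetical, not from the paper), which
also shows where the higher-order gap you point out appears:

from dataclasses import dataclass
from typing import List, Union


@dataclass(frozen=True)
class Neuron:
    label: str


@dataclass(frozen=True)
class Synapse:
    # Per the quoted definition: x ∈ N, while y ∈ N ∪ S, so a synapse may
    # target another synapse -- that is what allows "higher-order" synapses.
    x: Neuron
    y: Union[Neuron, "Synapse"]


def outgoing(n: Neuron, synapses: List[Synapse]) -> List[Synapse]:
    """Synapses whose source is the neuron n (the map outgoing: N → S*)."""
    return [s for s in synapses if s.x == n]


def incoming(n: Neuron, synapses: List[Synapse]) -> List[Synapse]:
    """Synapses whose target is the neuron n; as written this misses synapses
    that target other synapses, which is exactly the higher-order gap noted above."""
    return [s for s in synapses if s.y == n]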


---------
*. "standard neural net update and learning functions"

One thing I don't like in NN is global updating, that is, all activations
and weights being updated in every step. Even if it is biologically plausible
(which I'm not sure it is), in an AI system it won't scale up. I know that
dropping this will completely change the dynamics of NN.
----------

Actually, in attractor neural nets it's well-known that using random
asynchronous updating instead of deterministic synchronous updating does NOT
change the dynamics of a neural network significantly.  The attractors are
the same and the path of approach to an attractor is about the same.  The
order of updating turns out not to be a big deal in ANN's.  It may be a
bigger deal in backprop neural nets and the like, but those sorts of "neural
nets" are a lot further from anything I'm interested in...

-----------
*. "probability P(A,t), defined as the probability that, at time t, a
randomly chosen neuron xÎA is firing" and "the conditional probability
P(A|B; t) = P(A ÇB,t)/ P(B,t)"

This is the key assumption made in your approach: to take the frequency of
firing as the degree of truth. I need to explore its implications further,
though currently I feel uncomfortable with it. In my own network
interpretation of NARS (for a brief description, see
http://www.cogsci.indiana.edu/farg/peiwang/papers.html#thesis Section 7.5),
I take activation/firing as a control parameter, indicating the resources
spent on the node, which is independent of the truth value --- "I'm
thinking about T" and "T is true" are fundamentally different.

Of course, the logic/control distinction is not in NN, where both are more
or less reflected in the activation value. When you map their notions into
logic, such a distinction becomes tricky.
-----------

Yeah, to make Hebbian Logic work, you need to assume that frequency of
firing roughly corresponds to degree of truth -- at least, for those neural
clusters that directly represent symbolic information.

So, for instance, the "cat" cluster fires a lot when a real or imaginary cat
is present to the mind.

If the mind wants to allocate attention to the "cat" cluster, but there is
no real cat present, it must then either

-- find a way to stimulate other things logically related to "cat"
-- create abstract quasi-perceptual stimuli that constitute a "mock cat" and
"fool" the "cat" cluster into firing

I think this is how the brain and human mind work.  I agree it's not
optimal, and that in an AI system it's nicer to make separate parameters for
activation and truth value, as is done in both NARS and Novamente.
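
To pin down the frequency-as-truth reading, here is a toy sketch of the quoted
definitions (my own; the clusters and firing data are made up): P(A,t) is taken
as the fraction of neurons in cluster A that are firing at time t, and P(A|B;t)
is computed as the quoted ratio.

def P(cluster, firing):
    """Fraction of neurons in the cluster that are currently firing (0 if the cluster is empty)."""
    if not cluster:
        return 0.0
    return sum(firing[n] for n in cluster) / len(cluster)

def P_cond(A, B, firing):
    """P(A|B; t) = P(A ∩ B, t) / P(B, t), per the quoted definition."""
    return P(A & B, firing) / P(B, firing)

# Hypothetical "cat" and "animal" clusters over neurons 0..9, with a made-up firing state.
cat = {0, 1, 2, 3}
animal = {0, 1, 2, 3, 4, 5, 6}
firing = {0: 1, 1: 1, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 0, 8: 0, 9: 0}

print(P(cat, firing))               # 0.5 -- half the "cat" cluster is firing
print(P_cond(cat, animal, firing))  # 0.5 / (5/7) = 0.7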

-------------
*. Basic inference rules

I don't see what is gained by a network implementation (compared to direct
probabilistic calculation).
-------------

Actually, I think there is no big advantage.  This issue is discussed in the
very last section of the paper.  My view is that the brain uses a horribly
inefficient mechanism to achieve probabilistic inference, and AI systems can
achieve the same thing more efficiently.

I prefer the Novamente implementation of PTL to a Hebbian Logic
implementation.  However, I think it's interesting to observe,
theoretically, that a Hebbian Logic representation is possible.

-------------
*. Hebbian Learning

The original Hebbian learning rule works on symmetric links (similarity, not
inheritance), because the weight of a link is decreased when one end is
activated and the other isn't, and which end is which doesn't matter.  What
you do in "Hebbian learning variant A" is necessary, but it is not "the
original Hebbian learning rule".
----------------

Oops, you are right.  My variant A is fairly standard in the literature
these days, but it's not the original one.  I will correct that, thanks.
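
For concreteness, here is a toy contrast between a symmetric Hebbian-style rule
and an asymmetric one (my paraphrase; the learning rate and update forms are
illustrative, and the paper's exact "variant A" equations may differ):

def hebb_symmetric(w, pre, post, lr=0.1):
    """Original Hebbian-style rule: strengthen when both ends are co-active,
    weaken when exactly one is active; swapping pre and post gives the same update."""
    if pre and post:
        return w + lr
    if pre != post:
        return w - lr
    return w

def hebb_asymmetric(w, pre, post, lr=0.1):
    """An asymmetric variant (roughly in the spirit of 'variant A'): the update is
    gated on the presynaptic end, so the link behaves more like an inheritance or
    implication than a similarity."""
    if pre:
        return w + lr if post else w - lr
    return w  # no presynaptic activity, no evidence, no change

# Swapping pre and post leaves the symmetric rule unchanged but not the asymmetric one:
print(hebb_symmetric(0.5, True, False), hebb_symmetric(0.5, False, True))    # 0.4 0.4
print(hebb_asymmetric(0.5, True, False), hebb_asymmetric(0.5, False, True))  # 0.4 0.5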

----------------
*. Section 6

I'm not sure I understand the big picture here. Which of the following is
correct?

(1) PTL is fully justified according to probability theory, and the NN
mechanism is used to implement the truth value functions.

(2) PTL is fully justified according to probability theory, and the truth
value functions are directly calculated, but the NN mechanism is used to
implement inference control, that is, the selection of rules and premises in
each step.

(3) The logic is partially justified/calculated according to probability
theory, and partially according to NN (such as the Hebbian learning rule).
-------------

Option 1 is correct.

---------------
*. In general, I agree that it is possible to unify a Hebbian network with a
multi-valued term logic (with an experience-grounded semantics). NARS is
exactly such a logic, where a statement is a link from one term to another,
and its truth value is the accumulated confirmation/disconfirmation record
about the relation. In NARS, the Hebbian learning rule corresponds to
comparison (with induction, abduction, and deduction as variants) plus
revision. Activation spreading corresponds to (time) resource allocation.
-----------------

Hmmm....  Pei, I don't see how to get NARS' truth value functions out of an
underlying neural network model.  I'd love to see the details....  If truth
value is related neither to firing frequency nor to synaptic conductance, then
how is it reflected in the NN?

-- Ben

