Ben,

Some comments on this interesting article:

*. "S = space of formal synapses, each one of which is identified with a
pair (x,y), with x Î N and y ÎNÈS."

Why not "x ÎNÈS"?

*. "outgoing: N à S*" and "incoming: N -> S*"

Don't you want them to cover "higher-order" synapses?
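
To make the question concrete, here is how I read the definitions, as a
small Python sketch (the names are mine, not from your article). As defined,
only the target y of a synapse may itself be a synapse; if the source x could
be one too, and outgoing/incoming were defined on N∪S rather than N, then
higher-order synapses would be covered symmetrically:

    class Neuron:
        def __init__(self, name):
            self.name = name

    class Synapse:
        def __init__(self, x, y):
            assert isinstance(x, Neuron)             # per the article: x in N
            assert isinstance(y, (Neuron, Synapse))  # per the article: y in N u S
            self.x, self.y = x, y

    def outgoing(node, synapses):
        # "outgoing: N -> S*"; my question is whether the domain should be N u S
        return [s for s in synapses if s.x is node]

    def incoming(node, synapses):
        return [s for s in synapses if s.y is node]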

*. "standard neural net update and learning functions"

One thing I don't like in NN is global updating, that is, all activations
and weights are updated in every step. Even if it is biologically plausible
(which I'm not sure it is), in an AI system it won't scale up. I know that
dropping this will completely change the dynamics of NN.
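
What I have in mind is the difference between the following two update loops
(a rough sketch of my own; "nodes", "active", and "update" are placeholders):
the cost per step of the first grows with the whole network, while the second
only touches the currently active part, which is what I think an AI system
needs in order to scale.

    def global_step(nodes):
        # standard NN step: every node's activation and weights are updated
        for node in nodes:
            node.update()

    def local_step(active):
        # selective step: only the currently active nodes are updated;
        # dropping global updating like this is what changes the dynamics
        for node in active:
            node.update()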

*. "AàI B, means that when B is present, A is also present"

What are A and B (outside the network)? Are they terms, sets, attributes,
events, or propositions? What do you mean by "present"?

*. "probability P(A,t), defined as the probability that, at time t, a
randomly chosen neuron xÎA is firing" and "the conditional probability
P(A|B; t) = P(A ÇB,t)/ P(B,t)"

This is the key assumption made in your approach: to take the frequency of
firing as the degree of truth. I need to explore its implications further,
though currently I feel uncomfortable with it. In my own network
interpretation of NARS (for a brief description, see
http://www.cogsci.indiana.edu/farg/peiwang/papers.html#thesis Section 7.5),
I take activation/firing as a control parameter, indicating the resources
spent on a node, which is independent of the truth value --- "I'm
thinking about T" and "T is true" are fundamentally different.

Of course, the logic/control distinction does not exist in NN, where both
are more or less reflected in the activation value. When you map NN notions
into logic, such a distinction becomes tricky.
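
Just to restate the quoted definitions concretely (my own illustration; A and
B are sets of neurons, and firing(x, t) is assumed to tell whether neuron x
fires at time t): the "probability" is simply a firing frequency over the
set, and it is that frequency which ends up playing the role of degree of
truth.

    def P(A, t, firing):
        # P(A, t): fraction of the (nonempty) set A firing at time t
        return sum(1 for x in A if firing(x, t)) / len(A)

    def P_cond(A, B, t, firing):
        # P(A|B; t) = P(A ∩ B, t) / P(B, t), as quoted
        AB = A & B
        if not AB:
            return 0.0
        return P(AB, t, firing) / P(B, t, firing)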

*. Basic inference rules

I don't see what is gained by a network implementation (compared to direct
probabilistic calculation).
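
By "direct probabilistic calculation" I mean something like the following
sketch of a deduction-style rule, using the law of total probability under
the usual conditional-independence assumption (the exact truth-value
functions in your article may differ):

    def deduction(pBA, pCB, pB, pC):
        # P(C|A) = P(C|B) P(B|A) + P(C|~B) (1 - P(B|A)),
        # assuming C is independent of A given B and given not-B,
        # with P(C|~B) = (P(C) - P(C|B) P(B)) / (1 - P(B)).
        pC_notB = (pC - pCB * pB) / (1.0 - pB)
        return pCB * pBA + pC_notB * (1.0 - pBA)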

*. Hebbian Learning

The original Hebbian learning rule works on symmetric links (similarity, not
inheritance), because the weight of a link is decreased when one end is
activated and the other isn't, and which is which doesn't matter. What you do
in "Hebbian learning variant A" is necessary, but it is not "the original
Hebbian learning rule".

*. Section 6

I'm not sure I understand the big picture here. Which of the following is
correct?

(1) PTL is fully justified according to probability theory, and the NN
mechanism is used to implement the truth value functions.

(2) PTL is fully justified according to probability theory, and the truth
value functions are directly calculated, but the NN mechanism is used to
implement inference control, that is, the selection of rules and premises in
each step.

(3) The logic is partially justified/calculated according to probability
theory, and partially according to NN (such as the Hebbian learning rule).

*. In general, I agree that it is possible to unify a Hebbian network with a
multi-valued term logic (with an experience-grounded semantics). NARS is
exactly such a logic, where a statement is a link from one term to another,
and its truth value is the accumulated confirmation/disconfirmation record
about the relation. In NARS, the Hebbian learning rule corresponds to
comparison (with induction, abduction, and deduction as variants) plus
revision. Activation spreading corresponds to (time) resource allocation.

BTW, Pavlov's conditioning is similar to Hebbian learning, and can also be
seen as a special case of induction in a (higher-order) multi-valued term
logic.

Pei

----- Original Message ----- 
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, December 20, 2003 8:26 PM
Subject: [agi] The emergence of probabilistic inference from hebbian
learning in neural nets


>
> Hi,
>
> For those with the combination of technical knowledge and patience required
> to sift through some fairly mathematical and moderately speculative cog-sci
> arguments... some recent thoughts of mine have been posted at
>
> http://www.goertzel.org/dynapsyc/2003/HebbianLogic03.htm
>
> The topic is:
> **How to construct a neural network so that symbolic logical inference
> will emerge from its dynamics?**
>
> This is not directly relevant to my own current AI work (Novamente,
> www.agiri.org), which is not neural network based.  However, it is
> conceptually related to Novamente; and more strongly conceptually related to
> Webmind, the previous AGI design with which I was involved.  It is also
> loosely related to Pei Wang's NARS inference system.
>
> While my guess is that this is not the most effective path to AGI at
> present, I do think it's a very interesting area for research and an
> exploration-worthy potential path toward AGI.
>
> Apologies for the rough-draft-ish-in-places document formatting ;-)
>
> -- Ben
>
>

