Shane and Ben,

Thanks for the comments.

Let me clarify some general points first.

(1) My memo is not intended to cover every system labeled as a "neural network"

--- that is why I devote a whole section to defining what I mean by
the "NN model" discussed in the paper. I'm fully aware that in a field
as diverse as neural networks, any non-trivial summary will have
exceptions. For example, I don't think my description covers the ideas
of Jeff Hawkins very well.

My strategy is to first discuss the most typical models of the "neural
network" family (or the "standard NN architectures", as Ben put it),
as the term is usually understood by most people at the current time.
After that, we can study the special cases one by one, to see what
makes them different and how far they can go. Therefore, though the
model I specified doesn't cover every possible neural network (which I
never claimed), it is not a straw man.

Of course, if someone can suggest a different general summary of
neural networks which covers more cases more accurately, I'd be glad
to see it. However, refusing any such attempt is not a good idea for
AGI study, because then I don't think we can ever compare major
techniques, since every technique comes with many variants.

For example, I don't mind if someone says that "logic-based AI
research at the current time usually has the limitation of ..." as
long as the author doesn't claim that it covers every type of logic,
including NARS, which is highly atypical. Actually, I have to make
these statements myself, before I explain what makes NARS different.

(2) My memo is not intended to discuss what neural networks "can" or "cannot" do

--- I've said that explicitly on page 6. What I want to discuss is
whether NN is the best choice for AGI, and why. Obviously every
technique has its strengths and weaknesses, but that doesn't mean we
cannot make a choice when facing a concrete problem. Even if a neural
network can do something in principle, it doesn't mean that thing
cannot be a weakness of the technique.

(3) Neuroscience results cannot be directly used to support
"artificial neural networks"

The human brain is no doubt a neural network, at a certain level of
description. On the other hand, what we call "artificial neural
network" today is a (fuzzy) set of formal models that share certain
properties with the brain, described in that way. We cannot draw
confident conclusions from an inference like "The human brain has
property P; the brain is a neural network; therefore, we can build a
neural network with property P". Even when the conclusion happens to
be true, it doesn't mean that a neural network is the preferred way to
get P in AGI research.

BTW, the "Bill Clinton neuron" example directly conflicts with the
idea of distributed representation, so it is actually a piece of
negative evidence against NN. Of course, we can relax the definition
of "artificial neural networks" to allow it, but if we keep doing this
we can even call NARS a neural network (which may make some people
like it better). However, in this way the concept of "artificial
neural networks" may become too stretchy to be meaningful. For
example, that is why I dislike the term "agent" --- if every system
can be labeled as an "agent", then the label means nothing.

Pei
