Hi Pei,

As usual, I disagree!

I think you are making a straw man argument.

The problem is that what you describe as neural networks is just one
limited class of neural networks.  That class has the limitations which
you point out.  However, you can't then extend those conclusions to
neural networks in general.  For example...

You say, "Starting from an initial state determined by an input vector..."
For recurrent NNs this isn't true, or at least your description is
confusing.  The state is a product of the history of inputs, rather than
being determined by "an input vector".  Similarly, I wouldn't say that
NNs are about input-output function learning.  Backprop NNs are about
this when used in simple feed-forward configurations.  However, it isn't
true of NNs in general, and in particular it's not true of recurrent NNs.
See for example liquid state machines or echo state networks.
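To make the point concrete, here is a minimal sketch (my own
illustration, not something from your paper) of an echo state network
style update.  The network sizes and scaling factors are arbitrary
assumptions; the point is just that the reservoir state at any time
depends on the whole input history, not on a single input vector:

import numpy as np

# Minimal echo state network sketch (illustrative only; the sizes
# and scaling constants below are arbitrary assumptions).
rng = np.random.default_rng(0)
n_in, n_res = 3, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # recurrent reservoir weights

# Scale the reservoir so its spectral radius is below 1, which gives
# the "echo state" property (old inputs fade but are never ignored).
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def step(state, x):
    # The new state mixes the current input with the previous state,
    # so after many steps it encodes the entire input history.
    return np.tanh(W_in @ x + W @ state)

state = np.zeros(n_res)
for x in rng.normal(size=(50, n_in)):  # a stream of 50 input vectors
    state = step(state, x)
# 'state' is now a function of all 50 inputs, not of the last one alone.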

I also wouldn't be so sure about neurons not being easily mapped to
conceptual units.  In recent years neuroscientists have found that
small groups of neurons in parts of the human brain correspond to very
specific things.  One famous case is the "Bill Clinton neuron".  Of course
you're talking about artificial NNs, not real brains.  Nevertheless, if
biological NNs can have this quasi-symbolic nature in places, I can't see
how you could argue that artificial NNs can't do the same due to some
fundamental limitation.

I have other things to say as well, but my main problem with the paper
is what I've described above.  I don't think your criticisms apply to neural
networks in general.

Shane


