> What you seem to be criticising in your memo is what I'd call
> "feed forward neural networks".
I see what you mean, though in the memo I didn't rule out feedback.
Recurrence makes all the difference...
For example, consider a very simple neural network model:
rational-valued weights and activations, a fixed topology, no
learning rules or weight changes allowed, and only a trivial
saturated-linear activation function (linear, clipped to [0, 1]).
If you allow only feed-forward connections, such a network can
compute nothing more than (piecewise) linear functions of its
inputs. Pretty dumb.
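As a quick sanity check of the purely linear case (a sketch using
numpy, not anything from the memo): stacking feed-forward layers
with an identity activation collapses into a single matrix, so
depth buys no extra expressive power.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # layer 1 weights
W2 = rng.standard_normal((2, 4))   # layer 2 weights
x = rng.standard_normal(3)         # an arbitrary input

# Two feed-forward layers with a linear (identity) activation...
deep = W2 @ (W1 @ x)

# ...are exactly one linear map, however many layers you stack.
shallow = (W2 @ W1) @ x

assert np.allclose(deep, shallow)
```

The same collapse happens for any number of layers, which is why
a nonlinearity (or, as below, recurrence plus a clamp) is needed
to get anything interesting.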
On the other hand, if you allow these networks to have recurrent
connections, the model is in fact Turing complete. Indeed, people
have built networks in this model that simulate classical
universal Turing machines (see the work of Siegelmann and
Sontag). There are even compilers for high-level languages like
Occam that will output a recurrent neural network to execute
your program (see the work of Neto, for example).
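The core trick behind such constructions can be sketched in a few
lines: a single rational-valued unit can encode an unbounded
stack of bits, with push and pop implemented as affine updates
plus a saturated-linear clamp. The toy below (using Python's
exact Fraction type) illustrates that idea; it is a simplified
sketch, not Siegelmann's actual construction.

```python
from fractions import Fraction

def sat(z):
    """Saturated-linear activation: min(1, max(0, z))."""
    return min(Fraction(1), max(Fraction(0), z))

def push(x, b):
    # One affine "neuron" step: the new bit b becomes the most
    # significant bit of the binary expansion of the state x.
    return (x + b) / 2

def pop(x):
    # Read the top bit with a threshold (x >= 1/2), then shift
    # it out; both steps are affine maps plus the clamp above.
    b = 1 if x >= Fraction(1, 2) else 0
    return b, sat(2 * x - b)

x = Fraction(0)            # state of one unit = an empty stack
for b in [1, 1, 0]:
    x = push(x, b)

out = []
for _ in range(3):
    b, x = pop(x)
    out.append(b)
print(out)   # bits come back in reverse (LIFO) order: [0, 1, 1]
```

Two such stack-units plus a finite control network are enough to
simulate a Turing machine tape, which is where the Turing
completeness comes from.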
Anyway my point is, once you allow recurrent connections even
trivial types of neural networks can, in theory, compute anything.
This is why I prefer to think of NNs as a computational paradigm
rather than a class of techniques, algorithms or methods.
You could argue that NARS is a better way of thinking about or
expressing AGI, or something like that, perhaps. Just as you
might argue that C is a better way of programming than machine
code. However, the limitations aren't fundamental: indeed, I
could write a NARS system in Occam and then compile it to run
as a neural network.
Shane
