On Sun, Dec 18, 2005 at 09:55:32PM +0100, Shane Legg wrote:

> To my mind the key thing with neural networks is that they
> are based on large numbers of relatively simple units that
> interact in a local way by sending fairly simple messages.

That's just one facet. Biological networks contain units of many
different classes, with different message types and *changing
connectivity*, depending on rich internal and external
state (the morphogenetic code).
 
> What you seem to be criticising in your memo is what I'd call
> "feed forward neural networks".

Of course, Minsky & Papert showed in 1969 that all ANNs are useless,
by virtue of the fact that single-layer perceptrons can't solve
problems that are not linearly separable (the classic example being XOR).
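The limitation is easy to demonstrate. A minimal sketch (hypothetical code, not from the original post): the classic perceptron learning rule converges on AND, which is linearly separable, but can never reach full accuracy on XOR, which is not.

```python
# A single-layer perceptron: one weight vector plus bias, step activation.
# The perceptron learning rule finds a separating line for AND,
# but no line can separate the XOR classes, so it never gets all four right.

def train_perceptron(samples, epochs=100, lr=0.1):
    """Classic perceptron learning rule on 2-input boolean samples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    hits = sum(
        (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
        for (x1, x2), t in samples
    )
    return hits / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w_and, b_and = train_perceptron(AND)
w_xor, b_xor = train_perceptron(XOR)
print(accuracy(AND, w_and, b_and))  # 1.0: linearly separable, converges
print(accuracy(XOR, w_xor, b_xor))  # below 1.0: no linear boundary exists
```

Adding even one hidden layer removes the limitation, which is why the result says nothing about multi-layer networks.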

The inescapable logic of it killed ANN R&D funding for about
15 years.

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
