Ben Goertzel wrote:
>> The limited expressive scope of classic ANNs was actually essential
>> for getting relatively naïve and simplistic learning algorithms (e.g.
>> backprop, Hebbian learning) to produce useful solutions to an
>> interesting (if still fairly narrow) class of problems.
>
> Well, recurrent NN's also have universal applicability, just like
> probabilistic logic systems.
And not coincidentally, designing learning algorithms that work well on
recurrent networks is much harder than designing them for non-recurrent
ones, though many of the more extreme ANN fans seem to be in denial of
this (or of the fact that fine-grained recurrence actually matters); the
sketch below illustrates one standard reason why.

In general I am more in favour of designing powerful learning algorithms
that work on rough fitness landscapes than of designing a substrate that
flattens the apparent fitness landscape for the relevant classes of
problem. The former approach scales better, forces you to understand what
you're doing more deeply, and is usually more compatible with reflection
and a causally clean goal system. The latter approach is more compatible
with the zero-foresight and incremental-dev-path restrictions of
evolution, but humans shouldn't be hobbled by those.
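As a concrete illustration of that first point, here is a toy NumPy sketch
(purely illustrative; every name in it is made up for the example) of the
well-known vanishing/exploding gradient problem: backprop through T steps
of a recurrent net chains T Jacobians together, and that product tends to
shrink or blow up geometrically in T.

    import numpy as np

    # Toy sketch of vanishing/exploding gradients in recurrent nets.
    # All names are illustrative; this is not from any particular system.

    rng = np.random.default_rng(0)
    n = 16                                         # hidden state size (arbitrary)
    W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))  # recurrent weight matrix

    def gradient_norm_through_time(W, T):
        """Norm of d h_T / d h_0 for the unrolled dynamics h_{t+1} = tanh(W h_t).

        Backprop-through-time multiplies one Jacobian per step; this product
        of T Jacobians is what makes long-range credit assignment hard.
        """
        h = rng.normal(size=W.shape[0])
        J = np.eye(W.shape[0])
        for _ in range(T):
            h = np.tanh(W @ h)
            # Jacobian of tanh(W h) w.r.t. h is diag(1 - tanh(W h)^2) @ W
            J = np.diag(1.0 - h ** 2) @ W @ J
        return np.linalg.norm(J)

    for T in (1, 5, 10, 50, 100):
        print("T = %3d   ||dh_T/dh_0|| = %.3e" % (T, gradient_norm_through_time(W, T)))

With W initialised at this scale the Jacobian product typically collapses
towards zero as T grows; scale W up by a factor of a few and it explodes
instead. Either way, a naive gradient-based learner gets an unusable
training signal across long time lags, which is one concrete sense in
which learning on recurrent substrates is harder.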
Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com