hi,

> My strategy is to first discuss the most typical models of the "neural
> network" family (or the "standard NN architectures", as Ben put it),
> since that is what the term usually means to most people at present.
> After that, we can study the special cases one by one, to see what
> makes them different and how far they can go. Therefore, though the
> model I specified doesn't cover every possible neural network (which I
> never claimed), it is not a straw man.
>
> Of course, if someone can suggest a different general summary of
> neural networks which covers more cases more accurately, I'd be glad
> to see it.

I think I prefer Daniel Amit's approach, where one views NNs as the
class of nonlinear dynamical systems composed of networks of
neuron-like elements.

Then, it becomes clear that the standard NN architectures form a very
small subclass of possible NNs...
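
To make that framing concrete, here is a minimal Python sketch (the
discrete-time synchronous update, numpy, and the tanh nonlinearity are
just illustrative choices on my part, not anything specific to Amit):
a network of neuron-like elements is a state vector evolved by a
nonlinear map, x(t+1) = f(W x(t)), and a standard feedforward net is
the special case where the connection matrix W is layered and acyclic
and you stop after one pass through the layers.

    import numpy as np

    def iterate_network(x, W, f=np.tanh, steps=10):
        # Generic discrete-time network dynamics: x(t+1) = f(W x(t)).
        # W may encode any topology: recurrent, sparse, layered, etc.
        for _ in range(steps):
            x = f(W @ x)
        return x

    # e.g. a random 5-neuron recurrent net iterated for 10 steps:
    rng = np.random.default_rng(0)
    W = rng.normal(size=(5, 5)) / np.sqrt(5)
    x = iterate_network(rng.normal(size=5), W)

Anything with recurrence, asymmetric weights, time-varying connections,
or stochastic units fits in this class too, which is why the standard
architectures are such a small corner of it.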

As for whether NNs are the best architecture for AGI right now, I
agree that they are not.  My reason is that no one knows how complex,
abstract knowledge can be efficiently and adaptably represented in NNs.

I'm sure there *is* a way to do it, but since no one has discovered it
yet (via mathematical theory, computational experimentation, or
neuroscience), NN-based AGI is, in my view, basically a non-starter.

Once this conceptual problem is solved, NN-based AGI may become a good
strategy.  However, I still think it will wind up being worse than
probabilistic-logic-based AGI, at least on von Neumann computers (or
networks thereof), because NN architectures don't take good advantage
of the particular strengths of von Neumann computers.  Such machines
can do very precise operations very quickly in serial, a quite
different strength from the human brain's, and one which is in large
part wasted when computers are used for NN computation.
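
To illustrate what I mean by "wasted", here is a rough pure-Python
sketch (the fully connected topology and the tanh nonlinearity are
just illustrative assumptions on my part): one synchronous update of
an n-neuron net is on the order of n^2 floating-point multiply-adds,
which a serial machine has to grind through one after another, and
none of which draws on its real strength, fast exact discrete
operations.

    import math

    def nn_step(state, weights):
        # One synchronous update of every neuron: each output needs a
        # full pass over the state vector, so a serial machine executes
        # roughly n*n multiply-accumulates strictly one at a time.
        n = len(state)
        new_state = [0.0] * n
        for i in range(n):
            acc = 0.0
            for j in range(n):
                acc += weights[i][j] * state[j]
            new_state[i] = math.tanh(acc)
        return new_state

    # e.g. a 3-neuron toy net:
    state = [0.1, -0.2, 0.3]
    weights = [[0.0, 0.5, -0.5],
               [0.5, 0.0, 0.5],
               [-0.5, 0.5, 0.0]]
    state = nn_step(state, weights)

A brain does the analogous work with massive parallelism and low
precision per unit, so the serial exactness the CPU spends here is
largely beside the point.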

ben g
