Steve Richfield wrote:
THIS is a big question. Remembering that absolutely ANY function can be
performed by passing the inputs through a suitable non-linearity, adding
them up, and running the result through another suitable non-linearity,
it isn't clear what the limitations of "linear" operations are, given
suitable "translation" of units or point-of-view. Certainly, all fuzzy
logical functions can be performed this way. I even presented a paper at
the very 1st NN conference in San Diego, showing that one of the two
inhibitory synapses ever to be characterized was precisely what was
needed to perform an AND NOT on the logarithms of the probabilities of
assertions being true, right down to the discontinuity at 1.
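Concretely, assuming independent assertions, that AND NOT reduces to
exactly the structure above: one input goes through a non-linearity,
and the two terms are simply added. A minimal sketch (the function
name and numbers are mine, purely illustrative):

    import math

    def and_not_log(log_p_a, log_p_b):
        # log P(A AND NOT B) for independent assertions A and B.
        # The second input passes through the non-linearity
        # log(1 - e^y); the two log terms are then just ADDED.
        p_b = math.exp(log_p_b)
        if p_b >= 1.0:
            return float("-inf")   # the discontinuity at P(B) = 1
        return log_p_a + math.log(1.0 - p_b)

    # P(A) = 0.9, P(B) = 0.5  ->  P(A AND NOT B) = 0.9 * 0.5 = 0.45
    print(math.exp(and_not_log(math.log(0.9), math.log(0.5))))   # 0.45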
Steve,
You are stating a well-known point of view which makes no sense, and
which has been widely discredited in cognitive science for five decades:
you are stating [one version of] the core of the Behaviorist manifesto.
Yes, in principle you could argue that intelligent systems consist only
of a black box with one gargantuan nonlinear function that maps inputs
to outputs.
The trouble is that such a "flat" system is only possible in principle:
it would be ridiculously huge, and it gives us no clue about how such a
mapping could be learned through experience.
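To put a number on "ridiculously huge", take the flattest possible
implementation, a lookup table: over n binary inputs it needs 2^n
entries. A back-of-the-envelope sketch (the toy 32x32 "retina" is my
own illustrative assumption):

    # Flat lookup table mapping every possible input pattern to an output.
    # Even a toy 32x32 binary "retina" (1024 inputs) is beyond hopeless:
    n = 32 * 32
    entries = 2 ** n            # one table entry per distinct input pattern
    print(len(str(entries)))    # 309 digits, i.e. about 10^308 entries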
So the fact that everything could IN PRINCIPLE be done in this
simplistic, flat kind of system means nothing. The devil is in the
details, and the details are just ridiculous.
One problem is that this idea - this "Hey!! Let's Just Explain It With
One Great Big Nonlinear Function, Folks!!!!" idea - keeps creeping back
into the cognitive science-neural nets-artificial intelligence complex.
Otherwise sensible people keep accidentally reintroducing it without
really understanding what they are doing, without understanding the
ramifications of this idea.
That is why it is meaningless to say something like "Make that
present-day PCA. Several people are working on its limitations, and
there seems to be some reason for hope of much better things to come."
There is little reason to hope for better things to come (except for the
low-level mechanisms that Derek quite correctly pointed out), because
the whole PCA idea is a dead end.
A dead end as a general AGI theory, mark you. It has its uses.
Richard Loosemore