Steve Richfield wrote:
Richard,
Good - you hit the nail on the head! Continuing... On 7/22/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:

    Steve Richfield wrote:

        THIS is a big question. Remembering that absolutely ANY function
        can be performed by passing the inputs through a suitable
        non-linearity, adding them up, and running the results through
        another suitable non-linearity, it isn't clear what the
        limitations of "linear" operations are, given suitable
        "translation" of units or point-of-view. Certainly, all fuzzy
        logical functions can be performed this way. I even presented a
        paper at the very first NN conference in San Diego, showing that
        one of the only two inhibitory synapses ever to have been
        characterized was precisely what was needed to perform an AND NOT
        on the logarithms of the probabilities of assertions being true,
        right down to the discontinuity at 1.


    Steve,

    You are stating a well-known point of view which makes no sense, and
    which has been widely discredited in cognitive science for five decades:

I don't really understand how it is possible to "discredit" a prospective solution that is not yet known, other than by pointing to people's inability to arrive at it - e.g., the way people have been unable to parse English using POS-based approaches, given ~40 years to try.

I am going to have to stop. How can I explain how this idea became discredited, using only the space available in one list post, when it takes an entire course in cognitive science to drill it into the heads of undergraduate cog sci students (and quite often it does not click even then)?

Richard Loosemore

     you are stating [one version of] the core of the Behaviorist manifesto.

Close, but not exactly. I believe that there is a common math basis with some "tweaks" as needed for things that don't "fit the pattern".
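
To make the earlier AND NOT example concrete, here is a minimal sketch in Python of the log-probability computation I described (the function name and the independence assumption are mine, purely for illustration):

  import math

  def log_and_not(log_p_a, p_b):
      # Excitatory input arrives as log P(A); the inhibitory synapse
      # contributes log(1 - P(B)).  Assumes A and B are independent.
      if p_b >= 1.0:
          # The inhibitory term diverges as P(B) -> 1: this is the
          # discontinuity at 1, where A AND NOT B becomes impossible.
          return float("-inf")
      return log_p_a + math.log(1.0 - p_b)

  # P(A) = 0.9, P(B) = 0.5  ->  P(A AND NOT B) = 0.45
  print(math.exp(log_and_not(math.log(0.9), 0.5)))

Note that summation plus log/exp non-linearities is all that is needed here - that is the sense in which I mean the math basis is common.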

    Yes, in principle you could argue that intelligent systems consist
    only of a black box with one gargantuan nonlinear function that maps
    inputs to outputs.

Remembering that there are ~200 different types of neurons, probably some with different physical structure but the same math, and others with different math, it would be good to arrive at a full understanding of at least one of them, and move out from there.
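
To pin down what is being granted "in principle": the standard construction is a single layer of fixed non-linear hidden units followed by a trained linear readout. Here is a minimal sketch (the target function, unit count, and random-feature scheme are my choices, purely for illustration):

  import numpy as np

  rng = np.random.default_rng(0)

  # Toy target: a smooth 1-D function to be approximated
  x = np.linspace(-3.0, 3.0, 200)[:, None]
  y = np.sin(2.0 * x).ravel()

  # Fixed random affine maps pushed through a tanh non-linearity
  W = rng.normal(size=(1, 50))
  b = rng.normal(size=50)
  H = np.tanh(x @ W + b)

  # Linear readout fit by least squares - the "adding up" stage
  coef, *_ = np.linalg.lstsq(H, y, rcond=None)

  print(np.max(np.abs(H @ coef - y)))  # small residual on this toy problem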

    The trouble is that such a "flat" system is only possible in
    principle:  it would be ridiculously huge, and it gives us no clue
    about how it could be learned through experience.

    So the fact that everything could IN PRINCIPLE be done in this
    simplistic, flat kind of system means nothing.  The devil is in the
    details, and the details are just ridiculous.

Again, I am NOT proposing a single type of building block, but rather a family with a common mathematical underpinning, plus whatever "special sauce" these fail to provide.

    One problem is that this idea - this "Hey!! Let's Just Explain It
    With One Great Big Nonlinear Function, Folks!!!!" idea - keeps
    creeping back into the cognitive science / neural nets / artificial
    intelligence complex.

How about substituting ~200 for One?

    Otherwise sensible people keep accidentally reintroducing it without
    really understanding what they are doing; without understanding the
    ramifications of this idea.

    That is why it is meaningless to say something like "Make that
    present-day PCA. Several people are working on its limitations, and
    there seems to be some reason for hope of much better things to
    come." There is little reason to hope for better things to come
    (except for the low level mechanisms that Derek quite correctly
    pointed out), because the whole PCA idea is a dead end.

I hear that you are quite convinced of this, and if that is true, then I should become quite convinced as well. I just don't yet see how to get there (mentally burying PCA-like approaches and other similar NN-like views), given that something like them seems to be working for us.
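
By "PCA-like" I mean nothing more exotic than projecting centered data onto its dominant directions, as in this minimal sketch (synthetic data of my own choosing, PCA done via numpy's SVD):

  import numpy as np

  rng = np.random.default_rng(0)

  # Synthetic data: 500 samples in 10 dimensions with 2 dominant directions
  X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
  X = X + 0.05 * rng.normal(size=(500, 10))

  # PCA via SVD of the centered data matrix
  Xc = X - X.mean(axis=0)
  U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

  Z = Xc @ Vt[:2].T                # coordinates on the top two components
  print(S ** 2 / (len(X) - 1))     # variance captured per component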

This seems to be going the way of the discussion on the viability of ad hoc approaches to AGI that we had a couple of months ago, where I asked for the prima facie case that they should work, and got a bunch of opinions generally to the effect that people felt they could work, but couldn't state why they felt this way. Is that the case here - that you feel that PCA-like approaches can't work, but you can't make the prima facie case?

    A dead end as a general AGI theory, mark you.  It has its uses.

If I could see just one narrow application where something worked every bit as well as neurons in people do, then there would be some sort of starting point. Until then, nothing, not even PCA, would seem to "have its uses".

You have quite rightly moved the level of this discussion up to where it belongs. Now the challenge seems to be for one of us to "put a stake through the heart" of the other. You just got my spleen - would you care to take another shot?!

Steve Richfield



