Steve Richfield wrote:
Richard,

On 11/20/08, *Richard Loosemore* <[EMAIL PROTECTED]> wrote:

    Steve Richfield wrote:

        Richard,
         Broad agreement, with one comment from the end of your posting...
         On 11/20/08, *Richard Loosemore* <[EMAIL PROTECTED]> wrote:

            Another, closely related thing that they do is talk about low
            level issues without realizing just how disconnected those are
            from where the real story (probably) lies.  Thus, Mohdra
            emphasizes the importance of "spike timing" as opposed to
            average firing rate.

         There are plenty of experiments that show that consecutive
        closely-spaced pulses result when something goes "off scale",
         probably the equivalent of computing Bayesian probabilities >
        100%, somewhat akin to the "overflow" light on early analog
        computers. These closely-spaced pulses have a MUCH larger
        post-synaptic effect than the same number of regularly spaced
        pulses. However, as far as I know, this only occurs during
        anomalous situations - maybe when something really new happens,
        that might trigger learning?
         IMHO, it is simply not possible to play this game without
        having a close friend with years of experience poking mammalian
        neurons. This stuff is simply NOT in the literature.

            He may well be right that the pattern or the timing is more
            important, but IMO he is doing the equivalent of saying "Let's
            talk about the best way to design an algorithm to control an
            airport.  First problem to solve: should we use Emitter-Coupled
            Logic in the transistors that are in our computers that will be
            running the algorithms."

          Still, even with my above comments, your conclusion is still
         correct.


    The main problem is that if you interpret spike timing to be playing
    the role that you (and they) imply above, then you are committing
    yourself to a whole raft of assumptions about how knowledge is
    generally represented and processed.  However, there are *huge*
    problems with that set of implicit assumptions .... not to put too
    fine a point on it, those implicit assumptions are equivalent to the
    worst, most backward kind of cognitive theory imaginable.  A theory
    that is 30 or 40 years out of date.

OK, so how else do you explain that, in fairly well understood situations like stretch receptors, the rate indicates the stretch UNLESS you exceed the mechanical limit of the associated joint, whereupon you start getting pulse doublets, triplets, etc.? Further, these pulse groups have a HUGE effect on post-synaptic neurons. What does your cognitive science tell you about THAT?
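
(For concreteness, here is a minimal Python sketch of the coding scheme being described. The encoder, the leaky-integrator synapse, and every constant in it - the rates, the thresholds, the 15 ms time constant - are illustrative assumptions, not claims about real stretch receptors; the point is only that a firing rate can carry the in-range signal, while closely spaced doublets/triplets both flag the out-of-range condition and summate into a much larger postsynaptic effect.)

import math

def encode_stretch(stretch, limit=1.0, max_rate_hz=100.0):
    # Toy encoder: firing rate tracks the stretch within the joint's range;
    # past the mechanical limit the same spikes arrive as doublets/triplets.
    rate_hz = max_rate_hz * min(stretch, limit)
    if stretch <= limit:
        return rate_hz, 1                          # regular single spikes
    return rate_hz, (2 if stretch <= 1.5 * limit else 3)

def peak_epsp(spike_times_ms, tau_ms=15.0, epsp_mv=1.0):
    # Peak postsynaptic potential if every spike adds an exponentially
    # decaying bump: closely spaced spikes summate, widely spaced ones do not.
    return max(
        sum(epsp_mv * math.exp(-(t - s) / tau_ms)
            for s in spike_times_ms if s <= t)
        for t in spike_times_ms)

print(encode_stretch(0.5))              # in range: rate code, single spikes
print(encode_stretch(1.8))              # past the limit: triplets
print(peak_epsp([0.0, 100.0, 200.0]))   # ~1.0 mV: three spikes 100 ms apart
print(peak_epsp([0.0, 5.0, 10.0]))      # ~2.2 mV: a 5 ms-spaced triplet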

See my parallel reply to Ben's point: I was talking about the fact that neuroscientists make these claims about high-level cognition; I was not referring to the cases where they try to explain low-level sensory and motor periphery functions like stretch receptor neurons.

So, to clarify: yes, it is perfectly true that the very low-level perceptual and motor systems use simple coding techniques. We have known for decades (since Hubel and Wiesel) that retinal ganglion cells use simple coding schemes, etc., etc.
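
(Again for concreteness, a toy example of what a "simple coding scheme" can mean here: a center-surround cell modeled as a difference of Gaussians, whose output rate is just a rectified weighted sum of local intensity. The receptive-field sizes, weights, and test image below are arbitrary assumptions - a sketch, not a model of any real ganglion cell.)

import math

def dog_weight(dx, dy, sigma_center=1.0, sigma_surround=3.0):
    # Difference-of-Gaussians weight: excitatory center, inhibitory surround.
    r2 = dx * dx + dy * dy
    center = math.exp(-r2 / (2 * sigma_center ** 2)) / (2 * math.pi * sigma_center ** 2)
    surround = math.exp(-r2 / (2 * sigma_surround ** 2)) / (2 * math.pi * sigma_surround ** 2)
    return center - surround

def ganglion_rate(image, x, y, radius=6, max_rate_hz=100.0):
    # Output "firing rate" is just a rectified, weighted sum of local intensity.
    drive = sum(dog_weight(dx, dy) * image[y + dy][x + dx]
                for dy in range(-radius, radius + 1)
                for dx in range(-radius, radius + 1))
    return max(0.0, min(max_rate_hz, max_rate_hz * drive))

size = 15
spot = [[1.0 if (x - 7) ** 2 + (y - 7) ** 2 <= 2 else 0.0 for x in range(size)]
        for y in range(size)]
uniform = [[1.0] * size for _ in range(size)]
print(ganglion_rate(spot, 7, 7))      # small bright spot on dark: strong response
print(ganglion_rate(uniform, 7, 7))   # uniform light: center and surround largely cancel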

But the issue I was discussing was about the times when neuroscientists make statements about high-level concepts and the processing of those concepts. Many decades ago people suggested that perhaps these concepts were represented by single neurons, but that idea was shot down very quickly. Over the years we have found such sophisticated information-processing effects occurring in cognition that it is very difficult to see how single neurons (or multiple redundant sets of neurons) could carry out those functions.

This idea is so discredited that it is hard to find references on the subject: its rejection has been accepted for so long that it is simply common knowledge in the cognitive science community.



    The gung-ho neuroscientists seem blissfully unaware of this fact
    because they do not know enough cognitive science.

I stated a Ben's List challenge a while back that you apparently missed, so here it is again. *You can ONLY learn how a system works by observation to the extent that its operation is imperfect. Where it is perfect, it represents a solution to the environment in which it operates, and as such could be built in countless different ways so long as it operates perfectly. Hence, computational delays, etc., are fair game, but observed cognition and behavior are NOT, except to the extent that perfect cognition and behavior can be described, whereupon the difference between observed and theoretical contains the information about construction.*

*A perfect example of this is superstitious learning, which on its surface appears to be an imperfection. However, we must use incomplete data to make imperfect predictions if we are to ever interact with our environment, so superstitious learning is theoretically unavoidable. Trying to compute what is "perfect" for superstitious learning is a pretty challenging task, as it involves factors like the regularity of disastrous events throughout evolution, etc.*

If anyone has successfully done this, I would be very interested. This is because of my interest in central metabolic control issues, wherein superstitious "red tagging" appears to be central to SO many age-related conditions. Now, I am blindly assuming perfection in neural computation and proceeding on that assumption. However, if I could recognize and understand any imperfections (none are known), I might be able to save (another) life or two along the way with that knowledge.

Anyway, this suggests that much of cognitive "science", which has NOT computed this difference but rather is running with the "raw data" of observation, is rather questionable at best. For reasons such as this, I (perhaps prematurely and/or improperly) dismissed cognitive science rather early on. Was I in error to do so?
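
(A minimal sketch of the superstitious-learning point, under arbitrary assumptions about probabilities and trial counts: a learner that has to estimate an association from a handful of trials will sometimes "find" a cue-outcome link that is not there, simply because small samples are noisy - which is one way of saying that superstitious learning is unavoidable for anything that must act on incomplete data.)

import random

random.seed(1)

def estimated_association(n_trials, p_cue=0.5, p_shock=0.5):
    # Estimate P(shock | cue) - P(shock | no cue) from n_trials in which the
    # cue and the shock are in fact independent (the true association is zero).
    with_cue = shock_with_cue = without_cue = shock_without_cue = 0
    for _ in range(n_trials):
        cue = random.random() < p_cue
        shock = random.random() < p_shock        # independent of the cue
        if cue:
            with_cue += 1
            shock_with_cue += shock
        else:
            without_cue += 1
            shock_without_cue += shock
    return (shock_with_cue / max(with_cue, 1)
            - shock_without_cue / max(without_cue, 1))

# With only 10 trials the estimate is often far from zero, so a learner that
# must act now will sometimes behave "superstitiously"; with plenty of data
# the spurious association washes out.
print([round(estimated_association(10), 2) for _ in range(5)])
print(round(estimated_association(10000), 3))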

I cannot make much detailed sense of what you say in the above (it seems pitched at such a high level of abstraction and generality that I doubt its validity). You speak of "perfection" and "imperfection" in ways that make me confused about what sorts of perfection and imperfection you could possibly mean.

However, having said all that, I can comment on your last paragraph. You say that cognitive science is "running with the raw data" of observation. I cannot find any way to understand this statement that does not lead directly to the conclusion that it is completely and utterly wrong. Cognitive science involves a huge theoretical interpretation of raw data. You seem to be implying that cognitive science is all about cataloguing data (or something: I am really not sure what you mean). This is so far from the truth that I can only express extreme astonishment that you would say that.




Richard Loosemore
