Richard,

On 11/20/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Steve Richfield wrote:
>
>> Richard,
>>  Broad agreement, with one comment from the end of your posting...
>>  On 11/20/08, *Richard Loosemore* <[EMAIL PROTECTED]> wrote:
>>
>>    Another, closely related thing that they do is talk about low level
>>    issues without realizing just how disconnected those are from where
>>    the real story (probably) lies.  Thus, Mohdra emphasizes the
>>    importance of "spike timing" as opposed to average firing rate.
>>
>>  There are plenty of experiments showing that consecutive, closely spaced
>> pulses result when something goes "off scale", probably the equivalent of
>> computing Bayesian probabilities > 100%, somewhat akin to the "overflow"
>> light on early analog computers. These closely spaced pulses have a MUCH
>> larger post-synaptic effect than the same number of regularly spaced pulses.
>> However, as far as I know, this only occurs during anomalous situations -
>> maybe when something really new happens that might trigger learning?
>>  IMHO, it is simply not possible to play this game without having a close
>> friend with years of experience poking mammalian neurons. This stuff is
>> simply NOT in the literature.
>>
>>    He may well be right that the pattern or the timing is more
>>    important, but IMO he is doing the equivalent of saying "Let's talk
>>    about the best way to design an algorithm to control an airport.
>>     First problem to solve:  should we use Emitter-Coupled Logic in the
>>    transistors that are in our computers that will be running the
>>    algorithms."
>>
>>  Still, even with my above comments, your conclusion is correct.
>>
>
> The main problem is that if you interpret spike timing to be playing the
> role that you (and they) imply above, then you are committing yourself to a
> whole raft of assumptions about how knowledge is generally represented and
> processed.  However, there are *huge* problems with that set of implicit
> assumptions .... not to put too fine a point on it, those implicit
> assumptions are equivalent to the worst, most backward kind of cognitive
> theory imaginable.  A theory that is 30 or 40 years out of date.


OK, so how else do you explain that, in fairly well understood situations
like stretch receptors, the rate indicates the stretch UNLESS you exceed
the mechanical limit of the associated joint, whereupon you start getting
pulse doublets, triplets, etc.? Further, these pulse groups have a HUGE
effect on post-synaptic neurons. What does your cognitive science tell
you about THAT?
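
To pin down what I mean, here is a minimal toy model (my own sketch, not
anything from the neuroscience literature; every constant in it is an
invented assumption) of a rate-coding receptor that switches to doublets
past the mechanical limit, feeding a synapse with a crude stand-in for
paired-pulse facilitation:

    def receptor_train(stretch, limit=1.0, max_rate=50.0, window=1.0, gap=0.005):
        """Spike times (s) over `window` seconds. Within the mechanical
        limit the firing rate encodes stretch; past it, every spike is
        emitted as a closely spaced doublet (`gap` seconds apart)."""
        rate = max_rate * min(stretch, limit)
        if rate <= 0:
            return []
        period = 1.0 / rate
        times, t = [], 0.0
        while t < window:
            times.append(t)
            if stretch > limit:             # anomalous range: doublets
                times.append(t + gap)
            t += period
        return times

    def postsynaptic_drive(times, facilitation=4.0, pair_window=0.010):
        """Sum per-spike impact; a spike arriving within `pair_window` of
        the previous one counts `facilitation` times as much."""
        total, prev = 0.0, None
        for t in times:
            close = prev is not None and (t - prev) < pair_window
            total += facilitation if close else 1.0
            prev = t
        return total

    normal  = receptor_train(0.9)             # 45 regular spikes -> drive 45
    overrun = receptor_train(1.2)             # 50 doublets, 100 spikes -> drive 250
    evenly  = [i * 0.01 for i in range(100)]  # same 100 spikes, regular -> drive 100
    print(postsynaptic_drive(normal), postsynaptic_drive(overrun),
          postsynaptic_drive(evenly))

The point of the toy: a pure rate decoder sees the overrun train as merely
"more spikes", while a facilitation-sensitive synapse sees a qualitatively
different signal - which is exactly what lets doublets act as an "overflow"
flag.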



> The gung-ho neuroscientists seem blissfully unaware of this fact because
>  they do not know enough cognitive science.


I posed a Ben's List challenge a while back that you apparently missed, so
here it is again.

*You can ONLY learn how a system works by observation to the extent that
its operation is imperfect. Where it is perfect, it represents a solution
to the environment in which it operates, and as such could be built in
countless different ways so long as it operates perfectly. Hence,
computational delays, etc., are fair game, but observed cognition and
behavior are NOT, except to the extent that perfect cognition and behavior
can be described, whereupon the difference between observed and theoretical
contains the information about construction.*
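
To make that concrete, here is a tiny sketch (mine, with invented numbers):
suppose a system's only imperfection is a hidden processing delay. The
ideal response tells you nothing about construction, but the residual
between observed and ideal behavior recovers the delay exactly:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 1000)
    ideal = np.sin(t)                  # the "perfect" response to the task
    delay = 0.3                        # hidden construction detail (seconds)
    observed = np.sin(t - delay) + rng.normal(0, 0.01, t.size)

    # Read the construction detail off the observed-vs-ideal misfit alone:
    dt = t[1] - t[0]
    errs = [np.mean((observed[k:] - ideal[:t.size - k]) ** 2)
            for k in range(100)]
    print(np.argmin(errs) * dt)        # ~0.3: the delay, recovered

A system with zero residual would be consistent with countless
implementations; only the deviation discriminates among them.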
*A perfect example of this is superstitious learning, which on its surface
appears to be an imperfection. However, we must use incomplete data to make
imperfect predictions if we are ever to interact with our environment, so
superstitious learning is theoretically unavoidable. Trying to compute what
is "perfect" for superstitious learning is a pretty challenging task, as it
involves factors like the regularity of disastrous events throughout
evolution, etc.*
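
A hedged toy along those lines (entirely my construction; the prior and the
cost asymmetry are invented assumptions): a perfectly Bayesian agent that
has seen a cue co-occur with disaster exactly once, and has never seen the
cue on a safe day, should already avoid that cue:

    def p_danger_given_cue(prior=0.01, coincidences=1):
        """Beta-Bernoulli update: probability that the cue really predicts
        danger after `coincidences` co-occurrences and no safe exposures.
        The prior is encoded as 100 pseudo-observations."""
        a, b = prior * 100, (1 - prior) * 100
        return (a + coincidences) / (a + b + coincidences)

    def should_avoid(p_danger, cost_disaster=1000.0, cost_avoidance=1.0):
        """Avoid the cue iff expected disaster loss exceeds avoidance cost."""
        return p_danger * cost_disaster > cost_avoidance

    p = p_danger_given_cue()     # Beta(1, 99) prior -> 2/101, about 0.02
    print(p, should_avoid(p))    # True: "rational superstition"

On these numbers the correlation is almost certainly coincidence, yet
avoidance is still the optimal policy. How superstitious the perfect
learner should be is set by exactly the factors named above: the base rate
and cost of disasters over evolutionary time.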

If anyone has successfully done this, I would be very interested. This is
because of my interest in central metabolic control issues, wherein
superstitious "red tagging" appears to be central to SO many age-related
conditions. For now, I am blindly assuming perfection in neural computation
and proceeding accordingly. However, if I could recognize and
understand any imperfections (none are known), I might be able to save
(another) life or two along the way with that knowledge.

Anyway, this suggests that much of cognitive "science", which has NOT
computed this difference but instead runs with the "raw data" of
observation, is questionable at best. For reasons such as this, I
(perhaps prematurely and/or improperly) dismissed cognitive science rather
early on. Was I in error to do so?

Steve Richfield


