Hi everybody.

I've got a couple of questions for you.

I'm a med student and I'm new to NuPIC.
I'm very impressed by what Numenta is achieving, and I believe that in the long run your work will be compared to the discovery of penicillin :)

My project, for now, is to produce a model able to detect neurological/psychiatric issues through simple EEG wave recognition. I'm an intern at the Neurosurgery dept., and my final goal would be to use pattern recognition as an intraoperative tool to help surgeons distinguish between healthy tissue and cancerous cells using only a continuous EEG/EMG data feed.

To get familiar with the machine learning world, and not having good enough datasets available, I have been using financial data (notoriously difficult, if not impossible, to predict) for a couple of years as a sandbox environment for experimenting with NNs. The results have been encouraging.

I'm still learning the Python code behind NuPIC, and I have two questions for you.

1 - FIRST QUESTION
In the paper "Hierarchical Temporal Memory" (version 0.2.1, 2011), I read that: "[...] The predominant view (especially in regards to the neo-cortex) is that the rate of spikes is what matters. Therefore the output of a cell can be viewed as a scalar value". I'm aware that turning a complex biological system such as the neocortex into computer software necessarily leads to simplifications. As we know, from a biological point of view the transmission of the signal is subject to numerous variables, and I wonder how implementing them could improve the software's predictions.
The variables I'd like to focus on are:
A) The propagation of the action potential along the membrane follows an exponential loss due to the resistances met along the axon. For the HTM model (where synapses express binary weights) this could mean that the more distant two connected cells are, the weaker their shared signal becomes (depending, of course, on "where" the dendrite segments are relative to their starting point; this would probably require introducing the physical concept of space into HTM).

B) The signal propagation speed is directly proportional to the diameter of the axon carrying it; this appears to hold both for unmyelinated and myelinated axons (though it is a more pronounced phenomenon in the latter). This could have a huge impact on HTM: if bigger axons (= higher weight) fire before smaller ones targeting the same dendrite, they can also temporarily inhibit the targeted cell (causing a later refractory period), thereby filtering the signal.

C) Receptors, neurotransmitters, electrical and chemical synapses, EPSPs (excitatory postsynaptic potentials) and IPSPs (inhibitory postsynaptic potentials). This is an enormous chapter. Current NN systems and, if my understanding is correct, NuPIC as well treat synapses as if they were all electrical synapses. In reality, according to the current consensus, the mammalian brain uses electrical synapses mostly to "synchronize" vast areas of the neocortex (I'm deliberately omitting other findings because they are not relevant to my point).
Although electrical synapses demonstrate various advantages compared to their chemical equivalents (speed, resistance, fatigue, etc.), it appears that the complexity and the fine filtering/modulation of signals inside the PFC is due to the presence of numerous other elements found in chemical synapses: neurotransmitters (such as acetylcholine, dopamine, GABA, norepinephrine...); pre-synaptic, synaptic-gap and post-synaptic features; different receptors; etc. Each of these elements can strongly influence the signal and the overall "learning" process. For example: even though an axon's "weight" is large and it is firing copiously, the elements mentioned above can suppress its signal.
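To make point A concrete, here is a toy sketch (not NuPIC code; the length constant and the distances are illustrative assumptions on my part) of how a cable-theory-style exponential attenuation, V(x) = V0 * exp(-x / lambda), could scale HTM's otherwise binary synapse output by the distance between two connected cells:

```python
import math

def attenuated_signal(distance_um, length_constant_um=500.0):
    """Toy cable-theory attenuation: V(x) = V0 * exp(-x / lambda).

    `length_constant_um` (the membrane length constant) is an
    illustrative assumption, not a value taken from NuPIC or biology
    textbook tables.
    """
    return math.exp(-distance_um / length_constant_um)

def effective_input(connected, distance_um):
    """Scale HTM's binary synapse output (0 or 1) by distance."""
    return (1.0 if connected else 0.0) * attenuated_signal(distance_um)

# A nearby connected cell contributes almost its full signal...
near = effective_input(True, 50.0)     # ~0.90
# ...while a distant connected cell is strongly attenuated.
far = effective_input(True, 1500.0)    # ~0.05
```

Of course this presupposes that cells have coordinates at all, which is exactly the "physical concept of space" HTM currently does without.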

My first question is: are the first two points (A and B) implemented in NuPIC? Do you reckon it could be useful to increase NuPIC's complexity by also implementing a chemical-synapse "class" with the elements described in point C?


2 - SECOND QUESTION
I'm trying to run a couple of models. This is an extract from an OPF model-params file I created through swarming.

 'model': 'CLA',
 'modelParams': {'anomalyParams': {u'anomalyCacheRecords': None,
                                   u'autoDetectThreshold': None,
                                   u'autoDetectWaitRecords': None},
                 'clParams': {'alpha': 0.06173462582232023,
                              'clVerbosity': 0,
                              'regionName': 'CLAClassifierRegion',
                              'steps': '0'},
                 'inferenceType': 'NontemporalClassification',
                 'sensorParams': {'encoders': {u'DATE_dayOfWeek': None,
                                               u'DATE_timeOfDay': {'fieldname': 'DATE',
                                                                   'name': 'DATE',
                                                                   'timeOfDay': (21, 2.2537623685060675),
                                                                   'type': 'DateEncoder'},
                                               u'DATE_weekend': None,
                                               '_classifierInput': {'classifierOnly': True,
                                                                    'clipInput': True,
                                                                    'fieldname': 'VO',
                                                                    'maxval': 2.0,
                                                                    'minval': 0.0,
                                                                    'n': 449,
                                                                    'name': '_classifierInput',
                                                                    'type': 'ScalarEncoder',
                                                                    'w': 21},
                                               u'o10N_A': None,
                                               u'o11N_A': None,
                                               u'o12N_A': None,
                                               u'o13N_A': None,
                                               u'o14N_A': None,
                                               u'o15N_A': None,
                                               u'o1N_A': None,
                                               u'o1N_B': None,
                                               u'o2N_A': None,
                                               u'o2N_B': None,
                                               u'o3N_A': None,
                                               u'o3N_B': None,
                                               u'o4N_A': None,
                                               u'o4N_B': None,
                                               u'o5N_A': None,
                                               u'o5N_B': None,
                                               u'o6N_A': None,
                                               u'o6N_B': None,
                                               u'o7N_A': None,
                                               u'o7N_B': None,
                                               u'o8N_A': None,
                                               u'o8N_B': None,
                                               u'o9N_A': None,
                                               u'o9N_B': None},
                                  'sensorAutoReset': None,
                                  'verbosity': 0},
                 'spEnable': False,
                 'spParams': {'columnCount': 2048,
                              'globalInhibition': 1,
                              'inputWidth': 0,
                              'maxBoost': 2.0,
                              'numActiveColumnsPerInhArea': 40,
                              'potentialPct': 0.8,
                              'seed': 1956,
                              'spVerbosity': 0,
                              'spatialImp': 'cpp',
                              'synPermActiveInc': 0.05,
                              'synPermConnected': 0.1,
                              'synPermInactiveDec': 0.0005},
                 'tpEnable': False,
                 'tpParams': {'activationThreshold': 16,
                              'cellsPerColumn': 32,
                              'columnCount': 2048,
                              'globalDecay': 0.0,
                              'initialPerm': 0.21,
                              'inputWidth': 2048,
                              'maxAge': 0,
                              'maxSegmentsPerCell': 128,
                              'maxSynapsesPerSegment': 32,
                              'minThreshold': 12,
                              'newSynapseCount': 20,
                              'outputType': 'normal',
                              'pamLength': 1,
                              'permanenceDec': 0.1,
                              'permanenceInc': 0.1,
                              'seed': 1960,
                              'temporalImp': 'cpp',
                              'verbosity': 0},
                 'trainSPNetOnlyIfRequested': False},

If I understood correctly, all the inputs (from o1N_A to o15N_A) were discarded by the swarming process: their encoders are set to None. I've also run a larger swarm, but they are still discarded. Unfortunately, I'm sure that at least a good 60% of them are relevant sensors. How can I improve the swarming? Am I doing something wrong? (The sensors are outputs from low-resolution thoracic electrodes; the predicted field is "VO", which represents the SpO2 level present in the bloodstream at that moment. The idea is to predict oxygen saturation from the respiratory act.)
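In the meantime, one workaround I've been experimenting with (a sketch under the assumption that the discarded fields are scalars in a known range; the minval/maxval/n values below are made up and would have to come from the actual electrode output range) is to re-enable a discarded encoder by hand in the generated model params, bypassing the swarm's field selection:

```python
# Sketch: manually re-enable an encoder that swarming set to None.
# The minval/maxval/n values are illustrative assumptions; they should
# be derived from the real range of the electrode outputs.
MODEL_PARAMS = {
    'model': 'CLA',
    'modelParams': {
        'sensorParams': {
            'encoders': {
                'o1N_A': None,   # discarded by the swarm
            },
        },
    },
}

encoders = MODEL_PARAMS['modelParams']['sensorParams']['encoders']
encoders['o1N_A'] = {
    'fieldname': 'o1N_A',
    'name': 'o1N_A',
    'type': 'ScalarEncoder',
    'minval': 0.0,       # assumed sensor range
    'maxval': 1.0,
    'n': 134,            # total output bits (assumed)
    'w': 21,             # active bits, matching the other encoders
    'clipInput': True,
}

# With nupic installed, the edited params would then be used as usual:
# from nupic.frameworks.opf.modelfactory import ModelFactory
# model = ModelFactory.create(MODEL_PARAMS)
```

I don't know whether hand-tuning the encoders this way is considered good practice, or whether there is a way to tell the swarm that certain fields must be kept.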

Thanks for your replies, and sorry for my English (I'm Italian).

Raf


--
Raf

www.madraf.com/algotrading
reply to: [email protected]
skype: algotrading_madraf
