Hi there, wondering if anyone has had a chance to look at this issue? (I cannot
reproduce the results from an example involving multiple time series.)
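
In the meantime, following Subutai's point below about natural run-to-run
variation in the swarm, my plan is to repeat the swarm a few times and see how
much the best altMAPE moves between runs. A rough sketch of what I mean is
below; it just re-runs the same command from my original mail and assumes
run_swarm.py is on the PATH:

import subprocess

# Repeat the same swarm; if the best altMAPE differs a lot between runs,
# the gap relative to the readme may just be swarm-to-swarm noise.
for i in range(3):
    print("--- swarm run %d ---" % (i + 1))
    subprocess.check_call(
        ["run_swarm.py", "multi1_search_def.json",
         "--overwrite", "--maxWorkers", "5"])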

On Thu, Oct 9, 2014 at 5:18 PM, John Blackburn <john.blackbur...@gmail.com>
wrote:

> OK, thanks for getting back to me. Please let me know how you get on.
> This may be a discrepancy between Grok and NuPIC, or perhaps just a
> regression?
>
> On Thu, Oct 9, 2014 at 10:02 AM, Subutai Ahmad <subu...@numenta.org>
> wrote:
>
>>
>> Hmm, I haven't run that in a while but I hope nothing significant has
>> changed in NuPIC.  There is some natural variation in the swarm algorithm
>> from run to run but it shouldn't be that large.
>>
>> Unfortunately I am out of the country on vacation for 5 more days with
>> limited email access. I probably won't be able to look at it until next
>> week sometime.  Hope that's ok.
>>
>> --Subutai
>>
>> On Wed, Oct 8, 2014 at 10:28 AM, John Blackburn <
>> john.blackbur...@gmail.com> wrote:
>>
>>> Dear Subutai
>>>
>>> I tried to run your "multiple fields example 1" from
>>>
>>> https://github.com/subutai/nupic.subutai/tree/master/swarm_examples
>>>
>>> I ran the command
>>>
>>> run_swarm.py multi1_search_def.json --overwrite --maxWorkers 5
>>>
>>> using the supplied JSON file and "run_swarm.py" from the "scripts"
>>> directory. I got the result:
>>>
>>> Field Contributions:
>>> {   u'metric1': 0.0,
>>>     u'metric2': 20.0598347434741,
>>>     u'metric3': -63.85677190034707,
>>>     u'metric4': -157.77883953004587,
>>>     u'metric5': -153.23706619032606}
>>>
>>> Best results on the optimization metric
>>> multiStepBestPredictions:multiStep:errorMetric='altMAPE':steps=[1]:window=1000:field=metric1
>>> (maximize=False):
>>> [41] Experiment _NupicModelInfo(jobID=1062, modelID=4815,
>>> status=completed, completionReason=eof, updateCounter=22, numRecords=1500)
>>> (modelParams|clParams|alpha_0.055045.modelParams|tpParams|minThreshold_11.modelParams|tpParams|activationThreshold_14.modelParams|tpParams|pamLength_3.modelParams|sensorParams|encoders|metric2:n_296.modelParams|sensorParams|encoders|metric1:n_307.modelParams|spParams|synPermInactiveDec_0.055135):
>>>
>>> multiStepBestPredictions:multiStep:errorMetric='altMAPE':steps=[1]:window=1000:field=metric1:
>>> 1.57090277774
>>>
>>> So the error was only slightly improved, to 1.57 (altMAPE), compared to
>>> the "basic swarm with one field".
>>>
>>> Now in the readme file, you stated you got the result:
>>>
>>> Best results on the optimization metric
>>> multiStepBestPredictions:multiStep:errorMetric='altMAPE':steps=[1]:window=1000:field=metric1
>>> (maximize=False): [52] Experiment _GrokModelInfo(jobID=1161, modelID=23650,
>>> status=completed, completionReason=eof, updateCounter=22, numRecords=1500)
>>> (modelParams|clParams|alpha_0.0248715879513.modelParams|tpParams|minThreshold_10.modelParams|tpParams|activationThreshold_13.modelParams|tpParams|pamLength_2.modelParams|sensorParams|encoders|metric2:n_271.modelParams|sensorParams|encoders|metric1:n_392.modelParams|spParams|synPermInactiveDec_0.0727958344423):
>>> multiStepBestPredictions:multiStep:errorMetric='altMAPE':steps=[1]:window=1000:field=metric1:
>>> 0.886040768868
>>>
>>> Field Contributions:
>>> {   u'metric1': 0.0,
>>>     u'metric2': 54.62889798318686,
>>>     u'metric3': -23.71223053273957,
>>>     u'metric4': -91.68162623355796,
>>>     u'metric5': -25.51553640787998}
>>>
>>> This gives a considerable improvement, to 0.886 (altMAPE). Note that in
>>> "Field Contributions" you get a 54.6% improvement from metric2, while in
>>> my run I only got a 20.05% improvement.
>>>
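>>> (For my own sanity: my understanding is that each field contribution is
>>> roughly the percentage reduction in altMAPE when that field is added on top
>>> of the predicted field alone. The sketch below is just that arithmetic with
>>> made-up numbers, not values from either of our runs.)
>>>
>>> # Hypothetical figures, only to illustrate how I read the contribution:
>>> base_error = 2.0      # altMAPE with the predicted field (metric1) alone
>>> with_metric2 = 1.6    # altMAPE after adding metric2
>>> contribution = 100.0 * (base_error - with_metric2) / base_error
>>> print(contribution)   # 20.0, i.e. a 20% contribution from metric2
>>>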
>>> Can we explain this discrepancy? I think I ran your code exactly as
>>> supplied. It matters because it suggests my NuPIC is not working as well
>>> with multiple fields as yours is, which is a particular concern for the
>>> bridge project I keep going on about! I also notice that your output refers
>>> to _GrokModelInfo, while mine refers to _NupicModelInfo.
>>>
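>>> (To help rule out a simple version mismatch on my side, I can report
>>> exactly which NuPIC checkout my Python is importing; a quick way to check
>>> is the snippet below, and I can then run "git rev-parse HEAD" in that
>>> checkout to get the commit.)
>>>
>>> import nupic
>>> print(nupic.__file__)  # path of the nupic package actually being imported
>>>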
>>> John.
>>>
>>
>
