A perfect prediction would predict the **next** value perfectly, not
the current value.

Basically, what I was trying to say is that when NuPIC is entirely
confused, it guesses that the next value in the data will be the same
as the last value it saw. So if you pass in "5" as the value, it might
return a predicted value of "5". With changing data this is fairly
obvious, because you can easily see it when plotted.

When evaluating prediction models, the bottom-line way of measuring
performance is to compare the predicted values NuPIC is returning to
the actual values in the data *when they occur*. So if you are
predicting 10 steps ahead, you won't know how good the predictions
really are until you get 10 steps into the future. But at that point
it's fairly easy to compare them and see how far off NuPIC is.
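To make that concrete, here is a minimal sketch of the comparison (the
function name and setup are mine, not from the hot gym example): each
k-step-ahead prediction is held until the actual value arrives k steps
later, then scored.

```python
from collections import deque

def k_step_errors(values, predictions, k):
    """Compare each k-step-ahead prediction to the actual value
    observed k steps later; return the absolute errors."""
    pending = deque()  # predictions still waiting for their actual value
    errors = []
    for actual, predicted in zip(values, predictions):
        if len(pending) == k:
            # The prediction made k steps ago was for *this* timestep.
            errors.append(abs(actual - pending.popleft()))
        pending.append(predicted)
    return errors
```

Note that when NuPIC is just repeating its input, prediction[t] equals
values[t], so the one-step errors collapse to |values[t+1] - values[t]|,
i.e. the step sizes of your data, which is another way to spot the
repeat-the-last-value behavior numerically.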

I generally use the inference shifter anytime I am plotting results.
This ensures that the data is lined up properly in time, so that at
each timestamp the actual value and the value predicted *for* that
timestamp are plotted on the same vertical line. If you are plotting,
I suggest you use the inference shifter as well; it may well clear up
your confusion about this issue. Your charts will NOT be aligned
correctly if you don't use it.
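The alignment itself is simple. A hand-rolled stand-in for what the
shifter does to the predicted series (the function below is my sketch,
not NuPIC's actual InferenceShifter API) looks like this:

```python
def shift_predictions(predictions, steps=1):
    """Align k-step-ahead predictions with the timestamps at which
    the predicted values actually occur, for plotting.

    The prediction emitted at time t is *for* time t + steps, so it
    should be plotted at index t + steps. The first `steps` slots
    have no prediction yet and are padded with None.
    """
    return [None] * steps + predictions[:len(predictions) - steps]
```

Plot the actual values and the shifted predictions on the same x-axis.
With this alignment, a model that merely repeats its last input shows
up as a prediction line trailing the actual line by exactly one step.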

I don't know what your PPPS comment means. :/ NuPIC will predict
repeating patterns in the sequence if it has seen them enough times.

Regards,

---------
Matt Taylor
OS Community Flag-Bearer
Numenta


On Mon, Nov 2, 2015 at 1:57 AM, Wakan Tanka <[email protected]> wrote:
> Hello Matt,
>
> On 11/02/2015 06:16 AM, Matthew Taylor wrote:
>>
>> Hello,
>>
>> Generally, you can tell when NuPIC is totally confused about input
>> data when it simply repeats the last value it saw as the prediction.
>> This is pretty easy to see in a graph, because (assuming you used the
>> inference shifter) the prediction line is trailing the actual value
>> line by one step.
>
> OK, but how can I be sure that it is not a perfect prediction but
> rather repetition?
>
>
>>
>> I don't quite understand the rest of your question. When NuPIC just
>> repeats the last line, it will not be correct as the input data
>> changes.
>
> Now I do not understand you. I was thinking about your video "One Hot Gym
> Prediction Tutorial": around 45:00 you made a mistake in the code and NuPIC
> just repeated values. Around 48:00 you fixed the code mistake and then NuPIC
> started to predict values. In your data it is obvious when NuPIC started to
> make predictions and when it was repeating, but what if I have data in which
> this is not so obvious? How can I know if NuPIC is predicting?
>
>
> PS:
> What do you mean by:
> "When NuPIC just repeats the last line, it will not be correct as then input
> data changes."
>
> PPS:
> I did not use the inference shifter. I was predicting one step ahead and it
> was pretty clear from the graph. I assumed that the inference shifter is
> useful with larger steps.
>
> PPPS:
> Just for the sake of clarity, is it not possible for NuPIC to repeat, then
> predict, and then repeat again? NuPIC repeats and then predicts, am I correct?
>
>
>> You can also look into using the MetricsManager (see the hot
>> gym source code [1][2]) to get some error metrics out of the results.
>>
>> [1]
>> https://github.com/numenta/nupic/blob/master/examples/opf/clients/hotgym/prediction/one_gym/run.py#L53-L66
>> [2]
>> https://github.com/numenta/nupic/blob/master/examples/opf/clients/hotgym/prediction/one_gym/run.py#L103-L122
>
>
> Thank you for that, I will definitely check out those metrics. Where can I
> find further info about this?
>
> Thank you very much.
>
