This is a great question, and I'm not sure I can answer it. My litmus test
has always been whether the prediction is valuable or marketable.
Basically, does it solve your problem? Does it perhaps solve somebody
else's problem? The value is hard to measure, except in results. HTM is
still very new.

This is one of the major miscommunications between us HTM folks and the
rest of the ML community. They expect measured success in the form of
scientific proofs or benchmarks, competing to solve ever more interesting
problems. It is a noble effort and well worth it. I have the utmost respect
for our AI forefathers. But what we and Numenta are doing is different. We
started with the brain, at the cellular, synaptic level. At its core, this is
an intelligence architecture pulled straight from the neocortex. Our
primary goal was to replicate this intelligence in software, and it's going
pretty well so far! :)

So anyway I don't know the answer to your question :P
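That said, the calculation you describe below is simple enough to sketch. Here's a rough, hypothetical Python version (the `evaluate` function and the sample data are my own invention, nothing from NuPIC): it computes the mean of the absolute differences, the standard deviation of those differences, and the fraction of points whose difference lands outside [mean - sd, mean + sd].

```python
def evaluate(actual, predicted):
    """Sketch of the proposed metric: mean absolute difference,
    standard deviation of those differences, and the fraction of
    points falling outside [mean - sd, mean + sd]."""
    errors = [abs(a - p) for a, p in zip(actual, predicted)]
    n = len(errors)
    mean = sum(errors) / n
    # population standard deviation of the absolute differences
    sd = (sum((e - mean) ** 2 for e in errors) / n) ** 0.5
    # count differences above mean + sd or below mean - sd
    outliers = sum(1 for e in errors if e > mean + sd or e < mean - sd)
    return mean, sd, outliers / n

# made-up numbers, just to show the shape of the result
actual = [10.0, 12.0, 15.0, 11.0, 9.0]
predicted = [11.0, 12.5, 13.0, 11.5, 9.5]
mae, sd, outlier_frac = evaluate(actual, predicted)
print(mae, sd, outlier_frac)
```

Under your proposal, a lower mean and a smaller outlier fraction would indicate the tighter prediction, if I'm reading it right.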

Regards,


---------
Matt Taylor
OS Community Flag-Bearer
Numenta

On Tue, Jan 12, 2016 at 5:19 PM, Wakan Tanka <[email protected]> wrote:

> Hello NuPIC,
>
> How do you evaluate the correctness and accuracy of a prediction? Or if you
> have multiple predictions for the same data, how do you compare which
> prediction was more accurate? I've seen that there is NAB [1], but to be
> honest I have not dug into it deeply, so I do not know whether it might
> help or not. AFAIK, when you want to do such things, correlation should
> work fine, in this case correlation between the original and predicted
> data. But correlation only works when you have linear data; it would not
> work e.g. on the hotgym example, where you have repeating cycles, peaks,
> maybe random events on particular days, etc. So my intuitive approach was
> to calculate the absolute difference [2] of the original and predicted
> value and then calculate the mean of those values. The lower the mean is,
> the better the prediction is. Then I realized that there is the standard
> deviation [3], which can be calculated from those absolute differences. The
> next step would be to pick all values whose absolute difference between the
> original and predicted value is:
> 1. above mean + standard deviation
> 2. below mean - standard deviation
>
> This should give me an overview of how many values fall in this interval
> and how many do not. The dataset where more values fall within the
> interval is the dataset with the better prediction.
>
> Does this make sense?
>
>
>
>
> [1]
> http://numenta.com/blog/nab-a-benchmark-for-streaming-anomaly-detection.html
> [2] https://en.wikipedia.org/wiki/Absolute_difference
> [3] http://www.mathsisfun.com/data/standard-deviation.html
>
>
