Awesome work, Matt! Learning about NuPIC configuration is going to be a high priority for me. I'm going to study this!
Cheers,
David

On Tue, May 3, 2016 at 12:14 PM, Matthew Taylor <[email protected]> wrote:

> Oh yeah, to run my example, go into the "part-1-scalar-input" directory,
> then:
>
> python run_prediction.py data/fives-and-sixes.csv
> python plot.py out/prediction_fives-and-sixes.csv
>
> You must have a Plot.ly account to do the plotting, but you can just look
> at the output file and see that the predictions are accurate.
>
> ---------
> Matt Taylor
> OS Community Flag-Bearer
> Numenta
>
> On Tue, May 3, 2016 at 10:13 AM, Matthew Taylor <[email protected]> wrote:
>
>> It took me a while to get back to this, but I have some news at least. :)
>>
>> I looked at your example code but was a bit confused, so I modified an
>> existing code sample I have to do predictions on your "5s and 6s" data
>> set. See:
>>
>> https://github.com/numenta/nupic.workshop/tree/fives-and-sixes/part-1-scalar-input
>>
>> The resulting predictions match perfectly:
>> https://plot.ly/~rhyolight/301/just-some-data/
>>
>> In particular, see the model params I used:
>> https://github.com/numenta/nupic.workshop/blob/fives-and-sixes/part-1-scalar-input/model_params/model_params_fives_sixes.json
>> Also, this bit deriving the RDSE "resolution" from the min/max might be
>> what was missing from the previous example I gave you:
>> https://github.com/numenta/nupic.workshop/blob/fives-and-sixes/part-1-scalar-input/run_prediction.py#L36-L41
>>
>> I hope that helps!
>>
>> ---------
>> Matt Taylor
>> OS Community Flag-Bearer
>> Numenta
>>
>> On Thu, Apr 28, 2016 at 7:41 AM, Alexandre Vivmond <[email protected]>
>> wrote:
>>
>>> I appreciate that you're going the extra mile here in helping me out.
>>> I'll try to keep it short, then. I've run two swarms.
>>>
>>> -- The first setup --
>>> Swarm size: medium
>>> Input data size: 20000 lines
>>> "last_record": 3000
>>> "maxValue": 6.0
>>> "minValue": 5.0
>>>
>>> Once the swarm had run its course, I ran the OPF with the swarm's
>>> generated model_params.py file. The output file showed that HTM
>>> struggles to learn the pattern 5,5,5,5,5,5,5,5,5,5,6,5,5,...,
>>> predicting the 6 seemingly at random.
>>>
>>> -- The second setup --
>>> Same as above, except that I followed your previous advice and used a
>>> RandomDistributedScalarEncoder instead of the regular ScalarEncoder.
>>> Again, the output file showed pretty much the same thing as the
>>> previous setup.
>>>
>>> If you want to double-check for yourself, I have attached all the
>>> files you would need to test it.
>>> All I really want is to be sure that my setup is not wrong, and that
>>> NuPIC's results really show that the pattern above truly is hard for
>>> HTM to learn.

--
*With kind regards,*
David Ray
Java Solutions Architect
*Cortical.io <http://cortical.io/>*
Sponsor of: HTM.java <https://github.com/numenta/htm.java>
[email protected]
http://cortical.io
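P.S. For anyone following along, the min/max-based resolution derivation Matt links to (run_prediction.py#L36-L41) can be sketched roughly like this in plain Python. The bucket count of 130 and the 0.001 floor are assumptions on my part, so check the linked lines for the actual numbers:

```python
def rdse_resolution(min_value, max_value, num_buckets=130):
    """Derive a RandomDistributedScalarEncoder resolution from the
    observed value range.

    Spreads the range across num_buckets encoder buckets, with a small
    floor so the resolution never collapses to zero on constant data.
    (num_buckets=130 and the 0.001 floor are assumed defaults.)
    """
    return max(0.001, (max_value - min_value) / num_buckets)

# For the fives-and-sixes data (minValue=5.0, maxValue=6.0):
resolution = rdse_resolution(5.0, 6.0)  # about 0.0077
```

A larger resolution makes nearby values share encodings, so a resolution that is too coarse for the data's range could blur 5.0 and 6.0 together; deriving it from the observed min/max guards against that.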

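One more note for anyone trying to reproduce this: the "5s and 6s" input itself is easy to regenerate. A minimal sketch, assuming (from the "5,5,5,5,5,5,5,5,5,5,6,5,5,..." excerpt in Alexandre's message) a single 6 after every ten 5s:

```python
def fives_and_sixes(n_rows, period=11):
    # Repeat the pattern from the thread: ten 5s followed by a single 6,
    # i.e. the value at every `period`-th position is 6.0.
    return [6.0 if (i + 1) % period == 0 else 5.0 for i in range(n_rows)]

rows = fives_and_sixes(20000)  # same length as the swarm's input data
# rows[:11] -> [5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 6.0]
```

The period of 11 is my reading of the excerpt; if the real data/fives-and-sixes.csv spaces its 6s differently, adjust `period` to match.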