If you are using the OPF [1], it is super easy.

    model.save(pathToDirectory) [2]

and to resurrect a saved model from a file path:

    ModelFactory.loadFromCheckpoint(pathToDirectory) [3]

This is a very useful tactic. We call it "model swapping". We have a
couple of applications written at Numenta that do this on our servers
in order to run hundreds of models at once.
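To tie this to Tom's cron-job scenario below, here is a minimal sketch of
the resume-or-create pattern: check whether a checkpoint directory exists,
load from it if so, otherwise build a fresh model, then run the new records
and save the state back out. `get_model` and `process_batch` are
hypothetical helper names of mine, not part of the OPF; only
`ModelFactory.loadFromCheckpoint`, `model.run`, and `model.save` come from
the docs linked above.

```python
import os

def get_model(checkpoint_dir, create_model):
    """Return a model resumed from checkpoint_dir if it exists,
    otherwise a fresh one from create_model().

    create_model is a zero-argument callable you supply (hypothetical
    helper; e.g. a wrapper around ModelFactory.create with your model
    params).
    """
    if os.path.isdir(checkpoint_dir):
        # NuPIC import is deferred so the sketch can be exercised
        # even on a machine without NuPIC installed.
        from nupic.frameworks.opf.modelfactory import ModelFactory
        return ModelFactory.loadFromCheckpoint(checkpoint_dir)
    return create_model()

def process_batch(model, records, checkpoint_dir):
    """Feed the newly arrived records through the model, then persist
    its state so the next cron invocation can pick up where we left off."""
    results = [model.run(record) for record in records]
    model.save(checkpoint_dir)
    return results
```

A cron job would then just call `get_model(...)`, read whatever records
arrived since the last run, and call `process_batch(...)` before exiting.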

[1] https://github.com/numenta/nupic/wiki/Online-Prediction-Framework
[2] http://numenta.org/docs/nupic/classnupic_1_1frameworks_1_1opf_1_1model_1_1_model.html#aba0970ece8740693d3b82e656500a9c0
[3] http://numenta.org/docs/nupic/classnupic_1_1frameworks_1_1opf_1_1modelfactory_1_1_model_factory.html#a73b1a13824e1990bd42deb288a594583
---------
Matt Taylor
OS Community Flag-Bearer
Numenta


On Tue, May 19, 2015 at 2:58 PM, Tom Tan <[email protected]> wrote:
> Hi Matt and All,
>
> Our data stream comes in once an hour or once every 30 minutes. We can't
> keep the program running the CLA in memory. Instead, we will need to use a
> cron job to read the data, invoke the CLA, make predictions, and generate
> anomaly scores. My understanding is that we will have to serialize the CLA
> state after the previous run and deserialize it before the next run. Could
> you point me to examples/docs for serialization?
>
>
> To extend the topic a little further: if our data stream is down for a
> certain period of time, do we need to discard the serialized state and
> start anew when the data stream starts again?
>
> Regards,
> Tom
>
>