Oh wow, thanks for the link! I just skimmed over the code, but this is an 
interesting idea and looks like the sort of thing that would make my life 
easier in the future. I will dig into that! That’s great, thanks!
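For what it’s worth, a minimal sketch of the idea Sebastian describes — persisting just the learned weights of a linear model to HDF5 and reconstructing predictions later with plain linear algebra, independent of the scikit-learn version that trained it. This uses h5py directly rather than MNE’s helper, and the file name and dataset keys are just illustrative choices:

```python
import numpy as np
import h5py
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Fit a simple linear model; only its weights matter for persistence.
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Persist just the learned parameters -- no pickling involved.
with h5py.File("weights.h5", "w") as f:
    f.create_dataset("coef", data=clf.coef_)
    f.create_dataset("intercept", data=clf.intercept_)
    f.create_dataset("classes", data=clf.classes_)

# Years later: reload and predict without depending on the original
# estimator object at all.
with h5py.File("weights.h5", "r") as f:
    coef = f["coef"][...]
    intercept = f["intercept"][...]
    classes = f["classes"][...]

scores = X @ coef.T + intercept           # decision function
pred = classes[np.argmax(scores, axis=1)]  # pick the highest-scoring class
```

The reconstructed `pred` should match `clf.predict(X)`, since prediction for a fitted logistic regression is just an argmax over the linear decision scores.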


> On Aug 19, 2015, at 12:58 AM, Stefan van der Walt <stef...@berkeley.edu> 
> wrote:
> 
> On 2015-08-18 21:37:41, Sebastian Raschka <se.rasc...@gmail.com> 
> wrote:
>> I think for “simple” linear models, it would not be a bad idea 
>> to save the weight coefficients in a log file or so. Here, your 
>> model is really not that dependent on changes in the 
>> scikit-learn code base (for example, imagine that you trained a 
>> model 10 years ago, published the results in a research paper, 
>> and today someone asks you about this model). Since you know 
>> how logistic regression, SVMs, etc. work, in the worst case you 
>> can just use those weights to make predictions on new data — I 
>> think in a typical “model persistence” case you don’t “update” 
>> your model anyway, so “efficiency” would not be that big of a 
>> deal in a typical “worst case” scenario.
> 
> Agreed—this is exactly the type of use case I want to support. 
> Pickling won't work here, but using HDF5 like MNE does would 
> probably be close to ideal (thanks to Chris Holdgraf for the 
> heads-up):
> 
> https://github.com/mne-tools/mne-python/blob/master/mne/_hdf5.py
> 
> Stéfan
> 
> ------------------------------------------------------------------------------
> _______________________________________________
> Scikit-learn-general mailing list
> Scikit-learn-general@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/scikit-learn-general

