> Don't you think that I could also benchmark models that are not
> implemented in sklearn? For instance, I could write a wrapper
> DeepNet(...) with fit() and predict() that internally uses Theano
> to build an ANN. In this way, I could benchmark complex deep networks
> beyond what will be possible with the new sklearn ANN module.
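
(For reference: such a wrapper only needs to follow the scikit-learn
estimator interface. Below is a minimal, purely illustrative sketch; the
DeepNet name comes from the question above, its parameters are hypothetical,
the Theano internals are elided, and a trivial nearest-centroid rule stands
in for the real network so the sketch stays runnable.)

    import numpy as np
    from sklearn.base import BaseEstimator, ClassifierMixin

    class DeepNet(BaseEstimator, ClassifierMixin):
        """Hypothetical wrapper exposing a Theano-built ANN through the
        scikit-learn estimator interface (illustrative stand-in only)."""

        def __init__(self, n_hidden=50, learning_rate=0.01):
            # Hyperparameters stored as attributes so that
            # get_params()/set_params() from BaseEstimator (and hence
            # GridSearchCV) can read and modify them.
            self.n_hidden = n_hidden
            self.learning_rate = learning_rate

        def fit(self, X, y):
            # A real implementation would build and train the Theano
            # graph here, using self.n_hidden and self.learning_rate.
            self.classes_ = np.unique(y)
            self.centroids_ = np.array([X[y == c].mean(axis=0)
                                        for c in self.classes_])
            return self

        def predict(self, X):
            # Placeholder decision rule: assign each sample to the
            # class with the nearest centroid.
            dists = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=-1)
            return self.classes_[dists.argmin(axis=1)]

Any object exposing fit()/predict() plus the get_params()/set_params()
machinery inherited from BaseEstimator can be passed to cross_val_score or
GridSearchCV, without scikit-learn knowing anything about Theano.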

I am personally less interested in that. We already have a lot in
scikit-learn, and more than enough to test the model selection code. The
focus should be on providing code that is readily usable.

I am worried that such a task will be very time-consuming and will not move
us much closer to code that improves model selection in scikit-learn.

Gaël

