Maybe what would be nice is to be able to pass a list of commits, so we can add
data points for every release. Benchmarks that fail because the estimator does
not exist in that release will simply not appear.
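Roughly, I imagine something along these lines (just a sketch in current Python;
run_benchmark.py and the release tags are made-up placeholders, the real setup
would reuse whatever benchmark driver we already have): check out each revision,
run the benchmark, and silently drop revisions where it fails.

    import subprocess

    # Hypothetical list of release tags to benchmark against.
    REVISIONS = ["0.9", "0.10", "0.11"]

    results = {}
    for rev in REVISIONS:
        # Switch the working tree to the given release.
        subprocess.run(["git", "checkout", rev], check=True)
        try:
            # run_benchmark.py stands in for whatever script times the estimator
            # and prints a single timing to stdout.
            out = subprocess.run(
                ["python", "run_benchmark.py"],
                check=True, capture_output=True, text=True,
            )
            results[rev] = float(out.stdout.strip())
        except (subprocess.CalledProcessError, ValueError):
            # Benchmark failed for this revision (e.g. the estimator does not
            # exist yet), so it simply contributes no data point.
            pass

    print(results)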

> do whatever is reasonable, with a trade-off between usefulness and
> time investment.
>
> it was just a suggestion to possibly spot some performance
> regressions in the past.
>
I agree that this would be very useful to have, but it might not be so
trivial, depending on how far back you want to go. Many estimators changed
their behavior/interface over time. Take for example the scale_C change:
the resulting graph might end up looking very weird, since changing C
really impacts the runtime.
And then there was the scikits.learn -> sklearn migration...

I guess, as Alex said, whatever seems reasonable ;)

btw: great job Vlad :)
