> Thank you! It is, but for the moment it's not parametrized; I don't want to 
> start building control interfaces before knowing exactly how and where we 
> will deploy this.
>
> The default run option, as Wes designed it, was 'eod', for 'end of day': 
> for the defined time range, run the benchmark on the last commit of every 
> day. I added an option better suited to the continuous-benchmark scenario, 
> 'last', which just runs the last available commit. To get a reasonably 
> long history in a reasonable amount of time, I would need to add something 
> like 'run the last commit every k days'. The problem is that I can run 
> this on my work machine, guaranteeing that no other intensive tasks 
> overlap, but once we deploy elsewhere, the benchmarks would not be 
> comparable.
>
> Maybe what would be nice is to be able to pass an explicit list of 
> commits, so we can add data points for every release. Benchmarks that fail 
> because the estimator does not exist in that release will simply not 
> appear.
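For concreteness, here is a rough sketch of what such a commit-selection
step could look like. The function names, the (sha, timestamp) input
format, and the 'list' mode are hypothetical illustrations, not the
runner's actual interface:

    import datetime

    def last_commit_per_day(commits):
        # `commits` is a list of (sha, timestamp) pairs sorted oldest to
        # newest; a later commit on the same day overwrites the earlier
        # one, leaving the last commit of each day.
        by_day = {}
        for sha, when in commits:
            by_day[when.date()] = sha
        return [by_day[day] for day in sorted(by_day)]

    def select_commits(commits, mode="eod", k=1, explicit=None):
        if mode == "last":
            # continuous-benchmark scenario: only the newest commit
            return [commits[-1][0]]
        if mode == "list":
            # explicit commits, e.g. one per release; a benchmark that
            # fails because the estimator does not exist in a release
            # simply produces no data point
            return list(explicit or [])
        # 'eod' is the k == 1 case; keeping one benchmarked day in k
        # gives the "last commit every k days" variant
        return last_commit_per_day(commits)[::k]

    # Example:
    commits = [
        ("a1b2c3", datetime.datetime(2012, 5, 1, 9, 0)),
        ("d4e5f6", datetime.datetime(2012, 5, 1, 17, 30)),
        ("0789ab", datetime.datetime(2012, 5, 3, 11, 15)),
    ]
    select_commits(commits, mode="eod")  # -> ['d4e5f6', '0789ab']
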

Do whatever seems reasonable given the trade-off between usefulness and
time investment.

It was just a suggestion that might help spot past performance
regressions.

Alex
