On 02/19/2015 09:51 PM, Joel Nothman wrote:
> Ties within a confidence interval happen in practice and it could be 
> nice to have grid search use a model complexity criterion to select 
> between insignificantly different top performers. But I think this is 
> separate to the notion of scorer. It relies on custom logic beyond 
> argmax to select the best parameters. The current design of 
> GridSearchCV is not particularly suitable for extending in this way.
>
Capturing model complexity is exactly what scorers are for, right?
Well, I guess there is another step, which would be defining model 
complexity in a consistent way. For a single model class it is easy, though.
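For instance, a scorer that trades validation accuracy against the number 
of nonzero coefficients could look roughly like this (just a sketch; the 
0.01 penalty weight and the restriction to linear models exposing coef_ 
are arbitrary choices for illustration):

    import numpy as np
    from sklearn.metrics import accuracy_score

    def sparsity_penalized_scorer(estimator, X, y):
        # accuracy minus a small penalty per nonzero coefficient;
        # only meaningful for linear models that expose coef_
        acc = accuracy_score(y, estimator.predict(X))
        n_nonzero = np.sum(estimator.coef_ != 0)
        return acc - 0.01 * n_nonzero

That can be passed as scoring=sparsity_penalized_scorer to GridSearchCV, 
since any callable with the (estimator, X, y) signature is accepted.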
We don't do anything about confidence intervals, which is another story; 
there we really would need to do more than argmax.
I know Gael thinks we already do too much in GridSearchCV ;)
Actually, I would disagree with that. It is mostly a call to 
_fit_and_score, which itself is not that complex.
Yesterday two people asked me about methods other than argmax for 
selecting the best parameter setting.
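One alternative would be a "one standard error" rule: among all settings 
whose mean score is within one standard error of the best, pick the 
simplest one. A rough sketch against the current grid_scores_ attribute 
(which parameter counts as "complexity", here max_depth, is model-specific 
and just an assumption):

    import numpy as np

    def pick_within_one_se(grid_search, complexity_key='max_depth'):
        # mean and per-fold std of the validation scores per setting
        means = np.array([np.mean(s.cv_validation_scores)
                          for s in grid_search.grid_scores_])
        stds = np.array([np.std(s.cv_validation_scores)
                         for s in grid_search.grid_scores_])
        n_folds = len(grid_search.grid_scores_[0].cv_validation_scores)
        best = np.argmax(means)
        # keep every setting within one standard error of the best mean
        threshold = means[best] - stds[best] / np.sqrt(n_folds)
        candidates = [s for s, m in zip(grid_search.grid_scores_, means)
                      if m >= threshold]
        # among those, return the "simplest" parameter setting
        return min(candidates,
                   key=lambda s: s.parameters[complexity_key]).parameters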
