hi,

I am looking for advice on how to pick a classifier among n competing
classifiers when they are evaluated on more than one training/test data
set. That is, I would like to compare, for each classifier, the set of ROC
curves generated from the different training/test splits. Is there an
established way of doing this?
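To make the setup concrete, here is a minimal sketch of the kind of comparison I have in mind, using scikit-learn (the dataset and the two candidate classifiers are just placeholders): each classifier is scored with ROC AUC on the same cross-validation splits, so the per-split scores are paired and directly comparable.

```python
# Sketch only: score each candidate classifier with ROC AUC on identical
# CV splits, then compare the per-split score distributions. The dataset
# and classifiers below are placeholders, not a recommendation.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Fixed splits so every classifier sees the same train/test partitions.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
}

# One AUC value per split for each classifier.
scores = {
    name: cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    for name, clf in candidates.items()
}

for name, s in scores.items():
    print(f"{name}: mean AUC = {s.mean():.3f} +/- {s.std():.3f}")

# Naive selection rule: highest mean AUC across splits.
best = max(scores, key=lambda name: scores[name].mean())
```

Comparing mean AUC like this is of course not a statistical test; my question is whether there is a more principled, established procedure for the same comparison.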

Mathieu
-- 
Mathieu Lacage <[email protected]>
_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general