On Sun, 20 Jul 2003 05:10:44 -0700, Greg Heath wrote:

> The combined terminology XVAL & ROC could imply one
> of several approaches including (I'll assume 10-fold XVAL):
> 1. 10-fold XVAL yields 10 designs and 10 associated
> validation set ROCs w.r.t. varying a single parameter
> (see the definition of ROC in my previous posts).
> The "best" design is chosen based on its validation
> set ROC (area under portion of the curve?).

The complaint here is that I do not want to choose one best design for
the whole curve, since on some portions of the curve one design may be
better than another. For example, one design may rise quickly to a 75%
true positive rate and stay there, while another may rise more slowly
but reach a 95% TPR.

> 2. 10-fold XVAL yields 10 designs that are combined
> into an ensemble that weights outputs or decisions
> (usually in a linear or log-linear combination).
> The weights are determined from the 10 individual
> validation set ROCs to minimize whatever criterion
> is used above in option 1. A final ROC for the
> ensemble is obtained using either all of the design
> and validation data or from independent test set
> data that was not used for design or validation.

You could describe it this way. However, the combination would select
one of the designs based on, for example, the requested false positive
rate. The complaint here is not what you described, but deciding what
the next step should be: I could use either bootstrapping or
cross-validation to estimate the error of this complete procedure, but
I was wondering whether any of this is valid and/or makes sense.
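To make the selection step concrete, here is a minimal sketch in
Python (the names tpr_at and pick_design are made up, not from any
library). It assumes each design's validation ROC is already available
as a pair of (FPR, TPR) arrays sorted by increasing FPR, and simply
returns the design with the highest interpolated TPR at the requested
false positive rate:

    import numpy as np

    def tpr_at(fpr, tpr, requested_fpr):
        """Interpolate a design's TPR at the requested FPR."""
        return np.interp(requested_fpr, fpr, tpr)

    def pick_design(roc_curves, requested_fpr):
        """Index and TPR of the best design at the requested FPR."""
        tprs = [tpr_at(f, t, requested_fpr) for f, t in roc_curves]
        best = int(np.argmax(tprs))
        return best, tprs[best]

    # Two made-up curves matching the example above: design A
    # saturates at a 0.75 TPR, design B climbs slowly to 0.95.
    roc_a = (np.array([0.0, 0.05, 0.2, 1.0]),
             np.array([0.0, 0.70, 0.75, 0.75]))
    roc_b = (np.array([0.0, 0.10, 0.4, 1.0]),
             np.array([0.0, 0.40, 0.80, 0.95]))
    print(pick_design([roc_a, roc_b], 0.05))  # -> (0, 0.70): A wins
    print(pick_design([roc_a, roc_b], 0.40))  # -> (1, 0.80): B wins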
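As for the "next step", one hedged sketch of how the error of the
complete procedure could be estimated: repeat the whole thing (fit the
candidate designs, build their ROCs, pick one at the requested FPR)
inside an outer cross-validation loop and score only held-out data.
Everything below is illustrative; the two scikit-learn models are just
stand-ins for the cross-validated designs, and pick_design comes from
the sketch above:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_curve
    from sklearn.model_selection import StratifiedKFold
    from sklearn.naive_bayes import GaussianNB

    def run_procedure(X_tr, y_tr, X_te, y_te, requested_fpr=0.1):
        """Fit candidates, select at requested_fpr from the training
        ROCs, then report the pick's TPR on the held-out fold."""
        designs = [LogisticRegression(max_iter=1000), GaussianNB()]
        curves = []
        for d in designs:
            d.fit(X_tr, y_tr)
            f, t, _ = roc_curve(y_tr, d.predict_proba(X_tr)[:, 1])
            curves.append((f, t))
        best, _ = pick_design(curves, requested_fpr)  # sketch above
        f, t, _ = roc_curve(y_te,
                            designs[best].predict_proba(X_te)[:, 1])
        return np.interp(requested_fpr, f, t)

    X, y = make_classification(n_samples=500, random_state=0)
    outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = [run_procedure(X[tr], y[tr], X[te], y[te])
              for tr, te in outer.split(X, y)]
    print(np.mean(scores), np.std(scores))

A bootstrap variant would simply replace the outer StratifiedKFold
split with resampled training sets.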
> These results support the bootstrapping suggestion
> of Frank Harrell in a previous post.

I'm not that fond of bootstrapping yet; I know it does well quite
often, but sometimes one flavour does very well while in other cases
it performs worse than the others.

Regards,

Koen
