Dear List,
I have developed two models I want to use to predict a response: one with a
binary response and one with an ordinal response.
My original plan was to divide the data into test (300 entries) and training
(1000 entries) sets and check the performance of each model by looking at the % correct.
Split-sample validation is highly unstable with your sample size.
The rms package can help with bootstrapping or cross-validation, assuming
you have all modeling steps repeated for each resample.
Frank
-
Frank Harrell
Department of Biostatistics, Vanderbilt University
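[A minimal sketch of the approach Frank describes, using the rms package. The data, variable names, and model formula below are hypothetical placeholders, not the poster's actual model.]

```r
library(rms)

# Hypothetical data: binary outcome y, two placeholder predictors
set.seed(1)
d <- data.frame(x1 = rnorm(1300), x2 = rnorm(1300))
d$y <- rbinom(1300, 1, plogis(0.5 * d$x1 - 0.3 * d$x2))

# x=TRUE, y=TRUE stores the design matrix and response so that
# validate() can repeat the full fit in every bootstrap resample
fit <- lrm(y ~ x1 + x2, data = d, x = TRUE, y = TRUE)

# 200 bootstrap resamples; reports optimism-corrected indexes
# (Dxy, R2, Brier score B, calibration slope, ...)
validate(fit, method = "boot", B = 200)
```

Note that any data-driven modeling steps (e.g. backward variable selection via the bw = TRUE option of validate()) need to be repeated inside each resample, as Frank says, or the corrected indexes will still be optimistic.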
Thanks for this,
I had used

validate(model0, method = "boot", B = 200)

to get an index.corrected Brier score.
However I am also wanting to bootstrap the predicted probabilities output from

predict(model1, type = "response")

to get an idea of confidence, or am I best just using se.fit = TRUE and
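[One way to do the bootstrap the poster describes by hand, as a sketch only; the data and model below are placeholders standing in for the poster's model1: refit on resampled rows and take percentile intervals of the predictions.]

```r
# Placeholder data and model standing in for the poster's model1
set.seed(2)
d <- data.frame(x1 = rnorm(300))
d$y <- rbinom(300, 1, plogis(d$x1))
model1 <- glm(y ~ x1, family = binomial, data = d)

newdat <- data.frame(x1 = c(-1, 0, 1))  # points at which to predict

B <- 200
preds <- replicate(B, {
  b <- d[sample(nrow(d), replace = TRUE), ]  # resample rows
  bm <- update(model1, data = b)             # refit on the resample
  predict(bm, newdata = newdat, type = "response")
})

# Percentile bootstrap 95% intervals for each predicted probability
t(apply(preds, 1, quantile, probs = c(0.025, 0.975)))
```

The se.fit = TRUE alternative gives delta-method standard errors; with type = "response" the resulting Wald intervals can fall outside [0, 1], so they are usually constructed on the link scale and back-transformed with plogis().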
It all depends on the ultimate use of the results.
Frank
--
View this message in context:
http://r.789695.n4.nabble.com/Validation-Training-test-data-tp2718523p2719370.html
Sent from the R help mailing list archive at Nabble.com.