Re: [R] ROCR - best sensitivity/specificity tradeoff?
Thanks Claudia,

Meanwhile I implemented a simple function to evaluate the Youden index and
subsequently all the other parameters. This is sufficient for my purpose.

Cheers,
Christian

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
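[Editor's note: Christian's actual function is not in the thread. A minimal sketch of a Youden-index helper along the lines he describes might look like the following; the toy data and the helper name `youden_cutoff` are assumptions, not his code.]

```r
library(ROCR)

## Sketch of a Youden-index helper (hypothetical, not Christian's code).
## Expects a performance object built with performance(pred, "sens", "spec").
youden_cutoff <- function(perf) {
  sens <- perf@y.values[[1]]       # sensitivity at each cutoff
  spec <- perf@x.values[[1]]       # specificity at each cutoff
  cuts <- perf@alpha.values[[1]]   # the cutoffs themselves
  j    <- sens + spec - 1          # Youden index J
  best <- which.max(j)
  c(cutoff = cuts[best], sensitivity = sens[best],
    specificity = spec[best], J = j[best])
}

## Toy data, purely illustrative
set.seed(42)
scores <- c(rnorm(50, mean = 1), rnorm(50, mean = -1))
labels <- c(rep(1, 50), rep(0, 50))
perf   <- performance(prediction(scores, labels), "sens", "spec")
youden_cutoff(perf)
```

The Youden index J = sensitivity + specificity - 1 implicitly weights both error types equally; if that assumption does not hold for the problem, a cost-weighted criterion is more appropriate.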
Re: [R] ROCR - best sensitivity/specificity tradeoff?
Christian,

> My question concerns the ROCR package, and I hope somebody here on the
> list can help - or point me to some better place.
>
> When evaluating a model's performance, like this:
>
>   pred1 <- predict(model, ..., type="response")
>   pred2 <- prediction(pred1, binary_classifier_vector)
>   perf  <- performance(pred2, "sens", "spec")
>
> (where "prediction" and "performance" are ROCR functions), how can I then
> retrieve the cutoff value for the sensitivity/specificity tradeoff with
> regard to the data in the model (e.g. model = glm(binary_classifier_vector
> ~ data, family="binomial", data=some_dataset))? Perhaps I missed something
> in the manual? Or do I need an entirely different approach for this? Or is
> there an alternative solution?

a) Look into the performance object; you find all the values there.

b) Have a look at this thread:
https://stat.ethz.ch/pipermail/r-help/attachments/20100523/51ec813f/attachment.pl
http://finzi.psych.upenn.edu/Rhelp10/2010-May/240021.html
http://finzi.psych.upenn.edu/Rhelp10/2010-May/240043.html

Claudia
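[Editor's note: Claudia's point (a) can be made concrete. The values live in the S4 slots of the performance object; the example data below is invented for illustration, not from the thread.]

```r
library(ROCR)

## Toy data, purely illustrative
set.seed(1)
scores <- runif(100)
labels <- rbinom(100, 1, scores)

pred <- prediction(scores, labels)
perf <- performance(pred, "sens", "spec")

## Each slot is a list with one element per cross-validation run;
## with a single run, take element [[1]].
cutoffs     <- perf@alpha.values[[1]]
sensitivity <- perf@y.values[[1]]
specificity <- perf@x.values[[1]]

head(data.frame(cutoffs, sensitivity, specificity))
```

`str(perf)` shows the full slot structure if the names above are unfamiliar.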
Re: [R] ROCR - best sensitivity/specificity tradeoff?
On Apr 6, 2011, at 2:27 PM, Christian Meesters wrote:

> Hi,
>
> My question concerns the ROCR package, and I hope somebody here on the
> list can help - or point me to some better place.
>
> When evaluating a model's performance, like this:
>
>   pred1 <- predict(model, ..., type="response")
>   pred2 <- prediction(pred1, binary_classifier_vector)
>   perf  <- performance(pred2, "sens", "spec")
>
> (where "prediction" and "performance" are ROCR functions), how can I then
> retrieve the cutoff value for the sensitivity/specificity tradeoff with
> regard to the data in the model (e.g. model = glm(binary_classifier_vector
> ~ data, family="binomial", data=some_dataset))? Perhaps I missed something
> in the manual?

Or perhaps in your learning phase regarding decision theory? You have not
indicated that you understand the need to assign a cost to errors of either
type before you can talk about a preferred cutoff value.

> Or do I need an entirely different approach for this? Or is there an
> alternative solution?
>
> Thanks,
> Christian

--
David Winsemius, MD
West Hartford, CT
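[Editor's note: David's point maps directly onto ROCR's "cost" measure, which weights false positives and false negatives explicitly. The cost weights and data below are arbitrary assumptions chosen for illustration.]

```r
library(ROCR)

## Toy data, purely illustrative
set.seed(7)
scores <- c(rnorm(50, mean = 1), rnorm(50, mean = -1))
labels <- c(rep(1, 50), rep(0, 50))
pred   <- prediction(scores, labels)

## Suppose a false negative is taken to be twice as costly as a false
## positive -- these weights are arbitrary and problem-specific.
cost <- performance(pred, "cost", cost.fp = 1, cost.fn = 2)

## x.values holds the cutoffs, y.values the weighted cost at each one
best <- which.min(cost@y.values[[1]])
cost@x.values[[1]][best]   # cutoff minimizing the weighted cost
```

Changing `cost.fp` and `cost.fn` moves the preferred cutoff, which is exactly why the costs have to be chosen before a "best" tradeoff can be named.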
[R] ROCR - best sensitivity/specificity tradeoff?
Hi,

My question concerns the ROCR package, and I hope somebody here on the
list can help - or point me to some better place.

When evaluating a model's performance, like this:

  pred1 <- predict(model, ..., type="response")
  pred2 <- prediction(pred1, binary_classifier_vector)
  perf  <- performance(pred2, "sens", "spec")

(where "prediction" and "performance" are ROCR functions), how can I then
retrieve the cutoff value for the sensitivity/specificity tradeoff with
regard to the data in the model (e.g. model = glm(binary_classifier_vector
~ data, family="binomial", data=some_dataset))? Perhaps I missed something
in the manual? Or do I need an entirely different approach for this? Or is
there an alternative solution?

Thanks,
Christian