Ravi Varadhan <rvarad...@jhmi.edu> wrote
>
>I have heard this (i.e., that only head-to-head comparisons are valid) and
>various other pieces of folklore about AIC- and BIC-based model selection,
>including the claim that these information criteria are only applicable for
>comparing two nested models.  
>
>Where has it been demonstrated that AIC/BIC cannot be used to find the best
>subset, i.e. the subset that is closest to the "true" model (assuming that
>true model is contained in the set of models considered, and that maximum
>likelihood estimation is used for estimating parameters in the models)?  
>
>I would greatly appreciate any reference that shows this.
>

Burnham and Anderson state a different result - not exactly the opposite, but 
different - in that they recommend AICc for choosing among several 
competing models.  

But defining 'best' is tricky.  In most situations where there are many 
variables, each of several models will be almost equally good, and which is 
'best' would vary if you took a different sample from the same population.
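This is easy to see in a small simulation (a sketch I am adding here, not code
from the original thread): fit a few candidate lm() models to repeated samples
drawn from the same population and record which one AIC() prefers.  The
variable names and coefficients below are arbitrary illustration.

```r
# Sketch: does AIC pick the same "best" model on every sample from
# the same population?  True model uses x1 and x2 only.
set.seed(1)
best <- replicate(20, {
  n  <- 50
  x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
  y  <- 1 + 0.5 * x1 + 0.3 * x2 + rnorm(n)
  fits <- list(m1 = lm(y ~ x1),              # underfit
               m2 = lm(y ~ x1 + x2),         # true structure
               m3 = lm(y ~ x1 + x2 + x3))    # overfit
  # name of the model with the smallest AIC in this sample
  names(fits)[which.min(sapply(fits, AIC))]
})
table(best)  # typically split across models, not unanimous
```

With small n and correlated criteria values, the winner often flips between
m2 and m3 across replicates, which is exactly the instability of 'best'
described above.  (AICc is not in base R; packages such as MuMIn provide it.)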

Peter

Peter L. Flom, PhD
Statistical Consultant
Website: www DOT peterflomconsulting DOT com
Writing: http://www.associatedcontent.com/user/582880/peter_flom.html
Twitter:   @peterflom

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
