Hi David,

Thank you for the response. A few comments below.
On Fri, May 23, 2008 at 2:42 PM, David Hewitt <[EMAIL PROTECTED]> wrote:
>
> Model selection doesn't reduce to AIC vs. BIC, or to Bayesian vs.
> frequentist. AIC and BIC are only two approaches for model selection,
> after all. That was part of my main point. Nonetheless, the fact remains
> that Bayesian methods differ from "pure" likelihood methods, in principle
> and in practice. If you're going to use BIC, how will you choose your
> priors? It's a practical issue. EJW has done a lot of work on model
> selection and I thought his papers were a good intro to the variety of
> approaches.
>

My remark was in response to the statement that AIC/AICc is used following
ML estimation while BIC is used in a Bayesian context with a likelihood and
a prior. I wanted to point out that BIC doesn't need to be thought of in a
Bayesian context, and there is no need for the user to explicitly specify a
prior to use BIC -- it is simply -2*logLik + k*log(n), with k being the
number of estimated parameters and n the sample size. (A small R sketch at
the end of this message illustrates the point.)

>>> All that said, since you're dealing with random effects, Bayesian
>>> approaches do appear to have the upper hand at present, and a shift in
>>> that direction may be warranted.
>>
>> Can you expound on the last paragraph?
>>
>
> Others on the list are far better positioned than I to expound, but as a
> lurker in stats journals I see a lot more work on model selection methods
> for models with random effects in a Bayesian context. For instance, type
> "random effects model selection" into Google and almost all the first 20
> results are Bayesian. David Anderson told me personally that he thinks
> I-T methods (AICc) are really struggling with random effects. I don't
> honestly know how the various packages in R calculate the AIC values for
> models with random effects (of course, you can look and see), but I'd
> guess it's something you have to be rather careful about. I still need to
> read Pinheiro and Bates, obviously.
>

I think you're right that there is some shaky ground here, and Doug Bates
has pointed out some issues on the R-sig-mixed-models list (I can't seem to
find the thread right now). One of the issues is that mixed models are
generally fit with REML, which is not ML and therefore does not technically
conform to the derivations of the *ICs; if you fit the model with ML
instead, the variance component estimates are biased downward. (The second
sketch at the end of this message touches on the REML/ML point.)

Another issue that is a bit murky is the question of how many parameters
are being estimated in a model with random effects. In this thread we have
discussed models with huge numbers of random effects (e.g. >300 intercept
adjustments, >300 slope adjustments for diameter, >300 slope adjustments
for vineload, etc.), yet we only increase k in the AIC/BIC equations by 1
per variance component, because technically the random effects are
predicted while the variance components are estimated (third sketch below).

best,

Kingsford Jones

> -----
> David Hewitt
> Research Fishery Biologist
> USGS Klamath Falls Field Station (USA)
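P.S. Here is a minimal sketch of the BIC point, using made-up data (the
data frame `d` below is purely for illustration): no prior is specified
anywhere, the criterion is just the penalized log-likelihood.

set.seed(1)
d <- data.frame(x = rnorm(50))
d$y <- 2 + 3 * d$x + rnorm(50)

fit <- lm(y ~ x, data = d)

ll <- logLik(fit)
k  <- attr(ll, "df")               # 3 estimated parameters: intercept, slope, sigma
n  <- nrow(d)

-2 * as.numeric(ll) + k * log(n)   # BIC "by hand"
AIC(fit, k = log(n))               # same number via AIC()'s penalty argument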
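And a sketch of the REML/ML issue, using the Orthodont data that ships with
nlme (this example is mine, not from Doug's thread): the REML criterion is
not comparable across models with different fixed effects, so the usual
advice is to refit with method = "ML" before looking at AIC/BIC.

library(nlme)

m1.reml <- lme(distance ~ age,       random = ~ 1 | Subject, data = Orthodont)
m2.reml <- lme(distance ~ age + Sex, random = ~ 1 | Subject, data = Orthodont)

## AIC(m1.reml, m2.reml) would mix REML criteria computed on different
## scales, so refit with ML first:
m1.ml <- update(m1.reml, method = "ML")
m2.ml <- update(m2.reml, method = "ML")

AIC(m1.ml, m2.ml)   # comparable, though the ML variance estimates are biased

anova(m1.ml, m2.ml) prints the same ICs along with a likelihood ratio test.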
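Finally, a sketch of the parameter-counting point, again with Orthodont (my
own toy example, not the >300-level models from this thread): the
subject-level intercept adjustments are predicted, so k only picks up the
fixed effects plus the variance components.

library(nlme)

m <- lme(distance ~ age + Sex, random = ~ 1 | Subject, data = Orthodont,
         method = "ML")

attr(logLik(m), "df")        # 5 = 3 fixed effects + 1 random-intercept
                             #     variance + 1 residual variance
nlevels(Orthodont$Subject)   # 27 subjects, i.e. 27 predicted intercept
                             #     adjustments, none of which enter k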