Dear R-helpers

I have generated a suite of GLMs.  To select the best model for each set, I am
using the meta-analysis approach of de Luna and Skouras (Scand J Statist
30:113-128).  Simply put, I calculate AIC, AICc, BIC, etc., and then use
whichever criterion minimizes APE (Accumulated Prediction Error from
cross-validations on all model sets) to select models.

My problem: I have noticed that my rankings from BIC and AICc are exactly
inverted.  I fear this behaviour is a result of my coding, as follows.

I calculate BIC using the sample size:

stepAIC(mymodel.glm, k = log(n))

I then calculate AICc by (note both sums are over the same model object):

stepAIC(mymodel.glm,
        k = 2 * sum(mymodel.glm$prior.weights) /
            (sum(mymodel.glm$prior.weights) - length(coef(mymodel.glm)) - 1))

I base these calculations, for BIC, on Venables and Ripley's MASS ("...Only
k = 2 gives the genuine AIC: k = log(n) is sometimes referred to as BIC or
SBC..."), and for AICc on the formula AICc = AIC + (2K(K+1))/(n - K - 1),
where K is the number of estimated parameters and n is the sample size.
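
As a cross-check, I also compute all three criteria directly from a fitted
glm object, outside stepAIC.  This is only a sketch: ic.table is just an
illustrative helper name, and it assumes the relevant n is the number of
observations rather than sum(prior.weights):

ic.table <- function(fit) {
    ll <- logLik(fit)
    K  <- attr(ll, "df")      # number of estimated parameters
    n  <- nobs(fit)           # assumed sample size (not sum of prior weights)
    aic  <- -2 * as.numeric(ll) + 2 * K
    aicc <- aic + (2 * K * (K + 1)) / (n - K - 1)  # AICc = AIC + 2K(K+1)/(n-K-1)
    bic  <- -2 * as.numeric(ll) + log(n) * K       # the k = log(n) penalty
    c(AIC = aic, AICc = aicc, BIC = bic)
}

## e.g.  ic.table(mymodel.glm)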
        
Is this behaviour expected, or is the coding off?  I could find no reference
to this problem in the archives here, nor at S-news.

Cheers,
Joe

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Joseph J. Nocera
Ph.D. Candidate
Biology Department - Univ. New Brunswick
Fredericton, NB
Canada   E3B 6E1
tel: (902) 679-5733

______________________________________________
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html