Brian,

I assume you've got the Burnham and Anderson, 2002, book.  Everything, and
more, is in there.

If I recall correctly, AIC weights are calculated directly from deltaAIC, so I
don't understand how you are getting inconsistent results between the two.
Model weights quantify the deltaAIC values so that parameters can be "model
averaged," i.e., a weighted average of the parameter estimates from the
models.
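To make the connection concrete, here is a minimal Python sketch of how the weights follow directly from deltaAIC (the AIC values below are made up purely for illustration):

```python
import math

def akaike_weights(aics):
    """Akaike weights from a list of AIC values:
    w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i = AIC_i - min(AIC)."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC values for three candidate models
weights = akaike_weights([100.0, 102.0, 110.0])
# The evidence ratio of the best model over model i is
# w_best / w_i = exp(delta_i / 2); for delta = 2 that is
# exp(1), about 2.72 -- the familiar "delta <= 2" cutoff.
```

Because the weights are a monotone transformation of deltaAIC, ranking models by either one gives the same ordering.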

One philosophical advantage of model selection is that it tries to get away
from the black/white significant/non-significant dichotomy of the p < 0.05
rule.  By assessing the relative value of models, it forces us to deal with
reality and with varying levels of support for models, which is not
black/white, instead of assuming a hypothesis is true or not true depending
on the p value.  So if you are looking for a hard and fast rule about which
model to use and which to discard, there isn't one, except to model-average
your parameters.
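Model averaging itself is just the weighted sum described above. A small sketch, with made-up per-model estimates of the same parameter and made-up Akaike weights (weights must sum to 1):

```python
# Hypothetical estimates of one parameter from three candidate models
estimates = [0.50, 0.45, 0.80]
# Hypothetical Akaike weights for those models (sum to 1)
weights = [0.62, 0.23, 0.15]

# Model-averaged estimate: sum of weight * estimate over models
avg = sum(w * e for w, e in zip(weights, estimates))
# avg = 0.62*0.50 + 0.23*0.45 + 0.15*0.80 = 0.5335
```

Burnham and Anderson also give an unconditional variance formula for such averaged estimates, which accounts for model-selection uncertainty.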

Tyler Grant

----- Original Message ----- 
From: "Brian D. Campbell" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Tuesday, October 31, 2006 9:52 AM
Subject: Model selection using AIC


> When comparing models using an information-theoretic approach, I have seen
> several means to assess the likelihood of candidate models.  One method uses
> the AIC value of a given model relative to the best model in the set, i.e.,
> delta AIC.  When delta AIC is less than or equal to 2, the given model is
> suggested to be within the range of plausible models to best fit the
> observed data.  However, one can also compute Akaike's weights, which seems
> to me a more intuitive means of assessing the likelihood of a candidate
> model being the best for the observed data.  Have guidelines on the use of
> Akaike's weights to assess model likelihood been published somewhere, for
> example, when the evidence ratio (ith model relative to the best) is above
> a given value?  I have found that a comparison of these two approaches can
> yield somewhat inconsistent results and would appreciate any feedback on
> what others have found.
>
> Sincerely:
>
> Brian D. Campbell
> 
