When comparing models using an information-theoretic approach, I have seen
several means of assessing the likelihood of candidate models.  One method
uses the AIC value of a given model relative to the best model in the set,
i.e., delta AIC.  When delta AIC is less than or equal to 2, the given
model is considered to be within the range of plausible models for the
observed data.  However, one can also compute Akaike weights, which seem
to me a more intuitive means of assessing the likelihood that a candidate
model is the best model for the observed data.  Have guidelines on the use
of Akaike weights to assess model likelihood been published anywhere, for
example a threshold on the evidence ratio (ith model relative to the
best)?  I have found that a comparison of these two approaches can yield
somewhat inconsistent results and would appreciate any feedback on what
others have found.
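
For concreteness, here is a minimal Python sketch of the standard
quantities I am comparing: delta AIC (Δ_i = AIC_i − AIC_min), Akaike
weights (w_i = exp(−Δ_i/2) / Σ_j exp(−Δ_j/2)), and the evidence ratio of
the best model relative to model i (w_best / w_i = exp(Δ_i/2)).  The
function name and the example AIC values are illustrative only.

```python
import math

def akaike_weights(aics):
    """Compute delta AIC, Akaike weights, and evidence ratios
    (best model relative to each model) from a list of AIC values."""
    best = min(aics)
    deltas = [a - best for a in aics]           # delta AIC relative to the best model
    rel = [math.exp(-d / 2) for d in deltas]    # relative likelihoods exp(-delta/2)
    total = sum(rel)
    weights = [r / total for r in rel]          # normalize so the weights sum to 1
    # evidence ratio of the best model vs. model i: w_best / w_i = exp(delta_i / 2)
    ratios = [max(weights) / w for w in weights]
    return deltas, weights, ratios

# Hypothetical example: three candidate models
deltas, weights, ratios = akaike_weights([100.0, 101.5, 110.0])
# deltas  -> [0.0, 1.5, 10.0]
# weights -> approximately [0.676, 0.319, 0.005]
```

Note how the two rules can disagree: the second model has delta AIC of
1.5 (within the "plausible" cutoff of 2), yet its evidence ratio,
exp(1.5/2) ≈ 2.1, indicates the best model is about twice as well
supported.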

Sincerely:

Brian D. Campbell
