willow1980 wrote:
Hi, Dieter,
I did add (test=F) in my script, but it does not make a difference. The following is
my whole script.
#
modelsurs_fer13 <- gam(sum_surv15 ~ s(FLBS) + SES + s(byear) + s(FLBS, byear), family = quasipoisson)
Hi, Simon,
I am using mgcv:gam, and the version is mgcv_1.5-2. I also exchanged the
order of the two models in anova(), but that did not help either.
Judging from the differences in Df (0.77246) and deviance (-0.02), these two
models do not seem to be significantly different. Is that right?
Thank you anyway!
The issue isn't really about the order in which you supply the models to `anova'. The
problem is that there is no meaningful test to perform with these two models,
because the `larger' model has actually been estimated as having a *larger*
deviance than the `smaller' model, so there is never going to be anything
sensible to test.
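As a rough illustration of the situation described above (a minimal sketch using simulated data; the variable names merely mirror the thread, and the negative deviance difference may or may not occur on any given simulated dataset):

```r
# Sketch only: simulated stand-ins for the poster's real variables.
library(mgcv)
set.seed(1)
n <- 200
FLBS  <- runif(n)
byear <- runif(n)
sum_surv15 <- rpois(n, exp(1 + sin(2 * pi * FLBS)))

# "Smaller" nested model, without the bivariate smooth
m1 <- gam(sum_surv15 ~ s(FLBS) + s(byear), family = quasipoisson)
# "Larger" model adding the interaction smooth
m2 <- gam(sum_surv15 ~ s(FLBS) + s(byear) + s(FLBS, byear),
          family = quasipoisson)

# With automatically selected smoothing parameters, the larger model
# can occasionally end up with a slightly LARGER deviance than the
# smaller one; the F statistic would then be negative, so anova()
# reports no F value or p-value.
deviance(m1)
deviance(m2)
anova(m1, m2, test = "F")
```

Comparing `deviance(m1)` and `deviance(m2)` directly makes it easy to see whether this is what has happened before worrying about the anova output.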
Hello, everybody,
This is the first time I have posted a question, because I really cannot
find an answer in books, on websites, or from my colleagues. Thank you in
advance for your help!
I am running a likelihood ratio test to check whether the simpler model
differs significantly from the more complicated model.
willow1980 jianghua.liu at shef.ac.uk writes:
However, when I run the LRT to compare
them, the test does not return an F value or a p-value. What is the reason?
Analysis of Deviance Table
Model 1: sum_surv15 ~ s(FLBS) + s(byear) + s(FLBS,
The simpler model has the lower deviance (marginally), so there is nothing to
test here. This can happen with maximum penalized likelihood estimators, even
though the models are nested (especially if the smoothing parameters are
selected automatically). Are you using gam:gam or mgcv:gam (and which version)?
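For anyone unsure which gam() they are calling, a quick way to check the package and version (a small sketch; `packageVersion()` and `environment()` are standard base-R functions):

```r
# Which package provides the gam() currently on the search path?
library(mgcv)
environmentName(environment(gam))   # "mgcv" if mgcv's gam() is in use

# Installed version of mgcv
packageVersion("mgcv")
```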