A suggestion:
Let's say we have two possible predictors to choose from, systolic
blood pressure and diastolic blood pressure, and that in a random
population sample one of them turns out to be slightly better at
predicting a certain outcome, but we are unsure whether this
difference has any practical importance.
In this case we have no interest in finding out whether adding
systolic bp improves the model when diastolic bp is already included,
or vice versa.
To gain insight into the problem, we could calculate a confidence
interval for the model goodness-of-fit for each predictor. If the
distribution of this statistic is unknown or difficult to evaluate,
we can use bootstrapping. If the two CIs overlap, we would conclude
that the observed difference in predictive power may be due to chance
alone.
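
To make this concrete, here is a minimal sketch of the bootstrap-CI
idea, assuming a binary outcome and taking the in-sample AUC as the
goodness-of-fit measure; the variable names and the simulated data
are illustrative assumptions only:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    def bootstrap_auc_ci(x, y, n_boot=2000, alpha=0.05):
        """Percentile bootstrap CI for the in-sample AUC of y ~ x."""
        n = len(y)
        aucs = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)   # resample cases with replacement
            xb, yb = x[idx].reshape(-1, 1), y[idx]
            if yb.min() == yb.max():      # resample has one class only: skip
                continue
            m = LogisticRegression().fit(xb, yb)
            aucs.append(roc_auc_score(yb, m.predict_proba(xb)[:, 1]))
        return np.quantile(aucs, [alpha / 2, 1 - alpha / 2])

    # Simulated data standing in for the blood-pressure sample:
    n = 300
    sys_bp = rng.normal(130, 15, n)
    dia_bp = rng.normal(85, 10, n)
    lin = 0.04 * (sys_bp - 130) + 0.02 * (dia_bp - 85)
    y = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

    print("systolic  AUC 95% CI:", bootstrap_auc_ci(sys_bp, y))
    print("diastolic AUC 95% CI:", bootstrap_auc_ci(dia_bp, y))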
A formal significance test could also be constructed by taking the
difference in (or ratio of) explanatory power as the test statistic,
and finding a confidence interval or a p-value by resampling methods.
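
The same resampling gives that test directly; a sketch reusing the
imports and simulated data above, with the AUC difference as an
illustrative choice of test statistic:

    def bootstrap_auc_diff_ci(x1, x2, y, n_boot=2000, alpha=0.05):
        """Percentile bootstrap CI for the in-sample AUC difference."""
        n = len(y)
        diffs = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)   # same resampled cases for both
            yb = y[idx]
            if yb.min() == yb.max():
                continue
            aucs = []
            for x in (x1, x2):
                xb = x[idx].reshape(-1, 1)
                m = LogisticRegression().fit(xb, yb)
                aucs.append(roc_auc_score(yb, m.predict_proba(xb)[:, 1]))
            diffs.append(aucs[0] - aucs[1])
        return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])

    # A CI that excludes 0 would suggest a real gap in predictive power:
    print("AUC difference 95% CI:", bootstrap_auc_diff_ci(sys_bp, dia_bp, y))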

I'm not a professional or academic statistician, so I may be wrong;
in that case it would be instructive if someone would be kind enough
to explain where my logic fails.

A much simpler method would be to standardize the predictors so they
are on the same scale, for example mean 0 and standard deviation 1.
Their regression coefficients would then be on the same scale and
could be compared directly, and we could see whether their CIs
overlap.
Is this correct to do?
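
If it is, a minimal sketch of the idea might look like this, reusing
the simulated data above (statsmodels is an assumption on my part,
chosen for its ready-made coefficient confidence intervals):

    import statsmodels.api as sm

    for name, x in [("systolic", sys_bp), ("diastolic", dia_bp)]:
        z = (x - x.mean()) / x.std(ddof=1)  # standardize to mean 0, sd 1
        fit = sm.Logit(y, sm.add_constant(z)).fit(disp=0)
        lo, hi = fit.conf_int()[1]          # row 1 = standardized slope
        print(f"{name}: beta = {fit.params[1]:.3f}, "
              f"95% CI [{lo:.3f}, {hi:.3f}]")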


Rich Ulrich <[EMAIL PROTECTED]> wrote in message 
news:<[EMAIL PROTECTED]>...
> On Thu, 19 Sep 2002 12:37:38 -0400, "Scott Richardson"
> <[EMAIL PROTECTED]> wrote:
> 
> > I am trying to evaluate the explanatory power of various (X1, X2, X3)
> > variables to predict an event, Y. I would like to know if there is a test
> > statistic that allows me to compare the goodness of fit across several
> > logistic regressions. I know that such a test exists for continuous
> > dependent variables (there is a paper by Vuong 1989 in Econometrica titled
> > "Likelihood ratio tests for model selection and non-nested hypotheses").
> > 
> 
> It is the same rather-illegitimate testing, in one setting or the other.
> Regression with R-squared;  maximum likelihood with Chi-squared.
> You can do a search on < AIC  BIC > .
> 
> > 
> > At the moment all I have is the output from 3 logistic regressions as
> > follows: Y = f(X1) Y = f(X2) Y = f(X3)
> > 
>  [...]
> Is there any reason you can't do a *nested*  test,
> since the nested tests are legitimate? --
> to see if X1 adds to the prediction of X2,   and vice-versa.
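
For completeness, the nested test suggested above amounts to a
likelihood-ratio test between Y = f(X2) and Y = f(X1, X2). A minimal
sketch, again with statsmodels and the simulated data from the
earlier sketches, with the AIC values printed as well:

    from scipy.stats import chi2

    small = sm.Logit(y, sm.add_constant(dia_bp)).fit(disp=0)
    big = sm.Logit(y, sm.add_constant(
        np.column_stack([dia_bp, sys_bp]))).fit(disp=0)

    lr = 2 * (big.llf - small.llf)  # LR statistic, one extra parameter
    print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df=1):.3f}")
    print("AIC (diastolic only vs both):", small.aic, big.aic)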