On Fri, 17 Mar 2000, Derek Ogle wrote:

> 1)  When performing a multiple linear regression we have examined 
> t-tests for the null hypothesis that a coefficient is zero.  If we 
> examined the tests for each variable in the model, should you correct 
> for multiple comparisons (e.g., use a Bonferroni-corrected alpha)?
        Depends.  (a) Why are you testing each variable in this way?  Are you 
trying to find out which variables may be germane in predicting the 
response variable, or are you seeking a model with as few predictors as 
are supportable, or something else?
 (b) How have you treated your predictors?  Taken 'em raw & unmodified, 
cross-correlated as they come in the sample;  or orthogonalized them in 
some hierarchical order;  or otherwise sought to adjust the R matrix? 
And do they include interactions among the original predictors?
 (c) Do you plan a cross-validation, in which the degree of shrinkage in 
the regression coefficients can be estimated?  (And if not, why not?)  If 
so, I'd be inclined to be liberal in including variables;  the weak 
sisters will be shot down in the follow-up unless they turn out to be 
stronger in the replication.
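
If a concrete picture of (1) helps, here is a minimal sketch of the 
coefficient t-tests with a Bonferroni-adjusted alpha.  It uses Python 
with the statsmodels package (my own choice of tools, assumed, not 
anything implied by the question), and the data and variable names are 
simulated purely for illustration:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated data; x3 is deliberately unrelated to y.
    rng = np.random.default_rng(0)
    n = 100
    df = pd.DataFrame({"x1": rng.normal(size=n),
                       "x2": rng.normal(size=n),
                       "x3": rng.normal(size=n)})
    df["y"] = 1.0 + 0.5 * df["x1"] + 0.3 * df["x2"] + rng.normal(size=n)

    fit = smf.ols("y ~ x1 + x2 + x3", data=df).fit()

    alpha = 0.05
    slopes = fit.pvalues.drop("Intercept")   # per-coefficient t-test p-values
    bonf_alpha = alpha / len(slopes)         # Bonferroni: divide alpha by the number of tests
    for name, p in slopes.items():
        verdict = "significant" if p < bonf_alpha else "not significant"
        print(f"{name}: p = {p:.4f} ({verdict} at adjusted alpha {bonf_alpha:.4f})")

Whether dividing alpha by the number of coefficients here is a good idea 
is exactly the question raised in (a) through (c) above.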

> 2)  When performing a multiple linear regression we have performed 
> partial f-tests with the sequential SS (Type I SS) to examine if a 
> particular variable "should be added" to a simpler model.  If a series 
> of these tests are used to find a parsimonious model that still fits 
> should we correct for multiple comparisons?

Procedure sounds inappropriate to me.  What I use sequential SS for is to 
find groups of two or more variables which together have a discernible 
effect on variation in the response variable, even though singly they are not 
significant.  It can also be used as you describe, but I wouldn't use this 
to decide NOT to include a variable, any more than I would use the 
t-tests on coefficients for that purpose, unless the variables had been 
at least partly orthogonalized (or had shown that they didn't need to be 
so massaged).
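
To make the sequential-SS idea concrete, here is a rough sketch, again in 
Python/statsmodels (assumed tools; simulated data; which variables form 
the "group" is purely illustrative).  It shows Type I SS under two 
orderings of the same predictors, and a partial F-test for a group of 
variables entered together:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(0)
    n = 100
    df = pd.DataFrame(rng.normal(size=(n, 3)), columns=["x1", "x2", "x3"])
    df["y"] = 1.0 + 0.5 * df["x1"] + 0.3 * df["x2"] + rng.normal(size=n)

    # Type I (sequential) SS: each term is credited only with variation not
    # already explained by the terms entered before it, so order matters.
    print(anova_lm(smf.ols("y ~ x1 + x2 + x3", data=df).fit(), typ=1))
    print(anova_lm(smf.ols("y ~ x3 + x2 + x1", data=df).fit(), typ=1))

    # Partial F-test that a group of variables (here x2 and x3 together)
    # adds discernibly to the simpler model, i.e. a comparison of nested fits:
    reduced = smf.ols("y ~ x1", data=df).fit()
    full = smf.ols("y ~ x1 + x2 + x3", data=df).fit()
    print(anova_lm(reduced, full))

The two Type I tables will in general credit the same variable with 
different SS; that is exactly why the ordering (or some prior 
orthogonalization) has to be defensible.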

Bear in mind that while (1) and (2) are NOT equivalent tests in general, 
(2) IS equivalent to (1) for the last variable entered.
 (Specifically, (1) deals with tests based on the condition that all the 
other variables are present in the model, while (2) uses the condition 
that all the _preceding_ variables (and only those) are in the model. 
If the predictors are highly intercorrelated, it is entirely possible for 
NONE of the t-tests in (1) to be significant, even while the model as a 
whole accounts for a large fraction of the variation in the response 
variable.  And, under those circumstances, it can be informative to 
perform (2) with respect to several different sequences of predictors.)
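
That situation is easy to demonstrate for oneself.  A small simulation 
along the following lines (same assumed Python/statsmodels setup, 
illustrative names) will usually yield a large R^2 and a highly 
significant overall F, with neither coefficient's t-test anywhere near 
significance:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 60
    z = rng.normal(size=n)
    # x1 and x2 are nearly the same variable (correlation close to 1).
    df = pd.DataFrame({"x1": z + rng.normal(scale=0.05, size=n),
                       "x2": z + rng.normal(scale=0.05, size=n)})
    df["y"] = 2.0 * z + rng.normal(size=n)

    fit = smf.ols("y ~ x1 + x2", data=df).fit()
    print(fit.rsquared, fit.f_pvalue)    # model as a whole: large R^2, tiny overall p
    print(fit.pvalues[["x1", "x2"]])     # individual t-tests: typically neither "significant"

Running (2) with x1 entered first and then with x2 entered first shows 
how the shared variation gets credited to whichever one comes first.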

> I am not aware of correcting for multiple comparisons in these 
> instances, but I'm not sure why (if that is even true).

Not commonly done, as a colleague has already remarked;  but, as in all 
things, it depends precisely on how much protection you want against 
what particular peril(s);  and whether you want that protection for 
logical and defensible reasons, or for essentially psychological and/or 
aesthetic reasons (e.g., to satisfy a journal editor, or to impress a 
dissertation committee member).
                                        -- DFB.
 ------------------------------------------------------------------------
 Donald F. Burrill                                 [EMAIL PROTECTED]
 348 Hyde Hall, Plymouth State College,          [EMAIL PROTECTED]
 MSC #29, Plymouth, NH 03264                                 603-535-2597
 184 Nashua Road, Bedford, NH 03110                          603-471-7128  


