On 24 Sep 2003 22:02:27 -0700, [EMAIL PROTECTED] (Donald Burrill) wrote:

> On Tue, 23 Sep 2003, Rich Ulrich wrote inter alia (concerning
> inferences from an ancova in the presence of interaction between the
> covariate and the categorical variables (aka "factors")):

[ snip - about interactions, and regression lines, including
all my posted lines. ]
DB >
> If interaction is present, then the regression lines are not uniformly
> the same distance apart vertically (that is, in the direction of the
> response variable, which is commonly plotted as the ordinate) at
> different values of the covariate (customarily the abscissa).
> 
> If two lines actually cross, there is clearly a region of the covariate
> where the lines are not significantly different from each other;  and
> there may be a region where line A is significantly higher than line B,
> and there may be another region (in the other direction from the
> crossing)  where line B is significantly higher than line A.  (I write
> "there may be" because the existence of these regions, in the observed
> range of the covariate, depends on (1) how sharply the two lines
> actually diverge from each other and (2) where, in the observed range of
> the covariate, the crossing is.  The existence of a significant
> interaction would ordinarily lead one to expect that at least one of
> these regions exists, on logical grounds.)  In this case, as you
> observe, the method Kylie reports is clearly applicable.

Well, you *can*  apply it.  I still doubt that I want to do it.
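
Before I say why, let me pin down what I *think* the method
is doing.  Here is a rough Python sketch -- my own reading of
the quoted paragraph, not Kylie's code; the data, the names
(y, x, g) and the 0/1 coding of the group are all invented
for illustration.  It fits the ANCOVA with the group-by-
covariate interaction, then asks, at each covariate value,
whether the vertical gap between the two fitted lines is
more than about two standard errors away from zero.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 60
    df = pd.DataFrame({"x": rng.normal(size=2 * n),
                       "g": np.repeat([0, 1], n)})
    # two lines that cross near x = 0
    df["y"] = 1.0 * df.x + 0.8 * df.g * df.x + rng.normal(size=2 * n)

    fit = smf.ols("y ~ g * x", data=df).fit()
    b, V = fit.params, fit.cov_params()

    def gap(x0):
        # estimated vertical distance between the fitted lines at
        # covariate value x0, and its standard error (g coded 0/1)
        est = b["g"] + b["g:x"] * x0
        var = (V.loc["g", "g"] + 2 * x0 * V.loc["g", "g:x"]
               + x0 ** 2 * V.loc["g:x", "g:x"])
        return est, np.sqrt(var)

    for x0 in (-2, -1, 0, 1, 2):
        est, se = gap(x0)
        print(f"x0={x0:+d}  gap={est:+.2f}  t={est / se:+.2f}")

Far from the crossing the |t| is large; near it, the gap is
"not significant" -- the pattern Donald describes above.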

I've come up with a simplified example using a Main Effect.

Suppose that you have a trend line showing that Height
predicts success at basketball.  The population SD for
height is 3 inches or so.  Let us say that Success tends
to increase by about 0.60 SD for every SD of height;
this is a description in terms of effect size.

Clearly, the effect is "0" for a "0" increase in height.
Now, the Kylie method, adapted, would say that the effect
becomes "statistically significant" once the difference in
height exceeds some cutoff distance.  Either way, in inches
or in SD units, I do not like it.  Here is a *continuous*
effect which is being interpreted discretely.  One problem,
it seems to me, is that the cutoff depends on the sample
size.  (Another problem, more technical, is that I don't
see a stated basis for prescribing a test; and it seems to
*me* that sometimes I would want to be more 'Bayesian'
about it than at other times.)
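
The sample-size point is easiest to show back in the
interaction setting of Donald's paragraph.  A minimal sketch,
with everything invented (balanced groups, covariate N(0,1),
true slopes differing by 0.8, sigma = 1): hold the two *true*
lines fixed and change only n; the coefficient covariance is
just sigma^2 * inv(X'X), so no noise needs to be simulated.

    import numpy as np

    def nonsig_halfwidth(n, slope_gap=0.8, sigma=1.0):
        # balanced design: n per group, covariate ~ N(0, 1)
        rng = np.random.default_rng(1)
        x = rng.normal(size=2 * n)
        g = np.repeat([0.0, 1.0], n)
        X = np.column_stack([np.ones(2 * n), g, x, g * x])
        V = sigma ** 2 * np.linalg.inv(X.T @ X)   # cov of estimates
        grid = np.linspace(-3, 3, 1201)
        true_gap = slope_gap * grid               # lines cross at x = 0
        se = np.sqrt(V[1, 1] + 2 * grid * V[1, 3]
                     + grid ** 2 * V[3, 3])
        inside = np.abs(true_gap) < 2 * se        # "not significant"
        return grid[inside].max()

    for n in (30, 120, 480):
        print(n, round(float(nonsig_halfwidth(n)), 2))

The stretch of the covariate where the two lines are declared
"not significantly different" shrinks roughly like 1/sqrt(n).
The cutoff describes the study, not the phenomenon.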

[snip, stuff about models]

> Rich, you raised this question of "the proper model" in the context of
> ANCOVA and interaction involving the covariate.  Do you hold the same
> opinion, as strongly, where the interaction involves only (some of) the
> categorical predictors?  If so, I'd like to know your choice of "the
> proper model" for the example (the PULSE data set in MINITAB) dealt with
> in my White Paper on modelling interactions in multiple regression, on
> the Minitab web site (www.minitab.com -- I forget the rest of the
> specific URL, but you can get there from the home page by looking for
> the section on "white papers").  I rather thought, in that context, that
> interaction was a quite reasonable thing to look for, and perhaps even
> to expect...

In context, there can be reasonable interactions, especially
in the statistics.  [I might get to that site one day, but
not today.]  I can add -- I may have had "inappropriate
interactions" on my mind, owing to the origin of this thread.
The same person (I believe) started out by asking about
skewness, and perhaps (I wondered) was perpetrating odd
transformations on numbers coded as Proportions.


In fact, we may have to find the interactions and
*demonstrate* them, before we can become convinced that
the simpler, non-interaction model is reasonable.  What is
simpler might be different variables, but I think it happens
that we can decide to code up the *interaction* as a single
term, and 'reify' it.  Example --
 - If you have two factors that are Male/Female, it is
possible to code both main effects as M/F, and the
interaction is Same/Opposite (sex).  The same three dummy
variables also work if you enter one main factor as M/F
and the other as Same/Opposite.
Can you explain Same/Opposite in the word-story
that accompanies your statistical explanation?
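
In the algebra, at least, the recoding checks out.  A tiny
Python check, under one assumption I am adding: code each
factor as +1/-1 (M/F), so that Same/Opposite is literally
the product of the two.

    import numpy as np

    A = np.array([+1, +1, -1, -1])        # factor 1: M, M, F, F
    B = np.array([+1, -1, +1, -1])        # factor 2: M, F, M, F
    S = A * B                             # Same (+1) / Opposite (-1)

    # usual coding: {A, B, A*B};   recoding: {A, S, A*S}
    X1 = np.column_stack([np.ones(4), A, B, A * B])
    X2 = np.column_stack([np.ones(4), A, S, A * S])
    print(np.array_equal(X2, X1[:, [0, 1, 3, 2]]))   # -> True

The recoded design is the same three columns, relabelled, so
the fit is identical; what changes is whether the word-story
talks about "the other factor" or about same-sex versus
opposite-sex pairs.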

It seems to me that the interaction instance does not
arise all that often, but here is another Main-effect
example.  Whenever two variables are highly correlated,
I try to model them as something like (A+B) and (A-B).
I don't *need* the same idea in the model twice; and if
the (A-B) term matters, then I would see awkward
suppression and weird coefficients if I did build the
model with both A and B.
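
A sketch of what I mean, with everything invented for
illustration: two predictors correlated about .95, and a
response that loads lightly on their sum and heavily on
their difference.  The two fits below are the same model,
only reparameterized; what differs is whether the
coefficients can be read.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    A = rng.normal(size=n)
    B = 0.95 * A + np.sqrt(1 - 0.95 ** 2) * rng.normal(size=n)
    y = 0.2 * (A + B) + 1.0 * (A - B) + rng.normal(size=n)

    def ols(X, y):
        # least squares with an added intercept; returns the slope
        # estimates and their standard errors
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (len(y) - X.shape[1])
        se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
        return beta[1:], se[1:]

    for name, X in [("A, B    ", np.column_stack([A, B])),
                    ("A+B, A-B", np.column_stack([A + B, A - B]))]:
        b, se = ols(X, y)
        print(name, "coef:", np.round(b, 2), " se:", np.round(se, 2))

Entered as A and B, the two coefficients come out with
opposite signs on strongly positively correlated variables --
the suppression look; entered as (A+B) and (A-B), the common
part and the contrast part are estimated separately and are
easy to narrate.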

-- 
Rich Ulrich, [EMAIL PROTECTED]


http://www.pitt.edu/~wpilib/index.html
"Taxes are the price we pay for civilization." 
