Until a few weeks ago I too hadn't come across the Johnson-Neyman procedure, so I
appreciate your comments. While I agree that it is no replacement for further
investigation of what 'correct models' may be better suited to the data, I think
it did prove useful for the application I was working on.

A key point in this case was that only a small number of the outcomes being
investigated failed to show homogeneity of slopes (HOS). The J-N procedure was
used for those outcomes to keep a consistent framework across all analyses in
the study, and a literature search turned up a small number of recent examples
of J-N being used in the subject area. For the author these were important points.

Perhaps it can be considered a kind of descriptive analysis for those scenarios
where homogeneity of slopes isn't met?
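
In case it helps anyone picture what the procedure actually computes, below is a
minimal sketch of the two-group case in Python (this is not the SPSS/SAS syntax
referred to in the quoted post; the function and variable names are just my own
for illustration). It fits the separate-slopes model
y = b0 + b1*group + b2*x + b3*group*x and solves for the covariate values at
which the group difference b1 + b3*x crosses the chosen significance threshold.

  import numpy as np
  import statsmodels.api as sm
  from scipy import stats

  def jn_boundaries(x, y, group, alpha=0.05):
      """Covariate values where the group difference in predicted y
      switches between 'not significant' and 'significant'."""
      # Fit y on [1, group, x, group*x]
      X = sm.add_constant(np.column_stack([group, x, group * x]))
      res = sm.OLS(y, X).fit()
      b = res.params           # b[1] = group effect, b[3] = slope difference
      V = res.cov_params()     # coefficient covariance matrix
      t2 = stats.t.ppf(1 - alpha / 2, res.df_resid) ** 2

      # The group difference at covariate value x0 is d(x0) = b1 + b3*x0.
      # The boundaries solve d(x0)^2 = t^2 * Var(d(x0)), a quadratic in x0:
      A = b[3] ** 2 - t2 * V[3, 3]
      B = 2 * (b[1] * b[3] - t2 * V[1, 3])
      C = b[1] ** 2 - t2 * V[1, 1]
      disc = B ** 2 - 4 * A * C
      if disc < 0:
          return None          # no real boundaries: the difference is
                               # (non)significant everywhere
      return np.sort((-B + np.array([-1.0, 1.0]) * np.sqrt(disc)) / (2 * A))

Whether the interval between the two boundaries is the significant or the
non-significant region depends on the sign of the leading coefficient, so in
practice I'd also plot the estimated difference with its confidence band as a
sanity check against whatever the SPSS syntax reports.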

A reference I found really useful is Bradley E. Huitema's 'The Analysis of
Covariance and Alternatives' (1980, Wiley), which discusses a number of
alternatives to ANCOVA, both from the perspective of choosing the most
appropriate model and of what is available when assumptions are not met.

Kylie.

Rich Ulrich wrote:

> On Tue, 23 Sep 2003 06:31:36 GMT, Kylie Lange
> <[EMAIL PROTECTED]> wrote:
>
> > Further to Donald's comments about investigating the nature of the
> > dependent-covariate interaction is the Johnson-Neyman technique for
> > calculating the region of significance. This will tell you at what
> > cut-points the regression slopes of your treatment groups change from being
> > not significantly different to significant.
>
> "... at what cut-points ... change from being not
> significantly different..."
>
> Don't  you  have a linear model?
> That sounds pretty bogus to me...   Off hand,
> I don't think of  good reasons for helping
> the reader to think that an effect works in  *lumps*.
>
> I see from google that the technique apparently dates
> back 50 years, and it is used for showing where the
> regression lines 'intersect'  and are 'not different'  in
> terms of the CI  on the intersection.
> For something that old, it is not well known -- I did
> not know that name or that it was legitimate until google
> gave me those hits.
>
> From my brief glance, I don't think it is anything that
> I will use or recommend.    (Am I being unfair?  is this
> important to someone?)
>
> The inference drawn from the 95%  CI  on
> the intersection-of-regression lines   is *cute*  but
> I don't think  you can read it that strongly, as a fair point.
> Also, a point about the technicality:  Does the technique
> get applied *only*  in the case of disordinal interactions,
> or is it also used when the lines do not cross?
>
>  - I think that one thing that affects me here is that I
> tend, rather strongly, to regard  'interactions'   as being
> a failure to find the proper elements to model.  That is,
> if the definitions were right, we'd  see main effects;
> while the definitions are wrong, we should be rather
> calm and quiet about our pronouncements.
>
> >
> > This then allows you to put values on the regions that Donald described
> > where group A > group B,  group A < group B etc.
> >
> > There is SPSS syntax for the J-P technique available at
> > http://support.spss.com/answernet/details.asp?ID=19193 which in turn was
> > developed from SAS code (reference given).
> >  [ ... ]
>
> --
> Rich Ulrich, [EMAIL PROTECTED]
> http://www.pitt.edu/~wpilib/index.html
> "Taxes are the price we pay for civilization."
