On 24 Dec 2001, Carol Burris wrote:

> I am a doctoral student who wants to use student performance on a
> criterion test, a state Regents exam, as a dependent variable in a
> quasi-experimental study.  The effects of previous achievement can be 
> controlled for using  a standardized test, the Iowa Test of Basic
> Skills.  What kind of an analysis can I use to determine the effects
> of a particular treatment on Regents exam scores?  The Regents exam
> produces a "percentage correct" score, not a standardized score,
> therefore it is not interval data, 

Non sequitur, and probably not true.  Percentage correct, if it means 
what it says, is the same variable as number of items correct (merely 
reduced to a percentage by dividing by the total number of items and 
multiplying by 100%), which is about as interval as you can expect to get 
in this business.  (Even a standardized score is often but a linear 
transformation of the number of items right.)  
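The point is easy to see in a line or two of Python (a sketch of my own, not anything specific to the Regents exam):

```python
# Percentage correct is a linear rescaling of the raw number-correct
# score, so equal differences in raw score map to equal differences
# in percentage -- the interval property is preserved.
def percent_correct(items_right, total_items):
    """Rescale a raw number-correct score to percentage correct."""
    return 100.0 * items_right / total_items

# A 5-item gain is worth the same 10 percentage points anywhere
# on a 50-item test:
low_gain  = percent_correct(30, 50) - percent_correct(25, 50)  # 10.0
high_gain = percent_correct(45, 50) - percent_correct(40, 50)  # 10.0
```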

Of course, if you have mis-stated things (I don't have personal knowledge 
of the Regents exams and the marking thereof) and what is produced is a 
set of percentiles rather than percentage correct, THAT variable is not 
interval (although it can be converted to an interval score fairly 
readily, by making some assumptions about the form of the distribution).
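One such conversion, assuming the underlying trait is normally distributed, is the inverse-normal transform; the Python standard library's `statistics.NormalDist` will do it (the function name below is my own):

```python
from statistics import NormalDist

def percentile_to_z(percentile):
    """Convert a percentile rank (strictly between 0 and 100) to a
    z-score, under the assumption that the underlying trait is
    normally distributed."""
    return NormalDist().inv_cdf(percentile / 100.0)

# The 50th percentile maps to z = 0; the 84th to roughly z = 1,
# which recovers an (approximately) interval scale from the ranks.
z_median = percentile_to_z(50)
z_84th   = percentile_to_z(84)
```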

> and I can not use analysis of covariance (or at least that is what I 
> surmise from my reading).  Any suggestions?

The phrase "can not" should not even be in your vocabulary in this 
context.  You can ALWAYS carry out an analysis of covariance (or a 
multiple regression analysis, or an analysis of variance;  and any of 
these in their univariate or multivariate forms).  Whether the results 
mean what you would like them to mean is another matter, of course, and 
that depends to some degree on what assumptions you are willing (and what 
assumptions you are UNwilling!) to make about the variables you have and 
about the models you are entertaining.  First carry out your analyses 
(several of them, if you're unsure, as most of us are at the outset, 
which one is "best" in some useful sense);  then look for ways in which 
the universe may be misleading you (or ways in which you may be deceiving 
yourself).  If several analyses seem to be telling you much the same 
thing (at least in a general way), then that thing is probably both 
believable and reliable.  If they tell you different things, you know the 
data aren't different, so the differences must be reflecting differences 
(possibly subtle ones) in the questions being addressed by the several 
analyses:  which in turn means that something interesting is going on, 
and it may repay you well to find out what that something is.

However, if the analysis you think you want is analysis of covariance, 
I'd strongly urge you to carry it out as a multiple regression problem, 
or as a general linear model problem;  "analysis of covariance" programs 
often do not permit the user to examine whether the slope of the 
dependent variable on the covariate interacts with the treatment variable 
(that is, whether the slopes are different in different groups, thus 
contradicting the too-facile assumption of homogeneity of regression).
Such an interaction does not invalidate the analysis;  it merely makes 
the interpretation more challenging.  And if such an interaction is 
visibly present, the analysis that assumes its absence will in general 
have less power to detect _other_ interesting things.
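As a sketch of what I mean (with simulated data and variable names of my own invention -- `iowa` for the covariate, `treat` for the treatment indicator, `regents` for the outcome), the regression formulation just adds a product column to the design matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: covariate (prior achievement), 0/1 treatment
# indicator, and an outcome whose slope on the covariate genuinely
# DIFFERS between groups (true interaction coefficient = 0.3).
n = 200
iowa  = rng.normal(50, 10, n)
treat = rng.integers(0, 2, n)
regents = (20 + 0.8 * iowa + 5 * treat
           + 0.3 * treat * iowa + rng.normal(0, 5, n))

# Design matrix: intercept, covariate, treatment, and the
# treatment-by-covariate interaction.  A plain "analysis of
# covariance" fit omits the last column, thereby ASSUMING
# homogeneity of regression slopes; including it lets you
# examine that assumption instead.
X = np.column_stack([np.ones(n), iowa, treat, treat * iowa])
beta, *_ = np.linalg.lstsq(X, regents, rcond=None)

# beta[3] estimates the between-group difference in slopes; if it
# is large relative to its standard error, the homogeneity
# assumption is contradicted and the interaction itself becomes
# part of the story.
```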

                        -- DFB.
 ------------------------------------------------------------------------
 Donald F. Burrill                                 [EMAIL PROTECTED]
 184 Nashua Road, Bedford, NH 03110                          603-471-7128



=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================