On 22 Nov 2003 01:36:13 GMT, Eric Bohlman <[EMAIL PROTECTED]>
wrote:

> [EMAIL PROTECTED] (Robert J. MacG. Dawson) wrote in
> news:[EMAIL PROTECTED]: 
> 
> >>let's say we have SAT scores where the mean is about 500 and the SD
> >>about 100 ... here ... the COV is 100/500 = 20%
> > 
> > IF this were true, I can conclude:   the vast majority of scores will
> > be above 300, with the result that a large proportion of the students'
> > test-writing time is spent on questions that pretty nearly everybody
> > gets and which thus have little predictive power.  This might justify

That's a pretty bad statement;  I don't know what the context is 
supposed to be.  The folks who develop tests
are not going to keep items that have "little predictive power."
Now, it is true that the items that are predictive at the *bottom*
are not going to be very predictive at the *top* ... and vice-versa.

> > a major revision of the testing protocol (or at least the admission
> > that the goal of making sure that nobody feels squashed is more
> > important!) It is a 
> 
> That's an urban legend.  The reason SAT scores bottom out at 200 is the 
> same reason they top out at 800, namely that there aren't enough test items 
> to meaningfully distinguish levels of aptitude/achievement/whatever way out 
> in the tails.  There aren't enough items to distinguish 600 levels of 
> performance, let alone 1000.  SAT scores aren't linear functions of the 
> number of correct test items: how many points each item is "worth" varies 
> with the total score; the marginal gain from getting a single test item 
> right increases with distance from the mean (I vaguely recall that the 
> difference between a 750 and an 800 is one test item).

I can believe that the value varies per item, but I don't believe
that a single item has ever been the difference between 750
and 800.  That seems extra-unlikely, since they have changed
the scoring and its intentions since my own youth (so I have 
read), such that: (a) these days, an 800 might include several 
wrong answers.  That's partly because (b) some items on each 
test are included for yearly updating, (c) not every test given in 
one year has the same items, and (d) they are willing to allow a 
relatively large number of 800s to be scored these days, as 
opposed to the 1960s when the popularity of the SATs first 
boomed (hundreds or thousands today, instead of just dozens).
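
A minimal sketch of Eric's point that one extra correct item is "worth"
more far from the mean than near it.  This is illustration only, not the
actual SAT equating: the assumption that raw scores get pushed through a
normal-percentile step is mine, and the item count is hypothetical.

# Illustration only: map a raw score to a scaled score by converting its
# percentile rank through the inverse normal CDF, then rescaling to the
# 200-800 metric.  Real SAT conversion tables come from item-level equating.
from statistics import NormalDist

N_ITEMS = 60  # hypothetical number of items on the test

def scaled(raw: int) -> float:
    pct = (raw + 0.5) / (N_ITEMS + 1)        # crude percentile rank
    z = NormalDist().inv_cdf(pct)            # convert to a Z score
    return min(800.0, max(200.0, 100.0 * z + 500.0))

# Marginal value of one more correct item, near the middle vs. near the top:
print(round(scaled(31) - scaled(30), 1))     # small step near the mean
print(round(scaled(58) - scaled(57), 1))     # much bigger step in the tail
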

> 
> SAT scores are just rescaled Z scores (SAT=100*Z+500), with the 
> complication that the mean subtracted and the standard deviation divided by 
> are those of a reference group rather than one's fellow test-takers (IOW, 
> it's normed only once every 50 years or so, not with each administration).  

Yes, basically standard scores.

They did a *major* re-norming about a decade ago.  However,
they necessarily do some version of re-norming every year,
since new items have to be introduced.

> So a score of 300, which means nothing more than "the test-taker's raw 
> score was 2 standard deviations below the mean," actually works out to 
> getting very few items right.

Yes, that should be right.
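
To put numbers on those last two points, here is a minimal sketch of the
rescaling Eric describes.  The fixed reference-group mean of 500 and SD
of 100 are just the round numbers used in this thread, not the actual
norming constants, and the item-level equating is ignored.

# Sketch of the standard-score rescaling: SAT = 100*Z + 500,
# clamped to the reported 200-800 range.
REF_MEAN = 500.0  # reference-group mean (round number from the thread)
REF_SD = 100.0    # reference-group SD   (round number from the thread)

def z_to_sat(z: float) -> float:
    return min(800.0, max(200.0, REF_SD * z + REF_MEAN))

def sat_to_z(sat: float) -> float:
    return (sat - REF_MEAN) / REF_SD

print(z_to_sat(-2.0))     # 300.0 -- two SDs below the reference mean
print(sat_to_z(300.0))    # -2.0
print(REF_SD / REF_MEAN)  # 0.2 -- the 20% COV quoted at the top of the thread
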

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html
"Taxes are the price we pay for civilization." 