Francis Galton explained it back in 1885. Possibly, the Mass. Dept. of
Education missed it! Or could it be that the same gang who brought
us the exit poll data during the November election was helping them
out? :-)
I am wondering why they did not set one set of objective standards for
ALL students to meet. Of course, it is nice to reward academically
weaker districts for "improving," but the real issue may not be
"improvement" so much as attainment of a specific minimum level by all
schools. A sliding scale depicting "improvement" means little if the
schools in question are producing students who fall behind in math,
reading comprehension, etc.
Rewarding urban schools for improving is probably a good idea, but
that should not mean entering a zero-sum game with the "good"
schools. When a given school is already "good," it naturally cannot
"improve" as much as schools at the bottom of the achievement ladder.
It seems they really should have prepared a better public announcement
of the results. Rather than "knocking" the high-achieving schools, they
should have given them the praise they deserve; noting the improvement
in the large urban schools would then seem positive as well.
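As for a remedy, the textbook test-retest answer (it is Galton's own
regression line) is to judge the second score against what regression
alone predicts, not against the raw first score. A sketch under assumed
numbers; the statewide mean and test-retest correlation r below are
illustrative, not actual MCAS quantities, and I am assuming equal means
and spreads in the two years:

def expected_retest(year1_score, mean, r):
    # Regression-line prediction: scores are pulled toward the mean
    # by a factor of r (assuming equal means and SDs across years).
    return mean + r * (year1_score - mean)

mean, r = 230.0, 0.8          # assumed statewide mean and reliability
for score in (250.0, 210.0):  # one high and one low district
    pred = expected_retest(score, mean, r)
    print("year1 %.0f -> predicted year2 %.1f (drift %+.1f)"
          % (score, pred, pred - score))

With r = 0.8, a district 20 points above the mean is expected to slip
about 4 points with no real change in quality, and a district 20 points
below to gain about 4. Honest "improvement" targets would be measured
from those predictions.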
On Wed, 10 Jan 2001 21:32:43 GMT, Gene Gallagher
<[EMAIL PROTECTED]> wrote:
>The Massachusetts Dept. of Education committed what appears to be a
>howling statistical blunder yesterday. It would be funny if not for the
>millions of dollars, thousands of hours of work, and thousands of
>students' lives that could be affected.
>
>Massachusetts has implemented a state-wide mandatory student testing
>program, called the MCAS. Students in the 4th, 8th and 10th grades are
>being tested and next year 12th grade students must pass the MCAS to
>graduate.
>
>The effectiveness of school districts is being assessed using average
>student MCAS scores. Based on the 1998 MCAS scores, districts were
>placed in one of 6 categories: very high, high, moderate, low, very low,
>or critically low. Schools were given improvement targets based on their
>1998 scores: schools in the highest two categories were expected to
>increase their average MCAS scores by 1 to 2 points, while schools in
>the lowest two categories were expected to improve theirs by 4 to 7
>points (http://www.doe.mass.edu/ata/ratings00/rateguide00.pdf).
>
>Based on the average of 1999 and 2000 scores, each district was
>evaluated yesterday on whether it had met its goals. The report was
>posted on the MA Dept. of Education web site:
>http://www.doe.mass.edu/news/news.asp?id=174
>
>Those familiar with "regression to the mean" know what's coming next.
>The poor schools, many in urban centers like Boston, met their
>improvement "targets," while most of the state's top school districts
>failed to meet their improvement targets.
>
>The Boston Globe carried the report card and the response as a
>front-page story today:
>http://www.boston.com/dailyglobe2/010/metro/Some_top_scoring_schools_faulted+.shtml
>
>The Globe article describes how superintendents of high performing
>school districts were outraged with their failing grades, while the
>superintendent of the Boston school district was all too pleased with
>the evaluation that many of his low-performing schools had improved:
>
>[Brookline High School, for example, with 18 National Merit Scholarship
>finalists and the highest SAT scores in years, missed its test-score
>target - a characterization blasted by Brookline Schools Superintendent
>James F. Walsh, who dismissed the report.
>
>"This is not only not helpful, it's bizarre," Walsh said. ''To call
>Brookline, Newton, Medfield, Weston, Wayland, Wellesley as failing to
>improve means so little, it's not helpful. It becomes absurd when you're
>using this formula the way they're using it.''
>
>Boston School Superintendent Thomas W. Payzant, whose district had 52 of
>113 schools meet or exceed expectations, was more blunt: "For the
>high-flying schools, I say they have a responsibility to not be smug
>about the level they have reached and continue to aspire to do better."]
>
>
>
>Freedman, Pisani & Purves (1998, Statistics, 3rd edition) describe the
>fallacy involved:
>"In virtually all test-retest situations, the bottom group on the first
>test will on average show some improvement on the second test and the
>top group will on average fall back. This is the regression effect.
>Thinking that the regression effect must be due to something important,
>..., is the regression fallacy."
>
>I find this really disturbing. I am not a big fan of standardized
>testing, but if the state is going to spend millions of dollars
>implementing a state-wide testing program, then the evaluation process
>must be statistically valid. This evaluation plan, falling prey to the
>regression fallacy, could not have been reviewed by a competent
>statistician.
>
>I hate to be completely negative about this. I'm assuming that
>psychologists and others involved in repeated testing must have
>solutions to this test-retest problem.
>
>If I'm missing the boat on this test-retest error, I'd also
>appreciate others pointing it out.
>
>--
>Dr. Eugene D. Gallagher
>ECOS, UMASS/Boston