----- Original Message ----- 
From: "Martin Lewis" <[EMAIL PROTECTED]>
To: "Killer Bs Discussion" <[EMAIL PROTECTED]>
Sent: Monday, November 08, 2004 10:19 AM
Subject: Re: Iraq civilian casualties.


> On Thu, 4 Nov 2004 16:42:53 -0800 (PST), Gautam Mukunda
> <[EMAIL PROTECTED]> wrote:
>
> > > I have read about a recent study on the number of
> > > civilian deaths during
> > > the last year in Iraq.  The methodology seems, on
> > > paper, capable of
> > > providing at least order of magnitude accuracy (i.e.
> > > differentiating to
> > > within at least a factor of 2).  It reports at least
> > > 100k civilian deaths
> > > in Iraq since the US's invasion..  My questions are:
> > >
> > > 1) Is there anything obviously wrong with their
> > > methodology?
> >
> > Yes - the methodology is extremely bad.  It uses
> > clustering, etc.
>
>  The Slate article you link to describes clustering as a "time-honored
> technique for many epidemiological studies". How does this make it
> extremely bad? There are difficulties with using clustering but this
> doesn't mean it is obviously wrong which is what Dan asked.

Gautam had pointed out several systematic problems that I didn't see
addressed in the Economist article or in the paper itself.  I consider that
omission a minus for the paper.

1) True random techniques were not used.  I can understand clusters, but
the problem with them is getting representative clusters.  Selecting random
clusters works; one just has a much smaller sample size than if one chose
randomly.  Since the violence is not randomly distributed among all people,
the n used to calculate statistical uncertainty should be the number (n)
of clusters, not the number of deaths.  That brings n down to 33, which is
quite low.  In and of itself, this is not an insurmountable problem, but it
does exacerbate any systematic problems.  Indeed, the difference in the
rates when one outlier was included gives a good feel for the inherent
uncertainty in the measurement.
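A back-of-the-envelope sketch of the point above.  All the per-cluster
numbers here are invented for illustration; only the cluster count of 33
comes from the study.  Treating the cluster as the sampling unit, one
outlier cluster both widens the interval and shifts the estimate:

```python
# Hypothetical illustration: the cluster, not the individual death, is
# the sampling unit for the uncertainty calculation.  The per-cluster
# rates below are invented; the real survey used 33 clusters.
import math
import random

random.seed(1)
n_clusters = 33
# Invented per-cluster crude death rates (deaths per 1000 person-years),
# with one violent outlier cluster tacked on.
rates = [random.gauss(7.0, 2.0) for _ in range(n_clusters - 1)] + [60.0]

def mean_and_se(xs):
    """Mean and standard error with the cluster as the sampling unit."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, math.sqrt(var / n)

m_all, se_all = mean_and_se(rates)
m_trim, se_trim = mean_and_se(rates[:-1])  # drop the outlier cluster

# With n = 33 the approximate 95% interval is wide, and dropping a single
# outlier cluster changes the estimate substantially.
print(f"all clusters:    {m_all:.1f} +/- {1.96 * se_all:.1f}")
print(f"outlier removed: {m_trim:.1f} +/- {1.96 * se_trim:.1f}")
```

The small n is what makes a single unrepresentative cluster so influential.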

The final selection was not fully random.  The initial selection was, but
the investigators could not go everywhere.  Thus, they chose neighborhoods
that "looked the same."  This substitution was subjective rather than
random, and so a bias was built into the methodology.

The most important systematic problem was the selection of the nominal
death rate.  There were a number of different nominal death rates
calculated for various spans of years before the US invasion.  According to
Gautam, they picked one that was a low outlier.  If they had picked others,
the range of excess deaths would be far lower...and would even include a
lower death rate (negative increases).  Thus, a significant part of the
paper should have been an analysis of why this particular previous death
rate is considered far superior to other official estimates, as well as
calculations of the excess deaths.
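To see how much rides on the baseline, here is a rough sensitivity sketch.
Every number in it is invented for illustration (the post-war rate,
population, time span, and candidate baselines are not the study's actual
figures); the point is only that the sign and size of the excess swing with
the baseline chosen:

```python
# Rough sensitivity sketch: how the choice of pre-war baseline death
# rate drives the excess-death estimate.  All numbers are invented for
# illustration, not taken from the study.

def excess_deaths(postwar_rate, baseline_rate, population, years):
    """Excess deaths implied by a baseline (rates are per 1000 per year)."""
    return (postwar_rate - baseline_rate) / 1000.0 * population * years

postwar_rate = 7.9           # assumed post-invasion crude death rate
population = 24_000_000      # rough population figure (assumed)
years = 1.5                  # span surveyed (assumed)

# Representative (invented) candidate baselines, per 1000 per year.
for label, base in [("low baseline", 5.0),
                    ("mid baseline", 7.0),
                    ("high baseline", 9.0)]:
    excess = excess_deaths(postwar_rate, base, population, years)
    print(f"{label:13s}: {excess:+,.0f} excess deaths")
```

A baseline above the post-war rate yields a negative excess, which is
exactly the "negative increases" possibility noted above.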

Originally, I naturally assumed that work done by a group that included
Johns Hopkins would have done due diligence to get the basics right.  It
appears that they did not, which is quite disturbing.  Scientific standards
for papers require the authors (especially of longer papers such as this
one) to give the readers an indication of the likely error sources.  To not
address a key one at all, the variation in the base rate, is inexcusably
sloppy.  As a scientist, I have a right to demand good technique from my
peers.

Dan M.




_______________________________________________
http://www.mccmedia.com/mailman/listinfo/brin-l
