On 26 Feb 2003 05:46:46 -0800, [EMAIL PROTECTED] (alain) wrote:

> Dear all,
> we conducted a study to determine the prevalence of
> leukemia in a population. It is a retrospective study
> collecting information on people living in an
> area since the 1970s.

Did you do something as impressive as previous studies,
or something that adds to them?
 - sample size; subject; diligence in ascertainment
(by quantity, by precision of diagnosis); interesting results?


> So far we have 58% of respondents, and we think all
> cases of leukemia. 10% are definitely lost, 10% could
> be found but not easily, and 20% refuse to answer.

"Could be found but not easily" sounds like a
category invented for denigrating the study as
insufficient.

Let's say that what you do have consists of the results
of followup by 
(a) initial phone call to old number;
(b) initial post card to old address;
(c) phone call to possible listings in phone book;
(d) post cards to similar names, from phone book

and then by whatever help you can get from old employers
or their unions, and from the social security number and 
the state;  plus some review of names on death certificates 
in your state, and the neighboring states.

One classic approach for assessing "followup" is
to note the difference or bias according
to how easy it was to get the data.  Do samples in groups
(a) ... (d) all look the same?  And the tougher-to-get
groups?

 - If the 'results' disappear when you include the
tougher-to-get data, then you're more likely to be
looking at ascertainment bias.
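That wave comparison can be sketched in a few lines of Python.  The counts
below are invented purely for illustration (not the poster's data); the idea
is just to tabulate the rate within each follow-up wave and eyeball whether
it shrinks as the subjects get harder to reach:

```python
# Hypothetical case counts by follow-up wave -- illustration only.
# Wave (a) = easiest to reach ... wave (d) = hardest to reach.
waves = {
    "a_phone_old_number":  {"cases": 12, "n": 400},
    "b_postcard_old_addr": {"cases":  9, "n": 350},
    "c_phonebook_lookup":  {"cases":  4, "n": 200},
    "d_postcard_similar":  {"cases":  1, "n": 100},
}

# Rate of the outcome within each wave.
rates = {name: w["cases"] / w["n"] for name, w in waves.items()}
for name in sorted(rates):
    print(f"{name}: {rates[name]:.3f}")

# Crude check: does the rate fall steadily as follow-up gets harder?
# A monotone drop from (a) to (d) is a warning sign of ascertainment bias.
ordered = [rates[name] for name in sorted(rates)]
trend_down = all(a >= b for a, b in zip(ordered, ordered[1:]))
print("rate drops with harder follow-up:", trend_down)
```

A proper analysis would put a test on that trend (e.g., a chi-square test
for trend across the four waves) rather than a monotonicity check, but this
shows the bookkeeping.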


[ snip, some]

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html
=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
.                  http://jse.stat.ncsu.edu/                    .
=================================================================