>> Dear all,
>> we conducted a study to determine the prevalence of
>> leukemia in a population. It is a retrospective study
>> collecting information on people who have lived in the
>> area since the 1970s.
> Did you do something as impressive as previous studies,
> or that adds to previous studies?
> - sample size; subject; diligence in ascertainment
> (by quantity, by precision of diagnosis); interesting
> results?

A literature review gave us an idea of the power needed to detect a
significant effect at our level of risk. So we already knew that the
power was insufficient to detect a significant effect. But we carried
out the study anyway, because we were facing strong political pressure.

>> Until now we have 58% respondents and, we think, all
>> the cases of leukemia. 10% are definitely lost, 10% could
>> be found but not easily, and 20% refuse to answer.

> "Could be found but not easily" sounds like a
> category invented for denigrating the study as
> insufficient. "Could be found" is not explicit.

The study is being done in France. When you visit a physician, you are
recorded in a national database for two years. That means that if you
were not ill during those two years, you will not appear in the
database. But you can assume that if you keep looking for those people
over several years, you will eventually find them (sooner or later they
will be ill). That is a source of bias, since we know that those people
are in good health. Ignoring them will give more weight to the cases.

> Let's say that what you do have consists of the results
> of followup by
> (a) initial phone call to old number;
> (b) initial post card to old address;
> (c) phone call to possible listings in phone book;
> (d) post cards to similar names, from phone book
>
> and then by whatever help you can get from old employers
> or their unions, and from the social security number and
> the state; plus some review of names on death certificates
> in your state, and the neighboring states.
>
> One classic approach for assessing "followup" is
> to note the difference or bias according
> to how easy it was to get the data. Do samples in groups
> (a) ... (d) all look the same? And the tougher-to-get
> groups?
> - If the 'results' disappear when you include the
> tougher-to-get data, then you're more likely to be
> looking at ascertainment bias.

What bothers me is that a small study on a sample of non-respondents
shows that they are in good health. So my question in that case is:
should we release any results at all? We know that our study is biased,
but people are waiting for results.

Thanks for the comments.
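
For concreteness, here is a rough sketch (in Python, using statsmodels)
of the kind of power calculation mentioned above, comparing a
hypothetical doubled prevalence in the study area against a same-sized
comparison cohort. None of the numbers come from the actual study; they
are placeholders only.

    # Rough power sketch, NOT the study's actual calculation.
    # All rates and cohort sizes below are invented placeholders.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    p_reference = 0.0004   # hypothetical background prevalence
    p_exposed   = 0.0008   # hypothetical doubled prevalence in the study area
    n_per_group = 5000     # hypothetical number of traced subjects per group

    effect = proportion_effectsize(p_exposed, p_reference)   # Cohen's h
    power = NormalIndPower().power(effect_size=effect, nobs1=n_per_group,
                                   alpha=0.05, alternative='larger')
    print(f"power to detect a doubling of risk: {power:.2f}")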
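
Here is a similarly hypothetical sketch of the check suggested in the
quoted text: tabulate cases by how hard each subject was to trace, look
at the apparent prevalence in groups (a)-(d), and test whether it
drifts as follow-up gets harder. The counts are invented.

    # Sketch of the "difficulty of follow-up" check.
    # Rows: follow-up groups (a) easiest ... (d) hardest; columns: [cases, non-cases].
    # All counts are invented placeholders.
    import numpy as np
    from scipy.stats import chi2_contingency

    table = np.array([
        [12, 1488],   # (a) reached at old phone number
        [ 9, 1191],   # (b) reached at old postal address
        [ 5,  795],   # (c) found through phone-book listings
        [ 2,  398],   # (d) found through similar names / other leads
    ])

    prevalence = table[:, 0] / table.sum(axis=1)
    chi2, p, dof, expected = chi2_contingency(table)
    print("prevalence by group:", np.round(prevalence, 4))
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")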
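
And a toy illustration of the denominator problem described above: if
essentially all cases are captured but 42% of the roster never
responds, the prevalence computed on respondents alone overstates the
rate. Again, every count is made up.

    # Toy illustration of the nonresponse / denominator bias.
    # Counts are invented; only the 58% response rate comes from the post.
    n_roster      = 10_000                           # hypothetical full cohort roster
    n_respondents = int(0.58 * n_roster)             # 58% respondents
    n_cases       = 20                               # hypothetical; assumed ~all cases found

    prev_respondents_only = n_cases / n_respondents  # nonrespondents dropped from denominator
    prev_full_roster      = n_cases / n_roster       # nonrespondents assumed healthy non-cases

    print(f"prevalence, respondents only: {prev_respondents_only:.2%}")
    print(f"prevalence, full roster:      {prev_full_roster:.2%}")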
