----- Original Message -----
From: Robert J. MacG. Dawson <[EMAIL PROTECTED]>
To: Dale Berger <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; Art Kendall <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Thursday, July 27, 2000 12:05 PM
Subject: Re: On-line survey


>
>
> Dale Berger wrote:
> >
> > Adding to Art's list:
> > If one has email addresses for a population of interest and
> > wishes to collect information that is not particularly
> > sensitive, an internet format might work as well or better
> > than a mail survey.

Robert J. MacG. Dawson wrote:

> Yes. However, those called "internet" rather than "email" surveys are
> usually those that involve putting up a form on a web page and waiting
> for a Heffalump to fall into it. IIRC, the original question dealt with
> concerns such as whether other users could access the responses that
> only make sense in such a context. The best that can be said for _these_
> is that they might work as well or better than the Shere Hite style
> questionnaire-in-a-magazine survey.
>
Dale Berger replied:

No one has suggested that wide open self-selection surveys allow
generalization to a known population.  Let's leave that poor dead horse to
rest in peace.

We need better vocabulary here.  I recognize that my title "on-line survey"
might imply a wide-open "y'all come" survey.  In fact, I have in mind data
collection using a web site where the address is given to a targeted
population.  There are clear advantages over a hard copy survey form sent
through regular mail - greater convenience for the respondents, quicker
responses, automatic data handling, etc.  Perhaps "targeted on-line data
collection" would be a better term.  I don't think the term "email survey"
is adequate, because email might not be involved except through automatic
transmission of responses.

The concern for privacy of responses is perhaps greater for a closed group
of people who know each other than it would be for a wide open survey on the
internet.


DB > > There would still be problems of inference if the response rate was low
>
RD > Indeed. And "low" does not mean less than 10% of the sample, it means
> less than (say) 90% of the _population_. The nonrespondents must be too
> few to matter. This is a nonrandom sample, and the beginner's intuition
> that a small sample cannot represent the whole is _correct_ for such.
>
> If the starting address list is truly randomized within the population,
> things are a little better; in such a case, 90% or so of the address
> list may be enough. It would also be enough if the address list were
> chosen in a way that had no plausible connection to the question at
> hand.
>
> Now, all this is verifiable. If the researcher using email (or snail
> mail, or telephone interviews - pick your technology level) for a survey
> can verify that the address list is randomly chosen from a well-defined
> sampling frame, and that the nonresponse rate is low enough not to
> affect the inference, the results may be usable.
>
> However, is this going to be the case?  If in fact the address list is
> chosen for convenience and may be significantly nonrandom, will the
> study go ahead anyway?  If the nonresponse rate is 75% - or even 50% or
> 25% - will the study be dropped? If results are published, will they be
> titled "Perceptions of Innumeracy Among American College Graduates" or
> "Perceptions of Innumeracy Among Euphoric State University Alumni Who
> Gave Their Email Addresses To The Alumni Office And Chose To Answer A
> Certain Survey?" Only the latter would be accurate.
>
> The problem is very simple. Random sampling is a powerful technique
> that allows us - despite the intuition of many intelligent people
> without a statistical background - to make valid inferences from a small
> sample to a larger population. Nonrandom sampling is not; to do it, and
> to expect it to have the same results as random sampling, is like
> building an airplane out of straw and expecting it to fly.
>
>
> -Robert Dawson
>

Dale Berger replied:

Is a 90%+ response rate for a survey really necessary?

One can argue that logically a 90% response rate leaves the possibility that
the remaining 10% of the population would all have responded in a direction
opposite to those who did respond.  But we know that the real world does
not work that way.  Consider polls regarding elections, which produce
verifiably accurate results with (typically) well under 90% response rates.
(Polls may be adjusted to take into account known characteristics of
non-respondents, but they still don't have the actual responses from those
people - who COULD all vote for Ralph Nader.)
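[Editor's note: the "they COULD all have answered the other way" argument corresponds to worst-case bounds on a proportion. A short sketch with hypothetical numbers (not from any real survey) shows how quickly those bounds widen as the response rate falls:]

```python
# Worst-case bounds on a population proportion: assume every
# nonrespondent would have said "no" (lower bound) or "yes" (upper
# bound).  p_resp is the proportion of "yes" among respondents,
# r is the response rate.  Illustrative figures only.
def worstcase_bounds(p_resp, r):
    lower = p_resp * r            # all nonrespondents are "no"
    upper = p_resp * r + (1 - r)  # all nonrespondents are "yes"
    return lower, upper

# 60% "yes" among respondents: at a 90% response rate the population
# figure is pinned between 54% and 64%; at 50% response the bounds
# widen to 30%-80%, far too wide to be informative.
for r in (0.90, 0.50):
    lo, hi = worstcase_bounds(0.60, r)
    print(r, round(lo, 2), round(hi, 2))
```

[This is the purely logical bound; as Dale notes, real nonrespondents are rarely this perverse, which is why pollsters get usable answers at much lower response rates.]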

Robert Dawson identified many problems with sampling in the social
sciences - convenience sampling, self-selection, low response rates, etc.
There are also problems with measurement, possible errors in data handling,
etc.   The sophisticated researcher understands these limitations and takes
them into account when interpreting research findings.

Multiple lines of evidence that point in the same direction are much more
compelling than any one finding, and even weak findings can be useful.
Sure, it would be great to have 100% response rates, and the value of
surveys decreases with lower response rates.  How much weight should we give
to surveys with less than 90% response rates?  There is no simple answer -
but they can certainly have value.
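[Editor's note: one way to formalize "how much weight" is the standard decomposition of nonresponse bias - the error in the respondent mean is the product of the nonresponse rate and the respondent/nonrespondent difference. A sketch with hypothetical numbers:]

```python
# Bias of the respondent mean as an estimate of the population mean:
#   bias = (1 - r) * (ybar_respondents - ybar_nonrespondents)
# A low response rate is harmless only when nonrespondents resemble
# respondents on the quantity measured.  Figures below are invented.
def nonresponse_bias(r, ybar_resp, ybar_nonresp):
    return (1 - r) * (ybar_resp - ybar_nonresp)

# 70% response rate; respondents average 0.55 "yes", nonrespondents 0.45:
print(round(nonresponse_bias(0.70, 0.55, 0.45), 3))
```

[The bias here is only three percentage points, despite a 30% nonresponse rate - which supports Dale's point that such surveys can still have value, provided the respondent/nonrespondent gap is plausibly small.]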



=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================
