Geoff Sayer wrote:
> Tim said:
>> How will you analyse these data? In other words, what calculations will
>> you do to render a set of 1 to 5 rankings out of 33 choices into a set
>> of interpretable statistics?
> 
> Geoff said:
> Current thinking is that we will present the results simply as: the no. of
> times an item received the no. 1 rank, the no. of times an item received
> any vote, and a score for each item based on the ranking by each
> respondent, where rank 1 = 5 pts, rank 2 = 4 pts ... rank 5 = 1 pt, no
> rank = 0 pts. This is not about fancy statistics (95% confidence intervals
> with post-stratification weightings to make it appear that the data are
> representative of all GPs); rather it is about showing the range and
> preference of opinion from a group of people who have taken part in an
> opinion survey.

OK. Not sure about a score based on summed ranks, especially with 28/33
of possible responses scoring zero (a certain lack of linearity there),
but given the modest aims of the exercise, who cares?
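(For what it's worth, the scoring scheme Geoff describes is only a few lines of code. The item names and votes below are purely hypothetical, just to show the arithmetic:)

```python
# Sketch of the scoring scheme described above: rank 1 = 5 pts,
# rank 2 = 4 pts, ..., rank 5 = 1 pt, unranked = 0 pts.
from collections import Counter

POINTS = {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}  # rank -> points; unranked scores 0

def tally(responses):
    """responses: one dict per respondent, mapping item -> rank (1..5)."""
    scores = Counter()        # summed points per item
    first_places = Counter()  # no. of times an item was ranked no. 1
    any_vote = Counter()      # no. of times an item received any rank
    for ranking in responses:
        for item, rank in ranking.items():
            scores[item] += POINTS.get(rank, 0)
            any_vote[item] += 1
            if rank == 1:
                first_places[item] += 1
    return scores, first_places, any_vote

# Two hypothetical respondents ranking items from the list of 33
responses = [
    {"e-prescribing": 1, "secure messaging": 2},
    {"secure messaging": 1, "e-prescribing": 3},
]
scores, firsts, votes = tally(responses)
print(scores["e-prescribing"])  # 5 + 3 = 8
```

Note the non-linearity concern above applies here: an item ranked 6th by everyone scores the same (zero) as one nobody has heard of.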

> Tim said:
>> Also, what's your sample frame and sampling strategy, or do you plan to
>> use a self-selected convenience sample, in which case why are you
>> bothering because as you know the results will be meaningless?
> 
> Geoff said:
> As previously mentioned, we are looking to send the survey to 5,000 GP
> practices and 2,000 specialist practices through the Pulse IT magazine. I
> have also had an offer to distribute it through some other channels that
> could target 15,000-odd GPs directly, but I will be assessing the sampling
> frame in these two contexts. Keep in mind the sampling frame is not a
> census, but we are looking to target 5,000 out of 6,000 GP practices and
> 15,000 out of 23,000 GPs.

OK, sorry, I had forgotten that you mentioned distribution with the
Pulse IT magazine. As long as the distribution of the magazine is
approximately known, then you will be able to work out your response
rate, which is the key metric of whether the survey will tell you
anything of value.
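(The arithmetic is trivial, of course; the figures below are illustrative only, not actual distribution or return numbers:)

```python
# Response rate from a known distribution count (illustrative numbers only)
distributed = 5000   # copies assumed to reach GP practices via the magazine
returned = 350       # completed surveys received (hypothetical)

response_rate = returned / distributed
print(f"{response_rate:.1%}")  # 7.0%
```

The hard part is not the division but knowing the denominator: how many copies of the magazine actually reached a GP who saw the survey.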

> I think it is important to realize a number of things:
> 1. The research is clearly an opinion survey.

But still a survey and hence basics of sampling etc still need to be
observed. Even the market research firms understand that (as I am sure
you do too).

> 2. We are trying to get as wide a coverage as possible to elicit a wider
> range of responses. It is an attempt to start to quantify the debate
> around innovation.
> 3. We are trying to get a wider coverage of opinion, beyond what a textual
> analysis of GPCG postings would reveal, on which IT/IM innovations are
> going to make a difference for General Practice. Let's just call the GPCG
> people a bunch of case studies; in research-methodology terms, this is
> trying to go beyond a series of case studies. I suspect that, applying the
> 5-10% rule, we will end up with a few hundred respondents, i.e. a whole
> lot more case studies. I would of course welcome any donations of
> incentives to help improve the response rate.

OK, as long as, if the response rate is 5-10%, you present the results
as "a bunch of case studies" and not as "survey results".

> 4. IMHO scientific rigour is about evaluating evidence in the context of
> already available evidence and taking steps accordingly, which of course
> may include more targeted research in a particular area.
> 5. Part of the exercise will be about contextualizing the findings in the
> context of other available evidence.

Of course, contextualising the discourse!

> 6. The list currently presented has revealed ideas that were not mentioned
> by the GPCG contributors.
> 7. The list currently presented has already generated some more ideas from
> respondents beyond the GPCG.
> 8. There will always be a response-rate issue, whereby not everyone
> approached will answer the questions. The scientist puts the results into
> context by asking whether the respondents represent the target population
> and therefore whether the findings are generalisable. Do people who take
> part give different answers to people who don't take part?

Yes, but the answer to the last question is by definition unknowable,
which is why decent response rates are important. You can infer that
respondents are similar on demographic and other characteristics to
non-respondents, but that doesn't mean they are similar with respect to
the attributes being asked about in the survey.

> 9. I haven't been able to find any better evidence or previous studies on
> "What IT/IM innovations will make an improvement to General Practice", so
> given that my time is given for free, and the generosity of the people who
> have helped so far for free, I have compromised a bit on my sampling-frame
> methodology and follow-up strategies to obtain the nirvana of
> "representativeness". However, the exercise isn't meaningless if the
> results are put into the context of the available evidence.

As long as the results are presented with the appropriate degree of
circumspection, there is no problem with using scientifically dodgy
methodologies - but you do have to be completely honest about the
limitations of the methods and the reliability of the results when
presenting them, and not gloss over them by trying to "contextualise"
the results.

> 10. I am a big fan of making sure research is done that can create change
> or give one a better probability of making a good decision. It appears
> that by doing this type of research we may increase the probability of
> people making a good decision.

If, and only if, the answers obtained are reliable i.e. representative
of a defined population (of IT users). If the answers are misleading or
highly biased, such research may do us all a disservice.

Tim C
_______________________________________________
Gpcg_talk mailing list
[email protected]
http://ozdocit.org/cgi-bin/mailman/listinfo/gpcg_talk
