Tim said:
> How will you analyse these data? In other words, what calculations will
> you do to render a set of 1 to 5 rankings out of 33 choices into a set
> of interpretable statistics?
Geoff said:
Current thinking is that we will present the results simply as: the number of times each item was ranked no. 1, the number of times each item received any vote, and a score for each item based on the ranking given by each respondent, where rank 1 = 5 pts, rank 2 = 4 pts ... rank 5 = 1 pt, and unranked = 0 pts. This is not about fancy statistics (95% confidence intervals with post-stratification weightings to make it appear that the data are representative of all GPs); rather, it is about showing the range and preference of opinion from a group of people who have taken part in an opinion survey.

Tim said:
> Also, what's your sample frame and sampling strategy, or do you plan to
> use a self-selected convenience sample, in which case why are you
> bothering because as you know the results will be meaningless?

Geoff said:
As previously mentioned, we are looking to send the survey to 5,000 GP practices and 2,000 specialist practices through the Pulse IT magazine. I have also had an offer to distribute it through some other channels that could target 15,000-odd GPs directly, but I will be assessing the sampling frame in these two contexts. Keep in mind the sampling frame is not a census, but we are looking to target 5,000 out of 6,000 GP practices and 15,000 out of 23,000 GPs. I think it is important to realise a number of things:

1. The research is clearly an opinion survey.

2. We are trying to get as wide a coverage as possible to elicit a wider range of responses. It is an attempt to start to quantify the debate around innovation.

3. We are trying to get a wider coverage of opinion, beyond what a textual analysis of GPCG postings would reveal, of which IT/IM innovations are going to make a difference for General Practice. Let's call the GPCG contributors a bunch of case studies; in research methodology terms, this is an attempt to go beyond a series of case studies. I suspect that, using the 5-10% rule, we will end up with a few hundred respondents - now a whole lot more case studies. I would of course welcome any donations of incentives to help improve the response rate.

4. IMHO scientific rigour is about evaluating evidence in the context of already available evidence and taking steps accordingly, which may of course include more targeted research in a particular area.

5. Part of the exercise will be about putting the findings in the context of other available evidence.

6. The list currently presented has revealed ideas that were not mentioned by the GPCG contributors.

7. The list currently presented has already generated some more ideas from respondents beyond the GPCG.

8. There will always be a response-rate issue, whereby not everyone approached will answer the questions. The scientist puts the results into context: do the respondents represent the target population, and are the findings therefore generalisable? Do people who take part give different answers from people who don't?

9. I haven't been able to find any better evidence or previous studies on "What IT/IM innovations will make an improvement to General Practice", so, given the available resources - my time given for free and the generosity of the people who have helped so far for free - I have compromised a bit on my sampling frame methodology and follow-up strategies to obtain the nirvana of "representativeness". However, the exercise isn't meaningless if the results are put into the context of the available evidence.

10. I am a big fan of making sure research is done that can create a change or give one a better probability of making a good decision. It appears that by doing this type of research we may increase the probability of people making a good decision. For example:

i. A vendor now chooses one development over another because they have limited resources and are listening to at least a segment of the market, and they believe the probability of success is now greater than the toss of a coin.

ii. A particular development is not rated highly amongst end users, but empirical evidence shows that the development does make an improvement - the developer now sees a potential barrier to uptake and starts working on an educational program showing the empirical merits of the development, or needs to deploy the development in a way that makes it happen in the background with no impact on the end user.

It has been an interesting exercise to date, and in no way has it been meaningless, IMHO. The challenge of course is how we create change from the exercise. We have a dissemination strategy for the results for starters... and I am confident that vendors and government agencies are interested in the results.

Geoff

_______________________________________________
Gpcg_talk mailing list
[email protected]
http://ozdocit.org/cgi-bin/mailman/listinfo/gpcg_talk
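The tallying Geoff describes (rank 1 = 5 pts down to rank 5 = 1 pt, unranked = 0 pts, plus counts of first-place rankings and of any vote) can be sketched as below. The item names and responses are hypothetical, purely for illustration:

```python
from collections import defaultdict

# Hypothetical responses: each maps an item to the rank (1-5) that
# respondent gave it; items a respondent did not rank contribute 0 pts.
responses = [
    {"e-prescribing": 1, "secure messaging": 2, "decision support": 5},
    {"secure messaging": 1, "e-prescribing": 3},
]

first_places = defaultdict(int)  # times an item was ranked no. 1
any_votes = defaultdict(int)     # times an item received any ranking
scores = defaultdict(int)        # rank 1 = 5 pts ... rank 5 = 1 pt

for response in responses:
    for item, rank in response.items():
        if rank == 1:
            first_places[item] += 1
        any_votes[item] += 1
        scores[item] += 6 - rank  # converts ranks 1..5 into 5..1 points

# Report items by total score, highest first.
for item in sorted(scores, key=scores.get, reverse=True):
    print(item, first_places[item], any_votes[item], scores[item])
```

With these two made-up respondents, "secure messaging" tops the table on 9 pts (4 + 5) ahead of "e-prescribing" on 8 pts (5 + 3), even though each was ranked first once - which is why reporting the first-place count alongside the score, as proposed above, is useful.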
