dennis roberts wrote:

> well, glad you asked ... one of the best treatments of this question you 
> have raised was a small handout done by bob frary (retired from vpi) on 
> questionnaire development ... and lucky for you there is a url to this
> http://www.testscoring.vt.edu/fraryquest.html

Thanks for the reference. It seems to be a very comprehensive look at scales. 
As far as I can tell, the ? response errs on two counts: it has been positioned 
as a mid-point (even though it isn't one), and it could also be interpreted as 
an "other" response. The paper doesn't actually mention ?s at all, which 
suggests to me that this probably isn't a common practice, and certainly isn't 
a recommended one. 

The survey, which was conducted on all the employees of the company I work for, 
was prepared by a professional consultancy. We have the results back, and on 
average around 20% of responses are question marks. For some questions, as many 
as 47% of respondents ticked the question mark. 

I'm not sure how much of a difference this makes to the results. I've 
questioned the consultants about it, and they said they'd used a ? to avoid 
clustering on the midpoint. But it still seems to me that using a ?, i.e.

agree / tend to agree / ? / tend to disagree / disagree

offers a midpoint, only it's worse than having a midpoint because it isn't 
actually supposed to be one at all. The consultants haven't solved the midpoint 
problem; they've just fudged it. According to them, the ? meant "I don't know" 
or "the question isn't relevant to me". Apparently this was specified in a list 
of guidelines on the first page of the survey. However, the survey was 
web-based, with over 20 screens and over 100 items, and nobody involved in the 
discussion of the results even remembered that such a guideline had been given, 
let alone what it said. I should also point out that approximately two-thirds 
of respondents were not native English speakers, and I would estimate that 
around 10% have very little command of the language at all. For budget reasons 
it wasn't possible to translate the questionnaire; given that limitation, it 
seems to me that every effort should have been made to make it as clear and 
intuitive as possible. 

We're now in the position of having to conduct focus groups to find out what 
people thought the ? meant. But isn't it the point of a survey to find out what 
people think? We shouldn't need another survey to find out what the first 
survey meant. Focus groups are useful, but they should focus on discussing why 
the results emerged and what to do about them, not WHAT the results 
were. 

I'm not really that familiar with statistics. I did a little when I was a 
student, and I recall the professors being very concerned about which scales 
were used and why - it was certainly something they took seriously. But I'm 
out of touch now, so I can't say for certain whether the consultants have been 
negligent. It does seem to me, though, that they should at least have warned us 
about these issues before going ahead with the survey. We've paid them a great 
deal of money, and a great deal more will be invested in communicating the 
results and in taking action on the areas of weakness the survey revealed. It 
all seems very unprofessional to me.

Thanks to all those who have replied

James

