James Page said:
> It depends on the design. You can have badly done qualitative studies,
> as well as poorly designed quantitative studies.
I replied:
> True, but it's so much *easier* to mess up on a survey.

James replied:
> Depends on whether you create your own questions or use ones that have
> been tested before.

Nope, it doesn't. It depends on what those questions mean to your users at the time you ask them, and how relevant they are to the topic you want to research.

Just one example: my brother wanted to use a survey instrument for his master's research that had supposedly been well validated for the same topic and the same users. Apparently. Then we went through it for his actual topic (which was close, but not precisely the same) and for his actual users (who were close, but not precisely the same). About 30% of it survived.

> There is a lot of literature on what works.

But very few people read it. And those who do become highly familiar with the concept that you have to test your survey (gasp) by, yes, guess what, as I already said: usability testing it.

> All the standard surveys have been tested;
> some work better than others.

Nope. For example, even the most commonly used survey in the usability world, SUS, is rarely used in exactly its original format. And it's well known that one word in it, "cumbersome", routinely causes difficulty for users. If you haven't tested your exact survey with your actual users, you're toast. And if you're doing that, you may as well do some usability testing at the same time.

> On the other hand, interviewing well takes a lot of skill, and the
> correct methods.

Not really. I've had huge success in teaching people 'hey you' usability testing (see the extremely short chapter in my book if you're not sure what this means: www.formsthatwork.com). Typically, I get people doing good beginner-level usability testing in about half an hour, and the second half-hour is enough to start them on becoming reflective practitioners who will improve. It's genuinely quite easy to do adequately, and then to improve.
> With both methods a bad question is a bad question. It is very easy to
> prime people. Would you not say it is more difficult to make a mistake
> with a pre-tested standard survey question, one that has been tested
> many times before, than for a novice interviewing somebody?

When people are face to face, the normal rules of conversation mean that mistakes get rapidly repaired and clarified. This can't happen in a survey. It is *definitely* much, much easier to screw up a survey question than a face-to-face interview. Even easier for a novice, who is likely to have no understanding that what was a good question last week in *that* survey is rubbish in this one.

<snip - background in anthropology>

I'm not talking about anthropology; I'm talking about the normal everyday work of the interaction designer.

> As I have said before, we employ a mix of both qualitative and
> quantitative methods in discovering and fixing usability problems.

A mix is good. I do that too. I just know that quant methods can be a lot harder.

Cheers
Caroline

________________________________________________________________
Welcome to the Interaction Design Association (IxDA)!
To post to this list ....... [email protected]
Unsubscribe ................ http://www.ixda.org/unsubscribe
List Guidelines ............ http://www.ixda.org/guidelines
List Help .................. http://www.ixda.org/help
