@dana
>
> I'd argue that in the small tests that most of us do percentages are
> bogus).
>
Totally agree. Most of our tests use 30-plus participants, but for most
small studies percentages are meaningless.
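To illustrate the point (a quick sketch of my own, not anything from Dana's post, using the standard Wilson score interval): with 4 of 5 participants succeeding, the "80% success rate" carries a 95% confidence interval stretching from roughly 38% to 96%, which is why quoting the percentage alone is bogus.

```python
# Sketch: why percentages mislead at small n. We compute a 95% Wilson
# score confidence interval around an observed task-success proportion.
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# 4 of 5 participants succeed: "80%" on paper...
low, high = wilson_interval(4, 5)
print(f"{low:.0%} to {high:.0%}")  # prints "38% to 96%"
```

With 30-plus participants the interval tightens enough that the percentage starts to mean something, which matches the distinction above.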

You said

> Instead, they're [firms] asking, What do we want the user experience to be?
> What are the constraints of the technology? What are the priorities of the
> business? Which usability issues prevent us from reaching the vision we have
> for this design?


Our research shows that most firms (and I am not talking about the
exceptional ones) know they have issues. But they are either overwhelmed by
the long list, or don't believe the issues matter enough compared with
adding an extra feature or sorting out another problem. In some cases there
is a conflict within the team.

@Caroline,
I said:

> On the other hand, interviewing well takes a lot of skill, and the correct
> methods.


You said:

> Not really. I've had huge success in teaching people 'hey you' usability
> testing (see the extremely short chapter in my book if you're not sure what
> this means. www.formsthatwork.com). Typically, I get people doing good
> beginner-level usability testing in about half an hour, and the second half
> an hour is enough to get them starting on being reflective practitioners
> who
> will improve.


Study after study has found massive variations in the usability issues found
by different evaluators. If you were right, usability studies would be
replicable, which current research shows they are not. On methods, see Imre
Lakatos's criticism of Milton Friedman.

There are many books on the market on how to do usability testing, but has
anybody tested the books?

We have done quite a bit of our own field research on people doing usability
studies. (We don't believe in just one technique, but in the right tool for
the right purpose.) The subjects ranged from professionals to people who had
just picked up a book. Standards are pretty much all over the place.

The evaluators we have observed range from somebody who has just picked up a
textbook, to people who have done courses, to people with degrees in
usability-related subjects.

We have observed some pretty strange practices, such as participants being
tested behind a glass wall (not a one-way mirror), with about six people
looking in. We have seen test participants being given a very detailed
script (i.e. go to the page, go to form item X, enter Y, then go to the next
item, enter Z). Another time, five participants were tested simultaneously
in one room, with the only evaluator present running behind the
participants. Another time, a video of test sessions was posted to YouTube
without the participants' consent.

Those were some of the strangest examples, but in most of the cases we have
observed, we have seen leading questions being asked, or priming. The good
evaluators are the exception. Clients of usability firms seem to echo these
findings: "X Agency has a very good evaluator, but don't use anybody else
there", or "we had Y, who was terrible, then we found Z, who was brilliant".

I said:
> All the standard surveys have been tested, some work better than others.

You said

> Nope.

See Tullis et al.; a good summary is here:
http://www.upassoc.org/usability_resources/conference/2004/UPA-2004-TullisStetson.pdf

You said:

> <snip - background in anthropology>
>
> I'm not talking about anthropology, I'm talking about the normal everyday
> work of the interaction designer.


Research methods are research methods, and unless they are carried out with
some diligence they will lead to the wrong conclusions.

James
http://bog.feralabs.com


2009/3/12 Caroline Jarrett <[email protected]>

> James Page said:
> > It depends on the design.
> > You can have badly done qualitative studies,
> > as well as poorly designed quantitative studies.
>
> I replied:
> > True, but it's so much *easier* to mess up on a survey.
>
> James replied:
> > Depends on if you create your own questions
> > or use ones that have been tested before.
>
> Nope, it doesn't. It depends on what those questions mean to your users at
> the time that you ask them, and how relevant they are to the topic that you
> want to research.
>
> Just one example: my brother wanted to use a survey instrument for his
> master's research that had supposedly been well-validated for the same
> topic
> and the same users. Apparently. Then we went through it for his actual
> topic
> (which was close, but not precisely the same) and for his actual users (who
> were close, but not precisely the same). About 30% of it survived.
>
> >There is a lot of literature on what works.
>
> But very few people read it. And those that do, become highly familiar with
> the concept that you have to test your survey (gasp) by yes, guess what, as
> I already said: usability testing it.
>
> > All the standard surveys have been tested,
> > some work better than others.
>
> Nope. For example, even the most commonly-used survey in the usability
> world, SUS, is rarely used exactly in its original format. And it's
> well-known that one word in it, "cumbersome", routinely causes difficulty
> for users. If you haven't tested your exact survey with your actual users,
> you're toast. And if you're doing that, you may as well do some usability
> testing at the same time.
>
> > On the other hand, interviewing well takes a lot of skill, and the
> correct methods.
>
> Not really. I've had huge success in teaching people 'hey you' usability
> testing (see the extremely short chapter in my book if you're not sure what
> this means. www.formsthatwork.com). Typically, I get people doing good
> beginner-level usability testing in about half an hour, and the second half
> an hour is enough to get them starting on being reflective practitioners
> who
> will improve. It's genuinely quite easy to do adequately, and then to
> improve.
>
> > With both methods a bad question,
> > is a bad question.
> > It is very easy to prime people.
> > Would you not say it is more difficult
> > to make a mistake with a pre tested standard
> > survey question, that has been tested
> > many times before than a novice interviewing somebody?
>
> When people are face to face, the normal rules of conversation mean that
> mistakes get rapidly repaired and clarified. This can't happen in a survey.
> It is *definitely* much, much easier to screw up a survey question than a
> face-to-face interview. Even easier for a novice, who is likely to have no
> understanding that what was a good question last week in *that* survey is
> rubbish in this one.
>
> <snip - background in anthropology>
>
> I'm not talking about anthropology, I'm talking about the normal everyday
> work of the interaction designer.
>
> > As I have said before we employ
> > a mix of both qualitative and
> > quantitative methods in discovering
> > and fixing usability problems
>
> A mix is good. I do that too. I just know that quant methods can be a lot
> harder.
>
> Cheers
> Caroline
>
> ________________________________________________________________
> Welcome to the Interaction Design Association (IxDA)!
> To post to this list ....... [email protected]
> Unsubscribe ................ http://www.ixda.org/unsubscribe
> List Guidelines ............ http://www.ixda.org/guidelines
> List Help .................. http://www.ixda.org/help
>
