Hi:

Testing with a smaller number of participants can yield useful
insights, and you can put the remaining portion of your budget toward
re-testing what you found out in the first round. I've never
understood the need to watch the same problem repeat over and over
again when the money could be better spent prioritising it, mapping
it against a business goal, and working out how and where to fix it.
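
(For anyone who wants the back-of-the-envelope version: the
small-sample argument usually rests on the problem-discovery curve,
1 - (1 - p)^n. A rough sketch below, assuming the often-quoted
per-participant detection rate of about 0.31, purely to illustrate how
quickly the returns diminish; the real rate varies by product, task
and problem severity, so treat the numbers as indicative only.)

    # Rough sketch: expected share of usability problems found with n
    # participants, using the problem-discovery model 1 - (1 - p)^n.
    # p = 0.31 is the often-quoted average detection rate per
    # participant; it is an assumption here, not a measured value.
    p = 0.31
    for n in (1, 3, 5, 8, 10, 15):
        found = 1 - (1 - p) ** n
        print(f"{n:2d} participants -> ~{found:.0%} of problems found")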

My question is: Where does the question of statistical significance
in usability testing come from?

In my experience, when we've faced this question from the business,
it's in situations where the business:

* Is testing for the first time
* Knows little about usability/UX/iterative research
* Is trying to win an internal battle against another team (yikes!)
* Is carrying over the need for larger participant numbers from other
methods like surveys or focus groups (historical)
* Doesn't trust the results of a usability test (maturity)
* Has left testing too late, so wants to test with larger numbers to
cover their behinds (political)
* Fill in your own :)

Something always worries me a little when we're asked the
"statistical significance" question while the same question is not
applied to other parts of the business. Perhaps it comes from a lack
of understanding of, and maturity around, what we do? I'd be pleased
to see this question disappear forever!

I'd suggest that by identifying where the question is coming from, we
may all be better placed to find ways to inform and educate the
business.

Thoughts?

rgds,
Dan

