Josh, 

At Yahoo!, they addressed these kinds of issues by running a Rolling Study
every month -- a usability study where anyone could submit a topic.  Money for
this study came out of a central budget rather than from any team's individual
budget.  There were usually 4-6 topics per test.  The upside was that very
small questions got tested even if they wouldn't warrant a whole test on their
own, or if the team didn't have budget for testing.  The downsides were that
a) recruits couldn't be specific to the users of each project, b) coordinating
the studies was a very big job, and c) it was frustrating for the person
running these studies not to be able to follow the issues through.  I'm not
sure I'd recommend this.

If small issues come up occasionally, it might be appropriate to tack them
onto the end of a test on another topic -- if it only takes a minute or two,
that shouldn't be a problem.

However, for the question you mentioned below (terminology around pricing),
what's your goal?  If it's to assess whether users understand the various
terms, then lab testing is appropriate; if, however, as I suspect, the goal is
to find out which term converts best, then a bucket test is the most
appropriate way to find out.
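To make the bucket-test idea concrete, here's a rough Python sketch of how
you might split traffic across the three labels and tally the results.  The
labels come from your example, but the hashing scheme and the counts are
invented for illustration -- treat it as a sketch, not production code:

    import hashlib

    # The three pricing labels from Josh's question.
    VARIANTS = ["Starting at $5.00", "$5.00+", "$5.00"]

    def bucket_for(user_id):
        # Stable assignment: hash the user ID so a given user always
        # sees the same label across visits.
        h = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
        return VARIANTS[h % len(VARIANTS)]

    print(bucket_for("user-123"))  # always the same label for this ID

    # Hypothetical impression/conversion counts logged per variant.
    results = {
        "Starting at $5.00": (1000, 52),
        "$5.00+":            (1000, 47),
        "$5.00":             (1000, 61),
    }

    for label, (impressions, conversions) in results.items():
        print("%-18s %.1f%% conversion"
              % (label, 100.0 * conversions / impressions))

Before declaring a winner, you'd want enough traffic for the differences to be
statistically meaningful -- a two-proportion z-test or chi-squared test is the
usual check.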

One of my clients is currently investigating remote usability services like
http://www.usertesting.com/.  This could be another solution for small
questions like these, but we haven't tried it yet, so I can't speak to it from
experience.

In sum, as with anything, it depends on the question you're trying to answer --
but I have found that keeping a running log of small issues and trying to sneak
them into the ends of other usability tests often works well and is the most
time-effective way to investigate them.  I'd certainly be interested to hear
other approaches, though.

Hilary

 
Hilary User Experience
Hilary Bienstock, Principal

[email protected]  ::  310.883.5818  ::  fax 310.829.2839

________________________________
From: Josh Evnin <[email protected]>
To: [email protected]
Sent: Thu, February 25, 2010 6:26:46 AM
Subject: [IxDA Discuss] Quick & easy methods for usability testing 
micro-interactions?

Hey everybody,

I'm looking for suggestions for ways I can usability test some
micro-interactions on the site I'm working on. We've worked out a
bunch of the big questions - the flow and IA seem to be working, the
visual design is adding to the content, and users are enjoying
themselves on the site - but a handful of issues regularly pop up
that I would like to be able to test out with some users. I'm
talking about really little stuff like whether an item with a
variable price should be displayed as:

"Starting at $5.00" or "$5.00+" or just plain old "$5.00"

How do you test the little things as they come up? Do you save them
all up until you've got enough to warrant full testing sessions with
users, or do you use other methods to knock out these little
questions?

Thanks!

Josh
________________________________________________________________
Welcome to the Interaction Design Association (IxDA)!
To post to this list ....... [email protected]
Unsubscribe ................ http://www.ixda.org/unsubscribe
List Guidelines ............ http://www.ixda.org/guidelines
List Help .................. http://www.ixda.org/help
________________________________________________________________