On Mar 27, 2008, at 9:05 PM, Marijke Rijsberman wrote:

> For instance, testing prototypes is not a good way to suss out what  
> (small?) percentage of people is going to do something like write  
> reviews, tag their expenses, or do some other "power user" type of  
> thing which demands a lot more dedication than the average user  
> would bring to it. That requires a different (and likely more  
> quantitative) type of research.

Completely disagree. Last year we did several rounds of usability  
testing for LA Times w/prototypes looking at tagging, reviews, and  
other social idioms. In fact, the usability testing highlighted  
something we never would have seen in quantitative research: that while  
people aren't sure what tags are, the way a tag behaves when they use  
it meets their expectations.

If we had done a quantitative approach, we would have seen near 0%  
interaction and, based on that, would have scrapped tagging, ratings,  
and reviews from the new Calendar Live site. However, with in-person  
testing, we were able to get feedback from users that showed:
1. Only power-users are likely to migrate to tagging, ratings, and  
reviews.
2. Power-users are not age-defined.
3. 3-5% of users will rate, tag, or review.
4. Non-power-users were willing, and often interested, to explore  
tagging, ratings, and reviews, but sometimes needed some type of  
prompting. Understanding what kind of prompt they needed helped us  
engage them in future rounds of testing. This understanding is  
something we could only have obtained through in-person discussions,  
not through a web survey.
5. Through in-person studies we were able to perform some collaborative  
design with the participants and determine the priority levels of the  
information on the screen. This led to design concepts that enabled  
us to put tag clouds (something fewer than 2% of our participants  
could identify) in the appropriate place on the screen so that they  
were out of the way of those who wouldn't use them, but reachable for  
those who would.
6. When encouraged to explore tags, every participant who did found  
them extremely useful and immediately saw the benefit. We didn't  
explain the benefit and ask them to try them; we simply asked what  
they expected to happen if they clicked on those "things," then had  
them try it out and followed up with "how does that compare to what  
you expected?" Very vague, but it does the trick w/o leading.

Numbers 1-3 could be accomplished w/a quantitative study, but 4-6  
required a qualitative study. And frankly, 4-6 were insights that  
were new, while 1-3 are things we could have learned by googling.


Cheers!

Todd Zaki Warfel
President, Design Researcher
Messagefirst | Designing Information. Beautifully.
----------------------------------
Contact Info
Voice:  (215) 825-7423
Email:  [EMAIL PROTECTED]
AIM:    [EMAIL PROTECTED]
Blog:   http://toddwarfel.com
----------------------------------
In theory, theory and practice are the same.
In practice, they are not.

________________________________________________________________
Welcome to the Interaction Design Association (IxDA)!
To post to this list ....... [EMAIL PROTECTED]
Unsubscribe ................ http://www.ixda.org/unsubscribe
List Guidelines ............ http://www.ixda.org/guidelines
List Help .................. http://www.ixda.org/help