We take a baseline measurement, capturing time, effort, and satisfaction
for 5-6 key scenarios. Then, once we have a prototype, we run a test
with the same scenarios, measuring time, effort, and satisfaction, and
ask participants to rate the prototype compared to the previous version.

If the product is something they use every day, say web-based email,
we ask them to rate the prototype against the current web application.
In other cases, we run A/B testing: have them work through one model
and rate it, then the other and rate it. Either way, we get a
comparison.

Usually that comparison shows an improvement, but we can also measure
damage if it happens.
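As a rough illustration (not part of the original post), the baseline-vs-prototype comparison described above can be sketched as a per-scenario delta calculation. The scenario names, metric scales, and numbers here are all hypothetical:

```python
# Hypothetical per-scenario metrics: task time in seconds, effort (1-7,
# lower is better), satisfaction (1-7, higher is better), for the
# baseline session and the prototype session.
baseline = {"compose": {"time": 95, "effort": 5, "satisfaction": 3},
            "search":  {"time": 60, "effort": 4, "satisfaction": 4}}
prototype = {"compose": {"time": 70, "effort": 3, "satisfaction": 5},
             "search":  {"time": 75, "effort": 4, "satisfaction": 4}}

def percent_change(before, after):
    """Signed percent change from baseline.

    For time and effort, a negative value is an improvement; for
    satisfaction, a positive value is an improvement.
    """
    return round(100.0 * (after - before) / before, 1)

for scenario in baseline:
    deltas = {metric: percent_change(baseline[scenario][metric],
                                     prototype[scenario][metric])
              for metric in baseline[scenario]}
    print(scenario, deltas)
```

This only shows the arithmetic; in practice each cell would be an average over several participants, and the subjective ratings would come from the comparison questions mentioned above.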


Posted from the new ixda.org
http://www.ixda.org/discuss?post=42752


________________________________________________________________
Welcome to the Interaction Design Association (IxDA)!
To post to this list ....... [email protected]
Unsubscribe ................ http://www.ixda.org/unsubscribe
List Guidelines ............ http://www.ixda.org/guidelines
List Help .................. http://www.ixda.org/help
