One quick test of any metric is to spend 5 minutes trying to hack it. What if you were your evil twin: how could you make evil happen while still scoring well on these metrics? Better metrics make life harder for your evil twin. Lousy metrics make it easy.
> 1) Number of layouts delivered
> 2) Number of interactive prototypes created
> 3) Percentage of product design requests completed by commit date
> 4) Number of users tested
> 5) Number of product improvements made
> 6) Number of product insights documented

One big assumption you're making is that higher numbers mean better results. One excellent prototype might do the work of 5 mediocre ones, but the designer who tends to need 5 mediocre ones will score better here. Same for # of users tested (you're rewarding people with sloppy study designs, or who can't win basic arguments without going to the lab), etc. Volume is a very poor measure of quality. But since measuring volume is easy and popular, it explains the dozens of organizations proud of their fancy metrics, but somehow in denial of their lousy products.

I'm really not a fan of systematic metrics - they're a favorite fuel for micromanagers. You should also note there is nothing wrong with subjective metrics. Why can't your team score itself 1 to 10 on team performance every month, or even better, ask your clients & stakeholders to rate your performance? Then at least you have a metric that is very difficult to manipulate. So what if it's not scientific: science is not a panacea. If the goal is to get a sense of how you're doing and focus team energy, qualitative measures can be just as effective as quantitative ones. RMPT can work fine with subjective measures.

Lastly, thinking like a general manager, which I was most of my career, the only metric I'd ever evaluate you on if I were your boss would be #5: number of product improvements made. That's the *only* metric that earns your team its salary.

A favorite scheme I've seen used for usability engineers is simply this: # of usability issues found, # of recommendations made, # of recommendations approved. You might need a different set for designers, but you get the idea.
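To make that scheme concrete, here's a minimal sketch of how those three counts might be tracked in code. The class name, field names, and the derived approval-rate ratio are my own additions for illustration, not part of the scheme as described, which only lists the raw counts.

```python
from dataclasses import dataclass


@dataclass
class UsabilityMetrics:
    """Per-period tally for one usability engineer (hypothetical structure)."""
    issues_found: int
    recommendations_made: int
    recommendations_approved: int

    def approval_rate(self) -> float:
        """Share of recommendations that stakeholders approved.

        This derived ratio is an addition; the original scheme tracks
        only the three raw counts above.
        """
        if self.recommendations_made == 0:
            return 0.0
        return self.recommendations_approved / self.recommendations_made


# Example: one engineer's quarter (numbers are made up)
m = UsabilityMetrics(issues_found=12,
                     recommendations_made=9,
                     recommendations_approved=6)
rate = m.approval_rate()  # 6/9, i.e. two thirds of recommendations landed
```

The point of the derived ratio is that it rewards recommendations that actually get adopted, which ties the metric back to rewards, per the note below about metrics needing to be tied to rewards to be respected.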
If you discover that more layouts, more prototypes, more magic spells lead to more approved recommendations, you'll be rewarded for it. And if those things (layouts, protos, etc.) turn out to be a waste of time, you won't have a team of people doing those things anyway just because there is a metric that rewards it. (But do note that this is pretty much the only way to get people to respect metrics: they must be tied to rewards.)

And finally, I'd guess NetQos is a metric-happy place given the business you're in, which is fine. But creative work doesn't fit metric schemes as well as, say, performance testing does - creative work is inherently sloppy, messy and wasteful. I'd seek out other creative groups - PR, Marketing, Advertising, etc. - and see how they're handling fitting their creative work into metrics. I suspect you'll get better ideas from them than from the engineering and QA orgs.

-Scott

Scott Berkun
www.scottberkun.com

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Russell Wilson
Sent: Monday, August 18, 2008 8:53 PM
To: [EMAIL PROTECTED]
Subject: [IxDA Discuss] 6 Metrics for Managing UI Design

I've been working with my team to devise a small set of metrics (don't want overkill) to help both guide our efforts and measure our progress. Please take a look at my post if you have any interest in this area and comment. I would love to get feedback on metrics that others are working with.

http://blog.dexodesign.com/2008/08/18/6-metrics-for-managing-ui-design/

________________________________________________________________
Welcome to the Interaction Design Association (IxDA)!
To post to this list ....... [EMAIL PROTECTED]
Unsubscribe ................ http://www.ixda.org/unsubscribe
List Guidelines ............ http://www.ixda.org/guidelines
List Help .................. http://www.ixda.org/help
