I agree with you, Thomas, for a number of reasons...

Rarely have I seen a project staffed large enough to have competent
specialists in each role, as required for the formal usability tests that
many UCD specialists espouse.  When I have, UCD practitioners without a
really strong link to the technology and development staff who must
implement the design tend to test a lot of work that never makes it into
production, because the UCD folks didn't know what the system was capable
of, or didn't realize its limitations.

A case in point is Remedy, the incident management tool.  I've seen a
half-dozen practitioners tackle Remedy with scathing criticisms and very
useful recommendations for improvement.  However, Remedy has always been one
of the worst-designed applications on the market (well, at least until I
last saw it three years ago).  It does no good to redesign the front-end of
an application that provides no straightforward facility to improve it,
except via field arrangement on the page.

Because most of my projects tend to fall in the 3-month range, with
fewer than 5 people, I tend to wear the UCD/PM/Designer/FE Developer hats.
Yes, that's a rotten combination for collaboration and equal input.
However, I wear all four hats with humility and hold everyone's opinion as
valuable.

Due to time and resource constraints, I have to be a "genius designer" most
of the time, because there simply is no feasible alternative.  I have a
conservative design style, and rarely stretch beyond the norm, especially
when 99% of what I design has been done before, just in a slightly different
way.

I still conduct usability tests, and formalize them as best I can, with
representative real-world subjects from the actual target audience.
However, my biggest rule in usability testing is: Never test what you
already know.  As long as you are honest about what you know, and as long as
you have enough experience to "know a lot", then your usability tests should
primarily be confirming designs that have to deviate (for domain reasons)
from the norm.  For example, most web sites of the enterprise business app
variety have a lot in common.  Most of them even have comparables in the
outside world (travel, billing, time-tracking, project management, portals,
etc).  When I'm faced with a unique technology limitation for an
otherwise-basic application, I test the difference between what I know will
work, and what I have to do for the technology.

Somewhere between "usability testing is a waste of time and a disservice to
your users" and "releasing any product without UCD is blasphemous" is the
middle ground where low budgets, overworked project teams, and unyielding
technology lie.

FWIW, I design corporate business apps, portals, ecommerce and social
networking _web_ sites (in order of frequency).  Your mileage and experience
may vary, and I wouldn't insist that my method works for everyone/anyone
else.

Bryan Minihan

-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Thomas
Petersen
Sent: Tuesday, October 13, 2009 5:03 AM
To: [email protected]
Subject: Re: [IxDA Discuss] Article on Number of Usability Test Participants

I for one have never said we shouldn't do user research; in fact, I
think that is one of the most important areas.

My problem with the current state of usability testing is that it
most often tests in pseudo-environments that tell you more about the
quality of your mock-up than about any finished product/service.

What often happens is that those responsible for the usability tests
provide their findings to the designers, but there is no actual
carry-over from the usability testing phase into the actual design
and development phase.

Figuring out where most users think a button should be has very
little, if any, bearing on the quality of the finished product.

It might sit exactly where the users wanted it, yet there is still no
conversion.

Through the years I have seen this again and again, which has led me
to suggest to my clients that they only do usability tests if they are
trying to test something completely new, and even then I would be
hesitant in some cases.

I have no problem doing usability tests if they make sense, but in my
experience they don't make sense in anywhere near the number of cases
in which they are conducted, and I find it rather troubling for the
state of products and services that so many UX shops are popping up
that only do the first part.

A much better approach, IMHO, is to do your research, design the monkey,
and let it loose in the jungle. THEN look at how users behave.

In most cases that gives you plenty of information about what to do
or not to do, and whether or not to invite users of your product for
qualitative studies.

But the current UCD mania is simply on the wrong track and will
hopefully fade with time as companies realize there is no safe path
to good products and success.

IMHO, you have to care about your users and your product, and realize
that the real test is the finished product, not a pseudo-environment.

Anything less is unfair both to our clients and to the users.



. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Posted from the new ixda.org
http://www.ixda.org/discuss?post=46278


________________________________________________________________
Welcome to the Interaction Design Association (IxDA)!
To post to this list ....... [email protected]
Unsubscribe ................ http://www.ixda.org/unsubscribe
List Guidelines ............ http://www.ixda.org/guidelines
List Help .................. http://www.ixda.org/help
