Hi,

This is a very interesting approach.  Some general points:

What is your measure of success for your new heuristics?  There is
some lively debate about success metrics, with examples including:
number of problems found; number of major problems found relative to
usability testing; ability of developers to find problems in their
own software; and so on.  There are some good papers by Woolrych and
Cockton discussing the efficacy of inspection methods.

One of the criticisms of the original heuristics is that they are not
complete, and several additional sets of heuristics have been
proposed.  The originals, for example, don't really deal with
collaboration or with basic human factors problems like
foreground/background contrast.

The context for an inspection will vary with the user, the task, and
the environment.  For example, transparency may be good or bad
depending on context.  Guiding might always be good for a product
used once a year (a tax program) or once when you set up something
complex, but guiding is not always good.

Here are some weaknesses of heuristic evaluations that you might consider:

• Different evaluators often find different problems for the same
product. This “evaluator effect” (Jacobsen, Hertzum, & John, 1998) has
implications for deciding what changes should be made to a design.
What do you do if five evaluators each come up with quite different
sets of problems (Kotval, Coyle, Santos, Vaughn, & Iden, 2007)?
• The heuristic evaluation method is based on finding usability
“problems”, but there is debate about what constitutes a problem and
whether heuristic reviews are good at finding “real problems”.
• Heuristic reviews may not scale well for complex interfaces
(Slavkovic & Cross, 1999).  In complex interfaces, a small number of
evaluators may find only a small percentage of the problems in an
interface and may miss some serious problems.
• Evaluators may report problems at different levels of granularity.
For example, one evaluator may list a global problem of “bad error
messages” while another evaluator lists separate problems for each
error message encountered. The instructions and training for a
heuristic review should discuss what level of granularity is
appropriate. The facilitator of a heuristic evaluation will invariably
have to extract important high level issues from sets of specific
problems.
• Lack of clear rules for assigning severity judgments may yield major
differences; one evaluator says “minor” problem while others say
“moderate” or “serious” problem. In a recent study comparing the
results of usability testing with those of a heuristic review, there
were a few instances where some reviewers listed the same observation
as both a problem and a “positive design feature”.
• Some organizations find heuristic evaluation such a popular method
that they are reluctant to use other methods like usability testing or
participatory design.

> Begin each Heuristic with 'A design should be...'
>
> 1. Transparent
> At all times a person should understand where they are, what actions
> are available and how those actions can be performed. Information and
> objects should be made visible so that a person does not have to rely
> on memory from one screen to another
>
> Ask Yourself:
> •     Where am I?
> •     What are my options?

Transparency is a complex concept and is not always good.


>
> 2. Responsive
> Whenever appropriate, useful feedback should let a person know what
> is going on within a reasonable amount of time. If a person initiates
> an action, they should receive a clear response.
>

"Reasonable amount of time" depends on the context for the most part
and there is perceived versus real responsiveness.


> Ask Yourself:
> •     What is happening right now?
> •     Am I getting what I need?
> 5. Consistent
> A person should not have to wonder whether different words,
> situations, or actions mean the same thing. Additionally a person
> should not discover that similar words, situations or actions mean
> different things. Establish and maintain conventions.

Consistency is very complex.  What about consistency within different
parts of an app, consistency with other apps the user works with,
consistency with corporate style, consistency with the OS, etc.?  See
the classic article by Grudin on consistency for a good notion of
when it is consistent to be inconsistent.
Grudin, J. 1989. The case against user interface consistency. Commun.
ACM 32, 10 (Oct. 1989), 1164-1173. DOI=
http://doi.acm.org/10.1145/67933.67934

> 7. Guiding
> A person should feel capable of learning what is required to
> accomplish their goals. Help documentation and information should be
> easy to locate and search, focused on the task at hand and be only as
> long as necessary.

Guiding and flexibility can be contradictory: the guidance that helps
one person can feel like a loss of flexibility to another.  Guiding
can be good or very bad depending on user training and task
frequency.

Chauncey
________________________________________________________________
Welcome to the Interaction Design Association (IxDA)!