Hi Christoph,
On Sun, Jun 10, 2012 at 11:34 PM, Christoph Noack christ...@dogmatux.com wrote:
Hi Mirek, all!
Thanks for your quick response! It's already a bit late, but I'd like to
answer now - tomorrow, I suppose, my day job will eat up all the given
time ;-)
Before I start: The more often I read your mail, the more I'm convinced
that some of the potential misunderstandings are caused by differences
in terminology (read: same terms mean different things to us) and
procedure with regard to HMI development. So please allow me to add a bit more of my point of view ...
On Sunday, 10.06.2012, at 19:53 +0200, Mirek M. wrote:
Hi Christoph,
On Sun, Jun 10, 2012 at 2:38 PM, Christoph Noack christ...@dogmatux.com
wrote:
Hi Björn, hi Mirek!
I had to make up my mind concerning this thread and also the article
that was originally referred to. So here is what I'm thinking about ...
On Wednesday, 06.06.2012, at 20:45 +0200, Björn Balazs wrote:
On Wednesday, 6 June 2012, at 19:46:09, Mirek M. wrote:
[...]
Developers encountering these keywords likely won't have any additional interface design training, so it is important that each heuristic is very clearly defined with specific examples and detailed explanations. Additionally, allowing developers to view all of the bugs in the software marked as the same type of issue, both current and resolved, serves as an effective way for them to further learn about the heuristic.
Therefore, I understand these principles as guidelines for developers to become aware of UX and perhaps learn a tiny bit. In contrast, I understand something like the design ethos as rules for us - experienced designers and UX professionals. So, I think the suggested rules are good for teaching developers, but I think this is not what we want to do - is it?
I understand it the same way - and I found another thing a bit strange.
The article is called "Quantifying Usability", although it deals with heuristic evaluations. The aim of those evaluations is usually to detect interaction design issues - but not to let users rate / quantify those issues (to obtain statistically relevant information). So, where is the quantification?
In the given case, interaction experts (not users) tag the issues using their level of experience and (domain) knowledge. So finally, you can generate a nice statistic of the known issues within your system - maybe that also helps the project to address the most important issues (here: those with the highest count) in advance.
But that doesn't settle what it really means if a dialog violates e.g. ux-minimalism - you need to know the users' characteristics and their tasks. So for a complex product like LibreOffice (assuming that it's okay that it supports a variety of tasks), some users may find a dialog overwhelming whilst other users may miss lots of information. The question is: which main target group will make use of this dialog ...
The minimalism principle states that interfaces should be as simple as possible, where "simple" means "not complicated", not "as featureless as possible".
That sounds great, indeed. But when designing products, one is usually faced with the problem that it's impossible to add (meaningful) features without increasing the complexity of the product. Although one user group wants these features (because they boost their efficiency), other users might find the resulting user interface not so simple.
So, as Bjoern already pointed out, balancing what's simple and what is
not featureless requires a deep understanding of our users' needs. And
these needs vary a lot ... depending on their knowledge and their tasks.
I've documented a related issue some years ago (Myths about UX):
http://wiki.services.openoffice.org/wiki/User_Experience/Myths_about_UX#Advanced_functionality_doesn.27t_hurt_-_newcomers_just_won.27t_use_it.21
As an example, compare Firefox's separate search box and address bar with Chrome's omnibox. In Firefox, you can search using both the address bar and the search box, which is unnecessary redundancy. In this case, Chrome is more minimalistic, yet it doesn't skimp on any features found in Firefox.
It does sound like Chrome is superior to Firefox, right?
But how do we know that the Chrome decision is the right one? Maybe ...
* Maybe the majority of people expect to have a separate search field - as in other programs, too (e.g. Adobe Acrobat).
User expectations should be covered by ux-affordance and ux-discovery (relevant visual cues), ux-visual-hierarchy (visual weight), ux-natural-mapping, and, if it doesn't hurt the usability of the software, ux-consistency.
* Or user tests showed that people are unable to discover the
search functionality - so they always enter www.google.com and
then start