On 10 Sep 2004 11:53:17 +0200, Sven Neumann <[EMAIL PROTECTED]> wrote:
> Hi,
> Nathan Summers <[EMAIL PROTECTED]> writes:
> > Consistency is a good thing, of course, and in all the other places
> > where units are used, this is a very nice behavior to have.  But the
> > new image dialog is different in that there really are no
> > pre-existing sizes or units, really.  You are entering new ones from
> > scratch.
> I am sorry but I have to disagree with you here. You are seldom
> entering new dimensions from scratch in this dialog. What you do is
> you start with the last values or the ones from the image you opened
> the dialog from. Or you are using a template. So what you usually do
> is to accept or modify the values that are present already. I don't
> think that entering new dimensions completely from scratch is a common
> use case.

It's true that you can get sizes from the last image or templates, but
obviously you don't use the GIMP for the same tasks that I do. :)  I
frequently enter a new size from scratch.  Apparently the reporter
does as well.  It's certainly not an unusual thing to do.
> > It's not absurd to think of a case where having unit conversion in
> > the dialog box is useful, but most of the time it's not a desirable
> > behavior.  Actually, that's being polite.  In reality, it's the kind
> > of frustrating, annoying thing that I make fun of the stupidity of
> the developers when I run across in proprietary code.
> Since you admit that it is useful, I don't understand why you don't
> want to learn that you need to select the unit first. Seems like you
> behave like the kind of frustrating, annoying users who aren't willing
> to learn.

The current behavior is wrong for the exact same reason that it is
wrong to put a handle on a door that is opened by pushing.  Even if
you mark PUSH on the door in big letters, and it's a door you use
frequently, you will still occasionally find yourself straining to
pull the door open, simply because even though the door is documented
properly, it's afforded incorrectly.  It's not a matter of learning! 
It's a matter of the person who put the handle on in the first place
being ignorant of human nature. The brain just is not very good at
internalizing the rule "You pull doors which have handles except for
this ONE SPECIAL DOOR which in defiance of both convention and
practical physical considerations has a pull handle even though it can
only be opened by pushing."  It tends to spontaneously forget it,
especially when you are paying attention to higher-level tasks,
instead of focusing on the menial, uninteresting, and usually trivial
task of door-opening.

In terms of hardware, routine low-level physical tasks are generally
handled by the cerebellum and brain stem.  These devices are capable
of rather amazing feats of systems control, but are limited to
relatively simple programming.  They are quite simply incapable of
learning rules with ONE SPECIAL DOOR exceptions.  This is a hardware
limitation.  This restriction cannot be eliminated through practice,
training, or having a good attitude.

On top of this basic, almost fixed-function hardware, we have the
cerebrum and neocortex, which are capable of learning more numerous
and more complicated rules such as "for this ONE SPECIAL DOOR, you
need to push instead of pull."  When you think of the amount of signal
processing and storage this involves, this is really quite
extraordinary, and yet we take it for granted.

In this case, humans are able to open a misafforded door because the
cerebrum overrides the cerebellum's natural behavior.  When the
intentionality of the individual is such that a door opening is
necessary (the cerebellum is smart enough to deduce this without input
from the cerebrum) and it perceives that the door affords pulling,
the cerebellum decides to run the door-pulling program.  However, in
this case, it is overridden and given instead the instruction to run
the push program.

It should be noted that the cerebrum is incapable of meaningful
systems control without the use of the cerebellum.  While it is true
that it can do systems control directly, it cannot come close to the
speed of the more specialized hardware.  Without the cerebellum, the
cerebrum is incapable of processing incoming signals and sending
outgoing ones fast enough for even the most basic physical tasks, such
as walking.

The inherent problem with the misafforded door is that in order for
someone to be able to go through it, that individual's cerebrum must
override the cerebellum every time.  If, for some reason, it does not,
such as if the person has the audacity to be thinking of something
other than door-traversal, the cerebellum will run the program it
thinks is most appropriate, and the individual attempts to pull the
push door.  When the cerebellum notes that it is unsuccessful at its
attempts to open the door, it triggers an interrupt, notifying the
cerebrum that there is a problem that needs to be solved.  This
inevitably irritates the cerebrum, because it does not like to be
interrupted, and because it is not very good at context switches due
to a relatively small amount of short-term random-access memory.  The
cerebrum processes the information and makes a decision on how to
proceed (this usually takes about 1-2 seconds of real time, but is
usually perceived by the individual as much less) and tells the
cerebellum to run the push program.

The real issue here is that the cerebrum shouldn't have to bother
with details like the mechanics of door opening.  It should be able to
concentrate on the high-level tasks that it is good at, instead of
being distracted by insignificant low-level details.  The only reason
that it cannot is because of poor engineering by the door designer.

It should be obvious to anyone with machine learning experience that
it is for good reason that ONE SPECIAL DOOR rules are especially
difficult to learn -- the ONE SPECIAL DOORs in training datasets tend
to be noise, and are properly filtered out, since they have no
generalizing power.  In fact, internalizing that rule would be
overfitting, which is the opposite of learning.
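The noise-filtering point above can be made concrete with a toy sketch (hypothetical data, my own naming, not from any real dataset): a learner that memorizes every example keeps the ONE SPECIAL DOOR, while a learner that keeps only the majority rule per feature filters it out as noise.

```python
from collections import Counter

# Toy training set: 100 doors, each described by a feature ("handle")
# and labeled with the action that opens it.  Every handled door is a
# "pull" -- except one special door.
doors = [("handle", "pull")] * 99 + [("handle", "push")]

# A memorizing learner stores every example by identity, exception
# included -- it overfits, encoding the noise as a rule.
memorized = {i: action for i, (feature, action) in enumerate(doors)}

# A generalizing learner keeps only the majority action per feature,
# so the single exception is filtered out as noise.
votes = Counter(action for feature, action in doors if feature == "handle")
rule = votes.most_common(1)[0][0]

print(rule)           # the learned rule for handled doors
print(memorized[99])  # the memorizer still carries the exception
```

The majority-vote learner ends up with the rule "pull"; only the memorizer retains the one "push" example, which is exactly the overfitting the paragraph describes.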

This is the reason why push doors with handles are annoying to most
people.  Perhaps you don't get annoyed by them.  Perhaps you have
never accidentally pulled a misafforded door even after realizing that,
despite all appearances to the contrary, it was a push door.  You are
in a very, very small minority if that is the case.

To apply the analogy to the issue at hand, humans tend to enter data
in reading order.  They just do.  They did it before the invention of
the GUI.  They did it before the invention of the computer.  They do
it on paper just like they do it on the screen.  There are good
psychological and practical physical reasons why this is the case. 
Furthermore, convention dictates that forms be easily fillable in
reading order.  Even before the idea of rasterization popped into the
head of Philo Farnsworth, it was considered a misdesign for the
contents of an entry to be affected by an entry that follows it in
reading order.  A good form has its dependencies topologically sorted,
because it's frustrating and counterproductive to have to go back to
a previous entry and check that it is still correct in light of the
way a following entry affects it.

Computerized form designers have tended to follow this aspect of the
centuries-old accumulation of form design wisdom with remarkable
consistency -- honestly, even racking my brain, I can't think of
another application that breaks with it.  Even with GIMP, this is the
only dialog I know of that does.

So what it comes down to is that here we have the rule that you can
enter things into any form you run across in reading order except for
this ONE SPECIAL DIALOG, where if you enter things in reading order
the program goes and changes previously entered entries.  Even though
you probably remember that behavior after it bites you the first time,
human nature is such that inevitably you will be surprised by it again
-- or in other words, interrupted.  This is because while your
cerebrum should be handling more important, high-level tasks like
deciding what the optimal size would be, it instead has to be
distracted by the manual mechanics of how to enter the size in this
ONE SPECIAL DIALOG, when for every other dialog it can mostly relegate
that low-level task to the cerebellum.  It is impossible to "learn" to
do otherwise.  Perhaps it may someday be possible with intricate brain
surgery or computer implants.
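To make the two behaviors under discussion explicit, here is a minimal sketch (my own function names and an assumed 72 ppi resolution, not GIMP's actual code) of what happens to a size entry when the user picks a new unit:

```python
PIXELS_PER_INCH = 72.0  # assumed resolution for this example

def change_unit_converting(value_px, new_unit):
    """The disputed behavior: changing the unit converts the number
    already in the entry, silently rewriting what the user typed."""
    return value_px / PIXELS_PER_INCH if new_unit == "inch" else value_px

def change_unit_keeping(value, new_unit):
    """The reading-order behavior: the typed number is kept as-is and
    simply reinterpreted in the newly selected unit."""
    return value

# The user types 300, then selects "inch":
print(change_unit_converting(300, "inch"))  # the entry changed under them
print(change_unit_keeping(300, "inch"))     # the entry is untouched
```

With the first behavior, a user who typed 300 intending inches now has roughly 4.17 in the entry and must go back and retype it; with the second, entering values in reading order just works.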

This is furthermore complicated by the fact that you are used to
letting your cerebellum handle the manual aspects of data entry, and
so it is not on guard for exceptions.

When you consider the amount of work it takes to change GIMP so that
GIMP nature matches human nature, versus the amount of work it takes
to change all of our human users so that human nature matches GIMP
nature, it seems pretty arrogant to conclude that the right answer is
to make human nature match GIMP nature.

There is a reason why user interfaces are called user interfaces --
they are just like adaptors between different electronic devices.
Just as two devices must agree on a common protocol in order to use an
adaptor, and do whatever conversions that protocol requires, a usable
program has to interface with the way the human user processes data
and signals its intentions.  It has to convert between the way that
things are represented internally and the way that humans represent
them.

Making ONE SPECIAL DIALOG that works contrary to the way that humans
naturally process information differs from displaying the dialog using
ultraviolet light or requiring all information to be entered using a
Morse key in reversed Manchester encoding only by degree.  All three
are examples of interface mismatches.   With the right hardware and
APIs, both of those extreme examples would be just as easy for the
program as using regular monitors and keyboards, but that doesn't mean
that people would be lazy if they were unable to work with such a
program as efficiently. Indeed, no amount of learning would make it
possible for them to use it as efficiently as the more human-centric
program.

There is a reason why people tend to use systems designed more around
people.  I could sit here all day and argue about why the binary
system is better -- the multiplication table has only four entries,
you only need to learn two symbols, etc., but it won't change the fact
that the human language processing system is simply not that good at
parsing long two-symbol streams.  The chances of the appropriate
regions of the brain adapting to be able to easily understand what it
means when a car has 111010100110101.0110 on the odometer in my
lifetime are about as good as those of the Earth's orbit deciding to go
along with the French Revolution and change the number of days in a
year to a convenient multiple of ten, and both of these probabilities
are so low for almost exactly the same reason.
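For the record, the odometer reading above can be decoded mechanically, which is rather the point -- machines are good at this and human language processing is not. A quick check in Python:

```python
# Decode the binary numeral 111010100110101.0110 into decimal.
integer_part = int("111010100110101", 2)   # weighted sum of powers of 2
fraction_part = int("0110", 2) / 2 ** 4    # 4 fractional bits
print(integer_part + fraction_part)
```

That works out to 30005.375 -- trivial for the machine, and exactly the kind of conversion a human-centric interface should be doing on the human's behalf.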

Usability is not really about enabling the user to be lazy about
learning things, even though programmers often misperceive it as such.
True usability is about enabling the user to really use the program
efficiently.  By relegating as much as possible to the cerebellum, and
by minimizing distracting interruptions from the cerebellum, users can
use the program faster because the cerebellum is more efficient at
manual tasks, and because optimal parallelism between the two parts of
the brain is achieved.  Furthermore, since the amount of high-speed
random-access memory available to the cerebrum is extremely limited,
and unlike computers, cannot be swapped out to long-term storage due
to hardware constraints, good usability enables the user to actually
perform more sophisticated tasks, since vital memory and processing
power is not wasted by low-level tasks and can therefore be used
solely for the high-level task at hand.

Remember the door with the wrong handle.  Misunderstanding the
fundamental reason for usability leads almost inevitably to poor
design.

>  It is certainly a desirable goal to make things intuitive
> but I don't think that useful features should be removed for the sake
> of intuitivity.

I haven't suggested that we remove features.  I don't think that
anyone has suggested that we remove features in the general case.  No
one wants to remove features.  Maybe I should say that again, because
I've said it several times before and you still seem to think that I
want features removed.  I don't want features removed.  No features
need to be removed. It's very rare that removing features is the way
to solve usability problems, although sometimes people chicken out
that way.  Really, you don't have to remove features.  I've counted
three suggestions on how to solve this problem without removing
features.  Two of them were present before you sent the email I'm
replying to. There seems to be considerable evidence that no one wants
to remove features.

> I also very much doubt that users will find it
> intuitive if the size entry in the New Image dialog doesn't behave
> like other size entries.

This is only the case if the size entry looks like, or in other words
affords the same as, the other size entries.


Note: Brain fans may notice that I've left out a lot of details in the
analysis above.  Most importantly, I left out details about how the
rest of the hindbrain does a lot of signal processing work
for the cerebellum in higher order animals.  These kinds of details
just complicate things while doing nothing for the final analysis.
Gimp-developer mailing list
