In article <[EMAIL PROTECTED]>, Manuel Lemos <[EMAIL PROTECTED]> wrote:

> This is a very old discussion in the Amiga world.  Actually it started when
> Stefan Stuntz decided to design MUI to handle all the user input.  There is
> not much point in carrying on this discussion here, except for the fact that
> Java GUI based programs will all suffer from that apparent slowness when
> they are merely delayed.

This is not quite true. First of all, let me correct the misconception
that the thread on which the GUI code runs (input.device vs. application)
has any direct impact on the observed "speed" of the GUI. It does not.

The speed with which a GUI reacts to user input depends on three
factors: Does the code responsible for the reaction/refresh get the
CPU at all when triggered by the user input? How quickly does it get
the CPU, once the decision to give it CPU time has been made? And how
much CPU time is it allocated, once it does get the CPU?

The first point is only an issue for badly written applications that
don't keep a thread in their GUI main loop at all times. Those
applications do not appear to react to user input because they are
otherwise busy. That is obviously sloppy programming, and a violation
of the programming guidelines, so it is pointless to discuss it or to
use it as an argument for or against either method. Obviously it CAN
be done correctly, and many programs do so.
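
As a minimal sketch of what "keeping a thread in the GUI main loop"
means in practice (assuming a MUI application object "app" and MUI's
documented MUIM_Application_NewInput loop; handle_job() is a made-up
name for whatever short dispatching the application does):

    #include <dos/dos.h>
    #include <libraries/mui.h>
    #include <proto/exec.h>
    #include <clib/alib_protos.h>

    extern void handle_job(LONG id);  /* hypothetical: must return quickly */

    void event_loop(Object *app)
    {
        ULONG sigs = 0;

        for (;;)
        {
            /* Let MUI process pending input; returns a registered ID. */
            LONG id = (LONG)DoMethod(app, MUIM_Application_NewInput, &sigs);

            if (id == MUIV_Application_ReturnID_Quit)
                break;

            if (id)
                handle_job(id);  /* anything lengthy goes elsewhere */

            if (sigs)
            {
                sigs = Wait(sigs | SIGBREAKF_CTRL_C);
                if (sigs & SIGBREAKF_CTRL_C)
                    break;
            }
        }
    }

The point is that the loop always returns to Wait() quickly, so the
task sitting in it is always ready to react to the next event.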

The second point is affected by the task switching delay, which is
negligible in AmigaOS, unlike in Unix. It does not affect GUI reaction
time in any measurable or noticeable way.

The third point is once again a point where bad vs. good programming
practice comes in. Obviously with programs using BOOPSI/gadgetclass
a lot of work is done on input.device, which is Very Bad from a design
point of view, see below, but appears fast to the user. GUI engines like
MUI simply need to make sure that their GUI event loop runs at a
priority higher than zero for GUIs to "feel fast". That's easy enough
to do, and ensures that the GUI gets "a lot of" CPU time when needed,
instead of competing with application code.
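
A minimal sketch of the priority trick, assuming the event loop runs
in its own task: exec's SetTaskPri() does all the work, and a value of
1 is already enough (higher values risk starving the rest of the
system):

    #include <proto/exec.h>

    void raise_gui_priority(void)
    {
        /* FindTask(NULL) returns the task making the call, i.e. the
           one running the GUI event loop. */
        SetTaskPri(FindTask(NULL), 1);
    }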

Bottom line: with everything else (GUI objects etc.) being equal, a
single-threaded application-driven GUI engine can be just as quick and
responsive as one driven by input.device, if the application is written
properly.

Actually, single-threaded GUI engines have one inherent performance
*advantage* over multi-threaded GUI engines: they don't have the
additional overhead of managing threads (resource locking of RastPorts
and the like, initialization/setup for rendering, etc.). They can cache
far more data than multi-threaded engines, which have to worry about
being preempted.
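
Purely as an illustration (the structures are made up, not taken from
any real engine): a multi-threaded engine has to bracket shared
rendering state with a semaphore and re-validate it on every access,
while a single-threaded engine can simply keep it cached between
events:

    #include <exec/semaphores.h>
    #include <graphics/rastport.h>
    #include <proto/exec.h>

    struct RenderState
    {
        struct SignalSemaphore lock;  /* needed in the MT case only */
        struct RastPort *rp;          /* cached, fully set-up RastPort */
    };

    /* Multi-threaded engine: lock, re-validate, draw, unlock. */
    void mt_render(struct RenderState *rs)
    {
        ObtainSemaphore(&rs->lock);
        /* cached pens/fonts may have been changed by another thread,
           so they have to be checked and restored before drawing */
        ReleaseSemaphore(&rs->lock);
    }

    /* Single-threaded engine: the cached state is always valid. */
    void st_render(struct RenderState *rs)
    {
        /* draw directly with rs->rp; pens and fonts are still set
           from the previous event, no locking, no re-validation */
    }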


As to running gadgets on input.device being a Bad Thing: there are
plenty of reasons for this. Just to name a few:

- It is not scalable. Once you start piling up gadgets upon gadgets
  and interconnecting them at the event level, your computer stops
  reacting to input because it is too busy reacting to input :-). You
  effectively lose the advantage of having different priorities in the
  first place: raising the priority of GUI-triggered events, like
  scrolling, means you don't get preempted for even more important
  things, like mouse movement, making the computer feel extremely
  sluggish. See the "mouse pointer locks up when scrolling large pages"
  problem with programs like Grapevine as an example. ClassAct tries to
  "solve" this problem by off-loading some work to the application
  (coming closer to MUI after all), but this not only breaks OO
  information hiding principles and makes it impossible to cleanly
  subclass some types of gadgets, it also causes resource locking
  conflicts.

- It runs third-party (class method) code on input.device, which just
  happens to be the most critical task in the system. A bug in any
  class and the whole system dies. With application-driven systems
  like MUI only a single application dies in situations like that.

- For reasons deeply rooted in computer science principles, the two
  concepts of "object oriented programming" and "multithreaded
  resource locking" are mutually exclusive. They are fundamentally
  incompatible with each other and cannot be combined within a single
  programming environment. (The reason, in a nutshell: one of them
  requires perfect information hiding, the other one complete
  implementation transparency. You can never have both. I don't want
  to go into more detail unless someone really wants to know).
  If you try anyway then you get a system which either deadlocks or
  crashes unpredictably, or which is so limited in scope that it cannot
  be extended arbitrarily by programmers, making it useless as a
  general OO runtime environment. BOOPSI/gadgetclass suffers exactly
  from that problem, on several levels. Most application programmers
  only know about the ObtainGIRPort() problem (see the sketch after
  this list), which is a side effect of what I described above and
  cannot be fixed. However, the problems run far deeper. Why do you
  think whole applications are written in MUI, using the MUI OO
  environment, with lots of complex and interchangeable classes
  (editors etc.), whereas you never see anything like that for
  ClassAct or even bare BOOPSI/gadgetclass?
  It is not a coincidence, but caused by the fact that MUI, being a
  properly designed single-threaded system, is fully scalable,
  deadlock-free, and stable. ClassAct and BOOPSI/gadgetclass attempt
  to marry the concepts of OO and multi-threaded resource locking and,
  inevitably, run into brick walls and fail, as soon as the complexity
  of classes and class interactions exceeds certain thresholds. For the
  same reason you hardly ever see any "replacement classes" for existing
  BOOPSI/gadgetclass or ClassAct classes. Both systems break OO
  information hiding principles and make it impossible to cleanly and
  reliably replace (and sometimes even subclass) existing classes.
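
To illustrate the ObtainGIRPort() problem mentioned above, here is the
usual idiom, sketched from the standard BOOPSI dispatcher layout
(my_set() is a made-up name): any rendering outside GM_RENDER has to
go through ObtainGIRPort(), and that call may legitimately fail,
precisely because some other task may hold the rendering resources.
The locking leaks into every single class:

    #include <intuition/classes.h>
    #include <intuition/cghooks.h>
    #include <intuition/gadgetclass.h>
    #include <proto/intuition.h>
    #include <clib/alib_protos.h>

    static ULONG my_set(Class *cl, struct Gadget *g, struct opSet *msg)
    {
        ULONG result = DoSuperMethodA(cl, (Object *)g, (Msg)msg);

        if (result && msg->ops_GInfo)
        {
            /* Ask Intuition for a RastPort we are allowed to use. */
            struct RastPort *rp = ObtainGIRPort(msg->ops_GInfo);

            if (rp)  /* may fail, and the class can do nothing about it */
            {
                DoMethod((Object *)g, GM_RENDER, msg->ops_GInfo, rp,
                         GREDRAW_UPDATE);
                ReleaseGIRPort(rp);
            }
        }
        return result;
    }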

Of course there are LOTS of other problems in the BOOPSI/gadgetclass
design, including the OO encapsulation violation introduced by the
separate SetGadgetAttrs() function (see the sketch below), an arguably
broken notification system (brokers, using "push notification" instead
of the superior "pull notification" in MUI), lifetime and parameter
issues, lack of some important "class methods", etc., but they all
pale in comparison to the huge design flaw at the center of the whole
system, explained above.
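
The SetGadgetAttrs() problem in a nutshell (sketched with made-up
function and variable names; PGA_Top and MUIA_Prop_First are the real
attribute names of the respective prop/scroller classes): with
gadgetclass the caller has to supply the rendering context, with MUI
the object carries it itself:

    #include <intuition/intuition.h>
    #include <intuition/gadgetclass.h>
    #include <libraries/mui.h>
    #include <proto/intuition.h>

    /* BOOPSI/gadgetclass: the window leaks into the caller's code. */
    void scroll_boopsi(struct Gadget *gad, struct Window *win, LONG top)
    {
        SetGadgetAttrs(gad, win, NULL, PGA_Top, top, TAG_DONE);
    }

    /* MUI: the object knows how to refresh itself. */
    void scroll_mui(Object *obj, LONG top)
    {
        set(obj, MUIA_Prop_First, top);
    }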

I don't know if Stefan Stuntz actually realized the fundamental
theoretical flaw in BOOPSI/gadgetclass, or if he only saw the smaller
problems, or just did not like BOOPSI/gadgetclass. In any case, he
did the Right Thing in making MUI single-threaded and throwing BOOPSI
notification and gadgetclass out. That was the only way to design a
system as complex as MUI and keep it stable. Most other successful GUI
OO systems (e.g. Motif) do similar things. Those that try to be
multi-threaded or distributed are usually so "experimental" as to be
unusable. The only flaw I can see with the MUI model is that Stefan did
not stress these points enough in his developer docs, and did not
encourage developers sufficiently to create a proper run-time
environment in their applications (a dedicated GUI event process etc.).
Better documentation and some template functions, along the lines of
the sketch below, would have gone a long way here...
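
Something as simple as this would have done (a sketch, not from MUI's
documentation: gui_process_entry() is a made-up entry point that would
contain the event loop shown earlier):

    #include <dos/dostags.h>
    #include <proto/dos.h>

    extern void gui_process_entry(void);  /* hypothetical event loop */

    struct Process *spawn_gui_process(void)
    {
        /* Dedicated GUI event process, slightly above priority 0
           as discussed above. */
        return CreateNewProcTags(
            NP_Entry,     (ULONG)gui_process_entry,
            NP_Name,      (ULONG)"GUI events",
            NP_Priority,  1,
            NP_StackSize, 8192,
            TAG_DONE);
    }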


Regarding the Java GUI OO model: you are wrong here as well. The Java
GUI OO model does use multiple threads, but differently from
input.device. AWT handles events further away from the application,
like BOOPSI/gadgetclass, which is one of the reasons for its poor
stability and performance. Swing was designed to move event handling
closer to the application, similar to MUI. Whether you see any
"apparent slowness" depends only on relative thread priorities in
Java, not on the event scheduling model.

-- 
Holger Kruse   [EMAIL PROTECTED]
               http://www.nordicglobal.com
               NO COMMERCIAL SOLICITATION !
