I would like to think aloud a little here, about my efforts to
try to follow the code.  This is my introduction to OO design
and programming, and (in spite of my tone sometimes) I do not
want to tread on anyone's toes here.

My comments are inspired by consideration of the issue of
cost-of-entry for new maintainers, and the sheer amount of
stumbling around that I have done up to now.  Mark's recent ease
of entry into the code may well give the lie to everything I say.

One of the things that has puzzled me is the reliance on an
object set which is largely determined by the set of FO nodes.
The XML parsing is a complete black box, which may have some
advantages, but is disconcerting for those, like me, who are
encountering XML parsing for the first time.  In any case,
the point is moot because of the move to serialization, which
should open that particular black box right up.

If and when that happens, I hope to be able to detect a flow of
control something like this:

   process root
    process layout-master-set
     while children
       process simple-page-master
        process region-body
        process region-before
        process region-after
        process region-start
        process region-end
       process page-sequence-master
    process fo-declarations
    process page-sequence
     until no more

with a parser feeding sets of element+attributes to a single
reader which can deliver elements of a known type on demand.
The whole process is then embedded within the "process root" method,
which requires an fo:root element, and terminates at the end of
that element.
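To make the idea concrete, here is a minimal sketch of that pull
style using the JDK's StAX API as a stand-in for whatever reader we
might actually build (the class name, the tiny sample document, and
the expectStart helper are all invented for illustration):

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;

public class FoPullDemo {
    public static void main(String[] args) throws Exception {
        // A toy FO document, just enough to drive the demo.
        String fo = "<fo:root xmlns:fo='http://www.w3.org/1999/XSL/Format'>"
                  + "<fo:layout-master-set>"
                  + "<fo:simple-page-master master-name='simple'/>"
                  + "</fo:layout-master-set>"
                  + "<fo:page-sequence master-reference='simple'/>"
                  + "</fo:root>";
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(fo));
        // "process root" requires an fo:root start tag...
        expectStart(r, "root");
        // ...and terminates at the end of that element.
        processRoot(r);
    }

    static void processRoot(XMLStreamReader r) throws XMLStreamException {
        while (r.hasNext()) {
            int ev = r.next();
            if (ev == XMLStreamConstants.START_ELEMENT) {
                // Element name plus attribute set are available here
                // on demand, via getLocalName()/getAttributeCount().
                System.out.println("process " + r.getLocalName());
            } else if (ev == XMLStreamConstants.END_ELEMENT
                       && r.getLocalName().equals("root")) {
                return; // end of fo:root ends the whole process
            }
        }
    }

    static void expectStart(XMLStreamReader r, String local)
            throws XMLStreamException {
        // Advance to the first start tag and check its name.
        while (r.hasNext()
               && r.next() != XMLStreamConstants.START_ELEMENT) { }
        if (!r.getLocalName().equals(local)) {
            throw new XMLStreamException("expected fo:" + local);
        }
    }
}
```

The point of the sketch is only the shape of the control flow: one
method owns the parse from fo:root to its end tag, and pulls typed
events as it needs them, rather than being called back piecemeal.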

As part of this I would like to see, if possible, a reduction
in the number of objects.  This harks back to what I was saying
about the current object set.  It seems to me that the FO nodes
are really just data, and that the algorithmic necessities are
much more minimal than the set of nodes and, heaven forbid,
properties.  Furthermore, as data, they are very simple and
consistent: simple and consistent enough to be expressed by XML
elements and attribute sets.
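By way of illustration, a single generic data node could stand in
for the whole class-per-FO hierarchy: just a name, an attribute map,
and children. This is a hypothetical sketch, not anything in the
current code, and every name in it is made up:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class FoNodeDemo {
    /** Hypothetical: one generic node type covers every FO element. */
    static final class FoNode {
        final String name;                    // e.g. "fo:region-body"
        final Map<String, String> attributes; // the raw attribute set
        final List<FoNode> children = new ArrayList<>();

        FoNode(String name, Map<String, String> attributes) {
            this.name = name;
            this.attributes = Map.copyOf(attributes); // plain data, immutable
        }
    }

    public static void main(String[] args) {
        FoNode body = new FoNode("fo:region-body",
                Map.of("margin", "1in"));
        FoNode master = new FoNode("fo:simple-page-master",
                Map.of("master-name", "simple"));
        master.children.add(body);
        System.out.println(master.name + ": "
                + master.attributes.get("master-name")
                + ", " + master.children.size() + " child");
    }
}
```

The algorithms then live elsewhere, operating over these nodes,
instead of being scattered through one class per element type.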

It seems to me from my very very limited experience that the
object model works very well for the encapsulation of data
and the provision of high level accessors for that data, as is
exemplified by the Collections of the JDK, but less well when
essentially algorithmic processes are forced into an artificial
object framework in which there is insufficient separation
between data and algorithms.

My impression of the situation at the moment is that the process
flow has been hacked up into pieces which are much too small,
and which militate against understanding.  If you understand a
program by following the control flow with a tracer or profiler,
you don't really understand it.  If you are *obliged* to do that,
there is something fundamentally wrong with the expression of
the algorithm(s).

I feel that Karen and Arved's proposals for the new model
are moving in this direction.  They are talking about
uber-processes which do not correspond to any particular FO object,
but which are primarily processing objects, i.e. hierarchical
collections of methods, in their own right.

If I am simply displaying my OO innocence here, please be gentle.

Peter B. West  [EMAIL PROTECTED]  http://powerup.com.au/~pbwest
"Lord, to whom shall we go?"
