Andreas L. Delmelle wrote:
-----Original Message-----
From: Peter B. West [mailto:[EMAIL PROTECTED]


Thanks Andreas.  Yes, I disagree, but then, so does the spec.  What
information *should* do is not terribly relevant.  We need to work out
and express what information *must* do to get this thing working.



Voila! Without worrying too much about "textbook 'should'-stuff"... It is,
after all, *XSL*-FO, so I guess the basic compliance rule for FO processors
is also: 'the order of the processing events doesn't matter at all, as long
as the produced output is correct.'


[Me:]
... Would it really be worth a shot to change the design
there, and flip a switch? Throw the FOTree away and just fill up
the AT and re-use that?

Or process the minimal relevant parts of the FO and Area trees in parallel. That is what I am working towards. I believe that page-at-a-time layout with just-in-time processing of the FOs is possible. Given that, both FOs and Areas can be kept alive for as long as they are required. (I haven't given any thought to rendering.)
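Roughly the control flow I have in mind, as a sketch only (none of these types exist; the names are invented just to make the idea concrete):

// Sketch of the control flow only; the names are invented, not FOP code.
interface FoEventSource { boolean hasMoreContent(); Object nextFo(); }
interface Page { boolean isFull(); boolean isFullyResolved(); }
interface AreaBuilder { Page startPage(); void layout(Object fo, Page page); void defer(Page page); }
interface Renderer { void render(Page page); }

class PageAtATimeLayout {
    private final FoEventSource fos;
    private final AreaBuilder areas;
    private final Renderer renderer;

    PageAtATimeLayout(FoEventSource fos, AreaBuilder areas, Renderer renderer) {
        this.fos = fos;
        this.areas = areas;
        this.renderer = renderer;
    }

    void run() {
        while (fos.hasMoreContent()) {
            Page page = areas.startPage();
            // Just-in-time: pull the next FO only while the current page has room.
            while (!page.isFull() && fos.hasMoreContent()) {
                areas.layout(fos.nextFo(), page);
            }
            if (page.isFullyResolved()) {
                renderer.render(page);   // page and its FOs can be released at once
            } else {
                areas.defer(page);       // forward references: keep the page alive for now
            }
        }
    }
}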


Indeed. When I began thinking about 'borderline cases', it became clearer
that situations where you need to wait for more than one page to complete
the layout are rather exceptional anyway, so it's most likely the better
option to treat them as what they are: 'an exception to the general
processing flow' (IOW, not *all* theoretical possibilities implied by the
definitions in the spec should be covered by the *basic* processing
framework).


The borderline cases may be very much in the minority (and must be, judging by the degree of usage that FOP gets now), but they must be taken into account in the design of the solution. If we go for the low-hanging fruit and then discover that the ladder is too short, we end up back in the workshop.



Note that forward references are always going to be a problem, but that a
combination of weak/soft/phantom references and serialization should
keep memory requirements manageable.
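Something along these lines, purely as a sketch (the class is invented, and the real mechanism would need more care):

import java.io.*;
import java.lang.ref.SoftReference;

// Sketch only: a resolved subtree is reachable through a SoftReference,
// with a serialized copy on disk, so the GC may clear the in-memory copy
// under memory pressure and we can read it back if it is needed again.
class SwappableSubtree<T extends Serializable> {
    private SoftReference<T> ref;
    private final File backingFile;

    SwappableSubtree(T subtree, File backingFile) throws IOException {
        this.backingFile = backingFile;
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(backingFile))) {
            out.writeObject(subtree);              // serialize eagerly
        }
        this.ref = new SoftReference<>(subtree);
    }

    @SuppressWarnings("unchecked")
    synchronized T get() throws IOException, ClassNotFoundException {
        T subtree = ref.get();
        if (subtree == null) {                     // cleared by the GC: reload it
            try (ObjectInputStream in =
                     new ObjectInputStream(new FileInputStream(backingFile))) {
                subtree = (T) in.readObject();
            }
            ref = new SoftReference<>(subtree);
        }
        return subtree;
    }
}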



That's where your producer-consumer buffers step in, right?

So... by 'minimal relevant parts' you actually mean something like 'reduced
skeletons of subtrees' that in themselves provide a way to pull the events
generated by their descendants from the buffer, without actually holding a
reference to those descendants. So while the ancestor is being kept alive,
the descendants can be happily collected by our friend GC if no longer used
by another process, while the ancestor remains at our disposal to re-pull
the same events from the buffer if/when they are needed.

Is this close?
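Say, something like this (names invented, just to make the question concrete):

// The ancestor keeps only the positions of its descendants' events in the
// buffer, never the descendant objects themselves, so the GC can reclaim
// those while the ancestor can still re-pull the same events later.
interface EventBuffer { Object eventAt(long position); }
interface EventHandler { void handle(Object event); }

class SubtreeSkeleton {
    private final EventBuffer buffer;
    private final long firstEvent;   // offset of the first descendant event
    private final long lastEvent;    // offset of the last descendant event

    SubtreeSkeleton(EventBuffer buffer, long firstEvent, long lastEvent) {
        this.buffer = buffer;
        this.firstEvent = firstEvent;
        this.lastEvent = lastEvent;
    }

    // Replay the descendants' events on demand, straight from the buffer.
    void replay(EventHandler handler) {
        for (long pos = firstEvent; pos <= lastEvent; pos++) {
            handler.handle(buffer.eventAt(pos));
        }
    }
}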


I wasn't thinking of the buffers. The re-usability of the buffers is only relevant to static-content and markers, so I don't see that as a major issue, and I would expect to keep the static-content buffers for a page in memory for the life of the page-sequence. Markers are a bit more problematic, and may well benefit from some "transparent" serialization.
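By "transparent" I mean roughly the following sort of thing (a sketch, with invented names):

import java.io.*;

// The marker subtree is flattened to bytes as soon as it is captured,
// releasing the live object graph, and is rebuilt only when a
// retrieve-marker actually asks for it.
class SerializedMarker {
    private final byte[] snapshot;

    SerializedMarker(Serializable markerSubtree) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(markerSubtree);
        }
        this.snapshot = bytes.toByteArray();
    }

    // Callers never see the byte[]; they just get the subtree back.
    Object retrieve() throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(snapshot))) {
            return in.readObject();
        }
    }
}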


I was thinking more of the "historical" parts of the FO and Area trees. At one stage I considered linking all the nodes in the FO tree with reference objects, but I was concerned about the extra layer of reference objects and their impact on memory and performance. However, when we get the parallel FO tree, Area tree and rendering working, it may pay dividends.
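The extra layer I was worried about would look roughly like this (invented names, not existing code):

import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.List;

// The parent holds only SoftReferences to its children, together with
// whatever is needed to rebuild a child that the GC has cleared.
abstract class RefLinkedNode {
    private final List<SoftReference<RefLinkedNode>> children = new ArrayList<>();

    void addChild(RefLinkedNode child) {
        children.add(new SoftReference<>(child));
    }

    RefLinkedNode getChild(int index) {
        RefLinkedNode child = children.get(index).get();
        if (child == null) {
            child = rebuildChild(index);   // e.g. deserialize, or re-derive from the FO tree
            children.set(index, new SoftReference<>(child));
        }
        return child;
    }

    // How a cleared child is brought back is exactly the open question.
    protected abstract RefLinkedNode rebuildChild(int index);
}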

I find the language of the java.lang.ref package confusing, but I think that phantom references open the possibility of performing serialization "on demand", when the object is queued for GC.
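As far as I understand the mechanism, it would look something like the sketch below. All names are invented, and note that a phantom reference never hands back its referent, so the serializable state would have to be captured before the object becomes unreachable:

import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;

// Sketch of the ReferenceQueue mechanism.  Since PhantomReference.get()
// always returns null, whatever is to be written out has to be carried
// separately, here as a detached snapshot stored in the reference itself.
class SerializingRef extends PhantomReference<Object> {
    final byte[] snapshot;   // state captured while the referent was still reachable

    SerializingRef(Object referent, byte[] snapshot, ReferenceQueue<Object> queue) {
        super(referent, queue);
        this.snapshot = snapshot;
    }
}

class OnDemandSerializer implements Runnable {
    private final ReferenceQueue<Object> queue = new ReferenceQueue<>();

    ReferenceQueue<Object> getQueue() { return queue; }

    public void run() {
        try {
            while (true) {
                // Blocks until the GC enqueues a reference whose referent is gone.
                SerializingRef ref = (SerializingRef) queue.remove();
                writeToDisk(ref.snapshot);   // the "on demand" moment: the GC decides
                ref.clear();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void writeToDisk(byte[] snapshot) {
        // Left out: append the snapshot to a spool file and record its offset.
    }
}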

Peter
--
Peter B. West <http://www.powerup.com.au/~pbwest/resume.html>
