That is a rather ideal situation. It requires interleaving not only of
page and line breaking, but also of page breaking and the collection of
Knuth elements, which in turn requires some communication. The
collection of Knuth elements, LM.getNextKnuthElements, is deeply
recursive. Each LM now needs to pass its Knuth elements to the page
breaker or to the lowest line breaker in the hierarchy. That can be
accommodated by passing the receiving object as an argument to
LM.getNextKnuthElements. Especially for the LMs in block mode (block
LMs and line LMs) it is important that they communicate their elements
early, to allow the page breaker to interrupt the process and proceed
with the breaking calculations. In restricted block mode and in inline
mode, i.e. mainly for InlineLMs, that is not important, because the
LineLM will complete each paragraph before communicating it up.
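A minimal sketch of what passing the receiving object down the LM hierarchy might look like (the interface and class names here are illustrative, not FOP's actual API; elements are reduced to strings):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the breaker is handed down the LM hierarchy so
// each LayoutManager can pass on its Knuth elements as soon as they
// are produced, instead of returning one huge list at the end.
interface ElementReceiver {
    // Called by an LM each time a batch of elements is ready.
    void receiveElements(List<String> elements);
}

class PageBreaker implements ElementReceiver {
    final List<String> collected = new ArrayList<>();

    @Override
    public void receiveElements(List<String> elements) {
        collected.addAll(elements);
        // At this point the page breaker could interrupt the collection
        // and run its breaking calculations over 'collected'.
    }
}

class BlockLayoutManager {
    private final List<String> content;

    BlockLayoutManager(List<String> content) { this.content = content; }

    // The receiver is an argument, so elements are communicated early
    // rather than accumulated for the whole page sequence.
    void getNextKnuthElements(ElementReceiver receiver) {
        for (String block : content) {
            receiver.receiveElements(List.of("box:" + block, "penalty"));
        }
    }
}
```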

This change is only meaningful for a best-fit strategy over one or a
few pages. For the total-fit strategy it adds complexity but no gain
in memory efficiency, because that strategy cannot make a decision
until the whole page sequence has been processed.
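The memory argument can be made concrete with a toy best-fit breaker (illustrative only; line heights are plain integers and all names are hypothetical): it commits to the first page break without ever looking past it, so everything before the break can be discarded. A total-fit breaker, by contrast, must retain all feasible break nodes until the end of the sequence.

```java
import java.util.List;

// Toy best-fit breaker: decides the first page break as soon as the
// accumulated height overflows the page, never inspecting later lines.
class BestFitBreaker {
    private final int pageHeight;

    BestFitBreaker(int pageHeight) { this.pageHeight = pageHeight; }

    // Returns the index of the first line that no longer fits on the
    // page; lines before it can be laid out and released immediately.
    int firstBreak(List<Integer> lineHeights) {
        int used = 0;
        for (int i = 0; i < lineHeights.size(); i++) {
            if (used + lineHeights.get(i) > pageHeight) return i;
            used += lineHeights.get(i);
        }
        return lineHeights.size(); // everything fits on one page
    }
}
```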


On Tue, Oct 02, 2007 at 11:31:12PM +0200, Andreas L Delmelle wrote:
> Just thought of it this way:
> Instead of collecting all the ListElements for the whole page-sequence in 
> one massive recursive iteration as is the case now 
> (getNextKnuthElements()), maybe the algorithm can be 'slightly' altered in 
> such a way that the FlowLM repeatedly checks back with the PageSequenceLM 
> and updates the LayoutContext for the active page.
> Not: collect *all* lines/paragraphs first, and only then *all* pages (may 
> be "total-fit", I'm not sure I would call it that...).
> But rather, an incremental total-fit:
> while (moreContent) {
>   collect more lines
>   if (accumulated line-height causes an implicit but unavoidable 
> page-break) {
>     run page-breaking algorithm over the accumulated lines
>   }
> }
> Obviously, the if-test is only a very rough estimation, but a good one, 
> since it guarantees that the sequence always generates at least one 
> page-break (no space-resolution, footnotes, floats taken into account here 
> yet)
> That would provide our 'interference' point, where decisions can be made 
> about whether to continue accumulating layout-possibilities or interrupt, 
> start adding areas based upon the best possibility so far, and resume, but 
> with a cleared state. The head of the list of lines/pages will be chopped 
> off, and their possibilities need no longer be considered.
> If I interpret correctly, the node corresponding to the 'best' overall 
> break for the first line/page (the one chosen by a total-fit 
> implementation), in many cases can be determined quite early in the 
> process. You don't always need to look at all words/lines in the 
> paragraph/page-sequence for that.
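The incremental loop quoted above might look roughly like this in Java (a sketch under the same simplifications as the original pseudocode: no space resolution, footnotes, or floats; lines are reduced to their heights, and the page-breaking step is stubbed out):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the incremental total-fit loop: collect lines
// until an unavoidable page break, break, chop off the head of the
// list, and resume with a cleared state.
class IncrementalBreaker {
    private final int pageHeight;

    IncrementalBreaker(int pageHeight) { this.pageHeight = pageHeight; }

    List<List<Integer>> layout(List<Integer> lineHeights) {
        List<List<Integer>> pages = new ArrayList<>();
        List<Integer> accumulated = new ArrayList<>();
        int height = 0;
        for (int line : lineHeights) {       // while (moreContent): collect more lines
            accumulated.add(line);
            height += line;
            if (height > pageHeight) {       // implicit but unavoidable page break
                // run the page-breaking algorithm over the accumulated
                // lines; stubbed here as: all but the last line fit
                pages.add(new ArrayList<>(accumulated.subList(0, accumulated.size() - 1)));
                // chop off the head; its possibilities need no longer
                // be considered
                accumulated = new ArrayList<>(List.of(line));
                height = line;               // resume with a cleared state
            }
        }
        if (!accumulated.isEmpty()) pages.add(accumulated); // final partial page
        return pages;
    }
}
```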

Simon Pepping