On Aug 24, 2006, at 21:01, Simon Pepping wrote:

On Thu, Aug 24, 2006 at 12:40:25PM +0200, Andreas L Delmelle wrote:
This could actually be done in a single thread, chopping the process
up into discrete parts, and bouncing control from (a) to (b) to (c)
and back.
Implementing this with multiple threads would be much trickier, since
the access to all the lists needs to be synchronized.
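To make the single-thread idea concrete, here is a hypothetical sketch (not FOP code, names are made up): the stages (a), (b) and (c) are plain method calls driven by one loop, so control bounces between them on a single thread and the shared lists need no synchronization at all.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: three stages run as steps of one loop on a
// single thread. Only that thread ever touches the lists, so no
// synchronization is required.
class SingleThreadPipeline {
    final List<String> elements = new ArrayList<>(); // (a) output
    final List<String> lines = new ArrayList<>();    // (b) output
    final List<String> pages = new ArrayList<>();    // (c) output

    void run(List<String> content) {
        for (String chunk : content) {
            collect(chunk);   // (a)
            breakLines();     // (b) -- control bounces back afterwards
        }
        breakPages();         // (c)
    }

    void collect(String chunk) { elements.add(chunk); }

    void breakLines() {
        // toy rule: every collected element becomes one line
        while (lines.size() < elements.size()) {
            lines.add(elements.get(lines.size()));
        }
    }

    void breakPages() {
        // toy rule: two lines per page
        for (int i = 0; i < lines.size(); i += 2) {
            pages.add(String.join(" / ",
                lines.subList(i, Math.min(i + 2, lines.size()))));
        }
    }
}
```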

An interesting side effect of the total-fit solution to page breaking
is that no interaction is required between line and page breaking. The
two are independent and one is performed before the other. That
changes when the line layout depends on the page layout, e.g. due to
different page widths or due to side floats. This makes the
programming more complex.

Indeed it would, but it would not necessarily increase the computational complexity...


Rough sketch:
Page-breaking initializes first, and prefetches, say, five blank pages.
From these it constructs one long context (call it one big page, or better: one region-body) with ipd changes at a known set of coordinates.

This context is then passed to the FlowLM, and further down. If the LineLM is aware of its bp-coordinate, the LineBreaker will know about changes in ipd, but it does not need to know that it is a page-break, which it isn't at this point.
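A minimal sketch of what such a context could look like (again hypothetical, not an existing FOP class): the ipd changes are recorded at known bp-coordinates, and a line breaker that knows its current bp-coordinate simply asks for the ipd in effect there, without knowing whether the change corresponds to a page break.

```java
import java.util.TreeMap;

// Hypothetical sketch: a "big page" context spanning several
// prefetched pages. The ipd (inline-progression-dimension) changes at
// a known set of bp (block-progression) coordinates.
class BigPageContext {
    // bp coordinate where an ipd value takes effect -> that ipd
    private final TreeMap<Integer, Integer> ipdChanges = new TreeMap<>();
    private final int bpLength; // total bp extent of the prefetched pages

    BigPageContext(int bpLength, int initialIpd) {
        this.bpLength = bpLength;
        ipdChanges.put(0, initialIpd);
    }

    void addIpdChange(int bpCoord, int ipd) {
        ipdChanges.put(bpCoord, ipd);
    }

    // ipd in effect at the given bp coordinate
    int ipdAt(int bp) {
        return ipdChanges.floorEntry(bp).getValue();
    }

    // true once content has run past the prefetched extent
    boolean overflows(int bp) {
        return bp >= bpLength;
    }
}
```

The line breaker sees only ipdAt(bp); the fact that an ipd change coincides with a page boundary stays a page-breaking concern.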

Coming back up means either overflow of the context or no more content.
In the latter case, control is handed over to the PageBreaker.
In the first case, for a total-fit solution, the set of active nodes is kept active, the next five pages are fetched, and the process repeats until the content runs out, at which point the PageBreaker kicks in. A more memory-friendly algorithm could decide to run the PageBreaker sooner, and continue the process with the last best node as a starting point.
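The control loop for the total-fit variant could be sketched like this (illustrative names only, not FOP APIs; BATCH and PAGE_BP are toy values): on overflow the active nodes stay active and the context is simply extended by another batch of pages; when the content runs out, the page breaker takes over.

```java
// Hypothetical control loop: prefetch a batch of pages, lay content
// out into the combined context, extend the context on overflow, and
// hand over to the page breaker when content runs out.
class TotalFitDriver {
    interface Layouter {
        // Lays out into a context of the given bp extent;
        // returns true if content overflows (more room needed).
        boolean layout(int contextLength);
    }

    static final int BATCH = 5;      // pages fetched per round
    static final int PAGE_BP = 1000; // toy bp extent per page

    // Returns the total number of pages that were prefetched.
    static int run(Layouter flow, Runnable pageBreaker) {
        int pages = BATCH;                     // initial prefetch
        while (flow.layout(pages * PAGE_BP)) { // overflow:
            pages += BATCH;                    // active nodes stay active
        }
        pageBreaker.run();                     // content ran out
        return pages;
    }
}
```

The memory-friendly variant would differ only inside the loop: instead of always extending, it could invoke the page breaker on the material laid out so far and restart from the last best node.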
