Hi Andreas,

Andreas Delmelle wrote:
> On 11 Jun 2009, at 12:40, Vincent Hennebert wrote:
> Hi Vincent
> <snip />
>> I spent some time looking at the current code and it seems to me that
>> a hack could be implemented without too many difficulties. It basically
>> consists of two steps:
>> 1. in the Knuth breaking algorithm, when creating a new active node,
>>   check whether the IPD for the following page is the same. If
>>   not, deactivate the node. Once we run out of active nodes, select the
>>   best of those deactivated nodes and treat it as if it were the
>>   regular final node. Add areas for content up to that node.
>> 2. re-create Knuth elements, starting from the index corresponding to
>>   that node. Re-launch the breaking algorithm, starting from there.
>>   Then back to step 1, until the end of the document is reached.
>> Step 1 should be doable without turning everything upside down.
>> Step 2 implies changing the signature of the
>> LayoutManager.getNextKnuthElements method, adding a parameter that
>> corresponds to the index from which to start generating Knuth
>> elements. We could largely ignore it, except in BlockLayoutManager where
>> we would re-launch the line-breaking algorithm, taking the new IPD into
>> account.
>> Obviously this is a limited approach. There is likely to be
>> a (potentially huge) waste of CPU cycles due to the re-creation of Knuth
>> elements. There may be side effects that I’ve missed so far. But I think
>> it’s worth giving it a try.
> The only thing I'm slightly concerned about is the case where there
> would be multiple IPD changes for subsequent pages. I'm assuming that is
> the waste of CPU cycles you're referring to (?)

That’s right.
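
To make the control flow of the two steps concrete, here is a heavily
simplified, self-contained Python sketch. None of the names below come
from the FOP code: a greedy filler stands in for the real Knuth
page-breaking algorithm, and `regenerate` is a hypothetical stand-in for
getNextKnuthElements taking a start index, as proposed in step 2.

```python
def break_with_ipd_changes(regenerate, total, page_ipds, page_bpd):
    """Outer loop of the proposed hack: break pages until a page with
    a different IPD is reached, then re-create the element list from
    the break point (step 2) and resume breaking there (step 1)."""
    pages = []                       # (page_number, elements_placed)
    start, page = 0, 0
    while start < total:
        ipd = page_ipds[min(page, len(page_ipds) - 1)]
        # step 2: re-create the elements from 'start' with the new IPD
        heights = regenerate(start, ipd)
        i = 0
        while i < len(heights):
            # fill one page greedily (stand-in for the Knuth breaker)
            used = n = 0
            while i + n < len(heights) and used + heights[i + n] <= page_bpd:
                used += heights[i + n]
                n += 1
            n = max(n, 1)            # always place at least one element
            pages.append((page, n))
            i += n
            page += 1
            # step 1: if the next page's IPD differs, stop ("deactivate")
            if page_ipds[min(page, len(page_ipds) - 1)] != ipd:
                break
        start += i
    return pages
```

For example, with ten items that re-flow taller under a narrower IPD
(`regenerate = lambda start, ipd: [2 if ipd >= 200 else 3] * (10 - start)`),
the elements after the break point are rebuilt exactly once per IPD change.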

> If I interpret correctly, we would (supposing a page-sequence without
> forced breaks and/or span changes):
> a) generate the complete block list (effectively computing the
> line-breaks for the whole page-sequence)
> b) when computing the page-breaks, and encountering a new page with
> different available IPD, re-generate the remaining elements and
> recompute the line-breaks after that position
> b) would occur as many times as we have IPD changes.
> I'm wondering whether it would not be equally feasible to have the
> FlowLM check the total height of the block-boxes up to a point
> (estimated). Do this after every call to childLM.getNextKnuthElements().
> Then, if the total height exceeds the BPD for the current page, call
> back to check if the next page has the same IPD (or the same amount of
> columns). If so, then we happily continue, just as we do now. If not,
> then we hand the list off to the PageBreaker, run it through a
> PageBreakingAlgorithm to compute the page-breaks up to that point, add
> the areas, and resume later, passing the LineBreakingAlgorithm an
> updated LayoutContext corresponding to the next page.

While this is a good idea, it would imply more disruptive changes to the
codebase. I’d like to keep the necessary changes to a strict minimum.
Besides, I’m not even sure yet that my idea won’t lead to a dead end.

Your idea is not incompatible with mine, however. It’s an additional
optimization step. We can always reconsider it once I have some working
code.
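
For the record, your early-flush idea could be sketched roughly as
follows. This is a toy Python model, not FOP code: each child list is
reduced to a pre-computed height, and "handing off to the PageBreaker"
is modelled by simply returning the accumulated list plus a resume
index.

```python
def collect_until_ipd_change(child_lists, page_bpd, page_ipds):
    """Accumulate child element lists; as soon as the estimated height
    spills onto a page whose IPD differs from the current one, flush
    what has been collected instead of collecting to the end."""
    accumulated = []
    height = 0
    page = 0
    current_ipd = page_ipds[0]
    for idx, (child_height, elements) in enumerate(child_lists):
        accumulated.extend(elements)
        height += child_height
        # estimate how many page boundaries the content has crossed
        while height > page_bpd:
            height -= page_bpd
            page += 1
            next_ipd = page_ipds[min(page, len(page_ipds) - 1)]
            if next_ipd != current_ipd:
                # IPD change detected early: flush, note resume point
                return accumulated, idx + 1
    return accumulated, len(child_lists)
```

The benefit shows up in the return value: with an IPD change on page 2,
only the children collected so far are handed off, and the resume index
tells the line-breaking step where to pick up again.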

> If this functionality could somehow be factored in to BlockStackingLM,
> all the better, since we're definitely going to need it to avoid
> BlockLMs and TableLMs from accumulating the element-lists for all their
> content. They too will need to be able to stop when their accumulated
> content-height exceeds the threshold (available BPD + a percentage?)

I’m not planning to make any changes to lists or tables. Tables in
particular may create all sorts of problems with collapsed borders and
repeated headers. That would take us too far for what is just
a temporary hack.

> Step 2) would still be necessary, since we need to know at what point to
> resume the line-breaking later on.
> The benefit being that we would catch the IPD-change long before the end
> of the page-sequence is reached by the line-breaking loop. The amount of
> elements that need to be re-generated would always remain as small as
> possible.
> If I judge correctly, and it is feasible, then this may present an
> opportunity for a pure FO hack to reduce the memory consumption for
> arbitrarily sized page-sequences: use alternating page-masters with a
> different IPD. Only make the difference practically invisible to the
> naked eye. FOP would still detect it, and flush the list up to that point.
> Thoughts, you asked? ;-)
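
If that early-flush mechanism materializes, the "pure FO hack" could
look something like the fragment below: two page masters whose IPDs
differ by an amount assumed to be below any visible threshold (the
0.001mm value is arbitrary). This is only a sketch of the idea, not
tested against FOP.

```xml
<fo:layout-master-set xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <!-- two masters whose IPDs differ by an invisible 0.001mm -->
  <fo:simple-page-master master-name="odd"
      page-width="210mm" page-height="297mm" margin="20mm">
    <fo:region-body/>
  </fo:simple-page-master>
  <fo:simple-page-master master-name="even"
      page-width="210mm" page-height="297mm" margin="20mm"
      margin-right="20.001mm">
    <fo:region-body/>
  </fo:simple-page-master>
  <!-- alternate the two masters so the IPD "changes" on every page -->
  <fo:page-sequence-master master-name="alternating">
    <fo:repeatable-page-master-alternatives>
      <fo:conditional-page-master-reference odd-or-even="odd"
          master-reference="odd"/>
      <fo:conditional-page-master-reference odd-or-even="even"
          master-reference="even"/>
    </fo:repeatable-page-master-alternatives>
  </fo:page-sequence-master>
</fo:layout-master-set>
```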


> Andreas Delmelle
