On 30 Oct 2008, at 12:45, Vincent Hennebert wrote:
Let me re-phrase your words to be sure I understand you correctly: once
all of the layouts for line X have been found, you choose the best of
them, select its corresponding ancestor layout for line 1, and discard
all the other layouts for line 1, along with their descendants. Is that
right?
Yes, that's right.
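To make the mechanism concrete, here is a minimal, purely illustrative sketch of that pruning step (the `LayoutNode` class and names are mine, not FOP's actual classes): each candidate layout for line X points back to the layout of the previous line, so picking the best node for line X and walking up to its line-1 ancestor tells you which first-line subtree survives.

```python
class LayoutNode:
    def __init__(self, line, demerits, parent=None):
        self.line = line          # line number this layout ends at
        self.demerits = demerits  # accumulated demerits so far
        self.parent = parent      # layout of the previous line

def prune_to_best_ancestor(layouts_for_line_x):
    """Pick the best layout for line X, then walk back to its ancestor
    for line 1: that is the only line-1 layout (and subtree) kept;
    all the others would be discarded."""
    best = min(layouts_for_line_x, key=lambda n: n.demerits)
    node = best
    while node.parent is not None:
        node = node.parent
    return node  # the sole surviving layout for line 1

# Two alternative first lines, each with descendants down to line 3.
a1 = LayoutNode(1, 10.0)
b1 = LayoutNode(1, 8.0)
a2 = LayoutNode(2, 25.0, a1)
b2 = LayoutNode(2, 30.0, b1)
a3 = LayoutNode(3, 40.0, a2)   # best layout for line 3
b3 = LayoutNode(3, 55.0, b2)

survivor = prune_to_best_ancestor([a3, b3])
print(survivor is a1)  # True: a1's subtree is kept, b1's is dropped
```

Note that b1 has the lower demerits at line 1, yet a1 survives because its descendants do better: the decision is driven by line X, not by line 1 alone.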
It would be great if you could find a similar image to illustrate your
heuristic. It’s important to ‘feel’ why it’s going to work.
Yes, it would be very nice; unfortunately ATM I don't have such an
image.
I’m not sure there’s a real-life use case for such an optimization:
people wanting speed will still be less happy than with plain best-fit,
and people wanting quality are ready to pay the price for it anyway.
This is correct, but the total-total-fit (document and paragraph)
algorithm the prototype is actually implementing may be very hungry for
resources; in the document you recently wrote you mention a paragraph
that can be laid out in 4, 5 and 6 lines, but what if the document
contains many paragraphs like those I used to test the pruning?
With pruning (which can easily be turned on/off) you may partially keep
the advantages of total-total fit.
Not if you apply the pruning method at the line level: indeed the first
line is unlikely to be the same in, say, the 4-line version of
a paragraph as in its 5-line version. Since you’ll be keeping only one
first line, it’s not necessarily going to be the line belonging to the
optimal final layout (say, the 5-line version).
You might partially keep the advantages of total-total fit if you apply
pruning only at the page level, but that doesn’t sound right to me: if
you apply pruning, that means that you are ready to sacrifice some
quality in the interest of performance. But then you don’t need that
level of detail brought by the several layout possibilities of
a paragraph. Actually I expect that to be enabled only by people wanting
quality at any price.
A thing that might be interesting is to regularly cut down the
number of active nodes, every x lines: once all of the layouts for
line x have been found, select the best active node and discard the
other ones. Then do that again for lines 2x, 3x, 4x, etc. While
similar, it has the advantage that every bunch of x lines will be
determined by a local total-fit method. In fact the paragraph will be
made of a sum of local optimums, which may or may not correspond to the
global optimum. But even in that case, I’m not sure this is worth
the additional complexity in a piece of code that’s already complex
enough.
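As a toy sketch of that "local total-fit" idea (the node representation and the `expand` callback are my own stand-ins, not real breaker code): run the breaker normally, but every x lines collapse the active list to its single best node, so each bunch of x lines is optimized locally.

```python
def local_total_fit(total_lines, x, expand):
    """expand(node) -> candidate successors for the next line.
    Nodes are (line, accumulated demerits) tuples."""
    active = [(0, 0.0)]
    for line in range(1, total_lines + 1):
        candidates = []
        for node in active:
            candidates.extend(expand(node))
        active = candidates
        if line % x == 0:  # end of a bunch of x lines: keep only the best
            active = [min(active, key=lambda n: n[1])]
    return min(active, key=lambda n: n[1])

# Toy expansion: every layout spawns two successors with different costs.
def expand(node):
    line, d = node
    return [(line + 1, d + 1.0), (line + 1, d + 2.0)]

best = local_total_fit(6, 3, expand)
print(best)  # (6, 6.0): the cheapest path, one demerit per line
```

With this toy expansion the active list would double every line without the cut (64 nodes after 6 lines); pruning every 3 lines caps it at 8, at the cost of possibly missing the global optimum.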
Another option might be to set a limit on the amount of consumed memory
(the number of active nodes, which in turn will have an effect on
processing time). Once the maximal number of active nodes is reached,
start discarding the nodes with the highest demerits. But it remains to
be seen if such a heuristic proves to be efficient, and what limit to
set: as we can see in your other message, figures may change radically
from one document to the other.
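The capping itself is cheap to sketch; here is a hypothetical version (layouts reduced to `(demerits, label)` tuples for illustration) that keeps the `max_nodes` best layouts and drops the rest:

```python
import heapq

def cap_active_nodes(active, max_nodes):
    """Keep at most max_nodes layouts alive, discarding the highest
    demerits first; tuples sort by their first element (demerits)."""
    if len(active) <= max_nodes:
        return list(active)
    return heapq.nsmallest(max_nodes, active)

# Five live layouts, identified by (accumulated demerits, label).
active = [(12.5, "layout-a"), (3.0, "layout-b"), (40.0, "layout-c"),
          (7.5, "layout-d"), (22.0, "layout-e")]
survivors = cap_active_nodes(active, 3)
print(survivors)  # [(3.0, 'layout-b'), (7.5, 'layout-d'), (12.5, 'layout-a')]
```

The hard part, as noted above, is not the mechanism but choosing the limit, since the node counts vary wildly from one document to another.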
I think that pruning provides a good output-quality/performance
trade-off without messing up the code too much (look at the patch for
pruning in trunk, it's trivial). Anyway pruning isn't the best way to
cut down the number of active layouts; the solutions you propose are
more effective, but I wrote the pruning for another reason (more on
that later) and here we are talking about its (positive) side effect.
In the end, I would be more inclined to implement a gradation in the
layout quality (best fit, total fit over page sequence, total fit over
the document, total fit over document + paragraph line numbers, etc.),
rather than one or several pruning methods. I think it should be simple
to implement yet provide enough flexibility to satisfy all kinds of
users.
Sorry if that sounds a bit negative. Since this is all but a simple
topic, I may well have missed the interest of your approach. At any
rate, good ideas sometimes emerge from the oddest experiments, so feel
free to continue your investigations...
About my main goal (start imagining violins playing a moving song): I
dream of an FO processor that can process documents in "streaming", with
a constant amount of memory regardless of the size of the input document
(stop imagining... :P). Probably some nice features need to be dropped,
or it's simply not possible, but I think that this is an interesting
challenge. Anyway I'm far from reaching this goal.
Why a streaming memory-efficient processor? Because industry needs it:
XML and related are beautiful technologies, but sometimes too academic
and not production-oriented. My company needs it (I hope this will allow
me to continue working on this...). A proof of the market interest in
this sense is XF Ultrascale, a commercial FO processor that claims a
very low memory footprint. It managed to obtain impressive results in
some tests, but it fails to render my company's documents, going out of
memory. So it seems that a streaming processor still doesn't exist, even
among commercial products.
Hmmm, there are two different points then IMO:
- rendering pages as soon as possible; I suddenly see the point of the
pruning approach. However, for that case I believe best-fit is the way
to go;
- limiting memory consumption. This is a different issue IMO. I’d go for
setting a maximum number of nodes and regularly discarding the nodes
with the highest demerits. The rationale being that, at the point where
there are so many active nodes, it’s unlikely that the ones with the
highest demerits will actually lead to the final solution. Unless the
content that follows really is tricky (big unbreakable blocks or the
like).
It's not only a question of active nodes for memory consumption: how big
is the in-memory representation of the FO tree of a 500MB document?
Three, four, five times the original size? And what about all its LMs?
Rendering pages ASAP (or at least building the area tree) is necessary
if you want to minimize this footprint by freeing no-longer-necessary
information. In that case the active node reduction is the least of the
benefits. The idea of applying pruning was born with this in mind, and
that's why I think pruning is good.
Well, my hope is to find an approach that makes everyone happy, those
wanting speed as well as those wanting quality, with a series of
intermediate steps providing different trade-offs. The difference would
be in the number of times the tree is pruned, but the pruning itself
would always be the same: when the end of an object is reached (block,
page-sequence, flow), select the best layout so far and discard the
other ones. I believe this method will better integrate into the big
picture.
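That boundary-triggered variant can be sketched too (event names and the flat demerit lists are hypothetical simplifications of the real breaker): the quality/speed knob is simply *which* object boundaries trigger the collapse.

```python
def prune_at_boundaries(events, prune_on):
    """events: a sequence of ('layouts', [extra demerits per candidate])
    or ('end', object_type) items.  Whenever the end of an object whose
    type is in prune_on is reached, keep only the best layout so far.
    Returns the accumulated demerits of the surviving layouts."""
    active = [0.0]
    for kind, payload in events:
        if kind == 'layouts':
            # each active layout extends into every new candidate
            active = [d + extra for d in active for extra in payload]
        elif kind == 'end' and payload in prune_on:
            active = [min(active)]  # select the best, discard the rest
    return active

events = [('layouts', [1.0, 2.0]), ('end', 'block'),
          ('layouts', [1.0, 3.0]), ('end', 'page-sequence')]
print(prune_at_boundaries(events, {'block', 'page-sequence'}))  # [2.0]
```

Pruning on every block keeps the active list tiny; pruning only at page-sequence or flow ends keeps more alternatives alive between boundaries, trading memory for quality.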
Now, the definitive way to compare approaches is to run them on
a representative set of documents, and for each one record the
processing time, memory consumption and demerits of the final layout.
I’d be curious to see the difference between best-fit and the pruning
approach then, but I’m not convinced it’s going to be that big.
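Such a comparison could be driven by a small harness along these lines; everything here (the strategy functions, the documents, the demerit numbers) is a toy stand-in for the real breakers and FO files, shown only to illustrate recording time, peak memory and final demerits per run:

```python
import time
import tracemalloc

def compare(strategies, documents):
    """strategies: name -> function(document) -> final demerits.
    Returns (strategy, doc index, seconds, peak bytes, demerits) rows."""
    results = []
    for name, run in strategies.items():
        for i, doc in enumerate(documents):
            tracemalloc.start()
            t0 = time.perf_counter()
            demerits = run(doc)
            elapsed = time.perf_counter() - t0
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            results.append((name, i, elapsed, peak, demerits))
    return results

# Toy stand-ins; the demerit values are meaningless placeholders.
strategies = {"best-fit": lambda doc: sum(doc),
              "pruning":  lambda doc: sum(doc) * 0.9}
documents = [[1.0, 2.0, 3.0]]
for name, i, elapsed, peak, demerits in compare(strategies, documents):
    print(name, i, demerits)
```

Run over a representative document set, the recorded triples would make the best-fit vs. pruning vs. local-total-fit comparison discussed here a matter of data rather than feeling.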
Do you care about streaming processing?
Pruning is *necessary* (along with mixed line/page breaking) if you want
to render pages before the end of the document is reached while keeping
output quality better than best fit.
Don't you care about streaming processing?
Pruning is a way of improving performance, though surely not the most
efficient one. Keeping the amount of active nodes down (with pruning or
other solutions) is not necessary, and maybe its benefits in real life
make it not so desirable.
I expect you and the FOP team don't care, at least ATM, about streaming
processing. Anyway thank you for your feedback, you spent time on this.
So far, our answer to the streaming problem has simply been best-fit.
Now, there may be two other solutions: your pruning approach, and my
proposal of local total-fits (keeping the best layout every
x lines/pages). It remains to be seen if the quality gain those two
methods bring over best-fit is high enough to justify the additional
processing resources (and code).
Best fit obviously would work, but I have the *feeling* that pruning
or local total fit would achieve better results. I need to prove it;
the demerits comparison you mentioned above would be very interesting
in this sense. I also have the feeling that pruning should work better
than local total fit in some cases: think if you do local total fit
every X lines in a paragraph that is X + 2 lines long: these last two
lines may be forced to high demerits. But, again, tests are needed to
prove it.