Luca Furini wrote:
<snip/>
> The computation, in itself, is easy, as the LineLM already has all the
> necessary information: line width, unadjusted width, and available
> stretch and shrink.
I think shrinking/stretching the spaces when the guessed width doesn't
match the actual one is an improvement on what we have today. Sure, there
will be people who are not satisfied, but it is good enough for version
1.0. After all, we are not talking about having to find 100em of extra
space in a line; it should be 1 or 2em at most. Are there any applications
requiring a 1,000,000-page document?
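The computation Luca describes can be sketched roughly as below. All names
here are illustrative, not actual FOP API; the point is just that the
ratio falls out directly from the four values the LineLM already knows.

```java
// Illustrative sketch only, not actual FOP code: derives the adjustment
// ratio from the values the LineLM already holds.
public class AdjustRatioSketch {

    // Returns the ratio r by which each space's stretch (r > 0) or
    // shrink (r < 0) is multiplied so the content fills the line.
    static double computeAdjustRatio(double lineWidth, double naturalWidth,
                                     double stretch, double shrink) {
        double diff = lineWidth - naturalWidth;
        if (diff > 0) {
            return stretch > 0 ? diff / stretch : 0.0;
        }
        if (diff < 0) {
            return shrink > 0 ? diff / shrink : 0.0;
        }
        return 0.0;
    }

    public static void main(String[] args) {
        // Line is 10 units short of target, 20 units of stretch available:
        System.out.println(computeAdjustRatio(200.0, 190.0, 20.0, 10.0)); // prints 0.5
    }
}
```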
> The point is that this information is stored in the LineBreakPositions,
> while the actual value (and the actual width) is set directly into the
> area tree.
> In order to adjust the inline content of a line when the page number is
> resolved, I see two alternative strategies:
> 1) the LineLM has to handle this: this needs the LineAreas to hold a
> reference to the LineLM that created them, which knows all the needed
> information;
Yuk! The area objects should not reference any more objects than
necessary. For large documents that have been broken up into multiple
page-sequences to keep memory down, this would cause memory usage to
explode.
> 2) the LineArea has to handle this: this means that the LineArea (and
> the InlineAreas too) must be given the information about the MinOptMax
> ipd and the provisional adjust ratio.
This is the preferred option, as it only increases memory usage a little.
Perhaps the MinOptMax objects can be null unless the area contains
dynamic content?
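A minimal sketch of option 2 with that nullable field, so purely static
lines pay no extra memory. The class and method names below are
hypothetical, not the real FOP area classes:

```java
// Illustrative sketch only, not the real FOP area classes.
class MinOptMax {
    final int min, opt, max;
    MinOptMax(int min, int opt, int max) {
        this.min = min; this.opt = opt; this.max = max;
    }
}

class LineAreaSketch {
    private int targetWidth;
    // Null unless the line holds an unresolved page-number citation,
    // so static lines carry no extra objects.
    private MinOptMax contentIpd;
    private double adjustRatio;

    void recordAdjustInfo(int targetWidth, MinOptMax contentIpd, double ratio) {
        this.targetWidth = targetWidth;
        this.contentIpd = contentIpd;
        this.adjustRatio = ratio;
    }

    // Called once the page number resolves; widthDelta is the difference
    // between the actual and the guessed width of the citation.
    double resolve(int widthDelta) {
        if (contentIpd == null) {
            return adjustRatio; // nothing dynamic in this line
        }
        int natural = contentIpd.opt + widthDelta;
        int diff = targetWidth - natural;
        int room = diff >= 0 ? contentIpd.max - contentIpd.opt
                             : contentIpd.opt - contentIpd.min;
        adjustRatio = room > 0 ? (double) diff / room : 0.0;
        return adjustRatio;
    }

    public static void main(String[] args) {
        LineAreaSketch line = new LineAreaSketch();
        line.recordAdjustInfo(100, new MinOptMax(90, 100, 120), 0.0);
        // Citation turned out 10 units wider than guessed:
        System.out.println(line.resolve(10)); // prints -1.0
    }
}
```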
> I don't like 1 very much, because I think the creator LM is not a
> significant attribute of an area, but 2 involves adding many attributes
> too (and maybe even less significant ones!) ...
> What do you think? Does anyone see a different strategy?
I'm against a two-pass approach too, as XSL-FO processing is slow enough
already. The shrink/stretch strategy is a good one. Don't forget that as
well as word spacing there are letter spacing and font stretching that
can be altered when in a tight spot.
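That could work as a cascade: absorb what you can in the word spaces
first, then fall back to letter spacing (font stretching would be a
third stage, omitted here for brevity). A hypothetical sketch, not
anything in FOP today:

```java
// Illustrative sketch only: splits a width difference between word-space
// and letter-space adjustment, using word spaces first.
public class SpacingCascadeSketch {

    // diff > 0 means the line needs extra width; diff < 0 means it must
    // shrink. wordRoom and letterRoom are absolute capacities available.
    static double[] distribute(double diff, double wordRoom, double letterRoom) {
        double sign = Math.signum(diff);
        double fromWords = sign * Math.min(Math.abs(diff), wordRoom);
        double remainder = diff - fromWords;
        double fromLetters = sign * Math.min(Math.abs(remainder), letterRoom);
        return new double[] { fromWords, fromLetters };
    }

    public static void main(String[] args) {
        // Need 15 units; word spaces can give 10, letters take the rest:
        double[] parts = distribute(15.0, 10.0, 10.0);
        System.out.println(parts[0] + " " + parts[1]); // prints 10.0 5.0
    }
}
```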
Chris