Vincent Hennebert wrote:

I'd have to think more about it, but:
- perhaps the compareNodes method should compare the line/page numbers
  for each node rather than the index in the Knuth sequence. Or some
  mixing of the two.

The index can tell us which node allows laying out more content; the line number, on the other hand, doesn't strike me as a very informative measure.
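To make the trade-off concrete, here is a minimal sketch of what a "mixed" comparison could look like: index as the primary criterion, line number only as a tie-breaker. All class and field names are hypothetical, not FOP's actual compareNodes API.

```java
// Hypothetical active-node holder; "position" is the index into the
// Knuth element sequence, i.e. how much content the node lays out.
class BreakNode {
    final int position;
    final int lineNumber;

    BreakNode(int position, int lineNumber) {
        this.position = position;
        this.lineNumber = lineNumber;
    }
}

class NodeComparator {
    /** Returns &gt;0 if a is preferable, &lt;0 if b is preferable, 0 if equivalent. */
    static int compareNodes(BreakNode a, BreakNode b) {
        // Primary criterion: the node that lays out more content wins.
        if (a.position != b.position) {
            return Integer.compare(a.position, b.position);
        }
        // Tie-breaker: for the same content, fewer lines is better.
        return Integer.compare(b.lineNumber, a.lineNumber);
    }
}
```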

- if you restart using the last deactivated node you are sure that
  immediately after that you'll have to restart using the last
  too-short/too-long node, because no feasible break will be found
  (otherwise the list of active nodes wouldn't have been emptied).

Yes, but I think there is a significant difference: in the first case we get N good lines, a bad line, and maybe some more good lines; in the second we get N-1 good lines, a quite-bad one (either too long or too short), then a bad one, and finally some good ones.
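The restart policy I'm describing can be sketched roughly as follows. Values stand for indices into the Knuth sequence, null means "no such node was recorded"; the names are illustrative, not the actual BreakingAlgorithm fields.

```java
class RestartChoice {
    static Integer chooseRestart(Integer lastDeactivated,
                                 Integer lastTooShort,
                                 Integer lastTooLong) {
        // Prefer the last deactivated node: restarting there yields
        // N good lines followed by the unavoidable bad one, instead of
        // N-1 good lines, a quite-bad line, and then the bad one.
        if (lastDeactivated != null) {
            return lastDeactivated;
        }
        // Otherwise fall back to whichever forced node covers more content.
        if (lastTooShort == null) {
            return lastTooLong;
        }
        if (lastTooLong == null) {
            return lastTooShort;
        }
        return Math.max(lastTooShort, lastTooLong);
    }
}
```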

I've prepared a very small patch fixing a couple of things:
- the TLM adds a zero-width infinite-value penalty to forbid breaks at the
  glue elements used for left/right-aligned text (I'm going to check
  whether a similar fix is needed elsewhere in the code)
- the BreakingAlgorithm uses (if possible) lastDeactivated instead of
  either lastTooShort or lastTooLong.
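For context, the first fix relies on the standard box/glue/penalty breakpoint rules: a glue is a feasible breakpoint only when it immediately follows a box, and an infinite-value penalty is never a breakpoint, so placing a zero-width infinite penalty between the box and the alignment glue forbids a break there. A minimal sketch (the classes below are illustrative, not FOP's actual KnuthElement hierarchy):

```java
import java.util.Arrays;
import java.util.List;

abstract class Element { }

class Box extends Element { }

class Glue extends Element { }

class Penalty extends Element {
    static final int INFINITE = 1000; // conventional "infinity" threshold
    final int width;
    final int value;
    Penalty(int width, int value) { this.width = width; this.value = value; }
}

class Breaks {
    static boolean canBreakAt(List<Element> seq, int i) {
        Element e = seq.get(i);
        if (e instanceof Penalty) {
            // a penalty is a breakpoint unless its value is infinite
            return ((Penalty) e).value < Penalty.INFINITE;
        }
        if (e instanceof Glue) {
            // a glue is a breakpoint only when it directly follows a box;
            // inserting a penalty between box and glue therefore forbids it
            return i > 0 && seq.get(i - 1) instanceof Box;
        }
        return false; // a box itself is never a breakpoint
    }
}
```

With `Box, Glue` the glue is breakable; with `Box, Penalty(0, INFINITE), Glue` neither the penalty nor the glue is, which is exactly the effect the patch wants for the alignment glues.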

The patch is just a dozen lines long, and it was easy to apply it to the float branch.

How should I proceed? Apply it to both trunk and branch? Only to the branch?

I'm also going to mark bug 41121 as a duplicate of 41109, as the problem is exactly the same: the algorithm restarts from a very bad break instead of a good one (in that case, after the first word).

