* Also, would it be worth ordering that research paper? How helpful would it be in understanding our code?
Luca Furini wrote:
I realized just a few days ago that the breaking algorithm (in the BreakingAlgorithm class) is not fully patched with Finn's great refactoring of the Knuth code (bug 32612).
I must admit that this is due to my laziness: when I was experimenting with Knuth's algorithm for page breaking, I applied only the new restarting strategy to my local copy of the code; so, although Jeremias applied the patch before the branch was created, most of the performance and readability benefits were lost in the merge.
I have now applied the patch to the branch code: it needed some changes in order to fit the new classes, and I hope I did not introduce any errors. :-)
A few doubts / questions / comments:
- the value BestRecord.INFINITE_DEMERITS: I'm not sure it needs to be +infinity; if it is a finite value, it acts like a threshold, reducing the number of active nodes. On the other hand, such a finite value should be chosen carefully, otherwise breakpoints with an allowed adjustment ratio could later be discarded because they have more than "finite infinity" demerits (something Finn pointed out some time ago). What about a finite value computed from the adjustment threshold and the line width?
- in addition to Finn's restarting strategy, lastTooShort is now reset to null after the creation of new nodes: the newly created nodes are certainly better than it, and a new "lastTooShort" solution will be found later; it will most likely have more demerits (demerits always increase when a new line / page is created), but it will still be better.
- now that KnuthSequence extends ArrayList instead of LinkedList, a few more optimizations could be made here and there: for example, using get() instead of ListIterators.
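On the first point, a rough sketch of what I mean by a computed "finite infinity": take Knuth's demerits for a line adjusted exactly at the allowed threshold, and multiply by a safety factor so no legal breakpoint can exceed the cap. The class and method names here are hypothetical, and the formula ignores penalty and fitness terms; it is just meant to show the idea, not the real BestRecord code.

```java
public class DemeritsThreshold {

    /** Knuth's badness term: 100 * |r|^3 for adjustment ratio r. */
    static double badness(double r) {
        return 100.0 * Math.pow(Math.abs(r), 3);
    }

    /**
     * Hypothetical finite "infinity": the demerits of a line stretched
     * exactly to the adjustment threshold, times a safety factor, so that
     * breakpoints with an allowed ratio are never discarded by the cap.
     */
    static double finiteInfinity(double adjustmentThreshold, double safetyFactor) {
        double worstLegal = Math.pow(1.0 + badness(adjustmentThreshold), 2);
        return worstLegal * safetyFactor;
    }

    public static void main(String[] args) {
        double cap = finiteInfinity(1.0, 10.0);
        // A legally adjusted line (r = 0.5) stays below the cap ...
        assert Math.pow(1.0 + badness(0.5), 2) < cap;
        // ... while a grossly stretched line (r = 3) exceeds it.
        assert Math.pow(1.0 + badness(3.0), 2) > cap;
        System.out.println("cap = " + cap);
    }
}
```

The safety factor (and whether line width should enter the formula too, e.g. via the available stretch) would of course need tuning against real documents.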
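The lastTooShort reset on the second point amounts to something like the fragment below. This is not the actual BreakingAlgorithm code, just a stand-in illustrating the invariant: once new active nodes exist, any "too short" fallback found later will descend from one of them, so the old fallback can be dropped.

```java
import java.util.ArrayList;
import java.util.List;

public class RestartSketch {
    // Hypothetical stand-in for the real active-node type.
    static class KnuthNode { }

    static KnuthNode lastTooShort = new KnuthNode();

    // After registering freshly created active nodes, discard the
    // remembered "too short" fallback: a better one (descending from
    // the new nodes) will be found later if it is needed at all.
    static void registerNewNodes(List<KnuthNode> newNodes) {
        if (!newNodes.isEmpty()) {
            lastTooShort = null;
        }
    }

    public static void main(String[] args) {
        List<KnuthNode> created = new ArrayList<>();
        created.add(new KnuthNode());
        registerNewNodes(created);
        assert lastTooShort == null;
        System.out.println("lastTooShort cleared: " + (lastTooShort == null));
    }
}
```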
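And on the last point, the kind of micro-optimization I have in mind (with made-up names, since this is not the real KnuthSequence API): on an ArrayList, get(i) is O(1), so a plain index loop can replace a ListIterator traversal without the O(n) penalty it would have had on a LinkedList.

```java
import java.util.ArrayList;
import java.util.List;

public class IndexAccess {

    // With an ArrayList-backed sequence, random access is O(1),
    // so an index loop is as cheap as a ListIterator and simpler.
    static int sumWidths(List<Integer> widths) {
        int total = 0;
        for (int i = 0; i < widths.size(); i++) {
            total += widths.get(i); // O(1) on ArrayList, O(n) on LinkedList
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> seq = new ArrayList<>(List.of(3, 1, 4));
        assert sumWidths(seq) == 8;
        System.out.println(sumWidths(seq));
    }
}
```

ListIterators remain the right tool where we insert or remove while traversing, of course; the win is only for read-mostly scans.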