Manuel Mall wrote:
On Thu, 29 Sep 2005 11:50 pm, Peter B. West wrote:

Fopsters,

I've always been somewhat sceptical of the new approach to page
breaking, although I was prepared to concede that it would be a great
achievement if you pulled it off.

However, the closer the development has come to fruition, the more
some of my original concerns have been reinforced.  Think about the
enormous amount of intellectual effort that has gone into mapping the
problem into Knuthian terms.  That effort is still under way.

How is this going to be maintained?  Where are the Knuthian speakers
who are going to do that job over the next few years?

I'm surprised, in fact, that some of the old hands have not raised
this question already.


Peter,

I don't get what you are aiming at here.

Are you saying that the Knuth approach to line or page breaking is inherently more difficult to understand and therefore harder to maintain?

Apart from being one of the "best" (in terms of visual quality of the output) algorithms for breaking, it is also IMO inherently simple. This is one of the beauties of many of Knuth's works. He is IMO a brilliant computer scientist who manages to solve complex problems using simple concepts and algorithms. The concepts and algorithms are also well documented in papers and books which are usually accessible through your nearest university library. Just take the "Breaking Paragraphs into Lines" paper. Yes, it is 80-odd pages long, but the important concepts are explained in the first 10 pages. In my case, when I delved "cold" into the fop layout code I had no idea what was going on, but after reading the initial part of the paper it all suddenly made sense.
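
To make that concrete, here is a minimal sketch of the box/glue model with a simplified "total-fit" breaker, in the spirit of the paper's opening pages. It is illustrative only: the class names and the squared-adjustment-ratio demerits are my own simplification, not FOP's actual layout classes, and penalty elements (forced or forbidden breaks) are left out for brevity.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class KnuthLineBreakSketch {

    /** Base element of the model: everything has a width. */
    static abstract class Element {
        final double width;
        Element(double width) { this.width = width; }
    }

    /** A box is unbreakable content, e.g. a word. */
    static class Box extends Element {
        Box(double w) { super(w); }
    }

    /** Glue is flexible space; the legal break point in this sketch. */
    static class Glue extends Element {
        final double stretch, shrink;
        Glue(double w, double st, double sh) { super(w); stretch = st; shrink = sh; }
    }

    /**
     * Simplified "total-fit" breaking: dynamic programming over all legal
     * break points, minimising the sum of squared adjustment ratios.
     * Returns the element indices at which to break.
     */
    static List<Integer> breakLines(List<Element> els, double lineWidth) {
        int n = els.size();
        double[] best = new double[n + 1];   // best[j]: minimal demerits up to j
        int[] prev = new int[n + 1];         // prev[j]: break preceding j on the best path
        Arrays.fill(best, Double.POSITIVE_INFINITY);
        best[0] = 0;
        for (int j = 1; j <= n; j++) {
            // Legal breaks in this sketch: at a glue, or at the paragraph end.
            if (j < n && !(els.get(j) instanceof Glue)) continue;
            for (int i = 0; i < j; i++) {
                if (Double.isInfinite(best[i])) continue;
                double w = 0, st = 0, sh = 0;
                for (int k = i; k < j; k++) {
                    Element e = els.get(k);
                    if (k == i && e instanceof Glue) continue; // break glue vanishes
                    w += e.width;
                    if (e instanceof Glue g) { st += g.stretch; sh += g.shrink; }
                }
                double diff = lineWidth - w;
                double r; // adjustment ratio: how far the line's glue must flex
                if (j == n && diff >= 0) r = 0; // the last line is set loose
                else if (diff >= 0) r = st > 0 ? diff / st : Double.POSITIVE_INFINITY;
                else r = sh > 0 ? diff / sh : Double.POSITIVE_INFINITY;
                if (r < -1 || Double.isInfinite(r)) continue; // over-shrunk or unfillable
                double demerits = best[i] + r * r;
                if (demerits < best[j]) { best[j] = demerits; prev[j] = i; }
            }
        }
        List<Integer> breaks = new ArrayList<>();
        for (int j = n; j > 0; j = prev[j]) breaks.add(0, j);
        return breaks;
    }

    public static void main(String[] args) {
        // Five "words" of width 3 joined by flexible spaces, line width 7:
        List<Element> par = List.of(
            new Box(3), new Glue(1, 1, 0.5), new Box(3), new Glue(1, 1, 0.5),
            new Box(3), new Glue(1, 1, 0.5), new Box(3), new Glue(1, 1, 0.5),
            new Box(3));
        System.out.println(breakLines(par, 7)); // prints [3, 7, 9]
    }
}

That really is the whole idea: every legal break is scored by how far the glue between breaks must stretch or shrink, and one dynamic program picks the sequence of breaks with the least accumulated badness.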

So, where is the problem? Fop is using well-documented concepts and algorithms to do its line and page breaking. Why should it be harder to maintain than some home-cooked solution not backed up by previous research, papers, and implementations (TeX)?

And if you take the recent discussions, mainly driven by Jeremias, regarding bpd (block-progression-dimension) space resolution, the core of the problem is not mapping it into the appropriate Knuth sequences. It is the implementation of the space resolution rules themselves, i.e. figuring out how much space to leave or not to leave in a particular situation, that is the hard part. Generating the appropriate Knuth sequence, once you know what the resolved space is, is easy.
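
To illustrate: once the space between two blocks has been resolved to a single value, the element sequence follows a fixed pattern. The sketch below is hypothetical (the names are mine, not FOP's actual classes), but the two idioms, a penalty followed by glue for a space that is discarded at a break, and an "infinite" penalty guarding a glue that must be retained, are the textbook Knuth encodings:

import java.util.List;

public class SpaceToKnuthSketch {

    static final int INF = Integer.MAX_VALUE; // "infinite" penalty: break forbidden

    sealed interface Element permits Glue, Penalty {}
    record Glue(double width) implements Element {}
    record Penalty(int cost) implements Element {}

    /**
     * Map an already-resolved block-progression space to a Knuth sequence.
     * 'discardAtBreak' reflects the XSL conditionality: "discard" means the
     * space vanishes when a page break is taken here; "retain" means it must
     * survive at the bottom of the outgoing page.
     */
    static List<Element> spaceToElements(double resolved, boolean discardAtBreak,
                                         int breakCost) {
        if (discardAtBreak) {
            // If the break is taken at the penalty, the glue after it would
            // start the new page, and leading glue is dropped, so the space
            // disappears exactly when it should.
            return List.of(new Penalty(breakCost), new Glue(resolved));
        }
        // A glue immediately preceded by a box is itself a legal break point,
        // and breaking there would discard the space. The INF penalty in
        // front removes that break point; the trailing penalty then offers
        // the break after the glue, so the space stays on the outgoing page.
        return List.of(new Penalty(INF), new Glue(resolved), new Penalty(breakCost));
    }

    public static void main(String[] args) {
        System.out.println(spaceToElements(12.0, true, 0));
        // [Penalty[cost=0], Glue[width=12.0]]
        System.out.println(spaceToElements(12.0, false, 0));
        // [Penalty[cost=2147483647], Glue[width=12.0], Penalty[cost=0]]
    }
}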

And quite a bit of this is also documented on the Wiki.

In summary, and I can speak here from my own recent experience, I don't share your concerns about the Knuth approach increasing the maintenance cost of the fop code base.

If you guys are happy with how the design is shaping up in terms of maintainability, great. There had been no discussion about this aspect; now there is.

Peter
--
Peter B. West <http://cv.pbw.id.au/>
Folio <http://defoe.sourceforge.net/folio/>
