Oleg,

...

Oleg Tkachenko wrote:
Peter B. West wrote:

taking a very isolated path. My motivation can be summed up in the slogan SAX SUX. I couldn't understand why anyone would persist with it for any complex tasks, e.g. FOP.
Actually, I can't say I fully agree with this, because I don't see anything complex in the SAX processing model. And being an XSLT fan, I'm obviously a push-model fan.

...

significant difference makes XmlReader much easier to use for most Microsoft developers that are used to working with firehose (forward-only/read-only) cursors in ADO.

Well, let's consider the pull model's pros and cons:
+ easy for developers to use
+ it benefits from a kind of built-in structural validation
+ more?
Why is it easier for developers to use? Is it because the API is less complex or more easily understood? Not really. As you point out, the SAX API is not all that complex. The problem is that the processing model of SUX is completely inverted. You may have come to like writing XSLT that way. You may be working with very general grammars, and have no other choice. That doesn't make the inverted, inside-out model any more natural for the expression of processes and algorithms. "Easier for developers to use" means an easier vocabulary for expressing and solving programming problems; it means easier to document, easier to read and understand, easier to maintain and extend (in the sense of adding functionality).
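
To make that concrete, here is the contrast in miniature. This is only a sketch, not FOP code: the class, method and helper names are invented, and the pull half is written against a StAX-style XMLStreamReader purely for illustration.

import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

// Push (SAX): the parser drives. The algorithm is inverted into
// callbacks, and the context must be rebuilt from instance fields.
class PageMasterHandler extends DefaultHandler {
    private boolean inSimplePageMaster = false;

    public void startElement(String uri, String local, String qName,
                             Attributes atts) {
        if ("simple-page-master".equals(local)) {
            inSimplePageMaster = true;
        } else if ("region-body".equals(local) && inSimplePageMaster) {
            // region-body logic lives here, stranded from its siblings
        }
        // ...and every element of every other subtree arrives here too
    }
}

// Pull (StAX-style): the application drives. One method handles the
// whole subtree, and the context is simply the Java call stack.
class PageMasterParser {
    // Called with the reader positioned on <simple-page-master>.
    void parseSimplePageMaster(XMLStreamReader in) throws XMLStreamException {
        while (in.nextTag() == XMLStreamConstants.START_ELEMENT) {
            if ("region-body".equals(in.getLocalName())) {
                parseRegionBody(in);  // consumes the whole region-body subtree
            } else {
                skipSubtree(in);      // ignore anything not handled here
            }
            // ...siblings are handled in the same scope, in document order
        }
    }

    void parseRegionBody(XMLStreamReader in) throws XMLStreamException {
        // real code would read the attributes and children; for the
        // sketch, just consume up to the matching end tag
        skipSubtree(in);
    }

    // Consume everything up to and including the matching end tag.
    private void skipSubtree(XMLStreamReader in) throws XMLStreamException {
        int depth = 1;
        while (depth > 0) {
            int e = in.next();
            if (e == XMLStreamConstants.START_ELEMENT) depth++;
            else if (e == XMLStreamConstants.END_ELEMENT) depth--;
        }
    }
}

The pull version reads as the algorithm it implements, which is exactly the point about documentation, readability and maintenance.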


- it glues processing to a particular XML structure, which is not so bad for vocabularies with a well-defined and stable content model. The question is whether xsl-fo is such a vocabulary. I think it isn't. As a matter of fact, xsl-fo isn't even expressible in a DTD or schema, quite apart from the possibility of extensions.
I think that a W3C Recommendation qualifies as a well-defined and stable vocabulary. Hmm. Well, you know what I mean. It changes only infrequently; the changes are well-defined, and any change is going to involve changes, possibly major, to many parts of the code base anyway. It certainly cannot be described as a dynamic vocabulary.


- is there a performance penalty? I've always thought that easy-to-use stuff costs something.
Of course there is, as I mentioned recently. And as I also said, the cost of parsing is small relative to FOP's intensive downstream element processing. Obviously, you would look at optimising it as much as possible.

- more?

Note also that the structure of the code does its own validation. It generates the simple-page-master subtree according to the content model

(region-body,region-before?,region-after?,region-start?,region-end?)
That's good, but unfortunately it's not full-fledged validation. To many people, I believe, "doing your own validation" is a bad thing. If we need validation, it should be done by a specialized validation module, and not scattered throughout the whole code.
Much of FOP's validation has to be self-validation anyway, because so many of the constraints are context-dependent. The whole question of validation is context-dependent. If you are engaged in the peephole processing of SUX, you may be obliged to use external validation. With top-down processing you have more choice, because your context travels with you.

Don't get me wrong here. I'm not saying that external validation is wrong, merely that with a pull model, the need is reduced. There may still be a strong case for it, but not as strong as with SUX.
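
To show what I mean by the context travelling with you, here is a pull-style reader enforcing the simple-page-master content model quoted above, in line. Again this is StAX-flavoured and hypothetical, not the actual FOP code.

import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import java.util.Arrays;
import java.util.List;

class SimplePageMasterReader {
    // (region-body, region-before?, region-after?, region-start?, region-end?)
    private static final List<String> ORDER = Arrays.asList(
            "region-body", "region-before", "region-after",
            "region-start", "region-end");

    // Called with the reader positioned on <simple-page-master>.
    void readRegions(XMLStreamReader in) throws XMLStreamException {
        int next = 0;             // earliest position in ORDER still allowed
        boolean sawBody = false;
        while (in.nextTag() == XMLStreamConstants.START_ELEMENT) {
            int at = ORDER.indexOf(in.getLocalName());
            if (at < next) {      // unknown (-1), repeated, or out of order
                throw new XMLStreamException("unexpected "
                        + in.getLocalName() + " in simple-page-master");
            }
            if (at == 0) sawBody = true;
            next = at + 1;
            skipSubtree(in);      // stand-in for the real region processing
        }
        if (!sawBody) {
            throw new XMLStreamException("region-body is required");
        }
    }

    // Same helper as in the earlier sketch: consume everything up to
    // and including the matching end tag.
    private void skipSubtree(XMLStreamReader in) throws XMLStreamException {
        int depth = 1;
        while (depth > 0) {
            int e = in.next();
            if (e == XMLStreamConstants.START_ELEMENT) depth++;
            else if (e == XMLStreamConstants.END_ELEMENT) depth--;
        }
    }
}

The ordering and cardinality checks fall out of ordinary control flow; nothing has to be deferred to a separate validation pass, and the error reporting knows exactly where it is.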

And a final question: what's wrong with SAX, apart from its possible complexity?
Isn't that enough?

Peter
--
Peter B. West  [EMAIL PROTECTED]  http://www.powerup.com.au/~pbwest/
"Lord, to whom shall we go?"

