> As you've noted, a lot of examples, and suggested solutions to problems posted
> to this list, use an alternate approach: the generators present data to the
> pipeline which uses Transformers and Actions to do often complex operations.
> The data itself is often wrapped in a page markup language making it easier
> to transform it to other forms (HTML/PDF/etc).
>
> This looks like it's breaking MVC, and it probably is.
Not necessarily (though it often seems to be true with XSP): consider the case
where your MVC is implemented in XML and XSLT, with Java just providing a way
to produce the XML. In such a case you exploit the Cocoon pipeline to separate
things, with separate transformation passes (and separate XSLT) implementing
each piece. It's a little foggy whether the XML or the XSLT implements each
piece; the paradigm is twisted enough that it doesn't map completely cleanly.
However, you can still get good separation of function if you don't mind the
fact that it's a combination of rules specified in XML and implemented in XSLT
that ends up implementing the complete MVC pattern (as opposed to a single
Java or JSP file). There's a rough sitemap sketch of what I mean at the end of
this mail.

> It's at this point that you're starting to treat the sitemap as a programming
> language rather than a declarative means of gluing together components.
>
> (Aside: anyone notice how close the Sitemap is becoming to a source file?
> Imports: map:components; Instance Variables: component/global params
> added in 2.1; Methods: pipelines. There's a danger there in making this
> environment too programmer oriented).

Yes, that's a good observation. In particular, the sitemap matching
capabilities begin to become a rule processor.

It strikes me that a more generalized version of the sitemap would allow XPath
traversal of the current "pipeline" contents, to match to a template which
produces a "map". In other words, the pipeline would just fire off an XSLT
that has the current Cocoon contexts available to it as parameters or document
sources. It would be able to parse these as needed and directly invoke other
Cocoon components. Cocoon's pipeline processing becomes the equivalent of
running a transform on a series of XML files which specify what matching
rules are to be fired. (The results of the transform could be fed to a filter
that implements the current Cocoon pipeline capabilities.) The default
transform would be the identity transform, to handle something isomorphic to
the current version of the sitemap, but you could then plug in your own XSLT
to customize the flow (you'd map various applications with different XML
inputs). This way Cocoon not only provides a way of running transformations,
but the rule engine for deciding which transforms to run is itself just
another transform. There's a second sketch of that idea at the end of the
mail as well.

Perhaps this is where flowmaps are headed? I haven't had a chance to look at
anything in 2.1 yet....

> So my general advice is: if the logic is reusable, then make it a transformer/action
> so you'll have the most reuse. If it's not, hide it away.

My advice would be more like this: stop thinking in procedural terms and stop
thinking of using Java to implement everything. Use XML and XSLT and exploit
their capabilities!
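
Here is the sitemap sketch mentioned above. It's only meant to show the shape
of the thing, not a working configuration: the URI pattern, file names and
stylesheet names are all made up, but the pipeline elements are the ordinary
Cocoon 2 ones.

  <map:match pattern="orders/*.html">
    <!-- Model: the Java side (or a plain file) only has to produce this XML -->
    <map:generate src="data/orders/{1}.xml"/>
    <!-- First pass: the "business" rules, expressed as one XSLT stylesheet -->
    <map:transform src="stylesheets/order-rules.xsl"/>
    <!-- Second pass: the view, mapping the result onto page markup -->
    <map:transform src="stylesheets/order2html.xsl"/>
    <map:serialize type="html"/>
  </map:match>

Each piece of the split lives in its own transformation pass, and what glues
the pieces together is the sitemap rather than a single Java or JSP file.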
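
And here is a sketch of the "sitemap as just another transform" idea. None of
this exists in Cocoon today: the <request> and <pipeline>/<generate>/
<transform>/<serialize> element names are invented, and the context document
being matched against is hypothetical. The point is only that the matching
rules become ordinary XSLT templates, with the identity transform as the
default rule.

  <?xml version="1.0"?>
  <xsl:stylesheet version="1.0"
                  xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

    <!-- Rule: requests under reports/ get their own pipeline -->
    <xsl:template match="request[starts-with(@uri, 'reports/')]">
      <pipeline>
        <generate  src="data/{@uri}.xml"/>
        <transform src="stylesheets/report2page.xsl"/>
        <serialize type="html"/>
      </pipeline>
    </xsl:template>

    <!-- Default rule: the identity transform, leaving you with something
         isomorphic to the current sitemap behaviour -->
    <xsl:template match="@*|node()">
      <xsl:copy>
        <xsl:apply-templates select="@*|node()"/>
      </xsl:copy>
    </xsl:template>

  </xsl:stylesheet>

The output of a transform like this would then be handed to the filter that
implements the current pipeline capabilities, as described above.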