Yes, that does help. I read up on the command line interface and I see that it neatly solves the serialize-to-file problem (and others). Thanks for the tip.
But what about the fragment question? Say I have a single source file that will generate these pages:
1. full text view
2. abstract-only view
3. figure 1 page
4. figure 2 page
5. table 1 page
etc.
From what I understand so far about Cocoon, it seems I would have to parse the file 5 or more times, once for each of the output page types. Is there a better way?
I feel like I need a process to make XML fragments for these, then call them individually for processing. Or is that not the cocoon way?
Thanks again,
Fred
At 09:09 PM 9/9/03 +0200, you wrote:
> This might be a bit outside of normal Cocoon usage. Has anyone else
> had any experience with this approach? Am I missing something obvious?
> Is there a better way?
Have you seen that Cocoon can be run from the command line? In that mode, it produces a static file for each matched URI in the sitemap, and it can follow links between the generated files. The Cocoon documentation is built this way; offline generation was in fact Cocoon's original purpose. Apache Forrest also uses Cocoon like this, generating static HTML and PDF documents alike. I'm doing the same thing myself to generate a website offline.
> I'm wondering if the document should be split up into fragments. How would
> something like this be done with Cocoon? Can you serialize to a disk file?
Yes. When you run Cocoon from the command line, a file is automatically created for each matched URI, containing the serialized content.
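For reference, a run looks roughly like this; the script name and the exact flags vary between Cocoon releases (the CLI prints its own usage summary), and the directory names here are made up for the example:

    # Build static files from the sitemap found in build/webapp,
    # writing one output file per matched URI into build/site.
    # Cocoon crawls links in the generated pages and renders those too.
    ./cocoon.sh cli -c build/webapp -d build/site -w build/work index.html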
The generation process could be split between different matchers, each one composing some part of the document. You could even shield the inner processing from outside requests by putting it in so-called "internal" pipelines. The "external" pipelines would then drive the processing: they aggregate the content built by the internal pipelines, transform it further, and serialize it.
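To make that concrete, here is a minimal sitemap sketch. The URI patterns, the stylesheet names, and the "fragment" URI space are invented for this example; only the sitemap constructs themselves (internal-only pipelines, map:aggregate, cocoon:/ sources) are standard Cocoon:

    <map:pipeline internal-only="true">
      <!-- Builds one fragment; reachable only through cocoon:/ URIs -->
      <map:match pattern="fragment/abstract/*">
        <map:generate src="articles/{1}.xml"/>
        <map:transform src="stylesheets/extract-abstract.xsl"/>
        <map:serialize type="xml"/>
      </map:match>
    </map:pipeline>

    <map:pipeline>
      <!-- External pipeline: aggregates fragments and renders the page -->
      <map:match pattern="*/abstract.html">
        <map:aggregate element="page">
          <map:part src="cocoon:/fragment/abstract/{1}"/>
        </map:aggregate>
        <map:transform src="stylesheets/page2html.xsl"/>
        <map:serialize type="html"/>
      </map:match>
    </map:pipeline>

If the pipelines are cacheable, Cocoon can also reuse a fragment's output across the different page types, so the source document need not be re-parsed once for every view.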
Is that of any help?
Olivier
-----Original Message-----
From: Fred Toth [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, 9 September 2003 16:01
To: [EMAIL PROTECTED]
Subject: Large documents and fragments?
Hi,
We work in the scientific publishing industry, and our typical source materials are fairly large XML files, each containing a journal article with all the usual stuff: abstracts, bibliographic references, figures, tables, etc.
One of these documents typically yields multiple individual pages. For example, we will have an abstract page, a full text page, a figure 1 page, etc. Further, we will aggregate bits of 50 documents or so to produce a table of contents.
I am looking for the best way to approach this with Cocoon. Is it practical to have a single source document drive all of these pages? I'm wondering if the document should be split up into fragments. How would something like this be done with Cocoon? Can you serialize to a disk file?
Also note that we are likely to be generating HTML offline rather than using Cocoon to serve pages. But we still want to take advantage of sitemaps, pipelines, and all the other goodies to get the job done.
This might be a bit outside of normal Cocoon usage. Has anyone else had any experience with this approach? Am I missing something obvious? Is there a better way?
Many thanks!
Fred
