I'm about to build something similar, but in 1.2: a per-section RSS feed and sitewide aggregation by dc:date, as well as "searchable" aggregation based on matches against other metadata.

I don't know yet whether I'm going to do this by building a custom generator or by using more Cocoon-based features like XInclude or CInclude, and whether I'm going to use the sitemap, build a separate chronological list running in parallel to the sitemap, or just parse the index files.

I'm leaning toward building a custom generator that parses the index files, so I can get exactly what I want, hopefully quickly by sticking to SAX events rather than building DOMs, and with easy sorting (XSL sorting seems so, well, flimsy to me).
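To illustrate the SAX-events-not-DOMs idea outside of Cocoon: a minimal, self-contained sketch that streams each index document through a SAX handler, collects just the dc:date values, and sorts them in Java instead of XSL. The element name, document shape, and sort order are assumptions for the example, not anything from an actual Lenya publication.

```java
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.*;

public class MetadataScan {

    /** Collects the text of the first dc:date element; everything else is skipped. */
    static class DateHandler extends DefaultHandler {
        private boolean inDate = false;
        final StringBuilder date = new StringBuilder();

        @Override
        public void startElement(String uri, String local, String qName, Attributes atts) {
            if (qName.equals("dc:date") && date.length() == 0) inDate = true;
        }

        @Override
        public void endElement(String uri, String local, String qName) {
            if (qName.equals("dc:date")) inDate = false;
        }

        @Override
        public void characters(char[] ch, int start, int len) {
            if (inDate) date.append(ch, start, len);
        }
    }

    /** Stream each document through SAX (no DOM is built) and return dates newest-first. */
    static List<String> sortedDates(List<String> docs) throws Exception {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        List<String> dates = new ArrayList<>();
        for (String doc : docs) {
            DateHandler handler = new DateHandler();
            factory.newSAXParser().parse(
                new ByteArrayInputStream(doc.getBytes(StandardCharsets.UTF_8)), handler);
            dates.add(handler.date.toString());
        }
        // ISO 8601 dates sort correctly as plain strings; reverse for newest-first.
        dates.sort(Comparator.reverseOrder());
        return dates;
    }

    public static void main(String[] args) throws Exception {
        List<String> docs = Arrays.asList(
            "<page xmlns:dc=\"http://purl.org/dc/elements/1.1/\"><dc:date>2006-01-10</dc:date></page>",
            "<page xmlns:dc=\"http://purl.org/dc/elements/1.1/\"><dc:date>2006-02-01</dc:date></page>");
        System.out.println(sortedDates(docs)); // prints [2006-02-01, 2006-01-10]
    }
}
```

A real Cocoon generator would wrap this logic in the Generator interface and fire SAX events into the pipeline rather than collecting strings, but the memory argument is the same: one small handler per file instead of one DOM per file.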

The other option I'm considering is parsing a sitemap (or sitemap-like file) and using xi:include with an elementpath to aggregate the metadata of whole sections of the site (again avoiding building a DOM for each page), then using XSL to grab what I want from there (again, sorting would have to be done in XSL). Not unlike solprovider's aggregation example recently posted.
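The aggregation document for that approach might look something like the sketch below; the pipeline paths and the xpointer expressions are purely illustrative, not taken from any real publication.

```xml
<!-- Sketch: pull only the metadata block of each section index,
     so the includes never materialize the full pages. -->
<aggregation xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="cocoon:/section-a/index.xml"
              xpointer="xpointer(/page/metadata)"/>
  <xi:include href="cocoon:/section-b/index.xml"
              xpointer="xpointer(/page/metadata)"/>
</aggregation>
```

An XSL step downstream would then filter and sort the collected metadata elements.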

Hopefully I can get one of these to go faster than the XPathDirectoryGenerator with an xpointer, which has been really slow for me. With large sets of files, I've increased the speed by using the regular old DirectoryGenerator, limiting the file set based on some criteria, and then using xi:include/xpointer to pull what I need from that smaller set. I'm expecting xi:include with elementpath to be even faster.
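That two-stage approach can be sketched as a sitemap pipeline; the match pattern, paths, and stylesheet name here are hypothetical placeholders.

```xml
<!-- Sketch: cheap directory listing first, expensive parsing only
     for the files that survive the filter. -->
<map:match pattern="press-feed.xml">
  <!-- Plain directory listing: fast, no per-file XML parsing -->
  <map:generate type="directory" src="content/press/"/>
  <!-- XSL filters the dir:file entries and rewrites each survivor
       into an xi:include pointing at the fragment we actually need -->
  <map:transform src="stylesheets/dir2includes.xsl"/>
  <!-- Resolve the includes; only the selected fragments get parsed -->
  <map:transform type="xinclude"/>
  <map:serialize type="xml"/>
</map:match>
```

The win is that the directory generator touches only the filesystem metadata, so the per-file XML cost is paid only for the reduced set.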

Anyone have any suggestions on which is the best way to go for pure speed? My guess is a custom generator that avoids building DOMs and is cacheable.

I also wonder: is there any speed difference between an XSP and a custom generator once the XSP is compiled? That's another option. Do XSPs use the cache like other types of generators?

-doug

On Feb 13, 2006, at 1:58 PM, Doug Chestnut wrote:

Hi Jörn,

Jörn Nettingsmeier wrote:
Hi all!
The website I'm building needs a personal page for each member of our institute, generated from an XML file. In order to benefit from Lenya's access control, I want to use one file per user, but I also need the aggregated content of all these files (to create lists, menus, etc.). I was thinking about using the directory generator or the XPath directory generator as documented on http://solprovider.com/lenya/aggregatefiles, but I'm not sure whether this will play nicely with the upcoming JCR storage. What is the cleanest and most backend-agnostic way to aggregate the content of all files of a certain doctype under a given document id?
In 1.4, make a usecase that uses the API to get all of the documents under the current one that are of a certain resource type. Feed them to your usecase's JX template, which simply emits a cinclude for each.
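The JX template side of that could look roughly like this; the `documents` variable and the cocoon: URI scheme are assumptions about how the usecase's flow hands its document list to the template.

```xml
<!-- Sketch: one cinclude per document found by the usecase -->
<list xmlns:jx="http://apache.org/cocoon/templates/jx/1.0"
      xmlns:cinclude="http://apache.org/cocoon/include/1.0">
  <jx:forEach var="doc" items="${documents}">
    <cinclude:include src="cocoon:/${doc.id}.xml"/>
  </jx:forEach>
</list>
```

A CInclude transformer later in the pipeline then resolves each include into the aggregated result.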

I am looking into the same thing this week for press releases. I want one usecase that gives me a list of press releases, and another usecase that gives me an RSS feed.

--Doug

This looks like a standard task; I wonder how people are solving it.
Regards,
Jörn

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

