Peter Hunsberger wrote:

> As others have said, one needs to step back and look at the overall
> objective: what do you want Cocoon to do when you feed it a request
> (either via http or CLI or whatever)?  Figure out all the high level
> use cases and their interactions, step back, generalize and repeat.
> Personally, I end up with something more like RDF and ontology
> traversal than I do with scripting...  I don't think many people
> could afford the hardware to do that in real-time for large scale
> web sites, so I come back to XML technologies as a reasonable
> compromise for the near term.

I don't know if this is exactly what you're thinking of, but at my work we are 
developing something which sounds similar - using XML Topic Maps rather than RDF - and 
I think (hope) it will be a powerful technique for knowledge-intensive sites. 

There are two parts to it: 
1) harvesting or mining the knowledge from the various sources (we use XSLT in Cocoon 
pipelines to extract knowledge and encode it as XTM; see the sketch after this list), and 
2) using the semantic network to structure the website itself.
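
To make part 1 concrete, here is a minimal sketch of such a harvesting stylesheet. The 
source vocabulary (<people>/<person> with id, name and homepage attributes) is invented 
purely for this example - the real extraction depends on each source format - but the 
output shape is the point: one XTM topic per harvested entity, with a base name and an 
occurrence pointing back at the source resource.

<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://www.topicmaps.org/xtm/1.0/"
    xmlns:xlink="http://www.w3.org/1999/xlink">

  <xsl:template match="/people">
    <topicMap>
      <xsl:apply-templates select="person"/>
    </topicMap>
  </xsl:template>

  <!-- one topic per person: a base name plus an occurrence
       referencing the original resource -->
  <xsl:template match="person">
    <topic id="{@id}">
      <baseName>
        <baseNameString><xsl:value-of select="@name"/></baseNameString>
      </baseName>
      <occurrence>
        <resourceRef xlink:href="{@homepage}"/>
      </occurrence>
    </topic>
  </xsl:template>

</xsl:stylesheet>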

For this second part we have a sitemap which handles all requests with a simple 
flowscript, passing it the request information. This flowscript looks up the requested 
topic (a concept) in the topic map database (we use TM4J with Hibernate). Then it 
finds an appropriate jxtemplate for rendering that topic, and calls 
sendPage(jxtemplate, topic) to render it. The jxtemplate is responsible for rendering 
topics and inserting xinclude statements to aggregate topic occurrences (resources). 
So 90% of the sitemap consists of pipelines for rendering the various occurrences, all of 
them totally decoupled from the website's external URI space; these pipelines are consumed 
by the rendering templates. The logical structure of the site is entirely in the topic map, 
and so is the choice of page layout for each type of topic; the page layouts themselves are 
just jxtemplates.
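
To sketch this in code (with placeholders where the real system does the work): the 
flowscript below uses hypothetical helpers resolveTopic() and templateFor() where the real 
code talks to TM4J/Hibernate and to the layout information in the topic map; the file names 
and the "topic" parameter are likewise just illustrative.

function handleTopicRequest() {
    // the topic id is extracted from the request URI by the sitemap matcher
    var topicId = cocoon.parameters["topic"];

    // look the concept up in the topic map store; resolveTopic() stands in
    // for the TM4J/Hibernate lookup
    var topic = resolveTopic(topicId);
    if (topic == null) {
        cocoon.sendPage("not-found.jx", { id: topicId });
        return;
    }

    // the topic map also records which page layout suits this type of topic;
    // templateFor() stands in for that lookup
    var template = templateFor(topic);

    // hand the topic to the chosen jxtemplate for rendering
    cocoon.sendPage(template, { topic: topic });
}

The sitemap wiring could then look roughly like this - one catch-all matcher handing every 
public request to the flowscript, plus internal-only pipelines for the occurrences that the 
jxtemplates pull in via xinclude (again, patterns and paths are placeholders):

<map:flow language="javascript">
  <map:script src="flow/topics.js"/>
</map:flow>

<map:pipelines>
  <map:pipeline>
    <map:match pattern="**">
      <map:call function="handleTopicRequest">
        <map:parameter name="topic" value="{1}"/>
      </map:call>
    </map:match>
  </map:pipeline>

  <!-- occurrence-rendering pipelines: internal only, so they never show up
       in the site's external URI space -->
  <map:pipeline internal-only="true">
    <map:match pattern="occurrence/*.html">
      <map:generate src="content/{1}.xml"/>
      <map:transform src="stylesheets/occurrence2html.xsl"/>
      <map:serialize type="html"/>
    </map:match>
  </map:pipeline>
</map:pipelines>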
