Hi,
I occasionally consult for Ginger Alliance on the Charlie architecture, and
this seems like a good point to include a little plug for Charlie.
Niklas Lindström wrote:
> I'm a little puzzled about the mentioned extensions. I believe these should be bound
> to another namespace to avoid violation of the W3C-specs (since only elements in the
> recommendation are allowed in the ns allocated for xsl-elements. Furthermore, I seem
> to recall ns:s beginning with the letter 'x' being reserved for standardized
> namespaces. This is not followed by neither XTs purposed extension-prefixes nor
> Cocoon's XSP though).
I don't understand the purpose of the XSLT extensions suggested, or perhaps
used, by XT or XSP. We would indeed appreciate some hints/explanations in
this area; it might well turn out to be the best thing under the sun that we
have underestimated so far ... Anyway, within the Charlie architecture the
role of Sablotron is to be a straightforward template/transformation
processor. We have no need to burden XSL or Sablotron with application
control or data generation logic, because that's what Charlie is all about
(see below).
... weird perl-in-XML example deleted ...
> What I'm getting to is simply an open question: could this technique in any way be
> recommended when putting together dynamic xml-pages? Or would it simply be slow,
> unsafe and practically silly? It's been brought to my immediate attention when,
> having spent some time leaving the perl-hacking behind and practising xslt in a
> java-driven environment, I came upon first xml::xslt and then this Sablotron; and
> thus the possibility to merge perl and xslt together in an efficient way. (The works
> of Matt Sergeant have also inspired me much, although I haven't had the time to try
> them out very much.)
> Perhaps using ePerl, embPerl or mentioned apache::asp would be possible/much more
> appropriate for this? (Or just glueing together a preprocessing thing before using
> such "dynamic pages"..)
> I'm just throwing this here hoping it won't bother anyone, although perhaps a
> little far from Sablotron-development in essence? [Though for me - an inspiring pair.
> ]
I guess we can all agree that building web applications in two parts - data
generation logic, and a presentation interface for that data in the form of
templates - has many advantages, and that XML/XSL provides a great combo for
this: XML as the interface for the generated data, and XSLT "templates" as
the interface-building tool. Other examples of this architecture are
Perl/EmbPerl or whatever-scripting-language/ASP. Now I can see several ways
to handle the coupling of (XML) data with (XSLT) templates:
1) "active pages". The URLs refer to the HTML page templates themselves,
which in turn have "embedded" instructions of some sort that generate the
data to fill in. This is a very popular approach, used by ASP, PHP, Cold
Fusion, EmbPerl in mod_perl mode, and I presume XSP as well. The advantage
is a gentle learning curve: it's very easy for beginners to go and develop
simple applications. Every HTML page is a valid "program"; just add a couple
of special "embedded" commands and you have a simple interface to a database
table etc etc. From the point of view of a larger web application this
approach has serious drawbacks, though. It involves a lot of template
duplication (if you want to use the same template for different kinds of
data), very poor control logic (when you call a certain URL, you will get
something that looks like the page template regardless of what input data
you get) and consequently very poor error handling. If you try to get around
these limitations, your active pages get *very ugly*, entirely missing the
original point of simple, intuitive applications.
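To make the "active pages" idea concrete, here is a toy sketch in
JavaScript. The <%= ... %> command syntax and the evalPage() helper are
invented for illustration; real systems like ASP or PHP have their own
syntax and far richer processing.

```javascript
// An "active page": the URL points at this template itself, and the
// embedded commands pull in data generated at request time.
const page = "<html><body><h1>Hello, <%= user %>!</h1></body></html>";

// A toy processor: replace each embedded command with a value
// produced by the data-generation step (here just a lookup table).
function evalPage(template, data) {
  return template.replace(/<%=\s*(\w+)\s*%>/g, (_, name) => data[name]);
}

const html = evalPage(page, { user: "Niklas" });
// html === "<html><body><h1>Hello, Niklas!</h1></body></html>"
```

Note how the page doubles as the "program" - which is exactly what makes
this easy for beginners, and exactly what makes control logic and error
handling so awkward once the application grows.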
2) explicit calls to a template processor. The URL points to a CGI script,
or mod_perl script, or Java servlet - simply to a piece of code/logic -
which generates the data, and at the end of the processing passes the data
via an explicit call to a template processor, such as Sablotron or EmbPerl
in offline mode. This could be called the "traditional" model, because it
directly follows the model of standalone CGI scripts generating HTML pages
directly. The problem here is that the script has to have the template
name/URI coded in (unless you design a special framework/library for this).
Thus changing the "look" of the application, and restructuring the
application, get hard.
A hybrid case between this approach and 3) is AxKit - a separate framework
for template management, which however requires the script generating the
XML data to include the full URI of the template.
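The "explicit call" model, and its hard-coded-template drawback, can be
sketched like this. The generateXml() and processTemplate() functions are
hypothetical stand-ins, not a real Sablotron or AxKit API.

```javascript
// The data-generation logic; a real script would query a database,
// talk to a backend, etc., and emit XML.
function generateXml(query) {
  return "<result><row>" + query + "</row></result>";
}

// A stand-in for an explicit call into a template processor such as
// Sablotron; here it just records which template was applied to what.
function processTemplate(templateUri, xml) {
  return "[" + templateUri + " applied to " + xml + "]";
}

// The drawback: the template URI is coded into the script, so changing
// the application's "look" means editing every script that uses it.
const xml = generateXml("42");
const out = processTemplate("file://templates/result.xsl", xml);
```

Approach 3) below moves exactly this pairing decision out of the script
and into a separate object.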
3) independent template manager. What Charlie (and possibly others - we'd
like to know about them) does is define a new type of object: an action.
The user's request points at an action, which itself decides which XML data
source (typically a URL) to use and which XSLT template to couple it with.
Actions are pieces of JavaScript code, which allows unlimited flexibility in
combining data generators (typically CGI/mod_perl scripts or Java servlets
generating XML data) with templates. There is also a system of "default
actions" for simple cases. At this point Charlie only supports XSLT
templates and Sablotron, but it could be adapted to any template language
working on XML data. Check http://www.gingerall.com for more info and some
examples of Charlie.
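A hedged sketch of what an action might look like - the request shape and
the returned pair are invented for illustration, not Charlie's actual API:

```javascript
// An "action": a piece of JavaScript that inspects the request and
// decides, per request, which XML data source to couple with which
// XSLT template. Neither is hard-coded into the data-generating script.
function action(request) {
  if (request.path === "/orders") {
    return {
      xml: "http://data.example/orders.cgi?user=" + request.user,
      template: "orders.xsl"
    };
  }
  // a "default action" for the simple cases: derive both sides
  // of the pair from the request path by convention
  return {
    xml: "http://data.example" + request.path + ".xml",
    template: "default.xsl"
  };
}

const pair = action({ path: "/orders", user: "honza" });
// pair.template === "orders.xsl"
```

Because the pairing logic lives in the action rather than in the data
generator, restyling or restructuring the application means editing
actions, not scripts.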
It might sound like overkill to use a scripting language (JavaScript) for
something only a little bit more flexible than what AxKit does with simple
rules, but we have bigger plans for Charlie. Charlie might run on a
different machine than the WWW/XML server, even on the same machine as the
user's browser (or become a module of Mozilla), and actions, XML and XSL
files can be cached. An action might look inside the XML data and cache
some of it selectively for future re-use - in such a case JavaScript
becomes a control language for the whole environment. And eventually, if
everything in the Charlie framework (XML and XSL files, actions, selected
data from XML files) can be cached locally, we may start thinking about
building offline web applications. But that is still a distant future.
Honza