Dear Ichiro,
I take your point that what I propose does not fit in the JSPWiki "spirit".
The need is to associate parts of a wiki page with an "entity": something
pointed to by a URL, with data associated with it that needs to be
formatted for display.
From the previous discussion, it is now clear to me that the WikiPage
author does not need to enter data there, but should obtain the desired
display simply by making the reference.
Let's forget JSP and concentrate on a plugin like the others: make a
generic plugin that can be derived for individual needs, allowing one to write:
* [{DLAPA rec="12345"}]
* [{DLAPA rec="67890"}]
...
This would return WikiMarkup (not HTML) displaying records 12345 and
67890 from the source application named "DL", formatted in APA style. For
another source or another format, another plugin would be defined by a
programmer.
The plugin would therefore be responsible for accessing the application,
getting the data and formatting it as WikiMarkup.
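To make that concrete, here is a minimal sketch of what such a plugin could
look like, assuming the standard JSPWiki plugin API (WikiPlugin.execute
receiving a WikiContext and a Map of parameters). Package names are from the
2.8 line (com.ecyrd.jspwiki; the Apache releases use org.apache.wiki), and
DLClient/DLRecord are hypothetical stand-ins for whatever accesses the "DL"
application:

// Sketch only: the proposed DLAPA plugin as a normal JSPWiki plugin.
// DLClient and DLRecord are hypothetical accessors for the "DL" application.
import java.util.Map;

import com.ecyrd.jspwiki.WikiContext;
import com.ecyrd.jspwiki.plugin.PluginException;
import com.ecyrd.jspwiki.plugin.WikiPlugin;

public class DLAPA implements WikiPlugin
{
    public String execute( WikiContext context, Map params ) throws PluginException
    {
        String rec = (String) params.get( "rec" );
        if( rec == null )
        {
            throw new PluginException( "Parameter 'rec' is mandatory" );
        }

        // Hypothetical accessor: fetch the record from the DL application...
        DLRecord r = DLClient.fetch( rec );

        // ...and format it as WikiMarkup (not HTML), APA style.
        return r.getAuthors() + " (" + r.getYear() + "). //" + r.getTitle()
               + "//. " + r.getPublisher() + ".";
    }
}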
One generic plugin could use SPARQL servers as its source, parameterized
through configuration files. The SPARQL result field names could be
inserted into a WikiMarkup template. Simple and efficient? Example:
[{SPARQL config="dbpediaCountriesPopulation" param="Belgium"}]
This would use the configuration "dbpediaCountriesPopulation" to access
DBpedia and return a WikiMarkup table of Belgium's population statistics.
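A minimal sketch of such a generic plugin, assuming Apache Jena ARQ as the
SPARQL client; SparqlConfig (the configuration lookup) and the result
variable names "year" and "population" are illustrative assumptions, not an
existing API:

// Sketch only: a configuration-driven SPARQL plugin using Apache Jena ARQ.
import java.util.Map;

import com.ecyrd.jspwiki.WikiContext;
import com.ecyrd.jspwiki.plugin.PluginException;
import com.ecyrd.jspwiki.plugin.WikiPlugin;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QuerySolution;
import com.hp.hpl.jena.query.ResultSet;

public class SPARQL implements WikiPlugin
{
    public String execute( WikiContext context, Map params ) throws PluginException
    {
        String config = (String) params.get( "config" );
        String param  = (String) params.get( "param" );

        // Hypothetical configuration lookup: an endpoint URL and a SPARQL
        // query template with a %s placeholder, read from a properties file.
        SparqlConfig cfg = SparqlConfig.load( config );
        String query = String.format( cfg.getQueryTemplate(), param );

        StringBuilder wiki = new StringBuilder( "|| Year || Population\n" );
        QueryExecution qe = QueryExecutionFactory.sparqlService( cfg.getEndpoint(), query );
        try
        {
            ResultSet rs = qe.execSelect();
            while( rs.hasNext() )
            {
                QuerySolution row = rs.nextSolution();
                wiki.append( "| " ).append( row.get( "year" ) )
                    .append( " | " ).append( row.get( "population" ) )
                    .append( "\n" );
            }
        }
        finally
        {
            qe.close();
        }
        return wiki.toString();   // a WikiMarkup table
    }
}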
Excuse me for not sharing your enthusiasm for StAX: it is essential for big
documents (I use it for exactly that: RDF to XML transformations...), but
WikiPages are not that long, and templates are hard enough to write that I
prefer to keep them unconstrained. Anyway, the main problem today is to
DEFINE the process to translate (normalize) XHTML into WikiMarkup. XSLT is
certainly a way to experiment (and to share results). Shall we start by
bringing together test cases?
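As a starting point for those XSLT experiments, a throwaway harness needs
nothing beyond the JDK's javax.xml.transform; the stylesheet name
xhtml-to-wiki.xsl below is only a placeholder for the stylesheet (and test
cases) still to be defined:

// Throwaway harness for experimenting with an XHTML -> WikiMarkup stylesheet.
import java.io.File;
import java.io.StringWriter;

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XhtmlToWiki
{
    public static String translate( File xhtmlFile, File stylesheet ) throws Exception
    {
        Transformer t = TransformerFactory.newInstance()
                                          .newTransformer( new StreamSource( stylesheet ) );
        StringWriter wiki = new StringWriter();
        t.transform( new StreamSource( xhtmlFile ), new StreamResult( wiki ) );
        return wiki.toString();   // WikiMarkup produced by the stylesheet
    }

    public static void main( String[] args ) throws Exception
    {
        // e.g. java XhtmlToWiki page.xhtml xhtml-to-wiki.xsl
        System.out.println( translate( new File( args[0] ), new File( args[1] ) ) );
    }
}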
Thanks for the good discussion and have a very nice day!
Christophe
On 14/11/2012 01:48, Ichiro Furusato wrote:
On Tue, Nov 13, 2012 at 9:19 PM, Christophe Dupriez
<dupr...@squadratic.com> wrote:
Dear Ichiro,
Thanks a lot for your long answer! I will try to make a short one:
1) For the different Schema.org use cases, what you suggest (if I understand
you well) is "It is already in the database, stupid!" And you are quite
right. This means that if one deposits even a simple interwiki link (and a
template identifier), the interwiki link can be used to identify a database
object, plus the identifier to select a JSP template, and the wiki page can
simply include the result. These are minimal changes and I will give it a try.
I'm not entirely sure what you mean by the "It" in the "It is
already..." and I'm
a bit confused as to how you think the JSP templates are connected to either
the plugin API or to the backend. To my understanding there is a distinct and
desirable independence between them.
I suppose my only point is that a graph-based backend could either be used
in a "flat" manner, where all wiki pages are connected to a single root node,
or a hierarchical (or even graph-structured!) wiki could take advantage of the
actual graph structure of the database. I.e., every graph node has a single,
canonical path -- its connection to its original parent node -- and that path
could be used as the basis of its canonical URL (there could be others,
depending on how it is interconnected to other pages). This whole concept of
a graph-structured wiki is the central innovation in my work, BTW.
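To illustrate the idea, each node would keep a reference to its original
parent and derive its canonical path from that chain; a rough sketch, with
purely illustrative names:

// Illustrative only: a node's canonical path derived from its original parent.
public class WikiNode
{
    private final String   name;
    private final WikiNode originalParent;   // null for the root node

    public WikiNode( String name, WikiNode originalParent )
    {
        this.name = name;
        this.originalParent = originalParent;
    }

    /** The single canonical path back to the root, e.g. "/Projects/LinkManager". */
    public String canonicalPath()
    {
        return originalParent == null ? "" : originalParent.canonicalPath() + "/" + name;
    }
}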
Question to the community: when a plugin has to produce output, should it
take its template from some wiki markup in a page or from a JSP template? I
am more and more attracted by the latter (I do not see end users changing
the templates, and I want maximal flexibility for them).
Again, confused. A wiki plugin doesn't have any access to its template context;
it only receives a WikiContext and a Map of parameters. It doesn't (necessarily)
have access to either the entire content of the page (though it can of course
gain that via the WikiContext and the WikiEngine) or to its JSP template. So I
don't quite understand where you are going with this.
2) the Arbortext DLM concept is very interesting. Do you know any open
initiative going in the same direction?
No, I'm not aware of anything. It's a very complex project and tied very tightly
to their product line. What I'd aim to do is make an application-independent
link manager.
3) XHTML to JSPWiki: the current XHtmlToWikiTranslator is DOM-based, which
is not too bad. I am unsure whether XSLT is easier to develop and maintain
than DOM-based Java procedures.
Actually, the DOM is probably quite a barrier for a number of reasons.
First, it's very old and clunky technology with an outdated and
difficult-to-use API, and second, its performance is pretty poor, especially
with large documents. Any translator should be refactored to use either StAX
(ideally), or, if a document object model is required, Apache Axiom (which
uses StAX as its stream/events parser). I've recently been using Axiom and
it's very fast, especially with the Woodstox StAX implementation.
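For comparison, the bare StAX cursor loop is short; javax.xml.stream is part
of the JDK, and with the Woodstox jar on the classpath
XMLInputFactory.newInstance() normally returns the Woodstox implementation
via the service-provider lookup:

// The bare StAX cursor loop: stream through a document, list element names.
import java.io.FileInputStream;

import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxExample
{
    public static void main( String[] args ) throws Exception
    {
        XMLStreamReader in = XMLInputFactory.newInstance()
                                            .createXMLStreamReader( new FileInputStream( args[0] ) );
        while( in.hasNext() )
        {
            if( in.next() == XMLStreamConstants.START_ELEMENT )
            {
                System.out.println( in.getLocalName() );
            }
        }
        in.close();
    }
}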
While XSLT experience is certainly rarer than Java experience, if the skills
were available an XSLT approach would actually be cleaner and easier to maintain.
DOM or other object model approaches to transformation are a great deal more
difficult. XSLT was *designed* for transforming markup.
For Digital Libraries, DSpace and JSPWiki (SKOS concepts integration), I
described my work at an ISKO meeting two years ago:
http://www.iskouk.org/events/presentations/destin-isko-ucl-100929030224-phpapp01.pdf
Thanks -- I'll be sure to look into that as I've long been a fan of
DSpace, though
as yet have not had a chance to do any project work using it.
Ichiro