On Thu, 7 Nov 2002, Bertrand Delacretaz wrote:
> On Thursday 07 November 2002 17:52, Nicola Ken Barozzi wrote:
> > . . .
> > We can import the wiki sources and use that, with the Chaperon stuff.
> >
> > Imagine that we have a system that gets all wiki pages and transforms
> > them in Forrest format...
> > . .
>
> Certainly cool, but are you confident about implementing the complete JSPWiki
> grammar using Chaperon? Maybe talking the JSPWiki guys into refactoring their
> parser to make it standalone (and use it in Cocoon) would be an option?
>
> Not to downplay Chaperon in any way, but from what I've seen most wiki
> systems use regular-expression based "text analyzers", which are "fuzzier"
> than real parsers like Chaperon and might be more suited to wiki text parsing.

Just for your information: I am currently rewriting the parser to decouple
the text scanner from the parser, among other things. One transformer I have
planned is a text "tokenizer", e.g. for colorizing source code. But I don't
think that a simple "text analyzer" will help you.

> I haven't heard from the wikiland project for a while, but it seems like they
> were having problems with this [1].
>
> -Bertrand
>
> [1] http://article.gmane.org/gmane.comp.web.wiki.wikiland/32
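To make the distinction concrete, a regex-based "text analyzer" of the kind Bertrand describes is essentially an ordered list of pattern/replacement rules applied to each line, with no real grammar behind it. A minimal sketch (the markup rules below are illustrative, not JSPWiki's actual syntax):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of a regex-based wiki "text analyzer": each rule is a
// pattern/replacement pair applied in order to a line of wiki text.
// The rules here (__bold__, ''italic'') are hypothetical examples only.
public class WikiAnalyzer {
    private static final Pattern BOLD   = Pattern.compile("__(.+?)__");
    private static final Pattern ITALIC = Pattern.compile("''(.+?)''");

    public static String toHtml(String line) {
        // Apply each rule in turn; there is no parse tree, just
        // successive substitutions -- which is what makes this
        // "fuzzier" than a grammar-driven parser like Chaperon.
        line = BOLD.matcher(line).replaceAll("<b>$1</b>");
        line = ITALIC.matcher(line).replaceAll("<i>$1</i>");
        return line;
    }

    public static void main(String[] args) {
        System.out.println(toHtml("__hello__ ''world''"));
        // prints "<b>hello</b> <i>world</i>"
    }
}
```

The weakness such analyzers share, and the reason a real parser may still be preferable, is that substitution rules cannot express nesting or context: each regex fires independently of the others.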
