While the parser is primarily used to convert wikitext to PDF, many of us use  
it for other purposes entirely. Getting back a structured parse tree, rather  
than HTML formatting, can be useful.

If nothing else, is the action=parse feature faster than the mwlib parser?

Other Wikipedia processors I have played around with that relied on the  
default MediaWiki parser were not impressively fast.
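For anyone curious what an action=parse request looks like, here is a minimal  
sketch of building one. The endpoint URL and the helper name are illustrative  
(any MediaWiki 1.12+ install exposes api.php); the parameter names follow the  
API page Roan links below.

```python
import json
from urllib.parse import urlencode

# Illustrative helper (not part of mwlib): build an action=parse request
# URL for a wikitext snippet. The endpoint default is just an example.
def build_parse_url(wikitext, endpoint="http://en.wikipedia.org/w/api.php"):
    params = {
        "action": "parse",   # parse wikitext server-side
        "text": wikitext,    # the raw wikitext to parse
        "format": "json",    # machine-readable output
    }
    # urlencode percent-escapes the wikitext for us
    return endpoint + "?" + urlencode(params)

url = build_parse_url("'''Hello''' [[world]]")
print(url)
```

Fetching that URL returns JSON containing the rendered HTML plus the lists of  
links, categories, templates and sections mentioned below; whether that round  
trip beats mwlib's in-process parser is exactly the open question.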

- Joel


On Thu, 11 Sep 2008 01:09:35 +1000, <[EMAIL PROTECTED]> wrote:

> NOTE: Please use "Reply to All" when replying to this message, or CC
> your replies to [EMAIL PROTECTED] . I hardly ever read my gmail.com
> e-mail.
>
> I noticed that mwlib implemented its own wikitext parser. However, the
> MediaWiki API provides a way to parse wikitext through action=parse
> [1], which outputs the resulting HTML along with lists of links,
> language links, external links, categories, images, templates and
> sections present in the wikitext. action=parse was introduced in the
> 1.12 release.
>
> This could potentially replace mwlib's parser, unless you're using it
> to convert wikitext to something other than HTML (PDF?). In this case,
> you could use HTML as an intermediate stage, or I could add a
> parameter to action=parse that returns the DOM tree, so you can output
> PDF or whatever it is you want to output based on that. Let me know
> whether you want that feature.
>
> Roan Kattouw (Catrope)
> Lead developer for the MediaWiki API
>
> [1] http://www.mediawiki.org/wiki/API:Parsing_wikitext#parse


