* Trevor Parscal <[email protected]> [Tue, 01 Jun 2010 11:31:03 -0700]:
> In Berlin I gave a quick demo in the UX working group of a new parser
> I've been writing that understands the structure and meaning of
> Wikitext, rather than just converting it on the fly into HTML like the
> current parser (actually a hybrid parser/renderer) does. To be fair, the
> current pre-processor does properly parse things into a node-tree, but
> only for a small subset of the language, namely templates. My
> alternative approach parses the entire wikitext document into a
> node-tree, which can then be rendered into HTML (or any format, for that
> matter) or back to wikitext. By having a unified data-structure for an
> entire article, we can do all sorts of things that were never before
> possible.
>
XML- and DOM-style processing, probably, too: traversing or modifying
particular parts of a wiki page, rather than only the whole page at once.
Though the current parser probably just wants to be fast.
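As a rough illustration of what such a node-tree buys you (hypothetical
node classes and a toy grammar, not Trevor's actual parser): one tree can
be rendered to HTML *or* serialized back to wikitext, and any subtree can
be traversed or modified on its own.

```python
# Toy sketch: parse a tiny subset of wikitext ('''bold''' only) into a
# node tree, then render the same tree as HTML or back to wikitext.

class Text:
    def __init__(self, value):
        self.value = value
    def to_wikitext(self):
        return self.value
    def to_html(self):
        return self.value

class Bold:
    def __init__(self, children):
        self.children = children
    def to_wikitext(self):
        return "'''" + "".join(c.to_wikitext() for c in self.children) + "'''"
    def to_html(self):
        return "<b>" + "".join(c.to_html() for c in self.children) + "</b>"

class Document:
    def __init__(self, children):
        self.children = children
    def to_wikitext(self):
        return "".join(c.to_wikitext() for c in self.children)
    def to_html(self):
        return "".join(c.to_html() for c in self.children)

def parse(src):
    """Parse plain text and '''bold''' spans -- a deliberately tiny grammar."""
    nodes, i = [], 0
    while i < len(src):
        if src.startswith("'''", i):
            end = src.find("'''", i + 3)
            nodes.append(Bold([Text(src[i + 3:end])]))
            i = end + 3
        else:
            nxt = src.find("'''", i)
            nxt = len(src) if nxt == -1 else nxt
            nodes.append(Text(src[i:nxt]))
            i = nxt
    return Document(nodes)

doc = parse("Hello '''world''' again")
print(doc.to_html())      # Hello <b>world</b> again
print(doc.to_wikitext())  # Hello '''world''' again  (round-trip)
```

The point is the separation: parsing happens once, and rendering (to HTML,
to wikitext, or to anything else) is just another walk over the tree, which
the current convert-on-the-fly parser cannot offer.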

> What we need is to be looking at building a first-class wikitext editor,
> rather than adapting a buggy HTML editing system (ContentEditable).
> Wikitext deserves an editor that thinks in wikitext. Wikitext is a round
> peg, and ContentEditable is a square hole. It doesn't matter how much
> you try to force it in, it will never fit properly. Google has come to
> this conclusion after years of struggling with buggy browsers and poorly
> designed APIs. I would prefer not to go down a long road of hardship and
> struggles just to come out with a similar conclusion.
>
Complex... I wish that were really possible.
Dmitriy

_______________________________________________
Wikitech-l mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
