Andreas Jonsson wrote:
> I'm using an ANTLR-generated front end that interacts with a context
> object that provides the parser with hints.  I'm relying on the token
> stream being stable, so anything that might produce new tokens
> after parsing has to be taken care of by a preprocessor (removing
> comments, inclusions, magic words, etc.).

That actually sounds pretty good!

> Now, computing hints to reproduce the MediaWiki apostrophe parsing is
> ridiculously complex (see the method below), but it is at least a
> separate concern.  Also, I don't think there is anything more complex
> in the syntax, so I feel confident about the general idea of the
> antlr/context combination.
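
For reference, the first pass of that apostrophe handling can be sketched
roughly like this (a simplified illustration, not MediaWiki's actual code;
the later rebalancing pass, which is where most of the complexity lives,
is omitted entirely):

```python
import re

def classify_apostrophe_runs(line):
    """Classify runs of 2+ apostrophes on one line, roughly following
    the first pass of MediaWiki's quote handling: 2 apostrophes mean
    italic, 3 bold, 5 bold+italic; a run of 4 is treated as one literal
    apostrophe plus bold, and a run of 6+ as literals plus bold+italic.
    Returns (kind, literal_count) pairs in order of appearance."""
    out = []
    for m in re.finditer(r"'{2,}", line):
        n = len(m.group())
        if n == 2:
            out.append(("italic", 0))
        elif n == 3:
            out.append(("bold", 0))
        elif n == 4:
            out.append(("bold", 1))          # one apostrophe stays literal
        elif n == 5:
            out.append(("bold-italic", 0))
        else:                                # n >= 6
            out.append(("bold-italic", n - 5))
    return out
```

For example, `classify_apostrophe_runs("''''x")` yields `[("bold", 1)]`:
one apostrophe is kept as text and the rest open bold.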

Nested tables can get pretty nasty, too. And mixed lists with indentations like
*#::* that may or may not match the previous line's indentation might also cause
trouble, I think.
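
A minimal sketch of the prefix comparison involved (illustrative only; the
function names and the ';'/':'-equivalence rule are my assumptions about how
MediaWiki behaves, not its actual code):

```python
def common_prefix_len(prev, curr):
    """Length of the shared list-marker prefix (characters *#;:) between
    the previous line's markers and the current line's.  ';' and ':' are
    treated as interchangeable, on the assumption that a ';' term line
    can be continued by a ':' definition line at the same depth."""
    n = 0
    for a, b in zip(prev, curr):
        if a == b or {a, b} == {";", ":"}:
            n += 1
        else:
            break
    return n

def list_transitions(prev_markers, curr_markers):
    """Return (to_close, to_open): the marker suffix of the previous
    line whose lists must be closed, and the suffix of the current
    line whose lists must be opened."""
    k = common_prefix_len(prev_markers, curr_markers)
    return prev_markers[k:], curr_markers[k:]
```

So going from a `*#::*` line to a `*#:*` line shares the prefix `*#:`,
closes the inner `:*` lists, and opens a fresh `*` list.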

Best of luck,
Daniel

_______________________________________________
Wikitext-l mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/wikitext-l
