> But you're mischaracterizing it as something that can just be done over 
> lunch, or yesterday. It's a lot more involved than that. And it's going to 
> need time to happen. You can be sure it will be discussed publicly, 
> implemented openly, and made available under an open license.

I want to just emphasise this point.  I have considerable experience
designing, specifying and implementing special-purpose languages and
formal language specifications such as Ada.  Dan has said it's not
simple, and I pointed out a number of reasons why it's not simple in
previous posts, but to emphasise: Asciidoc does not fall into any of
the theoretical programming language formalisations for which prior
art and tools exist.

A simple formal grammar is unlikely without special extensions to the
grammar definition methods, or a lot of additional human-language
verbiage.
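
To make that concrete, here is a minimal hand-written C++ sketch
(mine, purely illustrative): whether a '*' acts as a formatting mark
depends on the characters around it, and that kind of context cannot
be expressed in a plain BNF/PEG production without predicates or
similar extensions.  The rule below is a deliberately simplified
assumption; the real constrained-quote rules are more involved.

    // Illustrative only: a context check that a plain grammar production
    // cannot express without semantic predicates or similar extensions.
    #include <cctype>
    #include <cstddef>
    #include <string>

    // Hypothetical helper: decide whether s[i] == '*' opens constrained
    // strong text.  The decision depends on the *surrounding* characters,
    // not on the token itself.
    bool opens_constrained_strong(const std::string& s, std::size_t i)
    {
        if (i >= s.size() || s[i] != '*') return false;
        bool ok_before = (i == 0) || std::isspace((unsigned char)s[i - 1]);
        bool ok_after  = (i + 1 < s.size()) && !std::isspace((unsigned char)s[i + 1]);
        return ok_before && ok_after;   // the context decides, not the '*' alone
    }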

Also, what exactly is to be specified?  Syntactic elements, sure;
local grammar, sure; but with lots of wordy discussion of the
contextual behaviour, and that needs infinite care to ensure it's
unambiguous (look at the stilted language of the C/C++ standards as an
example).  Then semantics: what, exactly, is to be specified there?
Asciidoc can be targeted to many output formats, and many of those
formats can be styled externally to Asciidoc (i.e. CSS), so specifying
the output directly is not appropriate.  Some sort of
meaning/role/class for each construct should be defined that backends
can interpret in a (fairly) consistent manner into their own format.
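
To sketch what I mean by a meaning/role/class model (the names and the
mapping below are mine, illustrative only, not a proposal for the
spec): the standard would define nodes carrying a semantic role, and
each backend supplies its own interpretation of that role.

    // Purely illustrative data model, not a proposed API.
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    struct Node {
        std::string role;              // e.g. "strong", "emphasis"
        std::string text;
        std::vector<Node> children;
    };

    // One hypothetical backend's interpretation of roles; another backend
    // (DocBook, manpage, ...) would supply a different mapping, and the
    // result can still be restyled externally (CSS etc.) without touching
    // the document model.
    std::string render_html(const Node& n)
    {
        static const std::map<std::string, std::pair<std::string, std::string>> tags = {
            {"strong",   {"<strong>", "</strong>"}},
            {"emphasis", {"<em>", "</em>"}},
        };
        std::string out;
        auto it = tags.find(n.role);
        if (it != tags.end()) out += it->second.first;
        out += n.text;
        for (const auto& c : n.children) out += render_html(c);
        if (it != tags.end()) out += it->second.second;
        return out;
    }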

It may be that appendices specify a mapping from the construct
semantics to possible output for each "standardised" output format,
just to demonstrate that it's possible, but these should not be a
requirement of the standard IMO.  There are too many use-cases, and
most implementations do, or will, offer customisations anyway.

>
> I'm glad to hear you want to write an open source implementation. That's 
> great. One of the key goals of a specification is to foster an ecosystem of 
> implementations, so your commitment and contribution will be an important 
> part of achieving it.

Second that: the more arguments, erm, discussions, the better, and the
more implementations available, the better; it makes it less likely
that implementation-specific behaviour will be accidentally
standardised.  (An example of that is the current "order" in which
substitutions are done in Asciidoc Python and the effect that has on
later substitutions; this is an ongoing source of confusion and
annoyance and should not be standardised IMO.)
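
A contrived C++ illustration of why order matters (plain string
replacement, nothing to do with AsciiDoc Python's actual substitution
list): the same two substitutions applied in different orders give
different output.

    // Contrived: two text substitutions whose order changes the result.
    #include <cstddef>
    #include <iostream>
    #include <string>

    // Replace every occurrence of 'from' with 'to'.
    static std::string replace_all(std::string s, const std::string& from, const std::string& to)
    {
        for (std::size_t pos = 0; (pos = s.find(from, pos)) != std::string::npos; pos += to.size())
            s.replace(pos, from.size(), to);
        return s;
    }

    int main()
    {
        const std::string src = "a -> b";

        // Order 1: escape '>' first, then expand the arrow: the arrow pattern
        // has already been broken, so it is never expanded.
        std::string a = replace_all(replace_all(src, ">", "&gt;"), "->", "&#8594;");

        // Order 2: expand the arrow first, then escape '>': the arrow becomes
        // an entity.
        std::string b = replace_all(replace_all(src, "->", "&#8594;"), ">", "&gt;");

        std::cout << a << '\n' << b << '\n';   // the two results differ
    }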

I am experimenting with a full parser for Asciidoc, so far
successfully, but personal "real world" issues make progress
intermittent.  So far I have been successful for most of the language
and think I have cracked tables (MMOP to prove it).  But this parser
is (a) hand coded, (b) has extremely complex contextual control of the
lexer and parser, and (c) written in C++, so it is not a drop-in for
any other implementation to date.  But it does show it's possible, and
it allows evaluation of things like universal backslash escaping,
which makes the method of telling the lexer that a token is NOT markup
consistent everywhere.
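
For the curious, the backslash idea in sketch form (simplified far
beyond what the real lexer does, and the markup character set here is
an invented placeholder): before classifying a character as markup,
the lexer asks one uniform question, "was it escaped?", and if so
emits it as literal text, so the escape rule is identical for every
construct.

    // Simplified sketch of universal backslash escaping in a lexer.
    // The real thing has to manage far more context; this only shows the
    // single, uniform rule: backslash + markup character = literal text.
    #include <cstddef>
    #include <string>
    #include <vector>

    struct Token {
        enum Kind { Text, Markup } kind;
        std::string value;
    };

    std::vector<Token> lex(const std::string& in)
    {
        // Hypothetical markup set; the real set is larger and context-dependent.
        auto is_markup = [](char c) { return std::string("*_`#[]").find(c) != std::string::npos; };

        std::vector<Token> out;
        for (std::size_t i = 0; i < in.size(); ++i) {
            if (in[i] == '\\' && i + 1 < in.size() && is_markup(in[i + 1])) {
                out.push_back({Token::Text, std::string(1, in[i + 1])});  // escaped: literal
                ++i;                                                      // consume the escape
            } else if (is_markup(in[i])) {
                out.push_back({Token::Markup, std::string(1, in[i])});
            } else {
                out.push_back({Token::Text, std::string(1, in[i])});
            }
        }
        return out;
    }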

The standardisation process should start with agreement on what the
standard covers and how it is to be expressed before any definition
happens, but examples are always welcome; they provide something
concrete to discuss.

Cheers
Lex
