On Sep 18, 7:37pm, Ross Moore wrote:
> >> An XML version will be more extreme because it is totally
> >> up to the DTD (or the mapping between latex commands & DTD elements)
> >> what is being generated.
> >
> >Yes indeed. The solution must be something that is concept based. E.g.,
> >the LaTeX parsing engine first determines what is wanted (e.g.,
> >chapter, bold face, verbatim, etc.) and then the rendering engine must
> >decide what code to generate. IMHO, it is important to separate the two
> >much more than they are now (at the moment there is almost no
> >distinction).
>
> It will be extremely hard to *drive* the translation from a DTD,
> whereas validation ``on the fly'' is quite feasible,
> as Marcus implemented for L2H-NG.
Unless I misunderstand your point, it would be impossible to drive the
translation from the DTD, since there's nothing in the DTD to connect it to
LaTeX; even <section> needn't have any relation to \section
...
> Thus making new DTDs that don't differ too much from HTML should be
> rather straight-forward.
Agreed.
> However turning-off *lots* of HTML features is much harder;
> you'll have to write more Perl code to override the existing subroutines.
Indeed. In fact, my concern is that the amount of code _not_ overridden (i.e.
not somehow HTML-specific) might approach 0! :>
I agree with Marcus that a more abstract approach would be helpful: one layer
of the package would deal with parsing LaTeX into a (possibly virtual)
intermediate representation; a DTD-specific layer would be responsible
for outputting those constructs in terms of the appropriate elements.
Probably a library of generic, commonly useful tools would help in
implementing such layers. A trivial example might be something like
format_phrase: \textbf, \textit, etc. might be translated into an
intermediate form, say, phrase{type=>'bf', text=>'some text'}. The library
would have a utility for translating that to <b>some text</b> or
<BOLD>some text</BOLD> or ...
--
[EMAIL PROTECTED]
http://math.nist.gov/~BMiller/