Nice examples.
Am 15.10.2010 um 15:29 schrieb Philip Taylor (Webmaster, Ret'd):

> 
> Keith, I don't see enough in your answer to enable me to understand
> how you propose to resolve what seem to me to be very serious
> problems of semantic ambiguity.  Let me give a simple example,
> using a fictitious language (Mekon) that uses the basic Latin character set.
> 
> Mekon defines the following translations :
> 
> \efg = \def
> \emph = \centerline
> \foe = \end
> \joqvu = \input
> 
> 
> File-1 contains :
> 
> %!Markup-language = Mekon
> \joqvu File-2
> \emph {Ifmmp xpsme !}
> \foe
> 
> File-2 contains :
> 
> %!Markup-language = Mekon
> \efg \emph #1{}
> \end
> 
> When the processing engine encounters \emph at line 3 of File-1,
> should it translate \emph into \centerline, as required by the
> translation rules for Mekon, or should it leave it untranslated
> so that it can reference the definition of \emph that occurs
> at line 2 of File-2 ?
        That one is simple: it should translate, because a translation is defined.
        Naturally, it is a bad idea to overload a name like this.
        But since it is overloaded, one should translate.
        If you wanted the "normal" emphasis, you would write:

                %!Markup-language = Mekon
                \joqvu File-2
                %!Markup-language = normal
                \emph {Ifmmp xpsme !}           
                %!Markup-language = Mekon
                \foe
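The switching shown above can be sketched as a tiny line-based translator. This is my own illustration, not an existing tool: the `%!Markup-language` directive and the Mekon table come from the example; everything else is an assumption.

```python
import re

# Mekon translation table from the example above.
MEKON = {r"\efg": r"\def", r"\emph": r"\centerline",
         r"\foe": r"\end", r"\joqvu": r"\input"}

def translate(lines):
    """Translate Mekon control words, honouring language-switch lines."""
    lang = "normal"
    out = []
    for line in lines:
        m = re.match(r"%!Markup-language\s*=\s*(\S+)", line)
        if m:
            lang = m.group(1)   # directive lines switch mode, emit nothing
            continue
        if lang == "Mekon":
            # Replace each control word that has a Mekon translation;
            # untranslated ones fall through to their normal meaning.
            line = re.sub(r"\\[A-Za-z]+",
                          lambda t: MEKON.get(t.group(0), t.group(0)),
                          line)
        out.append(line)
    return out
```

Run on the six-line File-1 above, this leaves `\emph` untouched (it sits in "normal" mode) while `\joqvu` and `\foe` become `\input` and `\end`.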

        

> And when that same processing engine is called upon to process
> File-2, should it translate \emph as \centerline, so that it
> is \centerline that is being defined, or should it leave it
> untranslated and therefore it is \emph that is being defined ?

        O.K., this is somewhat complex. The way I look at it:
                1) The markup language is in both cases set to Mekon.
                2) When the parser finds \emph, it translates it to \centerline:
                        \efg \emph #1{}    becomes    \def \centerline #1{}
                3) All actions take place in the "normal" markup; that is why
                   a line is translated before being processed.
                4) It would parse O.K., since no translation was found for \end,
                   which therefore reverts to the "normal" \end.

        I agree that this seems complex, but it is logical, and one has to be
        aware of which "language" one is in.
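Step 2's per-token rule, together with the fallback in step 4, can be sketched like this (an illustrative sketch under my own assumptions, not an existing implementation):

```python
import re

# Mekon translation table from the example above.
MEKON = {r"\efg": r"\def", r"\emph": r"\centerline",
         r"\foe": r"\end", r"\joqvu": r"\input"}

def mekonize(line):
    # Translate every control word that has a Mekon entry; a control
    # word without one (e.g. \end here) keeps its normal meaning.
    return re.sub(r"\\[A-Za-z]+",
                  lambda t: MEKON.get(t.group(0), t.group(0)),
                  line)
```

So `mekonize(r"\efg \emph #1{}")` yields `\def \centerline #1{}`, while `\end`, having no entry, passes through unchanged.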

        Now, let's see what happens if File-2 looks different.
        File-2 contains :

                \def \emph #1{}
                \end

        Here the result would be the same: it is \centerline that is affected,
        since "Mekon" is still active.
        
        Now, if File-2 looks like this:

                %!Markup-language = normal
                \def \emph #1{}
                \end

        Then we have the normal TeX effect.
        BUT, with File-1 containing :

                %!Markup-language = Mekon
                \joqvu File-2
                \emph {Ifmmp xpsme !}
                \foe

        \emph stays \emph, as the "language" is now normal.

        Furthermore, \foe will trigger an undefined-command error, since
        "normal" is active due to the switch in File-2. But such global scope
        is the way of TeX.
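That global-scope behaviour can be sketched as one shared language state that survives across \input boundaries. The file contents are the File-1/File-2 pair from this last case; the translator itself is my own assumption-laden illustration.

```python
import re

MEKON = {r"\efg": r"\def", r"\emph": r"\centerline",
         r"\foe": r"\end", r"\joqvu": r"\input"}

FILES = {  # the File-1 / File-2 pair from the example above
    "File-1": ["%!Markup-language = Mekon",
               r"\joqvu File-2",
               r"\emph {Ifmmp xpsme !}",
               r"\foe"],
    "File-2": ["%!Markup-language = normal",
               r"\def \emph #1{}",
               r"\end"],
}

lang = "normal"  # single global state, mirroring TeX's global scope

def process(name, out):
    global lang
    for line in FILES[name]:
        m = re.match(r"%!Markup-language\s*=\s*(\S+)", line)
        if m:
            lang = m.group(1)   # persists even after this file ends
            continue
        if lang == "Mekon":
            line = re.sub(r"\\[A-Za-z]+",
                          lambda t: MEKON.get(t.group(0), t.group(0)),
                          line)
        m = re.match(r"\\input\s+(\S+)", line)
        if m:
            process(m.group(1), out)  # recurse, sharing the global lang
            continue
        out.append(line)
    return out
```

After File-2 switches to "normal", the trailing \foe in File-1 is no longer translated, which is exactly the undefined-command error described above.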

> 
> In other words, can we, with 100% precision, define in which
> contexts strings should be translated and in which contexts they
> should not, bearing in mind the fundamental ability of TeX
> programs to change the meaning of virtually everything on
> the fly ?

        Yes, but you have to know which "language" mode you are in.
        As you mentioned, this is a semantic problem, and off the bat I cannot
        think of any way to catch a semantic error. I believe TeX cannot do
        this.

        regards
                Keith.


--------------------------------------------------
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex
