The discussion of infix is clear that precedence is associated with the
entity rather than the token, and it gives an example of why this matters,
in which the same operator is defined in two different modules and referred
to with a qualified module name: something like List.<< and Map.<<.

I am struck that this implies a significant intertwining of state between
the parser and the lexer. In particular, if the lexer is to correctly report
operator precedence to the parser, it needs the ability to look up the "<<"
operator in the appropriate lexical environment. This *seems* to imply that
symbol resolution is inextricably interleaved with parsing. Ironically, it
is easy to load imported operators, since an imported module must (of
necessity) have been processed fully and the associated environment is
therefore fully available. The problem lies with processing fixity
declarations in the current unit of compilation.
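To make the dependency concrete, here is a minimal sketch (hypothetical names, not any real implementation) of a precedence-climbing expression parser. The point is simply that the parser cannot even build the tree without consulting a fixity environment for each operator token, which is exactly the lookup that must somehow be wired into the lexer/parser pipeline:

```python
def parse_expr(tokens, fixity, min_prec=0):
    """Parse a flat token list like ['a', '<<', 'b', '<<', 'c'] into a
    nested tuple, consulting `fixity`, a dict mapping each operator
    spelling to its (precedence, associativity). Mutates `tokens`."""
    lhs = tokens.pop(0)
    while tokens and tokens[0] in fixity:
        op = tokens[0]
        prec, assoc = fixity[op]
        if prec < min_prec:
            break
        tokens.pop(0)
        # Left-associative operators refuse an equal-precedence right operand.
        next_min = prec + 1 if assoc == 'left' else prec
        rhs = parse_expr(tokens, fixity, next_min)
        lhs = (op, lhs, rhs)
    return lhs

# The same spelling '<<' parses differently under two environments,
# standing in for the List.<< vs Map.<< example:
right_env = {'<<': (5, 'right')}
left_env = {'<<': (5, 'left')}
```

Under `right_env`, `a << b << c` groups as `a << (b << c)`; under `left_env`, as `(a << b) << c`. Nothing in the token stream distinguishes the two cases; only the environment does.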

From an implementation perspective, I can see two ways to approach this, and
I would appreciate an explanation from someone who knows what is actually
going on in Haskell or ML:

The first approach is to restrict fixity introduction to top-level forms and
process top-level forms one at a time, each in the environment resulting
from the previous form. This is more or less consistent with what happens in
an interpreter's REPL loop, where the environment is ready to hand in both
the lexer and the parser. Since we can fully type top-level forms, we
could, in principle, even extend this mechanism to make type information
available in the lexer (and no, I am *not* advocating that, only mentioning
it as a possibility).
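A sketch of the first approach, with made-up form shapes (these are not BitC syntax, just tuples standing in for top-level forms): fixity declarations are restricted to the top level, and each form is parsed in the environment accumulated from the forms before it.

```python
def parse_in(env, tokens):
    """Stand-in for a real parse: record which fixity, if any, applied
    to each token at the moment the form was processed."""
    return [(t, env.get(t)) for t in tokens]

def process_unit(forms, imported_fixity):
    """Process top-level forms one at a time. Imports are fully available
    up front (their modules were already compiled); fixity declarations
    in this unit take effect only for LATER forms."""
    env = dict(imported_fixity)
    parsed = []
    for form in forms:
        if form[0] == 'infix':            # e.g. ('infix', '<<', 5, 'left')
            _, op, prec, assoc = form
            env[op] = (prec, assoc)
        else:                             # e.g. ('def', name, token_list)
            _, name, tokens = form
            parsed.append((name, parse_in(env, tokens)))
    return parsed
```

Note the ordering consequence: a definition that uses `>>` before the `infix >>` declaration appears simply does not see it, which is the price of the strictly sequential model.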

The second approach is to cheat. We build a *temporary* environment at parse
time containing *only* the mixfix operators and their precedence, without
regard to their type. This construction is performed by the parser and
carried up and down through the parse tree processing. The one complication
with this is getting it to unwind correctly in the presence of parse errors,
but that should not (in principle) be insurmountable. This approach has the
property that it can successfully admit *local* fixity declarations.
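The second approach might be sketched as a parse-time stack of fixity-only scopes (hypothetical API, types ignored entirely). The try/finally is the interesting part: it is what makes the temporary environment unwind correctly even when a parse error propagates out of a scope.

```python
class FixityEnv:
    """Parse-time environment recording only (precedence, associativity),
    with no regard to types. Scopes nest to admit local fixity."""

    def __init__(self, imported=None):
        self.scopes = [dict(imported or {})]

    def declare(self, op, prec, assoc):
        self.scopes[-1][op] = (prec, assoc)

    def lookup(self, op):
        # Innermost scope wins, so local declarations shadow outer ones.
        for scope in reversed(self.scopes):
            if op in scope:
                return scope[op]
        return None

    def scoped(self, body):
        """Parse `body` in a fresh scope; pop the scope even if a parse
        error is raised, so errors cannot leak local fixity outward."""
        self.scopes.append({})
        try:
            return body()
        finally:
            self.scopes.pop()
```

A local declaration shadows the imported one inside the scope and vanishes afterward, so the global scope is never corrupted, and an aborted parse leaves the environment exactly as it found it.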


Which approach is used in Haskell? Is it even *desirable* to admit local
fixity declarations, or is it just an unnecessary complication? I think that
local fixity declarations are useful, exactly because they don't corrupt the
global scope.


Reactions, advice, input?


shap
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev