Hmm. It occurs to me that there may be a "middle position" on arity handling.
We could *record* the arity information for purposes of type computation,
but *disregard* it for purposes of type compatibility.
So in my previous example, a procedure taking two arguments and returning a
function might be written as:
'a -> 'b -> ('c -> 'd)
but the parens would be ignored for purposes of type equivalence. The
capture of arity, in essence, records the "preferred" arity for purposes of
calling convention. The compiler is required, at the implementation level,
to introduce arity lifting/lowering to get things properly mated up, but
this is all ignored for purposes of what constitutes a well-formed
application and/or a well-formed binding or copy.
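As an illustration of the lifting/lowering fix-ups I have in mind, here is a
toy sketch in Python. The names (lower2, lift2) are hypothetical, not an
actual BitC compiler pass; the point is only that adapting between a 2-ary
calling convention and a curried one is mechanical.

```python
def add2(x, y):
    """A function whose "preferred" arity is 2: both args at once."""
    return x + y

def lower2(f):
    """Lower a 2-ary function to curried form for a curried use site."""
    return lambda x: lambda y: f(x, y)

def lift2(f):
    """Lift a curried function back to a 2-ary calling convention."""
    return lambda x, y: f(x)(y)

curried_add = lower2(add2)      # applied one argument at a time
tupled_add = lift2(curried_add)  # back to both arguments at once

assert curried_add(1)(2) == 3
assert tupled_add(1, 2) == 3
```

Either form can be produced from the other, which is why the choice of
arity can safely be treated as a calling-convention detail rather than a
type distinction.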
The reason this will work is that all such type equivalence checks are
performed at copy boundaries (that is: at binding, argument passing, or
value return). In Haskell or ML, the test at such a point is:
'formal ~ 'actual
that is: the actual parameter type and the formal parameter type are
unifiable. In BitC, the corresponding check is performed as:
'actual|'formal
which should be read to mean: "the type 'actual is copy-compatible with the
type 'formal". The unification scheme already has to be smart enough to
reach across const/mutable/by-ref to handle the individual variable
unifications, and it should not be difficult to extend this to disregard
arity at this level, and to introduce appropriate fix-ups by
lifting/lowering in a suitable compiler pass. The rest is simply a matter of
changing how typing is done at the apply rule.
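A toy model may make the "record but disregard" idea concrete. The type
representation below is purely illustrative (a hypothetical Fn node whose
argument tuple records the declared arity, not anything from the actual BitC
implementation); copy-compatibility flattens the arrow spine before
comparing, so grouping is recorded in the type but ignored at the check.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TVar:
    name: str

@dataclass(frozen=True)
class Fn:
    args: tuple   # arguments taken together: the declared arity
    res: object

def flatten(t):
    """Collapse nested Fn nodes into one argument spine plus the
    final (non-function) result type, erasing arity grouping."""
    args = []
    while isinstance(t, Fn):
        args.extend(t.args)
        t = t.res
    return tuple(args), t

def copy_compatible(actual, formal):
    """Arity-blind equivalence: compare the flattened spines."""
    return flatten(actual) == flatten(formal)

a, b, c, d = (TVar(n) for n in "abcd")
# arity 2, returning a function: 'a -> 'b -> ('c -> 'd)
two_then_one = Fn((a, b), Fn((c,), d))
# arity 3, no grouping: 'a -> 'b -> 'c -> 'd
three = Fn((a, b, c), d)

assert two_then_one != three                  # arity is recorded...
assert copy_compatible(two_then_one, three)   # ...but disregarded here
```

A real implementation would perform this flattening inside unification
(alongside the const/mutable/by-ref handling), recording where the spines
diverged so a later pass can insert the lifting/lowering adapters.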
Does this smell like it might work? If so, then the whole thing becomes
purely a matter of surface syntax.
shap
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev