On Fri, Feb 20, 2015 at 12:23 AM, Matt Oliveri <[email protected]> wrote:
> On Fri, Feb 20, 2015 at 3:15 AM, Keean Schupke <[email protected]> wrote:
> > The way I thought this could work is that you infer the arity from the
> > use of a function (and that arity is concrete in the type). If it is an
> > imported or foreign function, the inferred arity must match the declared
> > arity in the import statement (which optionally could be an arity-alias)
> > or type checking fails. For locally declared functions, the definition
> > would initially be typed as curried, and specialised versions of the
> > definition generated for each different arity used. There would be a
> > subtype relationship between each different arity type used and the
> > fully curried version. So:
> >
> >   (fn 'a -> fn 'b -> 'c) :> (fn 'a 'b -> 'c)
> >
> > Obviously there are more combinations of subtypes for 3-argument
> > functions, etc.
>
> If that is how arity specialization works, I misunderstood it pretty
> badly. I thought function definitions determine concrete arity, never
> application.

That is correct. If we adopt the curried application syntax, we simply have
no syntactic basis for inferring a particular concrete arity. We know that
the arguments need to get consumed. In general, that's not enough to tell us
either a lower bound or an upper bound on the native arity.

> That's actually what my second question pertains to:
> whether that's always what we want. Also, I don't think Shap decided
> to use your idea to import functions at a certain type.

Correct. An imported function has exactly one native arity. We have now
agreed that arity is part of the type, so there is no reason to separate
arity from type at import. Definitions and declarations both have a native
arity.

shap
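
P.S. For concreteness, here is a rough sketch of the curried vs. native-arity
correspondence in OCaml notation (not BitC: OCaml has no multi-arity arrow, so
the 2-ary form is modeled with a tuple, and curried_add/uncurried_add are just
made-up names). It only illustrates the interconversion that Keean's proposed
subtyping (fn 'a -> fn 'b -> 'c) :> (fn 'a 'b -> 'c) is meant to capture:

  (* Fully curried form: roughly fn 'a -> fn 'b -> 'c *)
  let curried_add : int -> int -> int = fun x y -> x + y

  (* "Native arity 2" form, modeled here as fn ('a * 'b) -> 'c *)
  let uncurried_add : int * int -> int = fun (x, y) -> x + y

  (* The two views are interconvertible; an arity-specialising compiler
     would generate the native-arity body and wrap it wherever the
     curried view is required. *)
  let curry f = fun x y -> f (x, y)
  let uncurry f = fun (x, y) -> f x y

  let () =
    assert (curried_add 1 2 = uncurried_add (1, 2));
    assert ((curry uncurried_add) 1 2 = (uncurry curried_add) (1, 2))

Under the position above, which of these is the function's actual type is
fixed by its definition (or its import declaration), not inferred from its
uses.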
