On Mon, Feb 16, 2015 at 12:49 PM, Jonathan S. Shapiro <[email protected]> wrote:
> Per the previous summary, one option we are still considering is:
>
> 1) Functions take N>0 argument patterns and return 1 result. Such functions
> have arity N.
> 2) The type of a function is written as something like fn ty1 ty2 ... tyN ->
> result
> 3) We adopt a surface syntax for function call that appears curried, in that
> no parentheses or commas are involved in the application syntax.
>
> In this scenario, we have adequate information at the definition site and in
> the written form of the type to determine what the arity must be, but we do
> NOT have enough information at the application site. Given a procedure:
>
>   def perverse f a b = f a b
>
> we cannot determine whether f has type fn 'a -> (fn 'b -> 'c) or
> alternatively has type fn 'a 'b -> 'c

I believe that in Matt's proposal, we do know the arity of f in this
case: it's 2.  f is applied to two arguments, so it must have type fn
'a 'b -> 'c.
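
To make the two readings concrete, here is a rough sketch in Python (which
stands in for the hypothetical surface language; the function names are
illustrative only). Under the arity-2 reading the body of perverse is a
single call; under the curried reading the same body is two unary
applications:

```python
# Two readings of `def perverse f a b = f a b`, modeled in Python.

# Reading 1: f has native arity 2 (type fn 'a 'b -> 'c); the body is one call.
def perverse_arity2(f, a, b):
    return f(a, b)

# Reading 2: f has arity 1 and returns a function (fn 'a -> (fn 'b -> 'c));
# the body is two unary applications.
def perverse_curried(f, a, b):
    return f(a)(b)

# Each version accepts only a function of the matching shape:
r1 = perverse_arity2(lambda a, b: a + b, 1, 2)
r2 = perverse_curried(lambda a: lambda b: a + b, 1, 2)
```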

It might be reasonable to extend this as you say to support automatic
conversion, but it does introduce a weird asymmetry where *fewer*
arguments are disallowed but *more* arguments are fine.  If more is
fundamentally easy to implement and understand, while fewer is not,
this asymmetry might be reasonable, but it seems odd.
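
The asymmetry has a concrete shape. Sketching it in Python (again purely
illustrative, with hypothetical names): applying a curried function to
"more" arguments is just nested application, whereas applying a native
arity-2 function to "fewer" arguments forces the compiler to synthesize a
closure:

```python
from functools import partial

# "More" arguments: an arity-1 function whose result is itself a function.
# Applying it to two arguments is plain nested application; nothing extra
# needs to be generated.
curried_add = lambda a: lambda b: a + b
more_ok = curried_add(1)(2)

# "Fewer" arguments: an arity-2 function applied to one argument needs a
# synthesized closure (partial application / eta-expansion) -- the extra
# machinery that makes "fewer" the harder direction.
def add2(a, b):
    return a + b

add1 = partial(add2, 1)   # the adapter a compiler would have to insert
fewer_needs_adapter = add1(2)
```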

Geoffrey

> Arguably, we do not care. I think I could make a case here that the type we
> should infer for /f/ is simply 'a -> 'b -> 'c, with the intended meaning
> that we really don't *care* what the native arity of f is. If /perverse/ is
> called with a function having type fn 'a -> (fn 'b -> 'c), we will perform a
> type-driven specialization of /perverse/ to accept an arity-1 function.
> Conversely, if /perverse/ is called with a function having type fn 'a 'b ->
> 'c, we will perform a type-driven specialization of /perverse/ such that
> it accepts an arity-2 function.
>
> It actually seems *desirable* to me that we should have a means to be
> arity-agnostic, or if you prefer, generic over arity.
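
The proposed type-driven specialization can be sketched as follows, again in
Python as a stand-in. Here a dynamic arity check via `inspect` substitutes
for what the compiler would resolve statically from f's concrete type; the
function names are hypothetical:

```python
import inspect

# One source definition of `perverse`; the "compiler" emits a variant per
# native arity of f.  The runtime inspection below models a static,
# type-driven choice, not an actual runtime cost in the real scheme.
def specialize_perverse(f):
    n = len(inspect.signature(f).parameters)
    if n == 2:
        return lambda a, b: f(a, b)      # arity-2 specialization: one call
    elif n == 1:
        return lambda a, b: f(a)(b)      # arity-1 specialization: two calls
    raise TypeError("unsupported arity")

v2 = specialize_perverse(lambda a, b: a * b)(3, 4)
v1 = specialize_perverse(lambda a: lambda b: a * b)(3, 4)
```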
>
> Offhand, I see two issues here:
>
> 1. Concretization of arity isn't just a matter of specializing the function
> type. It entails a type-driven AST rewrite at the application site. I see no
> inherent problem with that.
> 2. We might end up with cases where arity is bounded but underspecified. For
> reasons of performance, I think the correct heuristic in the presence of
> ambiguity is to prefer the largest possible arity, on the grounds that this
> minimizes the number of calls.
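
The performance claim behind the largest-arity heuristic is easy to check
by counting dynamic calls. A small Python sketch (the instrumentation
wrapper is hypothetical, standing in for generated code):

```python
# Count dynamic calls for a 3-argument application under the two arity
# choices.
calls = {"n": 0}

def counted(f):
    def wrapped(*args):
        calls["n"] += 1
        return f(*args)
    return wrapped

# Fully curried (arity-1) reading: three separate calls.
f1 = counted(lambda a: counted(lambda b: counted(lambda c: a + b + c)))
calls["n"] = 0
f1(1)(2)(3)
curried_calls = calls["n"]

# Native arity-3 reading: the same application is a single call.
f3 = counted(lambda a, b, c: a + b + c)
calls["n"] = 0
f3(1, 2, 3)
native_calls = calls["n"]
```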
>
> Hmm. Now that I think about it, I'm not sure the underspecification I'm
> talking about here can ever actually occur. Ultimately, every procedure that
> might be passed originates at a definition. Since type concretization
> proceeds top-down from main(), I think it's the case that arity information
> must necessarily propagate top-down as the types specialize.
>
>
> In any case, do people agree that this ambiguity of interpretation exists,
> and that the type-driven AST rewrite is not somehow unclean? Does anybody
> want to make the case that an arity-agnostic abstract function type is a bad
> idea?
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
