On Wed, Mar 11, 2015 at 2:55 PM, Jonathan S. Shapiro <[email protected]> wrote:
> On Tue, Mar 10, 2015 at 5:26 AM, Matt Oliveri <[email protected]> wrote:
>>
>> On Tue, Mar 10, 2015 at 1:45 AM, Jonathan S. Shapiro <[email protected]>
>> wrote:
>
>> > And finally, the type rule requirement at an application is that the
>> > number of arguments present must be an exact match for the list-wise
>> > sum of the shallow arities of the function calls that implement the
>> > application.
>>
>> Yes, but that seems to be saying it kind of backwards, since, looking
>> at an application, we get to pick the shallow arities of the function
>> calls, not the number of arguments. (Ah ha! You said calls have
>> arities!)
>
> Whether it's backwards or forwards, the rule still holds. Arguments present
> at an application need to be consumed. The [potentially abstract] function
> type guarantees that we won't type check if too many arguments are present.
> The issue I'm actually getting after here is that we can't be building
> hidden closures either.
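[As a sketch of the list-wise sum rule quoted above: an application of n arguments can be implemented by any chain of calls whose shallow arities are positive and sum to exactly n. The helper below, `bracketings`, is hypothetical (the name and code are mine, not from the thread); it just enumerates the candidate arity lists.]

```haskell
-- Hypothetical sketch: the arity lists that could implement an
-- application of n arguments under the "exact list-wise sum" rule.
-- Each call must consume at least one argument, and the shallow
-- arities must sum to n.
bracketings :: Int -> [[Int]]
bracketings 0 = [[]]
bracketings n = [ k : rest | k <- [1 .. n], rest <- bracketings (n - k) ]
```

[For example, bracketings 3 gives [[1,1,1],[1,2],[2,1],[3]], and [3,3] shows up among the ways to consume 6 arguments, which is the grouping that matters for apply1/apply2 below.]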
Good, it sounds like we're talking about the same requirement.

>> > The piece that seems to be missing here is to resolve the problem that
>> > was identified for choose_one. Offhand, it seems to me that the rule
>> > should be that unifications must be preserved when applications are
>> > rewritten to calls.
>>
>> I don't follow that. Can you pick a notation for arity-abstract
>> function types and show how the choose_one example gets checked?
>
> Well, no. At the moment, I can't, and that's more or less my point. What I'm
> trying to say here is that given
>
> choose_one x y z // returns (randomly) one of x, y, or z
>
> then typeof(x) ~= typeof(y) ~= typeof(z)
>
> where "~=" means "unifies with". That unification must hold whether
> typeof(x) is concrete or abstract. The problem I'm trying to identify here
> is that I don't (yet) have a rewrite strategy that guarantees this unless we
> do something like the list-like arity notation.

Well, what's wrong with the list-like arity notation? There's also the
type constraint option, which I haven't talked about in a while, but it
still seems (to me) to work.

Separately, I have a gripe with framing the choose_one problem as "some
types must unify". We need the concrete types to turn out the same. This
is simpler to say, and doesn't presuppose a unification algorithm. All
along, the way I've been thinking is that we design a type system so that
choose_one gets a type where the types that must be the same are always
the same, for any concrete instance. This is the kind of thinking I'm
more familiar with, and if we then ensure that types produced by
unification can also be explained as instantiation of implicit
quantifiers, then we know that unifying abstract types cannot screw us.
This way of thinking immediately got me to two schemes (the deep arity
one and the type constraint one) that I've had to revise, and have never
explained in detail, but basically still seem solid.
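[For concreteness, here is the choose_one constraint rendered in Haskell, where the inferred type states it directly: all three arguments and the result share one type variable. The Int selector stands in for the randomness in the thread's example, just to keep the sketch self-contained.]

```haskell
-- A sketch of the choose_one constraint. Because any of x, y, z may be
-- returned, the types of x, y, and z must turn out the same; Haskell's
-- inferred type says exactly that via the single type variable 'a'.
-- The selector 'i' stands in for randomness.
chooseOne :: Int -> a -> a -> a -> a
chooseOne i x y z = [x, y, z] !! (i `mod` 3)
```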
I haven't given much thought to unification per se, but I don't see any
obvious reasons why there can't be unification algorithms for my schemes.
So _someone_ should soon explain how some scheme will solve choose_one, I
figure.

>> > What am I missing, and what further points do we need to cover on this?
>>
>> Just to address the grouping issue. Here's an example where I think we
>> want grouping:
>>
>> def apply1 f = f 1 2 3
>>
>> def apply2 f = apply1 f 4 5 6
>
> So I'm not sure why this creates a special need for grouping **in the
> types** beyond what is already needed for function parameters generically.
> If function definitions produce concrete functions, then the types above
> would seem to be:
>
> apply1 :: cfn [a->b->c->r] => r
> apply2 :: cfn [a->b->c->r] => r
>
> That is: we won't know the concrete call bracketing used by f until we see
> an instantiation of apply2 (or apply1), and that's perfectly OK.

Yes, it's OK that f's type is arity-abstract, but that's not the issue.
Your type for apply2 is wrong. The f that apply2 receives needs to accept
at least 6 arguments (across an unknown number of applications):

apply2 :: cfn [a->b->c->d->e->f->s] => s

This is because we first apply1 it, which gives it 3 arguments, then we
give it 3 more. r gets instantiated to (d->e->f->s). Of course, a through
f should be types that we can assign an int to, but you know that and we
are just being lazy.

The issue is that although f can take 6 arguments, it must not demand 6
at once (unless we can commit to inlining apply1 into apply2). More
specifically, it must not take more than 3 arguments at once (and
likewise for the cfn it returns). So you see now the problem with your
provisional arrow notation, where "a->b->c->d->e->f->s" does not tell us
the maximum shallow arity. (Yeah, that.)

Let me know if one of us is confused. Alternatively, if you agree this is
a problem, I could explain how either of my schemes solves this.
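[The apply1/apply2 example transliterates directly into Haskell, where currying is implicit. It shows the 6-argument requirement and the instantiation of r, while also illustrating the complaint above: Haskell's arrow erases the grouping entirely.]

```haskell
-- apply1 hands f three arguments; apply2 then hands the result three
-- more. So the f that apply2 receives must accept six arguments in
-- total, with apply1's result variable instantiated to the remaining
-- arrow type (Int -> Int -> Int -> r).
apply1 :: (Int -> Int -> Int -> r) -> r
apply1 f = f 1 2 3

apply2 :: (Int -> Int -> Int -> Int -> Int -> Int -> r) -> r
apply2 f = apply1 f 4 5 6

-- Note: nothing in apply2's type records that f is applied "in two
-- groups of 3" -- exactly the shallow-arity information the plain
-- arrow notation fails to capture.
```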
> Personally, I think that the square braces convey nothing new in cfn
> types, because cfn a b => r seems sufficient to describe concrete arity
> at a definition. The place where the square brackets seem useful is in
> describing how an application gets concretely implemented by particular
> calls.

Agreed.

> Matt: assuming that the "cfn" be explicitly present when writing this
> type, do you feel that the arrow is sufficiently disambiguated by
> context, or would you prefer to stick with Pal's => arrow for now? I
> think that we will eventually need to replace it because of the
> constraint arrow, but perhaps the => arrow is sufficiently disambiguated
> by "cfn" as well. E.g. the following is not ambiguous:
>
> add :: arith 'a => cfn 'a 'a => 'a

I agree that either arrow token will probably turn out to be unambiguous,
due to the cfn. But I prefer "->" because then it's easier to see
functions at a glance.

>> Any alternative explanation of how this is handled would also help
>> provide a common understanding of the issue. I am OK if it turns out
>> apply2 needs to be written as
>>
>> def apply2 f = (apply1 f) 4 5 6
>
> I don't think that should be necessary.

I don't either, but I wasn't sure at the time. My point was that whether
that's necessary is independent from the issue that f is applied to 6
arguments "in two groups of 3".

> The entire point of the current discussion is to determine how to
> re-write correctly from
>
> apply1 f 4 5 6 // what the user wrote
>
> to
>
> apply1 [f] [4 5 6] // how we re-wrote it

Or (apply1 [f] [4] [5] [6]), or... But f is definitely separate.

Again, that wasn't the point. The [1 2 3] and [4 5 6] are separate.

> I see no reason to think that the chaining of calls in this example
> makes that any more difficult than it was before.

How about now?

_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
