On Wed, Feb 25, 2015 at 12:28 AM, Jonathan S. Shapiro <[email protected]> wrote:
> On Tue, Feb 24, 2015 at 8:35 PM, Matt Oliveri <[email protected]> wrote:
>>
>> On Tue, Feb 24, 2015 at 10:14 PM, Jonathan S. Shapiro <[email protected]>
>> wrote:
>
>> Well, since I'm counting concrete instances, and shallow
>> specialization generally doesn't get you a concrete instance, I think
>> we should be talking about deep specialization.
>
> I understand why that's comforting, but the list-style arity is
> indeterminate until the return type specializes to either (a) a non-function
> type, or (b) an arity-concrete function type having an arity-concrete return
> type.
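(To fix ideas before replying point by point: here is a toy Haskell model of
the list-style arity under discussion. This is my own sketch, not BitC
notation, and the names are hypothetical. The point is just that the tail of
the arity list can be an unsolved variable, which is exactly what leaves the
deep arity indeterminate until the return type specializes.)

  -- Toy model (not BitC) of list-style arity.
  data Arity
    = Done            -- return type specialized to a non-function type
    | Call Int Arity  -- one call of concrete arity, then the rest
    | AVar String     -- unsolved arity variable, e.g. 'arity
    deriving (Eq, Show)

  -- (1 :: 'arity) in the thread's notation: the shallow arity 1 is
  -- known, but the deep arity is still indeterminate.
  oneThenUnknown :: Arity
  oneThenUnknown = Call 1 (AVar "arity")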
The concrete instances we're counting come from my example where I made the
types concrete, and the return type non-function. I did that for a reason, ya
know. If the possible deep arities were indeterminate, then we simply couldn't
count concrete instances.

> The thing is: application proceeds left-to-right. It follows that the deep
> specialization is necessarily an inductive consequence of the left-to-right
> shallow specialization. The difference between the two is that the shallow
> specialization can be pursued before the program is fully concrete.

You seem to be assuming that if we use deep arities, we wouldn't be able to
shallow specialize without fully knowing the deep arity. I don't see why that
must be the case. The tail of a list is a list, after all. (1 :: 'arity) is
an arity.

>> > But if
>> > (for example) we conclude that the leftmost specialization has arity 1,
>> > then there remain three argument positions to concretize. Those can be
>> > concretized as [1, 1, 1], [1, 2], [2, 1], or [3].
>>
>> That really threw me off at first. It seems very misleading to say the
>> argument positions are concretized, since each successive function
>> type in a curried function type appears in the return type position.

> Perhaps I spoke poorly. Given fn 'arity 'a->'b->'c->'d->'r unified with fn 1
> 'u -> 'v, we conclude that 'v unifies with
>
> fn 'newArity 'b->'c->'d->'r
>
> When I referred to the "remaining argument positions", I was referring to
> the 'b 'c 'd patterns.

But this doesn't explain why you said that argument positions can concretize
as certain arities.

>> In my understanding, the reason why an abstract return type
>> doesn't change things is because of the maximum arity, which it seems
>> we also disagree about.
>
> Here is a counter-example showing why this is not true:

Yes, I remembered later that that's not true.

>> >> > So in my example above, we specialized
>> >> >
>> >> >> fn 'arity 'a->'b->'c->'d->'r (label this type T1)
>> >> >
>> >> > with (renaming vars)
>> >> >
>> >> >> fn 2 'v 'w -> fn 'newArity 'x->'y->'z (label this type T2)
>> >> >
>> >> > Because the return type in T2 is abstract, the type resulting from
>> >> > specializing T1 remains abstract.
>> >>
>> >> Yeah yeah. I understand just fine why it _should_ work that way. I
>> >> just don't see what formal rules actually make it work that way.
>> >
>> > I haven't tried to write the formal rules, but it follows from the axiom
>> > of choice. I agree that we need to capture formal rules for this.
>>
>> Huh? Yuck. What do you mean, and why would you resort to choice?

> You are raising a good and valid question, and I have no formally validated
> answer yet. But I can give an informal intuition.
>
> Consider unifying 'a->'b->'c with 'u->'v->'w->'x. It is clear that 'c
> unifies with 'w->'x. This is a consequence of the structural decomposition
> of the arrow type.
>
> My choice of notation for function types is not innocent. The [shallow]
> concrete arity corresponds to the stuff up to, but not including, the return
> type.

This is not necessarily true. Well, it is if you interpret "return type" with
care; maybe that's what you're saying. In (fn 'arity 'a->'b->'c) we do not
know what the return type is. It isn't necessarily 'c, because if 'arity is
1, then the return type is something like (fn 'newArity 'b->'c).

> This leaves the return type to unify in abstract form, just as it does
> when 'c unified with 'w->'x.
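(Interjecting a sketch here, since this is exactly the step in question. In
the same toy Haskell model as above, and again with hypothetical names, not
your proposal verbatim: peeling one concrete application off the front of
the spine leaves a residual function type wrapped in a fresh arity variable,
the 'newArity above.)

  -- A shallow function type: an arity annotation plus a spine of
  -- argument types and a tail type.
  data ArityAnn = Known Int | AUnk String deriving (Eq, Show)
  data Ty = TVar String          -- 'a, 'b, ...
          | Fn ArityAnn [Ty] Ty  -- fn <arity> t1 -> ... -> tn -> ret
          deriving (Eq, Show)

  -- Peel k argument positions off the front; the residual function
  -- type gets a fresh arity variable ("newArity" stands in for a
  -- fresh name here).
  peel :: Int -> Ty -> Maybe Ty
  peel k (Fn _ args ret)
    | length args >  k = Just (Fn (AUnk "newArity") (drop k args) ret)
    | length args == k = Just ret
  peel _ _ = Nothing

  -- peel 1 (Fn (AUnk "arity") (map TVar ["a","b","c","d"]) (TVar "r"))
  --   == Just (Fn (AUnk "newArity") (map TVar ["b","c","d"]) (TVar "r"))
  -- i.e. 'v unifies with fn 'newArity 'b->'c->'d->'r, as you say.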
>
> The part of this that needs some care in explaining is how the Nat
> constrains the unification. In particular it needs to account for why
>
> fn 1 'a->'b
>
> cannot unify with
>
> fn 2 'a->'b->'c
>
> because the number of [transitive] shallow argument positions cannot be
> reconciled.

This explanation doesn't seem to have anything to do with the axiom of
choice, or with how 'newArity got there.

>> There you go with argument positions again. Maybe I'm really missing
>> something here. Or maybe you're smoking something funny. What's wrong
>> with this specialization:
>>
>> fn 'arity Nat->Nat->Nat->Nat
>> becomes
>> fn 1 Nat -> fn 'newArity Nat->Nat->Nat
>>
>> Notice that the concreteness of the argument types has not forced the
>> return type to be concrete.

> What's wrong with it is subtle. Generalizing your example, the type:
>
> fn 1 Nat -> fn 'newArity 'a->'b->'c
>
> is syntactically well-formed and semantically sensible, but it is not a type
> that can feasibly be derived under the typing regime that I have proposed.
>
> The universe of derivable-by-inference function types is an inductive
> construction over compositions of well-formed program fragments. If all
> global function definitions have a native arity, then it is not possible to
> compose program fragments in such a way that their types compose to produce
> the type you propose.

Well, I'm not sure exactly what you mean, but I never claimed any of my
example types were the types of programs. I don't see why you refuse to talk
about them for that reason. Indeed, we are interested in partially
specialized types precisely because they arise by specialization. Are you
saying it's illegal to specialize the arity of the application in
(f 2 3 5) + 7 ?

> Intuition: Global functions, by design, have concrete arity. Local
> functions, by design, have concrete arity. Exercise: subject to these two
> constraints, give an example of a function s.t. (a) the function is
> arity-concrete as a consequence of the syntax of its definition, and (b) the
> function accepts no arguments incorporating [recursively over type
> constructors] an abstract function type, such that the function produces a
> result having an arity-abstract function type. If someone can produce such a
> function, then I am wrong and I need to re-think fundamentally.
>
> I submit that the set of "reachable" function types in real programs results
> from an inductive composition over well-formed program fragments, and that
> the composition does not yield a function type of the form you propose.
>
> Before you object: yes, this *really* needs to be verified. But I am
> guardedly confident that the constructive argument that I am making holds
> up.

I object anyway! I don't see why that needs to be verified, if well-formed
types make sense whether or not they arise.

>> >> > It seems to me that I made a mistake in my description, however. It
>> >> > seems to me that I should have spoken about unification rather than
>> >> > specialization.
>> >>
>> >> Which description are you referring to?
>> >
>> > I keep referring to this as specialization, and on reflection I think I
>> > should be saying unification.
>>
>> So I should do a "s/specialization/unification/g" on this whole
>> thread? Now that sounds like a find-and-replace accident.

> Rather: a conceptual error on my part that can be corrected by find and
> replace. ;-)

But kidding aside, I really don't know which uses of "specialization" you
think were misuses.
Like, I figure we still call it specialization when we turn an application
into the right sequence of calls. I figure programs get specialized, while
types get instantiated or unified.

>> >> But choose_one shows that that doesn't actually
>> >> work. What makes you think that choose_one-like problems are happy if
>> >> only the shallow arity can be constrained?
>> >
>> > Perhaps the fact that I didn't say that. The thing is: when the shallow
>> > arity is constrained, a new type variable for the remaining arity
>> > specializations is introduced, and it is required to specialize in the
>> > same way for all of the choose_one arguments, because they all have the
>> > same type.
>> >
>> > So in one sense, only the shallow arity is constrained. But the
>> > introduction of a commonly referenced 'newArity ensures that (a) deep
>> > arity is constrained only by syntactic compatibility, but (b) all
>> > arguments of choose_one must specialize in the same way w.r.t. arity.
>>
>> Umm. Maybe that answered my question. What about that 'a1, 'a2, 'a3
>> example I had? I guess we can talk about specifics more easily if you
>> explain how to unify function types in general.

> My apologies, but can you re-issue the example? Somewhere in the last 150
> messages I lost track. :-)

Here:

On Tue, Feb 24, 2015 at 9:31 PM, Matt Oliveri <[email protected]> wrote:
> What makes you think that choose_one-like problems are happy if
> only the shallow arity can be constrained?
>
> In particular, how do we know that choose_one's type
>
> fn 3 (fn 'ar a->b->c->d)->(fn 'ar a->b->c->d)->Bool->(fn 'ar a->b->c->d)
>
> doesn't unify with
>
> fn 3 (fn 1 a->fn 'a1 b->c->d)->(fn 1 a->fn 'a2 b->c->d)->Bool->
>   (fn 1 a->fn 'a3 b->c->d)
>
> aside from the fact that it makes no semantic sense?

I realize now that I used an old notation for arity-concrete function types
in that email, so I edited the quote.
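(And here is the mechanical answer I would expect, in the same toy Haskell
model as before; Subst and solveArity are my own hypothetical names, not
anything you've proposed. A substitution maps each arity variable to a
single solution, so the one 'ar occurring in all three argument positions of
choose_one's type must be solved the same way everywhere; independent 'a1,
'a2, 'a3 can't arise from solving a single variable. The same kind of
agreement check, Known 1 vs Known 2, is presumably also what rejects
fn 1 'a->'b against fn 2 'a->'b->'c.)

  import qualified Data.Map as M

  -- A substitution maps each arity variable to one solution.
  type Subst = M.Map String ArityAnn

  -- Solving a variable a second time must agree with the first
  -- solution; one variable cannot become 'a1, 'a2, and 'a3 at once.
  solveArity :: String -> ArityAnn -> Subst -> Maybe Subst
  solveArity v a s =
    case M.lookup v s of
      Nothing -> Just (M.insert v a s)  -- first occurrence: record it
      Just a'
        | a' == a   -> Just s           -- later occurrence: must agree
        | otherwise -> Nothing          -- e.g. Known 1 vs Known 2: fail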
