On Fri, Mar 13, 2015 at 11:06 AM, Jonathan S. Shapiro <[email protected]> wrote:
> On Thu, Mar 12, 2015 at 11:18 PM, Matt Oliveri <[email protected]> wrote:
>> On Fri, Mar 13, 2015 at 12:57 AM, Jonathan S. Shapiro <[email protected]>
>> wrote:
>> > Matt:
>> > I'm not sure why the afn types need to take an arity variable.
>>
>> Well in this case you're right, the arity variables don't say anything
>> interesting. But in choose_one, multiple afns would use the same arity
>> variable.
>
> I still don't think arity variables are useful in choose_one, and I'm going
> to walk through that case concretely below. I think that in any case where
> they *would* be useful, the "call arrow" (the one I've been writing as '=>')
> solves this as well.
Nope, I still don't buy it. I give an example where it breaks below.
>> > Another way to write this might be:
>> >
>> > a->b->c=>d->e->f=>r
>> >
>> > where the => arrows indicate required concrete call boundaries. This is
>> > a
>> > different (and I think, clearer) way of writing what I was trying to say
>> > with the "positional" idea.
>>
>> This notation is pretty nice. But are there actually multiple arrow
>> types there? For example, would it be parenthesized like this?
>> a->(b->(c=>(d->(e->(f=>r)))))
>
>
> My intention was that the parenthesization should be
>
> a->(b->(c=>(d->(e->(f=>r)))))
>
> as you wrote above.
I was starting to think so. Your mixed arrow scheme is actually quite
different from my deep arity (cfn/afn) scheme then.
> As to whether it's a new arrow type, the answer is "yes
> and no". It is new in the sense that it provides an annotation of something
> that we weren't tracking before (the call boundary). It is *not* new in that
> the logic and type theory of the -> arrow apply equally well to the =>
> arrow.
Actually, by "multiple arrow types" I really just meant multiple
subexpressions which are types formed by "->". The answer was yes. I
asked because for afns, the answer to the analogous question would be
no.
To explain the major difference between your mixed arrow types and my
afn types, as I see it:
With mixed arrows, function types are abstractly presented all split
up, and concretizing can put some things back together. With afns,
function types are abstractly presented all in big groups, and
concretizing can split groups up.
Well, that's the _other_ major difference; the main one is that mixed arrows
don't have an arity parameter.
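To make that contrast concrete, here's a rough sketch in OCaml of the two
shapes (hypothetical names, not a proposal for BitC's actual representation):

  (* Mixed arrows: a chain of binary arrows, each soft (->) or hard (=>).
     Concretizing means deciding how the remaining soft arrows group. *)
  type arrow = Soft | Hard
  type mixed_ty =
    | MVar of string                          (* 'a, 'b, ... *)
    | MArrow of arrow * mixed_ty * mixed_ty

  (* Afn style: one big argument group plus an arity variable.
     Concretizing means splitting the group. *)
  type afn_ty =
    | AVar of string
    | AFn of string * afn_ty list * afn_ty    (* arity var, args, result *)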
> One way to think about the => arrow is that it is really a -> arrow with an
> attached annotation. Another way to think about it is that it is -> arrow
> with a "no-implicit-closure" effect attached. That is: if a -> arrow
> survives to be "executed", it will need an implicit closure construction,
> while a => arrow will not.
Errm, that's a weird way to put it. We never allow implicit closures.
I was thinking the "=>" is like a "->" but you can't pass through it
when merging functions into a cfn. So "=>" is a mandatory function
type, and "->" is optional. So when combining constraints, "=>" wins.
> The annotation/effect intuition feels right in a behavioral sense as well.
> They accumulate in set-union fashion. So if unification occurs between two
> types
>
> a->b->c=>d->r
> a=>b->c->d->r
>
>
> the => always replaces -> in the result, giving:
>
> a=>b->c=>d->r
Yes, this is a good rule. Mixed arrow notation makes it pretty clear,
which is good, since I think afns actually need something analogous,
but the fact is much less obvious on that side of the analogy.
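Just to pin the rule down, a minimal sketch (OCaml, a hypothetical encoding
that only tracks the arrow spine) of "=> wins":

  type arrow = Soft | Hard                    (* -> vs => *)

  (* When two function types meet, => replaces -> at each position. *)
  let merge_arrow a b =
    match a, b with
    | Hard, _ | _, Hard -> Hard
    | Soft, Soft -> Soft

  (* a->b->c=>d->r  meets  a=>b->c->d->r  giving  a=>b->c=>d->r *)
  let () =
    let lhs = [Soft; Soft; Hard; Soft]
    and rhs = [Hard; Soft; Soft; Soft] in
    assert (List.map2 merge_arrow lhs rhs = [Hard; Soft; Hard; Soft])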
> This has the same consequence as the arity variable, without introducing
> heavy notation. Or at least: I think it does.
No, I don't think it's enough.
> I don't think that I'm stating the effect intuition correctly above, but it
> seems close. I'd kind of like to be able to wave my hands and say "well,
> this is just a sugaring for that effect intuition over there", because doing
> so would put us back into a known type theory.
Whatever kind of constraint it is, constraints are generally not
enough to guarantee abstract types will concretize the same. You can
get lucky sometimes, and you did get lucky, as I'll explain.
>> If so, then it seems strange to change the "a->b->c =>" part to "cfn a
>> b c =>", as you propose below.
>
> I don't think it's strange, but it might seem more comfortable if I write it
> differently. From a type perspective, the substitution is between:
>
> a->b->...x->y=>r
> cfn a b ... x y =>r
>
> That is: we don't (on reflection) require a => on the LHS of the first type.
> In some sense, the => serves as a marker at which the two meeting types get
> aligned in column-like fashion. The stuff on the left and right of the =>
> then "meet" recursively.
>
> But we could equally well write that as
>
> (a->(b->(c=>r)))
> cfn a b c => r
>
> That is: the parenthesis don't alter our ability to substitute. The crucial
> thing here is that we are cleanly substituting one syntactically complete
> expression for a second one.
I think I knew all that, so it didn't help. It's not like I thought
the substitution was nonsensical, just weird. But I now realize that
it's not really much weirder than afns. Just that the weirdness works
in the other direction, like I was saying above.
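To state the correspondence mechanically: reading each => as a group
boundary turns a mixed-arrow type into its cfn grouping. A throwaway sketch
(hypothetical OCaml, spine-only again):

  type arrow = Soft | Hard

  (* Group the arguments at the hard arrows that follow them, giving the
     cfn-style argument groups. *)
  let group_args args arrows =
    let rec go acc group = function
      | [], [] ->
          if group = [] then List.rev acc
          else List.rev (List.rev group :: acc)
      | x :: xs, Soft :: ar -> go acc (x :: group) (xs, ar)
      | x :: xs, Hard :: ar -> go (List.rev (x :: group) :: acc) [] (xs, ar)
      | _ -> invalid_arg "group_args: length mismatch"
    in
    go [] [] (args, arrows)

  let () =
    (* a->(b->(c=>r))  corresponds to  cfn a b c => r : one group *)
    assert (group_args ["a"; "b"; "c"] [Soft; Soft; Hard] = [["a"; "b"; "c"]]);
    (* a->b=>c=>r  would give  cfn a b => cfn c => r : two groups *)
    assert (group_args ["a"; "b"; "c"] [Soft; Hard; Hard] = [["a"; "b"]; ["c"]])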
>> This mixed arrow notation does seem to solve the apply2 problem. But
>> it's not clear how it works for choose_one. For choose_one, we'd want
>> to enforce that multiple "a->b->c =>" groups have your rewrites 1 and
>> 2 above done to them in the same way, to get the same concrete
>> arities...
>
> Yes. And I *think* that the rule for substitution that I stated above does
> that. Let's work a concrete example:
>
> def f1 a = lambda b c { 1 } // cfn int => cfn int int => int
> def f2 a b = lambda c { 1 } // cfn int int => cfn int => int
> def f3 a b c = a // cfn int int int => int
>
>
> Barring a typo, I think we can agree that these are mathematically identical
> functions with different concrete arities. That is my intent.
Well it kind of looks like f1 and f2 always return 1 while f3 returns
its first argument. Luckily it doesn't really matter what any of those
functions do.
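For contrast, note that in a plain HM language the distinction is invisible.
An OCaml rendering (just to illustrate; OCaml, not BitC):

  let f1 (_ : int) = fun (_ : int) (_ : int) -> 1
  let f2 (_ : int) (_ : int) = fun (_ : int) -> 1
  let f3 (a : int) (_ : int) (_ : int) = a
  (* OCaml infers int -> int -> int -> int for all three; the differing
     concrete arities are exactly the extra information BitC wants. *)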
> Now let's look at pseudo-code for choose_one:
>
> def choose_one x y z {
> if random_true_false ()
> x
> elseif random_true_false()
> y
> else
> z
> }
> choose_one:: cfn 'a 'a 'a => 'a
>
>
> And now let's look at the type resolutions when we try to call this as
>
> choose_one f1 f2 f3
>
>
> Note that there exists *some* order in which the formal parameter types
> "meet" and are checked/resolved against the actual parameters. Choosing
> arbitrarily, let's check f1 =~ x first:
>
> cfn int => cfn int int => int =~ 'a
>
>
> so now we have 'a = (cfn int => cfn int int => int), and we check
> (arbitrarily) f2 =~ a next:
>
> cfn int int => int =~ ('a = (cfn int => cfn int int => int))
>
>
> These two types are not compatible, so the application (correctly) fails to
> type check. In fact, this is what has puzzled me about the choose_one
> example; it has always seemed to me that the argument types unify and so
> there can be no failure to detect concrete incompatibility at
> specialization.
Shap, this example is way too easy. Of course if you have
arity-concrete function types, it'll work out. That is not in any way
a test of your arity-abstract function types. Here's the example I
think you meant, which still happens to work, because you're lucky:
The f's are unknown functions with arity-abstract types:
f1 : int=>int->int->int
f2 : int->int=>int->int
f3 : int->int->int=>int
When we pass them to choose_one, the instance of choose_one's type becomes:
cfn (int=>int=>int=>int) (int=>int=>int=>int) (int=>int=>int=>int) ->
(int=>int=>int=>int)
because we combine the stricter => arrows which variously appear in
each position. Well, (int=>int=>int=>int) is actually arity-concrete
after all; it's the same as
cfn int=>cfn int=>cfn int=>int
so we're OK: f1, f2, and f3 must turn out to have that concrete arity.
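Spelled out with the same toy spine encoding as before (hypothetical OCaml,
arrows only):

  type arrow = Soft | Hard
  let merge a b = match a, b with Hard, _ | _, Hard -> Hard | _ -> Soft

  let () =
    let f1 = [Hard; Soft; Soft]       (* int=>int->int->int *)
    and f2 = [Soft; Hard; Soft]       (* int->int=>int->int *)
    and f3 = [Soft; Soft; Hard] in    (* int->int->int=>int *)
    (* every position comes out Hard, i.e. int=>int=>int=>int *)
    assert (List.map2 merge (List.map2 merge f1 f2) f3 = [Hard; Hard; Hard])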
Where you get in trouble is whenever the constraints don't happen to
add up to a concrete arity. Example is coming up.
> I think it's also interesting to look at apply_one. Reusing f1, f2, f3 from
> above, here is apply_one:
>
> def apply_one x y z a b c {
> if random_true_false ()
> x a b c
> elseif random_true_false()
> y a b c
> else
> z a b c
> }
> apply_one:: cfn ('a->'a->'a->'r) ('a->'a->'a->'r) ('a->'a->'a->'r) 'a 'a 'a
> => 'r
>
> This case is interesting because the three function argument types do NOT
> unify. And *because* they do not unify, the application
>
> apply_one f1 f2 f3 1 2 3
>
> type checks (and specializes) correctly, which is what we want.
I agree. With afns too, x, y, and z would not get unified. So each afn
would have a fresh arity variable inferred for it. Here's an
example that's really funny because in HM it'd be functionally
equivalent to apply_one, but the function types do unify, and this
means that it's not even the same type in BitC.
def not_apply_one f1 f2 f3 a b c = (choose_one f1 f2 f3) a b c
See there's only one application to specialize. Oh well. The thing
that's so funny about this example is that it's also the example that
breaks your scheme.
Here's the type I'd give it:
not_apply_one : cfn (afn 'ar 'a 'b 'c->'r) (afn 'ar 'a 'b 'c->'r)
                    (afn 'ar 'a 'b 'c->'r) 'a 'b 'c -> 'r
The three function arguments must have the same arity, as you can see.
Here's the type I suppose you'd give it:
not_apply_one : cfn ('a->'b->'c=>'r) ('a->'b->'c=>'r) ('a->'b->'c=>'r)
                    'a 'b 'c => 'r
You (correctly) do not infer an arity-concrete type for f[123], since
there is no reason to do so. But what's wrong with that type? Well,
AFAICT, the arguments do _not_ need to have the same arity! We can
instantiate each one separately, like:
cfn (cfn 'a 'b 'c=>'r) (cfn 'a 'b=>cfn 'c=>'r) (cfn 'a=>cfn 'b 'c=>'r)
'a 'b 'c => 'r
Now we use functions of different arities for f1, f2, and f3. Oops.
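To see how much slack that type leaves, here's a sketch (same hypothetical
spine encoding) that enumerates the arity-concrete instances of
'a->'b->'c=>'r; each soft arrow may independently become a call boundary:

  type arrow = Soft | Hard

  (* Every -> may stay grouped or become a boundary; every => already is one. *)
  let rec concretize = function
    | [] -> [[]]
    | Hard :: rest -> List.map (fun s -> Hard :: s) (concretize rest)
    | Soft :: rest ->
        let tails = concretize rest in
        List.map (fun s -> Soft :: s) tails
        @ List.map (fun s -> Hard :: s) tails

  let () =
    (* 'a->'b->'c=>'r has spine [Soft; Soft; Hard]; it admits four concrete
       arities: (3), (2,1), (1,2), and (1,1,1). *)
    assert (List.length (concretize [Soft; Soft; Hard]) = 4)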
> Well, how about a case where an argument (therefore arity-abstract) function
> gets applied with different numbers of arguments in different places:
>
> def weird_apply f a b c {
> if random_true_false ()
> let tmp = f a in
> tmp b c
> else
> let tmp2 = f a b in
> tmp2 c
> }
>
>
> In this case we are applying the *same* function, so its type unifies
> internally. Let's walk through this one with some care:
>
> The application f a leads us to infer the type 'a => 'b for f.
> The application f a b leads us to infer the type 'c -> 'd => 'e for f
> These two types unify, giving 'b =~ ('d=>'e) and therefore 'a => 'd => 'e
> for f.
> The application tmp b c leads us to infer 'f -> 'g => 'r1 for tmp
> The application tmp2 c leads us to infer 'h => 'r2 for tmp2
> The type for tmp unifies with the return type 'b of f, giving:
>
> 'b =~ ('d => 'e)
> 'b =~ ('f -> 'g => 'r1)
> 'd =~ 'f,
> 'e =~ ('g => 'r1)
>
> typeof(f) = 'a => 'd => 'g => 'r1
>
> The type of tmp2 unifies with the return type 'e of f, giving
>
> 'e ~= ('h=>'r2)
>
> 'g =~ 'h
>
> 'r1 =~ 'r2
>
> All of this unifies with the argument variables, giving:
>
> 'arg_a =~ 'a
>
> 'arg_b =~ 'c
>
> 'arg_c =~ 'd
>
>
> And finally, we arrive at:
>
> weird_apply:: cfn ('arg_a => 'arg_b => 'arg_c => 'r) 'arg_a 'arg_b 'arg_c =>
> 'r
>
>
> which is correct.
I haven't actually read through your steps carefully, but your answer
seems right. This example does not involve a join, like choose_one, so
there is no problem for your scheme. Even if it did, in this case, the
constraints combine to dictate the concrete arity (modulo type
variables), so there'd still be no problem, as I pointed out above.
And then I pointed out not_apply_one. How do you explain that?
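P.S. For what it's worth, plain OCaml (so, just HM with no arity tracking)
agrees with where your weird_apply walkthrough lands:

  let weird_apply f a b c =
    if Random.bool () then
      let tmp = f a in tmp b c
    else
      let tmp2 = f a b in tmp2 c
  (* inferred: ('a -> 'b -> 'c -> 'd) -> 'a -> 'b -> 'c -> 'd, i.e. the same
     shape as your cfn ('arg_a => 'arg_b => 'arg_c => 'r) ... => 'r, minus
     the arity information. *)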
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev