On Wed, Feb 25, 2015 at 9:03 AM, Jonathan S. Shapiro <[email protected]> wrote:
> On Tue, Feb 24, 2015 at 11:25 PM, Matt Oliveri <[email protected]> wrote:
>>
>> On Wed, Feb 25, 2015 at 12:28 AM, Jonathan S. Shapiro <[email protected]>
>> wrote:
>>
>> > The thing is: application proceeds left-to-right. It follows that
>> > the deep specialization is necessarily an inductive consequence of
>> > the left-to-right shallow specialization. The difference between the
>> > two is that the shallow specialization can be pursued before the
>> > program is fully concrete.
>>
>> You seem to be assuming that if we use deep arities, we wouldn't be
>> able to shallow specialize without fully knowing the deep arity. I
>> don't see why that must be. The tail of a list is a list, after all.
>> (1 :: 'arity) is an arity.
>
> I don't think I'm assuming that. In any case I do not intend to. I suppose I
> just prefer to proceed inductively from the left and make progress as things
> get instantiated and/or unified.

Well, the thing I like about deep arities is that if we have (fn 'arity
'a->'b->'c) and we instantiate 'arity with (1 :: 'newArity), we get
(fn 1 'a -> fn 'newArity 'b->'c), and now there's no mystery about how
'newArity got into the function type.
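
Just to pin down what I mean, here's a throwaway value-level model in
Haskell (the encoding and names are mine, not a proposal for any real
syntax): a deep arity is the list of shallow arities of the successive
calls, and instantiating its head just peels off the first call's
arguments.

  -- Toy model: a function type carries a deep arity, i.e. the list of
  -- shallow arities of its successive calls.
  data Ty = TVar String
          | Fn [Int] [Ty] Ty   -- Fn deepArity argumentTypes resultType
          deriving (Eq, Show)

  -- A deep arity (n : rest) means the first call takes n arguments and
  -- returns a function whose deep arity is exactly rest -- the tail of
  -- a deep arity is itself a deep arity.
  firstCall :: Ty -> Maybe (Int, [Ty], Ty)
  firstCall (Fn (n:rest) args res) =
    let (now, later) = splitAt n args
        residual     = if null rest then res else Fn rest later res
    in Just (n, now, residual)
  firstCall _ = Nothing

  -- The example above, with 'newArity further instantiated to [1]:
  -- fn (1 :: [1]) 'a -> 'b -> 'c  peels into  fn 1 'a -> (fn [1] 'b -> 'c)
  example :: Maybe (Int, [Ty], Ty)
  example = firstCall (Fn [1, 1] [TVar "a", TVar "b"] (TVar "c"))
  -- ==> Just (1, [TVar "a"], Fn [1] [TVar "b"] (TVar "c"))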

What I've been trying to say about your notation is: I think I could
write down the rules for arity-abstract types using deep arities. I
have no idea how to do it with shallow arities. It seems to me that,
although we are discussing types using shallow arities, the only
reason things seem to be working is that intuition is smoothing over
logical gaps. Either that, or you're so comfortable with unification
tricks that you're inadvertently leaving me in the dark.

> You keep asking me where the 'newArity comes from. The answer is that 'arity
> is not an arbitrary type variable.

Technically, it's not a type variable at all; it's a Nat variable.

> It's a type variable in a specific
> position of the fn type constructor. As a consequence we know that it is
> actually 'newArity::Nat (that is: it has Nat kind), and we know that we can
> in general rewrite
>
> 'SomeNat
>
> as
>
> 1 + 'SomeOtherNat
>
>
> so long as 'SomeNat > 0. Which we in turn know structurally because we are
> looking at an arrow type, and all arities are greater than zero.

This doesn't help. Obviously 'newArity is a nonzero Nat, as is any
shallow arity for a function. Since you don't seem to understand my
concern, let's end this branch of the discussion. My question will
probably be answered by how you handle the 'a1, 'a2, 'a3 example.
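
(For the record, the decomposition itself was never the issue; in a
throwaway Haskell encoding -- names are mine -- it is just:

  -- Shallow-arity terms: a literal, a Nat-kinded variable, or the
  -- Peano-style "1 + rest".
  data Arity = ALit Int | AVar String | APlus1 Arity deriving (Eq, Show)

  -- Any arity known to be nonzero can be rewritten as 1 + something: a
  -- literal decomposes into 1 + (n - 1), and a variable decomposes by
  -- introducing a variable for the remainder (not genuinely fresh here,
  -- but fine for a sketch).
  decompose :: Arity -> Maybe Arity
  decompose (ALit n) | n > 0 = Just (APlus1 (ALit (n - 1)))
  decompose (AVar v)         = Just (APlus1 (AVar (v ++ "_rest")))
  decompose a@(APlus1 _)     = Just a
  decompose _                = Nothing

So the mechanics aren't in dispute.)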

> The types are syntactically well-formed, but not semantically valid. My
> claim is that types like fn 1 int -> 'a can only appear in underspecified
> programs.
>
> However, I may have been wrong about my constructive argument, so let's set
> it aside. Behind the scenes I was trying to answer how 'newArity got
> introduced.

Well, it's probably for the best that I didn't understand that answer,
'cause I don't want the only justification to depend on particulars of
inference.

> I think the explanation I give above (Peano decomposition) is a
> simpler answer.

I didn't see how it explains anything until I got to the example below.

>> >> Umm. Maybe that answered my question. What about that 'a1, 'a2, 'a3
>> >> example I had? I guess we can talk about specifics more easily if you
>> >> explain how to unify function types in general.
>> >
>> > My apologies, but can you re-issue the example? Somewhere in the
>> > last 150 messages I lost track. :-)
>>
>> Here:
>> On Tue, Feb 24, 2015 at 9:31 PM, Matt Oliveri <[email protected]> wrote:
>> > What makes you think that choose_one-like problems are happy if
>> > only the shallow arity can be constrained?
>> >
>> > In particular, how do we know that choose_one's type
>> > fn 3 (fn 'ar a->b->c->d)->(fn 'ar a->b->c->d)->Bool->(fn 'ar a->b->c->d)
>> > doesn't unify with
>> > fn 3 (fn 1 a->fn 'a1 b->c->d)->(fn 1 a->fn 'a2 b->c->d)->Bool->(fn 1
>> > a->fn 'a3 b->c->d)
>> > aside from that it makes no semantic sense?
>>
>> I realize now that I used an old notation for arity-concrete function
>> types in that email, so I edited the quote.
>
> It CAN unify that way.

Oh yeah. 'a1, 'a2, and 'a3 can all be the same. But I meant: how do we
know they have to be the same? It looks like you knew what I meant.

> But all of the original types had the same arity type variable 'ar,
> so all of the decompositions of 'ar have to result in the same arity
> type variable 'newArity. When the unification above is done, the 'a1,
> 'a2, 'a3 variables will have unified, and the *result* type will
> therefore be
>
> fn 3 (fn 1 a->fn 'newArity b->c->d)->(fn 1 a->fn 'newArity b->c->d)->Bool->(fn 1 a->fn 'newArity b->c->d)

That's what it should be, all right. But your argument makes no sense
to me: 'a1, 'a2, and 'a3 do not arise by decomposing 'ar. After all,
in this case 'ar turns out to be 1. Oh, unless you meant for it to be
(1 + 'newArity). Wow, what awful syntax.

Please confirm if 'ar is supposed to instantiate to (1 + 'newArity).
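
If so, here's how I'd reconstruct the 'a1, 'a2, 'a3 claim, as a
throwaway substitution-based sketch in Haskell over arity terms alone
(my encoding, repeated so the snippet stands by itself; no occurs
check, and the rest of the type structure is left out):

  -- Arity terms: a literal, a Nat variable, or the Peano-style 1 + rest.
  data Arity = ALit Int | AVar String | APlus1 Arity deriving (Eq, Show)

  type Subst = [(String, Arity)]

  -- Chase variable bindings through the substitution.
  walk :: Subst -> Arity -> Arity
  walk s (AVar v)   = maybe (AVar v) (walk s) (lookup v s)
  walk s (APlus1 a) = APlus1 (walk s a)
  walk _ a          = a

  -- First-order unification of arity terms (sketch only).
  unifyA :: Subst -> Arity -> Arity -> Maybe Subst
  unifyA s a b = go (walk s a) (walk s b)
    where
      go (AVar v) (AVar w) | v == w     = Just s
      go (AVar v) t                     = Just ((v, t) : s)
      go t (AVar v)                     = Just ((v, t) : s)
      go (ALit m)   (ALit n)   | m == n = Just s
      go (APlus1 t) (ALit n)   | n > 0  = unifyA s t (ALit (n - 1))
      go (ALit n)   (APlus1 t) | n > 0  = unifyA s (ALit (n - 1)) t
      go (APlus1 t) (APlus1 u)          = unifyA s t u
      go _ _                            = Nothing

  -- All three parameter positions of choose_one carry the *same* 'ar,
  -- so unifying 'ar against (1 + 'a1), (1 + 'a2), (1 + 'a3) in turn
  -- forces 'a1, 'a2, 'a3 to be identified with one another.
  demo :: Maybe Subst
  demo = do
    s1 <- unifyA [] (AVar "ar") (APlus1 (AVar "a1"))
    s2 <- unifyA s1 (AVar "ar") (APlus1 (AVar "a2"))
    unifyA s2 (AVar "ar") (APlus1 (AVar "a3"))
  -- ==> Just [("a2",AVar "a3"),("a1",AVar "a2"),("ar",APlus1 (AVar "a1"))]
  -- i.e. 'ar = 1 + 'newArity, with 'a1, 'a2, 'a3 all standing for the
  -- same 'newArity.

If that's roughly what's going on, then I can see how the result type
ends up with a single 'newArity in all three positions.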

> The shallow arity thing is deceptively powerful. The Peano-style
> decomposition ensures that arity agreement is preserved over the entire
> arity-abstract function type even though the instantiations are done
> stepwise.

If you did mean that 'ar instantiates to (1 + 'newArity), then it's
very misleading to call it shallow arity, since it's not necessarily
the arity of any function call. I would call it total arity, since
it's the sum of the shallow arities in the deep arity.
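
In that vocabulary (mine, nothing established), the bookkeeping is just:

  -- If a deep arity is the list of per-call shallow arities, then the
  -- shallow arity is the arity of the first call only, and the total
  -- arity is their sum.
  shallowArity :: [Int] -> Int
  shallowArity = head   -- partial, but arities are nonempty by construction

  totalArity :: [Int] -> Int
  totalArity = sum

  -- e.g. for deep arity [1, 2, 1]: shallowArity = 1 but totalArity = 4,
  -- and totalArity (d : ds) == d + totalArity ds, which is the
  -- "1 + 'newArity" reading when d == 1.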