On Tue, Feb 24, 2015 at 4:19 PM, Matt Oliveri <[email protected]> wrote:

> Here's what I think the situation is in this discussion:
> - Matt (me), Shap, and William are skeptical that subtyping can be
> implemented without introducing allocations.
>

I don't agree.

Whether we think of this as subtyping or as inequality constraints over a
Nat kind isn't the issue. In the end, if you exhaustively enumerate legal
unifications/specializations, Keean's model and mine arrive at the same
concrete results. So I don't think the issue has to do with whether the
mental model is constraints or subtypes.

The issue as I see it - at least up to my current understanding of Keean's
proposal - is that the rewrites Keean seems to want to do are only legal
when the native arities involved permit them. When they do, fine. My
difficulty with his examples so far is that he never states what his native
arity assumptions are, nor how the allocation strategy for the lambdas his
proposal injects ensures that heap allocation need not occur.

To be clear: I agree that given the right primitives some of the rewrites
he proposes are fine, and I think that we *could* build the right
primitives to support them. In particular, and assuming the wrong things
don't escape, we can re-write:

     f x y  // for f having arity 2
to  (f x) y


when two arguments are actually present. But I'm not sure what purpose this
rewrite serves given that the arguments are in hand.
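
Concretely, here's a minimal C-flavored sketch of what I mean (the
function and the lowering are mine, not from Keean's examples). When both
arguments are in hand, the intermediate partial application never escapes,
so the lowering collapses back to the direct arity-2 call:

    /* f has native arity 2. */
    int f(int x, int y) { return x + y; }

    /* "f x y": the direct call. */
    int direct(void)    { return f(3, 4); }

    /* "(f x) y": the partial application is consumed immediately and
       never escapes, so it can be flattened to the same direct call --
       no closure record, no allocation. */
    int flattened(void) { return f(3, 4); }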

What we *can't* do, again assuming that f has native arity 2, is
transparently accept

     f x


without materializing a closure. The concern is that there are a whole
bunch of ways in which we could inadvertently end up at that rewrite
through successive optimizations.
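
Extending the same sketch (again with invented names): accepting a bare
f x means capturing x in a partial-application record, and once that
record can escape, the capture is a heap allocation:

    #include <stdlib.h>

    int f(int x, int y) { return x + y; }

    /* A partial-application record for f: captures x, waits for y. */
    struct pap { int (*apply)(const struct pap *, int); int x; };

    static int f_apply(const struct pap *p, int y) { return f(p->x, y); }

    /* The lowering of a bare, escaping "f x": capturing x forces the
       allocation this whole discussion is trying to avoid. */
    struct pap *partial_f(int x) {
        struct pap *p = malloc(sizeof *p);
        p->apply = f_apply;
        p->x = x;
        return p;
    }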

> - Keean doesn't see how subtyping would make it any harder to avoid
> introducing allocations.
>

Per above, I agree.


> - Pal has joined in, but I'm not sure what his take is.
>

:-)


> But if I'm right, can we stop talking about subtyping for a while, and
> tie up the other loose ends? The ones that are on my mind are
> 1) deep vs. shallow arity variables vs. type constraints
> 2) application-driven specialization without using subtyping
>

We need to tie up these loose ends regardless. I attempted to give you an
answer about deep vs. shallow arity variables, but I'm not sure if it
helped.

I have yet to see a story for application-driven specialization that makes
any sense to me. I'm sorry, because I know you have tried to put together a
coherent description, but it isn't penetrating for me. I *think* my ground
problem is that if application drives arity specialization, then we have to
be able to re-write (i.e. instantiate) the function definitions at the
selected arities. The function definitions aren't necessarily available to
us in the form of an AST, so I don't see how that's possible.

Correction: I can see how to make function definitions available in the
form of an AST (thereby enabling instantiation), but for fully concrete
functions I think the cost of this is more than we want.
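
To illustrate what I mean by instantiation, one more C-flavored sketch
(the definition here is invented). Suppose the curried source definition
is \x -> \y -> x * y + 1:

    /* With the body available as an AST, the compiler can emit a
       native arity-2 instantiation directly: */
    int g_arity2(int x, int y) { return x * y + 1; }

    /* If g was instead compiled separately at native arity 1 -- i.e.
       returning a closure -- there is no body left to re-emit, and the
       best we can do is a wrapper that still runs the closure
       protocol. */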


shap
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev