On 21 February 2015 at 11:18, Matt Oliveri <[email protected]> wrote:

> On Sat, Feb 21, 2015 at 5:16 AM, Keean Schupke <[email protected]> wrote:
> You just pretty much described dual (application-driven) arity
> specialization. Well done. But I thought it was you who said we'd
> handle the HOF case by specializing the HOF, if it's in the same
> module. The function whose arity we are thinking about is not the HOF,
> it's what the HOF calls. So specializing the HOF would be
> definition-driven, not application-driven. We are specializing the
> code for how the HOF calls its argument to suit the actual argument's
> definition.
>

I think that we can consider this specialisation to be just a form of
inlining.
Consider:

f :: Nat -> Nat -> Nat
f = ...

f 1 2

Applying f to 1 returns a function that we then apply to 2 to get the
result. Although this is safe (we don't allocate for lambda-accumulators),
it still involves two function calls. You could inline the first call if
you have access to the definition. But I now think this makes no
difference to type-checking; it's an implementation detail.
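
For concreteness, here is a minimal Haskell sketch of the two-call
reading and what inlining does to it (the body of f and the Nat alias
are my assumptions, for illustration only):

type Nat = Integer  -- assumed alias; the thread leaves Nat abstract

-- Explicit lambdas make each application visible:
f :: Nat -> Nat -> Nat
f = \x -> \y -> x + y  -- hypothetical body

twoCalls :: Nat
twoCalls = (f 1) 2  -- call 1 yields a closure; call 2 applies it to 2

-- With f's definition in scope, both applications beta-reduce away:
oneStep :: Nat
oneStep = (\x -> \y -> x + y) 1 2  -- ==> 1 + 2, no intermediate closure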

The only change I was suggesting is that, since we can do this inlining,
we may as well infer the most-curried type for all functions, as we know
the inlining will remove the extra applications later.
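
As a sketch of what inferring the most-curried type buys, assuming the
uncurried source type is modelled with a tuple (names are illustrative):

type Nat = Integer  -- assumed alias, as above

-- An uncurried definition and its most-curried view:
gPair :: (Nat, Nat) -> Nat
gPair (x, y) = x + y

gCurried :: Nat -> Nat -> Nat
gCurried = curry gPair  -- the promoted, most-curried type

-- A saturated use of the curried view inlines back to a single call:
use :: Nat
use = gCurried 1 2  -- curry gPair 1 2  ==>  gPair (1, 2)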


> Well, as long as we agree that the final program doesn't have any
> isolated coercions from one function arity to another. I see these
> coercions as the implementation of the subsumption rule, so no
> coercions means no subsumption. Of course I see that the way you do
> type checking in the first place would use the subsumption rule. Then
> subtyping triggers specialization by rewriting the derivation to not
> use subsumption anymore, essentially. That's how I understood your
> proposal. And my problem is that this idea is brand-new to me, and
> there seems to be a lot of flexibility (ambiguity) in the details,
> which you haven't been explaining very systematically.
>

Consider 'g' to be an implicit parameter:

import g :: Nat -> Nat -> Nat
main = g 1 2

main :: {g :: fn Nat Nat -> 'a} -> 'a

If we type main in isolation, treating 'g' as an implicit parameter,
we don't know its return type.

Now we can pass 'g' in implicitly using normal subsumption, so we don't
need anything other than the subtyping relation. No type coercions at all
(just the inlining optimisation to improve things).
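
For a concrete analogue, GHC's ImplicitParams extension expresses this
reading of a free variable as an implicit parameter (it cannot express
the arity subtyping itself, so 'g' just keeps its curried type here):

{-# LANGUAGE ImplicitParams #-}

type Nat = Integer  -- assumed alias, as before

-- The type records the requirement on the free variable 'g',
-- much like {g :: ...} -> 'a above:
result :: (?g :: Nat -> Nat -> Nat) => Nat
result = ?g 1 2

-- The implicit parameter is discharged at the use site:
useResult :: Nat
useResult = let ?g = (+) in result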


> No, I don't think you do. You know what functions need to be
> specialized from the combination of
> 1. Their type
> 2. The type of actual arguments they get applied to
>
> That's not quite as simple.
>

Hmm, this is just type inference.


> > I am obviously not good at explaining this.
>
> Do you say that because I said something unequivocally wrong about
> your proposal, or are you just agreeing that I don't fully understand
> it yet?
>

I am saying this because I feel you don't fully understand it. I feel
some of your criticisms miss the mark because I have not clearly
explained what I mean.


> >> It's clear from what you said now that your proposal is truly
> >> different from Shap's. So that answers your question. Now my question
> >> is: What do you think are its advantages and disadvantages. This will
> >> give people perspective about your decisions. (Or allow them to reject
> >> it out of hand.)
> >
> > Advantages are greater efficiency (fewer function applications).
>
> So I would attribute that to the combination of definition-driven and
> application-driven specialization. Is there more to it than that?
>

Perhaps there is no difference apart from the type system? Perhaps I am
just trying to understand this from my personal experience. If you remove
the inlining of applications and the inference of the most-curried type
for the definition, it now looks the same to me.


> >> As far as the example, I did not break the code there across modules
> >> since that's not what confuses me about Shap's proposal. But with your
> >> proposal, the examples you should give are the various cases of
> >> specialization which apparently hinge on where the functions come
> >> from.
> >
> > It's simple: you specialise the definition where you can (you have
> > access to the source), otherwise you cannot and you have to use the
> > definition you actually have imported (say from 'C').
>
> That's still ambiguous, I think. We need to be absolutely positive
> that we reject all cases where specialization won't eliminate all
> instances of subsumption. The difficult task you have burdened
> yourself with is convincing Shap that you know all those cases.


Considering free variables as implicit arguments allows the use of normal
subtyping subsumption, so we can be sure that all applications are sound.

This just leaves inferring the most-curried type to give the most
flexibility, which simply relies on the ability to inline applications.
It should be obvious that you can inline an application when the function
definition is local to the same module, as it's just beta-reduction.
Maybe you can beta-reduce imported functions too (if BitC allows it), but
you cannot for foreign functions at least, so you want to keep their
exact type and not promote it to the most-curried type.
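
A minimal sketch of that distinction, with 'opaque' standing for an
imported or foreign function whose body is unavailable (the names are
hypothetical):

type Nat = Integer  -- assumed alias, as before

-- Local definition: the body is visible, so a saturated application
-- beta-reduces away entirely:
localAdd :: Nat -> Nat -> Nat
localAdd x y = x + y

useLocal :: Nat
useLocal = localAdd 1 2  -- inlines to 1 + 2; no applications remain

-- Opaque function: only its type is known, so the applications must
-- stay real calls, and we keep its exact (unpromoted) type:
useOpaque :: (Nat -> Nat -> Nat) -> Nat
useOpaque opaque = opaque 1 2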


Keean.
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
