On Sat, Feb 21, 2015 at 5:16 AM, Keean Schupke <[email protected]> wrote:
> On 21 February 2015 at 09:45, Matt Oliveri <[email protected]> wrote:
>>
>> No, that part wasn't wrong. The HOF will call its argument with more
>> arguments than it requires.
>
> I think we are still misunderstanding each other. I think you are trying to
> view everything from the point of view of types flowing from definitions to
> applications (via the environment).
>
> I am suggesting the reverse: types flow from applications to definitions.
> Forget subtyping for now; suppose we infer a concrete type at every
> application site. We then gather all these concrete application types
> together at the definition site and emit a specialised version of the
> definition for each distinct application arity. As long as we can generate
> a specialised version of the definition, we want type-checking to pass.
>
> All we need to do is define a relation on types that is sound for the above
> semantics, that is, a relation that, given the abstract-arity type of the
> definition, only admits types that we can specialise the definition for.

You just pretty much described dual (application-driven) arity
specialization. Well done. But I thought it was you who said we'd
handle the HOF case by specializing the HOF, if it's in the same
module. The function whose arity we are thinking about is not the HOF,
it's what the HOF calls. So specializing the HOF would be
definition-driven, not application-driven: we are specializing how the
HOF calls its argument to suit the actual argument's definition.
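
To pin down the distinction, here is a minimal sketch in Haskell
notation (all names are mine; in Haskell source the two application
shapes are the same term, so read the arity comments as claims about
the compiled code, not the surface syntax):

    -- Application-driven: gather the concrete arity at each call site
    -- and emit a specialised body of the definition per arity.
    add3 :: Int -> Int -> Int -> Int
    add3 x y z = x + y + z

    saturated :: Int
    saturated = add3 1 2 3    -- arity-3 site: wants an uncurried,
                              -- three-argument specialisation

    partial :: Int -> Int
    partial = add3 1 2        -- arity-2 site: wants a specialisation
                              -- that returns a closure

    -- Definition-driven (the HOF case): the HOF applies its argument
    -- at arity 2, but the actual argument is defined at arity 1.
    hof :: (Int -> Int -> Int) -> Int
    hof f = f 10 20           -- call site written at arity 2

    g :: Int -> (Int -> Int)
    g x = \y -> x + y         -- definition written at arity 1

    -- Specialising hof for g means compiling (f 10 20) as ((f 10) 20),
    -- to suit g's definition rather than the call site.
    use :: Int
    use = hof g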

>> > It's that the application defines the concrete type, so we specialise
>> > the higher-order function according to the type passed, and this always
>> > makes it less curried.
>>
>> You mean it makes the applications in the HOF less curried? Yes, this
>> is what I originally thought you intended. To specialize. But again,
>> one doesn't normally use avoidance of the subsumption rule of subtyping
>> as a specialization mechanism. At least, I've never heard of that.
>
> I don't think I am avoiding using the subsumption rule... The relation on
> types just admits the valid types; that we use specialisation semantics
> doesn't affect the typing. The type system is just a model of what we want
> to accept.

Well, as long as we agree that the final program doesn't have any
isolated coercions from one function arity to another. I see these
coercions as the implementation of the subsumption rule, so no
coercions means no subsumption. Of course I see that the way you do
type checking in the first place would use the subsumption rule. Then
subtyping triggers specialization by rewriting the derivation to not
use subsumption anymore, essentially. That's how I understood your
proposal. And my problem is that this idea is brand-new to me, and
there seems to be a lot of flexibility (ambiguity) in the details,
which you haven't been explaining very systematically.
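
Concretely, by an "isolated coercion" I mean a wrapper like this (a
sketch, names mine):

    -- If arity subtyping were discharged by subsumption, a run-time
    -- coercion would bridge the two arities, e.g. uncurried to curried:
    coerce :: ((Int, Int) -> Int) -> (Int -> Int -> Int)
    coerce f = \x y -> f (x, y)

    -- "No subsumption" then means the final program never contains
    -- such a wrapper: specialisation rewrites the call sites (or the
    -- definition) so the arities already match.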

>> > If the HOF is imported, the function argument can be specialised to
>> > match the required concrete type.
>>
>> This is not necessary. We can specialize at link time. Specializing
>> the argument instead would be dual arity specialization. This isn't
>> necessarily bad, but I wish you had made it clear up front that your
>> proposal relies heavily on changing the arity of both lambdas and
>> applications, since that is a major difference from Shap's proposal,
>> which only changes applications.
>
> Maybe you can; I was assuming you could not without access to the source. A
> binary library (.dll or .so) is not normally modifiable by the linker, so
> imported functions would need concrete arities.

You are right about native shared libraries. But link-time
specialization is needed for other reasons, so Shap has a plan to do
it, I believe.

>> Keean, are you making this up as you go along? I don't think you ever
>> said anything about all these cases. Anyway, this still doesn't tell
>> us what to do with abstract (we don't know where it came from) HOFs
>> and function arguments.
>
> The idea has not changed, I am just trying to get it across clearly.

>> Shap's scheme doesn't have cases like this. We know what functions
>> need to be specialized from their type.
>
> I know what functions need to be specialised from their types,

No, I don't think you do. You know what functions need to be
specialized from the combination of:
1. Their type
2. The type of actual arguments they get applied to

That's not quite as simple.
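
For instance (a hypothetical case):

    k :: Int -> Int -> Int
    k x y = x + y

    -- k's type alone does not tell us which arities to emit; the
    -- application sites do:
    n :: Int
    n = k 1 2                  -- saturated: wants an arity-2 body

    ns :: [Int]
    ns = map (k 1) [1, 2, 3]   -- partial: still needs the arity-1,
                               -- closure-returning form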

>> It's a little better, if you stick to it. But it still seems awfully
>> hand-wavy. An implicit advantage of Shap's specialization proposal is
>> that it's similar to the specialization used to handle generics, which is
>> already understood. Your proposal somehow uses "subtyping" to trigger
>> specialization, which you simply came up with yourself AFAIK. You have
>> more explaining to do than Shap, but it feels like you've done less
>> explaining.
>
> I am obviously not good at explaining this.

Do you say that because I said something unequivocally wrong about
your proposal, or are you just agreeing that I don't fully understand
it yet?

>> It's clear from what you said now that your proposal is truly
>> different from Shap's. So that answers your question. Now my question
>> is: what do you think are its advantages and disadvantages? This will
>> give people perspective about your decisions. (Or allow them to reject
>> it out of hand.)
>
> Advantages are greater efficiency (fewer function applications).

So I would attribute that to the combination of definition-driven and
application-driven specialization. Is there more to it than that?

>> As for the example, I did not break the code there across modules
>> since that's not what confuses me about Shap's proposal. But with your
>> proposal, the examples you should give are the various cases of
>> specialization which apparently hinge on where the functions come
>> from.
>
> It's simple: you specialise the definition where you can (you have access
> to the source); otherwise you cannot, and you have to use the definition
> you actually have imported (say, from 'C').

That's still ambiguous, I think. We need to be absolutely positive
that we reject all cases where specialization won't eliminate all
instances of subsumption. The difficult task you have burdened
yourself with is convincing Shap that you know all those cases.

>> Cards on the table: While I think it might be a good idea to use both
>> Shap's "definition-driven" arity specialization, and your
>> "application-driven" dual specialization, and your order on types
>> seems informally useful, I really don't think subtyping is the way to
>> do this.
>
> I am not sure you can do both. If you don't use subtyping, how do you know
> what types to admit? Remember, the type system must determine which types
> are valid without reference to anything outside the type system. Without a
> subtype relation, how do you know which types to admit? What kind of
> relation on types do you suggest?

See my earlier reply. An order on types _is_ a good way to think about
it, but it's still not subtyping.
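
To make that concrete, here is roughly the kind of relation I have in
mind, as a sketch under my own assumptions (all names mine): an
admissibility check the compiler consults to decide whether a
definition can be specialised for an application type, not a subtype
relation discharged by coercions.

    data Ty = Base String | Fn [Ty] Ty  -- Fn args result; arity = length args
      deriving (Eq, Show)

    -- Collapse one level of currying: Fn as (Fn bs r) ~> Fn (as ++ bs) r.
    uncurry1 :: Ty -> Maybe Ty
    uncurry1 (Fn as (Fn bs r)) = Just (Fn (as ++ bs) r)
    uncurry1 _                 = Nothing

    -- admits t u: a definition of abstract-arity type t can be
    -- specialised to serve an application site of concrete type u.
    admits :: Ty -> Ty -> Bool
    admits t u
      | t == u    = True
      | otherwise = maybe False (`admits` u) (uncurry1 t)

    -- e.g. admits (Fn [i] (Fn [i] i)) (Fn [i, i] i) == True
    --   where i = Base "Int"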