On 21 February 2015 at 09:45, Matt Oliveri <[email protected]> wrote:
> No, that part wasn't wrong. The HOF will call its argument with more
> arguments than it requires.

I think we are still misunderstanding each other. I think you are
trying to view everything from the point of view of types flowing from
definitions to applications (via the environment). I am suggesting the
reverse: types flow from applications to definitions.

Forget subtyping for now, and consider that we infer a concrete type at
every application site. We then gather all these concrete application
types together at the definition site, and emit a specialised version
of the definition for each distinct arity of application. As long as we
can generate a specialised version of the definition, we want
type-checking to pass. All we need to do is define a relation on types
that is sound for the above semantics: a relation that, given the
abstract-arity type of the definition, only admits types that we can
specialise the definition for.
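To make that concrete, here is a rough sketch in Haskell-style syntax
(the names, and the specialised forms in the comments, are purely
illustrative, not a worked-out design):

    -- Abstract-arity definition, as written by the programmer:
    add :: Int -> Int -> Int
    add x y = x + y

    -- Two application sites with different concrete arities:
    r1 = add 1 2            -- saturated: both arguments at once
    r2 = map (add 1) [1..9] -- partial: one argument, yields a closure

    -- Gathering the concrete types at the definition site, the
    -- compiler emits one specialisation per arity actually used:
    --   add$2 :: (Int, Int) -> Int   -- flat two-argument entry, for r1
    --   add$1 :: Int -> Int -> Int   -- curried entry, for r2
    -- and rewrites each application to call the matching version.

Type-checking should pass exactly when every application site can be
given such a specialisation.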
> > It is that the application defines the concrete type, so we
> > specialise the higher order function according to the type passed,
> > and this is always making it less curried.
>
> You mean it makes the applications in the HOF less curried? Yes, this
> is what I originally thought you intended. To specialize. But again,
> normally one doesn't use avoiding the subsumption rule of subtyping as
> a specialization mechanism. At least I've never heard of that.

I don't think I am avoiding using the subsumption rule... The relation
on types just admits the valid types; that we use specialisation
semantics to implement them doesn't affect the typing. The type system
is just a model of what we want to accept.

> > If the HOF is imported the function argument can be specialised to
> > match the required concrete type.
>
> This is not necessary. We can specialize at link time. Specializing
> the argument instead would be dual arity specialization. This isn't
> necessarily bad, but I wish you had made it clear up front that your
> proposal relies heavily on changing the arity of both lambdas and
> applications, since that is a major difference from Shap's proposal,
> which only changes applications.

Maybe you can; I was assuming you could not without access to the
source. A binary library (.dll or .so) is not normally modifiable by
the linker, so imported functions would need concrete arities.
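To show what specialising the argument would look like in that case,
here is a sketch (foldC is a hypothetical stand-in for a fixed-arity
HOF imported from a binary; the wrapper is the point):

    -- Stand-in for an imported HOF whose arity is fixed in the
    -- binary; we cannot regenerate its body, so its callback type
    -- is concrete.
    foldC :: ((Int, Int) -> Int) -> Int -> [Int] -> Int
    foldC f = foldl (curry f)  -- pretend this body lives in a .so

    -- Our source-level function is curried (abstract arity):
    addC :: Int -> Int -> Int
    addC x y = x + y

    -- Since foldC cannot be respecialised, we specialise the
    -- argument instead: wrap addC at the concrete arity the import
    -- expects.
    total :: Int
    total = foldC (\(x, y) -> addC x y) 0 [1, 2, 3]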
> Keean, are you making this up as you go along? I don't think you ever
> said anything about all these cases. Anyway, this still doesn't tell
> us what to do with abstract (we don't know where it came from) HOFs
> and function arguments.

The idea has not changed; I am just trying to get it across clearly.

> Shap's scheme doesn't have cases like this. We know what functions
> need to be specialized from their type.

I know what functions need to be specialised from their types, but you
cannot do that for an imported function if you don't have access to the
source.

> It's a little better, if you stick to it. But it still seems awfully
> hand-wavy. An implicit advantage of Shap's specialization proposal is
> that it's similar to the specialization to handle generics, which is
> already understood. Your proposal somehow uses "subtyping" to trigger
> specialization, which you simply came up with yourself AFAIK. You have
> more explaining to do than Shap, but it feels like you've done less
> explaining.

I am obviously not good at explaining this.

> It's clear from what you said now that your proposal is truly
> different from Shap's. So that answers your question. Now my question
> is: What do you think are its advantages and disadvantages. This will
> give people perspective about your decisions. (Or allow them to
> reject it out of hand.)

Advantages are greater efficiency (fewer function applications).

> As far as the example, I did not break the code there across modules
> since that's not what confuses me about Shap's proposal. But with your
> proposal, the examples you should give are the various cases of
> specialization which apparently hinge on where the functions come
> from.

It's simple: you specialise the definition where you can (you have
access to the source); otherwise you cannot, and you have to use the
definition you actually have imported (say from 'C').

> Cards on the table: While I think it might be a good idea to use both
> Shap's "definition-driven" arity specialization, and your
> "application-driven" dual specialization, and your order on types
> seems informally useful, I really don't think subtyping is the way to
> do this.

I am not sure you can do both. If you don't use subtyping, how do you
know what types to admit? Remember the type system must determine which
types are valid without any reference to anything outside the type
system. Without a subtype relation, how do you know which types to
admit? What kind of relation on types do you suggest?
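For reference, the kind of relation I have in mind looks roughly like
this (a crude sketch over a toy type language; a real rule would also
have to compare function-typed arguments, and contravariantly):

    -- A toy type language: base types, and functions of known arity.
    data Ty = TInt | TBool | Fun [Ty] Ty  -- Fun arguments result
      deriving Eq

    -- Flatten nested function types to one argument spine, so that
    -- Int -> Int -> Int and (Int, Int) -> Int share the spine
    -- ([TInt, TInt], TInt).
    spine :: Ty -> ([Ty], Ty)
    spine (Fun as r) = let (as', r') = spine r in (as ++ as', r')
    spine t          = ([], t)

    -- admits abstractT concreteT: the concrete application type is
    -- acceptable iff it is just a regrouping of the abstract
    -- definition type, i.e. we could emit a specialisation of the
    -- definition at that arity.
    admits :: Ty -> Ty -> Bool
    admits abstractT concreteT = spine abstractT == spine concreteT

So admits (Fun [TInt] (Fun [TInt] TInt)) (Fun [TInt, TInt] TInt) is
True: the curried definition can be specialised to the two-argument
application. As written this is an equivalence rather than an order;
directing it appropriately is part of what would need working out.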
Keean

_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
