On Thu, Jul 10, 2014 at 2:06 PM, Matt Oliveri <[email protected]> wrote:
> Even inference failures don't have to be intuitive. When Coq's
> inference doesn't work, I often don't know or care why, but adding
> more annotations within the top level definition is always the
> solution.

But I think you are making my point. You seem to be saying that when
Coq's inference fails, you don't really have much insight into why, and
you are reduced to what sounds like a strategy of "make random
modifications until something works". This is bad.

> I guess what I don't understand is why people would need to be so
> clever just because the type inferencer doesn't always work. You just
> fiddle with annotations 'till it works.

I'm an engineer. I find non-principled solutions deeply dissatisfying.
My biases aside, the approach you are proposing limits reuse.

> > I disagree with you very strongly about the need for principal
> > types. Every principal typing you can't obtain is a place where
> > genericity fails.
>
> Was I right that by complete types you meant principal types?

No. Complete type inference means that the inference engine infers the
best (most general) type for every function and expression in the
program. See the first introduction paragraph of Axel Simon's paper on
complete type inference for HM:

http://www2.in.tum.de/bib/files/simon14deriving.pdf

Principal types are complete, but they require bottom-up inference. So
principal types are *one* way to arrive at complete typings, but not
the only way.

> What do you mean by genericity failing? Does this happen in dependent
> type systems, where principal types often don't exist?

If you don't get the most general type, then there will be generic
structures that you are unable to instantiate over certain types that,
if you *had* the most general type, would instantiate just fine. So
yes, this happens in dependent type systems.

shap
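P.S. A minimal sketch of the "add annotations inside the definition
until inference succeeds" workflow under discussion, written in Haskell
rather than Coq for brevity (the `normalize` example is purely
illustrative, not taken from any real development):

```haskell
module Annotate where

-- Inference alone cannot pick the intermediate type here: `read`
-- produces some `Read a => a`, `show` consumes some `Show a => a`,
-- and nothing determines `a`, so the definition is rejected as
-- ambiguous:
--
--   normalize s = show (read s)    -- ambiguous type variable
--
-- An annotation inside the top-level definition resolves it:
normalize :: String -> String
normalize s = show (read s :: Int)
```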
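And to make "genericity fails" concrete, here is a small Haskell sketch
rather than a dependently typed one, with GHC's monomorphism
restriction standing in for any situation where a binding does not
receive its most general type (the names are illustrative):

```haskell
{-# LANGUAGE MonomorphismRestriction #-}
module Genericity where

-- Under the monomorphism restriction, this unannotated binding is not
-- generalized. Once it is used at Integer below (or resolved by
-- defaulting), its type is fixed at Integer -> Integer rather than the
-- principal type Num a => a -> a.
double = \x -> x + x

useAtInteger :: Integer
useAtInteger = double 3

-- This instantiation is rejected: `double` is already pinned to
-- Integer, even though its most general type would have allowed it.
--
--   useAtDouble :: Double
--   useAtDouble = double 3.5

-- With the most general type restored, both uses are fine:
double' :: Num a => a -> a
double' x = x + x

useAtDouble' :: Double
useAtDouble' = double' 3.5
```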
