On Sun, Jul 20, 2014 at 11:43 PM, Matt Oliveri <[email protected]> wrote:


> So what you're saying is that given complete typings, the complete
> types are unique?


This is axiomatically true. Suppose the complete types were *not* unique.
Then there must be two distinct types, each maximally inclusive (because each
is complete), and neither including the other. But if neither includes the
other, neither can be maximally inclusive.

You *can* arrive at situations where you get two complete types. In effect,
you have two [possibly local] maxima in the type search space. But HM-style
typing requires that you get *one* type for each expression.

Matters are a little more complicated when qualified types are introduced,
because the "exactly one type" rule gets relaxed.


> This is not to say types are unique, just that all
> the other types an expression has are not complete.


I think you're looking at this incorrectly. If you have a situation where
there is a choice between a more inclusive and a less inclusive type, and
the type rules are otherwise satisfied, then you necessarily have a
subtyping relationship (of sorts) and it is not possible to arrive at a
unique typing. The only question at that point is whether you want the
maximally inclusive type.

SML originally took some care to make sure that there wasn't any subtyping.
The only case that feels a little like subtyping is unresolved type
variables, but those necessarily resolve at application time. Later versions
of SML introduce row subtyping, and there you can get into the kind of
question I'm talking about above.


> >> This sounds similar to the point I made earlier about choosing a type
> >> being like choosing a specification, and that it generally needs to be
> >> a human act of design. Or am I misunderstanding?
> >
> > I don't think you are misunderstanding, but I think you are raising a
> > false dichotomy. In a language with inference one remains free to
> > declare and thereby proscribe.
>
> I didn't mean to say a language with complete inference shouldn't
> allow type annotations. But what I was implying is that if type
> annotations are needed in practice, I don't understand why it's so
> important for the inference to have any theoretically-good properties.
>

This is a question that should probably be raised on LtU.



> >> > I mean that they are "black art" in the eyes of a systems programmer
> >> > who probably can't name a prover, never mind drive one.
> >>
> >> I wouldn't have used that term.
> >
> > Fair enough. And I agree that it's an abuse of the term. Can you
> > suggest a better term?
>
> Well here's an opportunity to clarify: Do you mean proof goals the
> user could end up dealing with, or proof goals the compiler has to
> somehow handle automatically? I propose "proof handling" and "theorem
> proving", respectively.
>

Our target user base will reject outright any language that requires manual
proof discharge. We can sneak some things in via type-level computation,
but those have to discharge automatically in all but the weirdest corner
cases.
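
To give a sense of what "sneak some things in via type-level computation"
might look like, here is a sketch in Haskell (not BitC; the length-indexed
vector is the standard textbook example, not anything from our design) where
the compiler discharges the length arithmetic on its own:

  {-# LANGUAGE DataKinds, GADTs, TypeFamilies #-}

  data Nat = Z | S Nat

  -- Type-level addition, computed by the type checker.
  type family Add (m :: Nat) (n :: Nat) :: Nat where
    Add 'Z     n = n
    Add ('S m) n = 'S (Add m n)

  -- Vectors indexed by their length.
  data Vec (n :: Nat) a where
    Nil  :: Vec 'Z a
    Cons :: a -> Vec n a -> Vec ('S n) a

  -- The length claim in the result type is checked entirely by the
  -- compiler; the programmer never writes a proof.
  append :: Vec m a -> Vec n a -> Vec (Add m n) a
  append Nil         ys = ys
  append (Cons x xs) ys = Cons x (append xs ys)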



> At any rate, Curry-style
> types are basically just extra tacked-on information about an existing
> thing.
>

Until you get to qualified types, I think I agree. Though of course the
ground functions of the runtime need to be declared in order for the rest
of the inference process to be feasible.
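
By "the ground functions need to be declared" I mean something like the
initial environment that inference starts from. A trivial, hypothetical
sketch (again Haskell; the names and types are made up, not BitC's actual
primitives):

  -- Hypothetical initial environment: inference can only get going
  -- because the primitive operations arrive with declared types.
  type TypeEnv = [(String, String)]   -- name |-> type scheme, as text for brevity

  groundEnv :: TypeEnv
  groundEnv =
    [ ("add32", "Word32 -> Word32 -> Word32")
    , ("cons",  "forall a. a -> List a -> List a")
    , ("load",  "Ptr -> Word32")
    ]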


> I'm sorry, but it's not my fault "complete" means many things.
>

I agree. But in a conversation about type systems you don't drag another
definition in to confuse the issue. Or if you must, you annotate it.


> For
> deductive systems, "complete" means that a statement is provable
> whenever it's true in all models.


Which (I think) is isomorphic to the notion of complete types. If
inference is viewed as a process of deduction, complete types are those
that satisfy all of the possible deductive models.
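
In symbols (my gloss of the parallel, written as LaTeX for clarity):

  % Deductive completeness: whatever holds in every model is provable.
  \Gamma \models \varphi \;\Longrightarrow\; \Gamma \vdash \varphi

  % The typing analogue being drawn: if e enjoys the property \tau in
  % every model of the language, then a typing derivation for e exists.
  \models e : \tau \;\Longrightarrow\; \vdash e : \tau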


> This is the closest idea I know to
> what I'm getting at: that a program has a type whenever* its compiled
> code has the required properties for any language implementation.
>

MOST DEFINITELY NOT. We need to be very careful to distinguish between the
language *definition* and the artifacts of some particular implementation.
Critical systems should *never* be written to a particular implementation
of a language.


> Basically I took the special case of Curry-style types as a deductive
> system about programs. Soundness and completeness of a type system are
> different from soundness and completeness of type inference.
>

Well, no, they really aren't, because the inference process and the type
system are co-specified as a set of type judgments that cover all of the
operations and forms in the language. The only way inference can fail to
produce a sound and complete typing is if there is some aspect of inference
that is not syntactically directed. You can actually go a bit further than
that. For example, '.' in BitC is partially type-directed, but the inference
of the left-hand side never depends on the right-hand side, so this devolves
into an ordering constraint. Some other forms of type-directed inference
would not be OK in that way.
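
To illustrate what I mean by "syntactically directed": in a toy HM-style
inferencer, every syntactic form has exactly one inference clause, so the
algorithm and the judgments line up one-for-one. A self-contained sketch in
Haskell (nothing BitC-specific; unification is simplified, no occurs check,
no let-polymorphism):

  import qualified Data.Map as M
  import Control.Monad.State

  data Expr = Var String | Lam String Expr | App Expr Expr | Lit Int
  data Type = TInt | TVar Int | TArr Type Type deriving (Eq, Show)

  type Subst = M.Map Int Type
  type Env   = M.Map String Type
  type Infer = State Int                 -- supply of fresh type variables

  fresh :: Infer Type
  fresh = do { n <- get; put (n + 1); return (TVar n) }

  apply :: Subst -> Type -> Type
  apply s t@(TVar v) = M.findWithDefault t v s
  apply s (TArr a b) = TArr (apply s a) (apply s b)
  apply _ TInt       = TInt

  compose :: Subst -> Subst -> Subst
  compose s2 s1 = M.map (apply s2) s1 `M.union` s2

  -- First-order unification (no occurs check, for brevity).
  unify :: Type -> Type -> Subst
  unify (TVar v) t = M.singleton v t
  unify t (TVar v) = M.singleton v t
  unify TInt TInt  = M.empty
  unify (TArr a b) (TArr c d) =
    let s1 = unify a c
        s2 = unify (apply s1 b) (apply s1 d)
    in compose s2 s1
  unify a b = error ("cannot unify " ++ show a ++ " with " ++ show b)

  -- One clause per syntactic form: this is what syntax direction buys.
  infer :: Env -> Expr -> Infer (Subst, Type)
  infer _   (Lit _) = return (M.empty, TInt)
  infer env (Var x) = case M.lookup x env of
    Just t  -> return (M.empty, t)
    Nothing -> error ("unbound variable: " ++ x)
  infer env (Lam x e) = do
    a      <- fresh
    (s, t) <- infer (M.insert x a env) e
    return (s, TArr (apply s a) t)
  infer env (App f e) = do
    (s1, tf) <- infer env f
    (s2, te) <- infer (M.map (apply s1) env) e
    r        <- fresh
    let s3 = unify (apply s2 tf) (TArr te r)
    return (s3 `compose` s2 `compose` s1, apply s3 r)

  -- ghci> evalState (infer M.empty (Lam "x" (Var "x"))) 0
  --   (fromList [],TArr (TVar 0) (TVar 0))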


shap
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
