On Mon, May 4, 2015 at 3:24 AM, Keean Schupke <ke...@fry-it.com> wrote:
> On 4 May 2015 at 01:38, Matt Oliveri <atma...@gmail.com> wrote:
>> You _can_ implement an HM type checker that way, if it's really just
>> HM. (E.g., no modules.) Still, no-one does that.
>
> I think you misunderstand: you still type check abstractly; the grounding
> property is still there once you have type checked the whole program.

I see what you mean now.

>> Try the following code in OCaml, or analogous code in whatever
>> language w/ generic types you have on hand:
>>
>> (fun (f:'a->'a->'a)->f) (fun x->x)
>> Error: This expression [x] has type 'a but an expression was expected of
>> type
>>          'a -> 'a
>>        The type variable 'a occurs inside 'a -> 'a
>>
>> Now how could it know that expression is nonsense (without
>> equirecursive types at least) without knowing what 'a is? Only one
>> explanation: it type checked it.
>
> The function is type checked, but not its use. If a function is never
> used, it may as well not exist.
>
> In the type system this is simply an uninstantiated type. I can have as many
> polymorphic types in the polymorphic environment as I like, but if they are
> never instantiated they are not part of this program, and play no part
> in the typing of the program.

Right.
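For concreteness, here's a minimal first-order unifier with an occurs check, enough to reproduce the "'a occurs inside 'a -> 'a" failure from the OCaml example above. This is a sketch of the standard technique, not any particular checker's implementation; the type representation and names are mine.

```ocaml
(* Minimal first-order unification with an occurs check. *)
type ty = TVar of string | Arrow of ty * ty

exception Unify_error of string

(* Does type variable v occur anywhere in t? *)
let rec occurs v = function
  | TVar u -> u = v
  | Arrow (a, b) -> occurs v a || occurs v b

(* Chase bindings in the substitution s (an assoc list) through t. *)
let rec apply s = function
  | TVar v ->
      (match List.assoc_opt v s with
       | Some t -> apply s t
       | None -> TVar v)
  | Arrow (a, b) -> Arrow (apply s a, apply s b)

(* Unify t1 and t2 under s, extending s or failing. *)
let rec unify s t1 t2 =
  match apply s t1, apply s t2 with
  | TVar v, t | t, TVar v ->
      if t = TVar v then s
      else if occurs v t then
        raise (Unify_error (v ^ " occurs inside its own expansion"))
      else (v, t) :: s
  | Arrow (a1, b1), Arrow (a2, b2) ->
      unify (unify s a1 a2) b1 b2
```

Unifying `'a` with `'a -> 'a` trips the occurs check exactly as the compiler's error message reports; without that check (or with equirecursive types) the unifier would build an infinite type.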

>> > I think you are not fully thinking through the process of type checking.
>>
>> Either we are having a hilarious misunderstanding, or one of us has
>> learned type checkers all wrong. :)
>
> I think you are not thinking about the type derivation of the program, but
> somehow think the types of unused functions are relevant to the type
> of a program.

No, it's much simpler than that. I said program, but I didn't mean
whole program, because I'm thinking modularly. :P :)

>> > Lets say I have some functions:
>> >
>> > f :: forall a b . a -> b -> b
>> > g :: forall a b .  a -> b -> a
>> >
>> > Now I can type check these polymorphic function definitions on their own
>> > with the type variables no problem,
>>
>> You have not defined any functions...
>
> Actually I have: each of those functions has only one possible definition. I
> don't need to specify the value level because it is obvious:
>
> f x y = y
> g x y = x
>
> There are no other possibilities.

Ooh, a smart ass. But there _are_ other possibilities: the ones
involving non-termination.
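In OCaml, for instance (a sketch; the names are mine):

```ocaml
(* The "obvious" total definitions: *)
let f _x y = y   (* forall a b. a -> b -> b *)
let g x _y = x   (* forall a b. a -> b -> a *)

(* But a non-terminating definition inhabits both of those types too --
   in fact it gets the even more general 'a -> 'b -> 'c. *)
let rec bottom x y = bottom x y
```

So parametricity pins the behaviour down only up to divergence; the type alone doesn't force `f x y = y`.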

>> > When I get to the main program:
>> >
>> > main = f g 1 2 3
>> >
>> > No type variables are allowed to be unknown, because they do not exist
>> > outside of main.
>>
>> What do you mean?
>
> We know the types of all the variables, i.e. the type of main does not contain
> any variables. The two are connected: unless we introduce a new type
> variable existentially, there can be no type introductions left in the type
> of main. The type system is an algebra, and the left and right hand sides
> must be equivalent. Quantifiers cannot just appear.

Nope, didn't help. Never mind.

>> > There is no way for main to pass a value of unknown type to
>> > any function,
>>
>> Well you're mixing up runtime and compile time in that sentence. There
>> _is_ a way to use an expression of unknown type as an argument. It'll
>> just screw up if you run it:
>>
>> main = f (hd []) 3
>
> So you are relying on the fact that we know at compile time that we never
> use the list. If we use the list we need to know its type. This is fine as
> there are no unknown types in the program. Yes, there can be unknown types in
> dead code, which a strict type checker should reject, but I don't really
> mind if it doesn't.
>
> The compiler would just dead-code eliminate the expression, hence there
> would still be no ungrounded type variables in the program.

I didn't intend to rely on compile-time knowledge. I was just pointing
out that the compiler accepts code that involves an unconstrained type
variable. I wasn't saying that you need a value of unknown type at
runtime. This is obviously impossible in a sound type system.

But you have a good point: Expressions with an impossible type are
necessarily dead code.
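In OCaml, which is strict, the analogous program type-checks but fails at run time (again a sketch; the names are mine):

```ocaml
(* f discards its first argument, so its type is 'a -> 'b -> 'b,
   and the 'a in "f (List.hd []) 3" is never constrained. *)
let f _ y = y

(* This compiles: the checker accepts the unconstrained 'a.
   Running it raises Failure "hd", because OCaml evaluates the
   dead argument anyway; in a lazy language it would return 3. *)
let main () = f (List.hd []) 3
```

That's the point in miniature: the compiler accepts an expression whose type variable is never grounded, and soundness is preserved because no value of that type is ever actually produced.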

This confusion between static dependency and dynamic dependency seems
to be the root of your misunderstanding of dependent types: it's only
about static dependency. For a type to dynamically depend on a value
would require that the value be known at compile time, or the type to
be reified at runtime. Neither of these is the point of dependent
types, since the first is staging, and the second is dynamic typing.

Dependent types are about statically tracking properties of dynamic
values, using types. If that sounds impossible, just consider it as
writing proofs about what your program would do. It's just that those
proofs are mixed into your program in a stylish way.

Of course that's a gimmicky, programming-centric way of saying it.
Dependent types are really more profound than that, but that's largely
the payoff for programming languages.
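OCaml doesn't have dependent types, but GADTs can approximate the flavor of "statically tracking properties of dynamic values": here the length of a vector is tracked in its type, so taking the head of an empty vector is a compile-time error rather than a runtime one. This encoding is mine, just to illustrate the idea.

```ocaml
(* Type-level naturals (abstract; used only as indices). *)
type z
type 'n s

(* A list whose length is reflected in its type. *)
type (_, _) vec =
  | Nil : ('a, z) vec
  | Cons : 'a * ('a, 'n) vec -> ('a, 'n s) vec

(* hd is total: the (a, z) case -- an empty vector -- is statically
   impossible, so the match needs no Nil branch and can never fail. *)
let hd : type a n. (a, n s) vec -> a = function Cons (x, _) -> x
```

`hd Nil` simply doesn't type-check, which is the "proof mixed into the program": the claim "this vector is non-empty" is checked at compile time, about a value that only exists at run time.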

>> Because you could add "existentials", and still have typeclasses over
>> types with existentials, and add instance arguments, and add dependent
>> types, and add everything but the kitchen sink, and you'd still know
>> that types "in the mode" would be known at compile time.
>
> This is not what I intend. You can still have type-classes over existentials.
> The point is that you know when to use runtime polymorphism (when there is
> an existential) and when you can resolve statically.
>
> The point is not that you cannot do these things, but that:
>
> - There is a clear rule for when runtime polymorphism is needed, that is
> exact.
>
> - The places where runtime polymorphism is to be used are clearly visible to
> the programmer. They have to actively use existentials to get the runtime
> polymorphism. You don't pay for what you don't (explicitly) use.

Ah, not bad. But you're still thinking instance arguments don't fit
that picture? And they aren't worth adding staging annotations for?
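For the record, the "pay only where you explicitly pack" rule looks like this in OCaml with a GADT-style existential (names mine):

```ocaml
(* An existential package: the value's type is hidden, and the one
   operation on it travels with it as a runtime "dictionary". *)
type showable = Pack : 'a * ('a -> string) -> showable

let show = function Pack (x, to_string) -> to_string x

(* Runtime polymorphism appears exactly where Pack appears; code
   that never packs pays nothing and stays statically resolved. *)
let items = [ Pack (42, string_of_int); Pack ("hi", fun s -> s) ]
```

The heterogeneous list is only possible because each element carries its dictionary, and the programmer had to write `Pack` to get that; monomorphic call sites elsewhere are untouched.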
_______________________________________________
bitc-dev mailing list
bitc-dev@coyotos.org
http://www.coyotos.org/mailman/listinfo/bitc-dev
