On 7 Apr 2015 14:36, "Matt Oliveri" <[email protected]> wrote:
> Well personally, I find it hard to stay on top of what your code is
> supposed to be doing, since you started it back when you were thinking
> about arity specialization in terms of subtyping, but now you seem to
> agree that we should use something like callvars. Also, it's just a
> type checker, right? Not also an evaluator. So there's no way to even
> test (let alone prove) the soundness of the type system through your
> implementation. If that's true, then implementing a type checker kind
> of seems to be jumping the gun.
>

It does type inference, rather than type checking, using ternary type-level
arrow operators plus two type-level constants, arg and call, with their own
kind. It is really a development of the earlier system, where I was using
intersections of call types and a concrete arrow encoding. We can see
retrospectively that the call-arrow and the parametric-arrow (the one with
the type-level variable in it) are the same as the intersection notation,
except (and this is important) that with higher-order functions we don't
want to break parametricity, which having arguments with intersections in
them would do. This is the point that made me move to the multi-arrow
notation for call-site-inferred types; I had always proposed a multi-arrow
notation for definition inference of arity-concrete types. The second
important thing is that all functions are curried, both the arity-concrete
and the arity-abstract ones. This simplifies the type inference and
checking implementation.
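To illustrate the payoff from currying, here is a toy sketch in Python (illustrative names only, not the actual implementation): with everything curried, every application site has exactly one argument, so inference needs only a single unification case.

```python
# Toy sketch (illustrative, not the real implementation): with all
# functions curried, inferring an application is one unification of the
# function's type against (argument -> fresh result).
from dataclasses import dataclass

@dataclass(frozen=True)
class TVar:
    name: str

@dataclass(frozen=True)
class Con:          # a concrete base type, e.g. Con("Int")
    name: str

@dataclass(frozen=True)
class Arrow:
    dom: object
    cod: object

def walk(t, subst):
    # Follow substitution links until we reach a non-variable or an
    # unbound variable.
    while isinstance(t, TVar) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, TVar):
        return {**subst, a: b}
    if isinstance(b, TVar):
        return {**subst, b: a}
    if isinstance(a, Arrow) and isinstance(b, Arrow):
        subst = unify(a.dom, b.dom, subst)
        return unify(a.cod, b.cod, subst)
    raise TypeError(f"cannot unify {a} and {b}")

def infer_app(fun_ty, arg_ty, subst, fresh):
    # The single application rule: f : a -> r and x : a gives f x : r.
    result = TVar(fresh())
    subst = unify(fun_ty, Arrow(arg_ty, result), subst)
    return walk(result, subst), subst
```

Applying the identity at type a -> a to Con("Int") then infers Con("Int"), and a multi-argument call is just nested single applications.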

The evaluator would be trivial, so I might add one if it helps, but
personally I have no problem thinking about type checking independently of
the value-level semantics. The type system is a model for how you permit
terms to behave. Generally I am defining the semantics as the untyped
lambda calculus restricted by the type system.
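The kind of trivial evaluator I mean would be something like this minimal sketch (an environment-based call-by-value interpreter for the untyped lambda calculus; the names are illustrative, not from any actual BitC code):

```python
# Minimal sketch of a "trivial" evaluator: environment-based,
# call-by-value interpretation of the untyped lambda calculus.
# All names here are illustrative, not taken from the BitC sources.
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fun: object
    arg: object

def evaluate(term, env=None):
    env = env or {}
    if isinstance(term, Var):
        return env[term.name]            # look the variable up
    if isinstance(term, Lam):
        return ("closure", term, env)    # capture the environment
    if isinstance(term, App):
        _tag, lam, cenv = evaluate(term.fun, env)
        arg = evaluate(term.arg, env)    # call-by-value: arg first,
        return evaluate(lam.body, {**cenv, lam.param: arg})
    raise ValueError(f"unknown term: {term!r}")
```

For example, evaluating (\x.x) (\y.y) returns the closure for \y.y, and the type system would sit entirely on top of this, ruling terms in or out.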

> As far as "type" vs. "type-level object", I still don't know whether
> that's a weird philosophy of yours, or due to a misunderstanding
> somewhere. But as long as you can use the same terms as everyone else,
> there's no need to discuss it further.
>

Indeed, I am happy using "type-level object".

> And for "stratifying" types, I assume you mean the
> Bool' = True | False
> Bool'' = True' | False'
> business.
> There may actually be a lesson there yet for practical programming
> languages. Namely, whether stratified vs. dependent type systems are
> actually more appropriate for programming. This question is
> complicated by the many styles of system that get called "dependent".
>
> BitC is tentatively a stratified system. For now, it seems to me that
> a dependent type system for BitC could totally be made that's
> workable, but it may not be worth it. Having something *as strong as
> if* there were a dependent type system, but with a less systems-y term
> language for type level, seems ideal. This would be kind of a
> combination of stratified and dependent, with exactly two "levels",
> where the type level is a dependent type system. Perhaps this
> combination should be called a "dependent kind system"? (You know,
> 'cause kinds can depend on type-level terms?)

There are two options here. One is something collapsed like dependent
types, but with values removed: types can depend on types but not on
values. The other, which is what I am interested in, is where type-universe
levels correspond to compilation stages. The value level is runtime;
universe level zero is compile time (including type-level template code
that outputs value-level code, like module specialisation); universe level
1 is type-level template code that outputs type-level templates; and so on.
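As a rough analogy (my illustration only, in Python, with made-up names): each level runs one stage earlier and emits code for the level below, e.g. a compile-time specialiser emitting runtime code.

```python
# Rough staging analogy (illustrative only): "universe 0" code runs at
# compile time and emits value-level (runtime) code, like module
# specialisation. Higher universes would emit templates like this one.

def specialise_sum(elem_type: str) -> str:
    # "Universe 0": a type-level template, modelled here as a code
    # generator specialised by the name of an element type.
    return (
        f"def sum_{elem_type}(xs):\n"
        f"    total = {elem_type}(0)\n"
        f"    for x in xs:\n"
        f"        total += x\n"
        f"    return total\n"
    )

# "Compile time": run the stage-0 template to produce runtime code.
namespace = {}
exec(specialise_sum("int"), namespace)

# "Runtime": the emitted value-level code executes.
print(namespace["sum_int"]([1, 2, 3]))  # prints 6
```

Universe 1 would then be a generator that outputs templates like specialise_sum, and the tower continues upwards.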

> I'd like to pick up parts of our old discussion off-list, to see if we
> can learn something about stratified vs. dependent type systems.

Sure, sounds good.

Keean.
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
