On Tue, Apr 7, 2015 at 10:26 AM, Keean Schupke <[email protected]> wrote:
> On 7 Apr 2015 14:36, "Matt Oliveri" <[email protected]> wrote:
>> Well personally, I find it hard to stay on top of what your code is
>> supposed to be doing, since you started it back when you were thinking
>> about arity specialization in terms of subtyping, but now you seem to
>> agree that we should use something like callvars. Also, it's just a
>> type checker, right? Not also an evaluator. So there's no way to even
>> test (let alone prove) the soundness of the type system through your
>> implementation. If that's true, then implementing a type checker kind
>> of seems to be jumping the gun.
>>
>
> It does type inference, rather than type checking.

My bad.

> Using ternary type-level
> arrow operators plus two type-level constants, arg and call, with their
> own kind. Really it's a development of the earlier system, where I was
> using intersections of call types and a concrete arrow encoding. We can
> see retrospectively that the call-arrow and the parametric arrow (the
> one with the type-level variable in it) are the same as the
> intersection notation, except (and this is important) that with
> higher-order functions we don't want to break parametricity, which
> arguments containing intersections would do. This is the point that
> made me move to the multi-arrow notation for call-site inferred types.

Yeah, I remember that transition. But the intersection system was
itself later than, and different from, what you originally had in mind
when you first pointed to your code, I thought. You had made it sound
like the original used subtyping.
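
Just so I'm sure I follow the current encoding, here is how I would
sketch it in Haskell terms. Mode, MArg, MCall, and Fn are my invented
names, not anything from your code; treat it as a strawman of my
reading, not your system:

    {-# LANGUAGE DataKinds, KindSignatures, EmptyDataDecls #-}

    -- The two type-level constants get a kind of their own:
    data Mode = MArg   -- argument position: arity left abstract
              | MCall  -- call site: arity pinned by the call

    -- The arrow is ternary: a mode plus domain and codomain.
    data Fn (m :: Mode) a b

    -- An arity-abstract type mentions a mode variable; a
    -- call-site-inferred type instantiates it to 'MCall, covering
    -- what the intersections used to enumerate case by case.
    type Curried2 m a b c = Fn m a (Fn m b c)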

> I had always proposed a multi-arrow notation for definition
> inference of arity-concrete types.

(Well, I don't remember it that way. I thought you liked ('a 'b 'c ->
'd) until recently.)

> The second important thing is that all
> functions are curried, both arity-concrete and arity-abstract ones.
> This simplifies the type inference and checking implementation.

Agree.
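
For what it's worth, the simplification is visible even in a toy
inferencer: with everything curried there is exactly one arrow form, so
unification needs exactly one arrow case. A minimal sketch (my own toy,
not your implementation):

    -- Toy types: one binary arrow covers functions of any arity,
    -- since an n-ary function is just n nested TArrows.
    data Ty = TVar String
            | TCon String            -- e.g. TCon "Int"
            | TArrow Ty Ty
      deriving (Eq, Show)

    -- Unification over an accumulated substitution; occurs check
    -- and substitution application omitted for brevity.
    unify :: Ty -> Ty -> [(String, Ty)] -> Maybe [(String, Ty)]
    unify (TArrow a b) (TArrow c d) s = unify a c s >>= unify b d
    unify (TCon a)     (TCon b)     s | a == b = Just s
    unify (TVar v)     t            s = Just ((v, t) : s)
    unify t            (TVar v)     s = Just ((v, t) : s)
    unify _            _            _ = Nothing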

> The evaluator would be trivial, so I might add that if it helps, but
> personally I have no problem thinking of type checking independently of
> value level semantics.

Type checking is indeed independent of value-level semantics outside
of dependent type systems. But for the type system to be sound, you
need to consider the evaluation of well-typed programs. So if you had
an evaluator, we could actually check and see if the challenge cases
all type check and run without implicit allocations.

Come to think of it, coding up an evaluator that adequately models
implicit allocations as errors might not be so trivial. The evaluator
should deal only with cfns, and the applications need to be specialized
to calls (as a prior pass) in a way that obviously doesn't introduce
allocations.
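
For concreteness, here is the kind of thing I mean, as a toy Haskell
strawman (Expr, Val, and eval are all mine, not BitC): the application
form is n-ary, a prior pass is assumed to have grouped curried
applications into these exact calls, and under-application is an error
rather than a silently allocated closure.

    -- Toy evaluator that treats implicit allocation as an error.
    data Expr = Var String
              | Lit Int
              | CFn [String] Expr     -- arity-concrete fn (a "cfn")
              | Call Expr [Expr]      -- n-ary, arity-specialized call

    data Val = VInt Int
             | VFn [String] Expr Env  -- one closure per whole cfn

    type Env = [(String, Val)]

    eval :: Env -> Expr -> Either String Val
    eval _   (Lit n)    = Right (VInt n)
    eval env (Var x)    = maybe (Left ("unbound " ++ x)) Right (lookup x env)
    eval env (CFn xs b) = Right (VFn xs b env)
    eval env (Call f as) = do
      fv  <- eval env f
      avs <- mapM (eval env) as
      case fv of
        VFn xs b cenv
          | length xs == length avs -> eval (zip xs avs ++ cenv) b
          -- Exactly where a curried semantics would quietly
          -- allocate a partial application; a sound type system
          -- must keep well-typed programs out of this case.
          | otherwise -> Left "arity mismatch: would allocate"
        VInt _ -> Left "calling a non-function"

Running the challenge cases through something like this would at least
test that the specializing pass never reaches the mismatch case.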

Anyway, whether it's easy or not, don't code up an evaluator for my
sake. I'm already pretty convinced the callvar solution is sound, but I
would want a proof in order to be fully convinced.

> The type system is a model for how you permit terms
> to behave. Generally I am defining the semantics as untyped lambda-calculus
> as restricted by the type system.

That is a good description of Curry-style or refinement type systems.
I don't think BitC's type system should be thought of as purely
Curry-style, though. For example, the whole point of arity
specialization is that functions are not really curried one-arg
closures floating around at runtime, and if we pretend they are, we get
implicit allocations.
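
To spell the arity point out in terms of the toy evaluator above (still
my strawman): a curried reading of add 1 2 is two one-argument calls,
and the inner one lands in the arity-mismatch case, which is exactly
where a real curried runtime would allocate a partial-application
closure. Arity specialization rewrites it to a single exact call:

    -- Curried reading: the inner Call under-applies "add".
    curried :: Expr
    curried = Call (Call (Var "add") [Lit 1]) [Lit 2]

    -- Specialized reading: one exact call, no intermediate closure.
    specialized :: Expr
    specialized = Call (Var "add") [Lit 1, Lit 2]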

For Church-style type systems, it's more like the type system is an
ontology for which terms are meaningful. And that's not really right
for BitC either, but it's closer, because it lets us allow only those
terms that we know how to implement adequately.

>> And for "stratifying" types, I assume you mean the
>> Bool' = True | False
>> Bool'' = True' | False'
>> business.
>> There may actually be a lesson there yet for practical programming
>> languages: namely, whether stratified or dependent type systems are
>> more appropriate for programming. This question is complicated by
>> the many styles of system that get called "dependent".
>>
>> BitC is tentatively a stratified system. For now, it seems to me that
>> a dependent type system for BitC could totally be made that's
>> workable, but it may not be worth it. Having something *as strong as
>> if* there were a dependent type system, but with a less systems-y term
>> language for type level, seems ideal. This would be kind of a
>> combination of stratified and dependent, with exactly two "levels",
>> where the type level is a dependent type system. Perhaps this
>> combination should be called a "dependent kind system"? (You know,
>> 'cause kinds can depend on type-level terms?)
>
> There are two options here: something collapsed like dependent types,
> but with values removed, so types can depend on types but not values.

If types cannot depend on values, then it isn't a dependent type
system. In dependent kinding, I'm proposing that types would depend
only on type-level objects, but that the language for type-level
objects would be about as rich as MLTT.

> Or what I am
> interested in, where type universe levels correspond to stages of
> compilation. So the value level is runtime, universe level zero is
> compile time (including type-level template code that outputs
> value-level code, like module specialisation), universe level 1 is
> type-level template code that outputs type-level templates... Etc.

I thought you might be thinking that!! Oh boy, am I gonna give you a
talking to for thinking that!

...

Well actually, it's sensible to have a type system like that. But MLTT
universes are definitely not that, and I think staging types shouldn't
be in BitC.

>> I'd like to pick up parts of our old discussion off-list, to see if we
>> can learn something about stratified vs. dependent type systems.
>
> Sure, sounds good.