On Fri, Jul 4, 2014 at 1:56 PM, Jonathan S. Shapiro <[email protected]> wrote:
> On Thu, Jul 3, 2014 at 12:42 PM, Matt Oliveri <[email protected]> wrote:
>> Dependent type systems
>> are still an active research area.
>
> Perhaps. But I don't think the grammar is where the research is.
Right, but what's the point of a surface syntax that supports
dependent types if the semantics don't fit? It sounds like you have a
particular, well-controlled case of dependent types in mind, though,
so it should work.
> And the
> truth is that "constructions over literals" pushes you pretty far in the
> direction of unifying the expression and type grammars, because literals can
> appear any place that a type name can appear. And then also expressions over
> those. So you could easily see something like
>
> List (1+2)
>
> as a **type name**, meaning the (contrived) "List of elements of type 3 :
> Nat".
Since when is 3 a type? That seems unambiguously ill-formed to me.
Unless List is also a value constructor, in which case it's
unambiguously a list of int. Or are you asking how to get the parser
to know that?
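To pin that down, here's roughly how that situation looks in Haskell
with GHC's promoted literals (List and the other names here are mine,
just for illustration):

    {-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}
    import GHC.TypeLits (Nat, type (+))
    import Data.Proxy (Proxy (..))

    -- A type indexed by a type-level Nat, so "List (1 + 2)" really
    -- is a type name containing an expression over literals.
    data List (n :: Nat) = List

    -- GHC normalizes the type-level arithmetic: 1 + 2 and 3 are the
    -- same type, so this annotation checks.
    ok :: Proxy (List (1 + 2))
    ok = Proxy :: Proxy (List 3)

Note that the parser just parses an application; deciding whether
"List (1 + 2)" is a type or a term is left to a later phase.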
> And the other thing is that you cannot entirely avoid dependence on values.
> Because if the type of fixed length arrays is:
>
> 'a['i : Nat]
>
> then the type of indices for fixed-length arrays is either "'index < 'i" or
> "'index : [0 .. 'i : Nat]". The latter would mean that all array indices
> are range types as far as the type system is concerned, which is mildly
> interesting.
I'd say both of those notations denote the same range type, assuming
indices are required to be (or are interpreted as) unsigned integers.
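In Haskell-with-GADTs terms (Vec, Fin, and friends are my names for
the sketch, nothing BitC-specific), the two readings coincide by
construction:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat   -- promoted to the kind of type-level Nats

    -- Fin n is exactly the range type of valid indices: every value
    -- of Fin n is a natural strictly less than n.
    data Fin (n :: Nat) where
      FZ :: Fin ('S n)
      FS :: Fin n -> Fin ('S n)

    -- The fixed-length array type, 'a['i : Nat] in your notation.
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a

    -- Indexing with Fin n can never go out of bounds, so no runtime
    -- check is needed here.
    index :: Vec n a -> Fin n -> a
    index (Cons x _)  FZ     = x
    index (Cons _ xs) (FS i) = index xs i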
> But in either case, at some point you need to be able to take a
> variable /ndx/ and index into an array /ar/:
>
> ar[v]
>
> and note that because /v/ is a variable, the types don't agree. There are a
> couple of ways to resolve this. One of them is to introduce a new set of
> overloads for the indexing operator that invoke the
> range check explicitly, passing the literal type as a type parameter to the
> range check procedure. This is implemented in the runtime, and can therefore
> do dispatches on type that ordinary code may or may not be able to do.
I don't understand why that check should happen at runtime. I mean, I
see that it could, for more flexibility, but isn't that undoing the
point of using types?
I thought the usual approach with Nat kinds is that /v/ is known to
have the singleton type {n} for some type-level Nat n, and hopefully
you have enough constraints on n in scope to automatically prove that
n is in range. If not, you fall back to a runtime bounds check.
Oh, maybe that's what you're saying: that the way languages expose
that is to provide a range-check primitive that yields type-level
evidence of being in range. I wouldn't call that a dispatch on type.
It's just a branch on a value with a type strong enough to maintain
the connection to the strong types.
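Concretely, the range-check primitive is just an ordinary function.
A sketch (repeating the definitions from the earlier sketch so this
stands alone):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat
    data Fin (n :: Nat) where
      FZ :: Fin ('S n)
      FS :: Fin n -> Fin ('S n)
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a

    -- The bounds check: a plain branch on a runtime Integer whose
    -- result type, Maybe (Fin n), carries the in-range evidence.
    -- No dispatch on types happens anywhere.
    toFin :: Integer -> Vec n a -> Maybe (Fin n)
    toFin 0 (Cons _ _)          = Just FZ
    toFin k (Cons _ xs) | k > 0 = FS <$> toFin (k - 1) xs
    toFin _ _                   = Nothing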
> The point is that even within this (seemingly) limited "computation by kind
> classes" sort of approach, there remain places at the bottom where we need
> to cross the line between types and values. Eventually these are going to
> "leak out" into the rest of the language in the form of constraints or some
> such thing.
If singleton types count as crossing the line, then yes. But that's
not the way dependent type systems cross the line.
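(By singleton types I mean something like the following; SNat is the
usual name from the Haskell singletons work, nothing BitC-specific.)

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- Exactly one value inhabits SNat n for each n, so holding an
    -- SNat n at runtime is the same as knowing the type-level n.
    -- That's the controlled sense in which values and types cross
    -- the line here.
    data SNat (n :: Nat) where
      SZ :: SNat 'Z
      SS :: SNat n -> SNat ('S n)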
Wait, are you saying you thought you were going to get away without
constraints on type-level Nats and stuff?
> I guess what I'm saying is: it doesn't feel to me like we are going to get
> away with "giving up" on this in the grammar, even if we don't do full
> dependent types. Since we can't give up, it would be nice not to shoot
> ourselves in the grammar in some way that might preclude going to dependent
> types later.
Oh, now it looks like you're saying you can't get away without
constraints, and that they will show up in the surface syntax. That
seems OK.
>> And I haven't seen anyone who has tried to combine dependent typing
>> with unboxed types yet, other than to go all the way to formal
>> verification (using propositions as types).
>
> I actually don't see that as much of an issue. Propositions as types doesn't
> do it either, since as long as the data structure is pure you can't tell the
> difference between value copy semantics and reference copy semantics.
I'm talking about impure data structures here. Supporting unboxed types
only when they're pure seems like an empty victory to me. Propositions
as types does everything, but then the problem is keeping the proof
burden tractable.
> if the type-level computation language and the term-level computation
> language are related closely enough, it seems likely that you end up with
> similar grammar issues, and maybe even a lot of the same feel.
That sounds right. So we're basically talking about how to add syntax
for Habit features to BitC?
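In GHC today, for instance, the term-level and type-level computation
languages really are mirror images, clause for clause. A sketch:

    {-# LANGUAGE DataKinds, TypeFamilies, KindSignatures #-}

    data Nat = Z | S Nat

    -- Term-level addition...
    plus :: Nat -> Nat -> Nat
    plus Z     m = m
    plus (S n) m = S (plus n m)

    -- ...and its type-level mirror. Same grammar, same feel, but
    -- two separate languages.
    type family Plus (n :: Nat) (m :: Nat) :: Nat where
      Plus 'Z     m = m
      Plus ('S n) m = 'S (Plus n m)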
>> I like Geoffrey's idea of having a strictly separate, pure copy of
>> BitC for compile time...
>
> I do too, but we're already committed to that. Because I'm interested in
> complex deep-constant initializations, I need to know the subset of the
> language that comprises pure computation. Think of this as "constexpr" done
> right (assuming we can). It's a surprisingly tricky thing to define, because
> it appears to require parametric effect types.
Wait, from my understanding of Geoffrey's idea, it would subsume
pretty much any sensible form of type-level computation short of true
dependent typing. But now you're saying you've been working out how to
support it all along. So then what's the remaining problem?
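For comparison, the closest existing thing I know of to a strictly
separate pure copy of the language for compile time is staged
splicing, e.g. Template Haskell. This is only an analogy, not a claim
about how BitC should do it:

    {-# LANGUAGE TemplateHaskell #-}
    import Language.Haskell.TH.Syntax (lift)

    -- A deep-constant initialization evaluated entirely at compile
    -- time: the list is computed during compilation and spliced in
    -- as a literal. Roughly "constexpr done right" by staging.
    squares :: [Int]
    squares = $(lift (map (^ 2) [0 .. 9 :: Int]))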
> It's a surprisingly tricky thing to define, because
> it appears to require parametric effect types.
I'm not surprised to hear that. I saw a recording of a presentation by
Greg Morrisett where he said something like: since strong typing in the
impure/imperative style requires heavy machinery like effect
polymorphism anyway, we might as well use monads in a pure language
instead and inherit all that machinery. This was in the introduction to
his Ynot extension to Coq.
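Something like this is what I take his point to be, in miniature:

    -- In a pure language, effect polymorphism is just ordinary
    -- polymorphism over the monad m: one signature covers the pure,
    -- stateful, IO, ... instantiations.
    mapM' :: Monad m => (a -> m b) -> [a] -> m [b]
    mapM' _ []       = pure []
    mapM' f (x : xs) = (:) <$> f x <*> mapM' f xs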
> I like effect types a lot, but they have a way of introducing one hell of a
> lot of textual clutter.
Have you thought about heuristic algorithms for inferring effect polymorphism?
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev