I was *really* tempted to use the subject "Current status of nat kinds", but decided the lame joke wasn't worth the poor archival behavior. :)
In a recent curry-thread message, shap said bitc will have a nat kind. I interpreted this to mean that bitc will support polyinstantiation over nats a la Habit. The impression I got after reading the Habit spec is that polyinstantiation is not enough: a significant amount of arithmetic support enabling parametric nat polymorphism would be required to get to a reasonable place. In the long run, of course, one wants full dependent types / refinement types with full proof support. That is probably not available on bitc's time scale, so the question is whether there are useful intermediates.

To structure the question better, here are a few examples of nat polymorphism, generated while wearing my numerical-analyst hat (rough Haskell sketches of each case follow the list). Polyinstantiation is ideal for the first; dependent types are probably required for the last. I'm curious which of these people imagine bitc being able to handle.

1. Fixed-size vectors: "vec n t" is a flat unboxed array of n t's (n : nat, t : type). Polyinstantiation works.

2. Dot product: take two dynamically sized arrays of the *same* size and compute the dot product. This requires parametric nat polymorphism plus enough linear relations exposed to the surface language to handle the loop safely. Compilers do this sort of math all the time (polyhedral analysis), but spec'ing it is a special challenge.

3. Packed symmetric matrices: "sym n t" where n is unknown until runtime, and we have a boxed array of n(n+1)/2 t's. This probably requires at least an SMT solver (something like Liquid Haskell). Not particularly important for systems code.

4. Power-of-two hash tables: a dynamically sized hash table of size (1<<n) where n is a runtime nat. Also probably requires an SMT solver.

5. Sparse matrices: "sparse n t" is a sparse 2D n-by-n array of t's (runtime sized), represented as a flat array of (index, value) pairs chopped into columns via a second offset array. We need 0 <= index < n (the analog of Idris's Fin) and a similar constraint on the offset array in order to elide all the bounds checks. This is the most important primitive in high-performance numerics. I'm not sure whether this requires SMT (expressing correctness of the data structure does, but eliminating bounds checks might be easier).
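Here is a minimal sketch of examples 1 and 2, using GHC's type-level naturals (DataKinds / GHC.TypeLits) as a stand-in for a nat kind. Vec, fromVector, withVec, and dot are names invented for this message, not an existing API.

  {-# LANGUAGE DataKinds, KindSignatures, RankNTypes, ScopedTypeVariables #-}

  import GHC.TypeLits (Nat, KnownNat, SomeNat (..), natVal, someNatVal)
  import Data.Proxy (Proxy (..))
  import qualified Data.Vector.Unboxed as U

  -- "vec n t": a flat unboxed array whose length is the type-level nat n.
  newtype Vec (n :: Nat) a = Vec (U.Vector a)

  -- Boundary check: build a Vec n only if the runtime length matches n.
  fromVector :: forall n a. (KnownNat n, U.Unbox a)
             => U.Vector a -> Maybe (Vec n a)
  fromVector v
    | U.length v == fromIntegral (natVal (Proxy :: Proxy n)) = Just (Vec v)
    | otherwise                                              = Nothing

  -- Dynamically sized case: pick an existential n from the runtime length.
  withVec :: forall a r. U.Unbox a
          => U.Vector a -> (forall n. KnownNat n => Vec n a -> r) -> r
  withVec v k = case someNatVal (fromIntegral (U.length v)) of
    Just (SomeNat (_ :: Proxy n)) -> k (Vec v :: Vec n a)
    Nothing                       -> error "withVec: negative length"

  -- Example 2: both arguments share the index n, so the elementwise loop
  -- cannot run off the end of either array and needs no per-element check.
  dot :: (U.Unbox a, Num a) => Vec n a -> Vec n a -> a
  dot (Vec x) (Vec y) = U.sum (U.zipWith (*) x y)

Note that "withVec xs $ \vx -> withVec ys $ \vy -> dot vx vy" does not typecheck, since the two existential lengths are distinct; one instead brings one length into scope with withVec and checks the other against it with fromVector, which is exactly the "same size" obligation of example 2.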
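For example 3, a sketch in plain Haskell with invented names (Sym, indexSym). The arithmetic fact a checker must see to drop the check inside indexSym is 0 <= j <= i < n ==> i*(i+1)/2 + j < n*(n+1)/2, which is nonlinear, i.e. SMT territory rather than simple linear arithmetic.

  import qualified Data.Vector as V

  -- "sym n t": packed symmetric matrix of runtime size n, stored in a
  -- boxed array of n*(n+1)/2 elements.
  data Sym a = Sym
    { symN    :: !Int          -- dimension n, known only at runtime
    , symData :: !(V.Vector a) -- packed storage, length n*(n+1)/2
    }

  -- Look up element (i, j); symmetry lets callers swap i and j so i >= j.
  indexSym :: Sym a -> Int -> Int -> a
  indexSym (Sym n d) i j
    | 0 <= j && j <= i && i < n = d V.! (i * (i + 1) `div` 2 + j)
    | otherwise                 = error "indexSym: index out of range"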
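Example 4 is mostly a bit-arithmetic obligation; a sketch with invented names (Table, slotFor). The fact a checker needs is 0 <= h .&. (2^n - 1) < 2^n, again beyond linear arithmetic.

  import Data.Bits (shiftL, (.&.))
  import qualified Data.Vector as V

  data Table k v = Table
    { tblLogSize :: !Int                       -- n
    , tblSlots   :: !(V.Vector (Maybe (k, v))) -- length 1 `shiftL` n
    }

  -- Map a hash to a slot: the mask keeps the result below the table size,
  -- so the subsequent lookup should not need a dynamic bounds check.
  slotFor :: Table k v -> Int -> Int
  slotFor t h = h .&. ((1 `shiftL` tblLogSize t) - 1)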
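Finally, for example 5, a sketch of the column-compressed representation and a matrix-vector product, again with invented names (Sparse, spmv). The invariants in the comments are what a Fin-style row-index type plus a refinement on the offset array would have to express for the bounds checks to go away.

  import qualified Data.Vector.Unboxed as U

  data Sparse a = Sparse
    { spN    :: !Int            -- dimension n (runtime value)
    , spOffs :: !(U.Vector Int) -- length n+1, monotone,
                                --   spOffs!0 == 0, spOffs!n == length spRows
    , spRows :: !(U.Vector Int) -- row index of each entry, each in [0, n)
    , spVals :: !(U.Vector a)   -- entry values, same length as spRows
    }

  -- y = A*x, assuming length x == n.  Every index below is justified by
  -- the invariants above; with refinement or dependent types the checks
  -- hidden inside (!) and accum would be provably redundant.
  spmv :: (U.Unbox a, Num a) => Sparse a -> U.Vector a -> U.Vector a
  spmv (Sparse n offs rows vals) x =
    U.accum (+) (U.replicate n 0)
      [ (rows U.! k, (vals U.! k) * (x U.! j))   -- rows!k in [0,n), j < n
      | j <- [0 .. n - 1]
      , k <- [offs U.! j .. offs U.! (j + 1) - 1]
      ]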
