Thanks, Geoffrey. I should probably add that from BitC's point of
view, proof checkers would be a means to an end. A need for
mathematical proofs can arise because of dependent types or
constraints, but even just regular type checking can be seen as a
special case of proof checking. So the reason BitC might want
extensible "proof" checking would be to extend the type checker with
more user-friendly support for tricky cases that arise when using
advanced type system features.
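To make the "type checking as proof checking" reading concrete, here is a toy sketch (our own illustration, not BitC code): a type checker for a tiny simply typed lambda calculus is, via Curry-Howard, a proof checker for implicational logic, since a term of type A -> B is a proof that A implies B.

```python
# Toy illustration: the type checker below doubles as a proof checker,
# because a well-typed term is a proof of its type read as a proposition.

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:   # variable reference
    name: str

@dataclass(frozen=True)
class Lam:   # lambda abstraction, annotated with the argument type
    name: str
    ty: object
    body: object

@dataclass(frozen=True)
class App:   # function application
    fn: object
    arg: object

@dataclass(frozen=True)
class Arrow:  # the implication/function type A -> B
    src: object
    dst: object

def check(env, term):
    """Infer the type of `term`; a successful run is a proof check."""
    if isinstance(term, Var):
        return env[term.name]
    if isinstance(term, Lam):
        body_ty = check({**env, term.name: term.ty}, term.body)
        return Arrow(term.ty, body_ty)
    if isinstance(term, App):
        fn_ty = check(env, term.fn)
        arg_ty = check(env, term.arg)
        if not (isinstance(fn_ty, Arrow) and fn_ty.src == arg_ty):
            raise TypeError("proof does not check")
        return fn_ty.dst

# \x:A. x  is a proof of  A -> A
proof = Lam("x", "A", Var("x"))
assert check({}, proof) == Arrow("A", "A")
```

Extending the "proof" checker then amounts to extending `check` with new typing (inference) rules, which is the sense in which extensible proof checking and an extensible type checker coincide.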

On Thu, Jul 10, 2014 at 6:16 PM, Geoffrey Irving <[email protected]> wrote:
> Unfortunately, there are several problems with using reflection in
> practice (Matt's list):
>
> 1. The checker must be fast when run inside Coq's type checker.  While
> coq can emit compilable ocaml code, it is not itself a compiled
> language.  Performance intensive checkers will be very slow.
>
> How do these fare under imagined-bitc-with-full-dependent-types?
>
> 1. Bitc should be a lot faster than Coq.  All good here.

It's not enough for BitC to be faster than Coq if checkers are still
running in some interpretive fashion in the BitC compiler. We need
some kind of staging to say "Load/compile this before applying it."
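The staging point can be illustrated with a toy sketch (Python, our own illustration): the difference between re-interpreting a checker's definition on every use and compiling it once, up front, into a directly applicable function.

```python
# Toy sketch: a little expression language for "checkers", run either by
# repeated interpretation or staged into a closure once before use.

def interpret(expr, x):
    """Walk the checker's AST on every call (slow inside a compiler)."""
    op = expr[0]
    if op == "lit":
        return expr[1]
    if op == "arg":
        return x
    if op == "add":
        return interpret(expr[1], x) + interpret(expr[2], x)
    if op == "mul":
        return interpret(expr[1], x) * interpret(expr[2], x)

def stage(expr):
    """'Load/compile this before applying it': translate the AST to a
    closure once, then apply the closure many times with no AST walk."""
    op = expr[0]
    if op == "lit":
        c = expr[1]
        return lambda x: c
    if op == "arg":
        return lambda x: x
    if op == "add":
        f, g = stage(expr[1]), stage(expr[2])
        return lambda x: f(x) + g(x)
    if op == "mul":
        f, g = stage(expr[1]), stage(expr[2])
        return lambda x: f(x) * g(x)

checker = ("add", ("mul", ("arg",), ("arg",)), ("lit", 1))  # x*x + 1
compiled = stage(checker)           # staging happens exactly once
assert interpret(checker, 3) == compiled(3) == 10
```

In a real compiler the staged form would be native code rather than a closure, but the phase distinction is the same: the checker is compiled before it is applied.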

> 3. Since it runs inside Coq's type checker, the normalization proof
> cannot be made computationally irrelevant, even with the Bove-Capretta
> method (this bit is copied straight from Matt).  If the normalization
> proof is involved, the proof checker will be even slower.
>
> 3. Matt will have to answer this one, since he has better knowledge
> than I.  Does a strong phase distinction between runtime and compile
> time help here?

The Bove-Capretta method is a design pattern used in dependently typed
systems for writing functions with complicated termination arguments
(more complicated than the built-in termination checker can verify by
itself). It separates the actual code from the termination proof, but
because of strong normalization, the type checker still needs to run
through the termination proof to make sure it doesn't rely on false
assumptions. If we settle for weak normalization, (3) is easy to
handle: we just ignore the proof once it type checks. The trade-off
with weak normalization is that reduction of open terms--including
open subterms under binders--may not terminate; closed terms still
evaluate to normal forms.
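For readers unfamiliar with the pattern, here is a minimal Bove-Capretta-style sketch in Lean (our own illustration, not from the thread): division by repeated subtraction is defined by structural recursion on a separate domain predicate rather than on its numeric argument, so the code and the termination argument live apart.

```lean
-- Sketch only. The domain predicate records why each call terminates;
-- we put it in Type (rather than Prop) so we can recurse on it to
-- compute data without running into Prop-elimination restrictions.
inductive DivDom : Nat → Nat → Type where
  | lt : ∀ {m n : Nat}, m < n → DivDom m n
  | ge : ∀ {m n : Nat}, 0 < n → n ≤ m → DivDom (m - n) n → DivDom m n

-- The function recurses structurally on the proof term, not on m,
-- so no built-in termination heuristic is needed.
def divBC (m n : Nat) : DivDom m n → Nat
  | .lt _ => 0
  | .ge _ _ h => divBC (m - n) n h + 1
```

The point in the thread is about what happens to the `DivDom` argument: under strong normalization the checker must still process it, whereas with weak normalization it can be ignored once it type checks.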

> 4. The checker must be purely functional.  Conceivably, one can
> imagine proof checkers that do I/O, such as asking the user questions
> or even pulling parts of proofs to be checked off the web (needless to
> say: such features would have to be optional).
>
> 4. Impure type checkers are perfectly fine as long as all the effects
> are deemed okay by the user.  This would require compiler flags, since
> the default would want to be no effects allowed at compile time.

(4) is actually the hardest requirement to deal with, in my opinion.
It seems to require a bigger change to the language design than the
others, and I don't recommend attempting it in the BitC compiler. It
seems odd anyway for a compiler to do I/O beyond reading source files
and writing object files (or whatever). I brought it up because it is
a design goal for my project, which aims to be more of a common
runtime for proof assistants than a compiler.
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
