On Sat, Jul 23, 2011 at 3:24 AM, Alexis Read <[email protected]> wrote:
> 1. You develop your proof interactively with the prover (Coq).
> 2. You dump out the proof and run your proof checkers on it.
> 3. If it says your proof is good, then you're done.
> 4. If it says your proof is bad, then you curse, and fix the
>    bug in the theorem prover and redo your proof.
>
> That's fine, but it is a lot of work. What I'm aiming at is a large
> amount of circularity so you can verify the prover in itself (and
> thereby be able to use a large amount of automation) via the route:
> prover specification => compiled by compiler written in prover =>
> prover proves compiler correct => compiles certified prover.

There is also the "social process" of proving code correct, via the de Bruijn criterion. The de Bruijn criterion itself is a response to Jack Dennis's skeptic argument against the value of proving software correct, since we have to be especially careful that we don't depend on buggy software to help prove or check our software. Thus, doing all of that work in Coq (or a similar toolchain) is a necessary evil, but the lesser of two evils. The other evil is buggy, informally specified software that is insecure or costs human lives.

As for bootstrapping the prover by verifying the prover in itself, that is doable. But the key to making the whole thing work is not the process, but the *methodology* as specified by the de Bruijn criterion. The other half of the criterion is that the components of the system used to prove and verify are simple, even if slow, so long as they complete. Once the simple pass is done, optimizations can be performed, but only after using a very simple proof kernel that is hand-audited by mathematicians for consistency and correctness. A good overview of this is available in a blog entry [1].

Finally, none of this is "simple", but Leroy's achievement for compiler architecture is that he figured out a way to make it potentially *scalable* to real-world software.
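To make the "small, hand-auditable proof kernel" idea concrete, here is a minimal LCF-style kernel sketched in Python. This is an illustration only, not Coq's kernel or anyone's real checker; every name in it is invented. The point is that a `Theorem` can only come out of a handful of inference rules, so auditing those few lines audits the whole trusted base:

```python
# Minimal LCF-style proof kernel (illustrative sketch, not a real prover).
# Formulas are tuples: ("var", "p") for a variable, ("imp", a, b) for a -> b.
# Theorems can only be produced by the kernel rules below.

class Theorem:
    """A sequent: assumptions |- conclusion. Constructible only via rules."""
    def __init__(self, assumptions, conclusion, _token=None):
        if _token is not _KERNEL:   # reject construction outside the kernel
            raise ValueError("theorems may only be built by kernel rules")
        self.assumptions = frozenset(assumptions)
        self.conclusion = conclusion

_KERNEL = object()   # private capability held only by the rules below

def assume(a):
    """Rule:  a |- a."""
    return Theorem({a}, a, _token=_KERNEL)

def imp_intro(a, th):
    """Rule: from G |- b, derive G - {a} |- a -> b."""
    return Theorem(th.assumptions - {a}, ("imp", a, th.conclusion),
                   _token=_KERNEL)

def imp_elim(th_ab, th_a):
    """Rule (modus ponens): from G |- a -> b and D |- a, derive G u D |- b."""
    kind, a, b = th_ab.conclusion
    assert kind == "imp" and a == th_a.conclusion, "conclusions do not match"
    return Theorem(th_ab.assumptions | th_a.assumptions, b, _token=_KERNEL)

# Derive the tautology p -> p using only the rules above.
p = ("var", "p")
identity = imp_intro(p, assume(p))
print(identity.assumptions, identity.conclusion)
```

An arbitrarily elaborate tactic layer can sit on top of such a kernel, and it can be as buggy as it likes: any `Theorem` it hands back was still forced through these rules, which is exactly the property the de Bruijn criterion asks for.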
For a review of his achievement, read Andrew Appel's summary "Foundational High-level Static Analysis" and then the paper by Leroy from the CACM [2].

> As for an OO browser take on an OS, I'm assuming you've looked at
> mobile maude (maude.cs.uiuc.edu/papers/postscript/principlesmm.ps.gz)
> - it is written with maude which means the debugger is available for
> (the fully distributed) testing. Mobile maude is based on mobile
> objects (carrying own internal state and code with them) and processes
> (located computation environments where mobile objects can reside),
> and uses async secure message passing between objects, together with a
> resource allocation mechanism. As objects can encapsulate the whole
> maude language, you get your fine-grainedness for free via the module
> structure.
>
> Personally, I think that mobile maude solves most of the issues you've
> been talking about. I'm trying to find a way to integrate maude with
> an OS in a useful way, so that these facilities are available on a
> global system scale.
>
> Cheers,
> Alexis.

Maude's module system is the best module system I know of, and much easier to work with than ML-style modules that require explicit imports of module paraphernalia.

For Mobile Maude, I am not clear on how it takes care of updating code. Such a system could probably be built on top of Mobile Maude and retain the same fine-grainedness properties you mention, but I don't think Mobile Maude is enough.

[1] http://www.advogato.org/person/raph/diary/279.html
[2] http://lambda-the-ultimate.org/node/3763#comment-55123
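As a loose illustration of the mobile-object model you describe (objects carrying their own state and code, residing in located processes, talking only by async messages), here is a toy sketch in plain Python. All names are invented and none of this captures Maude's rewriting-logic semantics; it only shows the shape of the architecture:

```python
import queue

class MobileObject:
    """An object that carries its state and its handler code with it,
    and interacts with the world only through asynchronous messages."""
    def __init__(self, name, state, handler):
        self.name = name
        self.state = state
        self.handler = handler          # the code travels with the object
        self.mailbox = queue.Queue()    # async message passing

    def step(self):
        """Process one pending message, if any."""
        try:
            msg = self.mailbox.get_nowait()
        except queue.Empty:
            return
        self.state = self.handler(self.state, msg, self.send_to)

class Process:
    """A located computation environment where mobile objects reside."""
    def __init__(self, location):
        self.location = location
        self.residents = {}

    def admit(self, obj):
        """Take an object in; wire its outgoing messages through this process."""
        self.residents[obj.name] = obj
        obj.send_to = lambda name, msg: self.residents[name].mailbox.put(msg)

    def run(self, steps=10):
        for _ in range(steps):
            for obj in list(self.residents.values()):
                obj.step()

# A counter object whose behaviour lives entirely in the handler it carries.
def counter_handler(state, msg, send):
    return state + 1 if msg == "inc" else state

home = Process("home")
counter = MobileObject("counter", 0, counter_handler)
home.admit(counter)
counter.mailbox.put("inc")
counter.mailbox.put("inc")
home.run()
print(counter.state)   # 2
```

Migration here would just mean moving the object (state plus handler) into another `Process`'s residents, which is the sense in which code travels with the object. The code-updating question I raised above is precisely what this sketch does not answer: replacing `handler` on a live object safely is the hard part.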
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
