On 27/10/19 10:01, Mario Carneiro wrote:
> Of course my programme proposes to prove correctness of "all the code",
> but as long as there is *any* unverified code on the system you have to
> be able to protect yourself from it, and that means sandboxing.

Yes, my (admittedly somewhat tongue-in-cheek) idea is that the whole
ecosystem evolves so that any code is always shipped together with a
proof of tameness, which guarantees that the code will behave nicely
when you jump into it. So you run your Tor client and download some
program from the worst darknet website, but it comes with a bundled
proof guaranteeing that it will only access allowed memory and only
perform authorized syscalls through well-defined gates (and possibly,
provided there is some way to formalize this in a trusted way, that it
will not exploit microarchitectural attacks like Spectre and company).

Then, since you trust the specification of "being nice" that the
program was proved to conform to, you happily run it in ring 0. No
worries even if the code is self-modifying: the "being nice"
specification took this into account as well (it is probably not a big
problem for the specification; if anything, the additional burden falls
on the prover).
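As a caricature, the loader's side of this bargain might look like the
toy sketch below. It is pure illustration: the policy, the certificate
format, and every name (check_certificate, load_and_run, and so on) are
invented here, and a real proof of tameness would of course be a machine-
checkable proof object, not a list of syscall names.

```python
# Toy model of proof-carrying code: the "kernel" refuses to run a
# program unless its bundled certificate checks out against a fixed
# safety policy. All names are hypothetical illustrations.

ALLOWED_SYSCALLS = {"read", "write", "exit"}   # the "being nice" policy

def check_certificate(program, certificate):
    """Re-derive the program's syscall footprint and compare it with
    the footprint claimed by the certificate; reject any mismatch or
    any syscall outside the policy."""
    derived = {insn.split()[1] for insn in program
               if insn.startswith("syscall")}
    return derived == set(certificate) and derived <= ALLOWED_SYSCALLS

def load_and_run(program, certificate):
    """Stand-in for the kernel loader: verify first, only then jump in."""
    if not check_certificate(program, certificate):
        raise PermissionError("certificate rejected: program not proved tame")
    # In the picture above, we would now happily jump into the code
    # in ring 0, with no runtime sandboxing at all.
    return "running in ring 0"

tame = ["syscall read", "syscall write", "syscall exit"]
print(load_and_run(tame, ["read", "write", "exit"]))
```

The point of the caricature is that all the enforcement happens once,
at load time, by checking the certificate; nothing constrains the
program while it runs.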

Again, I fully agree that this is enormous over-engineering: it is much
easier (as we do today) to build a processor that enforces privilege
separation. It is just a way to picture, in a funny way, what extreme
program correctness proving might one day deliver.

> I mention this briefly in the paper, but that's actually why the
> verifier is in a separate process; the "small trusted kernel" approach
> doesn't work unless the language is sufficiently "safe" and restrictive
> (and verified) that we can prove that "arbitrary" code doesn't trample
> on the part of the code about which properties of interest are proved
> (the trusted kernel). If it's in a separate process, then the unverified
> part of the code can throw whatever tantrums it likes but it won't be
> able to influence the verified checker without the OS's say-so. (The OS
> still has to be trusted in this scenario, though.)

I agree from the viewpoint of MM0. In my scenario the verification
would be done by the kernel before the program is jumped into for the
first time.

Giovanni.
-- 
Giovanni Mascellani <[email protected]>
Postdoc researcher - Université Libre de Bruxelles
