(The "Modest Proposal" of the Subject line appears at the end.)


Some assumptions I'm seeing that I don't believe:

* BitC programs will in general be fully specified, so if a program's correctness depends on a "+" operator, of whatever form, not overflowing, this dependence will be captured by proving that the program satisfies its specification.

My take: Most BitC programs, like most regular programs, will only be fully "specified" by their source text. Adding defthm to a programming language is a great thing, which I fully support, but I believe it will usually be used, if at all, only as a cross-check to state partial specifications about the program. To understand what a BitC program really means, programmers will often resort to the same tactic they use elsewhere -- use the source, Luke. BitC will move programming towards being a more formal activity, but it will remain a mostly informal human activity, and the source notation must continue to be a good user interface to this activity.


* We will generally try to prove that programs are both safe (consistency is preserved) and live (forward progress continues).


My take: For a small number of simple and/or critical programs, including pacemakers and the Coyotos kernel, programmers will care enough to write programs that they can prove are both safe and live. For most programs, most of the correctness they will actually be able to prove will be safety properties. Liveness will generally be both harder to prove and less demanded. A program which is known to be safe, but only empirically demonstrated to be apparently live, will satisfy most needs for increased confidence in correctness. In a distributed computing context, with the additional use of timeouts, this corresponds to fail-stop components, which are the best building blocks we can have in distributed systems anyway. In security, this corresponds to protection from breach but not from denial of service, which is again the best we can do in a distributed context anyway.


"I meant to do that" --Pee Wee Herman

* There are times when a C programmer says "+" where it might wrap, and where the program will still be correct if it does.

My take: For arithmetic on unsigned types, I'd guess this may be true for maybe 1% of C programs. For arithmetic on signed types, I'd guess it's true for less than 0.1% of C programs. In all other cases, when a C programmer says "+", they either have reasons to believe that it won't wrap, or they don't care that their program is buggy in those cases.


* On Pentium architectures, in the absence of proof that we're safe from overflow, the overflow check can only be done by an explicit check, which is a cost that will deter C programmers from switching to BitC.


My take: These same C programmers will be deterred by the bounds check on array indexing. If the default binding for "+" in BitC is the cheaper wrapping one, then I suggest that the default array indexing operator should obtain the element at (index % arraysize). This is memory safe and does not throw an exception. If bad wrappings of "+" would be caught by other proof obligations, then bad wrappings of array indexing should likewise be caught by other proof obligations. If BitC programmers are clever enough to use modular signed arithmetic intentionally in correct programs, then they will be clever enough to use modular array indexing correctly as well. BitC programmers seeking C levels of efficiency will choose array sizes that are powers of two, so that array indexing need only mask bits of the index.

--
Text by me above is hereby placed in the public domain

    Cheers,
    --MarkM

_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
