On Thursday, 7 August 2014 at 23:13:15 UTC, David Bregman wrote:
and use unreachable() if you know 100% it holds.

This is just another name for the same thing, it isn't any more or less dangerous.

Of course it is. unreachable() could lead to this:

f(){ @unreachable }   // f's body is marked unreachable, so the backend may emit no code for it
g(){ ... }

// possible resulting layout: f's label falls straight through into g's machine code
f:
g:
   machinecode


assume(false) just states that the postcondition has not been proved/tested/verified yet. That does not signify that no code will reach it; it just means that the returned value is uncertain. It does not mean that the code will never be executed.

It is also dangerous to mix them up since CTFE can lead to assume(false). If you want safety and consistency you need a designated notation for @unreachable.

Encouraging assume(false) sounds like an aberration to me.

If you accept my definition of assume there is no problem with it, it just means unreachable. I showed how this follows naturally from my definition.

Your "definition" is lacking. It mixes up two entirely different perspectives, the deductive mode and the imperative mode of reasoning.

You need to provide proofs if you attempt using assume(X) and assert(false).

There is no mechanism for providing proofs or checking proofs in D.

Actually, unit tests do. D just lacks the semantics to state that the input accepted by the precondition has been fully covered in unit tests.
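Roughly like this (a made-up sketch; half() and the test values are only for illustration):

uint half(uint x)
in { assert(x % 2 == 0); }       // precondition: only even inputs are accepted
out (r) { assert(r * 2 == x); }  // postcondition
body
{
    return x / 2;
}

unittest
{
    // exercises a few even inputs, but there is no way to state that the
    // precondition's whole input domain has now been covered
    foreach (x; [0u, 2u, 4u, 1000u])
        assert(half(x) * 2 == x);
}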

Well, any correct mathematical formalism of the code must preserve its semantics, so it doesn't really matter which one we talk about.

Preserve the semantics of the language, yes. For pure functions there is no difference, and that is generally what is covered by plain Hoare Logic. The code isn't modified, but you only prove calculations, not output.

Different lines (or basic blocks, I guess is the right term here) can have different reachability, so you could just give each one its own P.

"basic blocks" is an artifact of a specific intermediate representation. It can cease to exist. For instance MIPS and ARM have quite different conditionals than x86. With a MIPS instruction set a comparison is an expression returning 1 or 0 and you can get away from conditionals by multiplying successive calculations with it to avoid branch penalties. This is also a common trick for speeding up SIMD…

Hoare Logic deals with triples: precondition, statement, postcondition. (assume, code, assert)

You start with assume(false), then widen it over the code until the assert is covered for the input you care about.
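A small worked example of the widening, ignoring integer overflow:

{ false }    x = x + 1;   { x > 0 }   // holds vacuously: no input is claimed to be verified yet
{ x >= 0 }   x = x + 1;   { x > 0 }   // precondition widened once the inputs x >= 0 are covered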

P1 and P2 are propositions, so what is the content of those? The ones you use for your P1=>X and P2=>Y?

This is just an implementation detail of the compiler. Probably each basic block has its own P, which is conservatively approximated with some data flow analysis. (Disclaimer: I am not a compiler expert)

Oh, there is of course no problem with inserting a @unreachable barrier-like construct when you have proven assume(false), but it is a special case. It is not an annotation for deduction.

Anyways, I don't have much more time to keep justifying my definition of assume. If you still don't like mine, maybe you can give your definition of assume, and demonstrate how that definition is more useful in the context of D.

assume(x) is defined by the preconditions in Hoare Logic triples. This is what you should stick to for in(){} preconditions.
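Schematically, and ignoring that std.math already provides an abs (myAbs is just an illustration):

// the triple { x > int.min }  y = |x|  { y >= 0 && (y == x || y == -x) } as D contracts:
int myAbs(int x)
in  { assert(x > int.min); }                        // assume: the precondition
out (y) { assert(y >= 0 && (y == x || y == -x)); }  // assert: the postcondition
body
{
    return x < 0 ? -x : x;
}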

A backend_assume(x) is very much implementation defined and is a precondition where the postcondition is:

program(input) == executable(input), for all input except those excluded by backend_assume.

However, an assume tied to defining a function's degree of verification is not a backend_assume.
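To make the difference concrete, a hypothetical sketch (backend_assume is not an actual D construct):

int scale(int x)
{
    backend_assume(x != 0);  // hypothetical: the backend may take x != 0 for granted
    if (x == 0) return 0;    // so it is free to delete this branch entirely
    return 1000 / x;         // the executable only has to match the program for x != 0
}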

That is where D risks going seriously wrong: conflating postconditions (assert) with preconditions (assume), and conflating local guarantees (by unit tests or proofs) with global optimization and backend mechanisms (backend_assume).

