On Monday, 28 July 2014 at 15:52:23 UTC, John Colvin wrote:
>> If asserts were used as optimization constraints

> all available code is fair game as optimisation constraints. What you are asking for is a special case for `assert` such that the optimiser is blind to it.

Yes, because asserts are not supposed to have side effects. If asserts change the behaviour of the program, then they have side effects.
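
A minimal sketch of the conventional rule (the names here are illustrative): an assert whose condition mutates state makes debug and release builds observably different, which is exactly why side effects in asserts are disallowed. An optimiser that derives facts from asserts gives every assert that same build-changing property, just in the other direction.

```d
import std.stdio;

int counter;

bool bump()
{
    ++counter;      // mutation inside an assert condition
    return true;
}

void main()
{
    assert(bump()); // compiled out with -release: counter stays 0
    writeln(counter);
}
```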

You add asserts as a weak form of verification of partial correctness. The rationale is that the probability of making the same mistake twice, once in the assert and once implicitly in the executed code, is lower than the probability of making it once. Then you test the asserts with limited coverage of the potential inputs.
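
A sketch of that pattern (the function name and tests are illustrative): the invariant is stated twice, explicitly in the assert and implicitly in the binary search that silently relies on it, and the unittest then exercises only a handful of the possible inputs.

```d
import std.algorithm.sorting : isSorted;
import std.range : assumeSorted;

// The sortedness invariant appears twice: explicitly in the assert,
// implicitly in the binary search, which silently depends on it.
bool binaryContains(const int[] haystack, int needle)
{
    assert(haystack.isSorted, "binary search requires sorted input");
    return haystack.assumeSorted.contains(needle);
}

unittest
{
    // Limited coverage of the potential inputs.
    assert(binaryContains([1, 3, 5, 7], 5));
    assert(!binaryContains([1, 3, 5, 7], 4));
}
```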

That means:

1. you don't know whether the specified program is correct
2. you don't know whether the (weak) partial verification is correct
3. you don't know whether 2 contradicts 1

Then you remove the verification tests in the release build. If the optimiser nevertheless continues to assume that 2 holds, you are potentially worse off than with 1 alone: you effectively introduce contradictions, because there is a high probability of relying on "facts" that are not established by the specified program (unless you have formally proven those facts to hold; no amount of partial testing can establish them).
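
A hypothetical sketch of that failure mode in D (the optimiser behaviour shown is the one under discussion, not what current compilers do):

```d
int scaled(int x)
{
    assert(x != 0);  // stripped by -release: the verification is removed
    if (x == 0)      // defensive fallback in the code proper
        return 0;
    // An optimiser that still trusts the stripped assert (point 2) may
    // treat the fallback as dead code and delete it, turning a safe
    // return of 0 into a division by zero: the contradiction of point 3.
    return 1000 / x;
}

void main()
{
    // Point 1: the program is buggy and passes 0 after all.
    auto r = scaled(0);
}
```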

> Why should the situation be different if I use the builtin `assert` instead?

No difference, except that with a builtin you can get additional debugger support, so you don't have to recompile when following a complicated trace that triggers asserts.
