On Sunday, 3 August 2014 at 22:57:24 UTC, Ola Fosheim Grøstad wrote:
> On Sunday, 3 August 2014 at 22:18:29 UTC, John Carter wrote:
>> My view, which I think corresponds with Walter's and Bertrand
>> Meyer's, is that asserts define what correct behaviour is.
>
> No. The propositions describe what the correct behaviour ought
> to be. The asserts request that they be proved.
>
>> And the sooner you know that, preferably at compile time, the
>> better.
>
> And to do that you need a theorem prover capable of solving
> NP-hard problems. So you need a veeeery intelligent programmer
> to write provably correct code without any special tools.
>
>> Continuing past such an assert inevitably results in
>> defective, possibly catastrophic, possibly flaky behaviour.
>
> And Walter thinks it would be a great idea to make that
> catastrophic behaviour occur with a much greater probability,
> and also every time you execute your program, undetected, not
> only in the select few cases where a slight overlap in
> conditions was detected.
>
> So basically, if your program contains an assert stating that
> the program should stop working 30 years from now, it is a
> good idea to make it fail randomly right away. That's the view
> that Andrei, Don and Walter have expressed very explicitly.
> People who think this is a great idea defy reason. They only
> learn from failure.
>
> You have to realize that a deduction engine cannot tolerate a
> single contradiction in its axioms. If there is a single
> contradiction, it can basically deduce anything, possibly
> undetected.
>
> Turning asserts in programs+libraries into globally available
> axioms is insane.
John proposes a separate function, so I think you two are in
agreement on what really matters. Let's try to avoid going too
deep into tangents, unlike the other threads - that didn't work
well last time.
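Ola's point about contradictions can be made concrete with a toy
resolution prover: once the axiom set contains both P and not-P, every
goal becomes derivable (the principle of explosion), so a compiler that
ingests asserts as axioms can "justify" any transformation. A minimal
sketch in Python - the prover and all names are illustrative, not any
actual compiler machinery:

```python
from itertools import combinations

def negate(lit):
    # "~P" <-> "P"
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    # All clauses obtainable by resolving c1 against c2 on one
    # complementary literal pair.
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def entails(axioms, goal):
    # Refutation proof: do axioms + ~goal derive the empty clause?
    clauses = {frozenset(c) for c in axioms} | {frozenset({negate(goal)})}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:          # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:         # no progress: goal not derivable
            return False
        clauses |= new

# Consistent axioms: P, and P implies Q (clausal form {~P, Q}).
consistent = [{"P"}, {"~P", "Q"}]
print(entails(consistent, "Q"))      # True: Q follows
print(entails(consistent, "R"))      # False: R does not follow

# Add a single contradictory axiom (~P) and everything follows.
contradictory = consistent + [{"~P"}]
print(entails(contradictory, "R"))   # True: explosion
```

With consistent axioms the prover answers honestly; with one
contradiction it will happily "prove" the unrelated goal R - which is
the failure mode Ola is warning about when asserts from the whole
program and its libraries are pooled as global axioms.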