On Tuesday, 5 August 2014 at 18:57:40 UTC, H. S. Teoh via Digitalmars-d wrote:
> Exactly. I think part of the problem is that people have been using assert with the wrong meaning. In my mind, 'assert(x)' doesn't mean "abort if x is false in debug mode but silently ignore in release mode", as some people apparently think it means. To me, it means "at this point in the program, x is true". It's that simple.
A language construct with such a meaning is useless as a safety feature. If I first have to prove that the condition is true before I can safely use an assert, I don't need the assert anymore, because I've already proved it. If it is intended to be an optimization hint, it should be implemented as a pragma, not as a prominent feature meant to be widely used. (But I see that you have a different use case; see my comment below.)
> The optimizer only guarantees (in theory) consistent program behaviour if the program is valid to begin with. If the program is invalid, all bets are off as to what its "optimized" version does.
There is a difference between invalid and undefined: a program is invalid ("buggy") if it doesn't do what its programmer intended, while "undefined" is a matter of the language specification. The (wrong) behaviour of an invalid program need not be undefined, and often isn't in practice.

An optimizer may only transform code in a way that keeps the resulting code semantically equivalent. This means that if the original "unoptimized" program is well-defined, the optimized one will be too.
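
To make the distinction concrete, a minimal sketch (the function and its intended behaviour are invented for the example):

int average(int a, int b) {
    return a + b; // bug: the division by 2 was forgotten
}

The program is invalid, because it doesn't compute what its author meant, yet its behaviour is perfectly well-defined (it reliably returns the sum), so an optimizer must, and will, preserve it.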
> Yes, the people using assert as a kind of "check in debug mode but ignore in release mode" should really be using something else instead, since that's not what assert means. I'm honestly astounded that people would actually use assert as some kind of non-release-mode-check instead of the statement of truth that it was meant to be.
Well, when this "something else" is introduced, it will need to replace almost every existing instance of "assert", as the latter must only be used if it is proven that the condition is always true. To name just one example, it cannot be used in range `front` and `popFront` methods to assert that the range is not empty, unless there is an additional non-assert check directly before it.
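
For instance, a typical range wrapper does exactly this (a minimal sketch; the struct and field names are invented for illustration):

struct IntRange {
    int[] data;

    @property bool empty() const { return data.length == 0; }

    @property int front() {
        assert(!empty); // not provable: any caller might pass an empty range
        return data[0];
    }

    void popFront() {
        assert(!empty); // ditto
        data = data[1 .. $];
    }
}

The asserts hold only if every caller checks `empty` first; nothing in the language proves that they do, so treating them as established truths would be unsound.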
> Furthermore, I think Walter's idea to use asserts as a source of optimizer hints is a very powerful concept that may turn out to be a revolutionary feature in D. It could very well develop into the answer to my long search for a way of declaring identities in user-defined types that allow high-level optimizations by the optimizer, thus allowing user-defined types to be on par with built-in types in optimizability. Currently, the compiler is able to optimize x+x+x+x into 4*x if x is an int, for example, but it can't if x is a user-defined type (e.g. BigInt), because it can't know if opBinary was defined in a way that obeys this identity. But if we can assert that this holds for the user-defined type, e.g., BigInt, then the compiler can make use of that axiom to perform such an optimization. This would then allow code to be written in more human-readable forms, and still maintain optimal performance, even where user-defined types are involved.
This is a very powerful feature indeed, but to be used safely, the compiler needs to be able to detect invalid uses reliably at compile time. This is currently not the case:

void onlyOddNumbersPlease(int n) {
    assert(n % 2);
}

void foo() {
    onlyOddNumbersPlease(42); // shouldn't compile, but does
}
It would be great if this were possible. In the example of `front` and `popFront`, programs that call these methods on a range that could theoretically be empty wouldn't compile. This might be useful for optimization, but above that it's useful for verifying correctness.
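
Under such a scheme, the call site would have to establish the condition before use, roughly like this (a purely hypothetical sketch: both variants compile today, and `headOrZero` is an invented name; the comments describe a checker that does not exist):

import std.range;

int headOrZero(int[] r) {
    // return r.front;  // would be rejected: r may be empty here
    if (!r.empty)
        return r.front; // would be accepted: guarded by the emptiness check
    return 0;
}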
Unfortunately, this is not what has been suggested (nor, evidently, what was intended from the beginning)...