On 8/1/14, 9:25 AM, Timon Gehr wrote:
Even then, such a semantics is non-standard and almost nobody else knew.

This notion of "standard" has been occasionally mentioned in this discussion. I agree that D's assert is different from traditional C and C++ assert, and I also agree that might surprise some, but I think the meaning of D's assert is well within the intent of the larger notion.

Why break 'assert' now, now that it actually behaves as I and many
others expect (even some of those who argued for the (apparently, even
unofficially) new and opposite design)?

I don't see any breakage of "assert"; rather, this realizes more of its latent potential. Clearly the documentation could be better, and that is something we should focus on.

I do agree there's stuff that some may find unexpected, such as:

assert(x > 42);
if (x <= 42) {
  // let me also handle this conservatively
  ...
} else {
  // all good, proceed
  ...
}

The D optimizer might someday deem the entire "then" path unreachable and compile in only the "else" path. Some may find that surprising. For my money, I never write code like this, and I'd consider it incorrect in a code review. You either assert something, or you check it dynamically.
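For contrast, here is a minimal sketch of the two forms that don't conflict (hypothetical functions, not from any real codebase): either assert the condition and rely on it, or check it dynamically and handle both outcomes, never both for the same condition.

#include <assert.h>

// Form 1: the condition is a precondition; assert it and rely on it.
int scaled(int x) {
    assert(x > 42);    // a violation here is a bug in the caller
    // all good, proceed
    return x - 42;
}

// Form 2: the condition may legitimately fail; check it dynamically.
int scaled_checked(int x) {
    if (x <= 42) {
        // handle this case explicitly; no assert involved
        return 0;
    }
    // all good, proceed
    return x - 42;
}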

I don't remember having to ding anyone in a code review at Facebook for something like that in almost five years of tenure, and we use assert all over the place.

Yesterday I replaced a bunch of "assert"s in hhvm (https://github.com/facebook/hhvm) with "BSSERT", which has the following definition:

#include <assert.h>

// Release builds (NDEBUG): turn the condition into a pure optimizer hint.
// Debug builds: fall back to the regular assert.
#ifdef NDEBUG
#define BSSERT(e) do { if (e) {} else __builtin_unreachable(); } while (0)
#else
#define BSSERT(e) do { assert(e); } while (0)
#endif
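
To illustrate the intended effect, here is a minimal sketch using the BSSERT above (the function and the bound of 256 are made up for the example, not code from hhvm): with NDEBUG defined, the hint promises the asserted condition holds, so gcc's value-range propagation may fold the redundant check away; with assertions enabled, a violation aborts at the BSSERT instead.

int get_element(const int* table, int i) {
    BSSERT(i >= 0 && i < 256);
    // Redundant given the assertion above; in release builds gcc may
    // eliminate this branch entirely based on the unreachable hint.
    if (i < 0 || i >= 256)
        return -1;
    return table[i];
}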

hhvm is a >2MLOC project. I was hoping to measure a slight improvement from the hints to gcc's optimizer. Tests passed, but alas there was a 1.4% CPU time increase (which at our scale we consider a large performance regression); I'm not sure what's causing it (it does sometimes happen that certain gcc optimizations end up generating larger code that spills the I-cache more often, something gcc's optimizer is not very good at controlling).

A better, clearer language definition and better documentation are the real cure for this particular matter. Sure enough, we need to introduce any future optimizations with due care and diligence; that's a given. We also have a bunch of very important and very urgent issues to tend to, starting with finalizing the new release. Getting into this holier-than-thou contest, dinging people for using "exponential" casually, or discouraging them from expressing approval of the language leader is not only a waste of time, it's a net negative for everyone involved. It makes us look bad among ourselves and also to the larger community.


Andrei

