On Tuesday, 21 May 2013 at 01:34:29 UTC, estew wrote:
But I'm not convinced it would cost us less to have NotNull!T and Nullable!T. I feel it is cheaper to mindlessly write "if(a is null) {}" when using pointers than to worry at design time what the behaviour of a pointer should be.
As a matter of fact, most references are never null, or are assumed never to be null.
For instance, in DMD 2.062, e2ir.c line 869 assumes that irs->sclosure can't be null, when in fact it can, and that leads to an ICE (and it isn't the first one, which kind of undermines the argument that this rarely happens and is easy to fix).
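
The pattern is always the same. A hand-wavy D sketch of it (the names Closure, mangleOf and sclosure are made up here, this is not the DMD code):

class Closure { string mangled; }

// Written on the assumption that the closure is always there,
// so nothing guards the dereference.
string mangleOf(Closure sclosure)
{
    return sclosure.mangled;   // blows up the moment the assumption is wrong
}

void main()
{
    Closure c;        // class references default to null in D
    mangleOf(c);      // segfault at run time; nothing caught it at compile time
}

The compiler is perfectly happy with that, and the failure shows up far away from the place where the wrong assumption was made.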
Design time is the second most expensive developer time for us. The most expensive dev. time is changing a design that turned out to be incorrect, or is now outdated for whatever reason. Moving pointer behaviour to be a design time issue rather than "knowing it could be NULL so check it" could increase the probability of redesign bugs creeping in.
You may not know whether a reference will be nullable when you first write your code. With the current model, you start writing code as if it can't be null, and then later, when you discover you in fact need null, you can get surprise breakage anywhere.
With a Nullable, you'll get code breakage that forces you to handle the null case. This enforces correctness instead of relying on faith.
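
To make that concrete, here is a minimal sketch in D of the kind of wrapper being discussed; the wrapper itself and the names (Widget, render) are just my illustration, not the actual NotNull/Nullable proposal:

// A Nullable that refuses to hand out its payload without an explicit check.
struct Nullable(T)
{
    private T payload;
    private bool hasValue;

    this(T value) { payload = value; hasValue = true; }

    bool isNull() const { return !hasValue; }

    T get()
    {
        assert(hasValue, "tried to unwrap an empty Nullable");
        return payload;
    }
}

class Widget { void draw() {} }

void render(Nullable!Widget w)
{
    // w.draw();         // does not compile: Nullable!Widget is not a Widget
    if (!w.isNull)
        w.get().draw();  // the null case has to be dealt with right here
}

void main()
{
    render(Nullable!Widget(new Widget));  // fine
    render(Nullable!Widget.init);         // the empty case is handled, no crash
}

If a reference that used to be non-null later needs to become Nullable, every use site stops compiling until the null case is addressed, instead of quietly turning into a runtime crash somewhere else.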
