On Sat, 06 Nov 2010 23:37:09 +0300, Gary Whatmore <[email protected]> wrote:
Walter Bright Wrote:
Adam Burton wrote:
> I wouldn't consider that as the same thing. null represents the lack of a
> value, whereas 25 is the wrong value. Based on that argument the
> application should fail immediately on accessing the item with 25 (not
> many moons later) in the same manner it does nulls, but it doesn't,
> because 25 is the wrong value whereas null is a lack of value.
>
> As with the array allocation example earlier, you initialise the array to
> nulls to represent the lack of value till your application eventually
> gets values to assign to the array (which may still be wrong values). As
> shown by my alternative example, non-nulls allow you to define that a
> variable/parameter wants a value and does not work when it receives
> nothing. However, in the case of the array, because all the information
> is not there at the point of creation, it is valid for the array items to
> represent nothing till you have something to put in them.
I am having a real hard time explaining this. It is conceptually *the same
thing*, which is having an enforced subset of the values of a type.
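Walter's claim can be sketched in code. This is my own illustration in TypeScript (a language nobody in the thread is discussing, chosen only because its union types make the point compact): "no null" and "no 25" are structurally identical enforced subsets of a type's value set.

```typescript
// Runtime enforcement of the "no null" subset of T | null:
function notNull<T>(x: T | null): T {
  if (x === null) throw new Error("value required");
  return x;
}

// Runtime enforcement of the "no 25" subset of number -- same shape:
function not25(x: number): number {
  if (x === 25) throw new Error("25 is the wrong value");
  return x;
}

const name = notNull<string>("Adam"); // passes: "Adam" is in the subset
const age = not25(30);                // passes: 30 is in the subset
// notNull<string>(null);             // would throw, just as not25(25) would
```

Whether the check happens at run time (as above) or at compile time is exactly what the non-null-types debate is about; the subset being enforced is the same either way.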
I'm seeing it. The other arguments for non-null types also fall short,
because non-nulls don't solve basic problems like arrays and basic
collections in the library (which would need a custom fill policy). Has any
mainstream language adopted non-null types? No, they haven't, because the
idea is broken.
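The array objection above can be made concrete. A hedged sketch, using TypeScript with `strictNullChecks` as a stand-in (an assumption of mine, not a language anyone in the thread names): an array of a non-nullable type cannot be allocated empty and filled later, so the allocation itself must supply a fill policy.

```typescript
// Nullable elements: fine to allocate first and fill as data arrives.
const pending: (string | null)[] = new Array(3).fill(null);
pending[0] = "first value"; // remaining slots stay null for now

// Non-nullable elements: there is no null to stand for "not yet",
// so allocation must provide real values up front (here, a default).
const eager: string[] = new Array<string>(3).fill("");

// The payoff of the nullable version: the compiler forces callers
// to handle the missing case before using a slot.
function use(s: string): number { return s.length; }
const item = pending[1];
const n = item === null ? 0 : use(item);
```

This is the trade-off both sides are arguing over: non-null element types close off the "allocate now, populate later" pattern unless the library offers fill policies or lazy-initialization constructs.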
That's not because the concept is broken; it's because mainstream
languages were developed before it became clear that non-nullables are a
must. Although the concept itself is an old one, it wasn't very popular
until recently.
C# has them (in the form of int vs int?). They tried to apply the same
concept to reference types, but found no way to do so without breaking
existing code.
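The int vs int? split can be illustrated by analogy. Again a TypeScript sketch under `strictNullChecks` (my stand-in, not C# itself), where a union with null plays the role of the `?` suffix:

```typescript
// Non-nullable by declaration: the type system rejects null here.
let count: number = 0;
// count = null;  // compile error under strictNullChecks

// Opt-in nullability, analogous to C#'s int?:
let maybeCount: number | null = null;

// The compiler forces a null check before the value is used as a number.
function double(n: number): number { return 2 * n; }
const result = maybeCount === null ? -1 : double(maybeCount);
```

In C#, value types got this split because `int` was already non-nullable and `int?` (Nullable&lt;int&gt;) was purely additive; reference types had always admitted null, which is why retrofitting non-null reference types would have broken existing code.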
See also
http://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare