From the FAQ:

NaNs have the interesting property that whenever a NaN is used as an operand in a computation, the result is a NaN. Therefore, NaNs will propagate and appear in the output whenever a computation made use of one. This implies that a NaN appearing in the output is an unambiguous indication of the use of an uninitialized variable.

If 0.0 was used as the default initializer for floating point values, its effect could easily be unnoticed in the output, and so if the default initializer was unintended, the bug may go unrecognized.


So basically, it's for debugging? Is that its only reason? If so, I'm at a loss as to why the default is NaN. The priority should always be on ease of use, IMO, especially when it breaks a "standard":

    struct Foo {
      int x, y;    // ready for use.
      float z, w;  // messes things up.
      float r = 0; // almost always...
    }

I'm putting this in .learn because I'm not really suggesting a change so much as trying to learn the reasoning behind it. The cost of the break in consistency doesn't seem outweighed by any "debugging" benefit I can see; I'm not convinced there is one. Having the core numerical types always and uniformly default to zero is understandable and consistent (and what I'm used to from C#). The above could be written as:

    struct Foo {
      float z = float.nan, ...
    }

if you wanted to guarantee the values are explicitly set at construction. Which seems like a job better suited for unit tests to me anyway.

musing...
