On Tuesday, 28 November 2017 at 04:17:18 UTC, A Guy With an Opinion wrote:
On Tuesday, 28 November 2017 at 04:12:14 UTC, ketmar wrote:
A Guy With an Opinion wrote:

That is true, but I'm still unconvinced that making a person's program likely to error out is better than initializing a number to 0. Zero is such a fundamental default for so many things, and it would be consistent with the other number types.
basically, default initializers aren't meant to give a "usable value", they're meant to give a *defined* value, so we don't have UB. that is, just initialize your variables explicitly, don't rely on defaults. writing:

        int a;
        a += 42;

is still bad code, even if you know that `a` is guaranteed to be zero.

        int a = 0;
        a += 42;

is the "right" way to write it.

if you look at default values from this PoV, you'll see that NaN makes more sense than zero. if there were a NaN for ints, ints would be inited with it too. ;-)
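For illustration, a minimal D sketch of that point (variable names are just examples): a defaulted double poisons any arithmetic done on it, so the forgotten initialization shows up in the output, while a defaulted int silently carries on with 0.

        import std.math  : isNaN;
        import std.stdio : writeln;

        void main()
        {
            double d;         // default-initialized to double.nan
            int    i;         // default-initialized to 0

            d += 42;          // NaN propagates: the missing init stays visible
            i += 42;          // silently becomes 42: the missing init is hidden

            writeln(d.isNaN); // prints true
            writeln(i);       // prints 42
        }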

Eh...I still don't agree. I think C and C++ just gave that style of coding a bad rap, and the real issue there was the undefined behavior. A lot of language features aim to make things well defined with less verbose representations; once a language matures, a big portion of its newer features become less verbose shortcuts for commonly done things. I agree it's important that the default is well defined, I'm just thinking it should be a value that someone actually wants some notable fraction of the time, not something no one wants ever.

I could be persuaded, but so far I'm not drinking the Kool-Aid on that. It's not the end of the world; I was just confused when my float was NaN.

Just a little anecdote from a maintainer of a legacy project in C. My predecessors on that project had the habit of systematically initializing every auto-declared variable at the beginning of a function. The code base was started in the early '90s and written by people who were typical BASIC programmers, so functions were very often hundreds of lines long and they all started with a lot of declarations. What really surprised me over the years of reviewing that code was how often I found bugs because variables had been wrongly initialised. By initialising with 0 or NULL, the compiler's data-flow pass was essentially suppressed at the start, so it could not detect when variables were used before they had been properly populated with the values the functionality required. The thing with these kinds of bugs was that they were very subtle.

To make it short: 0 is an arbitrary value that often happens to be the right one, but when it isn't, it can be a pain to detect that it was wrong.
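A hedged D sketch of that hiding effect, with a hypothetical function and values: a blanket `= 0.0` at the top quietly covers a forgotten branch, whereas leaving the variable at its NaN default would have made the mistake surface the first time the result was used.

        import std.stdio : writeln;

        // hypothetical example: one branch forgets to set the discount
        double discountFor(int category)
        {
            double discount = 0.0;   // "safe" blanket init, as in the legacy code
            if (category == 1)
                discount = 0.10;
            else if (category == 2)
                discount = 0.25;
            // category 3 was overlooked: with the explicit 0.0 the function
            // quietly returns a plausible value and nothing flags the bug;
            // left at D's NaN default, the mistake would show up as NaN.
            return discount;
        }

        void main()
        {
            writeln(discountFor(3)); // prints 0: wrong, but it looks legitimate
        }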
