On Tuesday, 28 November 2017 at 04:17:18 UTC, A Guy With an
Opinion wrote:
On Tuesday, 28 November 2017 at 04:12:14 UTC, ketmar wrote:
A Guy With an Opinion wrote:
That is true, but I'm still unconvinced that making the
person's program more likely to produce an error is better
than initializing a number to 0. Zero is such a fundamental
default for so many things, and it would be consistent with
the other number types.
basically, default initializers aren't meant to give a "usable
value", they're meant to give a *defined* value, so we don't
have UB. that is, just initialize your variables explicitly,
don't rely on defaults. writing:
int a;
a += 42;
is still bad code, even if you know that `a` is guaranteed
to be zero.
int a = 0;
a += 42;
is the "right" way to write it.
if you look at default values from this PoV, you'll see
that NaN makes more sense than zero. if there was a NaN for
ints, ints would be inited with it too. ;-)
Eh...I still don't agree. I think C and C++ just gave that
style of coding a bad rap, but the real issue there was the
undefined behavior itself, not the style. A lot of language
features aim to make things well defined and less verbose to
express; once a language matures, that's what a big portion
of its newer features become: less verbose shortcuts for
commonly done things. I agree it's important that the value
is well defined, I'm just thinking it should be a value that
someone actually wants some notable fraction of the time,
not something no one ever wants.
I could be persuaded, but so far I'm not drinking the
Kool-Aid on that. It's not the end of the world; I was just
confused when my float was NaN.
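For what it's worth, here's a minimal D sketch of the defaults
being discussed (nothing beyond the documented .init values;
assumes any standard D compiler):

import std.stdio;

void main()
{
    int i;     // defaults to int.init == 0
    double d;  // defaults to double.init, which is NaN
    char c;    // defaults to char.init == 0xFF (not valid UTF-8)

    writeln(i);       // prints 0
    writeln(d);       // prints nan
    writeln(d == d);  // false -- NaN compares unequal even to
                      // itself, so an overlooked default tends
                      // to surface quickly
    writeln(cast(ubyte) c); // prints 255
}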
Also, C and C++ don't just have undefined behavior; sometimes
they have inconsistent behavior. Sometimes `int a;` is actually
set to 0 (for example, at file scope or with static storage),
and sometimes it's whatever happens to be on the stack.