------- Comment #12 from [EMAIL PROTECTED]  2008-11-22 13:22 -------
(In reply to comment #10)
> (In reply to comment #7)
> > In general, we want to go with a simple rule: an operation returns the
> > tightest type that won't engender overflow (which precludes making 8- and
> > 16-bit values closed sets for addition).
> >
> Really?  I disagree that 16-bit addition causes frequent overflow. I'm not
> sure in what contexts you are using it; can you give an example?

The most recent example that comes to mind is std.format. Out of perhaps too
much zeal, I store the width and precision as shorts. There are several places
in the code using them where I had to pay close attention to possible overflow.

> And 8-bit addition is very frequent for character transformations, i.e.
> converting to upper case:
> c += 'A' - 'a';
> Casting seems too strict a requirement in these situations.  I can't
> imagine that anyone has a positive experience with these warnings; most are
> just going to grumble, then insert the cast without thinking about it.

Notice that in the particular example you mention, the code does go through
because it uses +=.

> > The exception to that is int, which, for a combination of practical
> > reasons, "stays" int even if it could overflow, and also long, which
> > cannot go any larger. Anecdotal evidence suggests that unintended
> > overflows can be more annoying than having to insert the occasional cast.
> I haven't seen such anecdotal evidence.  I don't think I've ever seen an
> unintended overflow due to addition on 16- or 8-bit values in my code.

This may mean that you are a great coder and that you and I frequent different
codebases.

>  The one case where casting should be required is doing comparisons of signed
> to unsigned values, where the comparison flips what it should be.  A classic
> case that I've had with C++ is comparing the size() of a vector to some
> subtraction of integers.  But this should be flagged because you are comparing
> signed to unsigned (and most good C++ compilers flag that).
> Multiplication might be a different story.
> > 
> > We could relax this rule by having the compiler statically track the
> > possible ranges of values.
> Why not just relax it to allow reassignment to the same type?  I don't think
> that's an uncommon usage, and I don't think it would cause rampant failures.

Walter believes the same. I disagree.

> > The random ints argument does not quite hold because one seldom adds fully
> > random 32-bit values. "Most integers are small."
> Most integers are small, including 16-bit integers.  If one uses a 16-bit
> integer, they are generally doing so because they know the domain of such
> values is small.

Storage considerations may be at stake though. Most of the uses of small
integers I've seen in C and C++ come from a storage/format requirement, not a
range requirement. Indeed, what I'm saying is almost a tautology: in C and
C++ there is very little enforcement of range, which means there is very
little incentive to express small ranges with small integers.

> 8-bit integers that one performs math on are generally characters, generally
> used for transforming them.  Most of the time, the domain of such values is
> known to be less than the domain of 8-bit integers.  For example, ascii
> characters.

The compiler can't guess such legitimacy. In the character domain, we might be
able to target our effort towards improving operations on char, wchar, and
dchar.

