------- Comment #10 from [EMAIL PROTECTED] 2008-11-22 12:39 -------
(In reply to comment #7)
> In general, we want to go with the simple rule of having an operation return
> the tightest type that won't engender overflow (which precludes treating 8- and
> 16-bit values as closed sets under addition).
Really? I disagree that 16-bit addition causes frequent overflow. I'm not sure
in what contexts you are using it; can you give an example?
And 8-bit addition is very common in character transformations, e.g.
converting to upper case:
c += 'A' - 'a';
Casting seems too strict a requirement in these situations. I can't
imagine that anyone has a positive experience with these warnings; most people
will just grumble, then insert the cast without thinking about it.
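To make the complaint concrete, here is a minimal sketch (in C++ for
concreteness, since the thread's examples are C-family) of what the strict
"tightest type" rule forces on the upper-casing idiom above; the function name
is hypothetical:

```cpp
#include <cassert>

// Under the strict rule, 'A' - 'a' is an int-typed expression, so adding it
// back into a char demands an explicit narrowing cast even though the
// programmer knows the result fits in 8 bits.
char to_upper_ascii(char c) {
    if (c >= 'a' && c <= 'z')
        c = static_cast<char>(c + ('A' - 'a'));  // the cast the rule requires
    return c;
}
```

The cast carries no information the programmer didn't already have; it only
silences the diagnostic, which is exactly the "insert it without thinking"
pattern described above.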
> The exception to that is int, which
> for a combination of practical reasons, "stays" int even if it could overflow,
> and also long, which cannot go any larger. Anecdotal evidence suggests that
> unintended overflows can be more annoying than having to insert the occasional
I haven't seen such anecdotal evidence. I don't think I've ever seen an
unintended overflow from addition on 16- or 8-bit values in my code.
The one case where casting should be required is comparing signed to
unsigned values, where the implicit conversion flips the result of the
comparison. A classic case I've hit in C++ is comparing the size() of a vector
against a subtraction of signed integers. But that should be flagged because you
are comparing signed to unsigned (and most good C++ compilers do flag it).
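A short C++ sketch of that classic vector case; the function names are
hypothetical, and the explicit casts spell out the conversion a compiler would
otherwise perform implicitly:

```cpp
#include <vector>
#include <cstddef>

// When b > a, a - b is negative; comparing it against the unsigned size()
// converts it to a huge std::size_t, so the comparison comes out flipped.
bool has_room_naive(const std::vector<int>& v, int a, int b) {
    // mimics the implicit signed-to-unsigned conversion the compiler performs
    return v.size() > static_cast<std::size_t>(a - b);
}

bool has_room_correct(const std::vector<int>& v, int a, int b) {
    // keep the comparison in signed arithmetic instead
    return static_cast<int>(v.size()) > a - b;
}
```

For a 3-element vector with a = 2 and b = 5, the naive version compares 3
against a number near the top of the size_t range and answers false, while the
signed version correctly reports that 3 > -3.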
Multiplication might be a different story.
> We could relax this rule by having the compiler statically tracking possible
> ranges of values.
Why not just relax it to allow reassignment to the same type? I don't think
that's an uncommon usage, and I don't think it would cause rampant failures.
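A minimal sketch of the proposed relaxation, again in C++ for concreteness
(the function name is hypothetical): when the result of short + short is
assigned back into a short, the programmer has already committed to the narrow
type, so the cast adds nothing.

```cpp
#include <cassert>

// Under the strict rule, a + b promotes to int, so assigning it back into a
// short requires the cast below. The relaxation discussed above would accept
// the assignment without it, since source and destination types match.
short accumulate_short(short a, short b) {
    a = static_cast<short>(a + b);  // cast demanded today; redundant under the relaxation
    return a;
}
```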
> The random ints argument does not quite hold because one seldom adds fully
> random 32-bit values. "Most integers are small."
Most integers are small, including 16-bit integers. If one uses a 16-bit
integer, they are generally doing so because they know the domain of such
values is small.
8-bit integers that one does math on are generally characters, typically
being transformed. Most of the time, the domain of such values is
known to be smaller than the range of an 8-bit integer. For example, ASCII