Don wrote:
[EMAIL PROTECTED] wrote:
http://d.puremagic.com/issues/show_bug.cgi?id=1977


[EMAIL PROTECTED] changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|RESOLVED                    |REOPENED
         Resolution|INVALID                     |




------- Comment #5 from [EMAIL PROTECTED]  2008-11-22 08:44 -------
(In reply to comment #4)
It's not ridiculous at all. The compiler cannot tell what values could possibly
be passed to f, and the ranges of byte and short are small enough to make
overflow as frequent as it is confusing and undesirable.
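
For illustration, here is a hedged sketch (the function name and values are made
up) of the kind of silent wrap-around the check is meant to catch:

byte sum(byte a, byte b) {
  // a + b is computed as int; forcing it back into a byte wraps:
  // sum(100, 100) yields -56 instead of 200.
  return cast(byte)(a + b);
}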

Why is this also flagged, when no overflow is possible? With t == 1 and i in
[-128, 127], the result t - i lies in [-126, 129], which always fits in a short:

short f(byte i) {
  byte t = 1;
  short o = t - i;
  return o;
}

The community has long insisted that integral operations be tightened, and now
that it's happening, the tightening is reported as a bug :o).

But it's pretty inconsistent. If I add two random ints together, I will get an
overflow in about 25% of cases; why is that not flagged?
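
For comparison, here is a hedged sketch (made-up name) of the int case: it can
overflow at run time, yet it compiles without any diagnostic, unlike the
byte/short code above:

int add(int a, int b) {
  // add(int.max, 1) silently wraps around to int.min in practice,
  // yet no cast or diagnostic is required here.
  return a + b;
}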

I think the restriction is too tight. People expect to do math on homogeneous types without having to cast the result, as they do with ints. And I'll say I was not one of the people asking for this 'feature'. I'm not sure where that
came from.

Personally I think having to insert a cast makes the code more error-prone. The cure is worse than the disease.
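
As a hedged illustration (made-up names, using the cast the compiler currently
asks for): once the cast is in place, a later change to the operand's type keeps
compiling but starts truncating silently.

short g(byte i) {
  byte t = 1;
  return cast(short)(t - i);  // the cast the compiler requires today
}

short g2(int i) {  // suppose the parameter is later widened to int
  byte t = 1;
  // Still compiles, but now silently truncates for large i;
  // the cast hides the problem instead of flagging it.
  return cast(short)(t - i);
}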

Consider also that with the original code, the compiler could install debug-time asserts on any such narrowing conversion. Once you insert a cast, that's impossible, since the language doesn't distinguish between (a) 'I know that is OK' casts, (b) 'I want to pretend that this is a different type' casts, and (c) 'I want you to change this into another type' casts.

Compiler checks should only be inserted for cases (a) and (c).
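
To make that concrete, here is a hedged sketch of what such a check could look
like if written as library code (checkedNarrow is a hypothetical helper, not
something the compiler or the standard library provides):

T checkedNarrow(T, S)(S value) {
  // Debug-time range check that a bare cast throws away: the conversion stays
  // explicit, but cases (a) and (c) keep their safety net.
  assert(value >= T.min && value <= T.max, "narrowing conversion out of range");
  return cast(T) value;
}

short h(byte i) {
  byte t = 1;
  // Checked in debug builds; behaves like a plain cast when asserts are
  // compiled out.
  return checkedNarrow!short(t - i);
}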

Could you paste your comment into bugzilla so we have the discussion tracked there? Thanks.

Andrei
