Andrew Lentvorski wrote:
> Christopher Smith wrote:
>> 3) overflow silently. This is sometimes the exact desired behavior
>> (which is why it's spec'd to be silent for C, perhaps not a good idea
>> for the general case, but it really comes in handy when you need it).
> 
> The reason why it is spec'd silent for C is that they have no portable
> way to signal anything extraordinary.

If only C had things like function pointers/callbacks, variables like
errno that could indicate an error, or functions you could call to
verify that you didn't get a certain error since your last checkpoint
(or the last operation). Oh wait...  ;-)

Seriously, most platforms already have a hardware overflow flag; all you
need is an intrinsic that checks it. You could expose it as a macro to
avoid any significant performance overhead. Something like
"check_for_overflow()".

Actually, in C it is *unsigned* arithmetic that is specified to overflow
silently (it wraps modulo 2^N), while signed overflow is left undefined
(it just so happens that basically all C compiler implementors took the
low road and let it wrap silently too). This is because there are lots
of useful cases for wrapping unsigned arithmetic, but with signed...
let's just say that there are a whole lot fewer.

>> 4) signal the overflow, let the application decide what to do about
>> it. For example, you'll find lots of code out there which dynamically
>> grows buffers by powers of 2. When you get to 2^31 and it's time to
>> grow the buffer again, the desired behavior is probably to grow the
>> buffer to something like 2^31 + 2^30, rather than to die or switch to
>> using a larger integer (particularly if your runtime can't allocate a
>> buffer > 2^32).
> 
> I consider this part of "Die.", but okay.  Again, the problem is that C
> has no mechanism to signal this.  C++ exceptions work fine for this,
> though.

See above.

>>> I wish language designers would just finally get it through their
>>> heads that int's should have infinite precision and floating point
>>> numbers should be *decimal* floating point instead of *binary*
>>> floating point.
>>>
>>> It would save *so* much time and trouble.
>>
>>
>> Yeah, and cause so much more trouble elsewhere......
> 
> Prove it.
> 
> Systems with gracefully degrading ints just don't have
> overflow/underflow bugs.  The only "problem" is that programs
> occasionally slow down because somebody overflowed the range.

The other problem is that programmers end up with out-of-range problems
which in turn create their own bugs. Case in point: the exponentially
growing array. You end up trying to allocate an array whose size can't
be allocated. You can also have fun seeking to file offsets that can't
exist, etc., etc. So the bugs just move from one place to another. It
turns out that bounding your values is a really good thing for making
sure your code is correct.

> Unfortunately, the same experience doesn't exist for decimal floating
> point.  However, I can tell you that I have yet to meet even a good
> programmer who can write binary floating point code that doesn't have
> all manner of horrible problems.

I work with folks who do binary floating point just fine every day. ;-)
That said, it takes people a while to get the gist of floating point
math, regardless of whether it's in base 2 or base 10.

--Chris

-- 
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg