Andrew Lentvorski wrote:
Gabriel Sechan wrote:
He's wrong about the consequences of the bug: he says that in C it
produces an invalid index by underflowing. In C you'd use an
unsigned int (does Java have one?), so the index would never go
negative. It's still a bug, but it won't crash the app; it may
loop forever instead. If a variable is supposed to be a loop index, it
should always be unsigned in any language.
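The article isn't quoted here, but the canonical version of this bug is the
midpoint computation in binary search; assuming that's the bug in question,
here's a minimal Java sketch of both the failure and the usual fix:

```java
public class Midpoint {
    // Broken: low + high can exceed Integer.MAX_VALUE and wrap negative,
    // so in Java (no unsigned int) you get a negative array index.
    static int midBroken(int low, int high) {
        return (low + high) / 2;
    }

    // Fixed: for 0 <= low <= high, the difference high - low never
    // overflows, so this midpoint is always in range.
    static int midSafe(int low, int high) {
        return low + (high - low) / 2;
    }

    public static void main(String[] args) {
        int low = 1_500_000_000, high = 2_000_000_000;
        System.out.println(midBroken(low, high)); // negative: sum wrapped
        System.out.println(midSafe(low, high));   // 1750000000
    }
}
```

In C with unsigned indices the same sum would wrap rather than go negative,
which matches the point above: no crash, but still the wrong index.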
Sigh. Boil the frog gradually ...
You are treating the symptom rather than the problem.
The bug is in handling integers. There are two options for integer
overflow:
1) Die. Period. Do not silently truncate, overflow, saturate, or
whatever.
2) Gracefully degrade. Switch to a slower integer representation that
can handle the greater required precision, e.g. Lisp bignums, Python
longs, or Java BigInteger.
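Java doesn't degrade automatically, but the contrast between the two
behaviors is easy to see side by side (a sketch, not a recommendation):

```java
import java.math.BigInteger;

public class Degrade {
    public static void main(String[] args) {
        // Fixed-width int arithmetic silently wraps past 2^31 - 1.
        int wrapped = Integer.MAX_VALUE + 1;
        System.out.println(wrapped); // -2147483648

        // BigInteger degrades gracefully: slower, but always exact.
        BigInteger exact =
            BigInteger.valueOf(Integer.MAX_VALUE).add(BigInteger.ONE);
        System.out.println(exact);   // 2147483648
    }
}
```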
You forgot two other options:
3) overflow silently. This is sometimes the exact desired behavior
(which is why it's spec'd to be silent for C, perhaps not a good idea
for the general case, but it really comes in handy when you need it).
4) signal the overflow, let the application decide what to do about it.
For example, you'll find lots of code out there which dynamically grows
buffers by powers of 2. When you get to 2^31 and it's time to grow the
buffer again, the desired behavior is probably to grow the buffer to
something like 2^31 + 2^30, rather than to die or switch to a
larger integer type (particularly if your runtime can't allocate a
buffer > 2^32).
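Option 4 is roughly what Java later grew into with Math.addExact and
friends, which throw instead of wrapping. A sketch of the buffer-growth
case (nextCapacity is a hypothetical helper, and capping at
Integer.MAX_VALUE stands in for the "something like 2^31 + 2^30" policy,
since a Java int can't hold 2^31):

```java
public class Grow {
    // Signal the overflow, let the application decide: here the
    // application's decision is to cap rather than die.
    static int nextCapacity(int current) {
        try {
            return Math.multiplyExact(current, 2);  // try to double
        } catch (ArithmeticException overflow) {
            return Integer.MAX_VALUE;               // cap instead of dying
        }
    }

    public static void main(String[] args) {
        System.out.println(nextCapacity(1 << 29));  // 1073741824
        System.out.println(nextCapacity(1 << 30));  // capped: 2147483647
    }
}
```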
I wish language designers would just finally get it through their
heads that ints should have infinite precision and floating-point
numbers should be *decimal* floating point instead of *binary*
floating point.
It would save *so* much time and trouble.
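For what it's worth, the binary-vs-decimal gap being wished away here
shows up in one line of Java (BigDecimal standing in for true decimal
floating point, which Java doesn't have as a primitive):

```java
import java.math.BigDecimal;

public class DecimalDemo {
    public static void main(String[] args) {
        // Binary floating point: 0.1 and 0.2 have no exact base-2 form,
        // so the rounding error surfaces in the sum.
        System.out.println(0.1 + 0.2);  // 0.30000000000000004

        // Decimal arithmetic gives the answer people expect.
        BigDecimal sum = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(sum);        // 0.3
    }
}
```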
Yeah, and cause so much more trouble elsewhere......
--Chris
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg