On Nov 25, 2009, at 11:16 AM, John Daughtry wrote:
With respect to such problems, I spent the usual amount of time in college studying various complexities in arithmetic on computers. Yet, I have only seen problems crop up three times over 10 years of full-time programming experience.


You may have *seen* the problems only three times.
How many times they have *occurred* is another matter.

Just the other day, I heard about someone who ran into the problem
that int x = ...; int y = Math.abs(x); can leave y negative in Java.
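To make that concrete, here is a minimal Java sketch (class name made up
for illustration; the behaviour of Math.abs on Integer.MIN_VALUE is documented):

    public class AbsOverflow {
        public static void main(String[] args) {
            int x = Integer.MIN_VALUE;      // -2147483648
            int y = Math.abs(x);            // still -2147483648: +2147483648 does not fit in an int
            System.out.println(y > 0);      // prints false
        }
    }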

Check
http://www.fefe.de/intof.html
to find out how the C analogue led to a security bug.


I've certainly had loops that terminated on one machine but not
another (until I learned what -ffloat-store was all about); wasn't even
code I'd written.
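That case was about compiler-dependent extended precision; a simpler
illustration of the same class of trap (a made-up Java sketch, not the
code in question) is an exit test that assumes exact decimal arithmetic:

    public class DriftingSum {
        public static void main(String[] args) {
            double sum = 0.0;
            for (int i = 0; i < 10; i++) {
                sum += 0.1;                    // 0.1 has no exact binary representation
            }
            System.out.println(sum == 1.0);    // false: sum is 0.9999999999999999
            // so a loop written as  while (sum != 1.0) { sum += 0.1; }  would never exit
        }
    }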

For 'most' programmers, the complexities of computational arithmetic don't impact us on a daily basis.

They do every time you write C or Visual Basic or any other
programming language where YOU have to choose the size of numeric
variables.  For a long time I had to write code that would work
on both 16-bit and 32-bit machines, where you never knew whether
'int' was going to be 16-bit or 32-bit.  Now that we have <stdint.h>
things are a bit better, but it's still _our_ responsibility to
write (long long)x * y - (long long)u * v in the right places.
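In Java the widths are at least fixed, but the same responsibility applies;
a rough Java analogue of that expression (values made up for illustration):

    public class WidenFirst {
        public static void main(String[] args) {
            int x = 100_000, y = 100_000, u = 3, v = 7;
            long bad  = x * y - u * v;                // x * y wraps in 32-bit int arithmetic before widening
            long good = (long) x * y - (long) u * v;  // widen the operands first, then multiply
            System.out.println(bad);                  // 1410065387
            System.out.println(good);                 // 9999999979
        }
    }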

A _potential_ problem doesn't ever have to occur for it to be
a _difficulty_ for a programmer to ensure that it doesn't.

Nelson Beebe gave a talk "Computer Arithmetic and the MathCW
Library".  The slides are at 
http://www.math.utah.edu/~beebe/talks/2008/dk/trends-in-qc.pdf
Slide 5:

o USS Yorktown nuclear-weapons carrier dead in water in 1997 for three
hours after integer overflow in spreadsheet shut down software control
(potential cost: loss of a war)
o US Patriot missiles fail to shoot down Iraqi Scuds in 1990 Gulf War
due to timer counter integer overflow
o European Space Agency Ariane 5 missile loss off West Africa in 1996
due to arithmetic overflow in floating-point to integer conversion in
guidance system (cost about US$1B)
o Too few digits in US National Debt Clock (9-Oct-2008), US gas
pumps, and Y2K fiasco (US$600B – US$1000B)
o Intel Pentium floating-point divide flaw in 1994 (US$400M –
US$600M)
o New York and Vancouver Stock Exchange shutdowns (cost: several
tens of millions of USD)

Slide 6:
o German state election in Schleswig–Holstein reversed in 1992 because
of rounding errors in party vote percentages
o The US presidential elections in 2000 and 2004, the State of
Washington gubernatorial election in 2004, and the Mexico
presidential election in 2007, were so close that even minor errors in
counting could have changed the results
o Some US electronic voting machines in 2004 were found to have
counted backwards, subtracting votes, either because of counter
overflow and wraparound, or malicious tampering with the software.


And, when these problems occur, they often result in obviously wrong results (as opposed to believable results).

And they often don't.  My favourite example is a warehouse program
written in BASIC that I was asked to fix many years ago.  One of its
flaws was that it stored 9-digit part numbers in single-precision
floats.  Oddly enough, the problem hadn't shown up in the previous
programmer's test cases.
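The underlying limit is that a single-precision float carries 24 significand
bits, roughly 7 decimal digits, so distinct 9-digit part numbers can collapse
onto the same stored value; a small Java sketch with made-up part numbers:

    public class PartNumbers {
        public static void main(String[] args) {
            float a = 123456789f;            // a 9-digit "part number"
            float b = 123456791f;            // a different one, two apart
            System.out.println((int) a);     // 123456792 -- not the number that was stored
            System.out.println(a == b);      // true: both round to the same float
        }
    }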

Thus, if one were to study an aspect of this problem, I would think it would be better to focus on a higher-impact issue (not that arithmetic doesn't have worth).

What does "higher-impact" mean?  The famous bugs listed above
seem pretty high impact to me!

Frankly, to me this seems like a textbook case of people THINKING
they understand a computing topic and THINKING they don't/won't
have any problem with it and rushing in where angels fear to tread.

