On 12/5/2011 1:37 PM, Don wrote:
The "overflow12.pdf" paper on that site shows statistics indicating that overflow is very often intentional. That's strong evidence that you *cannot* make signed overflow an error. Even if you could do it with zero complexity and zero performance impact, it would be wrong.
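As one common illustration of deliberate wraparound (my sketch, not from the paper): many hash functions depend on multiplication overflowing modulo 2^32. FNV-1a, for example, would simply be broken if overflow trapped:

```cpp
#include <cassert>
#include <cstdint>

// FNV-1a, 32-bit: the multiply is *meant* to wrap mod 2^32.
// Using uint32_t makes the wraparound well-defined in C++.
uint32_t fnv1a(const char* s) {
    uint32_t h = 2166136261u;          // FNV offset basis
    while (*s) {
        h ^= static_cast<uint32_t>(static_cast<unsigned char>(*s++));
        h *= 16777619u;                // FNV prime; intentionally overflows
    }
    return h;
}
```

If an implementation raised an error on every overflowing multiply here, the function couldn't work at all; the wraparound *is* the algorithm.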
Here's an email from Andy Koenig on the C++ standards mailing list that I think is very relevant. (There are a lot of very experienced people on that list, and lots of mistakes we can avoid by listening to them.)
-----------------------------------------
Subject: [c++std-ext-11967] Re: Two's-Complement Arithmetic
From: "Andrew Koenig" <a...@acm.org>
To: <c++std-...@accu.org>
Date: Mon, 5 Dec 2011 22:08:29 -0500

>> With respect to overflow, I wonder how many of these issues would
>> not be better addressed with a Scheme-like bignum type that is cheap
>> for (31- or) 63-bit integers, and involves memory allocation only on
>> overflow.

> +1, would love to have had this years ago.

Sounds a little like Python 3 integers.

And while I'm thinking about Python 3 arithmetic, there's something else in Python 3 that I'd love to have in C++, namely a guarantee that:

1) Converting a string to a floating-point number, whether through input at run time or writing a floating-point literal as part of a program, always yields the correctly rounded closest floating-point value to the infinite-precision value of the literal.

2) Converting a floating-point number to a string without a specified number of significant digits yields the string with the smallest number of significant digits that, when converted back to floating point according to (1), yields exactly the same value as the one we are converting.

Techniques for solving these problems were published more than 20 years ago, so it's hard to argue against them on the basis of novelty. Moreover, these rules would have some nice properties, among them:

Printing a floating-point number without specifying accuracy and reading it back again into a variable with the same precision gives you the same value.

Printing a floating-point literal with default precision gives you the same value as the literal unless the literal has too many significant digits to represent accurately.

References here:
http://www.cs.washington.edu/education/courses/cse590p/590k_02au/print-fp.pdf
http://www.cs.washington.edu/education/courses/cse590p/590k_02au/read-fp.pdf
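For what it's worth, these two guarantees were eventually standardized in C++17 as std::to_chars / std::from_chars (floating-point support needs a recent standard library, e.g. GCC 11+ or MSVC). A sketch of both directions:

```cpp
#include <cassert>
#include <charconv>
#include <string>

// Guarantee (2): to_chars with no precision argument emits the *shortest*
// digit string that, when read back, recovers the exact same double.
std::string shortest(double x) {
    char buf[64];
    auto res = std::to_chars(buf, buf + sizeof buf, x);
    return std::string(buf, res.ptr);
}

// Guarantee (1): from_chars parses to the correctly rounded nearest double.
double parse(const std::string& s) {
    double v = 0.0;
    std::from_chars(s.data(), s.data() + s.size(), v);
    return v;
}
```

So shortest(0.1) yields "0.1" rather than a 17-digit expansion, and parse(shortest(x)) == x holds for every finite double, which is exactly the round-trip property Koenig describes.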