On Sat, Apr 27, 2013 at 4:23 AM, Graydon Hoare <[email protected]> wrote:
> I think it has to be opt-in, yeah. Sadly. I mean, I wish we were living
> in the world of hardware garbage collection and tagged memory words too,
> and Hensel codes had won out over floating point, and all our languages
> had well defined total functional subsets with industrial strength
> provers attached to them that we were legally obliged to use.

I don't understand the relationship between those features and integer
overflow checking. There are very strong reasons why those features
haven't developed, and none of those reasons apply to integer overflow
checking.

>> Definitely a Milestone 2 consideration ;)
>
> Maybe. At least consideration. I think any variant would have to be
> additive (backwards compatible) simply because in languages like this,
> it's _really_ not what people assume as the default.

In my experience the default assumption is that integer overflow doesn't
happen.

Just for fun I did "grep -F ' + '" and "grep -F ' - '" in some
mozilla-central graphics, DOM and layout code:

gfx/thebes/*.cpp
gfx/cairo/cairo/src/*.c
content/base/src/*.cpp
content/html/content/src/*.cpp
layout/generic/*.cpp

and skimmed the results. I might have missed something, but I found only
three occurrences of addition/subtraction operators where overflow seemed
expected: two in hash functions in cairo-cache.c, and one in a hash
function in cairo-misc.c.

Based on that data and previous experience with this code, I'm certain
that those occurrences are vastly outweighed by handwritten code that
tries to detect/avoid integer overflows, and also by arithmetic
operations that are still vulnerable to overflow bugs in spite of those
checks.

Furthermore, I wonder how many average C/C++ programmers know that
overflow of unsigned values is defined but overflow of signed values is
not. I expect that most people's assumptions are plainly incorrect.

So I contend that integer overflow checking is more likely to prevent
unexpected behavior than to cause it, especially when it really matters:
in shipped code that's being attacked. Once in a while someone writing a
hash function or similar will trip over it, but it will be easy for them
to detect and correct their mistake.

> We've already got in trouble for trapping divide-by-zero by a
> branch/fault rather than a signal; it hits inner loops and costs us
> performance. Really.

Was this some unsafe-language benchmark shootout? Even if those are
important due to some "Rust is slow, clinical tests prove it" bogo-PR
effect, I assume you would disable overflow checking along with
array-bounds checks in unsafe Rust code.

Rob
--
"If you love those who love you, what credit is that to you? Even sinners
love those who love them. And if you do good to those who are good to
you, what credit is that to you? Even sinners do that."
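To make the signed/unsigned point and the handwritten-guard point above
concrete, here is a minimal C sketch. It is illustrative only: the names
toy_hash and checked_add_int are made up, and this is not code from
mozilla-central.

#include <limits.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Unsigned arithmetic wraps modulo 2^32 by definition; this is the rare
 * case (a hash accumulator, like the ones in cairo-cache.c) where that
 * wrapping is actually what the author wants. */
static uint32_t toy_hash(const unsigned char *data, size_t len)
{
    uint32_t h = 0;
    for (size_t i = 0; i < len; i++)
        h = h * 31u + data[i];
    return h;
}

/* Signed overflow is undefined behaviour, so defensive code has to check
 * *before* adding -- the kind of handwritten guard that outnumbers
 * intentional wrapping in the grepped sources. */
static bool checked_add_int(int a, int b, int *out)
{
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return false;   /* a + b would overflow */
    *out = a + b;
    return true;
}

int main(void)
{
    unsigned char msg[] = "overflow";
    printf("hash = %u\n", (unsigned) toy_hash(msg, sizeof msg - 1));

    int sum;
    if (checked_add_int(INT_MAX, 1, &sum))
        printf("sum = %d\n", sum);
    else
        printf("INT_MAX + 1 would overflow; refusing to compute it\n");
    return 0;
}

The unsigned accumulator shows the intentional-wrapping case; the signed
helper shows why the check has to happen before the addition, since the
overflowing add itself is already undefined.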
