Graydon Hoare wrote:
> I believe you can trap
> this in C presently with a gcc flag too (-ftrapv); but it's a flag
> rarely turned on.

-ftrapv basically doesn't work: it doesn't seem to work at all on
x86_64, and it doesn't prevent algebraic simplifications that assume
no overflow (which can hide the overflow entirely). I haven't tried it
with GCC 4.8 yet, but I assume the story is the same there.
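
For example, the kind of post-hoc check people write is exactly what
the optimizer is entitled to delete (a sketch of the failure mode;
exact behavior varies by compiler version and optimization level):

#include <limits.h>

/* Because signed overflow is undefined, GCC may assume x + 1 > x
 * always holds and compile this to "return x + 1;" at -O2, silently
 * removing the saturation path. */
int increment_saturating(int x)
{
    int y = x + 1;
    if (y < x)          /* "can't happen" per the C standard */
        return INT_MAX; /* intended: saturate instead of wrapping */
    return y;
}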

It's also only for signed overflow. Since signed overflow is undefined
in C and the compiler will happily optimize on that assumption,
catching it is extra important, but that still ignores all the
unsigned cases where overflow is indicative of a bug.
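
The classic unsigned example is an allocation size computation that
wraps (hypothetical code, for illustration):

#include <stdint.h>
#include <stdlib.h>

/* The multiplication is well-defined (it wraps mod 2^N), so no
 * signed-overflow checker will object, but on wrap the buffer is far
 * too small and the loop writes off the end of it. */
uint32_t *make_table(size_t n)
{
    uint32_t *t = malloc(n * sizeof *t);  /* silently wraps if n is huge */
    if (t == NULL)
        return NULL;
    for (size_t i = 0; i < n; i++)
        t[i] = 0;                         /* heap corruption after a wrap */
    return t;
}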

The integer overflow checker in clang (http://embed.cs.utah.edu/ioc/)
is _much_ better, and I used it extensively in the development of the
Opus codec, but again it only covers signed overflow. Checking
unsigned overflow would also require some kind of annotation for the
cases where you actually expect the overflow.
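
One way that annotation could look -- and this is hypothetical, not
IOC's actual interface -- is to funnel intentional wraparound through
a named helper so a checker could whitelist those call sites and still
flag every bare "a + b":

#include <stdint.h>

/* Unsigned wrap is well-defined in C and intended at every call site
 * of this helper; anything not routed through it is suspect. */
static inline uint32_t wrapping_add_u32(uint32_t a, uint32_t b)
{
    return a + b;
}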

> How much of a performance penalty is it worth?

The clang IOC seemed to have fairly little impact on "typical" code,
but on gnarly fixed-point DSP code it was a fairly major slowdown, a
slowdown of more than 3x, which wouldn't be acceptable in production
code if it were inescapable. It's absolutely fantastic for debug
builds, though. And again, it only covers signed overflow, which is
undefined in C, so there were no "false positives": every instance was
a bug even if the result was discarded.

I think signed overflow is defined in Rust? If so, then even signed
arithmetic can't have false-positive-free detection from pure compiler
instrumentation.
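
To illustrate: with defined wrapping semantics, a deliberate wrap and
an accidental one are identical at the instruction level, so
instrumentation alone can't flag one without the other. In C you can
see the same thing with the usual idiom for a wrapping signed add (the
conversion back to int32_t is implementation-defined before C23, but
wraps on mainstream compilers):

#include <stdint.h>

/* Deliberate two's-complement wrap via the unsigned detour; a checker
 * watching the add has no way to tell this from a bug. */
static inline int32_t wrapping_add_i32(int32_t a, int32_t b)
{
    return (int32_t)((uint32_t)a + (uint32_t)b);
}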

Certainly there are cases where any performance hit is unacceptable,
but I imagine most uses of integers are not in those places. The
alternative of "use a full bignum" seems inelegant, especially if the
bignum comes with a big performance/memory/icache hit even in places
where a Sufficiently Smart compiler could statically prove that a
regular integer would suffice.

These also don't do anything for cases where the "bug-free range" is
still within the machine word being used. That's a real element of
software correctness, but perhaps too obscure to worry about: I'd
speculate that when the range constraints come from the problem being
solved rather than from the machine, the author is more likely to be
aware of them.
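
A contrived example of what I mean (hypothetical, obviously):

#include <assert.h>

/* The bug-free range is 0..100, so a word-level overflow checker on a
 * 32-bit int only notices at 2^31, long after the value has gone
 * wrong; only a domain-level check (or a ranged type) catches it. */
void adjust_volume_percent(int *volume, int delta)
{
    *volume += delta;
    assert(*volume >= 0 && *volume <= 100);  /* range from the problem */
}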

Robert O'Callahan wrote:
> On the other hand, I don't see this as important for floating point types.
> Accumulating NaNs and infinities is much saner than simply producing the
> wrong answer, has seldom led to critical browser bugs in my experience, and
> I can't see how it would without an intervening conversion to integer type
> which would, of course, be checked.

I've seen things like code infinite-looping because NaN != NaN, but I
agree that I've seen far fewer serious bugs from "surprising"
divergences between floating-point values and real numbers.
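
A minimal sketch of that hazard, for concreteness:

/* If step is (or ever produces) NaN, then x becomes NaN, x != limit
 * stays true forever, and the loop never terminates -- every
 * comparison with NaN is false except !=. */
double sum_to(double limit, double step)
{
    double total = 0.0;
    for (double x = 0.0; x != limit; x += step)  /* fragile; prefer x < limit */
        total += x;
    return total;
}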

In general, I think float weirdness is something that could be
addressed by better dynamic checking tools, but it wouldn't benefit
from language support in the same way integer overflow does: code that
intentionally creates NaNs and infinities and expects to do things
with them is fairly uncommon, while overflow of fixed-size integers is
moderately common and useful (e.g. fast hashes and PRNGs implemented
as arithmetic modulo 2^N).
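
FNV-1a is a concrete case: the multiply is supposed to wrap mod 2^32
on every iteration, so blanket unsigned overflow checking would flag
this correct code constantly:

#include <stddef.h>
#include <stdint.h>

/* 32-bit FNV-1a hash; the wraparound of the multiply is by design. */
uint32_t fnv1a(const unsigned char *data, size_t len)
{
    uint32_t h = 2166136261u;       /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 16777619u;             /* FNV prime; wrap is intended */
    }
    return h;
}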