A few comments:
Many portable C programs assume that signed integer overflow wraps around
reliably using two's complement arithmetic.
I'd replace portable C programs with widely-used C programs. The normal
use of portable means that it conforms to the standard.
Conversely, in at least one
Many portable C programs assume that signed integer overflow wraps around
reliably using two's complement arithmetic.
I was looking for an adjective that means the programs work on a wide
variety of platforms, and portable seems more appropriate than
widely-used.
Maybe just say what you
I can do this. What I also will do is improve VRP to still fold comparisons
of the form a - 10 > 20 when it knows there is no overflow due to available
range information for a (it doesn't do that right now).
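For illustration, a minimal sketch of the kind of fold being discussed (the function and the range test here are invented, not taken from GCC):

/* Hypothetical example.  With signed overflow undefined, a compiler may
   fold "a - 10 > 20" to "a > 30" unconditionally.  With -fwrapv it can
   only do so when range information proves the subtraction cannot wrap,
   as below, where a is known to be non-negative.  */
int in_range (int a)
{
  if (a < 0)
    return 0;
  /* a is now in [0, INT_MAX], so a - 10 cannot wrap.  */
  return a - 10 > 20;
}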
I thought fold-const.c optimizes that right now, and has been doing so for a long time?
If that's
We do that with -fstrict-aliasing, which also changes language semantics.
Well, yes, but not quite in the same way. Indeed it's rather hard to
describe in what way it changes the language semantics but easier to
describe the effect it has on optimization. I think -fwrapv is the other
way
Well, while the effect of -fstrict-aliasing is hard to describe
(TBAA _is_ a complex part of the standard), -fno-strict-aliasing
rules are simple. All loads and stores alias each other if they
cannot be proven not to alias by points-to analysis.
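For illustration, a minimal example of the difference (the function name is made up):

/* Under -fstrict-aliasing the compiler may assume the store through f
   cannot modify *i, since the types differ, and return 1 without
   reloading.  Under -fno-strict-aliasing the store and the load are
   assumed to alias unless points-to analysis proves otherwise, so *i
   is reloaded before the return.  */
int read_after_store (int *i, float *f)
{
  *i = 1;
  *f = 2.0f;
  return *i;
}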
Yes, the rules are simple, but are written in
the seemingly prevalent attitude, but "it is undefined; it is not C"
is the opinion of the majority of middle-end maintainers.
Does anybody DISAGREE with that attitude? It isn't valid C to assume that
signed overflow wraps. I've heard nobody argue that it is. The question
is how far we go
Still, in practical terms, it is true that overflow
being undefined is unpleasant. In Ada terms, it would
have seemed better in the C standard to rein in the
effect of overflow, for instance, merely saying that
the result is an implementation defined value of the
type, or the program is
More important, we don't yet have an easy way to characterize the
cases where (2) would apply. For (2), we need a simple, documented
rule that programmers can easily understand, so that they can easily
verify that C code is safe
I'm not sure what you mean: there's the C standard. That says
Richard Kenner wrote:
I'm not sure what you mean: there's the C standard.
We have many standards, starting with KRv1 through the current draft.
Which one do you call the C standard?
The current one. All others are previous C standards. However, it
doesn't matter in this case since ALL of them
VRP as currently written adjusts limits out to infinity of an
appropriate sign for variables which are changed in loops. It then
assumes that the (signed) variable will not wrap past that point,
since that would constitute undefined signed overflow.
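As a hypothetical illustration of that behaviour:

/* Sketch only.  Because signed overflow is undefined, VRP may give i
   the range [0, +INF] and assume it never wraps to a negative value;
   with -fwrapv that assumption is not available and the possibility
   of i wrapping past INT_MAX has to be taken into account.  */
int count_iterations (int n)
{
  int count = 0;
  for (int i = 0; i <= n; i++)
    count++;
  return count;
}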
But isn't that fine since OTHER code is
Wait, though: KRv2 is post-C89.
Not completely: it was published in 1988, but the cover says it is based on
draft-proposed ANSI C.
Naturally KRv2 documents this, but if you want to know about
traditional practice the relevant wording should come from KRv1,
not v2.
I don't know what KRv1 says on
I suppose there is
*hv = (HOST_WIDE_INT) -(unsigned HOST_WIDE_INT) h1;
to make it safe.
Can't that conversion overflow?
Not on a two's complement machine,
Then I'm confused about C's arithmetic rules. Suppose h1 is 1. It's cast
to unsigned, so stays as 1. Now we
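For reference, a sketch of why the cast helps, using plain long in place of HOST_WIDE_INT (the type is an assumption here):

/* Negating LONG_MIN directly is undefined signed overflow.  Negating in
   unsigned arithmetic is well defined (it wraps modulo 2^N); converting
   the out-of-range result back to long is then implementation-defined
   rather than undefined, and on a two's complement machine it simply
   yields LONG_MIN again.  */
long negate (long h1)
{
  return (long) - (unsigned long) h1;
}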
On the other hand, C does not have a way to tell the compiler:
this is my loop variable, it must not be modified inside the loop
neither you can say:
this is the upper bound of the loop, it must not be modified
either.
No, but the compiler can almost always trivially
The burden of proof ought to be on the guys proposing -O2
optimizations that break longstanding code, not on the skeptics.
There's also a burden of proof that proposed optimizations will actually
break longstanding code. So far, all of the examples of code shown
that assumes wrapv semantics
As I said earlier in this thread, people seem to think that the
standards committee invented something new here in making overflow
undefined, but I don't think that's the case.
I agree with that too.
However, it is also the case that between KRv1 and the ANSI C standard,
there was a language
Doing that in unsigned arithmetic is much more readable anyway.
If you're concerned about readability, you leave it as the two tests and
let the compiler worry about the optimal way to implement it.
So I doubt that programmers would do that in signed arithmetic.
I kind of doubt that as
I think this is a fragile and not very practical approach. How do
you define these traditional cases?
You don't need to define the cases in advance. Rather, you look at
each place where you'd be making an optimization based on the non-existence
of overflow and use knowledge of the importance
Are you volunteering to audit the present cases and argue whether they
fall in the traditional cases?
I'm certainly willing to *help*, but I'm sure there will be some cases
that will require discussion to get a consensus.
Note that -fwrapv also _enables_ some transformations on signed
This isn't just about old code. If you're saying that old code with
overflow checking can't be fixed (in a portable manner...), then new
code will probably use the same tricks.
I said there's no good way, meaning as compact as the current tests. But
it's certainly easy to test for overflow
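For example, something along these lines (a sketch, not taken from any particular package):

#include <limits.h>

/* Check the operands before adding, so no signed overflow ever occurs.
   Longer than the wrap-around test, but valid C with or without
   -fwrapv.  */
int add_would_overflow (int a, int b)
{
  if (b > 0)
    return a > INT_MAX - b;
  else
    return a < INT_MIN - b;
}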
http://gcc.gnu.org/ml/gcc/2006-12/msg00607.html
If this doesn't count as optimization of loop invariants
then what would count?
One where the induction variable was updated additively, not
multiplicatively. When we talk about normal loop optimizations,
that's what we mean. I agree that the
Note that -fwrapv also _enables_ some transformations on signed
integers that are disabled otherwise. We for example constant fold
-CST for -fwrapv while we do not if signed overflow is undefined.
Would you change those?
I don't understand the rationale for not wrapping constant
But how would that happen here? If we constant-fold something that would
have overflowed by wrapping, we are ELIMINATING a signed overflow, not
INTRODUCING one. Or do I misunderstand what folding we're talking about
here?
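A hypothetical illustration of the folding in question:

#include <limits.h>

/* -INT_MIN overflows in signed arithmetic.  With -fwrapv the negation
   can be folded at compile time to INT_MIN (it wraps); with overflow
   undefined the compiler may decline to fold it, since folding would
   have to pick a result for the overflowing operation.  */
int neg_min (void)
{
  return -INT_MIN;
}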
http://gcc.gnu.org/PR27116 is what led to the patch.
I think
But didn't this thread get started by a real program that was broken
by an optimization of loop invariants? Certainly I got a real bug
report of a real problem, which you can see here:
http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html
I just thought of something
Paul Eggert wrote:
That's great, but GCC has had many other hands stirring the pot.
I daresay a careful scan would come up with many other examples of
undefined behavior due to signed integer overflow. (No doubt
you'll be appalled by them as well, but there they are.)
That's
Not so appalling really, given that relying on wrapping is, as has
been pointed out in this thread, the most natural and convenient way
of testing for overflow. It is really *quite* difficult to test for
overflow while avoiding overflow, and this is something that is
probably not in the
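The kind of test being referred to is presumably something like this sketch:

/* The "natural" wrap-around check: add first, then inspect the result.
   This only works if signed overflow wraps (-fwrapv); with overflow
   undefined, the comparisons can legitimately be folded away and the
   check removed.  */
int sum_overflows (int a, int b)
{
  int sum = a + b;
  return (b > 0 && sum < a) || (b < 0 && sum > a);
}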
Where does GCC assume wrapv semantics?
The macro OVERFLOW_SUM_SIGN in fold-const.c.
Here's an example from the intprops module of gnulib.
These are interesting cases.
Note that all the computations are constant-folded.
And I think this points to the fact that we can have our cake and eat it too
in many cases. Everything we're seeing points to the fact that the cases
where
[EMAIL PROTECTED] (Richard Kenner) writes:
Date: Sat, 30 Dec 2006 08:01:37 EST
I'd actually be very surprised if there were ANYPLACE where GCC has code
that's otherwise correct but which would malfunction if signed overflow
weren't required to wrap.
Date: Sat, 30 Dec 2006 08:09:33
Note the interesting places in VRP where it assumes undefined signed
overflow is in compare_values -- we use the undefinedness to fold
comparisons.
Unfortunately, comparisons are the trickiest case because you have to
be careful to avoid deleting a comparison that exists to see if overflow
I am. I just now looked and found another example.
gcc-4.3-20061223/gcc/fold-const.c's neg_double function
contains this line:
*hv = - h1;
OK, granted. But it is followed by code testing for overflow.
That means that (1) we can find these by looking for cases where we are
setting
Gaby said
KR C leaves arithmetic overflow undefined (except for unsigned
types), in the sense that you get whatever the underlying hardware
gives you. If it traps, you get trapped. If it wraps, you get wrapped.
Is that really what the KR book says, or just what compilers typically
did?
Wrong. Many people have relied on that feature because they thought it
was legal and haven't had the time to check every piece of code they
wrote for conformance with the holy standard. And they don't have the time
now to walk through the work of their lifetime to see where they did wrong,
at
I'm not sure what data you're asking for.
Here's the data *I'd* like to see:
(1) What is the maximum performance loss that can be shown using a real
program (e.g,. one in SPEC) and some compiler (not necessarily GCC) when
one assumes wrapping semantics?
(2) In the current SPEC, how many
Those questions are more for the opponents of -fwrapv, so
I'll let them answer them. But why are they relevant?
Having -fwrapv on by default shouldn't affect your SPEC
score, since you can always compile with -fno-wrapv if the
application doesn't assume wraparound.
(1) If -fwrapv isn't the
But since you asked, I just now did a quick scan of
gcc-4.3-20061223 (nothing fancy -- just 'grep -r') and the first
example I found was this line of code in gcc/stor-layout.c's
excess_unit_span function:
/* Note that the calculation of OFFSET might overflow; we calculate it so
that