------- Comment #8 from mikpe at it dot uu dot se  2010-07-27 22:18 -------
(In reply to comment #7)
> In fact, it seems that the error is already there at the very
> beginning: the .original dump shows
> 
> fixnum_neg
> {
>   ux = (unsigned char) x;
>   uy = (unsigned char) -(signed char) ux;
>   ...
> }
> 
> That is, the negation of an unsigned char value is implemented by casting it to
> signed char, which introduces signed overflow if the value of x is -128.  As
> far as I understand the C standard, this seems incorrect.

It depends on how GCC interprets that cast and negation:
- if the cast has C semantics, then (signed char)ux causes overflow
- if the cast wraps, then it is fine and yields (signed char)-128
- if the negation has C semantics, then (signed char)-128 is promoted to int and
then negated, yielding 128
- if the negation maps signed char to signed char, then it causes overflow
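
For concreteness, here is the dump's sequence written out as plain C and traced
at x == -128, under the second interpretation above (the cast wraps, which also
matches the modulo-2^N conversion behaviour GCC's manual documents for C).
This is only a sketch; it assumes 8-bit two's-complement chars, and the
intermediate names sx and neg are mine, not from the dump:

    #include <stdio.h>

    int main(void)
    {
        signed char x = -128;
        unsigned char ux = (unsigned char)x;   /* always defined: 128 */
        signed char sx = (signed char)ux;      /* wrapping cast: -128 */
        int neg = -sx;                         /* sx promotes to int, so this
                                                  is -(-128) == 128, with no
                                                  overflow at int width */
        unsigned char uy = (unsigned char)neg; /* always defined: 128 */
        printf("ux=%u sx=%d neg=%d uy=%u\n",
               (unsigned)ux, (int)sx, neg, (unsigned)uy);
        return 0;
    }

Under that reading every step of fixnum_neg is well defined at x == -128.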

IMO, a serious problem with the C standard is that

    signed char x = -1;                            /* (unsigned char)x is 255 */
    signed char y = (signed char)(unsigned char)x; /* 255 exceeds SCHAR_MAX */

triggers signed overflow causing undefined behaviour.

This comes from an asymmetry between cast to unsigned and cast to signed:
- cast from signed to unsigned is a total and injective function
- cast from unsigned to signed is a partial function, defined only from 0 to the
maximum of the signed type (inclusive), which excludes exactly the values
produced by converting negative signed values
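
To make the asymmetry concrete, here is a small test program (my own sketch,
again assuming 8-bit chars) that pushes every signed char value through the
round trip; the forward conversion is pinned down by C99 6.3.1.3p2, the
backward one only for values up to SCHAR_MAX:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int v;
        for (v = SCHAR_MIN; v <= SCHAR_MAX; v++) {
            unsigned char u = (unsigned char)v;  /* total and injective */
            if (u <= SCHAR_MAX)
                printf("%4d -> %3u -> %4d\n",
                       v, (unsigned)u, (int)(signed char)u);
            else
                printf("%4d -> %3u -> outside the guaranteed range\n",
                       v, (unsigned)u);
        }
        return 0;
    }

Every negative v lands on a u above SCHAR_MAX, so converting back is exactly
the case the standard leaves open.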

(I'd be happy to be proven wrong about this, if anyone can cite relevant
sections from n1124 (C99 TC2) or n1494 (C1x draft) to the contrary.)

Personally I think GCC should treat source-level casts as wrapping, regardless
of -fstrict-overflow and -fno-wrapv.  Perhaps it intends to, and we're just
seeing the effects of bugs.
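
In the meantime the wrapping cast can be spelled out in code that is well
defined under any reading; wrap_to_schar below is a hypothetical helper of
mine, not anything GCC provides, and it assumes UCHAR_MAX == 2 * SCHAR_MAX + 1
and two's complement:

    #include <limits.h>

    static signed char wrap_to_schar(unsigned char u)
    {
        /* Do the modulo-256 reduction explicitly in int, where both u
           and the wrapped result are representable, then convert the
           now in-range value. */
        return (u <= SCHAR_MAX) ? (signed char)u
                                : (signed char)((int)u - UCHAR_MAX - 1);
    }

With that helper, fixnum_neg's body becomes
uy = (unsigned char)-wrap_to_schar(ux), with no step that overflows.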


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=45034
