https://gcc.gnu.org/bugzilla/show_bug.cgi?id=30957

vries at gcc dot gnu.org changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|RESOLVED                    |WAITING
                 CC|                            |vries at gcc dot gnu.org
         Resolution|FIXED                       |---

--- Comment #22 from vries at gcc dot gnu.org ---
(In reply to m...@gcc.gnu.org from comment #21)
> So, is there anything left as a bug in the compiler, or has this issue been
> fixed?

The compiler behaves in accordance with the semantics of the switches passed.
In that sense, there's no bug.

The code committed for this PR (using -0.0 instead of +0.0) in
insert_var_expansion_initialization currently looks like this:
...
          if (honor_signed_zero_p)
            zero_init = simplify_gen_unary (NEG, mode, CONST0_RTX (mode),
                                            mode);
          else
            zero_init = CONST0_RTX (mode);
...

AFAIU, this has become dead code: this variable expansion is only done under
-fassociative-math, and -fsigned-zeros cannot be in effect at the same time as
-fassociative-math, so honor_signed_zero_p is never true here.
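
For context, a minimal standalone C illustration (not taken from the GCC
sources; compile without -ffast-math) of why -0.0 rather than +0.0 is the
sign-preserving initializer for an expanded sum accumulator when signed zeros
are honored: under round-to-nearest, -0.0 + -0.0 == -0.0, while
+0.0 + -0.0 == +0.0.
...
#include <math.h>
#include <stdio.h>

int main (void)
{
  double x[2] = { -0.0, -0.0 };
  double acc_pos = +0.0;  /* initializer that can drop the sign */
  double acc_neg = -0.0;  /* sign-preserving additive identity  */

  for (int i = 0; i < 2; i++)
    {
      acc_pos += x[i];
      acc_neg += x[i];
    }

  /* Prints "+0.0 init: 0" and "-0.0 init: -0": only the -0.0
     initializer keeps the sign of the all-negative-zero sum.  */
  printf ("+0.0 init: %g (signbit %d)\n", acc_pos, signbit (acc_pos) != 0);
  printf ("-0.0 init: %g (signbit %d)\n", acc_neg, signbit (acc_neg) != 0);
  return 0;
}
...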

The definition of -fno-signed-zeros is 'Allow optimizations for floating-point
arithmetic that ignore the signedness of zero'. So strictly speaking, if the
cost of -0.0 is the same as that of +0.0, there's nothing that stops us from
using -0.0 here even under -fno-signed-zeros (attempting to be a bit 'more'
correct at zero cost). My feeling is that this is not a good idea, but
obviously others may think otherwise. If we decide that we want to implement
this, we should reopen this bug. If we decide otherwise, we should mark this
wontfix. Re-opening this as WAITING for the appropriate maintainer(s) to make
a decision.
