https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107195

--- Comment #7 from CVS Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Aldy Hernandez <al...@gcc.gnu.org>:

https://gcc.gnu.org/g:c4d15dddf6b9eacb36f535807ad2ee364af46e04

commit r13-3217-gc4d15dddf6b9eacb36f535807ad2ee364af46e04
Author: Aldy Hernandez <al...@redhat.com>
Date:   Mon Oct 10 20:42:10 2022 +0200

    [PR107195] Set range to zero when nonzero mask is 0.

    When solving 0 = _15 & 1, we calculate _15 as:

            [irange] int [-INF, -2][0, +INF] NONZERO 0xfffffffe

    The known value of _15 is [0, 1] NONZERO 0x1 which is intersected with
    the above, yielding:

            [0, 1] NONZERO 0x0

    This eventually gets copied to a _Bool [0, 1] NONZERO 0x0.

    This is problematic because here we have a bool which is zero, but
    returns false for irange::zero_p, since the latter does not look at
    nonzero bits.  This causes logical_combine to assume the range is
    not-zero, and all hell breaks loose.

    I think we should just normalize a nonzero mask of 0 to [0, 0] at
    creation, thus avoiding all this.

            PR tree-optimization/107195

    gcc/ChangeLog:

            * value-range.cc (irange::set_range_from_nonzero_bits): Set range
            to [0,0] when nonzero mask is 0.

    gcc/testsuite/ChangeLog:

            * gcc.dg/tree-ssa/pr107195-1.c: New test.
            * gcc.dg/tree-ssa/pr107195-2.c: New test.
