https://gcc.gnu.org/bugzilla/show_bug.cgi?id=122845
--- Comment #5 from GCC Commits <cvs-commit at gcc dot gnu.org> ---
The trunk branch has been updated by Andrew Pinski <[email protected]>:

https://gcc.gnu.org/g:50df2b188450457580b5ff4ac1d524fcc74de9bc

commit r16-6727-g50df2b188450457580b5ff4ac1d524fcc74de9bc
Author: Andrew Pinski <[email protected]>
Date:   Sat Jan 10 23:17:12 2026 -0800

    match: Simplify `(T1)(a bit_op (T2)b)` to `((T1)a bit_op b)` when b has type T1 and the conversion from T2 is truncating [PR122845]

    This adds the simplification of:
    ```
    <unnamed-signed:3> _1;
    _2 = (signed char) _1;
    _3 = _2 ^ -47;
    _4 = (<unnamed-signed:3>) _3;
    ```
    to:
    ```
    <unnamed-signed:3> _1;
    _4 = _1 ^ -47;
    ```

    This also fixes PR 122843 by optimizing out the xor so that we get:
    ```
    _1 = b.a;
    _21 = (<unnamed-signed:3>) t_23(D); // t_23 in the original testcase was 200, so this reduces to 0
    _5 = _1 ^ _21;
    # .MEM_24 = VDEF <.MEM_13>
    b.a = _5;
    ```
    And then, since there is no cast left, the pattern
    `(bit_xor (convert1? (bit_xor:c @0 @1)) (convert2? (bit_xor:c @0 @2)))`
    can catch it, as we get:
    ```
    _21 = (<unnamed-signed:3>) t_23(D);
    _5 = _1 ^ _21;
    _22 = (<unnamed-signed:3>) t_23(D);
    _7 = _5 ^ _22;
    _25 = (<unnamed-signed:3>) t_23(D);
    _8 = _7 ^ _25;
    _26 = (<unnamed-signed:3>) t_23(D);
    _9 = _7 ^ _26;
    ```
    After unrolling, FRE will then optimize away all of those xors.

    Bootstrapped and tested on x86_64-linux-gnu.

        PR tree-optimization/122845
        PR tree-optimization/122843

    gcc/ChangeLog:

        * match.pd (`(T1)(a bit_op (T2)b)`): Also simplify if T1 is the
        same type as b and T2 is a wider type than T1.

    gcc/testsuite/ChangeLog:

        * gcc.dg/tree-ssa/bitops-12.c: New test.
        * gcc.dg/tree-ssa/bitops-13.c: New test.
        * gcc.dg/store_merging_18.c: xfail store merging.

    Signed-off-by: Andrew Pinski <[email protected]>
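
For context, here is a minimal C sketch of the kind of source that produces the GIMPLE above: a narrow signed bitfield xor'ed with a wider value, where the conversion up to the wider type and the truncating conversion back can now be dropped. This is only an illustration under assumed types and field widths, not the contents of the committed bitops-12.c/bitops-13.c testcases.

```c
/* Illustrative sketch only; the 3-bit field and the parameter type are
   assumptions, not the actual gcc.dg/tree-ssa/bitops-12.c testcase.  */
struct S
{
  signed int a : 3;  /* shows up as <unnamed-signed:3> in GIMPLE */
};

struct S b;

void
f (signed char t)
{
  /* The bitfield load is converted to a wider type for the xor and the
     result is truncated back to 3 bits on the store; the new match.pd
     rule lets the xor be done directly in the 3-bit type, which (in a
     loop like the one in PR 122843) exposes repeated xors with the same
     value for FRE to cancel.  */
  b.a = b.a ^ t;
}
```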
