The current COND_ADD reduction pattern can't optimize floating-point vectors.
As Richard suggested:
https://gcc.gnu.org/pipermail/gcc-patches/2023-September/631336.html
allow the COND_ADD reduction pattern to optimize floating-point vectors as well.
Bootstrap and regression testing are running.  Ok for trunk if the tests pass?

gcc/ChangeLog:

	* match.pd: Optimize COND_ADD reduction pattern.

---
 gcc/match.pd | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/gcc/match.pd b/gcc/match.pd
index 5061c19e086..398beaebd27 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -8863,8 +8863,10 @@
    and,
      c = mask1 && mask2 ? d + b : d.  */
 (simplify
-  (IFN_COND_ADD @0 @1 (vec_cond @2 @3 integer_zerop) @1)
-   (IFN_COND_ADD (bit_and @0 @2) @1 @3 @1))
+  (IFN_COND_ADD @0 @1 (vec_cond @2 @3 zerop@4) @1)
+   (if (ANY_INTEGRAL_TYPE_P (type)
+	|| fold_real_zero_addition_p (type, NULL_TREE, @4, 0))
+    (IFN_COND_ADD (bit_and @0 @2) @1 @3 @1)))
 
 /* Detect simplication for a conditional length reduction where
-- 
2.36.3