[Bug target/109971] [14 regression] Several powerpc64 vector test cases fail after r14-1242-gf574e2dfae7905

2023-07-16 Thread linkw at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109971

Kewen Lin  changed:

   What|Removed |Added

 Status|ASSIGNED|RESOLVED
 Resolution|--- |FIXED

--- Comment #13 from Kewen Lin  ---
(In reply to Andrew Pinski from comment #12)
> Has this been fixed?

Yes, those failures should be fixed now. Thanks for the reminder.

[Bug target/109971] [14 regression] Several powerpc64 vector test cases fail after r14-1242-gf574e2dfae7905

2023-07-15 Thread pinskia at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109971

--- Comment #12 from Andrew Pinski  ---
Has this been fixed?

[Bug target/109971] [14 regression] Several powerpc64 vector test cases fail after r14-1242-gf574e2dfae7905

2023-06-02 Thread cvs-commit at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109971

--- Comment #11 from CVS Commits  ---
The master branch has been updated by Pan Li :

https://gcc.gnu.org/g:bffc52838e393a775e13dc48162669b0f43ebe09

commit r14-1493-gbffc52838e393a775e13dc48162669b0f43ebe09
Author: Ju-Zhe Zhong 
Date:   Thu Jun 1 12:36:17 2023 +0800

VECT: Change flow of decrement IV

Following Richi's suggestion, I change the current decrement IV flow from:

do {
   remain -= MIN (vf, remain);
} while (remain != 0);

into:

do {
   old_remain = remain;
   len = MIN (vf, remain);
   remain -= vf;
} while (old_remain >= vf);

to enhance SCEV.

Includes fixes from Kewen.

This patch will need to wait for Kewen's test feedback.

Testing on x86 is ongoing.

Co-Authored-By: Kewen Lin 

  PR tree-optimization/109971

gcc/ChangeLog:

* tree-vect-loop-manip.cc (vect_set_loop_controls_directly): Change
decrement IV flow.
(vect_set_loop_condition_partial_vectors): Ditto.

[Bug target/109971] [14 regression] Several powerpc64 vector test cases fail after r14-1242-gf574e2dfae7905

2023-05-30 Thread linkw at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109971

--- Comment #10 from Kewen Lin  ---
(In reply to JuzheZhong from comment #9)
> (In reply to Kewen Lin from comment #8)
> > I did SPEC2017 int/fp evaluation on Power10 at Ofast and an extra explicit
> > --param=vect-partial-vector-usage=2 (the default is 1 on Power), baseline
> > r14-1241 vs. new r14-1242, the results showed that it can offer some
> > speedups for 500.perlbench_r 1.12%, 525.x264_r 1.96%, 544.nab_r 1.91%,
> > 549.fotonik3d_r 1.25%, but it degraded 510.parest_r by 5.01%.
> > 
> > I just tested Juzhe's new proposed fix which makes the loop closing iv
> > SCEV-ed, it can fix the degradation of 510.parest_r, also the miss
> > optimization on cunroll (in #c5), the test failures are gone as well. One
> > SPEC2017 re-evaluation with that fix is ongoing, I'd expect it won't degrade
> > anything.
> 
> Thanks so much. You mean you are trying this patch:
> https://gcc.gnu.org/pipermail/gcc-patches/2023-May/620086.html ?

Yes, it means that Richi's concern (that not only niter analysis but all
analyses relying on SCEV are pessimized) does affect the exposed degradation
and failures. Thanks for looking into it.

> 
> I believe it can improve even more for IBM's target.

Hope so, I'll post the new SPEC2017 results once the run finishes.

btw, the SPEC2017 run with --param=vect-partial-vector-usage=2 here is mainly
to verify the expectation on the decrement IV change; the normal SPEC2017 runs
still use --param=vect-partial-vector-usage=1, which isn't affected by this
change and beats the former in general due to the cost of setting up lengths.

[Bug target/109971] [14 regression] Several powerpc64 vector test cases fail after r14-1242-gf574e2dfae7905

2023-05-30 Thread juzhe.zhong at rivai dot ai via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109971

--- Comment #9 from JuzheZhong  ---
(In reply to Kewen Lin from comment #8)
> I did SPEC2017 int/fp evaluation on Power10 at Ofast and an extra explicit
> --param=vect-partial-vector-usage=2 (the default is 1 on Power), baseline
> r14-1241 vs. new r14-1242, the results showed that it can offer some
> speedups for 500.perlbench_r 1.12%, 525.x264_r 1.96%, 544.nab_r 1.91%,
> 549.fotonik3d_r 1.25%, but it degraded 510.parest_r by 5.01%.
> 
> I just tested Juzhe's new proposed fix which makes the loop closing iv
> SCEV-ed, it can fix the degradation of 510.parest_r, also the miss
> optimization on cunroll (in #c5), the test failures are gone as well. One
> SPEC2017 re-evaluation with that fix is ongoing, I'd expect it won't degrade
> anything.

Thanks so much. You mean you are trying this patch:
https://gcc.gnu.org/pipermail/gcc-patches/2023-May/620086.html ?

I believe it can improve even more for IBM's target.

[Bug target/109971] [14 regression] Several powerpc64 vector test cases fail after r14-1242-gf574e2dfae7905

2023-05-30 Thread linkw at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109971

Kewen Lin  changed:

   What|Removed |Added

   Keywords|testsuite-fail  |missed-optimization
   Assignee|linkw at gcc dot gnu.org   |juzhe.zhong at rivai dot ai

--- Comment #8 from Kewen Lin  ---
I did a SPEC2017 int/fp evaluation on Power10 at -Ofast with an extra explicit
--param=vect-partial-vector-usage=2 (the default is 1 on Power), baseline
r14-1241 vs. new r14-1242. The results showed that it can offer some speedups:
500.perlbench_r 1.12%, 525.x264_r 1.96%, 544.nab_r 1.91%, 549.fotonik3d_r
1.25%; but it degraded 510.parest_r by 5.01%.

I just tested Juzhe's newly proposed fix which makes the loop closing iv
SCEV-ed; it fixes the degradation of 510.parest_r and also the missed
optimization in cunroll (in #c5), and the test failures are gone as well. One
SPEC2017 re-evaluation with that fix is ongoing; I'd expect it won't degrade
anything.

[Bug target/109971] [14 regression] Several powerpc64 vector test cases fail after r14-1242-gf574e2dfae7905

2023-05-26 Thread rguenth at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109971

Richard Biener  changed:

   What|Removed |Added

   Priority|P3  |P1
   Target Milestone|--- |14.0

--- Comment #7 from Richard Biener  ---
(In reply to Kewen Lin from comment #5)
> For example on full-1.c int8_t type:
> 
>[local count: 75161909]:
>   # vectp_a_int8_t.4_18 = PHI 
>   # vectp_b_int8_t.8_8 = PHI 
>   # vectp_c_int8_t.14_26 = PHI 
>   # ivtmp_29 = PHI 
>   # loop_len_16 = PHI <_34(5), 16(2)>
>   vect__1.6_13 = .LEN_LOAD (vectp_a_int8_t.4_18, 8B, loop_len_16, 0);
>   vect__2.7_12 = VIEW_CONVERT_EXPR(vect__1.6_13);
>   vect__3.10_22 = .LEN_LOAD (vectp_b_int8_t.8_8, 8B, loop_len_16, 0);
>   vect__4.11_23 = VIEW_CONVERT_EXPR(vect__3.10_22);
>   vect__5.12_24 = vect__2.7_12 + vect__4.11_23;
>   vect__6.13_25 = VIEW_CONVERT_EXPR(vect__5.12_24);
>   .LEN_STORE (vectp_c_int8_t.14_26, 8B, loop_len_16, vect__6.13_25, 0);
>   vectp_a_int8_t.4_17 = vectp_a_int8_t.4_18 + 16;
>   vectp_b_int8_t.8_7 = vectp_b_int8_t.8_8 + 16;
>   vectp_c_int8_t.14_27 = vectp_c_int8_t.14_26 + 16;
>   ivtmp_30 = ivtmp_29 + 16;
>   _32 = MIN_EXPR ;
>   _33 = 127 - _32;
>   _34 = MIN_EXPR <_33, 16>;
>   if (ivtmp_30 <= 126)

With this exit condition niter analysis can work.

> goto ; [85.71%]
>   else
> goto ; [14.29%]
> 
> vs.
> 
>[local count: 75161909]:
>   # vectp_a_int8_t.4_18 = PHI 
>   # vectp_b_int8_t.8_8 = PHI 
>   # vectp_c_int8_t.14_26 = PHI 
>   # ivtmp_29 = PHI 
>   loop_len_16 = MIN_EXPR ;
>   vect__1.6_13 = .LEN_LOAD (vectp_a_int8_t.4_18, 8B, loop_len_16, 0);
>   vect__2.7_12 = VIEW_CONVERT_EXPR(vect__1.6_13);
>   vect__3.10_22 = .LEN_LOAD (vectp_b_int8_t.8_8, 8B, loop_len_16, 0);
>   vect__4.11_23 = VIEW_CONVERT_EXPR(vect__3.10_22);
>   vect__5.12_24 = vect__2.7_12 + vect__4.11_23;
>   vect__6.13_25 = VIEW_CONVERT_EXPR(vect__5.12_24);
>   .LEN_STORE (vectp_c_int8_t.14_26, 8B, loop_len_16, vect__6.13_25, 0);
>   vectp_a_int8_t.4_17 = vectp_a_int8_t.4_18 + 16;
>   vectp_b_int8_t.8_7 = vectp_b_int8_t.8_8 + 16;
>   vectp_c_int8_t.14_27 = vectp_c_int8_t.14_26 + 16;
>   ivtmp_30 = ivtmp_29 - loop_len_16;
>   if (ivtmp_30 != 0)

While here it will fail because ivtmp_30 isn't affine - it doesn't
decrement by an invariant amount but instead by MIN .

Note this will not only pessimize niter analysis but all analyses relying
on SCEV (for uses of this IV!).

The decrement is essentially saturating to zero so we might be able to
special-case this in niter analysis - but still I don't see how to
generally handle this in SCEV.  If we know that niter will fit into
a signed IV we could rewrite the exit test to ivtmp_30 > 0 and decrement
by constant 16.  Alternatively one can test the pre-decrement value,
in the above case

  if (ivtmp_29 >= 16)

which isn't ideal for IV coalescing later but it also allows

  ivtmp_30 = ivtmp_29 - 16;

here.

[Bug target/109971] [14 regression] Several powerpc64 vector test cases fail after r14-1242-gf574e2dfae7905

2023-05-25 Thread juzhe.zhong at rivai dot ai via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109971

--- Comment #6 from JuzheZhong  ---
> With decrement IV, the optimized IR actually becomes better, it also aligns
> with our discussion here:
> https://gcc.gnu.org/pipermail/gcc-patches/2023-April/615629.html (Thanks
> for the improvement!)

Oh, I see. It seems that my patch is overall beneficial to PowerPC?
That's what I want, since I'd like my patch to help not only RVV but also
other targets.

> It exposes something inefficient at -O2, it seems we can teach cunroll
> further about this kind of new sequence.

Ok, we can optimize it for decrement IV in the future.


> If you meant to disable decrement IV on Power (but now actually enable it
> unexpectedly), then probably we can just keep it (not disabling), for Power
> we mainly adopt --param=vect-partial-vector-usage=1, it shouldn't be
> affected, for --param=vect-partial-vector-usage=2, it does generate better
> code sequence for most cases and we can improve the remaining worse one
> gradually.

I didn't mean to disable decrement IV on Power. Actually, I really hope Power
can reuse the flow that I built for RVV; that makes things more meaningful.
If it works for Power and it seems to improve Power codegen in most cases, I'd
like to see Power enable it by default. Then we can optimize it together.

Thanks a lot for your information.

[Bug target/109971] [14 regression] Several powerpc64 vector test cases fail after r14-1242-gf574e2dfae7905

2023-05-25 Thread linkw at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109971

Kewen Lin  changed:

   What|Removed |Added

 Ever confirmed|0   |1
 Status|UNCONFIRMED |ASSIGNED
   Last reconfirmed||2023-05-26

--- Comment #5 from Kewen Lin  ---
(In reply to JuzheZhong from comment #4)
> (In reply to Kewen Lin from comment #3)
> > I'll take a look first.
> 
> Thanks a lot. I am sorry for causing this issue for you.

Never mind! Some failures can't even be caught by normal testing, or aren't
caused by the culprit patch itself but are just exposed by it.

As per your comment #c2, it seems that you want to disable this on Power (and
s390) for now? (It's apparently disabled for s390 already, since it always has
LOOP_VINFO_PARTIAL_LOAD_STORE_BIAS 1.)

After some checking, I found that:
 1) for failures on p9-vec-length-full-{1,2,6}.c, the root cause is that the
main loop becomes neat and the rtl pass bbro is able to duplicate it; the
expected counts of vector-with-length instructions change accordingly. I think
these are test issues.

With decrement IV, the optimized IR actually becomes better, it also aligns
with our discussion here:
https://gcc.gnu.org/pipermail/gcc-patches/2023-April/615629.html (Thanks for
the improvement!)

For example on full-1.c int8_t type:

   [local count: 75161909]:
  # vectp_a_int8_t.4_18 = PHI 
  # vectp_b_int8_t.8_8 = PHI 
  # vectp_c_int8_t.14_26 = PHI 
  # ivtmp_29 = PHI 
  # loop_len_16 = PHI <_34(5), 16(2)>
  vect__1.6_13 = .LEN_LOAD (vectp_a_int8_t.4_18, 8B, loop_len_16, 0);
  vect__2.7_12 = VIEW_CONVERT_EXPR(vect__1.6_13);
  vect__3.10_22 = .LEN_LOAD (vectp_b_int8_t.8_8, 8B, loop_len_16, 0);
  vect__4.11_23 = VIEW_CONVERT_EXPR(vect__3.10_22);
  vect__5.12_24 = vect__2.7_12 + vect__4.11_23;
  vect__6.13_25 = VIEW_CONVERT_EXPR(vect__5.12_24);
  .LEN_STORE (vectp_c_int8_t.14_26, 8B, loop_len_16, vect__6.13_25, 0);
  vectp_a_int8_t.4_17 = vectp_a_int8_t.4_18 + 16;
  vectp_b_int8_t.8_7 = vectp_b_int8_t.8_8 + 16;
  vectp_c_int8_t.14_27 = vectp_c_int8_t.14_26 + 16;
  ivtmp_30 = ivtmp_29 + 16;
  _32 = MIN_EXPR ;
  _33 = 127 - _32;
  _34 = MIN_EXPR <_33, 16>;
  if (ivtmp_30 <= 126)
goto ; [85.71%]
  else
goto ; [14.29%]

vs.

   [local count: 75161909]:
  # vectp_a_int8_t.4_18 = PHI 
  # vectp_b_int8_t.8_8 = PHI 
  # vectp_c_int8_t.14_26 = PHI 
  # ivtmp_29 = PHI 
  loop_len_16 = MIN_EXPR ;
  vect__1.6_13 = .LEN_LOAD (vectp_a_int8_t.4_18, 8B, loop_len_16, 0);
  vect__2.7_12 = VIEW_CONVERT_EXPR(vect__1.6_13);
  vect__3.10_22 = .LEN_LOAD (vectp_b_int8_t.8_8, 8B, loop_len_16, 0);
  vect__4.11_23 = VIEW_CONVERT_EXPR(vect__3.10_22);
  vect__5.12_24 = vect__2.7_12 + vect__4.11_23;
  vect__6.13_25 = VIEW_CONVERT_EXPR(vect__5.12_24);
  .LEN_STORE (vectp_c_int8_t.14_26, 8B, loop_len_16, vect__6.13_25, 0);
  vectp_a_int8_t.4_17 = vectp_a_int8_t.4_18 + 16;
  vectp_b_int8_t.8_7 = vectp_b_int8_t.8_8 + 16;
  vectp_c_int8_t.14_27 = vectp_c_int8_t.14_26 + 16;
  ivtmp_30 = ivtmp_29 - loop_len_16;
  if (ivtmp_30 != 0)
goto ; [85.71%]
  else
goto ; [14.29%]

2) for the failure on p9-vec-length-full-7.c ({u,}int8_t), the IR difference
causes cunroll not to unroll the loop further, so the IR differs in the
optimized dumps:

   [local count: 18146240]:
  MEM  [(signed char *)_int8_t + 16B] = { 15, 16, 17,
18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30 };
  MEM  [(signed char *)_int8_t + 32B] = { 31, 32, 33,
34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46 };
  .LEN_STORE (  [(void *)_int8_t + 48B], 128B, 11, { 47, 48,
49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62 }, 0); [tail call]
  return;

vs.

   [local count: 72584963]:
  # vect_vec_iv_.6_50 = PHI <_51(5), { 15, 16, 17, 18, 19, 20, 21, 22, 23, 24,
25, 26, 27, 28, 29, 30 }(4)>
  # ivtmp_57 = PHI 
  # ivtmp.12_11 = PHI 
  loop_len_55 = MIN_EXPR ;
  _51 = vect_vec_iv_.6_50 + { 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
16, 16, 16, 16 };
  _5 = (void *) ivtmp.12_11;
  _14 =   [(signed char *)_5];
  .LEN_STORE (_14, 128B, loop_len_55, vect_vec_iv_.6_50, 0);
  ivtmp_58 = ivtmp_57 - loop_len_55;
  ivtmp.12_22 = ivtmp.12_11 + 16;
  if (ivtmp_58 != 0)
goto ; [75.00%]
  else
goto ; [25.00%]

It exposes something inefficient at -O2; it seems we can teach cunroll further
about this kind of new sequence.

If you meant to disable decrement IV on Power (but now actually enabled it
unexpectedly), then probably we can just keep it (not disabling it): for Power
we mainly adopt --param=vect-partial-vector-usage=1, which shouldn't be
affected; for --param=vect-partial-vector-usage=2, it does generate a better
code sequence for most cases and we can improve the remaining worse ones
gradually.

[Bug target/109971] [14 regression] Several powerpc64 vector test cases fail after r14-1242-gf574e2dfae7905

2023-05-25 Thread juzhe.zhong at rivai dot ai via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109971

--- Comment #4 from JuzheZhong  ---
(In reply to Kewen Lin from comment #3)
> I'll take a look first.

Thanks a lot. I am sorry for causing this issue for you.

[Bug target/109971] [14 regression] Several powerpc64 vector test cases fail after r14-1242-gf574e2dfae7905

2023-05-25 Thread linkw at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109971

Kewen Lin  changed:

   What|Removed |Added

   Assignee|unassigned at gcc dot gnu.org  |linkw at gcc dot gnu.org
 CC||linkw at gcc dot gnu.org

--- Comment #3 from Kewen Lin  ---
I'll take a look first.

[Bug target/109971] [14 regression] Several powerpc64 vector test cases fail after r14-1242-gf574e2dfae7905

2023-05-25 Thread juzhe.zhong at rivai dot ai via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109971

--- Comment #2 from JuzheZhong  ---
It seems this condition:

+  /* If we're vectorizing a loop that uses length "controls" and
+ can iterate more than once, we apply decrementing IV approach
+ in loop control.  */
+  if (LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo)
+  && !LOOP_VINFO_LENS (loop_vinfo).is_empty ()
+  && LOOP_VINFO_PARTIAL_LOAD_STORE_BIAS (loop_vinfo) == 0
+  && !(LOOP_VINFO_NITERS_KNOWN_P (loop_vinfo)
+  && known_le (LOOP_VINFO_INT_NITERS (loop_vinfo),
+   LOOP_VINFO_VECT_FACTOR (loop_vinfo
+LOOP_VINFO_USING_DECREMENTING_IV_P (loop_vinfo) = true;

did not disable decrement IV on PowerPC.
Sorry for creating this issue; I only tested on x86.
Should I add a target hook for decrement IV?
I am waiting for Richard's comments.
