Re: [PATCH] Extend 64-bit vector bit_op patterns with ?r alternative

2022-07-14 Thread Uros Bizjak via Gcc-patches
On Thu, Jul 14, 2022 at 11:32 AM Hongtao Liu  wrote:
>
> On Thu, Jul 14, 2022 at 3:22 PM Uros Bizjak via Gcc-patches
>  wrote:
> >
> > On Thu, Jul 14, 2022 at 7:33 AM liuhongt  wrote:
> > >
> > > And split it to GPR-version instruction after reload.
> > >
> > > > ?r was introduced under the assumption that we want vector values
> > > > mostly in vector registers. Currently there are no instructions with
> > > > memory or immediate operand, so that made sense at the time. Let's
> > > > keep ?r until logic instructions with mem/imm operands are introduced.
> > > > So, for the patch that adds 64-bit vector logic in GPR, I would advise
> > > > to first introduce only register operands. mem/imm operands should be
> > > Update patch to add ?r to 64-bit bit_op patterns.
> > >
> > > Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
> > No big impact on SPEC2017 (most binaries are identical).
> >
> > The problem with your approach is in the combine pass: combine first
> > tries to recognize the combined instruction without a clobber before
> > re-recognizing the instruction with an added clobber. So, if a forward
> > propagation happens, combine will *always* choose the insn variant
> > without the GPR alternative.
> Thank you for the explanation; I was not aware of that.
> >
> > So, the solution with VI_16_32 is to always expand with a clobbered
> > version that is split to either SImode or V16QImode. With 64-bit
> > instructions, we have two additional complications. First, we have a
> > native MMX instruction, and we have to split to it after reload, and
> > second, we have a builtin that expects the vector insn.
> >
> > To solve the first issue, we should change the mode of
> > "*mmx_<code><mode>3" to V1DImode and split your new _gpr version with
> > clobber to it for !GENERAL_REG_P operands.
> >
> > The second issue could be solved by emitting V1DImode instructions
> > directly from the expander. Please note there are several expanders
> > that expect a non-clobbered logic insn in a certain mode to be available,
> > so the situation can become quite annoying...
> Yes. It looks like it would add a lot of code complexity, so I'll hold
> the patch for now.

I did some experimenting in the past with the idea of adding GPR
instructions to 64-bit vectors. While there were some opportunities
with 32- and 16-bit operations, mostly due to the fact that these
arguments are passed via integer registers, 64-bit cases never
triggered, because 64-bit vectors are passed via XMM registers. Also,
when mem/imm alternatives were added, many inter-unit moves were
generated for anything but the simplest testcases involving logic
operations, also given the limited range of 64-bit immediates.
IMO, the only case worth adding is a direct immediate store to
memory, which HJ recently added.

Uros.


Re: [PATCH] Extend 64-bit vector bit_op patterns with ?r alternative

2022-07-14 Thread Hongtao Liu via Gcc-patches
On Thu, Jul 14, 2022 at 3:22 PM Uros Bizjak via Gcc-patches
 wrote:
>
> On Thu, Jul 14, 2022 at 7:33 AM liuhongt  wrote:
> >
> > And split it to GPR-version instruction after reload.
> >
> > > ?r was introduced under the assumption that we want vector values
> > > mostly in vector registers. Currently there are no instructions with
> > > memory or immediate operand, so that made sense at the time. Let's
> > > keep ?r until logic instructions with mem/imm operands are introduced.
> > > So, for the patch that adds 64-bit vector logic in GPR, I would advise
> > > to first introduce only register operands. mem/imm operands should be
> > Update patch to add ?r to 64-bit bit_op patterns.
> >
> > Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
> > No big impact on SPEC2017 (most binaries are identical).
>
> The problem with your approach is in the combine pass: combine first
> tries to recognize the combined instruction without a clobber before
> re-recognizing the instruction with an added clobber. So, if a forward
> propagation happens, combine will *always* choose the insn variant
> without the GPR alternative.
Thank you for the explanation; I was not aware of that.
>
> So, the solution with VI_16_32 is to always expand with a clobbered
> version that is split to either SImode or V16QImode. With 64-bit
> instructions, we have two additional complications. First, we have a
> native MMX instruction, and we have to split to it after reload, and
> second, we have a builtin that expects the vector insn.
>
> To solve the first issue, we should change the mode of
> "*mmx_<code><mode>3" to V1DImode and split your new _gpr version with
> clobber to it for !GENERAL_REG_P operands.
>
> The second issue could be solved by emitting V1DImode instructions
> directly from the expander. Please note there are several expanders
> that expect a non-clobbered logic insn in a certain mode to be available,
> so the situation can become quite annoying...
Yes. It looks like it would add a lot of code complexity, so I'll hold
the patch for now.
>
> Uros.



-- 
BR,
Hongtao


Re: [PATCH] Extend 64-bit vector bit_op patterns with ?r alternative

2022-07-14 Thread Uros Bizjak via Gcc-patches
On Thu, Jul 14, 2022 at 7:33 AM liuhongt  wrote:
>
> And split it to GPR-version instruction after reload.
>
> > ?r was introduced under the assumption that we want vector values
> > mostly in vector registers. Currently there are no instructions with
> > memory or immediate operand, so that made sense at the time. Let's
> > keep ?r until logic instructions with mem/imm operands are introduced.
> > So, for the patch that adds 64-bit vector logic in GPR, I would advise
> > to first introduce only register operands. mem/imm operands should be
> Update patch to add ?r to 64-bit bit_op patterns.
>
> Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
No big impact on SPEC2017 (most binaries are identical).

The problem with your approach is in the combine pass: combine first
tries to recognize the combined instruction without a clobber before
re-recognizing the instruction with an added clobber. So, if a forward
propagation happens, combine will *always* choose the insn variant
without the GPR alternative.
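
To illustrate (a hypothetical sketch, not patterns from the actual
patch — names, alternatives and elided bodies are illustrative only):

```lisp
;; Hypothetical sketch: two variants of the same V8QI logic operation.
(define_insn "*sketch_<code>v8qi3"          ; no clobber, vector regs only
  [(set (match_operand:V8QI 0 "register_operand" "=y,x")
	(any_logic:V8QI (match_operand:V8QI 1 "register_operand" "%0,0")
			(match_operand:V8QI 2 "register_operand" "y,x")))]
  ...)                                      ; condition/template/attrs elided

(define_insn "*sketch_<code>v8qi3_gpr"      ; flags clobber, adds ?r
  [(set (match_operand:V8QI 0 "register_operand" "=?r,y,x")
	(any_logic:V8QI (match_operand:V8QI 1 "register_operand" "%r,0,0")
			(match_operand:V8QI 2 "register_operand" "r,y,x")))
   (clobber (reg:CC FLAGS_REG))]
  ...)                                      ; condition/template/attrs elided

;; When combine merges insns into (set (reg:V8QI d) (and:V8QI a b)),
;; recog_for_combine first tries the pattern as-is and matches the
;; clobber-less variant; a clobber is only added when that fails, so the
;; _gpr variant (and its ?r alternative) is never chosen here.
```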

So, the solution with VI_16_32 is to always expand with a clobbered
version that is split to either SImode or V16QImode. With 64-bit
instructions, we have two additional complications. First, we have a
native MMX instruction, and we have to split to it after reload, and
second, we have a builtin that expects the vector insn.
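
The VI_16_32 approach described above can be sketched roughly as
follows (hypothetical and simplified; this is not the actual mmx.md
code):

```lisp
;; Always expand to the clobbered form, so combine only ever sees one
;; pattern; a post-reload splitter then resolves each alternative to
;; either the GPR (HImode/SImode) or the vector (V16QImode) sequence.
(define_expand "<code><mode>3"
  [(parallel
     [(set (match_operand:VI_16_32 0 "register_operand")
	   (any_logic:VI_16_32
	     (match_operand:VI_16_32 1 "register_operand")
	     (match_operand:VI_16_32 2 "register_operand")))
      (clobber (reg:CC FLAGS_REG))])]
  ""
  "")
```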

To solve the first issue, we should change the mode of
"*mmx_<code><mode>3" to V1DImode and split your new _gpr version with
clobber to it for !GENERAL_REG_P operands.
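
A rough sketch of that split (hypothetical; the condition and details
are illustrative only):

```lisp
;; After reload, if the destination did not end up in a GPR, drop the
;; flags clobber and fall through to the native MMX (vector) pattern.
(define_split
  [(set (match_operand:MMXMODEI 0 "register_operand")
	(any_logic:MMXMODEI
	  (match_operand:MMXMODEI 1 "register_operand")
	  (match_operand:MMXMODEI 2 "register_operand")))
   (clobber (reg:CC FLAGS_REG))]
  "reload_completed && !GENERAL_REGNO_P (REGNO (operands[0]))"
  [(set (match_dup 0)
	(any_logic:<MODE> (match_dup 1) (match_dup 2)))])
```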

The second issue could be solved by emitting V1DImode instructions
directly from the expander. Please note there are several expanders
that expect a non-clobbered logic insn in a certain mode to be available,
so the situation can become quite annoying...

Uros.


[PATCH] Extend 64-bit vector bit_op patterns with ?r alternative

2022-07-13 Thread liuhongt via Gcc-patches
And split it to GPR-version instruction after reload.

> ?r was introduced under the assumption that we want vector values
> mostly in vector registers. Currently there are no instructions with
> memory or immediate operand, so that made sense at the time. Let's
> keep ?r until logic instructions with mem/imm operands are introduced.
> So, for the patch that adds 64-bit vector logic in GPR, I would advise
> to first introduce only register operands. mem/imm operands should be
Update patch to add ?r to 64-bit bit_op patterns.

Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
No big impact on SPEC2017 (most binaries are identical).

Ok for trunk?

gcc/ChangeLog:

PR target/106038
* config/i386/mmx.md (<code><mode>3): Expand
with (clobber (reg:CC FLAGS_REG)) under TARGET_64BIT.
(mmx_<code><mode>3): Ditto.
(*mmx_<code><mode>3_gpr): New define_insn, add post_reload
splitter after it.
(mmx_andnot<mode>3_gpr): Ditto.
(<code><mode>3): Extend the following define_split from VI_16_32 to
VI_16_32_64.
(*andnot<mode>3): Ditto.
(mmxinsnmode): New mode attribute.
(VI_16_32_64): New mode iterator.
(*mov<mode>_imm): Refactor with mmxinsnmode.
* config/i386/predicates.md

gcc/testsuite/ChangeLog:

* gcc.target/i386/pr106038-1.c: New test.
* gcc.target/i386/pr106038-2.c: New test.
* gcc.target/i386/pr106038-3.c: New test.
---
 gcc/config/i386/mmx.md | 131 +++--
 gcc/testsuite/gcc.target/i386/pr106038-1.c |  61 ++
 gcc/testsuite/gcc.target/i386/pr106038-2.c |  35 ++
 gcc/testsuite/gcc.target/i386/pr106038-3.c |  17 +++
 4 files changed, 210 insertions(+), 34 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/i386/pr106038-1.c
 create mode 100644 gcc/testsuite/gcc.target/i386/pr106038-2.c
 create mode 100644 gcc/testsuite/gcc.target/i386/pr106038-3.c

diff --git a/gcc/config/i386/mmx.md b/gcc/config/i386/mmx.md
index 3294c1e6274..5f7e40bd7a1 100644
--- a/gcc/config/i386/mmx.md
+++ b/gcc/config/i386/mmx.md
@@ -75,6 +75,11 @@ (define_mode_iterator V_16_32_64
 (V8QI "TARGET_64BIT") (V4HI "TARGET_64BIT") (V4HF "TARGET_64BIT")
 (V2SI "TARGET_64BIT") (V2SF "TARGET_64BIT")])
 
+(define_mode_iterator VI_16_32_64
+   [V2QI V4QI V2HI
+(V8QI "TARGET_64BIT") (V4HI "TARGET_64BIT")
+(V2SI "TARGET_64BIT")])
+
 ;; V2S* modes
 (define_mode_iterator V2FI [V2SF V2SI])
 
@@ -86,6 +91,14 @@ (define_mode_attr mmxvecsize
   [(V8QI "b") (V4QI "b") (V2QI "b")
(V4HI "w") (V2HI "w") (V2SI "d") (V1DI "q")])
 
+;; Mapping to same size integral mode.
+(define_mode_attr mmxinsnmode
+  [(V8QI "DI") (V4QI "SI") (V2QI "HI")
+   (V4HI "DI") (V2HI "SI")
+   (V2SI "DI")
+   (V4HF "DI") (V2HF "SI")
+   (V2SF "DI")])
+
 (define_mode_attr mmxdoublemode
   [(V8QI "V8HI") (V4HI "V4SI")])
 
@@ -350,22 +363,7 @@ (define_insn_and_split "*mov<mode>_imm"
   HOST_WIDE_INT val = ix86_convert_const_vector_to_integer (operands[1],
							    <MODE>mode);
   operands[1] = GEN_INT (val);
-  machine_mode mode;
-  switch (GET_MODE_SIZE (<MODE>mode))
-{
-case 2:
-  mode = HImode;
-  break;
-case 4:
-  mode = SImode;
-  break;
-case 8:
-  mode = DImode;
-  break;
-default:
-  gcc_unreachable ();
-}
-  operands[0] = lowpart_subreg (mode, operands[0], <MODE>mode);
+  operands[0] = lowpart_subreg (<mmxinsnmode>mode, operands[0], <MODE>mode);
 })
 
 ;; For TARGET_64BIT we always round up to 8 bytes.
@@ -2878,6 +2876,31 @@ (define_insn "mmx_andnot<mode>3"
(set_attr "type" "mmxadd,sselog,sselog,sselog")
(set_attr "mode" "DI,TI,TI,TI")])
 
+(define_insn "mmx_andnot<mode>3_gpr"
+  [(set (match_operand:MMXMODEI 0 "register_operand" "=?r,y,x,x,v")
+   (and:MMXMODEI
+	 (not:MMXMODEI (match_operand:MMXMODEI 1 "register_operand" "r,0,0,x,v"))
+ (match_operand:MMXMODEI 2 "register_mmxmem_operand" "r,ym,x,x,v")))
+   (clobber (reg:CC FLAGS_REG))]
+  "TARGET_64BIT && (TARGET_MMX || TARGET_SSE2)"
+  "#"
+  [(set_attr "isa" "bmi,*,sse2_noavx,avx,avx512vl")
+   (set_attr "mmx_isa" "*,native,*,*,*")
+   (set_attr "type" "alu,mmxadd,sselog,sselog,sselog")
+   (set_attr "mode" "DI,DI,TI,TI,TI")])
+
+(define_split
+  [(set (match_operand:MMXMODEI 0 "register_operand")
+   (and:MMXMODEI
+ (not:MMXMODEI (match_operand:MMXMODEI 1 "register_mmxmem_operand"))
+ (match_operand:MMXMODEI 2 "register_mmxmem_operand")))
+   (clobber (reg:CC FLAGS_REG))]
+  "reload_completed
+   && (TARGET_MMX || TARGET_MMX_WITH_SSE)
+   && !GENERAL_REGNO_P (REGNO (operands[0]))"
+  [(set (match_dup 0)
+	(and:<MODE> (not:<MODE> (match_dup 1)) (match_dup 2)))])
+
(define_insn "*andnot<mode>3"
   [(set (match_operand:VI_16_32 0 "register_operand" "=?&r,?r,x,x,v")
 (and:VI_16_32
@@ -2892,20 +2915,20 @@ (define_insn "*andnot<mode>3"
(set_attr "mode" "SI,SI,TI,TI,TI")])
 
 (define_split
-  [(set (match_operand:VI_16_32 0 "general_reg_operand")
-(and:VI_16_32
- (not:VI_16_32 (match_operand:VI_16_32 1