Surya's recent patch for hard register propagation has caused
regressions on the RISC-V port in the various spill-* testcases. After
reviewing the newly generated code, it was clear the new code was worse.
The core problem is we have a copy insn that is not frame related (and
should not be frame related) and a use of the destination of the copy in
an insn that is frame related. Prior to Surya's change we could
propagate away the copy, but not anymore.
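To make that concrete, the prologue sequence in question looks roughly
like this (a sketch only; the actual register choices vary):

	csrr	t0,vlenb	# read vlenb, not frame related
	mv	t1,t0		# trivial copy, not frame related
	sub	sp,sp,t1	# allocate the frame, frame related

Previously regcprop would rewrite the sub to use t0 directly and
delete the mv; now the copy survives into the final code.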
Ideally we'd just avoid generating the copy entirely, but the structure
of the code to legitimize a poly_int isn't well suited for that. So
instead we have the code signal that it created a trivial copy and we
try to optimize the code after creation, but well before regcprop would
have run. That fixes the code quality aspect of the regression. In
fact, it looks like the code can at times be slightly better, but I
didn't track down the precise reason why we were able to re-use the read
of VLEN so much better than before.
The optimization step is pretty simple. When it's been signaled that a
copy was generated, look back one insn and change it from writing the
scratch register to writing the final destination instead.
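In other words, when the signal fires the tail of the expand-time
sequence looks like (registers again illustrative):

	csrr	t0,vlenb	# writes the scratch register (TMP)
	mv	t1,t0		# trivial copy TMP -> DEST

and we adjust it to:

	csrr	t1,vlenb	# now writes the final destination (DEST)
	mv	t1,t1		# the copy becomes a self-move nop,
				# left in the stream rather than deleted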
That triggers the need to generalize the testcases so that they don't
use specific registers. We can also see the CSR reads of the VLEN
register getting CSE'd more often in those testcases, so they're
adjusted for that change as well. There's some hope this will improve
spill code more generally -- I haven't really evaluated that, but I do
know that when we spill vector registers, the resulting code seems to
have a lot of redundant VLEN reads.
Anyway, bootstrapped and regression tested on riscv (BPI and Pioneer).
It's also been through rv32 and rv64 regression testing. It doesn't fix
all the regressions for RISC-V on the trunk because (of course)
something new got introduced this week ;(
Waiting on pre-commit testing to do its thing.
jeff
gcc/
* config/riscv/riscv.cc (riscv_expand_mult_with_const_int): Signal
when this creates a simple copy that may be optimized.
* (riscv_legitimize_poly_move): Try to optimize away any copy created
by riscv_expand_mult_with_const_int.
gcc/testsuite/
* gcc.target/riscv/rvv/base/spill-1.c: Update expected output.
* gcc.target/riscv/rvv/base/spill-2.c: Likewise.
* gcc.target/riscv/rvv/base/spill-3.c: Likewise.
* gcc.target/riscv/rvv/base/spill-4.c: Likewise.
* gcc.target/riscv/rvv/base/spill-5.c: Likewise.
* gcc.target/riscv/rvv/base/spill-6.c: Likewise.
* gcc.target/riscv/rvv/base/spill-7.c: Likewise.
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 96519c96a2b4..88711064e4fa 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -3497,14 +3497,14 @@ riscv_expand_op (enum rtx_code code, machine_mode mode, rtx op0, rtx op1,
/* Expand mult operation with constant integer, multiplicand also used as a
* temporary register. */
-static void
+static bool
riscv_expand_mult_with_const_int (machine_mode mode, rtx dest, rtx multiplicand,
HOST_WIDE_INT multiplier)
{
if (multiplier == 0)
{
riscv_emit_move (dest, GEN_INT (0));
- return;
+ return false;
}
bool neg_p = multiplier < 0;
@@ -3515,7 +3515,13 @@ riscv_expand_mult_with_const_int (machine_mode mode, rtx dest, rtx multiplicand,
if (neg_p)
riscv_expand_op (NEG, mode, dest, multiplicand, NULL_RTX);
else
- riscv_emit_move (dest, multiplicand);
+ {
+ riscv_emit_move (dest, multiplicand);
+
+ /* Signal to our caller that it should try to optimize away
+ the copy. */
+ return true;
+ }
}
else
{
@@ -3595,10 +3601,15 @@ riscv_expand_mult_with_const_int (machine_mode mode, rtx dest, rtx multiplicand,
riscv_expand_op (MULT, mode, dest, dest, multiplicand);
}
}
+ return false;
}
-/* Analyze src and emit const_poly_int mov sequence. */
+/* Analyze src and emit const_poly_int mov sequence.
+ Essentially we want to generate (set (dest) (src)), where SRC is
+ a poly_int. We may need TMP as a scratch register. We assume TMP
+ is truly a scratch register and need not have any particular value
+ after the sequence. */
void
riscv_legitimize_poly_move (machine_mode mode, rtx dest, rtx tmp, rtx src)
{
@@ -3655,8 +3666,43 @@ riscv_legitimize_poly_move (machine_mode mode, rtx dest, rtx tmp, rtx src)
riscv_expand_op (LSHIFTRT, mode, tmp, tmp,
gen_int_mode (exact_log2 (div_factor), QImode));
- riscv_expand_mult_with_const_int (mode, dest, tmp,
- factor / (vlenb / div_factor));
+ bool opt_seq
+ = riscv_expand_mult_with_const_int (mode, dest, tmp,
+ factor / (vlenb / div_factor));
+
+ /* Potentially try to optimize the sequence we've generated so far.
+ Essentially when OPT_SEQ is true, we should have a simple reg->reg
+ copy from TMP to DEST as the last insn in the sequence. Try to
+ back up one real insn and adjust it in that case.
+
+ This is important for frame setup/teardown with RVV since we can't
+ propagate away the copy as the copy is not frame related, but the
+ insn creating or destroying the frame is frame related. */
+ if (opt_seq)
+ {
+ rtx_insn *insn = get_last_insn ();
+ rtx set = single_set (insn);
+
+ /* Verify the last insn in the chain is a simple assignment from
+ TMP to DEST. */
+ gcc_assert (set);
+ gcc_assert (SET_SRC (set) == tmp);
+ gcc_assert (SET_DEST (set) == dest);
+
+ /* Now back up one real insn and see if it sets TMP; if so, adjust
+ it so that it sets DEST. */
+ rtx_insn *insn2 = prev_nonnote_nondebug_insn (insn);
+ rtx set2 = insn2 ? single_set (insn2) : NULL_RTX;
+ if (set2 && SET_DEST (set2) == tmp)
+ {
+ SET_DEST (set2) = dest;
+ tmp = dest;
+ /* Turn the copy insn into a NOP, but don't delete it. */
+ SET_SRC (set) = SET_DEST (set);
+ }
+
+ }
+
HOST_WIDE_INT constant = offset - factor;
if (constant == 0)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-1.c
index 3b11e5562d4a..141769cfe182 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-1.c
@@ -7,8 +7,8 @@
/*
** spill_1:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** ...
** csrr\t[a-x0-9]+,vlenb
** srli\t[a-x0-9]+,[a-x0-9]+,3
@@ -24,8 +24,8 @@
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vle8.v\tv[0-9]+,0\([a-x0-9]+\)
** vse8.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\tt0,vlenb
-** add\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** add\tsp,sp,[a-x0-9]+
** ...
** jr\tra
*/
@@ -39,8 +39,8 @@ spill_1 (int8_t *in, int8_t *out)
/*
** spill_2:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** vsetvli\ta5,zero,e8,mf4,ta,ma
** vle8.v\tv[0-9]+,0\(a0\)
** csrr\t[a-x0-9]+,vlenb
@@ -57,8 +57,8 @@ spill_1 (int8_t *in, int8_t *out)
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vle8.v\tv[0-9]+,0\([a-x0-9]+\)
** vse8.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\tt0,vlenb
-** add\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** add\tsp,sp,[a-x0-9]+
** ...
** jr\tra
*/
@@ -72,8 +72,8 @@ spill_2 (int8_t *in, int8_t *out)
/*
** spill_3:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** vsetvli\ta5,zero,e8,mf2,ta,ma
** vle8.v\tv[0-9]+,0\(a0\)
** csrr\t[a-x0-9]+,vlenb
@@ -81,13 +81,12 @@ spill_2 (int8_t *in, int8_t *out)
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vse8.v\tv[0-9]+,0\([a-x0-9]+\)
** ...
-** csrr\t[a-x0-9]+,vlenb
** srli\t[a-x0-9]+,[a-x0-9]+,1
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vle8.v\tv[0-9]+,0\([a-x0-9]+\)
** vse8.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\tt0,vlenb
-** add\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** add\tsp,sp,[a-x0-9]+
** ...
** jr\tra
*/
@@ -101,8 +100,8 @@ spill_3 (int8_t *in, int8_t *out)
/*
** spill_4:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** ...
** vs1r.v\tv[0-9]+,0\(sp\)
** ...
@@ -124,8 +123,8 @@ spill_4 (int8_t *in, int8_t *out)
/*
** spill_5:
-** csrr\tt0,vlenb
-** slli\tt1,t0,1
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,1
** sub\tsp,sp,t1
** ...
** vs2r.v\tv[0-9]+,0\(sp\)
@@ -148,8 +147,8 @@ spill_5 (int8_t *in, int8_t *out)
/*
** spill_6:
-** csrr\tt0,vlenb
-** slli\tt1,t0,2
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,2
** sub\tsp,sp,t1
** ...
** vs4r.v\tv[0-9]+,0\(sp\)
@@ -172,8 +171,8 @@ spill_6 (int8_t *in, int8_t *out)
/*
** spill_7:
-** csrr\tt0,vlenb
-** slli\tt1,t0,3
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,3
** sub\tsp,sp,t1
** ...
** vs8r.v\tv[0-9]+,0\(sp\)
@@ -196,8 +195,8 @@ spill_7 (int8_t *in, int8_t *out)
/*
** spill_8:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** vsetvli\ta5,zero,e8,mf8,ta,ma
** vle8.v\tv[0-9]+,0\(a0\)
** csrr\t[a-x0-9]+,vlenb
@@ -214,8 +213,8 @@ spill_7 (int8_t *in, int8_t *out)
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vle8.v\tv[0-9]+,0\([a-x0-9]+\)
** vse8.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\tt0,vlenb
-** add\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** add\tsp,sp,[a-x0-9]+
** ...
** jr\tra
*/
@@ -229,8 +228,8 @@ spill_8 (uint8_t *in, uint8_t *out)
/*
** spill_9:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** vsetvli\ta5,zero,e8,mf4,ta,ma
** vle8.v\tv[0-9]+,0\(a0\)
** csrr\t[a-x0-9]+,vlenb
@@ -247,8 +246,8 @@ spill_8 (uint8_t *in, uint8_t *out)
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vle8.v\tv[0-9]+,0\([a-x0-9]+\)
** vse8.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\tt0,vlenb
-** add\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** add\tsp,sp,[a-x0-9]+
** ...
** jr\tra
*/
@@ -262,8 +261,8 @@ spill_9 (uint8_t *in, uint8_t *out)
/*
** spill_10:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** vsetvli\ta5,zero,e8,mf2,ta,ma
** vle8.v\tv[0-9]+,0\(a0\)
** csrr\t[a-x0-9]+,vlenb
@@ -271,13 +270,12 @@ spill_9 (uint8_t *in, uint8_t *out)
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vse8.v\tv[0-9]+,0\([a-x0-9]+\)
** ...
-** csrr\t[a-x0-9]+,vlenb
** srli\t[a-x0-9]+,[a-x0-9]+,1
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vle8.v\tv[0-9]+,0\([a-x0-9]+\)
** vse8.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\tt0,vlenb
-** add\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** add\tsp,sp,[a-x0-9]+
** ...
** jr\tra
*/
@@ -291,8 +289,8 @@ spill_10 (uint8_t *in, uint8_t *out)
/*
** spill_11:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** ...
** vs1r.v\tv[0-9]+,0\(sp\)
** ...
@@ -314,8 +312,8 @@ spill_11 (uint8_t *in, uint8_t *out)
/*
** spill_12:
-** csrr\tt0,vlenb
-** slli\tt1,t0,1
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,1
** sub\tsp,sp,t1
** ...
** vs2r.v\tv[0-9]+,0\(sp\)
@@ -338,8 +336,8 @@ spill_12 (uint8_t *in, uint8_t *out)
/*
** spill_13:
-** csrr\tt0,vlenb
-** slli\tt1,t0,2
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,2
** sub\tsp,sp,t1
** ...
** vs4r.v\tv[0-9]+,0\(sp\)
@@ -362,8 +360,8 @@ spill_13 (uint8_t *in, uint8_t *out)
/*
** spill_14:
-** csrr\tt0,vlenb
-** slli\tt1,t0,3
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,3
** sub\tsp,sp,t1
** ...
** vs8r.v\tv[0-9]+,0\(sp\)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-2.c
index 567aa56d9826..5c44cc3051c3 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-2.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-2.c
@@ -7,8 +7,8 @@
/*
** spill_2:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** vsetvli\ta5,zero,e16,mf4,ta,ma
** vle16.v\tv[0-9]+,0\(a0\)
** csrr\t[a-x0-9]+,vlenb
@@ -25,8 +25,8 @@
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vle16.v\tv[0-9]+,0\([a-x0-9]+\)
** vse16.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\tt0,vlenb
-** add\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** add\tsp,sp,[a-x0-9]+
** ...
** jr\tra
*/
@@ -40,8 +40,8 @@ spill_2 (int16_t *in, int16_t *out)
/*
** spill_3:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** vsetvli\ta5,zero,e16,mf2,ta,ma
** vle16.v\tv[0-9]+,0\(a0\)
** csrr\t[a-x0-9]+,vlenb
@@ -49,13 +49,12 @@ spill_2 (int16_t *in, int16_t *out)
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vse16.v\tv[0-9]+,0\([a-x0-9]+\)
** ...
-** csrr\t[a-x0-9]+,vlenb
** srli\t[a-x0-9]+,[a-x0-9]+,1
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vle16.v\tv[0-9]+,0\([a-x0-9]+\)
** vse16.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\tt0,vlenb
-** add\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** add\tsp,sp,[a-x0-9]+
** ...
** jr\tra
*/
@@ -69,8 +68,8 @@ spill_3 (int16_t *in, int16_t *out)
/*
** spill_4:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** ...
** vs1r.v\tv[0-9]+,0\(sp\)
** ...
@@ -92,8 +91,8 @@ spill_4 (int16_t *in, int16_t *out)
/*
** spill_5:
-** csrr\tt0,vlenb
-** slli\tt1,t0,1
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,1
** sub\tsp,sp,t1
** ...
** vs2r.v\tv[0-9]+,0\(sp\)
@@ -116,8 +115,8 @@ spill_5 (int16_t *in, int16_t *out)
/*
** spill_6:
-** csrr\tt0,vlenb
-** slli\tt1,t0,2
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,2
** sub\tsp,sp,t1
** ...
** vs4r.v\tv[0-9]+,0\(sp\)
@@ -140,8 +139,8 @@ spill_6 (int16_t *in, int16_t *out)
/*
** spill_7:
-** csrr\tt0,vlenb
-** slli\tt1,t0,3
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,3
** sub\tsp,sp,t1
** ...
** vs8r.v\tv[0-9]+,0\(sp\)
@@ -164,8 +163,8 @@ spill_7 (int16_t *in, int16_t *out)
/*
** spill_9:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** vsetvli\ta5,zero,e16,mf4,ta,ma
** vle16.v\tv[0-9]+,0\(a0\)
** csrr\t[a-x0-9]+,vlenb
@@ -182,8 +181,8 @@ spill_7 (int16_t *in, int16_t *out)
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vle16.v\tv[0-9]+,0\([a-x0-9]+\)
** vse16.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\tt0,vlenb
-** add\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** add\tsp,sp,[a-x0-9]+
** ...
** jr\tra
*/
@@ -197,8 +196,8 @@ spill_9 (uint16_t *in, uint16_t *out)
/*
** spill_10:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** vsetvli\ta5,zero,e16,mf2,ta,ma
** vle16.v\tv[0-9]+,0\(a0\)
** csrr\t[a-x0-9]+,vlenb
@@ -206,13 +205,12 @@ spill_9 (uint16_t *in, uint16_t *out)
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vse16.v\tv[0-9]+,0\([a-x0-9]+\)
** ...
-** csrr\t[a-x0-9]+,vlenb
** srli\t[a-x0-9]+,[a-x0-9]+,1
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vle16.v\tv[0-9]+,0\([a-x0-9]+\)
** vse16.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\tt0,vlenb
-** add\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** add\tsp,sp,[a-x0-9]+
** ...
** jr\tra
*/
@@ -226,8 +224,8 @@ spill_10 (uint16_t *in, uint16_t *out)
/*
** spill_11:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** ...
** vs1r.v\tv[0-9]+,0\(sp\)
** ...
@@ -249,8 +247,8 @@ spill_11 (uint16_t *in, uint16_t *out)
/*
** spill_12:
-** csrr\tt0,vlenb
-** slli\tt1,t0,1
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,1
** sub\tsp,sp,t1
** ...
** vs2r.v\tv[0-9]+,0\(sp\)
@@ -273,8 +271,8 @@ spill_12 (uint16_t *in, uint16_t *out)
/*
** spill_13:
-** csrr\tt0,vlenb
-** slli\tt1,t0,2
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,2
** sub\tsp,sp,t1
** ...
** vs4r.v\tv[0-9]+,0\(sp\)
@@ -297,8 +295,8 @@ spill_13 (uint16_t *in, uint16_t *out)
/*
** spill_14:
-** csrr\tt0,vlenb
-** slli\tt1,t0,3
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,3
** sub\tsp,sp,t1
** ...
** vs8r.v\tv[0-9]+,0\(sp\)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-3.c
index 2c1213b0f78a..d6fa3a630cd4 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-3.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-3.c
@@ -7,8 +7,8 @@
/*
** spill_3:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** vsetvli\ta5,zero,e32,mf2,ta,ma
** vle32.v\tv[0-9]+,0\(a0\)
** csrr\t[a-x0-9]+,vlenb
@@ -16,13 +16,12 @@
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vse32.v\tv[0-9]+,0\([a-x0-9]+\)
** ...
-** csrr\t[a-x0-9]+,vlenb
** srli\t[a-x0-9]+,[a-x0-9]+,1
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vle32.v\tv[0-9]+,0\([a-x0-9]+\)
** vse32.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\tt0,vlenb
-** add\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** add\tsp,sp,[a-x0-9]+
** ...
** jr\tra
*/
@@ -36,8 +35,8 @@ spill_3 (int32_t *in, int32_t *out)
/*
** spill_4:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** ...
** vs1r.v\tv[0-9]+,0\(sp\)
** ...
@@ -59,8 +58,8 @@ spill_4 (int32_t *in, int32_t *out)
/*
** spill_5:
-** csrr\tt0,vlenb
-** slli\tt1,t0,1
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,1
** sub\tsp,sp,t1
** ...
** vs2r.v\tv[0-9]+,0\(sp\)
@@ -83,8 +82,8 @@ spill_5 (int32_t *in, int32_t *out)
/*
** spill_6:
-** csrr\tt0,vlenb
-** slli\tt1,t0,2
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,2
** sub\tsp,sp,t1
** ...
** vs4r.v\tv[0-9]+,0\(sp\)
@@ -107,8 +106,8 @@ spill_6 (int32_t *in, int32_t *out)
/*
** spill_7:
-** csrr\tt0,vlenb
-** slli\tt1,t0,3
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,3
** sub\tsp,sp,t1
** ...
** vs8r.v\tv[0-9]+,0\(sp\)
@@ -131,8 +130,8 @@ spill_7 (int32_t *in, int32_t *out)
/*
** spill_10:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** vsetvli\ta5,zero,e32,mf2,ta,ma
** vle32.v\tv[0-9]+,0\(a0\)
** csrr\t[a-x0-9]+,vlenb
@@ -140,13 +139,12 @@ spill_7 (int32_t *in, int32_t *out)
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vse32.v\tv[0-9]+,0\([a-x0-9]+\)
** ...
-** csrr\t[a-x0-9]+,vlenb
** srli\t[a-x0-9]+,[a-x0-9]+,1
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vle32.v\tv[0-9]+,0\([a-x0-9]+\)
** vse32.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\tt0,vlenb
-** add\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** add\tsp,sp,[a-x0-9]+
** ...
** jr\tra
*/
@@ -160,8 +158,8 @@ spill_10 (uint32_t *in, uint32_t *out)
/*
** spill_11:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** ...
** vs1r.v\tv[0-9]+,0\(sp\)
** ...
@@ -183,8 +181,8 @@ spill_11 (uint32_t *in, uint32_t *out)
/*
** spill_12:
-** csrr\tt0,vlenb
-** slli\tt1,t0,1
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,1
** sub\tsp,sp,t1
** ...
** vs2r.v\tv[0-9]+,0\(sp\)
@@ -207,8 +205,8 @@ spill_12 (uint32_t *in, uint32_t *out)
/*
** spill_13:
-** csrr\tt0,vlenb
-** slli\tt1,t0,2
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,2
** sub\tsp,sp,t1
** ...
** vs4r.v\tv[0-9]+,0\(sp\)
@@ -231,8 +229,8 @@ spill_13 (uint32_t *in, uint32_t *out)
/*
** spill_14:
-** csrr\tt0,vlenb
-** slli\tt1,t0,3
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,3
** sub\tsp,sp,t1
** ...
** vs8r.v\tv[0-9]+,0\(sp\)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-4.c b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-4.c
index ad7592f30bc7..f636f2d9cc3b 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-4.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-4.c
@@ -7,8 +7,8 @@
/*
** spill_4:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** ...
** vs1r.v\tv[0-9]+,0\(sp\)
** ...
@@ -30,8 +30,8 @@ spill_4 (int64_t *in, int64_t *out)
/*
** spill_5:
-** csrr\tt0,vlenb
-** slli\tt1,t0,1
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,1
** sub\tsp,sp,t1
** ...
** vs2r.v\tv[0-9]+,0\(sp\)
@@ -54,8 +54,8 @@ spill_5 (int64_t *in, int64_t *out)
/*
** spill_6:
-** csrr\tt0,vlenb
-** slli\tt1,t0,2
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,2
** sub\tsp,sp,t1
** ...
** vs4r.v\tv[0-9]+,0\(sp\)
@@ -78,8 +78,8 @@ spill_6 (int64_t *in, int64_t *out)
/*
** spill_7:
-** csrr\tt0,vlenb
-** slli\tt1,t0,3
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,3
** sub\tsp,sp,t1
** ...
** vs8r.v\tv[0-9]+,0\(sp\)
@@ -102,8 +102,8 @@ spill_7 (int64_t *in, int64_t *out)
/*
** spill_11:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** ...
** vs1r.v\tv[0-9]+,0\(sp\)
** ...
@@ -125,8 +125,8 @@ spill_11 (uint64_t *in, uint64_t *out)
/*
** spill_12:
-** csrr\tt0,vlenb
-** slli\tt1,t0,1
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,1
** sub\tsp,sp,t1
** ...
** vs2r.v\tv[0-9]+,0\(sp\)
@@ -149,8 +149,8 @@ spill_12 (uint64_t *in, uint64_t *out)
/*
** spill_13:
-** csrr\tt0,vlenb
-** slli\tt1,t0,2
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,2
** sub\tsp,sp,t1
** ...
** vs4r.v\tv[0-9]+,0\(sp\)
@@ -173,8 +173,8 @@ spill_13 (uint64_t *in, uint64_t *out)
/*
** spill_14:
-** csrr\tt0,vlenb
-** slli\tt1,t0,3
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,3
** sub\tsp,sp,t1
** ...
** vs8r.v\tv[0-9]+,0\(sp\)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-5.c b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-5.c
index a6874067e762..2324a91dd50f 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-5.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-5.c
@@ -7,8 +7,8 @@
/*
** spill_3:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** vsetvli\ta5,zero,e32,mf2,ta,ma
** vle32.v\tv[0-9]+,0\(a0\)
** csrr\t[a-x0-9]+,vlenb
@@ -16,13 +16,12 @@
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vse32.v\tv[0-9]+,0\([a-x0-9]+\)
** ...
-** csrr\t[a-x0-9]+,vlenb
** srli\t[a-x0-9]+,[a-x0-9]+,1
** add\t[a-x0-9]+,[a-x0-9]+,sp
** vle32.v\tv[0-9]+,0\([a-x0-9]+\)
** vse32.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\tt0,vlenb
-** add\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** add\tsp,sp,[a-x0-9]+
** ...
** jr\tra
*/
@@ -36,8 +35,8 @@ spill_3 (float *in, float *out)
/*
** spill_4:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** ...
** vs1r.v\tv[0-9]+,0\(sp\)
** ...
@@ -59,8 +58,8 @@ spill_4 (float *in, float *out)
/*
** spill_5:
-** csrr\tt0,vlenb
-** slli\tt1,t0,1
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,1
** sub\tsp,sp,t1
** ...
** vs2r.v\tv[0-9]+,0\(sp\)
@@ -83,8 +82,8 @@ spill_5 (float *in, float *out)
/*
** spill_6:
-** csrr\tt0,vlenb
-** slli\tt1,t0,2
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,2
** sub\tsp,sp,t1
** ...
** vs4r.v\tv[0-9]+,0\(sp\)
@@ -107,8 +106,8 @@ spill_6 (float *in, float *out)
/*
** spill_7:
-** csrr\tt0,vlenb
-** slli\tt1,t0,3
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,3
** sub\tsp,sp,t1
** ...
** vs8r.v\tv[0-9]+,0\(sp\)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-6.c b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-6.c
index 07eee61baa37..a6f23372139d 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-6.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-6.c
@@ -7,8 +7,8 @@
/*
** spill_4:
-** csrr\tt0,vlenb
-** sub\tsp,sp,t0
+** csrr\t[a-x0-9]+,vlenb
+** sub\tsp,sp,[a-x0-9]+
** ...
** vs1r.v\tv[0-9]+,0\(sp\)
** ...
@@ -30,8 +30,8 @@ spill_4 (double *in, double *out)
/*
** spill_5:
-** csrr\tt0,vlenb
-** slli\tt1,t0,1
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,1
** sub\tsp,sp,t1
** ...
** vs2r.v\tv[0-9]+,0\(sp\)
@@ -54,8 +54,8 @@ spill_5 (double *in, double *out)
/*
** spill_6:
-** csrr\tt0,vlenb
-** slli\tt1,t0,2
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,2
** sub\tsp,sp,t1
** ...
** vs4r.v\tv[0-9]+,0\(sp\)
@@ -78,8 +78,8 @@ spill_6 (double *in, double *out)
/*
** spill_7:
-** csrr\tt0,vlenb
-** slli\tt1,t0,3
+** csrr\t[a-x0-9]+,vlenb
+** slli\tt1,[a-x0-9]+,3
** sub\tsp,sp,t1
** ...
** vs8r.v\tv[0-9]+,0\(sp\)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-7.c b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-7.c
index 2bc54557deec..865a95a0fb97 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/spill-7.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/spill-7.c
@@ -19,31 +19,26 @@
** addi\t[a-x0-9]+,[a-x0-9]+,1
** vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
** vle8.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\t[a-x0-9]+,vlenb
** srli\t[a-x0-9]+,[a-x0-9]+,2
** add\t[a-x0-9]+,[a-x0-9]+,[a-x0-9]+
** vse8.v\tv[0-9]+,0\([a-x0-9]+\)
** addi\t[a-x0-9]+,[a-x0-9]+,2
** vsetvli\t[a-x0-9]+,zero,e8,mf2,ta,ma
** vle8.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\t[a-x0-9]+,vlenb
** srli\t[a-x0-9]+,[a-x0-9]+,1
** add\t[a-x0-9]+,[a-x0-9]+,[a-x0-9]+
** vse8.v\tv[0-9]+,0\([a-x0-9]+\)
** addi\t[a-x0-9]+,[a-x0-9]+,3
** vl1re8.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\t[a-x0-9]+,vlenb
** add\t[a-x0-9]+,[a-x0-9]+,[a-x0-9]+
** vs1r.v\tv[0-9]+,0\([a-x0-9]+\)
** addi\t[a-x0-9]+,[a-x0-9]+,4
** vl2re8.v\tv[0-9]+,0\([a-x0-9]+\)
-** csrr\t[a-x0-9]+,vlenb
** slli\t[a-x0-9]+,[a-x0-9]+,1
** add\t[a-x0-9]+,[a-x0-9]+,[a-x0-9]+
** vs2r.v\tv[0-9]+,0\([a-x0-9]+\)
** addi\t[a-x0-9]+,[a-x0-9]+,5
** vl4re8.v\tv[0-9]+,0\([a-x0-9]+\)
-** mv\t[a-x0-9]+,[a-x0-9]+
** slli\t[a-x0-9]+,[a-x0-9]+,2
** add\t[a-x0-9]+,[a-x0-9]+,[a-x0-9]+
** vs4r.v\tv[0-9]+,0\([a-x0-9]+\)
@@ -66,8 +61,6 @@
** vle8.v\tv[0-9]+,0\([a-x0-9]+\)
** vse8.v\tv[0-9]+,0\([a-x0-9]+\)
** addi\t[a-x0-9]+,[a-x0-9]+,2
-** srli\t[a-x0-9]+,[a-x0-9]+,1
-** add\t[a-x0-9]+,[a-x0-9]+,[a-x0-9]+
** ...
** vle8.v\tv[0-9]+,0\([a-x0-9]+\)
** vse8.v\tv[0-9]+,0\([a-x0-9]+\)