Re: [PATCH v6 09/18] target/riscv: accessors to registers upper part and 128-bit load/store

On Mon, Nov 29, 2021 at 12:03 AM Frédéric Pétrot wrote:
>
> Get function to retrieve the top 64 bits of a register, stored in the
> gprh field of the cpu state. Set function that writes the 128-bit value
> at once. The access to the gprh field cannot be protected at compile
> time to make sure it is accessed only in the 128-bit version of the
> processor, because we have no way to indicate that the misa_mxl_max
> field is const.
>
> The 128-bit ISA adds ldu, lq and sq. We provide support for these
> instructions. Note that (a) we compute only 64-bit addresses to
> actually access memory, cowardly utilizing the existing address
> translation mechanism of QEMU, and (b) we assume for now little-endian
> memory accesses.
>
> Signed-off-by: Frédéric Pétrot
> Co-authored-by: Fabien Portas

Reviewed-by: Alistair Francis

Alistair

> ---
>  target/riscv/insn16.decode              |  27 ++-
>  target/riscv/insn32.decode              |   5 ++
>  target/riscv/translate.c                |  41 ++
>  target/riscv/insn_trans/trans_rvi.c.inc | 100 ++--
>  4 files changed, 163 insertions(+), 10 deletions(-)
>
> diff --git a/target/riscv/insn16.decode b/target/riscv/insn16.decode
> index 2e9212663c..02c8f61b48 100644
> --- a/target/riscv/insn16.decode
> +++ b/target/riscv/insn16.decode
> @@ -25,14 +25,17 @@
>  # Immediates:
>  %imm_ci        12:s1 2:5
>  %nzuimm_ciw    7:4 11:2 5:1 6:1       !function=ex_shift_2
> +%uimm_cl_q     10:1 5:2 11:2          !function=ex_shift_4
>  %uimm_cl_d     5:2 10:3               !function=ex_shift_3
>  %uimm_cl_w     5:1 10:3 6:1           !function=ex_shift_2
>  %imm_cb        12:s1 5:2 2:1 10:2 3:2 !function=ex_shift_1
>  %imm_cj        12:s1 8:1 9:2 6:1 7:1 2:1 11:1 3:3 !function=ex_shift_1
>
>  %shimm_6bit    12:1 2:5               !function=ex_rvc_shifti
> +%uimm_6bit_lq  2:4 12:1 6:1           !function=ex_shift_4
>  %uimm_6bit_ld  2:3 12:1 5:2           !function=ex_shift_3
>  %uimm_6bit_lw  2:2 12:1 4:3           !function=ex_shift_2
> +%uimm_6bit_sq  7:4 11:2               !function=ex_shift_4
>  %uimm_6bit_sd  7:3 10:3               !function=ex_shift_3
>  %uimm_6bit_sw  7:2 9:4                !function=ex_shift_2
>
> @@ -54,16 +57,20 @@
>  # Formats 16:
>  @cr        . . ..                  rs2=%rs2_5 rs1=%rd %rd
>  @ci        ... . . . ..            imm=%imm_ci rs1=%rd %rd
> +@cl_q      ... . . . ..            imm=%uimm_cl_q rs1=%rs1_3 rd=%rs2_3
>  @cl_d      ... ... ... .. ... ..   imm=%uimm_cl_d rs1=%rs1_3 rd=%rs2_3
>  @cl_w      ... ... ... .. ... ..   imm=%uimm_cl_w rs1=%rs1_3 rd=%rs2_3
>  @cs_2      ... ... ... .. ... ..   rs2=%rs2_3 rs1=%rs1_3 rd=%rs1_3
> +@cs_q      ... ... ... .. ... ..   imm=%uimm_cl_q rs1=%rs1_3 rs2=%rs2_3
>  @cs_d      ... ... ... .. ... ..   imm=%uimm_cl_d rs1=%rs1_3 rs2=%rs2_3
>  @cs_w      ... ... ... .. ... ..   imm=%uimm_cl_w rs1=%rs1_3 rs2=%rs2_3
>  @cj        ...... ..               imm=%imm_cj
>  @cb_z      ... ... ... .. ... ..   imm=%imm_cb rs1=%rs1_3 rs2=0
>
> +@c_lqsp    ... . . . ..            imm=%uimm_6bit_lq rs1=2 %rd
>  @c_ldsp    ... . . . ..            imm=%uimm_6bit_ld rs1=2 %rd
>  @c_lwsp    ... . . . ..            imm=%uimm_6bit_lw rs1=2 %rd
> +@c_sqsp    ... . . . ..            imm=%uimm_6bit_sq rs1=2 rs2=%rs2_5
>  @c_sdsp    ... . . . ..            imm=%uimm_6bit_sd rs1=2 rs2=%rs2_5
>  @c_swsp    ... . . . ..            imm=%uimm_6bit_sw rs1=2 rs2=%rs2_5
>  @c_li      ... . . . ..            imm=%imm_ci rs1=0 %rd
> @@ -87,9 +94,15 @@
>  {
>    illegal  000 000 000 00 --- 00
>    addi     000 ... ... .. ... 00  @c_addi4spn
>  }
> -fld        001 ... ... .. ... 00  @cl_d
> +{
> +  lq       001 ... ... .. ... 00  @cl_q
> +  fld      001 ... ... .. ... 00  @cl_d
> +}
>  lw         010 ... ... .. ... 00  @cl_w
> -fsd        101 ... ... .. ... 00  @cs_d
> +{
> +  sq       101 ... ... .. ... 00  @cs_q
> +  fsd      101 ... ... .. ... 00  @cs_d
> +}
>  sw         110 ... ... .. ... 00  @cs_w
>
>  # *** RV32C and RV64C specific Standard Extension (Quadrant 0) ***
> @@ -132,7 +145,10 @@ addw  100 1 11 ... 01 ... 01 @cs_2
>
>  # *** RV32/64C Standard Extension (Quadrant 2) ***
>  slli       000 . . . 10  @c_shift2
> -fld        001 . . . 10  @c_ldsp
> +{
> +  lq       001 ... ... .. ... 10  @c_lqsp
> +  fld      001 . . . 10  @c_ldsp
> +}
>  {
>    illegal  010 - 0 - 10  # c.lwsp, RES rd=0
>    lw       010 . . . 10  @c_lwsp
> @@ -147,7 +163,10 @@ fld 001 . . . 10 @c_ldsp
>    jalr     100 1 . 0 10  @c_jalr rd=1 # C.JALR
>    add      100 1 . . 10  @cr
>  }
> -fsd        101 .. . 10  @c_sdsp
> +{
> +  sq       101 ... ... .. ... 10  @c_sqsp
> +  fsd      101 .. . 10  @c_sdsp
> +}
>  sw         110 . . . 10  @c_swsp
>
>  # *** RV32C and RV64C specific Standard Extension (Quadrant 2) ***
> diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
> index 2f251dac1b..02889c6082 100644
> --- a/target/riscv/insn32.decode
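As an aside for readers less familiar with decodetree: a pattern such as `%uimm_cl_q 10:1 5:2 11:2 !function=ex_shift_4` concatenates the listed bit fields most-significant-first (insn[10], then insn[6:5], then insn[12:11]) and passes the result through ex_shift_4, i.e. a left shift by 4, which scales the c.lq/c.sq offset to the 16-byte quadword size. A minimal standalone sketch of the equivalent manual extraction (function names here are illustrative, not QEMU's):

```c
#include <stdint.h>

/* Extract a len-bit field starting at bit pos, like a decodetree "pos:len". */
static uint32_t extract_field(uint32_t insn, int pos, int len)
{
    return (insn >> pos) & ((1u << len) - 1);
}

/* What "%uimm_cl_q 10:1 5:2 11:2 !function=ex_shift_4" computes. */
static uint32_t uimm_cl_q(uint32_t insn)
{
    uint32_t imm = extract_field(insn, 10, 1);       /* 10:1 -> offset[8]   */
    imm = (imm << 2) | extract_field(insn, 5, 2);    /* 5:2  -> offset[7:6] */
    imm = (imm << 2) | extract_field(insn, 11, 2);   /* 11:2 -> offset[5:4] */
    return imm << 4;                                 /* ex_shift_4          */
}
```

For example, an encoding with insn[10]=1, insn[6:5]=0b10 and insn[12:11]=0b01 yields offset (1<<8)|(2<<6)|(1<<4) = 400, matching the C.LQ uimm[8|7:6|5:4] layout.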