Re: [RFC PATCH v2 0/6] hw/i2c: i2c slave mode support

2022-06-02 Thread Cédric Le Goater

On 6/2/22 21:19, Klaus Jensen wrote:

On Jun  2 17:40, Cédric Le Goater wrote:

On 6/2/22 16:29, Jae Hyun Yoo wrote:

Hi Klaus,

On 6/2/2022 6:50 AM, Cédric Le Goater wrote:

On 6/2/22 10:21, Klaus Jensen wrote:


There is an outstanding issue with the SLAVE_ADDR_RX_MATCH interrupt bit
(bit 7). Remember from my first series I had a workaround to make sure
it wasn't masked.

I posted this upstream to linux

https://lore.kernel.org/lkml/20220602054842.122271-1-...@irrelevant.dk/

Not sure if that is the right way to fix it.


That's weird. I would have thought it was already enabled [ Adding Jae ]


Slave mode support in the Aspeed I2C driver is already enabled and has
worked well so far. The fix Klaus made in the link is incorrect.

https://lore.kernel.org/lkml/20220602054842.122271-1-...@irrelevant.dk/

The patch adds ASPEED_I2CD_INTR_SLAVE_MATCH as a mask bit for
I2CD0C (Interrupt Control Register), but this bit is actually part of
I2CD10 (Interrupt Status Register). That means the slave match interrupt
can be raised without enabling any mask bit in I2CD0C.


Thanks Jae.

So we should always enable this interrupt, independently of the
Interrupt Control Register value.

I would simply extend the mask value (bus->regs[intr_ctrl_reg])
with the SLAVE_ADDR_RX_MATCH bit when interrupts are raised in
aspeed_i2c_bus_raise_interrupt().



Alright, so my "workaround" from v1 was actually the right fix - I'll
re-add it ;)


yes :) but now we know why! Maybe add an ALWAYS_ENABLE mask?

Thanks,

C.




Re: [PATCH 3/3] capstone: Remove the capstone submodule

2022-06-02 Thread Thomas Huth

On 03/06/2022 02.56, Richard Henderson wrote:

On 6/2/22 17:03, Richard Henderson wrote:
Ho hum.  So... the first time I try to do any actual debugging after this 
has gone in, and I am reminded exactly how terrible capstone 4.0.1 is for 
anything except x86.  There was a reason I had chosen a development branch 
snapshot, and that's because it was usable.


Here, for instance, is how ubuntu 20.04 capstone disassembles
tests/tcg/aarch64/system/boot.S:

0x400027b0:  10ffc280  adr x0, #-0x7b0 (addr 0x40002000)

0x400027b4:  d518c000  msr (unknown), x0


0x400027b8:  dfe0  adrp x0, #+0x1fe000 (addr 0x4020)

0x400027bc:  9100  add x0, x0, #0x0 (0)

0x400027c0:  d5182000  msr (unknown), x0

...
0x40002850:  d5381040  mrs x0, (unknown)

0x40002854:  b26c0400  orr x0, x0, #0x30

0x40002858:  d5181040  msr (unknown), x0


And this is the extremely simple case of ARMv8.0 with no extensions.

I am very much tempted to re-instate the capstone submodule, or update 
disas/vixl and disable use of capstone for arm.


Double ho-hum.  It would appear that this horrible disassembly *is* vixl, 
because I didn't double check that libcapstone was installed.


So is capstone disassembly better now on Ubuntu 20.04, or should we still 
revert the submodule removal?


Also, if libvixl is so bad, why do we still have that in the repo?

 Thomas





Re: [PATCH 3/3] capstone: Remove the capstone submodule

2022-06-02 Thread Richard Henderson

On 6/2/22 17:03, Richard Henderson wrote:
Ho hum.  So... the first time I try to do any actual debugging after this has gone in, and 
I am reminded exactly how terrible capstone 4.0.1 is for anything except x86.  There was a 
reason I had chosen a development branch snapshot, and that's because it was usable.


Here, for instance, is how ubuntu 20.04 capstone disassembles
tests/tcg/aarch64/system/boot.S:

0x400027b0:  10ffc280  adr x0, #-0x7b0 (addr 0x40002000)

0x400027b4:  d518c000  msr (unknown), x0


0x400027b8:  dfe0  adrp x0, #+0x1fe000 (addr 0x4020)

0x400027bc:  9100  add x0, x0, #0x0 (0)

0x400027c0:  d5182000  msr (unknown), x0

...
0x40002850:  d5381040  mrs x0, (unknown)

0x40002854:  b26c0400  orr x0, x0, #0x30

0x40002858:  d5181040  msr (unknown), x0


And this is the extremely simple case of ARMv8.0 with no extensions.

I am very much tempted to re-instate the capstone submodule, or update disas/vixl and 
disable use of capstone for arm.


Double ho-hum.  It would appear that this horrible disassembly *is* vixl, because I didn't 
double check that libcapstone was installed.



r~



Re: [PATCH 3/3] capstone: Remove the capstone submodule

2022-06-02 Thread Richard Henderson

On 5/23/22 05:15, Thomas Huth wrote:

On 19/05/2022 13.41, Peter Maydell wrote:

On Mon, 16 May 2022 at 16:22, Thomas Huth  wrote:


Now that we allow compiling with Capstone v3.05 again, all our supported
build hosts should provide at least this version of the disassembler
library, so we do not need to ship this as a submodule anymore.


When this eventually goes in, please remember to update the
wiki changelog page's 'Build Information' section to let
users know.


Done: https://wiki.qemu.org/ChangeLog/7.1#Build_Dependencies


Ho hum.  So... the first time I try to do any actual debugging after this has gone in, and 
I am reminded exactly how terrible capstone 4.0.1 is for anything except x86.  There was a 
reason I had chosen a development branch snapshot, and that's because it was usable.


Here, for instance, is how ubuntu 20.04 capstone disassembles
tests/tcg/aarch64/system/boot.S:

0x400027b0:  10ffc280  adr x0, #-0x7b0 (addr 0x40002000)

0x400027b4:  d518c000  msr (unknown), x0


0x400027b8:  dfe0  adrp x0, #+0x1fe000 (addr 0x4020)

0x400027bc:  9100  add x0, x0, #0x0 (0)

0x400027c0:  d5182000  msr (unknown), x0

...
0x40002850:  d5381040  mrs x0, (unknown)

0x40002854:  b26c0400  orr x0, x0, #0x30

0x40002858:  d5181040  msr (unknown), x0


And this is the extremely simple case of ARMv8.0 with no extensions.

I am very much tempted to re-instate the capstone submodule, or update disas/vixl and 
disable use of capstone for arm.


Would the ppc folk please have a look at how capstone is or is not handling ppc64? 
Because I strongly suspect that 333f944c15e7 ("disas: Remove old libopcode ppc 
disassembler") is also going to turn out to be a regression when combined with the removal 
of the capstone submodule.



r~


PS: While there are tags in upstream capstone hinting at a 5.0 release, there's no 
timeline for when we might see such a thing.  Anyway, it wouldn't help anyone with an LTS 
distro for the next half decade.




[PATCH] ppc/pnv: fix extra indent spaces with DEFINE_PROP*

2022-06-02 Thread Daniel Henrique Barboza
The DEFINE_PROP* macros in pnv files are using extra spaces for no good
reason.

Cc: Mark Cave-Ayland 
Signed-off-by: Daniel Henrique Barboza 
---
 hw/pci-host/pnv_phb3.c |  8 
 hw/pci-host/pnv_phb4.c | 10 +-
 hw/pci-host/pnv_phb4_pec.c | 10 +-
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/hw/pci-host/pnv_phb3.c b/hw/pci-host/pnv_phb3.c
index 3f03467dde..26ac9b7123 100644
--- a/hw/pci-host/pnv_phb3.c
+++ b/hw/pci-host/pnv_phb3.c
@@ -1088,10 +1088,10 @@ static const char *pnv_phb3_root_bus_path(PCIHostState 
*host_bridge,
 }
 
 static Property pnv_phb3_properties[] = {
-DEFINE_PROP_UINT32("index", PnvPHB3, phb_id, 0),
-DEFINE_PROP_UINT32("chip-id", PnvPHB3, chip_id, 0),
-DEFINE_PROP_LINK("chip", PnvPHB3, chip, TYPE_PNV_CHIP, PnvChip *),
-DEFINE_PROP_END_OF_LIST(),
+DEFINE_PROP_UINT32("index", PnvPHB3, phb_id, 0),
+DEFINE_PROP_UINT32("chip-id", PnvPHB3, chip_id, 0),
+DEFINE_PROP_LINK("chip", PnvPHB3, chip, TYPE_PNV_CHIP, PnvChip *),
+DEFINE_PROP_END_OF_LIST(),
 };
 
 static void pnv_phb3_class_init(ObjectClass *klass, void *data)
diff --git a/hw/pci-host/pnv_phb4.c b/hw/pci-host/pnv_phb4.c
index 13ba9e45d8..6594016121 100644
--- a/hw/pci-host/pnv_phb4.c
+++ b/hw/pci-host/pnv_phb4.c
@@ -1692,11 +1692,11 @@ static void pnv_phb4_xive_notify(XiveNotifier *xf, 
uint32_t srcno,
 }
 
 static Property pnv_phb4_properties[] = {
-DEFINE_PROP_UINT32("index", PnvPHB4, phb_id, 0),
-DEFINE_PROP_UINT32("chip-id", PnvPHB4, chip_id, 0),
-DEFINE_PROP_LINK("pec", PnvPHB4, pec, TYPE_PNV_PHB4_PEC,
- PnvPhb4PecState *),
-DEFINE_PROP_END_OF_LIST(),
+DEFINE_PROP_UINT32("index", PnvPHB4, phb_id, 0),
+DEFINE_PROP_UINT32("chip-id", PnvPHB4, chip_id, 0),
+DEFINE_PROP_LINK("pec", PnvPHB4, pec, TYPE_PNV_PHB4_PEC,
+ PnvPhb4PecState *),
+DEFINE_PROP_END_OF_LIST(),
 };
 
 static void pnv_phb4_class_init(ObjectClass *klass, void *data)
diff --git a/hw/pci-host/pnv_phb4_pec.c b/hw/pci-host/pnv_phb4_pec.c
index 61bc0b503e..8b7e823fa5 100644
--- a/hw/pci-host/pnv_phb4_pec.c
+++ b/hw/pci-host/pnv_phb4_pec.c
@@ -215,11 +215,11 @@ static int pnv_pec_dt_xscom(PnvXScomInterface *dev, void 
*fdt,
 }
 
 static Property pnv_pec_properties[] = {
-DEFINE_PROP_UINT32("index", PnvPhb4PecState, index, 0),
-DEFINE_PROP_UINT32("chip-id", PnvPhb4PecState, chip_id, 0),
-DEFINE_PROP_LINK("chip", PnvPhb4PecState, chip, TYPE_PNV_CHIP,
- PnvChip *),
-DEFINE_PROP_END_OF_LIST(),
+DEFINE_PROP_UINT32("index", PnvPhb4PecState, index, 0),
+DEFINE_PROP_UINT32("chip-id", PnvPhb4PecState, chip_id, 0),
+DEFINE_PROP_LINK("chip", PnvPhb4PecState, chip, TYPE_PNV_CHIP,
+ PnvChip *),
+DEFINE_PROP_END_OF_LIST(),
 };
 
 static uint32_t pnv_pec_xscom_pci_base(PnvPhb4PecState *pec)
-- 
2.36.1




[PATCH 62/71] linux-user/aarch64: Tidy target_restore_sigframe error return

2022-06-02 Thread Richard Henderson
Fold the return value setting into the goto, so each
point of failure need not do both.

Signed-off-by: Richard Henderson 
---
 linux-user/aarch64/signal.c | 26 +++---
 1 file changed, 11 insertions(+), 15 deletions(-)

diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
index 08a9746ace..e9ff280d2a 100644
--- a/linux-user/aarch64/signal.c
+++ b/linux-user/aarch64/signal.c
@@ -287,7 +287,6 @@ static int target_restore_sigframe(CPUARMState *env,
 struct target_sve_context *sve = NULL;
 uint64_t extra_datap = 0;
 bool used_extra = false;
-bool err = false;
 int vq = 0, sve_size = 0;
 
 target_restore_general_frame(env, sf);
@@ -301,8 +300,7 @@ static int target_restore_sigframe(CPUARMState *env,
 switch (magic) {
 case 0:
 if (size != 0) {
-err = true;
-goto exit;
+goto err;
 }
 if (used_extra) {
 ctx = NULL;
@@ -314,8 +312,7 @@ static int target_restore_sigframe(CPUARMState *env,
 
 case TARGET_FPSIMD_MAGIC:
 if (fpsimd || size != sizeof(struct target_fpsimd_context)) {
-err = true;
-goto exit;
+goto err;
 }
 fpsimd = (struct target_fpsimd_context *)ctx;
 break;
@@ -329,13 +326,11 @@ static int target_restore_sigframe(CPUARMState *env,
 break;
 }
 }
-err = true;
-goto exit;
+goto err;
 
 case TARGET_EXTRA_MAGIC:
 if (extra || size != sizeof(struct target_extra_context)) {
-err = true;
-goto exit;
+goto err;
 }
 __get_user(extra_datap,
&((struct target_extra_context *)ctx)->datap);
@@ -348,8 +343,7 @@ static int target_restore_sigframe(CPUARMState *env,
 /* Unknown record -- we certainly didn't generate it.
  * Did we in fact get out of sync?
  */
-err = true;
-goto exit;
+goto err;
 }
 ctx = (void *)ctx + size;
 }
@@ -358,17 +352,19 @@ static int target_restore_sigframe(CPUARMState *env,
 if (fpsimd) {
 target_restore_fpsimd_record(env, fpsimd);
 } else {
-err = true;
+goto err;
 }
 
 /* SVE data, if present, overwrites FPSIMD data.  */
 if (sve) {
 target_restore_sve_record(env, sve, vq);
 }
-
- exit:
 unlock_user(extra, extra_datap, 0);
-return err;
+return 0;
+
+ err:
+unlock_user(extra, extra_datap, 0);
+return 1;
 }
 
 static abi_ulong get_sigframe(struct target_sigaction *ka,
-- 
2.34.1




[PATCH 67/71] linux-user: Rename sve prctls

2022-06-02 Thread Richard Henderson
Add "sve" to the sve prctl functions, to distinguish
them from the coming "sme" prctls with similar names.

Signed-off-by: Richard Henderson 
---
 linux-user/aarch64/target_prctl.h |  8 
 linux-user/syscall.c  | 12 ++--
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/linux-user/aarch64/target_prctl.h 
b/linux-user/aarch64/target_prctl.h
index fdd973e07d..3c2ef734fe 100644
--- a/linux-user/aarch64/target_prctl.h
+++ b/linux-user/aarch64/target_prctl.h
@@ -6,7 +6,7 @@
 #ifndef AARCH64_TARGET_PRCTL_H
 #define AARCH64_TARGET_PRCTL_H
 
-static abi_long do_prctl_get_vl(CPUArchState *env)
+static abi_long do_prctl_sve_get_vl(CPUArchState *env)
 {
 ARMCPU *cpu = env_archcpu(env);
 if (cpu_isar_feature(aa64_sve, cpu)) {
@@ -14,9 +14,9 @@ static abi_long do_prctl_get_vl(CPUArchState *env)
 }
 return -TARGET_EINVAL;
 }
-#define do_prctl_get_vl do_prctl_get_vl
+#define do_prctl_sve_get_vl do_prctl_sve_get_vl
 
-static abi_long do_prctl_set_vl(CPUArchState *env, abi_long arg2)
+static abi_long do_prctl_sve_set_vl(CPUArchState *env, abi_long arg2)
 {
 /*
  * We cannot support either PR_SVE_SET_VL_ONEXEC or PR_SVE_VL_INHERIT.
@@ -47,7 +47,7 @@ static abi_long do_prctl_set_vl(CPUArchState *env, abi_long 
arg2)
 }
 return -TARGET_EINVAL;
 }
-#define do_prctl_set_vl do_prctl_set_vl
+#define do_prctl_sve_set_vl do_prctl_sve_set_vl
 
 static abi_long do_prctl_reset_keys(CPUArchState *env, abi_long arg2)
 {
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index f55cdebee5..a7f41ef0ac 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -6365,11 +6365,11 @@ static abi_long do_prctl_inval1(CPUArchState *env, 
abi_long arg2)
 #ifndef do_prctl_set_fp_mode
 #define do_prctl_set_fp_mode do_prctl_inval1
 #endif
-#ifndef do_prctl_get_vl
-#define do_prctl_get_vl do_prctl_inval0
+#ifndef do_prctl_sve_get_vl
+#define do_prctl_sve_get_vl do_prctl_inval0
 #endif
-#ifndef do_prctl_set_vl
-#define do_prctl_set_vl do_prctl_inval1
+#ifndef do_prctl_sve_set_vl
+#define do_prctl_sve_set_vl do_prctl_inval1
 #endif
 #ifndef do_prctl_reset_keys
 #define do_prctl_reset_keys do_prctl_inval1
@@ -6434,9 +6434,9 @@ static abi_long do_prctl(CPUArchState *env, abi_long 
option, abi_long arg2,
 case PR_SET_FP_MODE:
 return do_prctl_set_fp_mode(env, arg2);
 case PR_SVE_GET_VL:
-return do_prctl_get_vl(env);
+return do_prctl_sve_get_vl(env);
 case PR_SVE_SET_VL:
-return do_prctl_set_vl(env, arg2);
+return do_prctl_sve_set_vl(env, arg2);
 case PR_PAC_RESET_KEYS:
 if (arg3 || arg4 || arg5) {
 return -TARGET_EINVAL;
-- 
2.34.1




[PATCH 70/71] target/arm: Enable SME for user-only

2022-06-02 Thread Richard Henderson
Enable SME, TPIDR2_EL0, and FA64 if supported by the cpu.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 5cb9f9f02c..13b008547e 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -209,6 +209,17 @@ static void arm_cpu_reset(DeviceState *dev)
  CPACR_EL1, ZEN, 3);
 env->vfp.zcr_el[1] = cpu->sve_default_vq - 1;
 }
+/* and for SME instructions, with default vector length, and TPIDR2 */
+if (cpu_isar_feature(aa64_sme, cpu)) {
+env->cp15.sctlr_el[1] |= SCTLR_EnTP2;
+env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
+ CPACR_EL1, SMEN, 3);
+env->vfp.smcr_el[1] = cpu->sme_default_vq - 1;
+if (cpu_isar_feature(aa64_sme_fa64, cpu)) {
+env->vfp.smcr_el[1] = FIELD_DP64(env->vfp.smcr_el[1],
+ SMCR, FA64, 1);
+}
+}
 /*
  * Enable 48-bit address space (TODO: take reserved_va into account).
  * Enable TBI0 but not TBI1.
-- 
2.34.1




[PATCH 64/71] linux-user/aarch64: Verify extra record lock succeeded

2022-06-02 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 linux-user/aarch64/signal.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
index 590f2258b2..711fd19701 100644
--- a/linux-user/aarch64/signal.c
+++ b/linux-user/aarch64/signal.c
@@ -340,6 +340,9 @@ static int target_restore_sigframe(CPUARMState *env,
 __get_user(extra_size,
&((struct target_extra_context *)ctx)->size);
 extra = lock_user(VERIFY_READ, extra_datap, extra_size, 0);
+if (!extra) {
+return 1;
+}
 break;
 
 default:
-- 
2.34.1




Re: [PULL 0/2] VFIO fixes 2022-02-03

2022-06-02 Thread Alex Williamson
On Thu, 2 Jun 2022 15:31:38 -0600
Alex Williamson  wrote:

> On Mon, 7 Feb 2022 17:20:02 +0100
> Thomas Huth  wrote:
> 
> > On 07/02/2022 16.50, Alex Williamson wrote:  
> > > On Sat, 5 Feb 2022 10:49:35 +
> > > Peter Maydell  wrote:
> > > 
> > >> On Thu, 3 Feb 2022 at 22:38, Alex Williamson 
> > >>  wrote:
> > >>>
> > >>> The following changes since commit 
> > >>> 8f3e5ce773c62bb5c4a847f3a9a5c98bbb3b359f:
> > >>>
> > >>>Merge remote-tracking branch 
> > >>> 'remotes/hdeller/tags/hppa-updates-pull-request' into staging 
> > >>> (2022-02-02 19:54:30 +)
> > >>>
> > >>> are available in the Git repository at:
> > >>>
> > >>>git://github.com/awilliam/qemu-vfio.git tags/vfio-fixes-20220203.0
> > >>>
> > >>> for you to fetch changes up to 36fe5d5836c8d5d928ef6d34e999d6991a2f732e:
> > >>>
> > >>>hw/vfio/common: Silence ram device offset alignment error traces 
> > >>> (2022-02-03 15:05:05 -0700)
> > >>>
> > >>> 
> > >>> VFIO fixes 2022-02-03
> > >>>
> > >>>   * Fix alignment warnings when using TPM CRB with vfio-pci devices
> > >>> (Eric Auger & Philippe Mathieu-Daudé)
> > >>
> > >> Hi; this has a format-string issue that means it doesn't build
> > >> on 32-bit systems:
> > >>
> > >> https://gitlab.com/qemu-project/qemu/-/jobs/2057116569
> > >>
> > >> ../hw/vfio/common.c: In function 'vfio_listener_region_add':
> > >> ../hw/vfio/common.c:893:26: error: format '%llx' expects argument of
> > >> type 'long long unsigned int', but argument 6 has type 'intptr_t' {aka
> > >> 'int'} [-Werror=format=]
> > >> error_report("%s received unaligned region %s iova=0x%"PRIx64
> > >> ^~
> > >> ../hw/vfio/common.c:899:26:
> > >> qemu_real_host_page_mask);
> > >> 
> > >>
> > >> For intptr_t you want PRIxPTR.
> > > 
> > > Darn.  Well, let me use this opportunity to ask, how are folks doing
> > > 32-bit cross builds on Fedora?  I used to keep an i686 PAE VM for this
> > > purpose, but I was eventually no longer able to maintain the build
> > > dependencies.  Looks like this failed on a mipsel cross build, but I
> > > don't see such a cross compiler in Fedora.  I do mingw32/64 cross
> > > builds, but they leave a lot to be desired for code coverage.  Thanks,
> > 
> > The easiest way for getting more test coverage is likely to move your qemu 
> > repository from github to gitlab - then you get most of the CI for free, 
> > which should catch such issues before sending pull requests.  
> 
> Well, it worked for a few months, but now pushing a tag to gitlab runs
> a whole 4 jobs vs the 124 jobs that it previously ran, so that's
> useless now :(  Thanks,

And Richard has now sent me the link to your announcement, including
the git push variables to get things back to normal:

https://lists.nongnu.org/archive/html/qemu-devel/2022-06/msg00256.html

Thanks,
Alex




[PATCH 68/71] linux-user/aarch64: Implement PR_SME_GET_VL, PR_SME_SET_VL

2022-06-02 Thread Richard Henderson
These prctl set the Streaming SVE vector length, which may
be completely different from the Normal SVE vector length.

Signed-off-by: Richard Henderson 
---
 linux-user/aarch64/target_prctl.h | 48 +++
 linux-user/syscall.c  | 16 +++
 2 files changed, 64 insertions(+)

diff --git a/linux-user/aarch64/target_prctl.h 
b/linux-user/aarch64/target_prctl.h
index 3c2ef734fe..01282bd78c 100644
--- a/linux-user/aarch64/target_prctl.h
+++ b/linux-user/aarch64/target_prctl.h
@@ -10,6 +10,7 @@ static abi_long do_prctl_sve_get_vl(CPUArchState *env)
 {
 ARMCPU *cpu = env_archcpu(env);
 if (cpu_isar_feature(aa64_sve, cpu)) {
+/* PSTATE.SM is always unset on syscall entry. */
 return sve_vq_cached(env) * 16;
 }
 return -TARGET_EINVAL;
@@ -27,6 +28,7 @@ static abi_long do_prctl_sve_set_vl(CPUArchState *env, 
abi_long arg2)
 && arg2 >= 0 && arg2 <= 512 * 16 && !(arg2 & 15)) {
 uint32_t vq, old_vq;
 
+/* PSTATE.SM is always unset on syscall entry. */
 old_vq = sve_vq_cached(env);
 
 /*
@@ -49,6 +51,52 @@ static abi_long do_prctl_sve_set_vl(CPUArchState *env, 
abi_long arg2)
 }
 #define do_prctl_sve_set_vl do_prctl_sve_set_vl
 
+static abi_long do_prctl_sme_get_vl(CPUArchState *env)
+{
+ARMCPU *cpu = env_archcpu(env);
+if (cpu_isar_feature(aa64_sme, cpu)) {
+return sme_vq_cached(env) * 16;
+}
+return -TARGET_EINVAL;
+}
+#define do_prctl_sme_get_vl do_prctl_sme_get_vl
+
+static abi_long do_prctl_sme_set_vl(CPUArchState *env, abi_long arg2)
+{
+/*
+ * We cannot support either PR_SME_SET_VL_ONEXEC or PR_SME_VL_INHERIT.
+ * Note the kernel definition of sve_vl_valid allows for VQ=512,
+ * i.e. VL=8192, even though the architectural maximum is VQ=16.
+ */
+if (cpu_isar_feature(aa64_sme, env_archcpu(env))
+&& arg2 >= 0 && arg2 <= 512 * 16 && !(arg2 & 15)) {
+int vq, old_vq;
+
+old_vq = sme_vq_cached(env);
+
+/*
+ * Bound the value of vq, so that we know that it fits into
+ * the 4-bit field in SMCR_EL1.  Because PSTATE.SM is cleared
+ * on syscall entry, we are not modifying the current SVE
+ * vector length.
+ */
+vq = MAX(arg2 / 16, 1);
+vq = MIN(vq, 16);
+env->vfp.smcr_el[1] =
+FIELD_DP64(env->vfp.smcr_el[1], SMCR, LEN, vq - 1);
+vq = sme_vq_cached(env);
+
+if (old_vq != vq) {
+/* PSTATE.ZA state is cleared on any change to VQ. */
+env->svcr = FIELD_DP64(env->svcr, SVCR, ZA, 0);
+arm_rebuild_hflags(env);
+}
+return vq * 16;
+}
+return -TARGET_EINVAL;
+}
+#define do_prctl_sme_set_vl do_prctl_sme_set_vl
+
 static abi_long do_prctl_reset_keys(CPUArchState *env, abi_long arg2)
 {
 ARMCPU *cpu = env_archcpu(env);
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index a7f41ef0ac..e8d6e20b85 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -6346,6 +6346,12 @@ abi_long do_arch_prctl(CPUX86State *env, int code, 
abi_ulong addr)
 #ifndef PR_SET_SYSCALL_USER_DISPATCH
 # define PR_SET_SYSCALL_USER_DISPATCH 59
 #endif
+#ifndef PR_SME_SET_VL
+# define PR_SME_SET_VL  63
+# define PR_SME_GET_VL  64
+# define PR_SME_VL_LEN_MASK  0x
+# define PR_SME_VL_INHERIT   (1 << 17)
+#endif
 
 #include "target_prctl.h"
 
@@ -6386,6 +6392,12 @@ static abi_long do_prctl_inval1(CPUArchState *env, 
abi_long arg2)
 #ifndef do_prctl_set_unalign
 #define do_prctl_set_unalign do_prctl_inval1
 #endif
+#ifndef do_prctl_sme_get_vl
+#define do_prctl_sme_get_vl do_prctl_inval0
+#endif
+#ifndef do_prctl_sme_set_vl
+#define do_prctl_sme_set_vl do_prctl_inval1
+#endif
 
 static abi_long do_prctl(CPUArchState *env, abi_long option, abi_long arg2,
  abi_long arg3, abi_long arg4, abi_long arg5)
@@ -6437,6 +6449,10 @@ static abi_long do_prctl(CPUArchState *env, abi_long 
option, abi_long arg2,
 return do_prctl_sve_get_vl(env);
 case PR_SVE_SET_VL:
 return do_prctl_sve_set_vl(env, arg2);
+case PR_SME_GET_VL:
+return do_prctl_sme_get_vl(env);
+case PR_SME_SET_VL:
+return do_prctl_sme_set_vl(env, arg2);
 case PR_PAC_RESET_KEYS:
 if (arg3 || arg4 || arg5) {
 return -TARGET_EINVAL;
-- 
2.34.1




[PATCH 54/71] target/arm: Implement PSEL

2022-06-02 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode  | 20 +
 target/arm/translate-sve.c | 57 ++
 2 files changed, 77 insertions(+)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index bbdaac6ac7..bf561c270a 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1674,3 +1674,23 @@ BFMLALT_zzxw    01100100 11 1 ..... 0100.1 ..... .....     @rrxr_3a esz=2
 
 ### SVE2 floating-point bfloat16 dot-product (indexed)
 BFDOT_zzxz      01100100 01 1 ..... 010000 ..... .....     @rrxr_2 esz=2
+
+### SVE broadcast predicate element
+
+&psel           esz pd pn pm rv imm
+%psel_rv        16:2 !function=plus_12
+%psel_imm_b     22:2 19:2
+%psel_imm_h     22:2 20:1
+%psel_imm_s     22:2
+%psel_imm_d     23:1
+@psel           ........ .. . ... .. .. pn:4 . pm:4 . pd:4 \
+                &psel rv=%psel_rv
+
+PSEL            00100101 .. 1 ..1 .. 01 .... 0 .... 0 .... \
+                @psel esz=0 imm=%psel_imm_b
+PSEL            00100101 .. 1 .10 .. 01 .... 0 .... 0 .... \
+                @psel esz=1 imm=%psel_imm_h
+PSEL            00100101 .. 1 100 .. 01 .... 0 .... 0 .... \
+                @psel esz=2 imm=%psel_imm_s
+PSEL            00100101 .1 1 000 .. 01 .... 0 .... 0 .... \
+                @psel esz=3 imm=%psel_imm_d
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index adf0cd3e68..58d0894e15 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -7379,3 +7379,60 @@ static bool do_BFMLAL_zzxw(DisasContext *s, arg_rrxr_esz 
*a, bool sel)
 
 TRANS_FEAT(BFMLALB_zzxw, aa64_sve_bf16, do_BFMLAL_zzxw, a, false)
 TRANS_FEAT(BFMLALT_zzxw, aa64_sve_bf16, do_BFMLAL_zzxw, a, true)
+
+static bool trans_PSEL(DisasContext *s, arg_psel *a)
+{
+int vl = vec_full_reg_size(s);
+int pl = pred_gvec_reg_size(s);
+int elements = vl >> a->esz;
+TCGv_i64 tmp, didx, dbit;
+TCGv_ptr ptr;
+
+if (!dc_isar_feature(aa64_sme, s)) {
+return false;
+}
+if (!sve_access_check(s)) {
+return true;
+}
+
+tmp = tcg_temp_new_i64();
+dbit = tcg_temp_new_i64();
+didx = tcg_temp_new_i64();
+ptr = tcg_temp_new_ptr();
+
+/* Compute the predicate element. */
+tcg_gen_addi_i64(tmp, cpu_reg(s, a->rv), a->imm);
+if (is_power_of_2(elements)) {
+tcg_gen_andi_i64(tmp, tmp, elements - 1);
+} else {
+tcg_gen_remu_i64(tmp, tmp, tcg_constant_i64(elements));
+}
+
+/* Extract the predicate byte and bit indices. */
+tcg_gen_shli_i64(tmp, tmp, a->esz);
+tcg_gen_andi_i64(dbit, tmp, 7);
+tcg_gen_shri_i64(didx, tmp, 3);
+if (HOST_BIG_ENDIAN) {
+tcg_gen_xori_i64(didx, didx, 7);
+}
+
+/* Load the predicate word. */
+tcg_gen_trunc_i64_ptr(ptr, didx);
+tcg_gen_add_ptr(ptr, ptr, cpu_env);
+tcg_gen_ld8u_i64(tmp, ptr, pred_full_reg_offset(s, a->pm));
+
+/* Extract the predicate bit and replicate to MO_64. */
+tcg_gen_shr_i64(tmp, tmp, dbit);
+tcg_gen_andi_i64(tmp, tmp, 1);
+tcg_gen_neg_i64(tmp, tmp);
+
+/* Apply to either copy the source, or write zeros. */
+tcg_gen_gvec_ands(MO_64, pred_full_reg_offset(s, a->pd),
+  pred_full_reg_offset(s, a->pn), tmp, pl, pl);
+
+tcg_temp_free_i64(tmp);
+tcg_temp_free_i64(dbit);
+tcg_temp_free_i64(didx);
+tcg_temp_free_ptr(ptr);
+return true;
+}
-- 
2.34.1




[PATCH 69/71] target/arm: Only set ZEN in reset if SVE present

2022-06-02 Thread Richard Henderson
There's no reason to set CPACR_EL1.ZEN if SVE disabled.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 75295a14a3..5cb9f9f02c 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -203,11 +203,10 @@ static void arm_cpu_reset(DeviceState *dev)
 /* and to the FP/Neon instructions */
 env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
  CPACR_EL1, FPEN, 3);
-/* and to the SVE instructions */
-env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
- CPACR_EL1, ZEN, 3);
-/* with reasonable vector length */
+/* and to the SVE instructions, with default vector length */
 if (cpu_isar_feature(aa64_sve, cpu)) {
+env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
+ CPACR_EL1, ZEN, 3);
 env->vfp.zcr_el[1] = cpu->sve_default_vq - 1;
 }
 /*
-- 
2.34.1




[PATCH 71/71] linux-user/aarch64: Add SME related hwcap entries

2022-06-02 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 linux-user/elfload.c | 20 
 1 file changed, 20 insertions(+)

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index f7eae357f4..8135960305 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -601,6 +601,18 @@ enum {
 ARM_HWCAP2_A64_RNG  = 1 << 16,
 ARM_HWCAP2_A64_BTI  = 1 << 17,
 ARM_HWCAP2_A64_MTE  = 1 << 18,
+ARM_HWCAP2_A64_ECV  = 1 << 19,
+ARM_HWCAP2_A64_AFP  = 1 << 20,
+ARM_HWCAP2_A64_RPRES= 1 << 21,
+ARM_HWCAP2_A64_MTE3 = 1 << 22,
+ARM_HWCAP2_A64_SME  = 1 << 23,
+ARM_HWCAP2_A64_SME_I16I64   = 1 << 24,
+ARM_HWCAP2_A64_SME_F64F64   = 1 << 25,
+ARM_HWCAP2_A64_SME_I8I32= 1 << 26,
+ARM_HWCAP2_A64_SME_F16F32   = 1 << 27,
+ARM_HWCAP2_A64_SME_B16F32   = 1 << 28,
+ARM_HWCAP2_A64_SME_F32F32   = 1 << 29,
+ARM_HWCAP2_A64_SME_FA64 = 1 << 30,
 };
 
 #define ELF_HWCAP   get_elf_hwcap()
@@ -670,6 +682,14 @@ static uint32_t get_elf_hwcap2(void)
 GET_FEATURE_ID(aa64_rndr, ARM_HWCAP2_A64_RNG);
 GET_FEATURE_ID(aa64_bti, ARM_HWCAP2_A64_BTI);
 GET_FEATURE_ID(aa64_mte, ARM_HWCAP2_A64_MTE);
+GET_FEATURE_ID(aa64_sme, (ARM_HWCAP2_A64_SME |
+  ARM_HWCAP2_A64_SME_F32F32 |
+  ARM_HWCAP2_A64_SME_B16F32 |
+  ARM_HWCAP2_A64_SME_F16F32 |
+  ARM_HWCAP2_A64_SME_I8I32));
+GET_FEATURE_ID(aa64_sme_f64f64, ARM_HWCAP2_A64_SME_F64F64);
+GET_FEATURE_ID(aa64_sme_i16i64, ARM_HWCAP2_A64_SME_I16I64);
+GET_FEATURE_ID(aa64_sme_fa64, ARM_HWCAP2_A64_SME_FA64);
 
 return hwcaps;
 }
-- 
2.34.1




[PATCH 65/71] linux-user/aarch64: Move sve record checks into restore

2022-06-02 Thread Richard Henderson
Move the checks out of the parsing loop and into the
restore function.  This more closely mirrors the code
structure in the kernel, and is slightly clearer.

Reject rather than silently skip incorrect VL and SVE record sizes.

Signed-off-by: Richard Henderson 
---
 linux-user/aarch64/signal.c | 51 +
 1 file changed, 35 insertions(+), 16 deletions(-)

diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
index 711fd19701..73b15038ad 100644
--- a/linux-user/aarch64/signal.c
+++ b/linux-user/aarch64/signal.c
@@ -250,12 +250,36 @@ static void target_restore_fpsimd_record(CPUARMState *env,
 }
 }
 
-static void target_restore_sve_record(CPUARMState *env,
-  struct target_sve_context *sve, int vq)
+static bool target_restore_sve_record(CPUARMState *env,
+  struct target_sve_context *sve,
+  int size)
 {
-int i, j;
+int i, j, vl, vq;
 
-/* Note that SVE regs are stored as a byte stream, with each byte element
+if (!cpu_isar_feature(aa64_sve, env_archcpu(env))) {
+return false;
+}
+
+__get_user(vl, &sve->vl);
+vq = sve_vq_cached(env);
+
+/* Reject mismatched VL. */
+if (vl != vq * TARGET_SVE_VQ_BYTES) {
+return false;
+}
+
+/* Accept empty record -- used to clear PSTATE.SM. */
+if (size <= sizeof(*sve)) {
+return true;
+}
+
+/* Reject non-empty but incomplete record. */
+if (size < TARGET_SVE_SIG_CONTEXT_SIZE(vq)) {
+return false;
+}
+
+/*
+ * Note that SVE regs are stored as a byte stream, with each byte element
  * at a subsequent address.  This corresponds to a little-endian load
  * of our 64-bit hunks.
  */
@@ -277,6 +301,7 @@ static void target_restore_sve_record(CPUARMState *env,
 }
 }
 }
+return true;
 }
 
 static int target_restore_sigframe(CPUARMState *env,
@@ -287,7 +312,7 @@ static int target_restore_sigframe(CPUARMState *env,
 struct target_sve_context *sve = NULL;
 uint64_t extra_datap = 0;
 bool used_extra = false;
-int vq = 0, sve_size = 0;
+int sve_size = 0;
 
 target_restore_general_frame(env, sf);
 
@@ -321,15 +346,9 @@ static int target_restore_sigframe(CPUARMState *env,
 if (sve || size < sizeof(struct target_sve_context)) {
 goto err;
 }
-if (cpu_isar_feature(aa64_sve, env_archcpu(env))) {
-vq = sve_vq_cached(env);
-sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
-if (size == sve_size) {
-sve = (struct target_sve_context *)ctx;
-break;
-}
-}
-goto err;
+sve = (struct target_sve_context *)ctx;
+sve_size = size;
+break;
 
 case TARGET_EXTRA_MAGIC:
 if (extra || size != sizeof(struct target_extra_context)) {
@@ -362,8 +381,8 @@ static int target_restore_sigframe(CPUARMState *env,
 }
 
 /* SVE data, if present, overwrites FPSIMD data.  */
-if (sve) {
-target_restore_sve_record(env, sve, vq);
+if (sve && !target_restore_sve_record(env, sve, sve_size)) {
+goto err;
 }
 unlock_user(extra, extra_datap, 0);
 return 0;
-- 
2.34.1




[PATCH 57/71] target/arm: Reset streaming sve state on exception boundaries

2022-06-02 Thread Richard Henderson
We can handle both exception entry and exception return by
hooking into aarch64_sve_change_el.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 15 +--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 7396be4352..af612b52b5 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -14276,6 +14276,19 @@ void aarch64_sve_change_el(CPUARMState *env, int 
old_el,
 return;
 }
 
+old_a64 = old_el ? arm_el_is_aa64(env, old_el) : el0_a64;
+new_a64 = new_el ? arm_el_is_aa64(env, new_el) : el0_a64;
+
+/*
+ * Both AArch64.TakeException and AArch64.ExceptionReturn
+ * invoke ResetSVEState when taking an exception from, or
+ * returning to, AArch32 state when PSTATE.SM is enabled.
+ */
+if (old_a64 != new_a64 && FIELD_EX64(env->svcr, SVCR, SM)) {
+arm_reset_sve_state(env);
+return;
+}
+
 /*
  * DDI0584A.d sec 3.2: "If SVE instructions are disabled or trapped
  * at ELx, or not available because the EL is in AArch32 state, then
@@ -14288,10 +14301,8 @@ void aarch64_sve_change_el(CPUARMState *env, int 
old_el,
  * we already have the correct register contents when encountering the
  * vq0->vq0 transition between EL0->EL1.
  */
-old_a64 = old_el ? arm_el_is_aa64(env, old_el) : el0_a64;
 old_len = (old_a64 && !sve_exception_el(env, old_el)
? sve_vqm1_for_el(env, old_el) : 0);
-new_a64 = new_el ? arm_el_is_aa64(env, new_el) : el0_a64;
 new_len = (new_a64 && !sve_exception_el(env, new_el)
? sve_vqm1_for_el(env, new_el) : 0);
 
-- 
2.34.1

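The guard added above can be stated compactly outside of QEMU: streaming SVE state is reset whenever exception entry or return crosses the AArch32/AArch64 boundary while PSTATE.SM is set. A minimal sketch of that predicate (the names and the bit position are illustrative, not QEMU API):

```python
SVCR_SM = 1 << 0  # assume PSTATE.SM is bit 0 of SVCR, per FIELD_EX64(svcr, SVCR, SM)

def must_reset_sve_state(old_a64: bool, new_a64: bool, svcr: int) -> bool:
    # Both AArch64.TakeException and AArch64.ExceptionReturn invoke
    # ResetSVEState when the transition crosses an AArch32/AArch64
    # boundary with PSTATE.SM enabled.
    return old_a64 != new_a64 and bool(svcr & SVCR_SM)
```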



[PATCH 66/71] linux-user/aarch64: Implement SME signal handling

2022-06-02 Thread Richard Henderson
Set the SM bit in the SVE record on signal delivery, create the ZA record.
Restore SM and ZA state according to the records present on return.

Signed-off-by: Richard Henderson 
---
 linux-user/aarch64/signal.c | 158 ++--
 1 file changed, 149 insertions(+), 9 deletions(-)

diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
index 73b15038ad..79b2fc1cfe 100644
--- a/linux-user/aarch64/signal.c
+++ b/linux-user/aarch64/signal.c
@@ -104,6 +104,22 @@ struct target_sve_context {
 
 #define TARGET_SVE_SIG_FLAG_SM  1
 
+#define TARGET_ZA_MAGIC0x54366345
+
+struct target_za_context {
+struct target_aarch64_ctx head;
+uint16_t vl;
+uint16_t reserved[3];
+/* The actual ZA data immediately follows. */
+};
+
+#define TARGET_ZA_SIG_REGS_OFFSET \
+QEMU_ALIGN_UP(sizeof(struct target_za_context), TARGET_SVE_VQ_BYTES)
+#define TARGET_ZA_SIG_ZAV_OFFSET(VQ, N) \
+(TARGET_ZA_SIG_REGS_OFFSET + (VQ) * TARGET_SVE_VQ_BYTES * (N))
+#define TARGET_ZA_SIG_CONTEXT_SIZE(VQ) \
+TARGET_ZA_SIG_ZAV_OFFSET(VQ, VQ * TARGET_SVE_VQ_BYTES)
+
 struct target_rt_sigframe {
 struct target_siginfo info;
 struct target_ucontext uc;
@@ -207,6 +223,32 @@ static void target_setup_sve_record(struct 
target_sve_context *sve,
 }
 }
 
+static void target_setup_za_record(struct target_za_context *za,
+   CPUARMState *env, int vq, int size)
+{
+int i, j, vl = vq * TARGET_SVE_VQ_BYTES;
+
+memset(za, 0, sizeof(*za));
+__put_user(TARGET_ZA_MAGIC, &za->head.magic);
+__put_user(size, &za->head.size);
+__put_user(vl, &za->vl);
+
+if (size == TARGET_ZA_SIG_CONTEXT_SIZE(0)) {
+return;
+}
+
+/*
+ * Note that ZA vectors are stored as a byte stream,
+ * with each byte element at a subsequent address.
+ */
+for (i = 0; i < vl; ++i) {
+uint64_t *z = (void *)za + TARGET_ZA_SIG_ZAV_OFFSET(vq, i);
+for (j = 0; j < vq * 2; ++j) {
+__put_user_e(env->zarray[i].d[j], z + j, le);
+}
+}
+}
+
 static void target_restore_general_frame(CPUARMState *env,
  struct target_rt_sigframe *sf)
 {
@@ -252,16 +294,28 @@ static void target_restore_fpsimd_record(CPUARMState *env,
 
 static bool target_restore_sve_record(CPUARMState *env,
   struct target_sve_context *sve,
-  int size)
+  int size, int *svcr)
 {
-int i, j, vl, vq;
+int i, j, vl, vq, flags;
+bool sm;
 
+/* ??? Kernel tests SVE && (!sm || SME); suggest (sm ? SME : SVE). */
 if (!cpu_isar_feature(aa64_sve, env_archcpu(env))) {
 return false;
 }
 
 __get_user(vl, &sve->vl);
-vq = sve_vq_cached(env);
+__get_user(flags, &sve->flags);
+
+sm = flags & TARGET_SVE_SIG_FLAG_SM;
+if (sm) {
+if (!cpu_isar_feature(aa64_sme, env_archcpu(env))) {
+return false;
+}
+vq = sme_vq_cached(env);
+} else {
+vq = sve_vq_cached(env);
+}
 
 /* Reject mismatched VL. */
 if (vl != vq * TARGET_SVE_VQ_BYTES) {
@@ -278,6 +332,8 @@ static bool target_restore_sve_record(CPUARMState *env,
 return false;
 }
 
+*svcr = FIELD_DP64(*svcr, SVCR, SM, sm);
+
 /*
  * Note that SVE regs are stored as a byte stream, with each byte element
  * at a subsequent address.  This corresponds to a little-endian load
@@ -304,15 +360,57 @@ static bool target_restore_sve_record(CPUARMState *env,
 return true;
 }
 
+static bool target_restore_za_record(CPUARMState *env,
+ struct target_za_context *za,
+ int size, int *svcr)
+{
+int i, j, vl, vq;
+
+if (!cpu_isar_feature(aa64_sme, env_archcpu(env))) {
+return false;
+}
+
+__get_user(vl, &za->vl);
+vq = sme_vq_cached(env);
+
+/* Reject mismatched VL. */
+if (vl != vq * TARGET_SVE_VQ_BYTES) {
+return false;
+}
+
+/* Accept empty record -- used to clear PSTATE.ZA. */
+if (size <= TARGET_ZA_SIG_CONTEXT_SIZE(0)) {
+return true;
+}
+
+/* Reject non-empty but incomplete record. */
+if (size < TARGET_ZA_SIG_CONTEXT_SIZE(vq)) {
+return false;
+}
+
+*svcr = FIELD_DP64(*svcr, SVCR, ZA, 1);
+
+for (i = 0; i < vl; ++i) {
+uint64_t *z = (void *)za + TARGET_ZA_SIG_ZAV_OFFSET(vq, i);
+for (j = 0; j < vq * 2; ++j) {
+__get_user_e(env->zarray[i].d[j], z + j, le);
+}
+}
+return true;
+}
+
 static int target_restore_sigframe(CPUARMState *env,
struct target_rt_sigframe *sf)
 {
 struct target_aarch64_ctx *ctx, *extra = NULL;
 struct target_fpsimd_context *fpsimd = NULL;
 struct target_sve_context *sve = NULL;
+struct target_za_context *za = NULL;
 uint64_t extra_datap = 0;
 bool used_extra = false;
 

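The VL check in target_restore_sve_record above selects which vector length to validate against from the record's SM flag: a streaming record must match the SME vector length, a non-streaming one the SVE vector length. A hedged Python model of just that selection (constants mirror the patch; the function name is made up):

```python
TARGET_SVE_VQ_BYTES = 16
TARGET_SVE_SIG_FLAG_SM = 1

def check_sve_record_vl(vl, flags, sve_vq, sme_vq, have_sme):
    """Return the accepted vq, or None to reject the record.
    A record with the SM flag set requires SME and must match the
    SME vector length; otherwise the SVE vector length applies."""
    if flags & TARGET_SVE_SIG_FLAG_SM:
        if not have_sme:
            return None
        vq = sme_vq
    else:
        vq = sve_vq
    # Reject mismatched VL, as in the patch.
    return vq if vl == vq * TARGET_SVE_VQ_BYTES else None
```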
[PATCH 63/71] linux-user/aarch64: Do not allow duplicate or short sve records

2022-06-02 Thread Richard Henderson
In parse_user_sigframe, the kernel rejects duplicate sve records,
or records that are smaller than the header.  We were silently
allowing these cases to pass, dropping the record.

Signed-off-by: Richard Henderson 
---
 linux-user/aarch64/signal.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
index e9ff280d2a..590f2258b2 100644
--- a/linux-user/aarch64/signal.c
+++ b/linux-user/aarch64/signal.c
@@ -318,10 +318,13 @@ static int target_restore_sigframe(CPUARMState *env,
 break;
 
 case TARGET_SVE_MAGIC:
+if (sve || size < sizeof(struct target_sve_context)) {
+goto err;
+}
 if (cpu_isar_feature(aa64_sve, env_archcpu(env))) {
 vq = sve_vq_cached(env);
 sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
-if (!sve && size == sve_size) {
+if (size == sve_size) {
 sve = (struct target_sve_context *)ctx;
 break;
 }
-- 
2.34.1

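The two rejection cases this patch adds -- a duplicate SVE record, or a record smaller than its own header -- follow the kernel's parse_user_sigframe. A self-contained sketch of that parse-loop discipline over a simplified record stream (layout and magics are illustrative, not the real sigframe format):

```python
import struct

HEADER_SIZE = 8  # simplified target_aarch64_ctx: u32 magic + u32 size

def parse_records(blob: bytes):
    """Walk a context blob, rejecting duplicate records and records
    smaller than their own header, as parse_user_sigframe does."""
    seen = set()
    out = []
    off = 0
    while off + HEADER_SIZE <= len(blob):
        magic, size = struct.unpack_from("<II", blob, off)
        if magic == 0:  # terminator
            break
        if magic in seen or size < HEADER_SIZE:
            raise ValueError("duplicate or short record")
        seen.add(magic)
        out.append((magic, size))
        off += size
    return out
```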



[PATCH 61/71] linux-user/aarch64: Add SM bit to SVE signal context

2022-06-02 Thread Richard Henderson
Make sure to zero the currently reserved fields.

Signed-off-by: Richard Henderson 
---
 linux-user/aarch64/signal.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
index 30e89f67c8..08a9746ace 100644
--- a/linux-user/aarch64/signal.c
+++ b/linux-user/aarch64/signal.c
@@ -78,7 +78,8 @@ struct target_extra_context {
 struct target_sve_context {
 struct target_aarch64_ctx head;
 uint16_t vl;
-uint16_t reserved[3];
+uint16_t flags;
+uint16_t reserved[2];
 /* The actual SVE data immediately follows.  It is laid out
  * according to TARGET_SVE_SIG_{Z,P}REG_OFFSET, based off of
  * the original struct pointer.
@@ -101,6 +102,8 @@ struct target_sve_context {
 #define TARGET_SVE_SIG_CONTEXT_SIZE(VQ) \
 (TARGET_SVE_SIG_PREG_OFFSET(VQ, 17))
 
+#define TARGET_SVE_SIG_FLAG_SM  1
+
 struct target_rt_sigframe {
 struct target_siginfo info;
 struct target_ucontext uc;
@@ -177,9 +180,13 @@ static void target_setup_sve_record(struct 
target_sve_context *sve,
 {
 int i, j;
 
+memset(sve, 0, sizeof(*sve));
 __put_user(TARGET_SVE_MAGIC, &sve->head.magic);
 __put_user(size, &sve->head.size);
 __put_user(vq * TARGET_SVE_VQ_BYTES, &sve->vl);
+if (FIELD_EX64(env->svcr, SVCR, SM)) {
+__put_user(TARGET_SVE_SIG_FLAG_SM, &sve->flags);
+}
 
 /* Note that SVE regs are stored as a byte stream, with each byte element
  * at a subsequent address.  This corresponds to a little-endian store
-- 
2.34.1




[PATCH 59/71] linux-user/aarch64: Clear tpidr2_el0 if CLONE_SETTLS

2022-06-02 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 linux-user/aarch64/target_cpu.h | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/linux-user/aarch64/target_cpu.h b/linux-user/aarch64/target_cpu.h
index 97a477bd3e..f90359faf2 100644
--- a/linux-user/aarch64/target_cpu.h
+++ b/linux-user/aarch64/target_cpu.h
@@ -34,10 +34,13 @@ static inline void cpu_clone_regs_parent(CPUARMState *env, 
unsigned flags)
 
 static inline void cpu_set_tls(CPUARMState *env, target_ulong newtls)
 {
-/* Note that AArch64 Linux keeps the TLS pointer in TPIDR; this is
+/*
+ * Note that AArch64 Linux keeps the TLS pointer in TPIDR; this is
  * different from AArch32 Linux, which uses TPIDRRO.
  */
 env->cp15.tpidr_el[0] = newtls;
+/* TPIDR2_EL0 is cleared with CLONE_SETTLS. */
+env->cp15.tpidr2_el0 = 0;
 }
 
 static inline abi_ulong get_sp_from_cpustate(CPUARMState *state)
-- 
2.34.1




[PATCH 56/71] target/arm: Implement SCLAMP, UCLAMP

2022-06-02 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper.h|  18 +++
 target/arm/sve.decode  |   5 ++
 target/arm/translate-sve.c | 102 +
 target/arm/vec_helper.c|  24 +
 4 files changed, 149 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index 5bca7255f1..f9bc4b29b4 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1017,6 +1017,24 @@ DEF_HELPER_FLAGS_6(gvec_bfmlal, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_6(gvec_bfmlal_idx, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(gvec_sclamp_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_sclamp_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_sclamp_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_sclamp_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(gvec_uclamp_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_uclamp_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_uclamp_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_uclamp_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
 #include "helper-sve.h"
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index d1e229fd6e..ad411b5790 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1695,3 +1695,8 @@ PSEL00100101 .. 1 100 .. 01  0  0 
  \
 @psel esz=2 imm=%psel_imm_s
 PSEL00100101 .1 1 000 .. 01  0  0   \
 @psel esz=3 imm=%psel_imm_d
+
+### SVE clamp
+
+SCLAMP  01000100 .. 0 . 11 . .  @rda_rn_rm
+UCLAMP  01000100 .. 0 . 110001 . .  @rda_rn_rm
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 1129f1fc56..40c5bf1a55 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -7438,3 +7438,105 @@ static bool trans_PSEL(DisasContext *s, arg_psel *a)
 tcg_temp_free_ptr(ptr);
 return true;
 }
+
+static void gen_sclamp_i32(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_i32 a)
+{
+tcg_gen_smax_i32(d, a, n);
+tcg_gen_smin_i32(d, d, m);
+}
+
+static void gen_sclamp_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 a)
+{
+tcg_gen_smax_i64(d, a, n);
+tcg_gen_smin_i64(d, d, m);
+}
+
+static void gen_sclamp_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
+   TCGv_vec m, TCGv_vec a)
+{
+tcg_gen_smax_vec(vece, d, a, n);
+tcg_gen_smin_vec(vece, d, d, m);
+}
+
+static void gen_sclamp(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+   uint32_t a, uint32_t oprsz, uint32_t maxsz)
+{
+static const TCGOpcode vecop[] = {
+INDEX_op_smin_vec, INDEX_op_smax_vec, 0
+};
+static const GVecGen4 ops[4] = {
+{ .fniv = gen_sclamp_vec,
+  .fno  = gen_helper_gvec_sclamp_b,
+  .opt_opc = vecop,
+  .vece = MO_8 },
+{ .fniv = gen_sclamp_vec,
+  .fno  = gen_helper_gvec_sclamp_h,
+  .opt_opc = vecop,
+  .vece = MO_16 },
+{ .fni4 = gen_sclamp_i32,
+  .fniv = gen_sclamp_vec,
+  .fno  = gen_helper_gvec_sclamp_s,
+  .opt_opc = vecop,
+  .vece = MO_32 },
+{ .fni8 = gen_sclamp_i64,
+  .fniv = gen_sclamp_vec,
+  .fno  = gen_helper_gvec_sclamp_d,
+  .opt_opc = vecop,
+  .vece = MO_64,
+  .prefer_i64 = TCG_TARGET_REG_BITS == 64 }
+};
+tcg_gen_gvec_4(d, n, m, a, oprsz, maxsz, &ops[vece]);
+}
+
+TRANS_FEAT(SCLAMP, aa64_sme, gen_gvec_fn_arg_, gen_sclamp, a)
+
+static void gen_uclamp_i32(TCGv_i32 d, TCGv_i32 n, TCGv_i32 m, TCGv_i32 a)
+{
+tcg_gen_umax_i32(d, a, n);
+tcg_gen_umin_i32(d, d, m);
+}
+
+static void gen_uclamp_i64(TCGv_i64 d, TCGv_i64 n, TCGv_i64 m, TCGv_i64 a)
+{
+tcg_gen_umax_i64(d, a, n);
+tcg_gen_umin_i64(d, d, m);
+}
+
+static void gen_uclamp_vec(unsigned vece, TCGv_vec d, TCGv_vec n,
+   TCGv_vec m, TCGv_vec a)
+{
+tcg_gen_umax_vec(vece, d, a, n);
+tcg_gen_umin_vec(vece, d, d, m);
+}
+
+static void gen_uclamp(unsigned vece, uint32_t d, uint32_t n, uint32_t m,
+   uint32_t a, uint32_t oprsz, uint32_t maxsz)
+{
+static const TCGOpcode vecop[] = {
+INDEX_op_umin_vec, INDEX_op_umax_vec, 0
+};
+static const GVecGen4 ops[4] = {
+{ .fniv = gen_uclamp_vec,
+  .fno  = gen_helper_gvec_uclamp_b,
+  .opt_opc = vecop,
+  .vece = MO_8 },
+{ .fniv = gen_uclamp_vec,
+  .fno  = gen_helper_gvec_uclamp_h,
+  .opt_opc = vecop,
+  .vece = MO_16 },
+  

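For reference, the element-wise operation behind both expansions is max-then-min: each destination element is the accumulator input clamped to the inclusive range [n, m]. A one-line Python model of a single signed element, mirroring gen_sclamp_i32/_i64; UCLAMP is the same with unsigned comparisons:

```python
def sclamp(n: int, m: int, a: int) -> int:
    # d = smin(smax(a, n), m), as in the TCG expansion.
    return min(max(a, n), m)
```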
[PATCH 60/71] linux-user/aarch64: Reset PSTATE.SM on syscalls

2022-06-02 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 linux-user/aarch64/cpu_loop.c | 9 +
 1 file changed, 9 insertions(+)

diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
index 3b273f6299..4af6996d57 100644
--- a/linux-user/aarch64/cpu_loop.c
+++ b/linux-user/aarch64/cpu_loop.c
@@ -89,6 +89,15 @@ void cpu_loop(CPUARMState *env)
 
 switch (trapnr) {
 case EXCP_SWI:
+/*
+ * On syscall, PSTATE.ZA is preserved, along with the ZA matrix.
+ * PSTATE.SM is cleared, per SMSTOP, which does ResetSVEState.
+ */
+if (FIELD_EX64(env->svcr, SVCR, SM)) {
+env->svcr = FIELD_DP64(env->svcr, SVCR, SM, 0);
+arm_rebuild_hflags(env);
+arm_reset_sve_state(env);
+}
 ret = do_syscall(env,
  env->xregs[8],
  env->xregs[0],
-- 
2.34.1



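The syscall-entry rule this patch encodes -- ZA and the ZA matrix survive, SM is cleared as if by SMSTOP -- can be sketched as a pure function on the SVCR value (bit positions are assumptions for illustration, matching the FIELD_* usage above):

```python
SVCR_SM = 1 << 0  # assumed bit position of PSTATE.SM
SVCR_ZA = 1 << 1  # assumed bit position of PSTATE.ZA

def svcr_after_syscall(svcr: int) -> int:
    # PSTATE.ZA (and the ZA matrix) is preserved across syscalls;
    # PSTATE.SM is cleared, which also implies ResetSVEState.
    return svcr & ~SVCR_SM
```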

[PATCH 46/71] target/arm: Implement SME LD1, ST1

2022-06-02 Thread Richard Henderson
We cannot reuse the SVE functions for LD[1-4] and ST[1-4],
because those functions accept only a Zreg register number.
For SME, we want to pass a pointer into ZA storage.

Signed-off-by: Richard Henderson 
---
 target/arm/helper-sme.h|  82 +
 target/arm/sme.decode  |   9 +
 target/arm/sme_helper.c| 615 +
 target/arm/translate-sme.c |  69 +
 4 files changed, 775 insertions(+)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index 600346e08c..5cca01f372 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -32,3 +32,85 @@ DEF_HELPER_FLAGS_4(sme_mova_avz_d, TCG_CALL_NO_RWG, void, 
ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sme_mova_zav_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sme_mova_avz_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sme_mova_zav_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sme_ld1b_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1b_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1b_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_ld1b_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+
+DEF_HELPER_FLAGS_5(sme_ld1h_be_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1h_le_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1h_be_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1h_le_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1h_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_ld1h_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_ld1h_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_ld1h_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+
+DEF_HELPER_FLAGS_5(sme_ld1s_be_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1s_le_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1s_be_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1s_le_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1s_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_ld1s_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_ld1s_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_ld1s_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+
+DEF_HELPER_FLAGS_5(sme_ld1d_be_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1d_le_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1d_be_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1d_le_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1d_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_ld1d_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_ld1d_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_ld1d_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+
+DEF_HELPER_FLAGS_5(sme_ld1q_be_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1q_le_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1q_be_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1q_le_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_ld1q_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_ld1q_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_ld1q_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_ld1q_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+
+DEF_HELPER_FLAGS_5(sme_st1b_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1b_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1b_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_st1b_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+
+DEF_HELPER_FLAGS_5(sme_st1h_be_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1h_le_h, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1h_be_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1h_le_v, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+DEF_HELPER_FLAGS_5(sme_st1h_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_st1h_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_st1h_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+DEF_HELPER_FLAGS_5(sme_st1h_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, 
i32)
+
+DEF_HELPER_FLAGS_5(sme_st1s_be_h, TCG_CALL_NO_WG, void, env, ptr, 

[PATCH 47/71] target/arm: Export unpredicated ld/st from translate-sve.c

2022-06-02 Thread Richard Henderson
Add a TCGv_ptr base argument, which will be cpu_env for SVE.
We will reuse this for SME save and restore array insns.

Signed-off-by: Richard Henderson 
---
 target/arm/translate-a64.h |  3 +++
 target/arm/translate-sve.c | 48 --
 2 files changed, 39 insertions(+), 12 deletions(-)

diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index c341c95582..54503745a9 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -165,4 +165,7 @@ void gen_gvec_xar(unsigned vece, uint32_t rd_ofs, uint32_t 
rn_ofs,
   uint32_t rm_ofs, int64_t shift,
   uint32_t opr_sz, uint32_t max_sz);
 
+void gen_sve_ldr(DisasContext *s, TCGv_ptr, int vofs, int len, int rn, int 
imm);
+void gen_sve_str(DisasContext *s, TCGv_ptr, int vofs, int len, int rn, int 
imm);
+
 #endif /* TARGET_ARM_TRANSLATE_A64_H */
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 13bdd027a5..adf0cd3e68 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -4294,7 +4294,8 @@ TRANS_FEAT(UCVTF_dd, aa64_sve, gen_gvec_fpst_arg_zpz,
  * The load should begin at the address Rn + IMM.
  */
 
-static void do_ldr(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
+void gen_sve_ldr(DisasContext *s, TCGv_ptr base, int vofs,
+ int len, int rn, int imm)
 {
 int len_align = QEMU_ALIGN_DOWN(len, 8);
 int len_remain = len % 8;
@@ -4320,7 +4321,7 @@ static void do_ldr(DisasContext *s, uint32_t vofs, int 
len, int rn, int imm)
 t0 = tcg_temp_new_i64();
 for (i = 0; i < len_align; i += 8) {
 tcg_gen_qemu_ld_i64(t0, clean_addr, midx, MO_LEUQ);
-tcg_gen_st_i64(t0, cpu_env, vofs + i);
+tcg_gen_st_i64(t0, base, vofs + i);
 tcg_gen_addi_i64(clean_addr, clean_addr, 8);
 }
 tcg_temp_free_i64(t0);
@@ -4333,6 +4334,12 @@ static void do_ldr(DisasContext *s, uint32_t vofs, int 
len, int rn, int imm)
 clean_addr = new_tmp_a64_local(s);
 tcg_gen_mov_i64(clean_addr, t0);
 
+if (base != cpu_env) {
+TCGv_ptr b = tcg_temp_local_new_ptr();
+tcg_gen_mov_ptr(b, base);
+base = b;
+}
+
 gen_set_label(loop);
 
 t0 = tcg_temp_new_i64();
@@ -4340,7 +4347,7 @@ static void do_ldr(DisasContext *s, uint32_t vofs, int 
len, int rn, int imm)
 tcg_gen_addi_i64(clean_addr, clean_addr, 8);
 
 tp = tcg_temp_new_ptr();
-tcg_gen_add_ptr(tp, cpu_env, i);
+tcg_gen_add_ptr(tp, base, i);
 tcg_gen_addi_ptr(i, i, 8);
 tcg_gen_st_i64(t0, tp, vofs);
 tcg_temp_free_ptr(tp);
@@ -4348,6 +4355,11 @@ static void do_ldr(DisasContext *s, uint32_t vofs, int 
len, int rn, int imm)
 
 tcg_gen_brcondi_ptr(TCG_COND_LTU, i, len_align, loop);
 tcg_temp_free_ptr(i);
+
+if (base != cpu_env) {
+tcg_temp_free_ptr(base);
+assert(len_remain == 0);
+}
 }
 
 /*
@@ -4376,13 +4388,14 @@ static void do_ldr(DisasContext *s, uint32_t vofs, int 
len, int rn, int imm)
 default:
 g_assert_not_reached();
 }
-tcg_gen_st_i64(t0, cpu_env, vofs + len_align);
+tcg_gen_st_i64(t0, base, vofs + len_align);
 tcg_temp_free_i64(t0);
 }
 }
 
 /* Similarly for stores.  */
-static void do_str(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
+void gen_sve_str(DisasContext *s, TCGv_ptr base, int vofs,
+ int len, int rn, int imm)
 {
 int len_align = QEMU_ALIGN_DOWN(len, 8);
 int len_remain = len % 8;
@@ -4408,7 +4421,7 @@ static void do_str(DisasContext *s, uint32_t vofs, int 
len, int rn, int imm)
 
 t0 = tcg_temp_new_i64();
 for (i = 0; i < len_align; i += 8) {
-tcg_gen_ld_i64(t0, cpu_env, vofs + i);
+tcg_gen_ld_i64(t0, base, vofs + i);
 tcg_gen_qemu_st_i64(t0, clean_addr, midx, MO_LEUQ);
 tcg_gen_addi_i64(clean_addr, clean_addr, 8);
 }
@@ -4422,11 +4435,17 @@ static void do_str(DisasContext *s, uint32_t vofs, int 
len, int rn, int imm)
 clean_addr = new_tmp_a64_local(s);
 tcg_gen_mov_i64(clean_addr, t0);
 
+if (base != cpu_env) {
+TCGv_ptr b = tcg_temp_local_new_ptr();
+tcg_gen_mov_ptr(b, base);
+base = b;
+}
+
 gen_set_label(loop);
 
 t0 = tcg_temp_new_i64();
 tp = tcg_temp_new_ptr();
-tcg_gen_add_ptr(tp, cpu_env, i);
+tcg_gen_add_ptr(tp, base, i);
 tcg_gen_ld_i64(t0, tp, vofs);
 tcg_gen_addi_ptr(i, i, 8);
 tcg_temp_free_ptr(tp);
@@ -4437,12 +4456,17 @@ static void do_str(DisasContext *s, uint32_t vofs, int 
len, int rn, int imm)
 
 tcg_gen_brcondi_ptr(TCG_COND_LTU, i, len_align, loop);
 tcg_temp_free_ptr(i);
+
+if (base != cpu_env) {
+tcg_temp_free_ptr(base);
+

[PATCH 53/71] target/arm: Implement SME integer outer product

2022-06-02 Thread Richard Henderson
This is SMOPA, SUMOPA, USMOPA_s, UMOPA, for both Int8 and Int16.

Signed-off-by: Richard Henderson 
---
 target/arm/helper-sme.h| 16 
 target/arm/sme.decode  | 10 +
 target/arm/sme_helper.c| 82 ++
 target/arm/translate-sme.c | 14 +++
 4 files changed, 122 insertions(+)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index ecc957be14..31562551ee 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -128,3 +128,19 @@ DEF_HELPER_FLAGS_7(sme_fmopa_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_6(sme_bfmopa, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_smopa_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_umopa_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_sumopa_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_usmopa_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_smopa_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_umopa_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_sumopa_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_usmopa_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index e8d27fd8a0..628804e37a 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -76,3 +76,13 @@ FMOPA_d 1000 110 . ... ... . . 0 ... 
   @op_64
 
 BFMOPA  1001 100 . ... ... . . 00 ..@op_32
 FMOPA_h 1001 101 . ... ... . . 00 ..@op_32
+
+SMOPA_s 101 0 10 0 . ... ... . . 00 ..  @op_32
+SUMOPA_s101 0 10 1 . ... ... . . 00 ..  @op_32
+USMOPA_s101 1 10 0 . ... ... . . 00 ..  @op_32
+UMOPA_s 101 1 10 1 . ... ... . . 00 ..  @op_32
+
+SMOPA_d 101 0 11 0 . ... ... . . 0 ...  @op_64
+SUMOPA_d101 0 11 1 . ... ... . . 0 ...  @op_64
+USMOPA_d101 1 11 0 . ... ... . . 0 ...  @op_64
+UMOPA_d 101 1 11 1 . ... ... . . 0 ...  @op_64
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
index 0807fbc708..cebddabbc7 100644
--- a/target/arm/sme_helper.c
+++ b/target/arm/sme_helper.c
@@ -1089,3 +1089,85 @@ void HELPER(sme_bfmopa)(void *vza, void *vzn, void *vzm, 
void *vpn,
 } while (row & 15);
 }
 }
+
+typedef uint64_t IMOPFn(uint64_t, uint64_t, uint64_t, uint8_t, bool);
+
+static inline void do_imopa(uint64_t *za, uint64_t *zn, uint64_t *zm,
+uint8_t *pn, uint8_t *pm,
+uint32_t desc, IMOPFn *fn)
+{
+intptr_t row, col, oprsz = simd_oprsz(desc) / 8;
+bool neg = simd_data(desc);
+
+for (row = 0; row < oprsz; ++row) {
+uint8_t pa = pn[H1(row)];
+uint64_t *za_row = &za[row * sizeof(ARMVectorReg)];
+uint64_t n = zn[row];
+
+for (col = 0; col < oprsz; ++col) {
+uint8_t pb = pm[H1(col)];
+uint64_t *a = &za_row[col];
+
+*a = fn(n, zm[col], *a, pa & pb, neg);
+}
+}
+}
+
+#define DEF_IMOP_32(NAME, NTYPE, MTYPE) \
+static uint64_t NAME(uint64_t n, uint64_t m, uint64_t a, uint8_t p, bool neg) \
+{   \
+uint32_t sum0 = 0, sum1 = 0;\
+/* Apply P to N as a mask, making the inactive elements 0. */   \
+n &= expand_pred_b(p);  \
+sum0 += (NTYPE)(n >> 0) * (MTYPE)(m >> 0);  \
+sum0 += (NTYPE)(n >> 8) * (MTYPE)(m >> 8);  \
+sum0 += (NTYPE)(n >> 16) * (MTYPE)(m >> 16);\
+sum0 += (NTYPE)(n >> 24) * (MTYPE)(m >> 24);\
+sum1 += (NTYPE)(n >> 32) * (MTYPE)(m >> 32);\
+sum1 += (NTYPE)(n >> 40) * (MTYPE)(m >> 40);\
+sum1 += (NTYPE)(n >> 48) * (MTYPE)(m >> 48);\
+sum1 += (NTYPE)(n >> 56) * (MTYPE)(m >> 56);\
+if (neg) {  \
+sum0 = (uint32_t)a - sum0, sum1 = (uint32_t)(a >> 32) - sum1;   \
+} else {\
+sum0 = (uint32_t)a + sum0, sum1 = (uint32_t)(a >> 32) + sum1;   \
+}  

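Ignoring the 64-bit lane packing done by do_imopa and DEF_IMOP_32 above, the operation is a predicated outer product accumulated into (or, with neg, subtracted from) the ZA tile. A plain-list sketch of the arithmetic (illustrative; real element packing and predicate layout differ):

```python
def imopa(za, zn, zm, pn, pm, neg=False):
    """za[r][c] (+/-)= zn[r] * zm[c] wherever both predicates are true;
    inactive positions leave the accumulator unchanged."""
    for r in range(len(zn)):
        for c in range(len(zm)):
            if pn[r] and pm[c]:
                prod = zn[r] * zm[c]
                za[r][c] += -prod if neg else prod
    return za
```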
[PATCH 55/71] target/arm: Implement REVD

2022-06-02 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  2 ++
 target/arm/sve.decode  |  1 +
 target/arm/sve_helper.c| 16 
 target/arm/translate-sve.c |  2 ++
 4 files changed, 21 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index ab0333400f..cc4e1d8948 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -719,6 +719,8 @@ DEF_HELPER_FLAGS_4(sve_revh_d, TCG_CALL_NO_RWG, void, ptr, 
ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_4(sve_revw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sme_revd_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(sve_rbit_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_rbit_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_rbit_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index bf561c270a..d1e229fd6e 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -652,6 +652,7 @@ REVB0101 .. 1001 00 100 ... . . 
@rd_pg_rn
 REVH0101 .. 1001 01 100 ... . . @rd_pg_rn
 REVW0101 .. 1001 10 100 ... . . @rd_pg_rn
 RBIT0101 .. 1001 11 100 ... . . @rd_pg_rn
+REVD0101 00 1011 10 100 ... . . @rd_pg_rn_e0
 
 # SVE vector splice (predicated, destructive)
 SPLICE  0101 .. 101 100 100 ... . . @rdn_pg_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 9a26f253e0..5de82696b5 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -931,6 +931,22 @@ DO_ZPZ_D(sve_revh_d, uint64_t, hswap64)
 
 DO_ZPZ_D(sve_revw_d, uint64_t, wswap64)
 
+void HELPER(sme_revd_q)(void *vd, void *vn, void *vg, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn;
+uint8_t *pg = vg;
+
+for (i = 0; i < opr_sz; i += 2) {
+if (pg[H1(i)] & 1) {
+uint64_t n0 = n[i + 0];
+uint64_t n1 = n[i + 1];
+d[i + 0] = n1;
+d[i + 1] = n0;
+}
+}
+}
+
 DO_ZPZ(sve_rbit_b, uint8_t, H1, revbit8)
 DO_ZPZ(sve_rbit_h, uint16_t, H1_2, revbit16)
 DO_ZPZ(sve_rbit_s, uint32_t, H1_4, revbit32)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 58d0894e15..1129f1fc56 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -2896,6 +2896,8 @@ TRANS_FEAT(REVH, aa64_sve, gen_gvec_ool_arg_zpz, 
revh_fns[a->esz], a, 0)
 TRANS_FEAT(REVW, aa64_sve, gen_gvec_ool_arg_zpz,
a->esz == 3 ? gen_helper_sve_revw_d : NULL, a, 0)
 
+TRANS_FEAT(REVD, aa64_sme, gen_gvec_ool_arg_zpz, gen_helper_sme_revd_q, a, 0)
+
 TRANS_FEAT(SPLICE, aa64_sve, gen_gvec_ool_arg_zpzz,
gen_helper_sve_splice, a, a->esz)
 
-- 
2.34.1



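In scalar terms, REVD swaps the two 64-bit doublewords of every active 128-bit element and leaves inactive elements untouched (merging predication). A plain-list model of sme_revd_q (illustrative; the real predicate is stored per byte):

```python
def revd(d, n, pg):
    """For each pair of 64-bit lanes forming a 128-bit element, swap
    the pair in d if the predicate bit for the element is set."""
    for i in range(0, len(n), 2):
        if pg[i] & 1:
            d[i], d[i + 1] = n[i + 1], n[i]
        # inactive elements: d keeps its prior contents
    return d
```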

[PATCH 44/71] target/arm: Implement SME ZERO

2022-06-02 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sme.h|  2 ++
 target/arm/translate-a64.h |  1 +
 target/arm/sme.decode  |  4 
 target/arm/sme_helper.c| 25 +
 target/arm/translate-a64.c | 15 +++
 target/arm/translate-sme.c | 13 +
 6 files changed, 60 insertions(+)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index 3bd48c235f..c4ee1f09e4 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -19,3 +19,5 @@
 
 DEF_HELPER_FLAGS_2(set_pstate_sm, TCG_CALL_NO_RWG, void, env, i32)
 DEF_HELPER_FLAGS_2(set_pstate_za, TCG_CALL_NO_RWG, void, env, i32)
+
+DEF_HELPER_FLAGS_3(sme_zero, TCG_CALL_NO_RWG, void, env, i32, i32)
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index 6bd1b2eb4b..ec5d580ba0 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -30,6 +30,7 @@ bool logic_imm_decode_wmask(uint64_t *result, unsigned int 
immn,
 unsigned int imms, unsigned int immr);
 bool sve_access_check(DisasContext *s);
 bool sme_enabled_check(DisasContext *s);
+bool sme_za_enabled_check(DisasContext *s);
 TCGv_i64 clean_data_tbi(DisasContext *s, TCGv_i64 addr);
 TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
 bool tag_checked, int log2_size);
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index c25c031a71..6e4483fdce 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -18,3 +18,7 @@
 #
 # This file is processed by scripts/decodetree.py
 #
+
+### SME Misc
+
+ZERO1100 00 001 000 imm:8
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
index c34d1b2e6b..4172b788f9 100644
--- a/target/arm/sme_helper.c
+++ b/target/arm/sme_helper.c
@@ -58,3 +58,28 @@ void helper_set_pstate_za(CPUARMState *env, uint32_t i)
 memset(env->zarray, 0, sizeof(env->zarray));
 }
 }
+
+void helper_sme_zero(CPUARMState *env, uint32_t imm, uint32_t svl)
+{
+uint32_t i;
+
+/*
+ * Special case clearing the entire ZA space.
+ * This falls into the CONSTRAINED UNPREDICTABLE zeroing of any
+ * parts of the ZA storage outside of SVL.
+ */
+if (imm == 0xff) {
+memset(env->zarray, 0, sizeof(env->zarray));
+return;
+}
+
+/*
+ * Recall that ZAnH.D[m] is spread across ZA[n+8*m].
+ * Unless SVL == ARM_MAX_VQ, each row is discontiguous.
+ */
+for (i = 0; i < svl; i++) {
+if (imm & (1 << (i % 8))) {
+memset(&env->zarray[i], 0, svl);
+}
+}
+}
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 222f93d42d..660c5dbf5b 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -1231,6 +1231,21 @@ bool sme_enabled_check(DisasContext *s)
 return fp_access_check_only(s);
 }
 
+/* Note that this function corresponds to CheckSMEAndZAEnabled. */
+bool sme_za_enabled_check(DisasContext *s)
+{
+if (!sme_enabled_check(s)) {
+return false;
+}
+if (!s->pstate_za) {
+gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
+   syn_smetrap(SME_ET_InactiveZA, false),
+   default_exception_el(s));
+return false;
+}
+return true;
+}
+
 /*
  * This utility function is for doing register extension with an
  * optional shift. You will likely want to pass a temporary for the
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
index 786c93fb2d..d526c74456 100644
--- a/target/arm/translate-sme.c
+++ b/target/arm/translate-sme.c
@@ -33,3 +33,16 @@
  */
 
 #include "decode-sme.c.inc"
+
+
+static bool trans_ZERO(DisasContext *s, arg_ZERO *a)
+{
+if (!dc_isar_feature(aa64_sme, s)) {
+return false;
+}
+if (sme_za_enabled_check(s)) {
+gen_helper_sme_zero(cpu_env, tcg_constant_i32(a->imm),
+tcg_constant_i32(s->svl));
+}
+return true;
+}
-- 
2.34.1




[PATCH 58/71] target/arm: Enable SME for -cpu max

2022-06-02 Thread Richard Henderson
Note that SME remains effectively disabled for user-only,
because we do not yet set CPACR_EL1.SMEN.  This needs to
wait until the kernel ABI is implemented.

Signed-off-by: Richard Henderson 
---
 docs/system/arm/emulation.rst |  4 
 target/arm/cpu64.c| 11 +++
 2 files changed, 15 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index 49cc3e8340..834289cb8e 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -63,6 +63,10 @@ the following architecture extensions:
 - FEAT_SHA512 (Advanced SIMD SHA512 instructions)
 - FEAT_SM3 (Advanced SIMD SM3 instructions)
 - FEAT_SM4 (Advanced SIMD SM4 instructions)
+- FEAT_SME (Scalable Matrix Extension)
+- FEAT_SME_FA64 (Full A64 instruction set in Streaming SVE mode)
+- FEAT_SME_F64F64 (Double-precision floating-point outer product instructions)
+- FEAT_SME_I16I64 (16-bit to 64-bit integer widening outer product instructions)
 - FEAT_SPECRES (Speculation restriction instructions)
 - FEAT_SSBS (Speculative Store Bypass Safe)
 - FEAT_TLBIOS (TLB invalidate instructions in Outer Shareable domain)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index aaf2c243d6..d77522e278 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -1017,6 +1017,7 @@ static void aarch64_max_initfn(Object *obj)
  * we do for EL2 with the virtualization=on property.
  */
 t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3);   /* FEAT_MTE3 */
+t = FIELD_DP64(t, ID_AA64PFR1, SME, 1);   /* FEAT_SME */
 t = FIELD_DP64(t, ID_AA64PFR1, CSV2_FRAC, 0); /* FEAT_CSV2_2 */
 cpu->isar.id_aa64pfr1 = t;
 
@@ -1067,6 +1068,16 @@ static void aarch64_max_initfn(Object *obj)
 t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5);/* FEAT_PMUv3p4 */
 cpu->isar.id_aa64dfr0 = t;
 
+t = cpu->isar.id_aa64smfr0;
+t = FIELD_DP64(t, ID_AA64SMFR0, F32F32, 1);   /* FEAT_SME */
+t = FIELD_DP64(t, ID_AA64SMFR0, B16F32, 1);   /* FEAT_SME */
+t = FIELD_DP64(t, ID_AA64SMFR0, F16F32, 1);   /* FEAT_SME */
+t = FIELD_DP64(t, ID_AA64SMFR0, I8I32, 0xf);  /* FEAT_SME */
+t = FIELD_DP64(t, ID_AA64SMFR0, F64F64, 1);   /* FEAT_SME_F64F64 */
+t = FIELD_DP64(t, ID_AA64SMFR0, I16I64, 0xf); /* FEAT_SME_I16I64 */
+t = FIELD_DP64(t, ID_AA64SMFR0, FA64, 1); /* FEAT_SME_FA64 */
+cpu->isar.id_aa64smfr0 = t;
+
 /* Replicate the same data to the 32-bit id registers.  */
 aa32_max_features(cpu);
 
-- 
2.34.1




[PATCH 50/71] target/arm: Implement FMOPA, FMOPS (non-widening)

2022-06-02 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sme.h|  5 +++
 target/arm/sme.decode  |  9 +
 target/arm/sme_helper.c| 67 ++
 target/arm/translate-sme.c | 33 +++
 4 files changed, 114 insertions(+)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index 6f0fce7e2c..727095a3eb 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -119,3 +119,8 @@ DEF_HELPER_FLAGS_5(sme_addha_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sme_addva_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sme_addha_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sme_addva_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_7(sme_fmopa_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_7(sme_fmopa_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index 8cb6c4053c..ba4774d174 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -64,3 +64,12 @@ ADDHA_s         11000000 10 01000 0 ... ... ..... 000 ..        @adda_32
 ADDVA_s         11000000 10 01000 1 ... ... ..... 000 ..        @adda_32
 ADDHA_d         11000000 11 01000 0 ... ... ..... 00 ...        @adda_64
 ADDVA_d         11000000 11 01000 1 ... ... ..... 00 ...        @adda_64
+
+### SME Outer Product
+
+&op             zad zn zm pm pn sub:bool
+@op_32          ........ ... zm:5 pm:3 pn:3 zn:5 sub:1 .. zad:2 &op
+@op_64          ........ ... zm:5 pm:3 pn:3 zn:5 sub:1 .  zad:3 &op
+
+FMOPA_s         10000000 100 ..... ... ... ..... . 00 ..        @op_32
+FMOPA_d         10000000 110 ..... ... ... ..... . 0 ...        @op_64
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
index b2b6380901..16655c86a2 100644
--- a/target/arm/sme_helper.c
+++ b/target/arm/sme_helper.c
@@ -25,6 +25,7 @@
 #include "exec/cpu_ldst.h"
 #include "exec/exec-all.h"
 #include "qemu/int128.h"
+#include "fpu/softfloat.h"
 #include "vec_internal.h"
 #include "sve_ldst_internal.h"
 
@@ -896,3 +897,69 @@ void HELPER(sme_addva_d)(void *vzda, void *vzn, void *vpn,
 }
 }
 }
+
+void HELPER(sme_fmopa_s)(void *vza, void *vzn, void *vzm, void *vpn,
+ void *vpm, void *vst, uint32_t desc)
+{
+intptr_t row, col, oprsz = simd_maxsz(desc);
+uint32_t neg = simd_data(desc) << 31;
+uint16_t *pn = vpn, *pm = vpm;
+
+bool save_dn = get_default_nan_mode(vst);
+set_default_nan_mode(true, vst);
+
+for (row = 0; row < oprsz; ) {
+uint16_t pa = pn[H2(row >> 4)];
+do {
+if (pa & 1) {
+void *vza_row = vza + row * sizeof(ARMVectorReg);
+uint32_t n = *(uint32_t *)(vzn + row) ^ neg;
+
+for (col = 0; col < oprsz; ) {
+uint16_t pb = pm[H2(col >> 4)];
+do {
+if (pb & 1) {
+uint32_t *a = vza_row + col;
+uint32_t *m = vzm + col;
+*a = float32_muladd(n, *m, *a, 0, vst);
+}
+col += 4;
+pb >>= 4;
+} while (col & 15);
+}
+}
+row += 4;
+pa >>= 4;
+} while (row & 15);
+}
+
+set_default_nan_mode(save_dn, vst);
+}
+
+void HELPER(sme_fmopa_d)(void *vza, void *vzn, void *vzm, void *vpn,
+ void *vpm, void *vst, uint32_t desc)
+{
+intptr_t row, col, oprsz = simd_oprsz(desc) / 8;
+uint64_t neg = (uint64_t)simd_data(desc) << 63;
+uint64_t *za = vza, *zn = vzn, *zm = vzm;
+uint8_t *pn = vpn, *pm = vpm;
+
+bool save_dn = get_default_nan_mode(vst);
+set_default_nan_mode(true, vst);
+
+for (row = 0; row < oprsz; ++row) {
+if (pn[H1(row)] & 1) {
+uint64_t *za_row = &za[row * sizeof(ARMVectorReg)];
+uint64_t n = zn[row] ^ neg;
+
+for (col = 0; col < oprsz; ++col) {
+if (pm[H1(col)] & 1) {
+uint64_t *a = &za_row[col];
+*a = float64_muladd(n, zm[col], *a, 0, vst);
+}
+}
+}
+}
+
+set_default_nan_mode(save_dn, vst);
+}
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
index e9676b2415..e6e4541e76 100644
--- a/target/arm/translate-sme.c
+++ b/target/arm/translate-sme.c
@@ -273,3 +273,36 @@ TRANS_FEAT(ADDHA_s, aa64_sme, do_adda, a, MO_32, gen_helper_sme_addha_s)
 TRANS_FEAT(ADDVA_s, aa64_sme, do_adda, a, MO_32, gen_helper_sme_addva_s)
 TRANS_FEAT(ADDHA_d, aa64_sme_i16i64, do_adda, a, MO_64, gen_helper_sme_addha_d)
 TRANS_FEAT(ADDVA_d, aa64_sme_i16i64, do_adda, a, MO_64, gen_helper_sme_addva_d)
+
+static bool do_outprod_fpst(DisasContext *s, arg_op *a, MemOp esz,
+   

[PATCH 52/71] target/arm: Implement FMOPA, FMOPS (widening)

2022-06-02 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sme.h|  2 ++
 target/arm/sme.decode  |  1 +
 target/arm/sme_helper.c| 74 ++
 target/arm/translate-sme.c |  2 ++
 4 files changed, 79 insertions(+)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index 6b36542133..ecc957be14 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -120,6 +120,8 @@ DEF_HELPER_FLAGS_5(sme_addva_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sme_addha_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sme_addva_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_7(sme_fmopa_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_7(sme_fmopa_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_7(sme_fmopa_d, TCG_CALL_NO_RWG,
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index afd9c0dffd..e8d27fd8a0 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -75,3 +75,4 @@ FMOPA_s         10000000 100 ..... ... ... ..... . 00 ..        @op_32
 FMOPA_d         10000000 110 ..... ... ... ..... . 0 ...        @op_64
 
 BFMOPA          10000001 100 ..... ... ... ..... . 00 ..        @op_32
+FMOPA_h         10000001 101 ..... ... ... ..... . 00 ..        @op_32
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
index 69e4252abc..0807fbc708 100644
--- a/target/arm/sme_helper.c
+++ b/target/arm/sme_helper.c
@@ -980,6 +980,80 @@ static inline uint32_t f16mop_adj_pair(uint32_t pair, uint32_t pg, uint32_t neg)
 return pair;
 }
 
+static float32 f16_dotadd(float32 sum, uint32_t e1, uint32_t e2,
+  float_status *s)
+{
+float64 e1r = float16_to_float64(e1 & 0xffff, true, s);
+float64 e1c = float16_to_float64(e1 >> 16, true, s);
+float64 e2r = float16_to_float64(e2 & 0xffff, true, s);
+float64 e2c = float16_to_float64(e2 >> 16, true, s);
+float64 t64;
+float32 t32;
+
+/*
+ * The ARM pseudocode function FPDot performs both multiplies
+ * and the add with a single rounding operation.  Emulate this
+ * by performing the first multiply in round-to-odd, then doing
+ * the second multiply as fused multiply-add, and rounding to
+ * float32 all in one step.
+ */
+FloatRoundMode old_rm = get_float_rounding_mode(s);
+set_float_rounding_mode(float_round_to_odd, s);
+
+t64 = float64_mul(e1r, e2r, s);
+
+set_float_rounding_mode(old_rm, s);
+
+t64 = float64r32_muladd(e1c, e2c, t64, 0, s);
+
+/* This conversion is exact, because we've already rounded. */
+t32 = float64_to_float32(t64, s);
+
+/* The final accumulation step is not fused. */
+return float32_add(sum, t32, s);
+}
+
+void HELPER(sme_fmopa_h)(void *vza, void *vzn, void *vzm, void *vpn,
+ void *vpm, void *vst, uint32_t desc)
+{
+intptr_t row, col, oprsz = simd_maxsz(desc);
+uint32_t neg = simd_data(desc) << 15;
+uint16_t *pn = vpn, *pm = vpm;
+
+bool save_dn = get_default_nan_mode(vst);
+set_default_nan_mode(true, vst);
+
+for (row = 0; row < oprsz; ) {
+uint16_t pa = pn[H2(row >> 4)];
+do {
+void *vza_row = vza + row * sizeof(ARMVectorReg);
+uint32_t n = *(uint32_t *)(vzn + row);
+
+n = f16mop_adj_pair(n, pa, neg);
+
+for (col = 0; col < oprsz; ) {
+uint16_t pb = pm[H2(col >> 4)];
+do {
+if ((pa & 0b0101) == 0b0101 || (pb & 0b0101) == 0b0101) {
+uint32_t *a = vza_row + col;
+uint32_t m = *(uint32_t *)(vzm + col);
+
+m = f16mop_adj_pair(m, pb, neg);
+*a = f16_dotadd(*a, n, m, vst);
+
+}
+col += 4;
+pb >>= 4;
+} while (col & 15);
+}
+row += 4;
+pa >>= 4;
+} while (row & 15);
+}
+
+set_default_nan_mode(save_dn, vst);
+}
+
 void HELPER(sme_bfmopa)(void *vza, void *vzn, void *vzm, void *vpn,
 void *vpm, uint32_t desc)
 {
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
index 581bf9174f..847f2274b1 100644
--- a/target/arm/translate-sme.c
+++ b/target/arm/translate-sme.c
@@ -328,6 +328,8 @@ static bool do_outprod_fpst(DisasContext *s, arg_op *a, MemOp esz,
 return true;
 }
 
+TRANS_FEAT(FMOPA_h, aa64_sme, do_outprod_fpst,
+   a, MO_32, gen_helper_sme_fmopa_h)
 TRANS_FEAT(FMOPA_s, aa64_sme, do_outprod_fpst,
a, MO_32, gen_helper_sme_fmopa_s)
 TRANS_FEAT(FMOPA_d, aa64_sme_f64f64, do_outprod_fpst,
-- 
2.34.1




[PATCH 41/71] target/arm: Add infrastructure for disas_sme

2022-06-02 Thread Richard Henderson
This includes the build rules for the decoder, and the
new file for translation, but excludes any instructions.

Signed-off-by: Richard Henderson 
---
 target/arm/translate-a64.h |  1 +
 target/arm/sme.decode  | 20 
 target/arm/translate-a64.c |  7 ++-
 target/arm/translate-sme.c | 35 +++
 target/arm/meson.build |  2 ++
 5 files changed, 64 insertions(+), 1 deletion(-)
 create mode 100644 target/arm/sme.decode
 create mode 100644 target/arm/translate-sme.c

diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index f0970c6b8c..789b6e8e78 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -146,6 +146,7 @@ static inline int pred_gvec_reg_size(DisasContext *s)
 }
 
 bool disas_sve(DisasContext *, uint32_t);
+bool disas_sme(DisasContext *, uint32_t);
 
 void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
new file mode 100644
index 00..c25c031a71
--- /dev/null
+++ b/target/arm/sme.decode
@@ -0,0 +1,20 @@
+# AArch64 SME instruction descriptions
+#
+#  Copyright (c) 2022 Linaro, Ltd
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
+
+#
+# This file is processed by scripts/decodetree.py
+#
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index b1d2840819..8a38fbc33b 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -14814,7 +14814,12 @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
 }
 
 switch (extract32(insn, 25, 4)) {
-case 0x0: case 0x1: case 0x3: /* UNALLOCATED */
+case 0x0:
+if (!disas_sme(s, insn)) {
+unallocated_encoding(s);
+}
+break;
+case 0x1: case 0x3: /* UNALLOCATED */
 unallocated_encoding(s);
 break;
 case 0x2:
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
new file mode 100644
index 00..786c93fb2d
--- /dev/null
+++ b/target/arm/translate-sme.c
@@ -0,0 +1,35 @@
+/*
+ * AArch64 SME translation
+ *
+ * Copyright (c) 2022 Linaro, Ltd
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "tcg/tcg-op.h"
+#include "tcg/tcg-op-gvec.h"
+#include "tcg/tcg-gvec-desc.h"
+#include "translate.h"
+#include "exec/helper-gen.h"
+#include "translate-a64.h"
+#include "fpu/softfloat.h"
+
+
+/*
+ * Include the generated decoder.
+ */
+
+#include "decode-sme.c.inc"
diff --git a/target/arm/meson.build b/target/arm/meson.build
index 02c91f72bb..c47d86c609 100644
--- a/target/arm/meson.build
+++ b/target/arm/meson.build
@@ -1,5 +1,6 @@
 gen = [
   decodetree.process('sve.decode', extra_args: '--decode=disas_sve'),
+  decodetree.process('sme.decode', extra_args: '--decode=disas_sme'),
  decodetree.process('neon-shared.decode', extra_args: '--decode=disas_neon_shared'),
   decodetree.process('neon-dp.decode', extra_args: '--decode=disas_neon_dp'),
   decodetree.process('neon-ls.decode', extra_args: '--decode=disas_neon_ls'),
@@ -50,6 +51,7 @@ arm_ss.add(when: 'TARGET_AARCH64', if_true: files(
   'sme_helper.c',
   'translate-a64.c',
   'translate-sve.c',
+  'translate-sme.c',
 ))
 
 arm_softmmu_ss = ss.source_set()
-- 
2.34.1




[PATCH 34/71] target/arm: Generalize cpu_arm_{get, set}_default_vec_len

2022-06-02 Thread Richard Henderson
Rename from cpu_arm_{get,set}_sve_default_vec_len,
and take the pointer to default_vq from opaque.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu64.c | 27 ++-
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index dcec0a6559..c5bfc3d082 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -638,11 +638,11 @@ static void cpu_arm_set_sve(Object *obj, bool value, Error **errp)
 
 #ifdef CONFIG_USER_ONLY
 /* Mirror linux /proc/sys/abi/sve_default_vector_length. */
-static void cpu_arm_set_sve_default_vec_len(Object *obj, Visitor *v,
-const char *name, void *opaque,
-Error **errp)
+static void cpu_arm_set_default_vec_len(Object *obj, Visitor *v,
+const char *name, void *opaque,
+Error **errp)
 {
-ARMCPU *cpu = ARM_CPU(obj);
+uint32_t *ptr_default_vq = opaque;
 int32_t default_len, default_vq, remainder;
 
 if (!visit_type_int32(v, name, &default_len, errp)) {
@@ -651,7 +651,7 @@ static void cpu_arm_set_sve_default_vec_len(Object *obj, Visitor *v,
 
 /* Undocumented, but the kernel allows -1 to indicate "maximum". */
 if (default_len == -1) {
-cpu->sve_default_vq = ARM_MAX_VQ;
+*ptr_default_vq = ARM_MAX_VQ;
 return;
 }
 
@@ -675,15 +675,15 @@ static void cpu_arm_set_sve_default_vec_len(Object *obj, Visitor *v,
 return;
 }
 
-cpu->sve_default_vq = default_vq;
+*ptr_default_vq = default_vq;
 }
 
-static void cpu_arm_get_sve_default_vec_len(Object *obj, Visitor *v,
-const char *name, void *opaque,
-Error **errp)
+static void cpu_arm_get_default_vec_len(Object *obj, Visitor *v,
+const char *name, void *opaque,
+Error **errp)
 {
-ARMCPU *cpu = ARM_CPU(obj);
-int32_t value = cpu->sve_default_vq * 16;
+uint32_t *ptr_default_vq = opaque;
+int32_t value = *ptr_default_vq * 16;
 
 visit_type_int32(v, name, &value, errp);
 }
@@ -706,8 +706,9 @@ void aarch64_add_sve_properties(Object *obj)
 #ifdef CONFIG_USER_ONLY
 /* Mirror linux /proc/sys/abi/sve_default_vector_length. */
 object_property_add(obj, "sve-default-vector-length", "int32",
-cpu_arm_get_sve_default_vec_len,
-cpu_arm_set_sve_default_vec_len, NULL, NULL);
+cpu_arm_get_default_vec_len,
+cpu_arm_set_default_vec_len, NULL,
+&cpu->sve_default_vq);
 #endif
 }
 
-- 
2.34.1




[PATCH 51/71] target/arm: Implement BFMOPA, BFMOPS

2022-06-02 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sme.h|  2 ++
 target/arm/sme.decode  |  2 ++
 target/arm/sme_helper.c| 52 ++
 target/arm/translate-sme.c | 29 +
 4 files changed, 85 insertions(+)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index 727095a3eb..6b36542133 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -124,3 +124,5 @@ DEF_HELPER_FLAGS_7(sme_fmopa_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_7(sme_fmopa_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sme_bfmopa, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index ba4774d174..afd9c0dffd 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -73,3 +73,5 @@ ADDVA_d         11000000 11 01000 1 ... ... ..... 00 ...        @adda_64
 
 FMOPA_s         10000000 100 ..... ... ... ..... . 00 ..        @op_32
 FMOPA_d         10000000 110 ..... ... ... ..... . 0 ...        @op_64
+
+BFMOPA          10000001 100 ..... ... ... ..... . 00 ..        @op_32
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
index 16655c86a2..69e4252abc 100644
--- a/target/arm/sme_helper.c
+++ b/target/arm/sme_helper.c
@@ -963,3 +963,55 @@ void HELPER(sme_fmopa_d)(void *vza, void *vzn, void *vzm, void *vpn,
 
 set_default_nan_mode(save_dn, vst);
 }
+
+/*
+ * Alter PAIR as needed for controlling predicates being false,
+ * and for NEG on an enabled row element.
+ */
+static inline uint32_t f16mop_adj_pair(uint32_t pair, uint32_t pg, uint32_t neg)
+{
+pair ^= neg;
+if (!(pg & 1)) {
+pair &= 0xffff0000u;
+}
+if (!(pg & 4)) {
+pair &= 0x0000ffffu;
+}
+return pair;
+}
+
+void HELPER(sme_bfmopa)(void *vza, void *vzn, void *vzm, void *vpn,
+void *vpm, uint32_t desc)
+{
+intptr_t row, col, oprsz = simd_maxsz(desc);
+uint32_t neg = simd_data(desc) << 15;
+uint16_t *pn = vpn, *pm = vpm;
+
+for (row = 0; row < oprsz; ) {
+uint16_t pa = pn[H2(row >> 4)];
+do {
+void *vza_row = vza + row * sizeof(ARMVectorReg);
+uint32_t n = *(uint32_t *)(vzn + row);
+
+n = f16mop_adj_pair(n, pa, neg);
+
+for (col = 0; col < oprsz; ) {
+uint16_t pb = pm[H2(col >> 4)];
+do {
+if ((pa & 0b0101) == 0b0101 || (pb & 0b0101) == 0b0101) {
+uint32_t *a = vza_row + col;
+uint32_t m = *(uint32_t *)(vzm + col);
+
+m = f16mop_adj_pair(m, pb, neg);
+*a = bfdotadd(*a, n, m);
+
+}
+col += 4;
+pb >>= 4;
+} while (col & 15);
+}
+row += 4;
+pa >>= 4;
+} while (row & 15);
+}
+}
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
index e6e4541e76..581bf9174f 100644
--- a/target/arm/translate-sme.c
+++ b/target/arm/translate-sme.c
@@ -274,6 +274,32 @@ TRANS_FEAT(ADDVA_s, aa64_sme, do_adda, a, MO_32, gen_helper_sme_addva_s)
 TRANS_FEAT(ADDHA_d, aa64_sme_i16i64, do_adda, a, MO_64, gen_helper_sme_addha_d)
 TRANS_FEAT(ADDVA_d, aa64_sme_i16i64, do_adda, a, MO_64, gen_helper_sme_addva_d)
 
+static bool do_outprod(DisasContext *s, arg_op *a, MemOp esz,
+   gen_helper_gvec_5 *fn)
+{
+uint32_t desc = simd_desc(s->svl, s->svl, a->sub);
+TCGv_ptr za, zn, zm, pn, pm;
+
+if (!sme_smza_enabled_check(s)) {
+return true;
+}
+
+/* Sum XZR+zad to find ZAd. */
+za = get_tile_rowcol(s, esz, 31, a->zad, false);
+zn = vec_full_reg_ptr(s, a->zn);
+zm = vec_full_reg_ptr(s, a->zm);
+pn = pred_full_reg_ptr(s, a->pn);
+pm = pred_full_reg_ptr(s, a->pm);
+
+fn(za, zn, zm, pn, pm, tcg_constant_i32(desc));
+
+tcg_temp_free_ptr(za);
+tcg_temp_free_ptr(zn);
+tcg_temp_free_ptr(zm);
+tcg_temp_free_ptr(pn);
+tcg_temp_free_ptr(pm);
+return true;
+}
+
 static bool do_outprod_fpst(DisasContext *s, arg_op *a, MemOp esz,
 gen_helper_gvec_5_ptr *fn)
 {
@@ -306,3 +332,6 @@ TRANS_FEAT(FMOPA_s, aa64_sme, do_outprod_fpst,
a, MO_32, gen_helper_sme_fmopa_s)
 TRANS_FEAT(FMOPA_d, aa64_sme_f64f64, do_outprod_fpst,
a, MO_64, gen_helper_sme_fmopa_d)
+
+/* TODO: FEAT_EBF16 */
+TRANS_FEAT(BFMOPA, aa64_sme, do_outprod, a, MO_32, gen_helper_sme_bfmopa)
-- 
2.34.1




[PATCH 39/71] target/arm: Add SVL to TB flags

2022-06-02 Thread Richard Henderson
We need SVL separate from VL for RDSVL et al., as well as for
ZA storage loads and stores, which do not require PSTATE.SM.
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   | 12 
 target/arm/translate.h |  1 +
 target/arm/helper.c|  8 +++-
 target/arm/translate-a64.c |  1 +
 4 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index e41a75a3a3..0c32c3afaa 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3292,6 +3292,7 @@ FIELD(TBFLAG_A64, MTE0_ACTIVE, 19, 1)
 FIELD(TBFLAG_A64, SMEEXC_EL, 20, 2)
 FIELD(TBFLAG_A64, PSTATE_SM, 22, 1)
 FIELD(TBFLAG_A64, PSTATE_ZA, 23, 1)
+FIELD(TBFLAG_A64, SVL, 24, 4)
 
 /*
  * Helpers for using the above.
@@ -3337,6 +3338,17 @@ static inline int sve_vq_cached(CPUARMState *env)
 return EX_TBFLAG_A64(env->hflags, VL) + 1;
 }
 
+/**
+ * sme_vq_cached
+ * @env: the cpu context
+ *
+ * Return the SVL cached within env->hflags, in units of quadwords.
+ */
+static inline int sme_vq_cached(CPUARMState *env)
+{
+return EX_TBFLAG_A64(env->hflags, SVL) + 1;
+}
+
 static inline bool bswap_code(bool sctlr_b)
 {
 #ifdef CONFIG_USER_ONLY
diff --git a/target/arm/translate.h b/target/arm/translate.h
index fbd6713572..1330281f8b 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -44,6 +44,7 @@ typedef struct DisasContext {
 int sve_excp_el; /* SVE exception EL or 0 if enabled */
 int sme_excp_el; /* SME exception EL or 0 if enabled */
 int vl;  /* current vector length in bytes */
+int svl; /* current streaming vector length in bytes */
 /* Flag indicating that exceptions from secure mode are routed to EL3. */
 bool secure_routed_to_el3;
 bool vfp_enabled; /* FP enabled via FPSCR.EN */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index cb78d2354a..c9db12d524 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -13874,7 +13874,13 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
 DP_TBFLAG_A64(flags, SVEEXC_EL, sve_el);
 }
 if (cpu_isar_feature(aa64_sme, env_archcpu(env))) {
-DP_TBFLAG_A64(flags, SMEEXC_EL, sme_exception_el(env, el));
+int sme_el = sme_exception_el(env, el);
+
+DP_TBFLAG_A64(flags, SMEEXC_EL, sme_el);
+if (sme_el == 0) {
+/* Similarly, do not compute SVL if SME is disabled. */
+DP_TBFLAG_A64(flags, SVL, sve_vqm1_for_el_sm(env, el, true));
+}
 if (FIELD_EX64(env->svcr, SVCR, SM)) {
 DP_TBFLAG_A64(flags, PSTATE_SM, 1);
 }
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 40f2e53983..b1d2840819 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -14652,6 +14652,7 @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
 dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
 dc->sme_excp_el = EX_TBFLAG_A64(tb_flags, SMEEXC_EL);
 dc->vl = (EX_TBFLAG_A64(tb_flags, VL) + 1) * 16;
+dc->svl = (EX_TBFLAG_A64(tb_flags, SVL) + 1) * 16;
 dc->pauth_active = EX_TBFLAG_A64(tb_flags, PAUTH_ACTIVE);
 dc->bt = EX_TBFLAG_A64(tb_flags, BT);
 dc->btype = EX_TBFLAG_A64(tb_flags, BTYPE);
-- 
2.34.1




[PATCH 45/71] target/arm: Implement SME MOVA

2022-06-02 Thread Richard Henderson
We can reuse the SVE functions for implementing moves to/from
horizontal tile slices, but we need new ones for moves to/from
vertical tile slices.

Signed-off-by: Richard Henderson 
---
 target/arm/helper-sme.h|  11 
 target/arm/helper-sve.h|   2 +
 target/arm/translate-a64.h |   9 +++
 target/arm/translate.h |   5 ++
 target/arm/sme.decode  |  15 +
 target/arm/sme_helper.c| 110 -
 target/arm/sve_helper.c|  12 
 target/arm/translate-a64.c |  21 +++
 target/arm/translate-sme.c | 105 +++
 9 files changed, 289 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index c4ee1f09e4..600346e08c 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -21,3 +21,14 @@ DEF_HELPER_FLAGS_2(set_pstate_sm, TCG_CALL_NO_RWG, void, env, i32)
 DEF_HELPER_FLAGS_2(set_pstate_za, TCG_CALL_NO_RWG, void, env, i32)
 
 DEF_HELPER_FLAGS_3(sme_zero, TCG_CALL_NO_RWG, void, env, i32, i32)
+
+DEF_HELPER_FLAGS_4(sme_mova_avz_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_zav_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_avz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_zav_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_avz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_zav_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_avz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_zav_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_avz_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sme_mova_zav_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index dc629f851a..ab0333400f 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -325,6 +325,8 @@ DEF_HELPER_FLAGS_5(sve_sel_zpzz_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_sel_zpzz_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_sel_zpzz_q, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_5(sve2_addp_zpzz_b, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index ec5d580ba0..c341c95582 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -31,6 +31,7 @@ bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
 bool sve_access_check(DisasContext *s);
 bool sme_enabled_check(DisasContext *s);
 bool sme_za_enabled_check(DisasContext *s);
+bool sme_smza_enabled_check(DisasContext *s);
 TCGv_i64 clean_data_tbi(DisasContext *s, TCGv_i64 addr);
 TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
 bool tag_checked, int log2_size);
@@ -147,6 +148,14 @@ static inline int pred_gvec_reg_size(DisasContext *s)
 return size_for_gvec(pred_full_reg_size(s));
 }
 
+/* Return a newly allocated pointer to the predicate register.  */
+static inline TCGv_ptr pred_full_reg_ptr(DisasContext *s, int regno)
+{
+TCGv_ptr ret = tcg_temp_new_ptr();
+tcg_gen_addi_ptr(ret, cpu_env, pred_full_reg_offset(s, regno));
+return ret;
+}
+
 bool disas_sve(DisasContext *, uint32_t);
 bool disas_sme(DisasContext *, uint32_t);
 
diff --git a/target/arm/translate.h b/target/arm/translate.h
index 775297aa40..d03afd0034 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -159,6 +159,11 @@ static inline int plus_2(DisasContext *s, int x)
 return x + 2;
 }
 
+static inline int plus_12(DisasContext *s, int x)
+{
+return x + 12;
+}
+
 static inline int times_2(DisasContext *s, int x)
 {
 return x * 2;
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index 6e4483fdce..241b4895b7 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -22,3 +22,18 @@
 ### SME Misc
 
 ZERO            11000000 00 001 00000000000 imm:8
+
+### SME Move into/from Array
+
+%mova_rs        13:2 !function=plus_12
+&mova           esz rs pg zr za_imm v:bool to_vec:bool
+
+MOVA            11000000 esz:2 00000 0 v:1 .. pg:3 zr:5 0 za_imm:4  \
+                &mova to_vec=0 rs=%mova_rs
+MOVA            11000000 11 00000 1 v:1 .. pg:3 zr:5 0 za_imm:4  \
+                &mova to_vec=0 rs=%mova_rs esz=4
+
+MOVA            11000000 esz:2 00001 0 v:1 .. pg:3 0 za_imm:4 zr:5  \
+                &mova to_vec=1 rs=%mova_rs
+MOVA            11000000 11 00001 1 v:1 .. pg:3 0 za_imm:4 zr:5  \
+                &mova to_vec=1 rs=%mova_rs esz=4
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
index 4172b788f9..8b73474eb0 100644
--- a/target/arm/sme_helper.c
+++ b/target/arm/sme_helper.c
@@ -19,8 +19,10 @@
 
 #include "qemu/osdep.h"
 #include "cpu.h"
-#include 

[PATCH 37/71] target/arm: Add cpu properties for SME

2022-06-02 Thread Richard Henderson
Mirror the properties for SVE.  The main difference is
that any arbitrary set of powers of 2 may be supported,
and not the stricter constraints that apply to SVE.

Include a property to control FEAT_SME_FA64, as failing
to restrict the runtime to the proper subset of insns
could be a major source of bugs.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |   2 +
 target/arm/internals.h |   1 +
 target/arm/cpu.c   |  14 +++--
 target/arm/cpu64.c | 114 +++--
 4 files changed, 124 insertions(+), 7 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 60f84ba033..d74c06e2f0 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1046,9 +1046,11 @@ struct ArchCPU {
 #ifdef CONFIG_USER_ONLY
 /* Used to set the default vector length at process start. */
 uint32_t sve_default_vq;
+uint32_t sme_default_vq;
 #endif
 
 ARMVQMap sve_vq;
+ARMVQMap sme_vq;
 
 /* Generic timer counter frequency, in Hz */
 uint64_t gt_cntfrq_hz;
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 756301b536..7e160d1349 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1310,6 +1310,7 @@ int arm_gdb_set_svereg(CPUARMState *env, uint8_t *buf, int reg);
 int aarch64_fpu_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg);
 int aarch64_fpu_gdb_set_reg(CPUARMState *env, uint8_t *buf, int reg);
 void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp);
+void arm_cpu_sme_finalize(ARMCPU *cpu, Error **errp);
 void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp);
 void arm_cpu_lpa2_finalize(ARMCPU *cpu, Error **errp);
 #endif
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index b5276fa944..75295a14a3 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -1122,11 +1122,13 @@ static void arm_cpu_initfn(Object *obj)
 #ifdef CONFIG_USER_ONLY
 # ifdef TARGET_AARCH64
 /*
- * The linux kernel defaults to 512-bit vectors, when sve is supported.
- * See documentation for /proc/sys/abi/sve_default_vector_length, and
- * our corresponding sve-default-vector-length cpu property.
+ * The linux kernel defaults to 512-bit for SVE, and 256-bit for SME.
+ * These values were chosen to fit within the default signal frame.
+ * See documentation for /proc/sys/abi/{sve,sme}_default_vector_length,
+ * and our corresponding cpu property.
  */
 cpu->sve_default_vq = 4;
+cpu->sme_default_vq = 2;
 # endif
 #else
 /* Our inbound IRQ and FIQ lines */
@@ -1429,6 +1431,12 @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
 return;
 }
 
+arm_cpu_sme_finalize(cpu, &local_err);
+if (local_err != NULL) {
+error_propagate(errp, local_err);
+return;
+}
+
 arm_cpu_pauth_finalize(cpu, &local_err);
 if (local_err != NULL) {
 error_propagate(errp, local_err);
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index 9ae9be6698..aaf2c243d6 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -589,10 +589,13 @@ static void cpu_arm_get_vq(Object *obj, Visitor *v, const char *name,
 ARMCPU *cpu = ARM_CPU(obj);
 ARMVQMap *vq_map = opaque;
 uint32_t vq = atoi(&name[3]) / 128;
+bool sve = vq_map == &cpu->sve_vq;
 bool value;
 
-/* All vector lengths are disabled when SVE is off. */
-if (!cpu_isar_feature(aa64_sve, cpu)) {
+/* All vector lengths are disabled when feature is off. */
+if (sve
+? !cpu_isar_feature(aa64_sve, cpu)
+: !cpu_isar_feature(aa64_sme, cpu)) {
 value = false;
 } else {
 value = extract32(vq_map->map, vq - 1, 1);
@@ -636,8 +639,80 @@ static void cpu_arm_set_sve(Object *obj, bool value, Error **errp)
 cpu->isar.id_aa64pfr0 = t;
 }
 
+void arm_cpu_sme_finalize(ARMCPU *cpu, Error **errp)
+{
+uint32_t vq_map = cpu->sme_vq.map;
+uint32_t vq_init = cpu->sme_vq.init;
+uint32_t vq_supported = cpu->sme_vq.supported;
+uint32_t vq;
+
+if (vq_map == 0) {
+if (!cpu_isar_feature(aa64_sme, cpu)) {
+cpu->isar.id_aa64smfr0 = 0;
+return;
+}
+
+/* TODO: KVM will require limitations via SMCR_EL2. */
+vq_map = vq_supported & ~vq_init;
+
+if (vq_map == 0) {
+vq = ctz32(vq_supported) + 1;
+error_setg(errp, "cannot disable sme%d", vq * 128);
+error_append_hint(errp, "All SME vector lengths are disabled.\n");
+error_append_hint(errp, "With SME enabled, at least one "
+  "vector length must be enabled.\n");
+return;
+}
+} else {
+if (!cpu_isar_feature(aa64_sme, cpu)) {
+vq = 32 - clz32(vq_map);
+error_setg(errp, "cannot enable sme%d", vq * 128);
+error_append_hint(errp, "SME must be enabled to enable "
+  "vector lengths.\n");
+error_append_hint(errp, "Add sme=on to the CPU property 

[PATCH 49/71] target/arm: Implement SME ADDHA, ADDVA

2022-06-02 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sme.h|  5 +++
 target/arm/sme.decode  | 11 +
 target/arm/sme_helper.c| 90 ++
 target/arm/translate-sme.c | 30 +
 4 files changed, 136 insertions(+)

diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
index 5cca01f372..6f0fce7e2c 100644
--- a/target/arm/helper-sme.h
+++ b/target/arm/helper-sme.h
@@ -114,3 +114,8 @@ DEF_HELPER_FLAGS_5(sme_st1q_be_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
 DEF_HELPER_FLAGS_5(sme_st1q_le_h_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
 DEF_HELPER_FLAGS_5(sme_st1q_be_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
 DEF_HELPER_FLAGS_5(sme_st1q_le_v_mte, TCG_CALL_NO_WG, void, env, ptr, ptr, tl, i32)
+
+DEF_HELPER_FLAGS_5(sme_addha_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sme_addva_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sme_addha_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sme_addva_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index f1ebd857a5..8cb6c4053c 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -53,3 +53,14 @@ LDST1   111 111 st:1 rm:5 v:1 .. pg:3 rn:5 0 za_imm:4  \
 
 LDR 111 100 0 00 .. 000 . 0 @ldstr
 STR 111 100 1 00 .. 000 . 0 @ldstr
+
+### SME Add Vector to Array
+
+&adda   zad zn pm pn
+@adda_32 .. . . pm:3 pn:3 zn:5 ... zad:2
+@adda_64 .. . . pm:3 pn:3 zn:5 ..  zad:3
+
+ADDHA_s 1100 10 01000 0 ... ... . 000 ..@adda_32
+ADDVA_s 1100 10 01000 1 ... ... . 000 ..@adda_32
+ADDHA_d 1100 11 01000 0 ... ... . 00 ...@adda_64
+ADDVA_d 1100 11 01000 1 ... ... . 00 ...@adda_64
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
index b32c8435cb..b2b6380901 100644
--- a/target/arm/sme_helper.c
+++ b/target/arm/sme_helper.c
@@ -806,3 +806,93 @@ DO_ST(q, _be, MO_128)
 DO_ST(q, _le, MO_128)
 
 #undef DO_ST
+
+void HELPER(sme_addha_s)(void *vzda, void *vzn, void *vpn,
+ void *vpm, uint32_t desc)
+{
+intptr_t row, col, oprsz = simd_oprsz(desc) / 4;
+uint64_t *pn = vpn, *pm = vpm;
+uint32_t * restrict zda = vzda, * restrict zn = vzn;
+
+for (row = 0; row < oprsz; ) {
+uint64_t pa = pn[row >> 4];
+do {
+if (pa & 1) {
+for (col = 0; col < oprsz; ) {
+uint64_t pb = pm[col >> 4];
+do {
+if (pb & 1) {
+zda[row * sizeof(ARMVectorReg) + col] += zn[col];
+}
+pb >>= 4;
+} while (++col & 15);
+}
+}
+pa >>= 4;
+} while (++row & 15);
+}
+}
+
+void HELPER(sme_addha_d)(void *vzda, void *vzn, void *vpn,
+ void *vpm, uint32_t desc)
+{
+intptr_t row, col, oprsz = simd_oprsz(desc) / 8;
+uint8_t *pn = vpn, *pm = vpm;
+uint64_t * restrict zda = vzda, * restrict zn = vzn;
+
+for (row = 0; row < oprsz; ++row) {
+if (pn[H1(row)] & 1) {
+for (col = 0; col < oprsz; ++col) {
+if (pm[H1(col)] & 1) {
+zda[row * sizeof(ARMVectorReg) + col] += zn[col];
+}
+}
+}
+}
+}
+
+void HELPER(sme_addva_s)(void *vzda, void *vzn, void *vpn,
+ void *vpm, uint32_t desc)
+{
+intptr_t row, col, oprsz = simd_oprsz(desc) / 4;
+uint64_t *pn = vpn, *pm = vpm;
+uint32_t * restrict zda = vzda, * restrict zn = vzn;
+
+for (row = 0; row < oprsz; ) {
+uint64_t pa = pn[row >> 4];
+do {
+if (pa & 1) {
+uint32_t zn_row = zn[row];
+for (col = 0; col < oprsz; ) {
+uint64_t pb = pm[col >> 4];
+do {
+if (pb & 1) {
+zda[row * sizeof(ARMVectorReg) + col] += zn_row;
+}
+pb >>= 4;
+} while (++col & 15);
+}
+}
+pa >>= 4;
+} while (++row & 15);
+}
+}
+
+void HELPER(sme_addva_d)(void *vzda, void *vzn, void *vpn,
+ void *vpm, uint32_t desc)
+{
+intptr_t row, col, oprsz = simd_oprsz(desc) / 8;
+uint8_t *pn = vpn, *pm = vpm;
+uint64_t * restrict zda = vzda, * restrict zn = vzn;
+
+for (row = 0; row < oprsz; ++row) {
+if (pn[H1(row)] & 1) {
+uint64_t zn_row = zn[row];
+for (col = 0; col < oprsz; ++col) {
+if (pm[H1(col)] & 1) {
+zda[row * 

[PATCH 42/71] target/arm: Trap AdvSIMD usage when Streaming SVE is active

2022-06-02 Thread Richard Henderson
This new behaviour is in the ARM pseudocode function
AArch64.CheckFPAdvSIMDEnabled, which applies to AArch32
via AArch32.CheckAdvSIMDOrFPEnabled when the EL to which
the trap would be delivered is in AArch64 mode.

Given that ARMv9 drops support for AArch32 outside EL0,
the trap EL detection ought to be trivially true, but
the pseudocode still contains a number of conditions,
and QEMU has not yet committed to dropping A32 support
for EL[12] when v9 features are present.

Since the computation of SME_TRAP_SIMD is necessarily
different for the two modes, we might as well preserve
bits within TBFLAG_ANY and allocate separate bits within
TBFLAG_A32 and TBFLAG_A64 instead.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  6 +++
 target/arm/translate.h |  3 ++
 target/arm/sme-fa64.decode | 89 ++
 target/arm/helper.c| 42 ++
 target/arm/translate-a64.c | 41 +-
 target/arm/translate-vfp.c | 13 ++
 target/arm/translate.c |  1 +
 target/arm/meson.build |  1 +
 8 files changed, 194 insertions(+), 2 deletions(-)
 create mode 100644 target/arm/sme-fa64.decode

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 0c32c3afaa..899ecb7c82 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3256,6 +3256,11 @@ FIELD(TBFLAG_A32, HSTR_ACTIVE, 9, 1)
  * the same thing as the current security state of the processor!
  */
 FIELD(TBFLAG_A32, NS, 10, 1)
+/*
+ * Indicates that SME Streaming mode is active, and SMCR_ELx.FA64 is not.
+ * This requires an SME trap from AArch32 mode when using NEON.
+ */
+FIELD(TBFLAG_A32, SME_TRAP_SIMD, 11, 1)
 
 /*
  * Bit usage when in AArch32 state, for M-profile only.
@@ -3293,6 +3298,7 @@ FIELD(TBFLAG_A64, SMEEXC_EL, 20, 2)
 FIELD(TBFLAG_A64, PSTATE_SM, 22, 1)
 FIELD(TBFLAG_A64, PSTATE_ZA, 23, 1)
 FIELD(TBFLAG_A64, SVL, 24, 4)
+FIELD(TBFLAG_A64, SME_TRAP_SIMD, 28, 1)
 
 /*
  * Helpers for using the above.
diff --git a/target/arm/translate.h b/target/arm/translate.h
index 1330281f8b..775297aa40 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -106,6 +106,9 @@ typedef struct DisasContext {
 bool pstate_sm;
 /* True if PSTATE.ZA is set. */
 bool pstate_za;
+/* True if AdvSIMD insns should raise an SME Streaming exception. */
+bool sme_trap_simd;
+bool sme_trap_this_insn;
 /* True if MVE insns are definitely not predicated by VPR or LTPSIZE */
 bool mve_no_pred;
 /*
diff --git a/target/arm/sme-fa64.decode b/target/arm/sme-fa64.decode
new file mode 100644
index 00..4c2569477d
--- /dev/null
+++ b/target/arm/sme-fa64.decode
@@ -0,0 +1,89 @@
+# AArch64 SME allowed instruction decoding
+#
+#  Copyright (c) 2022 Linaro, Ltd
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, see <https://www.gnu.org/licenses/>.
+
+#
+# This file is processed by scripts/decodetree.py
+#
+
+# These patterns are taken from Appendix E1.1 of DDI0616 A.a,
+# Arm Architecture Reference Manual Supplement,
+# The Scalable Matrix Extension (SME), for Armv9-A
+
+{
+  [
+OK  0-00 1110  0001 0010 11--     # SMOV W|Xd,Vn.B[0]
+OK  0-00 1110  0010 0010 11--     # SMOV W|Xd,Vn.H[0]
+OK  0100 1110  0100 0010 11--     # SMOV Xd,Vn.S[0]
+OK   1110  0001 0011 11--     # UMOV Wd,Vn.B[0]
+OK   1110  0010 0011 11--     # UMOV Wd,Vn.H[0]
+OK   1110  0100 0011 11--     # UMOV Wd,Vn.S[0]
+OK  0100 1110  1000 0011 11--     # UMOV Xd,Vn.D[0]
+  ]
+  FAIL  0--0 111-         # Advanced SIMD vector operations
+}
+
+{
+  [
+OK  0101 1110 --1-  11-1 11--     # FMULX/FRECPS/FRSQRTS (scalar)
+OK  0101 1110 -10-  00-1 11--     # FMULX/FRECPS/FRSQRTS (scalar, FP16)
+OK  01-1 1110 1-10 0001 11-1 10--     # FRECPE/FRSQRTE/FRECPX (scalar)
+OK  01-1 1110  1001 11-1 10--     # FRECPE/FRSQRTE/FRECPX (scalar, FP16)
+  ]
+  FAIL  01-1 111-         # Advanced SIMD single-element operations
+}
+
+FAIL0-00 110-         # Advanced SIMD structure load/store
+FAIL1100 1110         # Advanced SIMD cryptography extensions
+
+# These are the "avoidance of doubt" final table of Illegal Advanced SIMD 
instructions
+# We don't actually need 

[PATCH 31/71] target/arm: Move error for sve%d property to arm_cpu_sve_finalize

2022-06-02 Thread Richard Henderson
Keep all of the error messages together.  This does mean that
when setting many sve length properties we'll only generate
one error, but we only really need one.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu64.c | 15 +++
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index 51c5d8d4bc..e18f585fa7 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -487,8 +487,13 @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
   "using only sve properties.\n");
 } else {
 error_setg(errp, "cannot enable sve%d", vq * 128);
-error_append_hint(errp, "This CPU does not support "
-  "the vector length %d-bits.\n", vq * 128);
+if (vq_supported) {
+error_append_hint(errp, "This CPU does not support "
+  "the vector length %d-bits.\n", vq * 
128);
+} else {
+error_append_hint(errp, "SVE not supported by KVM "
+  "on this host\n");
+}
 }
 return;
 } else {
@@ -606,12 +611,6 @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
 return;
 }
 
-if (value && kvm_enabled() && !kvm_arm_sve_supported()) {
-error_setg(errp, "cannot enable %s", name);
-error_append_hint(errp, "SVE not supported by KVM on this host\n");
-return;
-}
-
 cpu->sve_vq_map = deposit32(cpu->sve_vq_map, vq - 1, 1, value);
 cpu->sve_vq_init |= 1 << (vq - 1);
 }
-- 
2.34.1




[PATCH 48/71] target/arm: Implement SME LDR, STR

2022-06-02 Thread Richard Henderson
We can reuse the SVE functions for LDR and STR, passing in the
base of the ZA vector and a zero offset.

Signed-off-by: Richard Henderson 
---
 target/arm/sme.decode  |  7 +++
 target/arm/translate-sme.c | 23 +++
 2 files changed, 30 insertions(+)

diff --git a/target/arm/sme.decode b/target/arm/sme.decode
index 900e3f2a07..f1ebd857a5 100644
--- a/target/arm/sme.decode
+++ b/target/arm/sme.decode
@@ -46,3 +46,10 @@ LDST1   111 0 esz:2 st:1 rm:5 v:1 .. pg:3 rn:5 0 za_imm:4  \
  rs=%mova_rs
 LDST1   111 111 st:1 rm:5 v:1 .. pg:3 rn:5 0 za_imm:4  \
  esz=4 rs=%mova_rs
+
+&ldstr  rv rn imm
+@ldstr  ... ... . .. .. ... rn:5 . imm:4 \
+ rv=%mova_rs
+
+LDR 111 100 0 00 .. 000 . 0 @ldstr
+STR 111 100 1 00 .. 000 . 0 @ldstr
diff --git a/target/arm/translate-sme.c b/target/arm/translate-sme.c
index 978af74d1d..c3e544d69c 100644
--- a/target/arm/translate-sme.c
+++ b/target/arm/translate-sme.c
@@ -220,3 +220,26 @@ static bool trans_LDST1(DisasContext *s, arg_LDST1 *a)
 tcg_temp_free_i64(addr);
 return true;
 }
+
+typedef void GenLdStR(DisasContext *, TCGv_ptr, int, int, int, int);
+
+static bool do_ldst_r(DisasContext *s, arg_ldstr *a, GenLdStR *fn)
+{
+int imm = a->imm;
+TCGv_ptr base;
+
+if (!sme_za_enabled_check(s)) {
+return true;
+}
+
+/* ZA[n] equates to ZA0H.B[n]. */
+base = get_tile_rowcol(s, MO_8, a->rv, imm, false);
+
+fn(s, base, 0, s->svl, a->rn, imm * s->svl);
+
+tcg_temp_free_ptr(base);
+return true;
+}
+
+TRANS_FEAT(LDR, aa64_sme, do_ldst_r, a, gen_sve_ldr)
+TRANS_FEAT(STR, aa64_sme, do_ldst_r, a, gen_sve_str)
-- 
2.34.1




[PATCH 33/71] target/arm: Generalize cpu_arm_{get,set}_vq

2022-06-02 Thread Richard Henderson
Rename from cpu_arm_{get,set}_sve_vq, and take the
ARMVQMap as the opaque parameter.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu64.c | 29 +++--
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index 0a2f4f3170..dcec0a6559 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -579,15 +579,15 @@ static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name,
 }
 
 /*
- * Note that cpu_arm_get/set_sve_vq cannot use the simpler
- * object_property_add_bool interface because they make use
- * of the contents of "name" to determine which bit on which
- * to operate.
+ * Note that cpu_arm_{get,set}_vq cannot use the simpler
+ * object_property_add_bool interface because they make use of the
+ * contents of "name" to determine which bit on which to operate.
  */
-static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
-   void *opaque, Error **errp)
+static void cpu_arm_get_vq(Object *obj, Visitor *v, const char *name,
+   void *opaque, Error **errp)
 {
 ARMCPU *cpu = ARM_CPU(obj);
+ARMVQMap *vq_map = opaque;
+uint32_t vq = atoi(&name[3]) / 128;
 bool value;
 
@@ -595,15 +595,15 @@ static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
 if (!cpu_isar_feature(aa64_sve, cpu)) {
 value = false;
 } else {
-value = extract32(cpu->sve_vq.map, vq - 1, 1);
+value = extract32(vq_map->map, vq - 1, 1);
 }
 visit_type_bool(v, name, &value, errp);
 }
 
-static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
-   void *opaque, Error **errp)
+static void cpu_arm_set_vq(Object *obj, Visitor *v, const char *name,
+   void *opaque, Error **errp)
 {
-ARMCPU *cpu = ARM_CPU(obj);
+ARMVQMap *vq_map = opaque;
 uint32_t vq = atoi(&name[3]) / 128;
 bool value;
 
@@ -611,8 +611,8 @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
 return;
 }
 
-cpu->sve_vq.map = deposit32(cpu->sve_vq.map, vq - 1, 1, value);
-cpu->sve_vq.init |= 1 << (vq - 1);
+vq_map->map = deposit32(vq_map->map, vq - 1, 1, value);
+vq_map->init |= 1 << (vq - 1);
 }
 
 static bool cpu_arm_get_sve(Object *obj, Error **errp)
@@ -691,6 +691,7 @@ static void cpu_arm_get_sve_default_vec_len(Object *obj, Visitor *v,
 
 void aarch64_add_sve_properties(Object *obj)
 {
+ARMCPU *cpu = ARM_CPU(obj);
 uint32_t vq;
 
 object_property_add_bool(obj, "sve", cpu_arm_get_sve, cpu_arm_set_sve);
@@ -698,8 +699,8 @@ void aarch64_add_sve_properties(Object *obj)
 for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
 char name[8];
 sprintf(name, "sve%d", vq * 128);
-object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
-cpu_arm_set_sve_vq, NULL, NULL);
+object_property_add(obj, name, "bool", cpu_arm_get_vq,
+cpu_arm_set_vq, NULL, &cpu->sve_vq);
 }
 
 #ifdef CONFIG_USER_ONLY
-- 
2.34.1




[PATCH 38/71] target/arm: Introduce sve_vqm1_for_el_sm

2022-06-02 Thread Richard Henderson
When Streaming SVE mode is enabled, the size is taken from
SMCR_ELx instead of ZCR_ELx.  The format is shared, but the
set of vector lengths is not.  Further, Streaming SVE does
not require any particular length to be supported.

Adjust sve_vqm1_for_el to pass the current value of PSTATE.SM
to the new function.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h|  9 +++--
 target/arm/helper.c | 32 +---
 2 files changed, 32 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index d74c06e2f0..e41a75a3a3 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1140,13 +1140,18 @@ int sve_exception_el(CPUARMState *env, int cur_el);
 int sme_exception_el(CPUARMState *env, int cur_el);
 
 /**
- * sve_vqm1_for_el:
+ * sve_vqm1_for_el_sm:
  * @env: CPUARMState
  * @el: exception level
+ * @sm: streaming mode
  *
- * Compute the current SVE vector length for @el, in units of
+ * Compute the current vector length for @el & @sm, in units of
  * Quadwords Minus 1 -- the same scale used for ZCR_ELx.LEN.
+ * If @sm, compute for SVL, otherwise NVL.
  */
+uint32_t sve_vqm1_for_el_sm(CPUARMState *env, int el, bool sm);
+
+/* Likewise, but using @sm = PSTATE.SM. */
 uint32_t sve_vqm1_for_el(CPUARMState *env, int el);
 
 static inline bool is_a64(CPUARMState *env)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 2e7669180f..cb78d2354a 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6276,23 +6276,41 @@ int sme_exception_el(CPUARMState *env, int el)
 /*
  * Given that SVE is enabled, return the vector length for EL.
  */
-uint32_t sve_vqm1_for_el(CPUARMState *env, int el)
+uint32_t sve_vqm1_for_el_sm(CPUARMState *env, int el, bool sm)
 {
 ARMCPU *cpu = env_archcpu(env);
-uint32_t len = cpu->sve_max_vq - 1;
+uint64_t *cr = env->vfp.zcr_el;
+uint32_t map = cpu->sve_vq.map;
+uint32_t len = ARM_MAX_VQ - 1;
+
+if (sm) {
+cr = env->vfp.smcr_el;
+map = cpu->sme_vq.map;
+}
 
 if (el <= 1 && !el_is_in_host(env, el)) {
-len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[1]);
+len = MIN(len, 0xf & (uint32_t)cr[1]);
 }
 if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) {
-len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[2]);
+len = MIN(len, 0xf & (uint32_t)cr[2]);
 }
 if (arm_feature(env, ARM_FEATURE_EL3)) {
-len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
+len = MIN(len, 0xf & (uint32_t)cr[3]);
 }
 
-len = 31 - clz32(cpu->sve_vq.map & MAKE_64BIT_MASK(0, len + 1));
-return len;
+map &= MAKE_64BIT_MASK(0, len + 1);
+if (map != 0) {
+return 31 - clz32(map);
+}
+
+/* Bit 0 is always set for Normal SVE -- not so for Streaming SVE. */
+assert(sm);
+return ctz32(cpu->sme_vq.map);
+}
+
+uint32_t sve_vqm1_for_el(CPUARMState *env, int el)
+{
+return sve_vqm1_for_el_sm(env, el, FIELD_EX64(env->svcr, SVCR, SM));
 }
 
 static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
-- 
2.34.1




[PATCH 27/71] target/arm: Add SMIDR_EL1, SMPRI_EL1, SMPRIMAP_EL2

2022-06-02 Thread Richard Henderson
Implement the streaming mode identification register, and the
two streaming priority registers.  For QEMU, they are all RES0.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 33 +
 1 file changed, 33 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 4149570b95..f852fd7644 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6355,6 +6355,18 @@ static CPAccessResult access_tpidr2(CPUARMState *env, const ARMCPRegInfo *ri,
 return CP_ACCESS_OK;
 }
 
+static CPAccessResult access_esm(CPUARMState *env, const ARMCPRegInfo *ri,
+ bool isread)
+{
+/* TODO: FEAT_FGT for SMPRI_EL1 but not SMPRIMAP_EL2 */
+if (arm_current_el(env) < 3
+&& arm_feature(env, ARM_FEATURE_EL3)
+&& !FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, ESM)) {
+return CP_ACCESS_TRAP_EL3;
+}
+return CP_ACCESS_OK;
+}
+
 static void svcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
uint64_t value)
 {
@@ -6412,6 +6424,27 @@ static const ARMCPRegInfo sme_reginfo[] = {
   .access = PL3_RW, .type = ARM_CP_SME,
   .fieldoffset = offsetof(CPUARMState, vfp.smcr_el[3]),
   .writefn = smcr_write, .raw_writefn = raw_write },
+{ .name = "SMIDR_EL1", .state = ARM_CP_STATE_AA64,
+  .opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 6,
+  .access = PL1_RW, .accessfn = access_aa64_tid1,
+  /*
+   * IMPLEMENTOR = 0 (software)
+   * REVISION= 0 (implementation defined)
+   * SMPS= 0 (no streaming execution priority in QEMU)
+   * AFFINITY= 0 (streaming sve mode not shared with other PEs)
+   */
+  .type = ARM_CP_CONST, .resetvalue = 0, },
+/*
+ * Because SMIDR_EL1.SMPS is 0, SMPRI_EL1 and SMPRIMAP_EL2 are RES 0.
+ */
+{ .name = "SMPRI_EL1", .state = ARM_CP_STATE_AA64,
+  .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 4,
+  .access = PL1_RW, .accessfn = access_esm,
+  .type = ARM_CP_CONST, .resetvalue = 0 },
+{ .name = "SMPRIMAP_EL2", .state = ARM_CP_STATE_AA64,
+  .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 5,
+  .access = PL2_RW, .accessfn = access_esm,
+  .type = ARM_CP_CONST, .resetvalue = 0 },
 };
 #endif /* TARGET_AARCH64 */
 
-- 
2.34.1




[PATCH 29/71] target/arm: Add the SME ZA storage to CPUARMState

2022-06-02 Thread Richard Henderson
Place this late in the resettable section of the structure,
to keep the most common element offsets from being > 64k.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h |  8 
 target/arm/machine.c | 36 
 2 files changed, 44 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 9bd8058afe..1bc7de1da1 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -694,6 +694,14 @@ typedef struct CPUArchState {
 } keys;
 
 uint64_t scxtnum_el[4];
+
+/*
+ * SME ZA storage -- 256 x 256 byte array, with bytes in host word order,
+ * as we do with vfp.zregs[].  Because this is so large, keep this toward
+ * the end of the reset area, to keep the offsets into the rest of the
+ * structure smaller.
+ */
+ARMVectorReg zarray[ARM_MAX_VQ * 16];
 #endif
 
 #if defined(CONFIG_USER_ONLY)
diff --git a/target/arm/machine.c b/target/arm/machine.c
index 285e387d2c..d9dff6576d 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -167,6 +167,39 @@ static const VMStateDescription vmstate_sve = {
 VMSTATE_END_OF_LIST()
 }
 };
+
+static const VMStateDescription vmstate_za_row = {
+.name = "cpu/sme/za_row",
+.version_id = 1,
+.minimum_version_id = 1,
+.fields = (VMStateField[]) {
+VMSTATE_UINT64_ARRAY(d, ARMVectorReg, ARM_MAX_VQ * 2),
+VMSTATE_END_OF_LIST()
+}
+};
+
+static bool za_needed(void *opaque)
+{
+ARMCPU *cpu = opaque;
+
+/*
+ * When ZA storage is disabled, its contents are discarded.
+ * It will be zeroed when ZA storage is re-enabled.
+ */
+return FIELD_EX64(cpu->env.svcr, SVCR, ZA);
+}
+
+static const VMStateDescription vmstate_za = {
+.name = "cpu/sme",
+.version_id = 1,
+.minimum_version_id = 1,
+.needed = za_needed,
+.fields = (VMStateField[]) {
+VMSTATE_STRUCT_ARRAY(env.zarray, ARMCPU, ARM_MAX_VQ * 16, 0,
+ vmstate_za_row, ARMVectorReg),
+VMSTATE_END_OF_LIST()
+}
+};
 #endif /* AARCH64 */
 
 static bool serror_needed(void *opaque)
@@ -887,6 +920,9 @@ const VMStateDescription vmstate_arm_cpu = {
 #endif
 &vmstate_serror,
 &vmstate_irq_line_state,
+#ifdef TARGET_AARCH64
+&vmstate_za,
+#endif
 NULL
 }
 };
-- 
2.34.1




[PATCH 43/71] target/arm: Implement SME RDSVL, ADDSVL, ADDSPL

2022-06-02 Thread Richard Henderson
These SME instructions are nominally within the SVE decode space,
so we add them to sve.decode and translate-sve.c.

Signed-off-by: Richard Henderson 
---
 target/arm/translate-a64.h |  1 +
 target/arm/sve.decode  |  5 -
 target/arm/translate-a64.c | 15 +++
 target/arm/translate-sve.c | 38 ++
 4 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index 789b6e8e78..6bd1b2eb4b 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -29,6 +29,7 @@ void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v);
 bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
 unsigned int imms, unsigned int immr);
 bool sve_access_check(DisasContext *s);
+bool sme_enabled_check(DisasContext *s);
 TCGv_i64 clean_data_tbi(DisasContext *s, TCGv_i64 addr);
 TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
 bool tag_checked, int log2_size);
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index a54feb2f61..bbdaac6ac7 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -449,14 +449,17 @@ INDEX_ri0100 esz:2 1 imm:s5 010001 rn:5 rd:5
 # SVE index generation (register start, register increment)
 INDEX_rr0100 .. 1 . 010011 . .  @rd_rn_rm
 
-### SVE Stack Allocation Group
+### SVE / Streaming SVE Stack Allocation Group
 
 # SVE stack frame adjustment
 ADDVL   0100 001 . 01010 .. .   @rd_rn_i6
+ADDSVL  0100 001 . 01011 .. .   @rd_rn_i6
 ADDPL   0100 011 . 01010 .. .   @rd_rn_i6
+ADDSPL  0100 011 . 01011 .. .   @rd_rn_i6
 
 # SVE stack frame size
 RDVL0100 101 1 01010 imm:s6 rd:5
+RDSVL   0100 101 1 01011 imm:s6 rd:5
 
 ### SVE Bitwise Shift - Unpredicated Group
 
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 029c0a917c..222f93d42d 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -1216,6 +1216,21 @@ static bool sme_access_check(DisasContext *s)
 return true;
 }
 
+/* Note that this function corresponds to CheckSMEEnabled. */
+bool sme_enabled_check(DisasContext *s)
+{
+/*
+ * Note that unlike sve_excp_el, we have not constrained sme_excp_el
+ * to be zero when fp_excp_el has priority.  This is because we need
+ * sme_excp_el by itself for cpregs access checks.
+ */
+if (!s->fp_excp_el || s->sme_excp_el < s->fp_excp_el) {
+s->fp_access_checked = true;
+return sme_access_check(s);
+}
+return fp_access_check_only(s);
+}
+
 /*
  * This utility function is for doing register extension with an
  * optional shift. You will likely want to pass a temporary for the
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 62b5f3040c..13bdd027a5 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -1286,6 +1286,19 @@ static bool trans_ADDVL(DisasContext *s, arg_ADDVL *a)
 return true;
 }
 
+static bool trans_ADDSVL(DisasContext *s, arg_ADDSVL *a)
+{
+if (!dc_isar_feature(aa64_sme, s)) {
+return false;
+}
+if (sme_enabled_check(s)) {
+TCGv_i64 rd = cpu_reg_sp(s, a->rd);
+TCGv_i64 rn = cpu_reg_sp(s, a->rn);
+tcg_gen_addi_i64(rd, rn, a->imm * s->svl);
+}
+return true;
+}
+
 static bool trans_ADDPL(DisasContext *s, arg_ADDPL *a)
 {
 if (!dc_isar_feature(aa64_sve, s)) {
@@ -1299,6 +1312,19 @@ static bool trans_ADDPL(DisasContext *s, arg_ADDPL *a)
 return true;
 }
 
+static bool trans_ADDSPL(DisasContext *s, arg_ADDSPL *a)
+{
+if (!dc_isar_feature(aa64_sme, s)) {
+return false;
+}
+if (sme_enabled_check(s)) {
+TCGv_i64 rd = cpu_reg_sp(s, a->rd);
+TCGv_i64 rn = cpu_reg_sp(s, a->rn);
+tcg_gen_addi_i64(rd, rn, a->imm * (s->svl / 8));
+}
+return true;
+}
+
 static bool trans_RDVL(DisasContext *s, arg_RDVL *a)
 {
 if (!dc_isar_feature(aa64_sve, s)) {
@@ -1311,6 +1337,18 @@ static bool trans_RDVL(DisasContext *s, arg_RDVL *a)
 return true;
 }
 
+static bool trans_RDSVL(DisasContext *s, arg_RDSVL *a)
+{
+if (!dc_isar_feature(aa64_sme, s)) {
+return false;
+}
+if (sme_enabled_check(s)) {
+TCGv_i64 reg = cpu_reg(s, a->rd);
+tcg_gen_movi_i64(reg, a->imm * s->svl);
+}
+return true;
+}
+
 /*
  *** SVE Compute Vector Address Group
  */
-- 
2.34.1




[PATCH 26/71] target/arm: Add SMCR_ELx

2022-06-02 Thread Richard Henderson
These cpregs control the streaming vector length and whether the
full a64 instruction set is allowed while in streaming mode.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h|  8 ++--
 target/arm/helper.c | 41 +
 2 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 31b764556c..1ae1b7122b 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -669,8 +669,8 @@ typedef struct CPUArchState {
 float_status standard_fp_status;
 float_status standard_fp_status_f16;
 
-/* ZCR_EL[1-3] */
-uint64_t zcr_el[4];
+uint64_t zcr_el[4];   /* ZCR_EL[1-3] */
+uint64_t smcr_el[4];  /* SMCR_EL[1-3] */
 } vfp;
 uint64_t exclusive_addr;
 uint64_t exclusive_val;
@@ -1434,6 +1434,10 @@ FIELD(CPTR_EL3, TCPAC, 31, 1)
 FIELD(SVCR, SM, 0, 1)
 FIELD(SVCR, ZA, 1, 1)
 
+/* Fields for SMCR_ELx. */
+FIELD(SMCR, LEN, 0, 4)
+FIELD(SMCR, FA64, 31, 1)
+
 /* Write a new value to v7m.exception, thus transitioning into or out
  * of Handler mode; this may result in a change of active stack pointer.
  */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 366420385a..4149570b95 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -5883,6 +5883,8 @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
  */
 { K(3, 0,  1, 2, 0), K(3, 4,  1, 2, 0), K(3, 5, 1, 2, 0),
   "ZCR_EL1", "ZCR_EL2", "ZCR_EL12", isar_feature_aa64_sve },
+{ K(3, 0,  1, 2, 6), K(3, 4,  1, 2, 6), K(3, 5, 1, 2, 6),
+  "SMCR_EL1", "SMCR_EL2", "SMCR_EL12", isar_feature_aa64_sme },
 
 { K(3, 0,  5, 6, 0), K(3, 4,  5, 6, 0), K(3, 5, 5, 6, 0),
   "TFSR_EL1", "TFSR_EL2", "TFSR_EL12", isar_feature_aa64_mte },
@@ -6361,6 +6363,30 @@ static void svcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
 env->svcr = value;
 }
 
+static void smcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
+   uint64_t value)
+{
+int cur_el = arm_current_el(env);
+int old_len = sve_vqm1_for_el(env, cur_el);
+int new_len;
+
+QEMU_BUILD_BUG_ON(ARM_MAX_VQ > R_SMCR_LEN_MASK + 1);
+value &= R_SMCR_LEN_MASK | R_SMCR_FA64_MASK;
+raw_write(env, ri, value);
+
+/*
+ * Note that it is CONSTRAINED UNPREDICTABLE what happens to ZA storage
+ * when SVL is widened (old values kept, or zeros).  Choose to keep the
+ * current values for simplicity.  But for QEMU internals, we must still
+ * apply the narrower SVL to the Zregs and Pregs -- see the comment
+ * above aarch64_sve_narrow_vq.
+ */
+new_len = sve_vqm1_for_el(env, cur_el);
+if (new_len < old_len) {
+aarch64_sve_narrow_vq(env, new_len + 1);
+}
+}
+
 static const ARMCPRegInfo sme_reginfo[] = {
 { .name = "TPIDR2_EL0", .state = ARM_CP_STATE_AA64,
   .opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 5,
@@ -6371,6 +6397,21 @@ static const ARMCPRegInfo sme_reginfo[] = {
   .access = PL0_RW, .type = ARM_CP_SME,
   .fieldoffset = offsetof(CPUARMState, svcr),
   .writefn = svcr_write, .raw_writefn = raw_write },
+{ .name = "SMCR_EL1", .state = ARM_CP_STATE_AA64,
+  .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 6,
+  .access = PL1_RW, .type = ARM_CP_SME,
+  .fieldoffset = offsetof(CPUARMState, vfp.smcr_el[1]),
+  .writefn = smcr_write, .raw_writefn = raw_write },
+{ .name = "SMCR_EL2", .state = ARM_CP_STATE_AA64,
+  .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 6,
+  .access = PL2_RW, .type = ARM_CP_SME,
+  .fieldoffset = offsetof(CPUARMState, vfp.smcr_el[2]),
+  .writefn = smcr_write, .raw_writefn = raw_write },
+{ .name = "SMCR_EL3", .state = ARM_CP_STATE_AA64,
+  .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 6,
+  .access = PL3_RW, .type = ARM_CP_SME,
+  .fieldoffset = offsetof(CPUARMState, vfp.smcr_el[3]),
+  .writefn = smcr_write, .raw_writefn = raw_write },
 };
 #endif /* TARGET_AARCH64 */
 
-- 
2.34.1




[PATCH 25/71] target/arm: Add SVCR

2022-06-02 Thread Richard Henderson
This cpreg is used to access two new bits of PSTATE
that are not visible via any other mechanism.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h|  6 ++
 target/arm/helper.c | 13 +
 2 files changed, 19 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 31f812eda7..31b764556c 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -258,6 +258,7 @@ typedef struct CPUArchState {
  *  nRW (also known as M[4]) is kept, inverted, in env->aarch64
  *  DAIF (exception masks) are kept in env->daif
  *  BTYPE is kept in env->btype
+ *  SM and ZA are kept in env->svcr
  *  all other bits are stored in their correct places in env->pstate
  */
 uint32_t pstate;
@@ -292,6 +293,7 @@ typedef struct CPUArchState {
 uint32_t condexec_bits; /* IT bits.  cpsr[15:10,26:25].  */
 uint32_t btype;  /* BTI branch type.  spsr[11:10].  */
 uint64_t daif; /* exception masks, in the bits they are in PSTATE */
+uint64_t svcr; /* PSTATE.{SM,ZA} in the bits they are in SVCR */
 
 uint64_t elr_el[4]; /* AArch64 exception link regs  */
 uint64_t sp_el[4]; /* AArch64 banked stack pointers */
@@ -1428,6 +1430,10 @@ FIELD(CPTR_EL3, TCPAC, 31, 1)
 #define PSTATE_MODE_EL1t 4
 #define PSTATE_MODE_EL0t 0
 
+/* PSTATE bits that are accessed via SVCR and not stored in SPSR_ELx. */
+FIELD(SVCR, SM, 0, 1)
+FIELD(SVCR, ZA, 1, 1)
+
 /* Write a new value to v7m.exception, thus transitioning into or out
  * of Handler mode; this may result in a change of active stack pointer.
  */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 98de2c797f..366420385a 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6353,11 +6353,24 @@ static CPAccessResult access_tpidr2(CPUARMState *env, const ARMCPRegInfo *ri,
 return CP_ACCESS_OK;
 }
 
+static void svcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
+   uint64_t value)
+{
+value &= R_SVCR_SM_MASK | R_SVCR_ZA_MASK;
+/* TODO: Side effects. */
+env->svcr = value;
+}
+
 static const ARMCPRegInfo sme_reginfo[] = {
 { .name = "TPIDR2_EL0", .state = ARM_CP_STATE_AA64,
   .opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 5,
   .access = PL0_RW, .accessfn = access_tpidr2,
   .fieldoffset = offsetof(CPUARMState, cp15.tpidr2_el0) },
+{ .name = "SVCR", .state = ARM_CP_STATE_AA64,
+  .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 2,
+  .access = PL0_RW, .type = ARM_CP_SME,
+  .fieldoffset = offsetof(CPUARMState, svcr),
+  .writefn = svcr_write, .raw_writefn = raw_write },
 };
 #endif /* TARGET_AARCH64 */
 
-- 
2.34.1
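For readers following along: the effect of svcr_write() above is a plain mask down to the two architected bits, since everything else in SVCR is RES0 until the side effects are filled in. A stand-alone sketch, with the R_SVCR_*_MASK values written out as the FIELD(SVCR, SM, 0, 1) and FIELD(SVCR, ZA, 1, 1) declarations expand to (the helper name here is illustrative, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* What FIELD(SVCR, SM, 0, 1) and FIELD(SVCR, ZA, 1, 1) expand to. */
#define R_SVCR_SM_MASK 0x1ull
#define R_SVCR_ZA_MASK 0x2ull

/* Illustrative stand-in for the masking step of svcr_write(). */
static uint64_t svcr_masked(uint64_t value)
{
    /* All bits other than SM and ZA are RES0 on write. */
    return value & (R_SVCR_SM_MASK | R_SVCR_ZA_MASK);
}
```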




[PATCH 40/71] target/arm: Move pred_{full,gvec}_reg_{offset,size} to translate-a64.h

2022-06-02 Thread Richard Henderson
We will need these functions in translate-sme.c.

Signed-off-by: Richard Henderson 
---
 target/arm/translate-a64.h | 38 ++
 target/arm/translate-sve.c | 36 
 2 files changed, 38 insertions(+), 36 deletions(-)

diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index dbc917ee65..f0970c6b8c 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -107,6 +107,44 @@ static inline int vec_full_reg_size(DisasContext *s)
 return s->vl;
 }
 
+/*
+ * Return the offset info CPUARMState of the predicate vector register Pn.
+ * Note for this purpose, FFR is P16.
+ */
+static inline int pred_full_reg_offset(DisasContext *s, int regno)
+{
+return offsetof(CPUARMState, vfp.pregs[regno]);
+}
+
+/* Return the byte size of the whole predicate register, VL / 64.  */
+static inline int pred_full_reg_size(DisasContext *s)
+{
+return s->vl >> 3;
+}
+
+/*
+ * Round up the size of a register to a size allowed by
+ * the tcg vector infrastructure.  Any operation which uses this
+ * size may assume that the bits above pred_full_reg_size are zero,
+ * and must leave them the same way.
+ *
+ * Note that this is not needed for the vector registers as they
+ * are always properly sized for tcg vectors.
+ */
+static inline int size_for_gvec(int size)
+{
+if (size <= 8) {
+return 8;
+} else {
+return QEMU_ALIGN_UP(size, 16);
+}
+}
+
+static inline int pred_gvec_reg_size(DisasContext *s)
+{
+return size_for_gvec(pred_full_reg_size(s));
+}
+
 bool disas_sve(DisasContext *, uint32_t);
 
 void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 67761bf2cc..62b5f3040c 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -100,42 +100,6 @@ static inline int msz_dtype(DisasContext *s, int msz)
  * Implement all of the translator functions referenced by the decoder.
  */
 
-/* Return the offset info CPUARMState of the predicate vector register Pn.
- * Note for this purpose, FFR is P16.
- */
-static inline int pred_full_reg_offset(DisasContext *s, int regno)
-{
-return offsetof(CPUARMState, vfp.pregs[regno]);
-}
-
-/* Return the byte size of the whole predicate register, VL / 64.  */
-static inline int pred_full_reg_size(DisasContext *s)
-{
-return s->vl >> 3;
-}
-
-/* Round up the size of a register to a size allowed by
- * the tcg vector infrastructure.  Any operation which uses this
- * size may assume that the bits above pred_full_reg_size are zero,
- * and must leave them the same way.
- *
- * Note that this is not needed for the vector registers as they
- * are always properly sized for tcg vectors.
- */
-static int size_for_gvec(int size)
-{
-if (size <= 8) {
-return 8;
-} else {
-return QEMU_ALIGN_UP(size, 16);
-}
-}
-
-static int pred_gvec_reg_size(DisasContext *s)
-{
-return size_for_gvec(pred_full_reg_size(s));
-}
-
 /* Invoke an out-of-line helper on 2 Zregs. */
 static bool gen_gvec_ool_zz(DisasContext *s, gen_helper_gvec_2 *fn,
 int rd, int rn, int data)
-- 
2.34.1
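The size_for_gvec() rounding being moved above is easy to sanity-check on its own: predicate registers are VL/64 bytes, so for example VL = 256 bits gives a 4-byte predicate that is still padded to an 8-byte gvec operand. A stand-alone sketch, with an equivalent of QEMU_ALIGN_UP reproduced inline so it compiles outside the tree:

```c
#include <assert.h>

/* Equivalent of QEMU's QEMU_ALIGN_UP from osdep.h. */
#define QEMU_ALIGN_UP(n, m) (((n) + (m) - 1) / (m) * (m))

/* Mirror of size_for_gvec(): tcg vector operands are 8 bytes minimum,
 * and above that must be a multiple of 16 bytes. */
static int size_for_gvec(int size)
{
    if (size <= 8) {
        return 8;
    } else {
        return QEMU_ALIGN_UP(size, 16);
    }
}
```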




[PATCH 23/71] target/arm: Add syn_smetrap

2022-06-02 Thread Richard Henderson
This will be used for raising various traps for SME.

Signed-off-by: Richard Henderson 
---
 target/arm/syndrome.h | 13 +
 1 file changed, 13 insertions(+)

diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
index 0cb26dde7d..4792df0f0f 100644
--- a/target/arm/syndrome.h
+++ b/target/arm/syndrome.h
@@ -48,6 +48,7 @@ enum arm_exception_class {
 EC_AA64_SMC   = 0x17,
 EC_SYSTEMREGISTERTRAP = 0x18,
 EC_SVEACCESSTRAP  = 0x19,
+EC_SMETRAP= 0x1d,
 EC_INSNABORT  = 0x20,
 EC_INSNABORT_SAME_EL  = 0x21,
 EC_PCALIGNMENT= 0x22,
@@ -68,6 +69,13 @@ enum arm_exception_class {
 EC_AA64_BKPT  = 0x3c,
 };
 
+typedef enum {
+SME_ET_AccessTrap,
+SME_ET_Streaming,
+SME_ET_NotStreaming,
+SME_ET_InactiveZA,
+} SMEExceptionType;
+
 #define ARM_EL_EC_SHIFT 26
 #define ARM_EL_IL_SHIFT 25
 #define ARM_EL_ISV_SHIFT 24
@@ -206,6 +214,11 @@ static inline uint32_t syn_sve_access_trap(void)
 return EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT;
 }
 
+static inline uint32_t syn_smetrap(SMEExceptionType etype, bool is_16bit)
+{
+return (EC_SMETRAP << ARM_EL_EC_SHIFT) | (!is_16bit * ARM_EL_IL) | etype;
+}
+
 static inline uint32_t syn_pactrap(void)
 {
 return EC_PACTRAP << ARM_EL_EC_SHIFT;
-- 
2.34.1
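The packing done by syn_smetrap() follows the usual ESR_ELx layout: EC in bits [31:26], IL in bit 25 (set for 32-bit encodings), and the SMEExceptionType in the low ISS bits. A stand-alone sketch of the same expression, with the shift and EC constants copied from syndrome.h:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Constants as defined in target/arm/syndrome.h. */
#define ARM_EL_EC_SHIFT 26
#define ARM_EL_IL       (1u << 25)
#define EC_SMETRAP      0x1d

enum { SME_ET_AccessTrap, SME_ET_Streaming,
       SME_ET_NotStreaming, SME_ET_InactiveZA };

/* EC in [31:26], IL set for 32-bit insns, trap code in the low ISS bits. */
static uint32_t syn_smetrap(int etype, bool is_16bit)
{
    return (EC_SMETRAP << ARM_EL_EC_SHIFT) | (!is_16bit * ARM_EL_IL) | etype;
}
```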




[PATCH 28/71] target/arm: Add PSTATE.{SM,ZA} to TB flags

2022-06-02 Thread Richard Henderson
These are required to determine if various insns
are allowed to issue.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   | 2 ++
 target/arm/translate.h | 4 
 target/arm/helper.c| 4 
 target/arm/translate-a64.c | 2 ++
 4 files changed, 12 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 1ae1b7122b..9bd8058afe 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3284,6 +3284,8 @@ FIELD(TBFLAG_A64, TCMA, 16, 2)
 FIELD(TBFLAG_A64, MTE_ACTIVE, 18, 1)
 FIELD(TBFLAG_A64, MTE0_ACTIVE, 19, 1)
 FIELD(TBFLAG_A64, SMEEXC_EL, 20, 2)
+FIELD(TBFLAG_A64, PSTATE_SM, 22, 1)
+FIELD(TBFLAG_A64, PSTATE_ZA, 23, 1)
 
 /*
  * Helpers for using the above.
diff --git a/target/arm/translate.h b/target/arm/translate.h
index a492e4217b..fbd6713572 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -101,6 +101,10 @@ typedef struct DisasContext {
 bool align_mem;
 /* True if PSTATE.IL is set */
 bool pstate_il;
+/* True if PSTATE.SM is set. */
+bool pstate_sm;
+/* True if PSTATE.ZA is set. */
+bool pstate_za;
 /* True if MVE insns are definitely not predicated by VPR or LTPSIZE */
 bool mve_no_pred;
 /*
diff --git a/target/arm/helper.c b/target/arm/helper.c
index f852fd7644..3edecb56b6 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -13857,6 +13857,10 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
 }
 if (cpu_isar_feature(aa64_sme, env_archcpu(env))) {
 DP_TBFLAG_A64(flags, SMEEXC_EL, sme_exception_el(env, el));
+if (FIELD_EX64(env->svcr, SVCR, SM)) {
+DP_TBFLAG_A64(flags, PSTATE_SM, 1);
+}
+DP_TBFLAG_A64(flags, PSTATE_ZA, FIELD_EX64(env->svcr, SVCR, ZA));
 }
 
 sctlr = regime_sctlr(env, stage1);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index f51d80d816..fdc035ad9a 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -14635,6 +14635,8 @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
 dc->ata = EX_TBFLAG_A64(tb_flags, ATA);
 dc->mte_active[0] = EX_TBFLAG_A64(tb_flags, MTE_ACTIVE);
 dc->mte_active[1] = EX_TBFLAG_A64(tb_flags, MTE0_ACTIVE);
+dc->pstate_sm = EX_TBFLAG_A64(tb_flags, PSTATE_SM);
+dc->pstate_za = EX_TBFLAG_A64(tb_flags, PSTATE_ZA);
 dc->vec_len = 0;
 dc->vec_stride = 0;
 dc->cp_regs = arm_cpu->cp_regs;
-- 
2.34.1
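The DP_TBFLAG_A64()/EX_TBFLAG_A64() pairs used above boil down to single-bit deposit/extract operations at the positions the cpu.h hunk assigns (PSTATE_SM at bit 22, PSTATE_ZA at bit 23 of the A64 flags word). A stand-alone sketch (helper names are illustrative, not QEMU's):

```c
#include <assert.h>
#include <stdint.h>

#define PSTATE_SM_SHIFT 22
#define PSTATE_ZA_SHIFT 23

/* Single-bit deposit: clear the bit, then OR in the new value. */
static uint32_t set_flag(uint32_t flags, int shift, uint32_t val)
{
    return (flags & ~(1u << shift)) | ((val & 1) << shift);
}

/* Single-bit extract. */
static uint32_t get_flag(uint32_t flags, int shift)
{
    return (flags >> shift) & 1;
}
```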




[PATCH 32/71] target/arm: Create ARMVQMap

2022-06-02 Thread Richard Henderson
Pull the three sve_vq_* values into a structure.
This will be reused for SME.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h| 29 ++---
 target/arm/cpu64.c  | 22 +++---
 target/arm/helper.c |  2 +-
 target/arm/kvm64.c  |  2 +-
 4 files changed, 27 insertions(+), 28 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index b65e370b70..9408d36b8a 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -793,6 +793,19 @@ typedef enum ARMPSCIState {
 
 typedef struct ARMISARegisters ARMISARegisters;
 
+/*
+ * In map, each set bit is a supported vector length of (bit-number + 1) * 16
+ * bytes, i.e. each bit number + 1 is the vector length in quadwords.
+ *
+ * While processing properties during initialization, corresponding init bits
+ * are set for bits in sve_vq_map that have been set by properties.
+ *
+ * Bits set in supported represent valid vector lengths for the CPU type.
+ */
+typedef struct {
+uint32_t map, init, supported;
+} ARMVQMap;
+
 /**
  * ARMCPU:
  * @env: #CPUARMState
@@ -1041,21 +1054,7 @@ struct ArchCPU {
 uint32_t sve_default_vq;
 #endif
 
-/*
- * In sve_vq_map each set bit is a supported vector length of
- * (bit-number + 1) * 16 bytes, i.e. each bit number + 1 is the vector
- * length in quadwords.
- *
- * While processing properties during initialization, corresponding
- * sve_vq_init bits are set for bits in sve_vq_map that have been
- * set by properties.
- *
- * Bits set in sve_vq_supported represent valid vector lengths for
- * the CPU type.
- */
-uint32_t sve_vq_map;
-uint32_t sve_vq_init;
-uint32_t sve_vq_supported;
+ARMVQMap sve_vq;
 
 /* Generic timer counter frequency, in Hz */
 uint64_t gt_cntfrq_hz;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index e18f585fa7..0a2f4f3170 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -355,8 +355,8 @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
  * any of the above.  Finally, if SVE is not disabled, then at least one
  * vector length must be enabled.
  */
-uint32_t vq_map = cpu->sve_vq_map;
-uint32_t vq_init = cpu->sve_vq_init;
+uint32_t vq_map = cpu->sve_vq.map;
+uint32_t vq_init = cpu->sve_vq.init;
 uint32_t vq_supported;
 uint32_t vq_mask = 0;
 uint32_t tmp, vq, max_vq = 0;
@@ -369,14 +369,14 @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
  */
 if (kvm_enabled()) {
 if (kvm_arm_sve_supported()) {
-cpu->sve_vq_supported = kvm_arm_sve_get_vls(CPU(cpu));
-vq_supported = cpu->sve_vq_supported;
+cpu->sve_vq.supported = kvm_arm_sve_get_vls(CPU(cpu));
+vq_supported = cpu->sve_vq.supported;
 } else {
 assert(!cpu_isar_feature(aa64_sve, cpu));
 vq_supported = 0;
 }
 } else {
-vq_supported = cpu->sve_vq_supported;
+vq_supported = cpu->sve_vq.supported;
 }
 
 /*
@@ -534,7 +534,7 @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
 
 /* From now on sve_max_vq is the actual maximum supported length. */
 cpu->sve_max_vq = max_vq;
-cpu->sve_vq_map = vq_map;
+cpu->sve_vq.map = vq_map;
 }
 
 static void cpu_max_get_sve_max_vq(Object *obj, Visitor *v, const char *name,
@@ -595,7 +595,7 @@ static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
 if (!cpu_isar_feature(aa64_sve, cpu)) {
 value = false;
 } else {
-value = extract32(cpu->sve_vq_map, vq - 1, 1);
+value = extract32(cpu->sve_vq.map, vq - 1, 1);
 }
visit_type_bool(v, name, &value, errp);
 }
@@ -611,8 +611,8 @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
 return;
 }
 
-cpu->sve_vq_map = deposit32(cpu->sve_vq_map, vq - 1, 1, value);
-cpu->sve_vq_init |= 1 << (vq - 1);
+cpu->sve_vq.map = deposit32(cpu->sve_vq.map, vq - 1, 1, value);
+cpu->sve_vq.init |= 1 << (vq - 1);
 }
 
 static bool cpu_arm_get_sve(Object *obj, Error **errp)
@@ -973,7 +973,7 @@ static void aarch64_max_initfn(Object *obj)
 cpu->dcz_blocksize = 7; /*  512 bytes */
 #endif
 
-cpu->sve_vq_supported = MAKE_64BIT_MASK(0, ARM_MAX_VQ);
+cpu->sve_vq.supported = MAKE_64BIT_MASK(0, ARM_MAX_VQ);
 
 aarch64_add_pauth_properties(obj);
 aarch64_add_sve_properties(obj);
@@ -1022,7 +1022,7 @@ static void aarch64_a64fx_initfn(Object *obj)
 
 /* The A64FX supports only 128, 256 and 512 bit vector lengths */
 aarch64_add_sve_properties(obj);
-cpu->sve_vq_supported = (1 << 0)  /* 128bit */
+cpu->sve_vq.supported = (1 << 0)  /* 128bit */
   | (1 << 1)  /* 256bit */
   | (1 << 3); /* 512bit */
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 5328676deb..2e7669180f 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6291,7 +6291,7 @@ uint32_t 

[PATCH 24/71] target/arm: Add ARM_CP_SME

2022-06-02 Thread Richard Henderson
This will be used for controlling access to SME cpregs.

Signed-off-by: Richard Henderson 
---
 target/arm/cpregs.h|  5 +
 target/arm/translate-a64.c | 18 ++
 2 files changed, 23 insertions(+)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index d9b678c2f1..d30758ee71 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -113,6 +113,11 @@ enum {
 ARM_CP_EL3_NO_EL2_UNDEF  = 1 << 16,
 ARM_CP_EL3_NO_EL2_KEEP   = 1 << 17,
 ARM_CP_EL3_NO_EL2_C_NZ   = 1 << 18,
+/*
+ * Flag: Access check for this sysreg is constrained by the
+ * ARM pseudocode function CheckSMEAccess().
+ */
+ARM_CP_SME   = 1 << 19,
 };
 
 /*
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 8bbd1b7f07..f51d80d816 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -1186,6 +1186,22 @@ bool sve_access_check(DisasContext *s)
 return fp_access_check(s);
 }
 
+/*
+ * Check that SME access is enabled, raise an exception if not.
+ * Note that this function corresponds to CheckSMEAccess and is
+ * only used directly for cpregs.
+ */
+static bool sme_access_check(DisasContext *s)
+{
+if (s->sme_excp_el) {
+gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
+   syn_smetrap(SME_ET_AccessTrap, false),
+   s->sme_excp_el);
+return false;
+}
+return true;
+}
+
 /*
  * This utility function is for doing register extension with an
  * optional shift. You will likely want to pass a temporary for the
@@ -1958,6 +1974,8 @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
 return;
 } else if ((ri->type & ARM_CP_SVE) && !sve_access_check(s)) {
 return;
+} else if ((ri->type & ARM_CP_SME) && !sme_access_check(s)) {
+return;
 }
 
 if ((tb_cflags(s->base.tb) & CF_USE_ICOUNT) && (ri->type & ARM_CP_IO)) {
-- 
2.34.1




[PATCH 06/71] target/arm: Use el_is_in_host for sve_zcr_len_for_el

2022-06-02 Thread Richard Henderson
The ARM pseudocode function NVL uses this predicate now,
and I think it's a bit clearer.  Simplify the pseudocode
condition by noting that IsInHost is always false for EL1.

Reviewed-by: Peter Maydell 
Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 839d6401b0..135c3e790c 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6248,8 +6248,7 @@ uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
 ARMCPU *cpu = env_archcpu(env);
 uint32_t zcr_len = cpu->sve_max_vq - 1;
 
-if (el <= 1 &&
-(arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
+if (el <= 1 && !el_is_in_host(env, el)) {
 zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[1]);
 }
 if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) {
-- 
2.34.1




[PATCH 36/71] target/arm: Unexport aarch64_add_*_properties

2022-06-02 Thread Richard Henderson
These functions are not used outside cpu64.c,
so make them static.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   | 3 ---
 target/arm/cpu64.c | 4 ++--
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 3999152f1a..60f84ba033 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1097,8 +1097,6 @@ int aarch64_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
 void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq);
 void aarch64_sve_change_el(CPUARMState *env, int old_el,
int new_el, bool el0_a64);
-void aarch64_add_sve_properties(Object *obj);
-void aarch64_add_pauth_properties(Object *obj);
 void arm_reset_sve_state(CPUARMState *env);
 
 /*
@@ -1130,7 +1128,6 @@ static inline void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq) { }
 static inline void aarch64_sve_change_el(CPUARMState *env, int o,
  int n, bool a)
 { }
-static inline void aarch64_add_sve_properties(Object *obj) { }
 #endif
 
 void aarch64_sync_32_to_64(CPUARMState *env);
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index c5bfc3d082..9ae9be6698 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -689,7 +689,7 @@ static void cpu_arm_get_default_vec_len(Object *obj, Visitor *v,
 }
 #endif
 
-void aarch64_add_sve_properties(Object *obj)
+static void aarch64_add_sve_properties(Object *obj)
 {
 ARMCPU *cpu = ARM_CPU(obj);
 uint32_t vq;
@@ -752,7 +752,7 @@ static Property arm_cpu_pauth_property =
 static Property arm_cpu_pauth_impdef_property =
 DEFINE_PROP_BOOL("pauth-impdef", ARMCPU, prop_pauth_impdef, false);
 
-void aarch64_add_pauth_properties(Object *obj)
+static void aarch64_add_pauth_properties(Object *obj)
 {
 ARMCPU *cpu = ARM_CPU(obj);
 
-- 
2.34.1




[PATCH 20/71] target/arm: Add ID_AA64SMFR0_EL1

2022-06-02 Thread Richard Henderson
This register is allocated from the existing block of id registers,
so it is already RES0 for cpus that do not implement SME.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h| 25 +
 target/arm/helper.c |  4 ++--
 target/arm/kvm64.c  |  9 +
 3 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index f6d114aad7..24c5266f35 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -966,6 +966,7 @@ struct ArchCPU {
 uint64_t id_aa64dfr0;
 uint64_t id_aa64dfr1;
 uint64_t id_aa64zfr0;
+uint64_t id_aa64smfr0;
 uint64_t reset_pmcr_el0;
 } isar;
 uint64_t midr;
@@ -2190,6 +2191,15 @@ FIELD(ID_AA64ZFR0, I8MM, 44, 4)
 FIELD(ID_AA64ZFR0, F32MM, 52, 4)
 FIELD(ID_AA64ZFR0, F64MM, 56, 4)
 
+FIELD(ID_AA64SMFR0, F32F32, 32, 1)
+FIELD(ID_AA64SMFR0, B16F32, 34, 1)
+FIELD(ID_AA64SMFR0, F16F32, 35, 1)
+FIELD(ID_AA64SMFR0, I8I32, 36, 4)
+FIELD(ID_AA64SMFR0, F64F64, 48, 1)
+FIELD(ID_AA64SMFR0, I16I64, 52, 4)
+FIELD(ID_AA64SMFR0, SMEVER, 56, 4)
+FIELD(ID_AA64SMFR0, FA64, 63, 1)
+
 FIELD(ID_DFR0, COPDBG, 0, 4)
 FIELD(ID_DFR0, COPSDBG, 4, 4)
 FIELD(ID_DFR0, MMAPDBG, 8, 4)
@@ -4190,6 +4200,21 @@ static inline bool isar_feature_aa64_sve_f64mm(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, F64MM) != 0;
 }
 
+static inline bool isar_feature_aa64_sme_f64f64(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64smfr0, ID_AA64SMFR0, F64F64);
+}
+
+static inline bool isar_feature_aa64_sme_i16i64(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64smfr0, ID_AA64SMFR0, I16I64) == 0xf;
+}
+
+static inline bool isar_feature_aa64_sme_fa64(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64smfr0, ID_AA64SMFR0, FA64);
+}
+
 /*
  * Feature tests for "does this exist in either 32-bit or 64-bit?"
  */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index cb44d528c0..48534db0bd 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -7732,11 +7732,11 @@ void register_cp_regs_for_features(ARMCPU *cpu)
   .access = PL1_R, .type = ARM_CP_CONST,
   .accessfn = access_aa64_tid3,
   .resetvalue = cpu->isar.id_aa64zfr0 },
-{ .name = "ID_AA64PFR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
+{ .name = "ID_AA64SMFR0_EL1", .state = ARM_CP_STATE_AA64,
   .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 5,
   .access = PL1_R, .type = ARM_CP_CONST,
   .accessfn = access_aa64_tid3,
-  .resetvalue = 0 },
+  .resetvalue = cpu->isar.id_aa64smfr0 },
 { .name = "ID_AA64PFR6_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
   .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 6,
   .access = PL1_R, .type = ARM_CP_CONST,
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index b3f635fc95..28001643c6 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -682,13 +682,14 @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
 ahcf->isar.id_aa64pfr0 = t;
 
 /*
- * Before v5.1, KVM did not support SVE and did not expose
- * ID_AA64ZFR0_EL1 even as RAZ.  After v5.1, KVM still does
- * not expose the register to "user" requests like this
- * unless the host supports SVE.
+ * KVM began exposing the unallocated ID registers as RAZ in 4.15.
+ * Using SVE supported is an easy way to tell if these registers
+ * are exposed, since both of these depend on SVE anyway.
  */
err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64zfr0,
   ARM64_SYS_REG(3, 0, 0, 4, 4));
+err |= read_sys_reg64(fdarray[2], &ahcf->isar.id_aa64smfr0,
+  ARM64_SYS_REG(3, 0, 0, 4, 5));
 }
 
 kvm_arm_destroy_scratch_host_vcpu(fdarray);
-- 
2.34.1
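The isar_feature_aa64_sme_* predicates added above are plain bitfield extracts from ID_AA64SMFR0; note that I16I64 is a 4-bit field where only the all-ones value 0xf advertises the feature, while F64F64 and FA64 are single bits. A stand-alone sketch, where extract_field stands in for FIELD_EX64:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for FIELD_EX64: extract len bits at start. */
static uint64_t extract_field(uint64_t reg, int start, int len)
{
    return (reg >> start) & ((1ull << len) - 1);
}

/* I16I64 is a 4-bit field; only 0xf means "implemented". */
static bool sme_i16i64(uint64_t id_aa64smfr0)
{
    return extract_field(id_aa64smfr0, 52, 4) == 0xf;
}

/* FA64 is a single bit at position 63. */
static bool sme_fa64(uint64_t id_aa64smfr0)
{
    return extract_field(id_aa64smfr0, 63, 1) != 0;
}
```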




[PATCH 22/71] target/arm: Add SMEEXC_EL to TB flags

2022-06-02 Thread Richard Henderson
This is CheckSMEAccess, which is the basis for a set of
related tests for various SME cpregs and instructions.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  2 ++
 target/arm/translate.h |  1 +
 target/arm/helper.c| 52 ++
 target/arm/translate-a64.c |  1 +
 4 files changed, 56 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 245d144fa1..31f812eda7 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1134,6 +1134,7 @@ void aarch64_sync_64_to_32(CPUARMState *env);
 
 int fp_exception_el(CPUARMState *env, int cur_el);
 int sve_exception_el(CPUARMState *env, int cur_el);
+int sme_exception_el(CPUARMState *env, int cur_el);
 
 /**
  * sve_vqm1_for_el:
@@ -3272,6 +3273,7 @@ FIELD(TBFLAG_A64, ATA, 15, 1)
 FIELD(TBFLAG_A64, TCMA, 16, 2)
 FIELD(TBFLAG_A64, MTE_ACTIVE, 18, 1)
 FIELD(TBFLAG_A64, MTE0_ACTIVE, 19, 1)
+FIELD(TBFLAG_A64, SMEEXC_EL, 20, 2)
 
 /*
  * Helpers for using the above.
diff --git a/target/arm/translate.h b/target/arm/translate.h
index f473a21ed4..a492e4217b 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -42,6 +42,7 @@ typedef struct DisasContext {
 bool ns;/* Use non-secure CPREG bank on access */
 int fp_excp_el; /* FP exception EL or 0 if enabled */
 int sve_excp_el; /* SVE exception EL or 0 if enabled */
+int sme_excp_el; /* SME exception EL or 0 if enabled */
 int vl;  /* current vector length in bytes */
 /* Flag indicating that exceptions from secure mode are routed to EL3. */
 bool secure_routed_to_el3;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 204c5cf849..98de2c797f 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6222,6 +6222,55 @@ int sve_exception_el(CPUARMState *env, int el)
 return 0;
 }
 
+/*
+ * Return the exception level to which exceptions should be taken for SME.
+ * C.f. the ARM pseudocode function CheckSMEAccess.
+ */
+int sme_exception_el(CPUARMState *env, int el)
+{
+#ifndef CONFIG_USER_ONLY
+if (el <= 1 && !el_is_in_host(env, el)) {
+switch (FIELD_EX64(env->cp15.cpacr_el1, CPACR_EL1, SMEN)) {
+case 1:
+if (el != 0) {
+break;
+}
+/* fall through */
+case 0:
+case 2:
+return 1;
+}
+}
+
+if (el <= 2 && arm_is_el2_enabled(env)) {
+/* CPTR_EL2 changes format with HCR_EL2.E2H (regardless of TGE). */
+if (env->cp15.hcr_el2 & HCR_E2H) {
+switch (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, SMEN)) {
+case 1:
+if (el != 0 || !(env->cp15.hcr_el2 & HCR_TGE)) {
+break;
+}
+/* fall through */
+case 0:
+case 2:
+return 2;
+}
+} else {
+if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TSM)) {
+return 2;
+}
+}
+}
+
+/* CPTR_EL3.  Since EZ is negative we must check for EL3.  */
+if (arm_feature(env, ARM_FEATURE_EL3)
+&& !FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, ESM)) {
+return 3;
+}
+#endif
+return 0;
+}
+
 /*
  * Given that SVE is enabled, return the vector length for EL.
  */
@@ -13719,6 +13768,9 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
 }
 DP_TBFLAG_A64(flags, SVEEXC_EL, sve_el);
 }
+if (cpu_isar_feature(aa64_sme, env_archcpu(env))) {
+DP_TBFLAG_A64(flags, SMEEXC_EL, sme_exception_el(env, el));
+}
 
 sctlr = regime_sctlr(env, stage1);
 
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index d438fb89e7..8bbd1b7f07 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -14608,6 +14608,7 @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
 dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
 dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
 dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
+dc->sme_excp_el = EX_TBFLAG_A64(tb_flags, SMEEXC_EL);
 dc->vl = (EX_TBFLAG_A64(tb_flags, VL) + 1) * 16;
 dc->pauth_active = EX_TBFLAG_A64(tb_flags, PAUTH_ACTIVE);
 dc->bt = EX_TBFLAG_A64(tb_flags, BT);
-- 
2.34.1




[PATCH 35/71] target/arm: Move arm_cpu_*_finalize to internals.h

2022-06-02 Thread Richard Henderson
Drop the aa32-only inline fallbacks,
and just use a couple of ifdefs.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   | 6 --
 target/arm/internals.h | 3 +++
 target/arm/cpu.c   | 2 ++
 3 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 9408d36b8a..3999152f1a 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -205,14 +205,8 @@ typedef struct {
 
 #ifdef TARGET_AARCH64
 # define ARM_MAX_VQ16
-void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp);
-void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp);
-void arm_cpu_lpa2_finalize(ARMCPU *cpu, Error **errp);
 #else
 # define ARM_MAX_VQ1
-static inline void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp) { }
-static inline void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) { }
-static inline void arm_cpu_lpa2_finalize(ARMCPU *cpu, Error **errp) { }
 #endif
 
 typedef struct ARMVectorReg {
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 8bac570475..756301b536 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1309,6 +1309,9 @@ int arm_gdb_get_svereg(CPUARMState *env, GByteArray *buf, int reg);
 int arm_gdb_set_svereg(CPUARMState *env, uint8_t *buf, int reg);
 int aarch64_fpu_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg);
 int aarch64_fpu_gdb_set_reg(CPUARMState *env, uint8_t *buf, int reg);
+void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp);
+void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp);
+void arm_cpu_lpa2_finalize(ARMCPU *cpu, Error **errp);
 #endif
 
 #ifdef CONFIG_USER_ONLY
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 1b5d535788..b5276fa944 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -1421,6 +1421,7 @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
 {
 Error *local_err = NULL;
 
+#ifdef TARGET_AARCH64
if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
arm_cpu_sve_finalize(cpu, &local_err);
 if (local_err != NULL) {
@@ -1440,6 +1441,7 @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
 return;
 }
 }
+#endif
 
 if (kvm_enabled()) {
kvm_arm_steal_time_finalize(cpu, &local_err);
-- 
2.34.1




[PATCH 17/71] target/arm: Move expand_pred_h to vec_internal.h

2022-06-02 Thread Richard Henderson
Move the data to vec_helper.c and the inline to vec_internal.h.

Reviewed-by: Peter Maydell 
Signed-off-by: Richard Henderson 
---
 target/arm/vec_internal.h |  7 +++
 target/arm/sve_helper.c   | 29 -
 target/arm/vec_helper.c   | 26 ++
 3 files changed, 33 insertions(+), 29 deletions(-)

diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
index d1a1ea4a66..1d527fadac 100644
--- a/target/arm/vec_internal.h
+++ b/target/arm/vec_internal.h
@@ -59,6 +59,13 @@ static inline uint64_t expand_pred_b(uint8_t byte)
 return expand_pred_b_data[byte];
 }
 
+/* Similarly for half-word elements. */
+extern const uint64_t expand_pred_h_data[0x55 + 1];
+static inline uint64_t expand_pred_h(uint8_t byte)
+{
+return expand_pred_h_data[byte & 0x55];
+}
+
 static inline void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
 {
 uint64_t *d = vd + opr_sz;
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index e865c12527..1654c0bbf9 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -103,35 +103,6 @@ uint32_t HELPER(sve_predtest)(void *vd, void *vg, uint32_t words)
 return flags;
 }
 
-/* Similarly for half-word elements.
- *  for (i = 0; i < 256; ++i) {
- *  unsigned long m = 0;
- *  if (i & 0xaa) {
- *  continue;
- *  }
- *  for (j = 0; j < 8; j += 2) {
- *  if ((i >> j) & 1) {
- *  m |= 0xfffful << (j << 3);
- *  }
- *  }
- *  printf("[0x%x] = 0x%016lx,\n", i, m);
- *  }
- */
-static inline uint64_t expand_pred_h(uint8_t byte)
-{
-static const uint64_t word[] = {
-[0x01] = 0x000000000000ffff, [0x04] = 0x00000000ffff0000,
-[0x05] = 0x00000000ffffffff, [0x10] = 0x0000ffff00000000,
-[0x11] = 0x0000ffff0000ffff, [0x14] = 0x0000ffffffff0000,
-[0x15] = 0x0000ffffffffffff, [0x40] = 0xffff000000000000,
-[0x41] = 0xffff00000000ffff, [0x44] = 0xffff0000ffff0000,
-[0x45] = 0xffff0000ffffffff, [0x50] = 0xffffffff00000000,
-[0x51] = 0xffffffff0000ffff, [0x54] = 0xffffffffffff0000,
-[0x55] = 0xffffffffffffffff,
-};
-return word[byte & 0x55];
-}
-
 /* Similarly for single word elements.  */
 static inline uint64_t expand_pred_s(uint8_t byte)
 {
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index 17fb158362..26c373e522 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -127,6 +127,32 @@ const uint64_t expand_pred_b_data[256] = {
0xffffffffffffffff,
 };
 
+/*
+ * Similarly for half-word elements.
+ *  for (i = 0; i < 256; ++i) {
+ *  unsigned long m = 0;
+ *  if (i & 0xaa) {
+ *  continue;
+ *  }
+ *  for (j = 0; j < 8; j += 2) {
+ *  if ((i >> j) & 1) {
+ *  m |= 0xfffful << (j << 3);
+ *  }
+ *  }
+ *  printf("[0x%x] = 0x%016lx,\n", i, m);
+ *  }
+ */
+const uint64_t expand_pred_h_data[0x55 + 1] = {
+[0x01] = 0x000000000000ffff, [0x04] = 0x00000000ffff0000,
+[0x05] = 0x00000000ffffffff, [0x10] = 0x0000ffff00000000,
+[0x11] = 0x0000ffff0000ffff, [0x14] = 0x0000ffffffff0000,
+[0x15] = 0x0000ffffffffffff, [0x40] = 0xffff000000000000,
+[0x41] = 0xffff00000000ffff, [0x44] = 0xffff0000ffff0000,
+[0x45] = 0xffff0000ffffffff, [0x50] = 0xffffffff00000000,
+[0x51] = 0xffffffff0000ffff, [0x54] = 0xffffffffffff0000,
+[0x55] = 0xffffffffffffffff,
+};
+
 /* Signed saturating rounding doubling multiply-accumulate high half, 8-bit */
 int8_t do_sqrdmlah_b(int8_t src1, int8_t src2, int8_t src3,
  bool neg, bool round)
-- 
2.34.1




[PATCH 21/71] target/arm: Implement TPIDR2_EL0

2022-06-02 Thread Richard Henderson
This register is part of SME, but isn't closely related to the
rest of the extension.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h|  1 +
 target/arm/helper.c | 32 
 2 files changed, 33 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 24c5266f35..245d144fa1 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -474,6 +474,7 @@ typedef struct CPUArchState {
 };
 uint64_t tpidr_el[4];
 };
+uint64_t tpidr2_el0;
 /* The secure banks of these registers don't map anywhere */
 uint64_t tpidrurw_s;
 uint64_t tpidrprw_s;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 48534db0bd..204c5cf849 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6283,6 +6283,35 @@ static const ARMCPRegInfo zcr_reginfo[] = {
   .writefn = zcr_write, .raw_writefn = raw_write },
 };
 
+#ifdef TARGET_AARCH64
+static CPAccessResult access_tpidr2(CPUARMState *env, const ARMCPRegInfo *ri,
+bool isread)
+{
+int el = arm_current_el(env);
+
+if (el == 0) {
+uint64_t sctlr = arm_sctlr(env, el);
+if (!(sctlr & SCTLR_EnTP2)) {
+uint64_t hcr = arm_hcr_el2_eff(env);
+return hcr & HCR_TGE ? CP_ACCESS_TRAP_EL2 : CP_ACCESS_TRAP;
+}
+}
+if (el < 3
+&& arm_feature(env, ARM_FEATURE_EL3)
+&& !(env->cp15.scr_el3 & SCR_ENTP2)) {
+return CP_ACCESS_TRAP_EL3;
+}
+return CP_ACCESS_OK;
+}
+
+static const ARMCPRegInfo sme_reginfo[] = {
+{ .name = "TPIDR2_EL0", .state = ARM_CP_STATE_AA64,
+  .opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 5,
+  .access = PL0_RW, .accessfn = access_tpidr2,
+  .fieldoffset = offsetof(CPUARMState, cp15.tpidr2_el0) },
+};
+#endif /* TARGET_AARCH64 */
+
 void hw_watchpoint_update(ARMCPU *cpu, int n)
 {
    CPUARMState *env = &cpu->env;
@@ -8444,6 +8473,9 @@ void register_cp_regs_for_features(ARMCPU *cpu)
 }
 
 #ifdef TARGET_AARCH64
+if (cpu_isar_feature(aa64_sme, cpu)) {
+define_arm_cp_regs(cpu, sme_reginfo);
+}
 if (cpu_isar_feature(aa64_pauth, cpu)) {
 define_arm_cp_regs(cpu, pauth_reginfo);
 }
-- 
2.34.1




[PATCH 18/71] target/arm: Export bfdotadd from vec_helper.c

2022-06-02 Thread Richard Henderson
We will need this over in sme_helper.c.

Reviewed-by: Peter Maydell 
Signed-off-by: Richard Henderson 
---
 target/arm/vec_internal.h | 13 +
 target/arm/vec_helper.c   |  2 +-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
index 1d527fadac..1f4ed80ff7 100644
--- a/target/arm/vec_internal.h
+++ b/target/arm/vec_internal.h
@@ -230,4 +230,17 @@ uint64_t pmull_h(uint64_t op1, uint64_t op2);
  */
 uint64_t pmull_w(uint64_t op1, uint64_t op2);
 
+/**
+ * bfdotadd:
+ * @sum: addend
+ * @e1, @e2: multiplicand vectors
+ *
+ * BFloat16 2-way dot product of @e1 & @e2, accumulating with @sum.
+ * The @e1 and @e2 operands correspond to the 32-bit source vector
+ * slots and contain two Bfloat16 values each.
+ *
+ * Corresponds to the ARM pseudocode function BFDotAdd.
+ */
+float32 bfdotadd(float32 sum, uint32_t e1, uint32_t e2);
+
 #endif /* TARGET_ARM_VEC_INTERNAL_H */
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index 26c373e522..9a9c034e36 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -2557,7 +2557,7 @@ DO_MMLA_B(gvec_usmmla_b, do_usmmla_b)
  * BFloat16 Dot Product
  */
 
-static float32 bfdotadd(float32 sum, uint32_t e1, uint32_t e2)
+float32 bfdotadd(float32 sum, uint32_t e1, uint32_t e2)
 {
 /* FPCR is ignored for BFDOT and BFMMLA. */
 float_status bf_status = {
-- 
2.34.1
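
For reviewers unfamiliar with the packed operand format: a BFloat16 value is just the top half of an IEEE float32 bit pattern, and each 32-bit slot carries two of them. A rough scalar sketch of the 2-way dot product follows — illustrative only, using host float arithmetic, whereas the real helper uses a fixed float_status (FPCR ignored, as the hunk notes):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* A BFloat16 is the high 16 bits of a float32 bit pattern. */
static float bf16_to_f32(uint16_t h)
{
    uint32_t u = (uint32_t)h << 16;
    float f;

    memcpy(&f, &u, sizeof(f));
    return f;
}

/* 2-way dot product over one 32-bit slot of each operand,
 * accumulated into sum (host rounding, unlike the real helper). */
static float bfdotadd_sketch(float sum, uint32_t e1, uint32_t e2)
{
    return sum + bf16_to_f32(e1 & 0xffff) * bf16_to_f32(e2 & 0xffff)
               + bf16_to_f32(e1 >> 16)    * bf16_to_f32(e2 >> 16);
}
```

With 1.0 encoded as bf16 0x3f80, a slot of two ones dotted with itself accumulates 2.0.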




[PATCH 15/71] target/arm: Move expand_pred_b to vec_internal.h

2022-06-02 Thread Richard Henderson
Put the inline function near the array declaration.

Reviewed-by: Peter Maydell 
Signed-off-by: Richard Henderson 
---
 target/arm/vec_internal.h | 8 +++-
 target/arm/sve_helper.c   | 9 -
 2 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
index 1d63402042..d1a1ea4a66 100644
--- a/target/arm/vec_internal.h
+++ b/target/arm/vec_internal.h
@@ -50,8 +50,14 @@
 #define H8(x)   (x)
 #define H1_8(x) (x)
 
-/* Data for expanding active predicate bits to bytes, for byte elements. */
+/*
+ * Expand active predicate bits to bytes, for byte elements.
+ */
 extern const uint64_t expand_pred_b_data[256];
+static inline uint64_t expand_pred_b(uint8_t byte)
+{
+return expand_pred_b_data[byte];
+}
 
 static inline void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
 {
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 8cd371e3e3..e865c12527 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -103,15 +103,6 @@ uint32_t HELPER(sve_predtest)(void *vd, void *vg, uint32_t words)
 return flags;
 }
 
-/*
- * Expand active predicate bits to bytes, for byte elements.
- * (The data table itself is in vec_helper.c as MVE also needs it.)
- */
-static inline uint64_t expand_pred_b(uint8_t byte)
-{
-return expand_pred_b_data[byte];
-}
-
 /* Similarly for half-word elements.
  *  for (i = 0; i < 256; ++i) {
  *  unsigned long m = 0;
-- 
2.34.1




[PATCH 10/71] target/arm: Merge aarch64_sve_zcr_get_valid_len into caller

2022-06-02 Thread Richard Henderson
This function is used only once, and will need modification
for Streaming SVE mode.

Signed-off-by: Richard Henderson 
---
 target/arm/internals.h | 11 ---
 target/arm/helper.c| 30 +++---
 2 files changed, 11 insertions(+), 30 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index a73f2a94c5..4dcdca918b 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -189,17 +189,6 @@ void arm_translate_init(void);
 void arm_cpu_synchronize_from_tb(CPUState *cs, const TranslationBlock *tb);
 #endif /* CONFIG_TCG */
 
-/**
- * aarch64_sve_zcr_get_valid_len:
- * @cpu: cpu context
- * @start_len: maximum len to consider
- *
- * Return the maximum supported sve vector length <= @start_len.
- * Note that both @start_len and the return value are in units
- * of ZCR_ELx.LEN, so the vector bit length is (x + 1) * 128.
- */
-uint32_t aarch64_sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len);
-
 enum arm_fprounding {
 FPROUNDING_TIEEVEN,
 FPROUNDING_POSINF,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index dc8f1e44cc..e84d30e5fc 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6222,39 +6222,31 @@ int sve_exception_el(CPUARMState *env, int el)
 return 0;
 }
 
-uint32_t aarch64_sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
-{
-uint32_t end_len;
-
-start_len = MIN(start_len, ARM_MAX_VQ - 1);
-end_len = start_len;
-
-if (!test_bit(start_len, cpu->sve_vq_map)) {
-end_len = find_last_bit(cpu->sve_vq_map, start_len);
-assert(end_len < start_len);
-}
-return end_len;
-}
-
 /*
  * Given that SVE is enabled, return the vector length for EL.
  */
 uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
 {
 ARMCPU *cpu = env_archcpu(env);
-uint32_t zcr_len = cpu->sve_max_vq - 1;
+uint32_t len = cpu->sve_max_vq - 1;
+uint32_t end_len;
 
 if (el <= 1 && !el_is_in_host(env, el)) {
-zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[1]);
+len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[1]);
 }
 if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) {
-zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[2]);
+len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[2]);
 }
 if (arm_feature(env, ARM_FEATURE_EL3)) {
-zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
+len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
 }
 
-return aarch64_sve_zcr_get_valid_len(cpu, zcr_len);
+end_len = len;
+if (!test_bit(len, cpu->sve_vq_map)) {
+end_len = find_last_bit(cpu->sve_vq_map, len);
+assert(end_len < len);
+}
+return end_len;
 }
 
 static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
-- 
2.34.1
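
The inlined test_bit/find_last_bit sequence amounts to "clamp to the nearest supported length at or below the requested one". With the uint32_t map this series introduces later (patch 11), the same logic can be sketched with plain bit operations — illustrative names, not the patch's code:

```c
#include <assert.h>
#include <stdint.h>

/* Return the largest supported vq-1 that is <= len, given a map with
 * bit (vq - 1) set for each supported vector length.  Mirrors the
 * test_bit/find_last_bit sequence merged into sve_zcr_len_for_el. */
static uint32_t clamp_vqm1(uint32_t vq_map, uint32_t len)
{
    /* Keep only bits at positions <= len, then take the highest one. */
    uint32_t sub = vq_map & ((2u << len) - 1);

    assert(sub != 0);           /* at least vq = 1 must be supported */
    return 31 - (uint32_t)__builtin_clz(sub);
}
```

For example, with vq 1, 2, and 4 supported (map 0x0b), a request for vq-1 = 2 clamps down to 1.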




[PATCH 30/71] target/arm: Implement SMSTART, SMSTOP

2022-06-02 Thread Richard Henderson
These two instructions are aliases of MSR (immediate).
Use the two helpers to properly implement svcr_write.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  1 +
 target/arm/helper-sme.h| 21 +
 target/arm/helper.h|  1 +
 target/arm/helper.c|  6 ++--
 target/arm/sme_helper.c| 60 ++
 target/arm/translate-a64.c | 24 +++
 target/arm/meson.build |  1 +
 7 files changed, 111 insertions(+), 3 deletions(-)
 create mode 100644 target/arm/helper-sme.h
 create mode 100644 target/arm/sme_helper.c

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 1bc7de1da1..b65e370b70 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1106,6 +1106,7 @@ void aarch64_sve_change_el(CPUARMState *env, int old_el,
int new_el, bool el0_a64);
 void aarch64_add_sve_properties(Object *obj);
 void aarch64_add_pauth_properties(Object *obj);
+void arm_reset_sve_state(CPUARMState *env);
 
 /*
  * SVE registers are encoded in KVM's memory in an endianness-invariant format.
diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
new file mode 100644
index 00..3bd48c235f
--- /dev/null
+++ b/target/arm/helper-sme.h
@@ -0,0 +1,21 @@
+/*
+ *  AArch64 SME specific helper definitions
+ *
+ *  Copyright (c) 2022 Linaro, Ltd
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <https://www.gnu.org/licenses/>.
+ */
+
+DEF_HELPER_FLAGS_2(set_pstate_sm, TCG_CALL_NO_RWG, void, env, i32)
+DEF_HELPER_FLAGS_2(set_pstate_za, TCG_CALL_NO_RWG, void, env, i32)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index b1334e0c42..5bca7255f1 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1020,6 +1020,7 @@ DEF_HELPER_FLAGS_6(gvec_bfmlal_idx, TCG_CALL_NO_RWG,
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
 #include "helper-sve.h"
+#include "helper-sme.h"
 #endif
 
 #include "helper-mve.h"
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 3edecb56b6..5328676deb 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6370,9 +6370,9 @@ static CPAccessResult access_esm(CPUARMState *env, const ARMCPRegInfo *ri,
 static void svcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
uint64_t value)
 {
-value &= R_SVCR_SM_MASK | R_SVCR_ZA_MASK;
-/* TODO: Side effects. */
-env->svcr = value;
+helper_set_pstate_sm(env, FIELD_EX64(value, SVCR, SM));
+helper_set_pstate_za(env, FIELD_EX64(value, SVCR, ZA));
+arm_rebuild_hflags(env);
 }
 
 static void smcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
new file mode 100644
index 00..c34d1b2e6b
--- /dev/null
+++ b/target/arm/sme_helper.c
@@ -0,0 +1,60 @@
+/*
+ * ARM SME Operations
+ *
+ * Copyright (c) 2022 Linaro, Ltd.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <https://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "internals.h"
+#include "exec/helper-proto.h"
+
+/* ResetSVEState */
+void arm_reset_sve_state(CPUARMState *env)
+{
+memset(env->vfp.zregs, 0, sizeof(env->vfp.zregs));
+memset(env->vfp.pregs, 0, sizeof(env->vfp.pregs));
+vfp_set_fpcr(env, 0x0800009f);
+}
+
+void helper_set_pstate_sm(CPUARMState *env, uint32_t i)
+{
+if (i == FIELD_EX64(env->svcr, SVCR, SM)) {
+return;
+}
+env->svcr ^= R_SVCR_SM_MASK;
+arm_reset_sve_state(env);
+}
+
+void helper_set_pstate_za(CPUARMState *env, uint32_t i)
+{
+if (i == FIELD_EX64(env->svcr, SVCR, ZA)) {
+return;
+}
+env->svcr ^= R_SVCR_ZA_MASK;
+
+/*
+ * ResetSMEState.
+ *
+ * SetPSTATE_ZA zeros on enable and disable.  It would appear that we
+ * can zero this only on enable: while 

[PATCH 16/71] target/arm: Use expand_pred_b in mve_helper.c

2022-06-02 Thread Richard Henderson
Use the function instead of the array directly.

Because the function performs its own masking, via the uint8_t
parameter, we need to do nothing extra within the users: the bits
above the first 2 (_uh) or 4 (_uw) will be discarded by assignment
to the local bmask variables, and of course _uq uses the entire
uint64_t result.

Reviewed-by: Peter Maydell 
Signed-off-by: Richard Henderson 
---
 target/arm/mve_helper.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index 846962bf4c..403b345ea3 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -726,7 +726,7 @@ static void mergemask_sb(int8_t *d, int8_t r, uint16_t mask)
 
 static void mergemask_uh(uint16_t *d, uint16_t r, uint16_t mask)
 {
-uint16_t bmask = expand_pred_b_data[mask & 3];
+uint16_t bmask = expand_pred_b(mask);
 *d = (*d & ~bmask) | (r & bmask);
 }
 
@@ -737,7 +737,7 @@ static void mergemask_sh(int16_t *d, int16_t r, uint16_t mask)
 
 static void mergemask_uw(uint32_t *d, uint32_t r, uint16_t mask)
 {
-uint32_t bmask = expand_pred_b_data[mask & 0xf];
+uint32_t bmask = expand_pred_b(mask);
 *d = (*d & ~bmask) | (r & bmask);
 }
 
@@ -748,7 +748,7 @@ static void mergemask_sw(int32_t *d, int32_t r, uint16_t mask)
 
 static void mergemask_uq(uint64_t *d, uint64_t r, uint16_t mask)
 {
-uint64_t bmask = expand_pred_b_data[mask & 0xff];
+uint64_t bmask = expand_pred_b(mask);
 *d = (*d & ~bmask) | (r & bmask);
 }
 
-- 
2.34.1
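
The "function performs its own masking" point in the commit message is easy to see with a bit-level stand-in for the table lookup: the uint8_t parameter truncates the 16-bit MVE mask exactly as the old `& 3` / `& 0xf` / `& 0xff` indexing did, and assignment to a narrower bmask variable discards the excess bytes. A loop-based sketch (not the table-driven code in the tree):

```c
#include <assert.h>
#include <stdint.h>

/* Same mapping as expand_pred_b_data: predicate bit j set -> byte j
 * of the result is 0xff.  The uint8_t parameter masks a wider MVE
 * mask down to 8 bits automatically at the call site. */
static uint64_t expand_pred_b(uint8_t byte)
{
    uint64_t r = 0;

    for (int j = 0; j < 8; j++) {
        if (byte & (1u << j)) {
            r |= 0xffull << (j * 8);
        }
    }
    return r;
}
```

Passing a 16-bit mask narrows it to 8 bits at the call, and narrowing the 64-bit result to uint16_t/uint32_t keeps exactly the bytes the _uh/_uw users need.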




[PATCH 19/71] target/arm: Add isar_feature_aa64_sme

2022-06-02 Thread Richard Henderson
This will be used for implementing FEAT_SME.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h | 5 +
 1 file changed, 5 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index cb37787c35..f6d114aad7 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -4043,6 +4043,11 @@ static inline bool isar_feature_aa64_mte(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, MTE) >= 2;
 }
 
+static inline bool isar_feature_aa64_sme(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SME) != 0;
+}
+
 static inline bool isar_feature_aa64_pmu_8_1(const ARMISARegisters *id)
 {
 return FIELD_EX64(id->id_aa64dfr0, ID_AA64DFR0, PMUVER) >= 4 &&
-- 
2.34.1
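
FIELD_EX64 here extracts the 4-bit ID_AA64PFR1.SME field, and the feature test is simply "field is nonzero". The extraction itself can be sketched generically; the SME bit position ([27:24]) below is an assumption taken from the architecture manual, not stated in the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Extract a len-bit field starting at bit shift, as the
 * registerfields.h FIELD_EX64 macro does for named fields. */
static uint64_t field_ex64(uint64_t reg, unsigned shift, unsigned len)
{
    return (reg >> shift) & ((1ull << len) - 1);
}
```

So an ID_AA64PFR1 value with bit 24 set reports SME level 1, which satisfies the `!= 0` check.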




[PATCH 13/71] target/arm: Split out load/store primitives to sve_ldst_internal.h

2022-06-02 Thread Richard Henderson
Begin creation of sve_ldst_internal.h by moving the primitives
that access host and tlb memory.

Reviewed-by: Peter Maydell 
Signed-off-by: Richard Henderson 
---
 target/arm/sve_ldst_internal.h | 127 +
 target/arm/sve_helper.c| 107 +--
 2 files changed, 128 insertions(+), 106 deletions(-)
 create mode 100644 target/arm/sve_ldst_internal.h

diff --git a/target/arm/sve_ldst_internal.h b/target/arm/sve_ldst_internal.h
new file mode 100644
index 00..ef9117e84c
--- /dev/null
+++ b/target/arm/sve_ldst_internal.h
@@ -0,0 +1,127 @@
+/*
+ * ARM SVE Load/Store Helpers
+ *
+ * Copyright (c) 2018-2022 Linaro
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <https://www.gnu.org/licenses/>.
+ */
+
+#ifndef TARGET_ARM_SVE_LDST_INTERNAL_H
+#define TARGET_ARM_SVE_LDST_INTERNAL_H
+
+#include "exec/cpu_ldst.h"
+
+/*
+ * Load one element into @vd + @reg_off from @host.
+ * The controlling predicate is known to be true.
+ */
+typedef void sve_ldst1_host_fn(void *vd, intptr_t reg_off, void *host);
+
+/*
+ * Load one element into @vd + @reg_off from (@env, @vaddr, @ra).
+ * The controlling predicate is known to be true.
+ */
+typedef void sve_ldst1_tlb_fn(CPUARMState *env, void *vd, intptr_t reg_off,
+  target_ulong vaddr, uintptr_t retaddr);
+
+/*
+ * Generate the above primitives.
+ */
+
+#define DO_LD_HOST(NAME, H, TYPEE, TYPEM, HOST)  \
+static inline void sve_##NAME##_host(void *vd, intptr_t reg_off, void *host) \
+{ TYPEM val = HOST(host); *(TYPEE *)(vd + H(reg_off)) = val; }
+
+#define DO_ST_HOST(NAME, H, TYPEE, TYPEM, HOST)  \
+static inline void sve_##NAME##_host(void *vd, intptr_t reg_off, void *host) \
+{ TYPEM val = *(TYPEE *)(vd + H(reg_off)); HOST(host, val); }
+
+#define DO_LD_TLB(NAME, H, TYPEE, TYPEM, TLB)  \
+static inline void sve_##NAME##_tlb(CPUARMState *env, void *vd,\
+intptr_t reg_off, target_ulong addr, uintptr_t ra) \
+{  \
+TYPEM val = TLB(env, useronly_clean_ptr(addr), ra);\
+*(TYPEE *)(vd + H(reg_off)) = val; \
+}
+
+#define DO_ST_TLB(NAME, H, TYPEE, TYPEM, TLB)  \
+static inline void sve_##NAME##_tlb(CPUARMState *env, void *vd,\
+intptr_t reg_off, target_ulong addr, uintptr_t ra) \
+{  \
+TYPEM val = *(TYPEE *)(vd + H(reg_off));   \
+TLB(env, useronly_clean_ptr(addr), val, ra);   \
+}
+
+#define DO_LD_PRIM_1(NAME, H, TE, TM)   \
+DO_LD_HOST(NAME, H, TE, TM, ldub_p) \
+DO_LD_TLB(NAME, H, TE, TM, cpu_ldub_data_ra)
+
+DO_LD_PRIM_1(ld1bb,  H1,   uint8_t,  uint8_t)
+DO_LD_PRIM_1(ld1bhu, H1_2, uint16_t, uint8_t)
+DO_LD_PRIM_1(ld1bhs, H1_2, uint16_t,  int8_t)
+DO_LD_PRIM_1(ld1bsu, H1_4, uint32_t, uint8_t)
+DO_LD_PRIM_1(ld1bss, H1_4, uint32_t,  int8_t)
+DO_LD_PRIM_1(ld1bdu, H1_8, uint64_t, uint8_t)
+DO_LD_PRIM_1(ld1bds, H1_8, uint64_t,  int8_t)
+
+#define DO_ST_PRIM_1(NAME, H, TE, TM)   \
+DO_ST_HOST(st1##NAME, H, TE, TM, stb_p) \
+DO_ST_TLB(st1##NAME, H, TE, TM, cpu_stb_data_ra)
+
+DO_ST_PRIM_1(bb,   H1,  uint8_t, uint8_t)
+DO_ST_PRIM_1(bh, H1_2, uint16_t, uint8_t)
+DO_ST_PRIM_1(bs, H1_4, uint32_t, uint8_t)
+DO_ST_PRIM_1(bd, H1_8, uint64_t, uint8_t)
+
+#define DO_LD_PRIM_2(NAME, H, TE, TM, LD) \
+DO_LD_HOST(ld1##NAME##_be, H, TE, TM, LD##_be_p)\
+DO_LD_HOST(ld1##NAME##_le, H, TE, TM, LD##_le_p)\
+DO_LD_TLB(ld1##NAME##_be, H, TE, TM, cpu_##LD##_be_data_ra) \
+DO_LD_TLB(ld1##NAME##_le, H, TE, TM, cpu_##LD##_le_data_ra)
+
+#define DO_ST_PRIM_2(NAME, H, TE, TM, ST) \
+DO_ST_HOST(st1##NAME##_be, H, TE, TM, ST##_be_p)\
+DO_ST_HOST(st1##NAME##_le, H, TE, TM, ST##_le_p)\
+DO_ST_TLB(st1##NAME##_be, H, TE, TM, cpu_##ST##_be_data_ra) \
+DO_ST_TLB(st1##NAME##_le, H, TE, TM, cpu_##ST##_le_data_ra)
+
+DO_LD_PRIM_2(hh,  H1_2, uint16_t, uint16_t, lduw)
+DO_LD_PRIM_2(hsu, H1_4, uint32_t, uint16_t, lduw)
+DO_LD_PRIM_2(hss, H1_4, uint32_t,  int16_t, lduw)

[PATCH 09/71] target/arm: Do not use aarch64_sve_zcr_get_valid_len in reset

2022-06-02 Thread Richard Henderson
We don't need to constrain the value set in zcr_el[1],
because it will be done by sve_zcr_len_for_el.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index d2bd74c2ed..0621944167 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -208,8 +208,7 @@ static void arm_cpu_reset(DeviceState *dev)
  CPACR_EL1, ZEN, 3);
 /* with reasonable vector length */
 if (cpu_isar_feature(aa64_sve, cpu)) {
-env->vfp.zcr_el[1] =
-aarch64_sve_zcr_get_valid_len(cpu, cpu->sve_default_vq - 1);
+env->vfp.zcr_el[1] = cpu->sve_default_vq - 1;
 }
 /*
  * Enable 48-bit address space (TODO: take reserved_va into account).
-- 
2.34.1




[PATCH 05/71] target/arm: Add el_is_in_host

2022-06-02 Thread Richard Henderson
This (newish) ARM pseudocode function is easier to work with
than open-coded tests for HCR_E2H etc.  Use of the function
will be staged into the code base in parts.

Reviewed-by: Peter Maydell 
Signed-off-by: Richard Henderson 
---
 target/arm/internals.h |  2 ++
 target/arm/helper.c| 28 
 2 files changed, 30 insertions(+)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index b654bee468..a73f2a94c5 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1328,6 +1328,8 @@ static inline void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu) { }
 void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu);
 #endif
 
+bool el_is_in_host(CPUARMState *env, int el);
+
 void aa32_max_features(ARMCPU *cpu);
 
 #endif
diff --git a/target/arm/helper.c b/target/arm/helper.c
index bcf48f1b11..839d6401b0 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -5292,6 +5292,34 @@ uint64_t arm_hcr_el2_eff(CPUARMState *env)
 return ret;
 }
 
+/*
+ * Corresponds to ARM pseudocode function ELIsInHost().
+ */
+bool el_is_in_host(CPUARMState *env, int el)
+{
+uint64_t mask;
+
+/*
+ * Since we only care about E2H and TGE, we can skip arm_hcr_el2_eff().
+ * Perform the simplest bit tests first, and validate EL2 afterward.
+ */
+if (el & 1) {
+return false; /* EL1 or EL3 */
+}
+
+/*
+ * Note that hcr_write() checks isar_feature_aa64_vh(),
+ * aka HaveVirtHostExt(), in allowing HCR_E2H to be set.
+ */
+mask = el ? HCR_E2H : HCR_E2H | HCR_TGE;
+if ((env->cp15.hcr_el2 & mask) != mask) {
+return false;
+}
+
+/* TGE and/or E2H set: double check those bits are currently legal. */
+return arm_is_el2_enabled(env) && arm_el_is_aa64(env, 2);
+}
+
 static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
uint64_t value)
 {
-- 
2.34.1
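
The two bit tests in the new function reduce to a small pure decision over (EL, HCR_EL2). A sketch with assumed HCR bit positions (E2H is bit 34 and TGE bit 27 per the architecture manual; the EL2-enabled and EL2-is-AArch64 checks are folded into one boolean here):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define HCR_TGE (1ull << 27)
#define HCR_E2H (1ull << 34)

/* Sketch of ELIsInHost(): only EL0 and EL2 can be "in host"; EL0
 * additionally needs TGE, and E2H plus a usable AArch64 EL2 are
 * required in either case. */
static bool el_is_in_host_sketch(int el, uint64_t hcr, bool el2_ok)
{
    uint64_t mask;

    if (el & 1) {
        return false;           /* EL1 or EL3 */
    }
    mask = el ? HCR_E2H : (HCR_E2H | HCR_TGE);
    return (hcr & mask) == mask && el2_ok;
}
```

This mirrors the "simplest bit tests first, validate EL2 afterward" ordering described in the comment.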




[PATCH 12/71] target/arm: Rename sve_zcr_len_for_el to sve_vqm1_for_el

2022-06-02 Thread Richard Henderson
This will be used for both Normal and Streaming SVE, and the value
does not necessarily come from ZCR_ELx.  While we're at it, emphasize
the units in which the value is returned.

Patch produced by
git grep -l sve_zcr_len_for_el | \
xargs -n1 sed -i 's/sve_zcr_len_for_el/sve_vqm1_for_el/g'

and then adding a function comment.

Reviewed-by: Peter Maydell 
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   | 11 ++-
 target/arm/arch_dump.c |  2 +-
 target/arm/cpu.c   |  2 +-
 target/arm/gdbstub64.c |  2 +-
 target/arm/helper.c| 12 ++--
 5 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index ef51c3774e..cb37787c35 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1132,7 +1132,16 @@ void aarch64_sync_64_to_32(CPUARMState *env);
 
 int fp_exception_el(CPUARMState *env, int cur_el);
 int sve_exception_el(CPUARMState *env, int cur_el);
-uint32_t sve_zcr_len_for_el(CPUARMState *env, int el);
+
+/**
+ * sve_vqm1_for_el:
+ * @env: CPUARMState
+ * @el: exception level
+ *
+ * Compute the current SVE vector length for @el, in units of
+ * Quadwords Minus 1 -- the same scale used for ZCR_ELx.LEN.
+ */
+uint32_t sve_vqm1_for_el(CPUARMState *env, int el);
 
 static inline bool is_a64(CPUARMState *env)
 {
diff --git a/target/arm/arch_dump.c b/target/arm/arch_dump.c
index 0184845310..b1f040e69f 100644
--- a/target/arm/arch_dump.c
+++ b/target/arm/arch_dump.c
@@ -166,7 +166,7 @@ static off_t sve_fpcr_offset(uint32_t vq)
 
 static uint32_t sve_current_vq(CPUARMState *env)
 {
-return sve_zcr_len_for_el(env, arm_current_el(env)) + 1;
+return sve_vqm1_for_el(env, arm_current_el(env)) + 1;
 }
 
 static size_t sve_size_vq(uint32_t vq)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 0621944167..1b5d535788 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -925,7 +925,7 @@ static void aarch64_cpu_dump_state(CPUState *cs, FILE *f, int flags)
  vfp_get_fpcr(env), vfp_get_fpsr(env));
 
 if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) == 0) {
-int j, zcr_len = sve_zcr_len_for_el(env, el);
+int j, zcr_len = sve_vqm1_for_el(env, el);
 
 for (i = 0; i <= FFR_PRED_NUM; i++) {
 bool eol;
diff --git a/target/arm/gdbstub64.c b/target/arm/gdbstub64.c
index 596878666d..07a6746944 100644
--- a/target/arm/gdbstub64.c
+++ b/target/arm/gdbstub64.c
@@ -152,7 +152,7 @@ int arm_gdb_get_svereg(CPUARMState *env, GByteArray *buf, int reg)
  * We report in Vector Granules (VG) which is 64bit in a Z reg
  * while the ZCR works in Vector Quads (VQ) which is 128bit chunks.
  */
-int vq = sve_zcr_len_for_el(env, arm_current_el(env)) + 1;
+int vq = sve_vqm1_for_el(env, arm_current_el(env)) + 1;
 return gdb_get_reg64(buf, vq * 2);
 }
 default:
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 7b6f31e9c8..cb44d528c0 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6225,7 +6225,7 @@ int sve_exception_el(CPUARMState *env, int el)
 /*
  * Given that SVE is enabled, return the vector length for EL.
  */
-uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
+uint32_t sve_vqm1_for_el(CPUARMState *env, int el)
 {
 ARMCPU *cpu = env_archcpu(env);
 uint32_t len = cpu->sve_max_vq - 1;
@@ -6248,7 +6248,7 @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
   uint64_t value)
 {
 int cur_el = arm_current_el(env);
-int old_len = sve_zcr_len_for_el(env, cur_el);
+int old_len = sve_vqm1_for_el(env, cur_el);
 int new_len;
 
 /* Bits other than [3:0] are RAZ/WI.  */
@@ -6259,7 +6259,7 @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
  * Because we arrived here, we know both FP and SVE are enabled;
  * otherwise we would have trapped access to the ZCR_ELn register.
  */
-new_len = sve_zcr_len_for_el(env, cur_el);
+new_len = sve_vqm1_for_el(env, cur_el);
 if (new_len < old_len) {
 aarch64_sve_narrow_vq(env, new_len + 1);
 }
@@ -13683,7 +13683,7 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
 sve_el = 0;
 }
 } else if (sve_el == 0) {
-DP_TBFLAG_A64(flags, VL, sve_zcr_len_for_el(env, el));
+DP_TBFLAG_A64(flags, VL, sve_vqm1_for_el(env, el));
 }
 DP_TBFLAG_A64(flags, SVEEXC_EL, sve_el);
 }
@@ -14049,10 +14049,10 @@ void aarch64_sve_change_el(CPUARMState *env, int old_el,
  */
 old_a64 = old_el ? arm_el_is_aa64(env, old_el) : el0_a64;
 old_len = (old_a64 && !sve_exception_el(env, old_el)
-   ? sve_zcr_len_for_el(env, old_el) : 0);
+   ? sve_vqm1_for_el(env, old_el) : 0);
 new_a64 = new_el ? arm_el_is_aa64(env, new_el) : el0_a64;
 new_len = (new_a64 && !sve_exception_el(env, new_el)
-   ? sve_zcr_len_for_el(env, new_el) : 0);
+  

[PATCH 04/71] target/arm: Remove fp checks from sve_exception_el

2022-06-02 Thread Richard Henderson
Instead of checking these bits in fp_exception_el and
also in sve_exception_el, document that we must compare
the results.  The only place where we have not already
checked that FP EL is zero is in rebuild_hflags_a64.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 58 +++--
 1 file changed, 19 insertions(+), 39 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 8ace3ad533..bcf48f1b11 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6139,11 +6139,15 @@ static const ARMCPRegInfo minimal_ras_reginfo[] = {
   .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.vsesr_el2) },
 };
 
-/* Return the exception level to which exceptions should be taken
- * via SVEAccessTrap.  If an exception should be routed through
- * AArch64.AdvSIMDFPAccessTrap, return 0; fp_exception_el should
- * take care of raising that exception.
- * C.f. the ARM pseudocode function CheckSVEEnabled.
+/*
+ * Return the exception level to which exceptions should be taken
+ * via SVEAccessTrap.  This excludes the check for whether the exception
+ * should be routed through AArch64.AdvSIMDFPAccessTrap.  That can easily
+ * be found by testing 0 < fp_exception_el < sve_exception_el.
+ *
+ * C.f. the ARM pseudocode function CheckSVEEnabled.  Note that the
+ * pseudocode does *not* separate out the FP trap checks, but has them
+ * all in one function.
  */
 int sve_exception_el(CPUARMState *env, int el)
 {
@@ -6161,18 +6165,6 @@ int sve_exception_el(CPUARMState *env, int el)
 case 2:
 return 1;
 }
-
-/* Check CPACR.FPEN.  */
-switch (FIELD_EX64(env->cp15.cpacr_el1, CPACR_EL1, FPEN)) {
-case 1:
-if (el != 0) {
-break;
-}
-/* fall through */
-case 0:
-case 2:
-return 0;
-}
 }
 
 /*
@@ -6190,24 +6182,10 @@ int sve_exception_el(CPUARMState *env, int el)
 case 2:
 return 2;
 }
-
-switch (FIELD_EX32(env->cp15.cptr_el[2], CPTR_EL2, FPEN)) {
-case 1:
-if (el == 2 || !(hcr_el2 & HCR_TGE)) {
-break;
-}
-/* fall through */
-case 0:
-case 2:
-return 0;
-}
 } else if (arm_is_el2_enabled(env)) {
 if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TZ)) {
 return 2;
 }
-if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TFP)) {
-return 0;
-}
 }
 }
 
@@ -13683,19 +13661,21 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
 
 if (cpu_isar_feature(aa64_sve, env_archcpu(env))) {
 int sve_el = sve_exception_el(env, el);
-uint32_t zcr_len;
 
 /*
- * If SVE is disabled, but FP is enabled,
- * then the effective len is 0.
+ * If either FP or SVE are disabled, translator does not need len.
+ * If SVE EL > FP EL, FP exception has precedence, and translator
+ * does not need SVE EL.  Save potential re-translations by forcing
+ * the unneeded data to zero.
  */
-if (sve_el != 0 && fp_el == 0) {
-zcr_len = 0;
-} else {
-zcr_len = sve_zcr_len_for_el(env, el);
+if (fp_el != 0) {
+if (sve_el > fp_el) {
+sve_el = 0;
+}
+} else if (sve_el == 0) {
+DP_TBFLAG_A64(flags, VL, sve_zcr_len_for_el(env, el));
 }
 DP_TBFLAG_A64(flags, SVEEXC_EL, sve_el);
-DP_TBFLAG_A64(flags, VL, zcr_len);
 }
 
 sctlr = regime_sctlr(env, stage1);
-- 
2.34.1
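
The new invariant — an FP trap takes precedence when `0 < fp_exception_el < sve_exception_el`, and the translator zeroes whichever result it does not need — can be expressed compactly. A sketch of the comparison the rebuild_hflags_a64 hunk now performs, not code from the patch:

```c
#include <assert.h>

/* Mirror the rebuild_hflags_a64 hunk: when FP traps (fp_el != 0) at
 * or below the SVE trap EL, the SVE EL is irrelevant and forced to 0
 * to avoid spurious re-translations from unneeded flag bits. */
static int effective_sve_el(int sve_el, int fp_el)
{
    if (fp_el != 0 && sve_el > fp_el) {
        return 0;
    }
    return sve_el;
}
```

E.g. with sve_el = 2 and fp_el = 1, the FP trap fires first, so the SVE flag bits are zeroed.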




[PATCH 11/71] target/arm: Use uint32_t instead of bitmap for sve vq's

2022-06-02 Thread Richard Henderson
The map need only hold 15 bits; a full bitmap is over-complicated.
We can simplify operations quite a bit with plain logical ops.

The introduction of SVE_VQ_POW2_MAP eliminates the need for
looping in order to search for powers of two.  Simply perform
the logical ops and use count leading or trailing zeros as
required to find the result.

Reviewed-by: Peter Maydell 
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |   6 +--
 target/arm/internals.h |   5 ++
 target/arm/kvm_arm.h   |   7 ++-
 target/arm/cpu64.c | 117 -
 target/arm/helper.c|   9 +---
 target/arm/kvm64.c |  36 +++--
 6 files changed, 75 insertions(+), 105 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 830d358d46..ef51c3774e 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1041,9 +1041,9 @@ struct ArchCPU {
  * Bits set in sve_vq_supported represent valid vector lengths for
  * the CPU type.
  */
-DECLARE_BITMAP(sve_vq_map, ARM_MAX_VQ);
-DECLARE_BITMAP(sve_vq_init, ARM_MAX_VQ);
-DECLARE_BITMAP(sve_vq_supported, ARM_MAX_VQ);
+uint32_t sve_vq_map;
+uint32_t sve_vq_init;
+uint32_t sve_vq_supported;
 
 /* Generic timer counter frequency, in Hz */
 uint64_t gt_cntfrq_hz;
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 4dcdca918b..8bac570475 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1321,4 +1321,9 @@ bool el_is_in_host(CPUARMState *env, int el);
 
 void aa32_max_features(ARMCPU *cpu);
 
+/* Powers of 2 for sve_vq_map et al. */
+#define SVE_VQ_POW2_MAP \
+((1 << (1 - 1)) | (1 << (2 - 1)) |  \
+ (1 << (4 - 1)) | (1 << (8 - 1)) | (1 << (16 - 1)))
+
 #endif
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
index b7f78b5215..99017b635c 100644
--- a/target/arm/kvm_arm.h
+++ b/target/arm/kvm_arm.h
@@ -239,13 +239,12 @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf);
 /**
  * kvm_arm_sve_get_vls:
  * @cs: CPUState
- * @map: bitmap to fill in
  *
  * Get all the SVE vector lengths supported by the KVM host, setting
  * the bits corresponding to their length in quadwords minus one
- * (vq - 1) in @map up to ARM_MAX_VQ.
+ * (vq - 1) up to ARM_MAX_VQ.  Return the resulting map.
  */
-void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map);
+uint32_t kvm_arm_sve_get_vls(CPUState *cs);
 
 /**
  * kvm_arm_set_cpu_features_from_host:
@@ -439,7 +438,7 @@ static inline void kvm_arm_steal_time_finalize(ARMCPU *cpu, Error **errp)
 g_assert_not_reached();
 }
 
-static inline void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map)
+static inline uint32_t kvm_arm_sve_get_vls(CPUState *cs)
 {
 g_assert_not_reached();
 }
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index 3ff9219ca3..51c5d8d4bc 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -355,8 +355,11 @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
  * any of the above.  Finally, if SVE is not disabled, then at least one
  * vector length must be enabled.
  */
-DECLARE_BITMAP(tmp, ARM_MAX_VQ);
-uint32_t vq, max_vq = 0;
+uint32_t vq_map = cpu->sve_vq_map;
+uint32_t vq_init = cpu->sve_vq_init;
+uint32_t vq_supported;
+uint32_t vq_mask = 0;
+uint32_t tmp, vq, max_vq = 0;
 
 /*
  * CPU models specify a set of supported vector lengths which are
@@ -364,10 +367,16 @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
  * in the supported bitmap results in an error.  When KVM is enabled we
  * fetch the supported bitmap from the host.
  */
-if (kvm_enabled() && kvm_arm_sve_supported()) {
-kvm_arm_sve_get_vls(CPU(cpu), cpu->sve_vq_supported);
-} else if (kvm_enabled()) {
-assert(!cpu_isar_feature(aa64_sve, cpu));
+if (kvm_enabled()) {
+if (kvm_arm_sve_supported()) {
+cpu->sve_vq_supported = kvm_arm_sve_get_vls(CPU(cpu));
+vq_supported = cpu->sve_vq_supported;
+} else {
+assert(!cpu_isar_feature(aa64_sve, cpu));
+vq_supported = 0;
+}
+} else {
+vq_supported = cpu->sve_vq_supported;
 }
 
 /*
@@ -375,8 +384,9 @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
  * From the properties, sve_vq_map implies sve_vq_init.
  * Check first for any sve enabled.
  */
-if (!bitmap_empty(cpu->sve_vq_map, ARM_MAX_VQ)) {
-max_vq = find_last_bit(cpu->sve_vq_map, ARM_MAX_VQ) + 1;
+if (vq_map != 0) {
+max_vq = 32 - clz32(vq_map);
+vq_mask = MAKE_64BIT_MASK(0, max_vq);
 
 if (cpu->sve_max_vq && max_vq > cpu->sve_max_vq) {
 error_setg(errp, "cannot enable sve%d", max_vq * 128);
@@ -392,15 +402,10 @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
  * For KVM we have to automatically enable all supported uninitialized
  * lengths, even when the smaller lengths are not all 

[PATCH 08/71] target/arm: Hoist arm_is_el2_enabled check in sve_exception_el

2022-06-02 Thread Richard Henderson
This check is buried within arm_hcr_el2_eff(), but since we
have to have the explicit check for CPTR_EL2.TZ, we might as
well just check it once at the beginning of the block.

Once this is done, we can test HCR_EL2.{E2H,TGE} directly,
rather than going through arm_hcr_el2_eff().

Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 13 +
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 7319c91fc2..dc8f1e44cc 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6193,15 +6193,12 @@ int sve_exception_el(CPUARMState *env, int el)
 }
 }
 
-/*
- * CPTR_EL2 changes format with HCR_EL2.E2H (regardless of TGE).
- */
-if (el <= 2) {
-uint64_t hcr_el2 = arm_hcr_el2_eff(env);
-if (hcr_el2 & HCR_E2H) {
+if (el <= 2 && arm_is_el2_enabled(env)) {
+/* CPTR_EL2 changes format with HCR_EL2.E2H (regardless of TGE). */
+if (env->cp15.hcr_el2 & HCR_E2H) {
 switch (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, ZEN)) {
 case 1:
-if (el != 0 || !(hcr_el2 & HCR_TGE)) {
+if (el != 0 || !(env->cp15.hcr_el2 & HCR_TGE)) {
 break;
 }
 /* fall through */
@@ -6209,7 +6206,7 @@ int sve_exception_el(CPUARMState *env, int el)
 case 2:
 return 2;
 }
-} else if (arm_is_el2_enabled(env)) {
+} else {
 if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TZ)) {
 return 2;
 }
-- 
2.34.1




[PATCH 14/71] target/arm: Export sve contiguous ldst support functions

2022-06-02 Thread Richard Henderson
Export all of the support functions for performing bulk
fault analysis on a set of elements at contiguous addresses
controlled by a predicate.

Reviewed-by: Peter Maydell 
Signed-off-by: Richard Henderson 
---
 target/arm/sve_ldst_internal.h | 94 ++
 target/arm/sve_helper.c| 87 ++-
 2 files changed, 111 insertions(+), 70 deletions(-)

diff --git a/target/arm/sve_ldst_internal.h b/target/arm/sve_ldst_internal.h
index ef9117e84c..b5c473fc48 100644
--- a/target/arm/sve_ldst_internal.h
+++ b/target/arm/sve_ldst_internal.h
@@ -124,4 +124,98 @@ DO_ST_PRIM_2(dd, H1_8, uint64_t, uint64_t, stq)
 #undef DO_LD_PRIM_2
 #undef DO_ST_PRIM_2
 
+/*
+ * Resolve the guest virtual address to info->host and info->flags.
+ * If @nofault, return false if the page is invalid, otherwise
+ * exit via page fault exception.
+ */
+
+typedef struct {
+void *host;
+int flags;
+MemTxAttrs attrs;
+} SVEHostPage;
+
+bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
+target_ulong addr, int mem_off, MMUAccessType access_type,
+int mmu_idx, uintptr_t retaddr);
+
+/*
+ * Analyse contiguous data, protected by a governing predicate.
+ */
+
+typedef enum {
+FAULT_NO,
+FAULT_FIRST,
+FAULT_ALL,
+} SVEContFault;
+
+typedef struct {
+/*
+ * First and last element wholly contained within the two pages.
+ * mem_off_first[0] and reg_off_first[0] are always set >= 0.
+ * reg_off_last[0] may be < 0 if the first element crosses pages.
+ * All of mem_off_first[1], reg_off_first[1] and reg_off_last[1]
+ * are set >= 0 only if there are complete elements on a second page.
+ *
+ * The reg_off_* offsets are relative to the internal vector register.
+ * The mem_off_first offset is relative to the memory address; the
+ * two offsets are different when a load operation extends, a store
+ * operation truncates, or for multi-register operations.
+ */
+int16_t mem_off_first[2];
+int16_t reg_off_first[2];
+int16_t reg_off_last[2];
+
+/*
+ * One element that is misaligned and spans both pages,
+ * or -1 if there is no such active element.
+ */
+int16_t mem_off_split;
+int16_t reg_off_split;
+
+/*
+ * The byte offset at which the entire operation crosses a page boundary.
+ * Set >= 0 if and only if the entire operation spans two pages.
+ */
+int16_t page_split;
+
+/* TLB data for the two pages. */
+SVEHostPage page[2];
+} SVEContLdSt;
+
+/*
+ * Find first active element on each page, and a loose bound for the
+ * final element on each page.  Identify any single element that spans
+ * the page boundary.  Return true if there are any active elements.
+ */
+bool sve_cont_ldst_elements(SVEContLdSt *info, target_ulong addr, uint64_t *vg,
+intptr_t reg_max, int esz, int msize);
+
+/*
+ * Resolve the guest virtual addresses to info->page[].
+ * Control the generation of page faults with @fault.  Return false if
+ * there is no work to do, which can only happen with @fault == FAULT_NO.
+ */
+bool sve_cont_ldst_pages(SVEContLdSt *info, SVEContFault fault,
+ CPUARMState *env, target_ulong addr,
+ MMUAccessType access_type, uintptr_t retaddr);
+
+#ifdef CONFIG_USER_ONLY
+static inline void
+sve_cont_ldst_watchpoints(SVEContLdSt *info, CPUARMState *env, uint64_t *vg,
+  target_ulong addr, int esize, int msize,
+  int wp_access, uintptr_t retaddr)
+{ }
+#else
+void sve_cont_ldst_watchpoints(SVEContLdSt *info, CPUARMState *env,
+   uint64_t *vg, target_ulong addr,
+   int esize, int msize, int wp_access,
+   uintptr_t retaddr);
+#endif
+
+void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env, uint64_t *vg,
+ target_ulong addr, int esize, int msize,
+ uint32_t mtedesc, uintptr_t ra);
+
 #endif /* TARGET_ARM_SVE_LDST_INTERNAL_H */
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 0c6dde00aa..8cd371e3e3 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -5341,16 +5341,9 @@ static intptr_t find_next_active(uint64_t *vg, intptr_t reg_off,
  * exit via page fault exception.
  */
 
-typedef struct {
-void *host;
-int flags;
-MemTxAttrs attrs;
-} SVEHostPage;
-
-static bool sve_probe_page(SVEHostPage *info, bool nofault,
-   CPUARMState *env, target_ulong addr,
-   int mem_off, MMUAccessType access_type,
-   int mmu_idx, uintptr_t retaddr)
+bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
+target_ulong addr, int mem_off, MMUAccessType access_type,
+int mmu_idx, uintptr_t 

[PATCH 01/71] target/arm: Rename TBFLAG_A64 ZCR_LEN to VL

2022-06-02 Thread Richard Henderson
With SME, the vector length does not only come from ZCR_ELx.
Comment that this is either NVL or SVL, like the pseudocode.

Reviewed-by: Peter Maydell 
Signed-off-by: Richard Henderson 
---
v2: Renamed from SVE_LEN to VL.
---
 target/arm/cpu.h   | 3 ++-
 target/arm/translate-a64.h | 2 +-
 target/arm/translate.h | 2 +-
 target/arm/helper.c| 2 +-
 target/arm/translate-a64.c | 2 +-
 target/arm/translate-sve.c | 2 +-
 6 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index c1865ad5da..015ce12fe2 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3241,7 +3241,8 @@ FIELD(TBFLAG_M32, MVE_NO_PRED, 5, 1)/* Not cached. */
  */
 FIELD(TBFLAG_A64, TBII, 0, 2)
 FIELD(TBFLAG_A64, SVEEXC_EL, 2, 2)
-FIELD(TBFLAG_A64, ZCR_LEN, 4, 4)
+/* The current vector length, either NVL or SVL. */
+FIELD(TBFLAG_A64, VL, 4, 4)
 FIELD(TBFLAG_A64, PAUTH_ACTIVE, 8, 1)
 FIELD(TBFLAG_A64, BT, 9, 1)
 FIELD(TBFLAG_A64, BTYPE, 10, 2) /* Not cached. */
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index f2e8ee0ee1..dbc917ee65 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -104,7 +104,7 @@ static inline TCGv_ptr vec_full_reg_ptr(DisasContext *s, int regno)
 /* Return the byte size of the "whole" vector register, VL / 8.  */
 static inline int vec_full_reg_size(DisasContext *s)
 {
-return s->sve_len;
+return s->vl;
 }
 
 bool disas_sve(DisasContext *, uint32_t);
diff --git a/target/arm/translate.h b/target/arm/translate.h
index 9f0bb270c5..f473a21ed4 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -42,7 +42,7 @@ typedef struct DisasContext {
 bool ns;/* Use non-secure CPREG bank on access */
 int fp_excp_el; /* FP exception EL or 0 if enabled */
 int sve_excp_el; /* SVE exception EL or 0 if enabled */
-int sve_len; /* SVE vector length in bytes */
+int vl;  /* current vector length in bytes */
 /* Flag indicating that exceptions from secure mode are routed to EL3. */
 bool secure_routed_to_el3;
 bool vfp_enabled; /* FP enabled via FPSCR.EN */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 40da63913c..960899022d 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -13696,7 +13696,7 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
 zcr_len = sve_zcr_len_for_el(env, el);
 }
 DP_TBFLAG_A64(flags, SVEEXC_EL, sve_el);
-DP_TBFLAG_A64(flags, ZCR_LEN, zcr_len);
+DP_TBFLAG_A64(flags, VL, zcr_len);
 }
 
 sctlr = regime_sctlr(env, stage1);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 935e1929bb..d438fb89e7 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -14608,7 +14608,7 @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
 dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
 dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
 dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
-dc->sve_len = (EX_TBFLAG_A64(tb_flags, ZCR_LEN) + 1) * 16;
+dc->vl = (EX_TBFLAG_A64(tb_flags, VL) + 1) * 16;
 dc->pauth_active = EX_TBFLAG_A64(tb_flags, PAUTH_ACTIVE);
 dc->bt = EX_TBFLAG_A64(tb_flags, BT);
 dc->btype = EX_TBFLAG_A64(tb_flags, BTYPE);
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 836511d719..67761bf2cc 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -111,7 +111,7 @@ static inline int pred_full_reg_offset(DisasContext *s, int regno)
 /* Return the byte size of the whole predicate register, VL / 64.  */
 static inline int pred_full_reg_size(DisasContext *s)
 {
-return s->sve_len >> 3;
+return s->vl >> 3;
 }
 
 /* Round up the size of a register to a size allowed by
-- 
2.34.1




[PATCH 07/71] target/arm: Use el_is_in_host for sve_exception_el

2022-06-02 Thread Richard Henderson
The ARM pseudocode function CheckNormalSVEEnabled uses this
predicate now, and I think it's a bit clearer.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 135c3e790c..7319c91fc2 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6180,9 +6180,7 @@ static const ARMCPRegInfo minimal_ras_reginfo[] = {
 int sve_exception_el(CPUARMState *env, int el)
 {
 #ifndef CONFIG_USER_ONLY
-uint64_t hcr_el2 = arm_hcr_el2_eff(env);
-
-if (el <= 1 && (hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
+if (el <= 1 && !el_is_in_host(env, el)) {
 switch (FIELD_EX64(env->cp15.cpacr_el1, CPACR_EL1, ZEN)) {
 case 1:
 if (el != 0) {
@@ -6199,6 +6197,7 @@ int sve_exception_el(CPUARMState *env, int el)
  * CPTR_EL2 changes format with HCR_EL2.E2H (regardless of TGE).
  */
 if (el <= 2) {
+uint64_t hcr_el2 = arm_hcr_el2_eff(env);
 if (hcr_el2 & HCR_E2H) {
 switch (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, ZEN)) {
 case 1:
-- 
2.34.1




[PATCH 03/71] target/arm: Remove route_to_el2 check from sve_exception_el

2022-06-02 Thread Richard Henderson
We handle this routing in raise_exception.  Promoting the value early
means that we can't directly compare FPEXC_EL and SVEEXC_EL.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 960899022d..8ace3ad533 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6159,8 +6159,7 @@ int sve_exception_el(CPUARMState *env, int el)
 /* fall through */
 case 0:
 case 2:
-/* route_to_el2 */
-return hcr_el2 & HCR_TGE ? 2 : 1;
+return 1;
 }
 
 /* Check CPACR.FPEN.  */
-- 
2.34.1




[PATCH 00/71] target/arm: Scalable Matrix Extension

2022-06-02 Thread Richard Henderson
Implement FEAT_SME and most optional extensions, which are really
pretty trivial compared to the main feature.  FEAT_EBF16 is still
on the to-do list.

Includes linux-user support, based on Mark Brown's code that has been
merged into linux 5.19-rc1.

Mark's kselftest suite is in fact all of the testing that I have done,
since the current public Arm FVP does not include support for SME.
On the bright side, Mark's tests handle all of the new mode switching,
which IMO is the hairy part, and wouldn't be tested by RISU anyway.

All prerequisites are either merged or dropped.

Supersedes: 20220527180623.185261-1-richard.hender...@linaro.org
("[PATCH v3 00/15] target/arm: SME prep patches")


r~


Richard Henderson (71):
  target/arm: Rename TBFLAG_A64 ZCR_LEN to VL
  linux-user/aarch64: Introduce sve_vq_cached
  target/arm: Remove route_to_el2 check from sve_exception_el
  target/arm: Remove fp checks from sve_exception_el
  target/arm: Add el_is_in_host
  target/arm: Use el_is_in_host for sve_zcr_len_for_el
  target/arm: Use el_is_in_host for sve_exception_el
  target/arm: Hoist arm_is_el2_enabled check in sve_exception_el
  target/arm: Do not use aarch64_sve_zcr_get_valid_len in reset
  target/arm: Merge aarch64_sve_zcr_get_valid_len into caller
  target/arm: Use uint32_t instead of bitmap for sve vq's
  target/arm: Rename sve_zcr_len_for_el to sve_vqm1_for_el
  target/arm: Split out load/store primitives to sve_ldst_internal.h
  target/arm: Export sve contiguous ldst support functions
  target/arm: Move expand_pred_b to vec_internal.h
  target/arm: Use expand_pred_b in mve_helper.c
  target/arm: Move expand_pred_h to vec_internal.h
  target/arm: Export bfdotadd from vec_helper.c
  target/arm: Add isar_feature_aa64_sme
  target/arm: Add ID_AA64SMFR0_EL1
  target/arm: Implement TPIDR2_EL0
  target/arm: Add SMEEXC_EL to TB flags
  target/arm: Add syn_smetrap
  target/arm: Add ARM_CP_SME
  target/arm: Add SVCR
  target/arm: Add SMCR_ELx
  target/arm: Add SMIDR_EL1, SMPRI_EL1, SMPRIMAP_EL2
  target/arm: Add PSTATE.{SM,ZA} to TB flags
  target/arm: Add the SME ZA storage to CPUARMState
  target/arm: Implement SMSTART, SMSTOP
  target/arm: Move error for sve%d property to arm_cpu_sve_finalize
  target/arm: Create ARMVQMap
  target/arm: Generalize cpu_arm_{get,set}_vq
  target/arm: Generalize cpu_arm_{get,set}_default_vec_len
  target/arm: Move arm_cpu_*_finalize to internals.h
  target/arm: Unexport aarch64_add_*_properties
  target/arm: Add cpu properties for SME
  target/arm: Introduce sve_vqm1_for_el_sm
  target/arm: Add SVL to TB flags
  target/arm: Move pred_{full,gvec}_reg_{offset,size} to translate-a64.h
  target/arm: Add infrastructure for disas_sme
  target/arm: Trap AdvSIMD usage when Streaming SVE is active
  target/arm: Implement SME RDSVL, ADDSVL, ADDSPL
  target/arm: Implement SME ZERO
  target/arm: Implement SME MOVA
  target/arm: Implement SME LD1, ST1
  target/arm: Export unpredicated ld/st from translate-sve.c
  target/arm: Implement SME LDR, STR
  target/arm: Implement SME ADDHA, ADDVA
  target/arm: Implement FMOPA, FMOPS (non-widening)
  target/arm: Implement BFMOPA, BFMOPS
  target/arm: Implement FMOPA, FMOPS (widening)
  target/arm: Implement SME integer outer product
  target/arm: Implement PSEL
  target/arm: Implement REVD
  target/arm: Implement SCLAMP, UCLAMP
  target/arm: Reset streaming sve state on exception boundaries
  target/arm: Enable SME for -cpu max
  linux-user/aarch64: Clear tpidr2_el0 if CLONE_SETTLS
  linux-user/aarch64: Reset PSTATE.SM on syscalls
  linux-user/aarch64: Add SM bit to SVE signal context
  linux-user/aarch64: Tidy target_restore_sigframe error return
  linux-user/aarch64: Do not allow duplicate or short sve records
  linux-user/aarch64: Verify extra record lock succeeded
  linux-user/aarch64: Move sve record checks into restore
  linux-user/aarch64: Implement SME signal handling
  linux-user: Rename sve prctls
  linux-user/aarch64: Implement PR_SME_GET_VL, PR_SME_SET_VL
  target/arm: Only set ZEN in reset if SVE present
  target/arm: Enable SME for user-only
  linux-user/aarch64: Add SME related hwcap entries

 docs/system/arm/emulation.rst |4 +
 linux-user/aarch64/target_cpu.h   |5 +-
 linux-user/aarch64/target_prctl.h |   76 +-
 target/arm/cpregs.h   |5 +
 target/arm/cpu.h  |  146 +++-
 target/arm/helper-sme.h   |  146 
 target/arm/helper-sve.h   |4 +
 target/arm/helper.h   |   19 +
 target/arm/internals.h|   22 +-
 target/arm/kvm_arm.h  |7 +-
 target/arm/sve_ldst_internal.h|  221 ++
 target/arm/syndrome.h |   13 +
 target/arm/translate-a64.h|   55 +-
 target/arm/translate.h|   16 +-
 target/arm/vec_internal.h |   28 +-
 target/arm/sme-fa64.decode|   89 +++
 target/arm/sme.decode |   88 +++
 target/arm/sve.decode |   31 +-
 linux-user/aarch64/cpu_loop.c |9 +
 

[PATCH 02/71] linux-user/aarch64: Introduce sve_vq_cached

2022-06-02 Thread Richard Henderson
Add an interface function to extract the digested vector length
rather than the raw zcr_el[1] value.  This fixes an incorrect
return from do_prctl_set_vl where we didn't take into account
the set of vector lengths supported by the cpu.

Signed-off-by: Richard Henderson 
---
v2: Add sve_vq_cached rather than directly access hflags.
---
 linux-user/aarch64/target_prctl.h | 20 +---
 target/arm/cpu.h  | 11 +++
 linux-user/aarch64/signal.c   |  4 ++--
 3 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/linux-user/aarch64/target_prctl.h b/linux-user/aarch64/target_prctl.h
index 3f5a5d3933..fdd973e07d 100644
--- a/linux-user/aarch64/target_prctl.h
+++ b/linux-user/aarch64/target_prctl.h
@@ -10,7 +10,7 @@ static abi_long do_prctl_get_vl(CPUArchState *env)
 {
 ARMCPU *cpu = env_archcpu(env);
 if (cpu_isar_feature(aa64_sve, cpu)) {
-return ((cpu->env.vfp.zcr_el[1] & 0xf) + 1) * 16;
+return sve_vq_cached(env) * 16;
 }
 return -TARGET_EINVAL;
 }
@@ -25,18 +25,24 @@ static abi_long do_prctl_set_vl(CPUArchState *env, abi_long arg2)
  */
 if (cpu_isar_feature(aa64_sve, env_archcpu(env))
 && arg2 >= 0 && arg2 <= 512 * 16 && !(arg2 & 15)) {
-ARMCPU *cpu = env_archcpu(env);
 uint32_t vq, old_vq;
 
-old_vq = (env->vfp.zcr_el[1] & 0xf) + 1;
-vq = MAX(arg2 / 16, 1);
-vq = MIN(vq, cpu->sve_max_vq);
+old_vq = sve_vq_cached(env);
 
+/*
+ * Bound the value of arg2, so that we know that it fits into
+ * the 4-bit field in ZCR_EL1.  Rely on the hflags rebuild to
+ * sort out the length supported by the cpu.
+ */
+vq = MAX(arg2 / 16, 1);
+vq = MIN(vq, ARM_MAX_VQ);
+env->vfp.zcr_el[1] = vq - 1;
+arm_rebuild_hflags(env);
+
+vq = sve_vq_cached(env);
 if (vq < old_vq) {
 aarch64_sve_narrow_vq(env, vq);
 }
-env->vfp.zcr_el[1] = vq - 1;
-arm_rebuild_hflags(env);
 return vq * 16;
 }
 return -TARGET_EINVAL;
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 015ce12fe2..830d358d46 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3286,6 +3286,17 @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
 return EX_TBFLAG_ANY(env->hflags, MMUIDX);
 }
 
+/**
+ * sve_vq_cached
+ * @env: the cpu context
+ *
+ * Return the VL cached within env->hflags, in units of quadwords.
+ */
+static inline int sve_vq_cached(CPUARMState *env)
+{
+return EX_TBFLAG_A64(env->hflags, VL) + 1;
+}
+
 static inline bool bswap_code(bool sctlr_b)
 {
 #ifdef CONFIG_USER_ONLY
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
index 7de4c96eb9..30e89f67c8 100644
--- a/linux-user/aarch64/signal.c
+++ b/linux-user/aarch64/signal.c
@@ -315,7 +315,7 @@ static int target_restore_sigframe(CPUARMState *env,
 
 case TARGET_SVE_MAGIC:
 if (cpu_isar_feature(aa64_sve, env_archcpu(env))) {
-vq = (env->vfp.zcr_el[1] & 0xf) + 1;
+vq = sve_vq_cached(env);
 sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
 if (!sve && size == sve_size) {
 sve = (struct target_sve_context *)ctx;
@@ -434,7 +434,7 @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
 
 /* SVE state needs saving only if it exists.  */
 if (cpu_isar_feature(aa64_sve, env_archcpu(env))) {
-vq = (env->vfp.zcr_el[1] & 0xf) + 1;
+vq = sve_vq_cached(env);
 sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
 sve_ofs = alloc_sigframe_space(sve_size, );
 }
-- 
2.34.1




[PATCH 1/1] hw/ide/core: Accumulate PIO output within io_buffer prior to pwritev

2022-06-02 Thread Lev Kujawski
Delay writing PIO output until io_buffer is filled or the ATA command
completes, rather than whenever an interrupt is generated.  As an example
of the new behavior, issuing WRITE SECTOR(S) with a sector count of
256 will result in only a single call to blk_aio_pwritev rather than
after each of the 256 sectors are transferred.  Up to a 50% increase
in PIO throughput can be achieved thanks to the reduction in system
call overhead and writing larger blocks (up to 128 KiB, with the size
limited by IDE_DMA_BUF_SECTORS).

Signed-off-by: Lev Kujawski 
---
 hw/ide/core.c | 62 ---
 include/hw/ide/internal.h |  1 +
 2 files changed, 39 insertions(+), 24 deletions(-)

diff --git a/hw/ide/core.c b/hw/ide/core.c
index 5a24547e49..b178584bc3 100644
--- a/hw/ide/core.c
+++ b/hw/ide/core.c
@@ -1025,23 +1025,20 @@ static void ide_sector_write_cb(void *opaque, int ret)
 
 block_acct_done(blk_get_stats(s->blk), >acct);
 
-n = s->nsector;
-if (n > s->req_nb_sectors) {
-n = s->req_nb_sectors;
-}
-s->nsector -= n;
-
+n = (s->data_end - s->io_buffer) >> BDRV_SECTOR_BITS;
 ide_set_sector(s, ide_get_sector(s) + n);
+n %= s->req_nb_sectors;
+s->nsector -= n ? n : s->req_nb_sectors;
+
 if (s->nsector == 0) {
 /* no more sectors to write */
 ide_transfer_stop(s);
 } else {
-int n1 = s->nsector;
-if (n1 > s->req_nb_sectors) {
-n1 = s->req_nb_sectors;
-}
-ide_transfer_start(s, s->io_buffer, n1 * BDRV_SECTOR_SIZE,
-   ide_sector_write);
+const int n1 =
+(MIN(IDE_DMA_BUF_SECTORS, s->nsector)) << BDRV_SECTOR_BITS;
+s->octets_until_irq =
+(MIN(s->nsector, s->req_nb_sectors)) << BDRV_SECTOR_BITS;
+ide_transfer_start(s, s->io_buffer, n1, ide_sector_write);
 }
 
 if (win2k_install_hack && ((++s->irq_count % 16) == 0)) {
@@ -1063,14 +1060,21 @@ static void ide_sector_write(IDEState *s)
 int64_t sector_num;
 int n;
 
-s->status = READY_STAT | SEEK_STAT | BUSY_STAT;
-sector_num = ide_get_sector(s);
+assert(s->octets_until_irq == 0);
 
-n = s->nsector;
-if (n > s->req_nb_sectors) {
-n = s->req_nb_sectors;
+if (s->data_ptr < s->data_end) {
+s->nsector -= s->req_nb_sectors;
+s->octets_until_irq =
+(MIN(s->nsector, s->req_nb_sectors)) << BDRV_SECTOR_BITS;
+s->status = READY_STAT | SEEK_STAT | DRQ_STAT;
+ide_set_irq(s->bus);
+return;
 }
 
+s->status = READY_STAT | SEEK_STAT | BUSY_STAT;
+sector_num = ide_get_sector(s);
+n = (s->data_end - s->io_buffer) >> BDRV_SECTOR_BITS;
+
 trace_ide_sector_write(sector_num, n);
 
 if (!ide_sect_range_ok(s, sector_num, n)) {
@@ -1378,6 +1382,7 @@ static void ide_reset(IDEState *s)
 /* ATA DMA state */
 s->io_buffer_size = 0;
 s->req_nb_sectors = 0;
+s->octets_until_irq = 0;
 
 ide_set_signature(s);
 /* init the transfer handler so that 0x is returned on data
@@ -1500,10 +1505,11 @@ static bool cmd_write_multiple(IDEState *s, uint8_t cmd)
 ide_cmd_lba48_transform(s, lba48);
 
 s->req_nb_sectors = s->mult_sectors;
-n = MIN(s->nsector, s->req_nb_sectors);
-
+n = (MIN(IDE_DMA_BUF_SECTORS, s->nsector)) << BDRV_SECTOR_BITS;
+s->octets_until_irq =
+(MIN(s->nsector, s->req_nb_sectors)) << BDRV_SECTOR_BITS;
 s->status = SEEK_STAT | READY_STAT;
-ide_transfer_start(s, s->io_buffer, 512 * n, ide_sector_write);
+ide_transfer_start(s, s->io_buffer, n, ide_sector_write);
 
 s->media_changed = 1;
 
@@ -1535,6 +1541,7 @@ static bool cmd_read_pio(IDEState *s, uint8_t cmd)
 static bool cmd_write_pio(IDEState *s, uint8_t cmd)
 {
 bool lba48 = (cmd == WIN_WRITE_EXT);
+int n;
 
 if (!s->blk) {
 ide_abort_command(s);
@@ -1544,8 +1551,10 @@ static bool cmd_write_pio(IDEState *s, uint8_t cmd)
 ide_cmd_lba48_transform(s, lba48);
 
 s->req_nb_sectors = 1;
+n = (MIN(IDE_DMA_BUF_SECTORS, s->nsector)) << BDRV_SECTOR_BITS;
+s->octets_until_irq = BDRV_SECTOR_SIZE;
 s->status = SEEK_STAT | READY_STAT;
-ide_transfer_start(s, s->io_buffer, 512, ide_sector_write);
+ide_transfer_start(s, s->io_buffer, n, ide_sector_write);
 
 s->media_changed = 1;
 
@@ -1699,7 +1708,7 @@ static bool cmd_identify_packet(IDEState *s, uint8_t cmd)
 {
 ide_atapi_identify(s);
 s->status = READY_STAT | SEEK_STAT;
-ide_transfer_start(s, s->io_buffer, 512, ide_transfer_stop);
+ide_transfer_start(s, s->io_buffer, BDRV_SECTOR_SIZE, ide_transfer_stop);
 ide_set_irq(s->bus);
 return false;
 }
@@ -1745,6 +1754,7 @@ static bool cmd_packet(IDEState *s, uint8_t cmd)
 s->dma_cmd = IDE_DMA_ATAPI;
 }
 s->nsector = 1;
+s->octets_until_irq = ATAPI_PACKET_SIZE;
 ide_transfer_start(s, s->io_buffer, ATAPI_PACKET_SIZE,
ide_atapi_cmd);
 return false;
@@ 

[PATCH 0/1] IDE: Addressing slow PIO throughput

2022-06-02 Thread Lev Kujawski
Hello,

Is there any mechanism within QEMU for an emulated device to handle
string IO instructions (e.g., insw) directly?

I have noticed that PIO transfers seem rather slow (~240 kb/s) when
running QEMU on my computer, despite using a raw block device (SSD),
aio=io_uring, and file.cache.direct=on.  The attached patch improves
the rate by about 50% for me, and I would appreciate feedback on
whether this holds for others as well.

Kind regards,
Lev Kujawski

Lev Kujawski (1):
  hw/ide/core: Accumulate PIO output within io_buffer prior to pwritev

 hw/ide/core.c | 62 ---
 include/hw/ide/internal.h |  1 +
 2 files changed, 39 insertions(+), 24 deletions(-)

-- 
2.34.1




Re: [PULL 0/2] VFIO fixes 2022-02-03

2022-06-02 Thread Alex Williamson
On Mon, 7 Feb 2022 17:20:02 +0100
Thomas Huth  wrote:

> On 07/02/2022 16.50, Alex Williamson wrote:
> > On Sat, 5 Feb 2022 10:49:35 +
> > Peter Maydell  wrote:
> >   
> >> On Thu, 3 Feb 2022 at 22:38, Alex Williamson  
> >> wrote:  
> >>>
> >>> The following changes since commit 
> >>> 8f3e5ce773c62bb5c4a847f3a9a5c98bbb3b359f:
> >>>
> >>>Merge remote-tracking branch 
> >>> 'remotes/hdeller/tags/hppa-updates-pull-request' into staging (2022-02-02 
> >>> 19:54:30 +)
> >>>
> >>> are available in the Git repository at:
> >>>
> >>>git://github.com/awilliam/qemu-vfio.git tags/vfio-fixes-20220203.0
> >>>
> >>> for you to fetch changes up to 36fe5d5836c8d5d928ef6d34e999d6991a2f732e:
> >>>
> >>>hw/vfio/common: Silence ram device offset alignment error traces 
> >>> (2022-02-03 15:05:05 -0700)
> >>>
> >>> 
> >>> VFIO fixes 2022-02-03
> >>>
> >>>   * Fix alignment warnings when using TPM CRB with vfio-pci devices
> >>> (Eric Auger & Philippe Mathieu-Daudé)  
> >>
> >> Hi; this has a format-string issue that means it doesn't build
> >> on 32-bit systems:
> >>
> >> https://gitlab.com/qemu-project/qemu/-/jobs/2057116569
> >>
> >> ../hw/vfio/common.c: In function 'vfio_listener_region_add':
> >> ../hw/vfio/common.c:893:26: error: format '%llx' expects argument of
> >> type 'long long unsigned int', but argument 6 has type 'intptr_t' {aka
> >> 'int'} [-Werror=format=]
> >> error_report("%s received unaligned region %s iova=0x%"PRIx64
> >> ^~
> >> ../hw/vfio/common.c:899:26:
> >> qemu_real_host_page_mask);
> >> 
> >>
> >> For intptr_t you want PRIxPTR.  
> > 
> > Darn.  Well, let me use this opportunity to ask, how are folks doing
> > 32-bit cross builds on Fedora?  I used to keep an i686 PAE VM for this
> > purpose, but I was eventually no longer able to maintain the build
> > dependencies.  Looks like this failed on a mipsel cross build, but I
> > don't see such a cross compiler in Fedora.  I do mingw32/64 cross
> > builds, but they leave a lot to be desired for code coverage.  Thanks,  
> 
> The easiest way for getting more test coverage is likely to move your qemu 
> repository from github to gitlab - then you get most of the CI for free, 
> which should catch such issues before sending pull requests.

Well, it worked for a few months, but now pushing a tag to gitlab runs
a whole 4 jobs vs the 124 jobs that it previously ran, so that's
useless now :(  Thanks,

Alex




Re: [PATCH v2 0/3] PIIX3-IDE XEN cleanup

2022-06-02 Thread Bernhard Beschow
On Saturday, May 28, 2022, Bernhard Beschow  wrote:
> On 13 May 2022 18:09:54 UTC, Bernhard Beschow wrote:
>>v2:
>>* Have pci_xen_ide_unplug() return void (Paul Durrant)
>>* CC Xen maintainers (Michael S. Tsirkin)
>>
>>v1:
>>This patch series first removes the redundant "piix3-ide-xen" device class and
>>then moves a XEN-specific helper function from PIIX3 code to XEN code. The idea
>>is to decouple PIIX3-IDE and XEN and to compile XEN-specific bits only if XEN
>>support is enabled.
>>
>>Testing done:
>>'qemu-system-x86_64 -M pc -m 1G -cdrom archlinux-2022.05.01-x86_64.iso' boots
>>successfully and a 'poweroff' inside the VM also shuts it down correctly.
>>
>>XEN mode wasn't tested for the time being since its setup procedure seems quite
>>sophisticated. Please let me know in case this is an obstacle.
>>
>>Bernhard Beschow (3):
>>  hw/ide/piix: Remove redundant "piix3-ide-xen" device class
>>  hw/ide/piix: Add some documentation to pci_piix3_xen_ide_unplug()
>>  include/hw/ide: Unexport pci_piix3_xen_ide_unplug()
>>
>> hw/i386/pc_piix.c  |  3 +--
>> hw/i386/xen/xen_platform.c | 48 +-
>> hw/ide/piix.c  | 42 -
>> include/hw/ide.h   |  3 ---
>> 4 files changed, 48 insertions(+), 48 deletions(-)
>>
>
> Ping
>
> Whole series is reviewed/acked.

Ping 2


[PATCH v3 0/3] QOM improvements for rtc/mc146818rtc

2022-06-02 Thread Bernhard Beschow
Ping

On 29 May 2022 18:40:03 UTC, Bernhard Beschow wrote:
>v3:
>* "iobase" is now u16 (Philippe)
>
>v2:
>* Explicitly fail with _abort rather than NULL (Mark)
>* Explicitly fail with _abort rather than NULL in existing code (me)
>* Unexport rather than remove RTC_ISA_BASE (Mark)
>* Use object_property_get_*u*int() also for "iobase" (me)
>
>v1:
>This little series enhances QOM support for mc146818rtc:
>* makes microvm-dt respect mc146818rtc's IRQ number set by QOM property and
>* adds an io_base QOM property similar to other ISA devices
>
>Bernhard Beschow (3):
>  hw/i386/microvm-dt: Force explicit failure if retrieving QOM property
>fails
>  hw/i386/microvm-dt: Determine mc146818rtc's IRQ number from QOM
>property
>  rtc/mc146818rtc: QOM'ify io_base offset
>
> hw/i386/microvm-dt.c | 9 +
> hw/rtc/mc146818rtc.c | 9 ++---
> include/hw/rtc/mc146818rtc.h | 2 +-
> 3 files changed, 12 insertions(+), 8 deletions(-)
>

Ping


Re: [PATCH v2 00/11] hw/acpi/piix4: remove legacy piix4_pm_init() function

2022-06-02 Thread Bernhard Beschow
On 30 May 2022 11:27:07 UTC, "Philippe Mathieu-Daudé" wrote:
>From: Philippe Mathieu-Daudé 
>
>This series moves the outstanding logic from piix4_pm_init() into
>the relevant instance init() and realize() functions, changes the
>IRQs to use qdev gpios, and then finally removes the now-unused
>piix4_pm_initfn() function.
>
>v2:
>- Addressed Ani & Bernhard review comments

Patch 4 still introduces the redundant include in acpi/piix4.c, and perhaps all 
includes already included in the new piix4.h could still be removed altogether 
[1]. Anyway:
Reviewed-by: Bernhard Beschow 

[1] https://lists.nongnu.org/archive/html/qemu-devel/2022-05/msg05756.html

>
>If no further comments I plan to queue this via mips-next end of
>this week.
>
>Regards,
>
>Phil.
>
>Mark Cave-Ayland (11):
>  hw/acpi/piix4: move xen_enabled() logic from piix4_pm_init() to
>piix4_pm_realize()
>  hw/acpi/piix4: change smm_enabled from int to bool
>  hw/acpi/piix4: convert smm_enabled bool to qdev property
>  hw/acpi/piix4: move PIIX4PMState into separate piix4.h header
>  hw/acpi/piix4: alter piix4_pm_init() to return PIIX4PMState
>  hw/acpi/piix4: rename piix4_pm_init() to piix4_pm_initfn()
>  hw/acpi/piix4: use qdev gpio to wire up sci_irq
>  hw/acpi/piix4: use qdev gpio to wire up smi_irq
>  hw/i386/pc_piix: create PIIX4_PM device directly instead of using
>piix4_pm_initfn()
>  hw/isa/piix4.c: create PIIX4_PM device directly instead of using
>piix4_pm_initfn()
>  hw/acpi/piix4: remove unused piix4_pm_initfn() function
>
> hw/acpi/piix4.c   | 77 ++-
> hw/i386/acpi-build.c  |  1 +
> hw/i386/pc_piix.c | 16 +---
> hw/isa/piix4.c| 11 +++--
> include/hw/acpi/piix4.h   | 75 ++
> include/hw/southbridge/piix.h |  6 ---
> 6 files changed, 107 insertions(+), 79 deletions(-)
> create mode 100644 include/hw/acpi/piix4.h
>




Re: [PATCH v2 03/16] ppc/pnv: add PnvPHB base/proxy device

2022-06-02 Thread Daniel Henrique Barboza




On 6/2/22 13:16, Frederic Barrat wrote:



On 31/05/2022 23:49, Daniel Henrique Barboza wrote:

The PnvPHB device is going to be the base device for all other powernv
PHBs. It consists of a device that has the same user API as the other
PHB, namely being a PCIHostBridge and having chip-id and index
properties. It also has a 'backend' pointer that will be initialized
with the PHB implementation that the device is going to use.

The initialization of the PHB backend is done by checking the PHB
version via a 'version' attribute that can be set via a global machine
property.  The 'version' field will be used to make adjustments based on
the running version, e.g. PHB3 uses a 'chip' reference while PHB4 uses
'pec'. To init the PnvPHB bus we'll rely on helpers for each version.
The version 3 helper is already added (pnv_phb3_bus_init), the PHB4
helper will be added later on.

For now let's add the basic logic of the PnvPHB object, which consists
mostly of pnv_phb_realize() doing all the work of checking the
phb->version set, initializing the proper backend, passing through its
attributes to the chosen backend, finalizing the backend realize and
adding a root port in the end.

Signed-off-by: Daniel Henrique Barboza 
---
  hw/pci-host/meson.build |   3 +-
  hw/pci-host/pnv_phb.c   | 123 
  hw/pci-host/pnv_phb.h   |  39 +
  3 files changed, 164 insertions(+), 1 deletion(-)
  create mode 100644 hw/pci-host/pnv_phb.c
  create mode 100644 hw/pci-host/pnv_phb.h

diff --git a/hw/pci-host/meson.build b/hw/pci-host/meson.build
index c07596d0d1..e832babc9d 100644
--- a/hw/pci-host/meson.build
+++ b/hw/pci-host/meson.build
@@ -35,5 +35,6 @@ specific_ss.add(when: 'CONFIG_PCI_POWERNV', if_true: files(
    'pnv_phb3_msi.c',
    'pnv_phb3_pbcq.c',
    'pnv_phb4.c',
-  'pnv_phb4_pec.c'
+  'pnv_phb4_pec.c',
+  'pnv_phb.c',
  ))
diff --git a/hw/pci-host/pnv_phb.c b/hw/pci-host/pnv_phb.c
new file mode 100644
index 00..fa8472622f
--- /dev/null
+++ b/hw/pci-host/pnv_phb.c
@@ -0,0 +1,123 @@
+/*
+ * QEMU PowerPC PowerNV Proxy PHB model
+ *
+ * Copyright (c) 2022, IBM Corporation.
+ *
+ * This code is licensed under the GPL version 2 or later. See the
+ * COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/log.h"
+#include "qapi/visitor.h"
+#include "qapi/error.h"
+#include "hw/pci-host/pnv_phb.h"
+#include "hw/pci-host/pnv_phb3.h"
+#include "hw/pci-host/pnv_phb4.h"
+#include "hw/ppc/pnv.h"
+#include "hw/qdev-properties.h"
+#include "qom/object.h"
+
+
+static void pnv_phb_realize(DeviceState *dev, Error **errp)
+{
+    PnvPHB *phb = PNV_PHB(dev);
+    PCIHostState *pci = PCI_HOST_BRIDGE(dev);
+    g_autofree char *phb_typename = NULL;
+    g_autofree char *phb_rootport_typename = NULL;
+
+    if (!phb->version) {
+    error_setg(errp, "version not specified");
+    return;
+    }
+
+    switch (phb->version) {
+    case 3:
+    phb_typename = g_strdup(TYPE_PNV_PHB3);
+    phb_rootport_typename = g_strdup(TYPE_PNV_PHB3_ROOT_PORT);
+    break;
+    case 4:
+    phb_typename = g_strdup(TYPE_PNV_PHB4);
+    phb_rootport_typename = g_strdup(TYPE_PNV_PHB4_ROOT_PORT);
+    break;
+    case 5:
+    phb_typename = g_strdup(TYPE_PNV_PHB5);
+    phb_rootport_typename = g_strdup(TYPE_PNV_PHB5_ROOT_PORT);
+    break;
+    default:
+    g_assert_not_reached();
+    }
+
+    phb->backend = object_new(phb_typename);
+    object_property_add_child(OBJECT(dev), "phb-device", phb->backend);
+
+    /* Passthrough child device properties to the proxy device */
+    object_property_set_uint(phb->backend, "index", phb->phb_id, errp);
+    object_property_set_uint(phb->backend, "chip-id", phb->chip_id, errp);
+    object_property_set_link(phb->backend, "phb-base", OBJECT(phb), errp);
+
+    if (phb->version == 3) {
+    object_property_set_link(phb->backend, "chip",
+ OBJECT(phb->chip), errp);
+    } else {
+    object_property_set_link(phb->backend, "pec", OBJECT(phb->pec), errp);
+    }



The patch is fine, but it just highlights that we're doing something wrong. I 
don't believe there's any reason for the chip/pec/phb relationship to be 
different between P8 and P9/P10. One day, a brave soul could try to unify the 
models; it would avoid tests like that.


Not a bad idea, especially if we can cut more complexity out of the code.
I'll give it some thought.


Daniel




It would be a good cleanup series to do if we ever extend the model with yet 
another version :-)




+
+    if (!qdev_realize(DEVICE(phb->backend), NULL, errp)) {
+    return;
+    }
+
+    if (phb->version == 3) {
+    pnv_phb3_bus_init(dev, (PnvPHB3 *)phb->backend);
+    }
+
+    pnv_phb_attach_root_port(pci, phb_rootport_typename);




After we've removed the other instances (done in later patches), we could move 
pnv_phb_attach_root_port() to pnv_phb.c instead of pnv.c. It would be the 
perfect 

[PATCH v2] hw/ide/piix: Ignore writes of hardwired PCI command register bits

2022-06-02 Thread Lev Kujawski
One method to enable PCI bus mastering for IDE controllers, often used
by x86 firmware, is to write 0x7 to the PCI command register.  Neither
the PIIX3 specification nor actual hardware (a Tyan S1686D system)
permit modification of the Memory Space Enable (MSE) bit, 1, and thus
the command register would be left in an unspecified state without
this patch.

Signed-off-by: Lev Kujawski 
---
This revised patch uses QEMU's built-in PCI bit-masking support rather
than attempting to manually filter writes.  Thanks to Philippe Mathieu-
Daude and Michael S. Tsirkin for review and the pointer.

 hw/ide/piix.c | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/hw/ide/piix.c b/hw/ide/piix.c
index 76ea8fd9f6..bd3f397de8 100644
--- a/hw/ide/piix.c
+++ b/hw/ide/piix.c
@@ -25,6 +25,8 @@
  * References:
  *  [1] 82371FB (PIIX) AND 82371SB (PIIX3) PCI ISA IDE XCELERATOR,
  *  290550-002, Intel Corporation, April 1997.
+ *  [2] 82371AB PCI-TO-ISA / IDE XCELERATOR (PIIX4), 290562-001,
+ *  Intel Corporation, April 1997.
  */
 
 #include "qemu/osdep.h"
@@ -160,6 +162,19 @@ static void pci_piix_ide_realize(PCIDevice *dev, Error 
**errp)
 uint8_t *pci_conf = dev->config;
 int rc;
 
+/*
+ * Mask all IDE PCI command register bits except for Bus Master
+ * Function Enable (bit 2) and I/O Space Enable (bit 1), as the
+ * remainder are hardwired to 0 [1, p.48] [2, p.89-90].
+ *
+ * NOTE: According to the PIIX3 datasheet [1], the Memory Space
+ * Enable (MSE bit) is hardwired to 1, but this is contradicted by
+ * actual PIIX3 hardware, the datasheet itself (viz., Default
+ * Value: h), and the PIIX4 datasheet [2].
+ */
+pci_set_word(dev->wmask + PCI_COMMAND,
+ PCI_COMMAND_MASTER | PCI_COMMAND_IO);
+
 pci_conf[PCI_CLASS_PROG] = 0x80; // legacy ATA mode
 
 bmdma_setup_bar(d);
-- 
2.34.1




Re: [PATCH v2 03/16] ppc/pnv: add PnvPHB base/proxy device

2022-06-02 Thread Daniel Henrique Barboza




On 6/2/22 04:18, Mark Cave-Ayland wrote:

On 31/05/2022 22:49, Daniel Henrique Barboza wrote:


The PnvPHB device is going to be the base device for all other powernv
PHBs. It consists of a device that has the same user API as the other
PHB, namely being a PCIHostBridge and having chip-id and index
properties. It also has a 'backend' pointer that will be initialized
with the PHB implementation that the device is going to use.

The initialization of the PHB backend is done by checking the PHB
version via a 'version' attribute that can be set via a global machine
property.  The 'version' field will be used to make adjustments based on
the running version, e.g. PHB3 uses a 'chip' reference while PHB4 uses
'pec'. To init the PnvPHB bus we'll rely on helpers for each version.
The version 3 helper is already added (pnv_phb3_bus_init), the PHB4
helper will be added later on.

For now let's add the basic logic of the PnvPHB object, which consists
mostly of pnv_phb_realize() doing all the work of checking the
phb->version set, initializing the proper backend, passing through its
attributes to the chosen backend, finalizing the backend realize and
adding a root port in the end.

Signed-off-by: Daniel Henrique Barboza 
---
  hw/pci-host/meson.build |   3 +-
  hw/pci-host/pnv_phb.c   | 123 
  hw/pci-host/pnv_phb.h   |  39 +
  3 files changed, 164 insertions(+), 1 deletion(-)
  create mode 100644 hw/pci-host/pnv_phb.c
  create mode 100644 hw/pci-host/pnv_phb.h

diff --git a/hw/pci-host/meson.build b/hw/pci-host/meson.build
index c07596d0d1..e832babc9d 100644
--- a/hw/pci-host/meson.build
+++ b/hw/pci-host/meson.build
@@ -35,5 +35,6 @@ specific_ss.add(when: 'CONFIG_PCI_POWERNV', if_true: files(
    'pnv_phb3_msi.c',
    'pnv_phb3_pbcq.c',
    'pnv_phb4.c',
-  'pnv_phb4_pec.c'
+  'pnv_phb4_pec.c',
+  'pnv_phb.c',
  ))
diff --git a/hw/pci-host/pnv_phb.c b/hw/pci-host/pnv_phb.c
new file mode 100644
index 00..fa8472622f
--- /dev/null
+++ b/hw/pci-host/pnv_phb.c
@@ -0,0 +1,123 @@
+/*
+ * QEMU PowerPC PowerNV Proxy PHB model
+ *
+ * Copyright (c) 2022, IBM Corporation.
+ *
+ * This code is licensed under the GPL version 2 or later. See the
+ * COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/log.h"
+#include "qapi/visitor.h"
+#include "qapi/error.h"
+#include "hw/pci-host/pnv_phb.h"
+#include "hw/pci-host/pnv_phb3.h"
+#include "hw/pci-host/pnv_phb4.h"
+#include "hw/ppc/pnv.h"
+#include "hw/qdev-properties.h"
+#include "qom/object.h"
+
+
+static void pnv_phb_realize(DeviceState *dev, Error **errp)
+{
+    PnvPHB *phb = PNV_PHB(dev);
+    PCIHostState *pci = PCI_HOST_BRIDGE(dev);
+    g_autofree char *phb_typename = NULL;
+    g_autofree char *phb_rootport_typename = NULL;
+
+    if (!phb->version) {
+    error_setg(errp, "version not specified");
+    return;
+    }
+
+    switch (phb->version) {
+    case 3:
+    phb_typename = g_strdup(TYPE_PNV_PHB3);
+    phb_rootport_typename = g_strdup(TYPE_PNV_PHB3_ROOT_PORT);
+    break;
+    case 4:
+    phb_typename = g_strdup(TYPE_PNV_PHB4);
+    phb_rootport_typename = g_strdup(TYPE_PNV_PHB4_ROOT_PORT);
+    break;
+    case 5:
+    phb_typename = g_strdup(TYPE_PNV_PHB5);
+    phb_rootport_typename = g_strdup(TYPE_PNV_PHB5_ROOT_PORT);
+    break;
+    default:
+    g_assert_not_reached();
+    }
+
+    phb->backend = object_new(phb_typename);
+    object_property_add_child(OBJECT(dev), "phb-device", phb->backend);
+
+    /* Passthrough child device properties to the proxy device */
+    object_property_set_uint(phb->backend, "index", phb->phb_id, errp);
+    object_property_set_uint(phb->backend, "chip-id", phb->chip_id, errp);
+    object_property_set_link(phb->backend, "phb-base", OBJECT(phb), errp);
+
+    if (phb->version == 3) {
+    object_property_set_link(phb->backend, "chip",
+ OBJECT(phb->chip), errp);
+    } else {
+    object_property_set_link(phb->backend, "pec", OBJECT(phb->pec), errp);
+    }
+
+    if (!qdev_realize(DEVICE(phb->backend), NULL, errp)) {
+    return;
+    }
+
+    if (phb->version == 3) {
+    pnv_phb3_bus_init(dev, (PnvPHB3 *)phb->backend);
+    }
+
+    pnv_phb_attach_root_port(pci, phb_rootport_typename);
+}
+
+static const char *pnv_phb_root_bus_path(PCIHostState *host_bridge,
+ PCIBus *rootbus)
+{
+    PnvPHB *phb = PNV_PHB(host_bridge);
+
+    snprintf(phb->bus_path, sizeof(phb->bus_path), "00%02x:%02x",
+ phb->chip_id, phb->phb_id);
+    return phb->bus_path;
+}
+
+static Property pnv_phb_properties[] = {
+    DEFINE_PROP_UINT32("index", PnvPHB, phb_id, 0),
+    DEFINE_PROP_UINT32("chip-id", PnvPHB, chip_id, 0),
+    DEFINE_PROP_UINT32("version", PnvPHB, version, 0),
+
+    DEFINE_PROP_LINK("chip", PnvPHB, chip, TYPE_PNV_CHIP, PnvChip *),
+
+    DEFINE_PROP_LINK("pec", PnvPHB, 

Re: [PATCH] target/ppc: fix unreachable code in fpu_helper.c

2022-06-02 Thread Lucas Mateus Martins Araujo e Castro


On 02/06/2022 16:10, Daniel Henrique Barboza wrote:

Commit c29018cc7395 added an env->fpscr OR operation using a ternary
that checks if 'error' is not zero:

 env->fpscr |= error ? FP_FEX : 0;

However, in the current body of do_fpscr_check_status(), 'error' is
guaranteed to be non-zero at that point. The result is that Coverity
is less than pleased:

   Control flow issues  (DEADCODE)
Execution cannot reach the expression "0ULL" inside this statement:
"env->fpscr |= (error ? 1073...".

Remove the ternary and always make env->fpscr |= FP_FEX.

Cc: Lucas Mateus Castro (alqotel)
Cc: Richard Henderson
Fixes: Coverity CID 1489442
Fixes: c29018cc7395 ("target/ppc: Implemented xvf*ger*")
Signed-off-by: Daniel Henrique Barboza
---


Reviewed-by: Lucas Mateus Castro (alqotel) 
--
Lucas Mateus M. Araujo e Castro
Instituto de Pesquisas ELDORADO 


Embedded Computing Department
Software Analyst Trainee

Re: [RFC PATCH 3/3] hw/openrisc: Add the OpenRISC virtual machine

2022-06-02 Thread Stafford Horne
On Thu, Jun 02, 2022 at 09:08:52PM +0200, Geert Uytterhoeven wrote:
> Hi Joel,
> 
> On Thu, Jun 2, 2022 at 1:42 PM Joel Stanley  wrote:
> > On Fri, 27 May 2022 at 17:27, Stafford Horne  wrote:
> > > This patch adds the OpenRISC virtual machine 'virt' for OpenRISC.  This
> > > platform allows for a convenient CI platform for toolchain, software
> > > ports and the OpenRISC linux kernel port.
> > >
> > > Much of this has been sourced from the m68k and riscv virt platforms.
> 
> > I enabled the options:
> >
> > CONFIG_RTC_CLASS=y
> > # CONFIG_RTC_SYSTOHC is not set
> > # CONFIG_RTC_NVMEM is not set
> > CONFIG_RTC_DRV_GOLDFISH=y
> >
> > But it didn't work. It seems the goldfish rtc model doesn't handle a
> > big endian guest running on my little endian host.
> >
> > Doing this fixes it:
> >
> > -.endianness = DEVICE_NATIVE_ENDIAN,
> > +.endianness = DEVICE_HOST_ENDIAN,
> >
> > [0.19] goldfish_rtc 96005000.rtc: registered as rtc0
> > [0.19] goldfish_rtc 96005000.rtc: setting system clock to
> > 2022-06-02T11:16:04 UTC (1654168564)
> >
> > But literally no other model in the tree does this, so I suspect it's
> > not the right fix.
> 
> Goldfish devices are supposed to be little endian.
> Unfortunately m68k got this wrong, cfr.
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=2e2ac4a3327479f7e2744cdd88a5c823f2057bad
> Please don't duplicate this bad behavior for new architectures

Thanks for the pointer, I just wired in the goldfish RTC because I wanted to
play with it.  I was not attached to it. I can either remove it or find another
RTC.

-Stafford



Re: Outreachy project task: Adding QEMU block layer APIs resembling Linux ZBD ioctls.

2022-06-02 Thread Stefan Hajnoczi
On Thu, 2 Jun 2022 at 11:28, Sam Li  wrote:
>
> > Stefan Hajnoczi wrote on Thursday, 2 June 2022 at 16:05:
> >
> > On Thu, 2 Jun 2022 at 06:43, Sam Li  wrote:
> > >
> > > Hi Stefan,
> > >
> > > > Stefan Hajnoczi wrote on Wednesday, 1 June 2022 at 19:43:
> > > >
> > > > On Wed, 1 Jun 2022 at 06:47, Damien Le Moal
> > > >  wrote:
> > > > >
> > > > > On 6/1/22 11:57, Sam Li wrote:
> > > > > > Hi Stefan,
> > > > > >
> > > > > > Stefan Hajnoczi wrote on Monday, 30 May 2022 at 19:19:
> > > > > >
> > > > > >
> > > > > >>
> > > > > >> On Mon, 30 May 2022 at 06:09, Sam Li  
> > > > > >> wrote:
> > > > > >>>
> > > > > >>> Hi everyone,
> > > > > >>> I'm Sam Li, working on the Outreachy project which is to add zoned
> > > > > >>> device support to QEMU's virtio-blk emulation.
> > > > > >>>
> > > > > >>> For the first goal, adding QEMU block layer APIs resembling Linux 
> > > > > >>> ZBD
> > > > > >>> ioctls, I think the naive approach would be to introduce a new 
> > > > > >>> stable
> > > > > >>> struct zbd_zone descriptor for the library function interface. 
> > > > > >>> More
> > > > > >>> specifically, what I'd like to add to the BlockDriver struct are:
> > > > > >>> 1. zbd_info as zone block device information: includes numbers of
> > > > > >>> zones, size of logical blocks, and physical blocks.
> > > > > >>> 2. zbd_zone_type and zbd_zone_state
> > > > > >>> 3. zbd_dev_model: host-managed zbd, host-aware zbd
> > > > > >>> With those basic structs, we can start to implement new functions 
> > > > > >>> as
> > > > > >>> bdrv*() APIs for BLOCK*ZONE ioctls.
> > > > > >>>
> > > > > >>> I'll start to finish this task based on the above description. If
> > > > > >>> there is any problem or something I may miss in the design, 
> > > > > >>> please let
> > > > > >>> me know.
> > > > > >>
> > > > > >> Hi Sam,
> > > > > >> Can you propose function prototypes for the new BlockDriver 
> > > > > >> callbacks
> > > > > >> needed for zoned devices?
> > > > > >
> > > > > > I have made some modifications based on Damien's device in design 
> > > > > > part
> > > > > > 1 and added the function prototypes in design part 2. If there is 
> > > > > > any
> > > > > > problem or part I missed, please let me know.
> > > > > >
> > > > > > Design of Block Layer APIs in BlockDriver:
> > > > > > 1. introduce a new stable struct zbd_zone descriptor for the library
> > > > > > function interface.
> > > > > >   a. zbd_info as zone block device information: includes numbers of
> > > > > > zones, size of blocks, write granularity in bytes (minimal write size
> > > > > > and alignment)
> > > > > > - write granularity: 512e SMRs: writes in units of physical 
> > > > > > block
> > > > > > size, 4096 bytes; NVMe ZNS write granularity is equal to the block
> > > > > > size.
> > > > > > - zone descriptor: start, length, capacity, write pointer, zone 
> > > > > > type
> > > > > >   b. zbd_zone_type
> > > > > > - zone type: conventional, sequential write required, sequential
> > > > > > write preferred
> > > > > >   c. zbd_dev_model: host-managed zbd, host-aware zbd
> > > > >
> > > > > This explanation is a little hard to understand. It seems to be 
> > > > > mixing up
> > > > > device level information and per-zone information. I think it would 
> > > > > be a
> > > > > lot simpler to write a struct definition to directly illustrate what 
> > > > > you
> > > > > are planning.
> > > > >
> > > > > It is something like this ?
> > > > >
> > > > > struct zbd_zone {
> > > > > enum zone_type  type;
> > > > > enum zone_cond  cond;
> > > > > uint64_t start;
> > > > > uint32_t length;
> > > > > uint32_t cap;
> > > > > uint64_t wp;
> > > > > };
> > > > >
> > > > > struct zbd_dev {
> > > > > enum zone_model model;
> > > > > uint32_t block_size;
> > > > > uint32_t write_granularity;
> > > > > uint32_t nr_zones;
> > > > > struct zbd_zone *zones; /* array of zones */
> > > > > };
> > > > >
> > > > > If yes, then my comments are as follows.
> > > > >
> > > > > For the device struct: It may be good to have also the maximum number 
> > > > > of
> > > > > open zones and the maximum number of active zones.
> > > > >
> > > > > For the zone struct: You may need to add a read-write lock per zone 
> > > > > to be
> > > > > able to write lock zones to ensure a sequential write pattern (virtio
> > > > > devices can be multi-queue and so writes may be coming in from 
> > > > > different
> > > > > contexts) and to correctly emulate zone append operations with an 
> > > > > atomic
> > > > > update of the wp field.
> > > > >
> > > > > These need to be integrated into the generic block driver interface in
> > > > > include/block/block_int-common.h or include/block/block-common.h.
> > > >
> > > > QEMU's block layer has a few ways of exposing information about block 
> > > > devices:
> > > >
> > > > int (*bdrv_get_info)(BlockDriverState *bs, BlockDriverInfo *bdi);
> > > > ImageInfoSpecific 

Re: [RFC PATCH v2 0/6] hw/i2c: i2c slave mode support

2022-06-02 Thread Klaus Jensen
On Jun  2 17:40, Cédric Le Goater wrote:
> On 6/2/22 16:29, Jae Hyun Yoo wrote:
> > Hi Klaus,
> > 
> > On 6/2/2022 6:50 AM, Cédric Le Goater wrote:
> > > On 6/2/22 10:21, Klaus Jensen wrote:
> > > > 
> > > > There is an outstanding issue with the SLAVE_ADDR_RX_MATCH interrupt bit
> > > > (bit 7). Remember from my first series I had a workaround to make sure
> > > > it wasn't masked.
> > > > 
> > > > I posted this upstream to linux
> > > > 
> > > > https://lore.kernel.org/lkml/20220602054842.122271-1-...@irrelevant.dk/
> > > > 
> > > > Not sure if that is the right way to fix it.
> > > 
> > > That's weird. I would have thought it was already enabled [ Adding Jae ]
> > 
> > Slave mode support in Aspeed I2C driver is already enabled and it has
> > worked well so far. The fix Klaus made in the link is incorrect.
> > 
> > https://lore.kernel.org/lkml/20220602054842.122271-1-...@irrelevant.dk/
> > 
> > The patch is adding ASPEED_I2CD_INTR_SLAVE_MATCH as a mask bit for
> > I2CD0C (Interrupt Control Register) but actually this bit is part of
> > I2CD10 (Interrupt Status Register). This means that the slave match interrupt
> > can be enabled without enabling any mask bit in I2CD0C.
> 
> Thanks Jae.
> 
> So we should enable this interrupt always independently of the
> Interrupt Control Register value.
> 
> I would simply extend the mask value (bus->regs[intr_ctrl_reg])
> with the SLAVE_ADDR_RX_MATCH bit when interrupts are raised in
> aspeed_i2c_bus_raise_interrupt().
> 

Alright, so my "workaround" from v1 was actually the right fix - I'll
re-add it ;)
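A minimal sketch of that idea — not the actual aspeed_i2c.c code, just the masking logic Cédric describes, with the bit position taken from the discussion above:

```c
#include <stdint.h>

/* Bit 7: slave address RX match.  Per the discussion, this status bit
 * is reported regardless of the Interrupt Control Register (I2CD0C). */
#define ASPEED_I2CD_INTR_SLAVE_ADDR_RX_MATCH (1u << 7)

/* When raising interrupts, extend the enable mask read from I2CD0C so
 * the slave match bit is never filtered out. */
static uint32_t aspeed_i2c_effective_intr_mask(uint32_t intr_ctrl_reg)
{
    return intr_ctrl_reg | ASPEED_I2CD_INTR_SLAVE_ADDR_RX_MATCH;
}

/* An interrupt is delivered if its status bit survives the mask. */
static int aspeed_i2c_intr_pending(uint32_t intr_status,
                                   uint32_t intr_ctrl_reg)
{
    return (intr_status & aspeed_i2c_effective_intr_mask(intr_ctrl_reg)) != 0;
}
```

With this, a slave match is always delivered even when the guest has written 0 to I2CD0C, while all other interrupt bits keep honoring the control register.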





[PATCH] target/ppc: fix unreachable code in fpu_helper.c

2022-06-02 Thread Daniel Henrique Barboza
Commit c29018cc7395 added an env->fpscr OR operation using a ternary
that checks if 'error' is not zero:

env->fpscr |= error ? FP_FEX : 0;

However, in the current body of do_fpscr_check_status(), 'error' is
guaranteed to be non-zero at that point. The result is that Coverity
is less than pleased:

  Control flow issues  (DEADCODE)
Execution cannot reach the expression "0ULL" inside this statement:
"env->fpscr |= (error ? 1073...".

Remove the ternary and always make env->fpscr |= FP_FEX.

Cc: Lucas Mateus Castro (alqotel) 
Cc: Richard Henderson 
Fixes: Coverity CID 1489442
Fixes: c29018cc7395 ("target/ppc: Implemented xvf*ger*")
Signed-off-by: Daniel Henrique Barboza 
---
 target/ppc/fpu_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/ppc/fpu_helper.c b/target/ppc/fpu_helper.c
index fed0ce420a..7ab6beadad 100644
--- a/target/ppc/fpu_helper.c
+++ b/target/ppc/fpu_helper.c
@@ -464,7 +464,7 @@ static void do_fpscr_check_status(CPUPPCState *env, 
uintptr_t raddr)
 }
 cs->exception_index = POWERPC_EXCP_PROGRAM;
 env->error_code = error | POWERPC_EXCP_FP;
-env->fpscr |= error ? FP_FEX : 0;
+env->fpscr |= FP_FEX;
 /* Deferred floating-point exception after target FPSCR update */
 if (fp_exceptions_enabled(env)) {
 raise_exception_err_ra(env, cs->exception_index,
-- 
2.36.1




Re: [RFC PATCH 3/3] hw/openrisc: Add the OpenRISC virtual machine

2022-06-02 Thread Geert Uytterhoeven
Hi Joel,

On Thu, Jun 2, 2022 at 1:42 PM Joel Stanley  wrote:
> On Fri, 27 May 2022 at 17:27, Stafford Horne  wrote:
> > This patch adds the OpenRISC virtual machine 'virt' for OpenRISC.  This
> > platform allows for a convenient CI platform for toolchain, software
> > ports and the OpenRISC linux kernel port.
> >
> > Much of this has been sourced from the m68k and riscv virt platforms.

> I enabled the options:
>
> CONFIG_RTC_CLASS=y
> # CONFIG_RTC_SYSTOHC is not set
> # CONFIG_RTC_NVMEM is not set
> CONFIG_RTC_DRV_GOLDFISH=y
>
> But it didn't work. It seems the goldfish rtc model doesn't handle a
> big endian guest running on my little endian host.
>
> Doing this fixes it:
>
> -.endianness = DEVICE_NATIVE_ENDIAN,
> +.endianness = DEVICE_HOST_ENDIAN,
>
> [0.19] goldfish_rtc 96005000.rtc: registered as rtc0
> [0.19] goldfish_rtc 96005000.rtc: setting system clock to
> 2022-06-02T11:16:04 UTC (1654168564)
>
> But literally no other model in the tree does this, so I suspect it's
> not the right fix.

Goldfish devices are supposed to be little endian.
Unfortunately m68k got this wrong, cfr.
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=2e2ac4a3327479f7e2744cdd88a5c823f2057bad
Please don't duplicate this bad behavior for new architectures.
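Concretely, a spec-conforming goldfish model declares its MMIO region little-endian and lets QEMU do any byte swapping for a big-endian guest. Sketched here with stand-in types so the example is self-contained (in QEMU proper, MemoryRegionOps and the device_endian values come from the memory API):

```c
#include <stdint.h>

/* Stand-ins for the QEMU memory API types, for illustration only. */
enum device_endian {
    DEVICE_NATIVE_ENDIAN,
    DEVICE_BIG_ENDIAN,
    DEVICE_LITTLE_ENDIAN,
};

typedef struct MemoryRegionOps {
    uint64_t (*read)(void *opaque, uint64_t addr, unsigned size);
    void (*write)(void *opaque, uint64_t addr, uint64_t data, unsigned size);
    enum device_endian endianness;
} MemoryRegionOps;

static uint64_t goldfish_rtc_read(void *opaque, uint64_t addr, unsigned size)
{
    (void)opaque; (void)addr; (void)size;
    return 0; /* placeholder register read */
}

static void goldfish_rtc_write(void *opaque, uint64_t addr, uint64_t data,
                               unsigned size)
{
    (void)opaque; (void)addr; (void)data; (void)size;
}

/* Declaring the region little-endian makes QEMU swap bytes for a
 * big-endian guest, instead of baking the host's endianness in. */
static const MemoryRegionOps goldfish_rtc_ops = {
    .read = goldfish_rtc_read,
    .write = goldfish_rtc_write,
    .endianness = DEVICE_LITTLE_ENDIAN,
};
```

The guest driver then matches it by using little-endian accessors, rather than native-endian ones.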

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds



TARGET_SYS_HEAPINFO and Cortex-A15 memory map

2022-06-02 Thread Liviu Ionescu
I'm experiencing some issues with the startup code for an emulated Cortex-A15 
machine I plan to use for running unit-tests.

I'm starting QEMU with:

.../qemu-system-arm  "--machine" "virt" "--cpu" "cortex-a15" "--nographic" "-d" 
"unimp,guest_errors" "--semihosting-config" 
"enable=on,target=native,arg=sample-test,arg=one,arg=two" -s -S


At 0x0 I'm loading the application that uses the newlib semihosting library and 
startup.


The application starts and I can use GDB to step into the code from the very 
beginning. In crt0 the first thing I see is a call to SYS_HEAPINFO, followed by 
setting the heap and stack.

The values returned are:

0x0400 - heap base
0x0800 - heap limit
0x0800 - stack base
0x0 - stack limit

This sets the SP at 0x0800, which I'm not sure is a valid memory 
address, since writes to it seem ineffective, and in the first function called, 
when it tries to pop registers from the stack, everything is zero, and the 
program jumps to 0x0.

I'm not very familiar with the Cortex-A15 memory map and initialisation; if the 
memory below 0x0800 is indeed valid for a stack, I probably need to enable 
something more during the reset sequence, to make it writable.
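For reference, SYS_HEAPINFO (operation 0x16 in the ARM semihosting specification) fills a four-word block — heap_base, heap_limit, stack_base, stack_limit — which crt0 then uses to set SP. A host-side sketch of that flow, with the semihosting trap stubbed out and placeholder values (not the values QEMU's "virt" machine actually reports):

```c
#include <stdint.h>

/* Result block of semihosting SYS_HEAPINFO (0x16), per the ARM
 * semihosting spec: crt0 passes its address and the host fills it. */
struct heapinfo {
    uint32_t heap_base;
    uint32_t heap_limit;
    uint32_t stack_base;   /* crt0 loads SP from this field */
    uint32_t stack_limit;
};

/* Stub standing in for the real trap (in A32 state: r0 = 0x16, r1
 * pointing at the block, then "svc 0x123456").  The addresses below
 * are placeholders for illustration only. */
static void sys_heapinfo(struct heapinfo *hi)
{
    hi->heap_base   = 0x40000000u;
    hi->heap_limit  = 0x48000000u;
    hi->stack_base  = 0x48000000u;
    hi->stack_limit = 0x40000000u;
}

/* Roughly what crt0 does with the answer.  If stack_base is not backed
 * by RAM in the machine's memory map, pushes are silently lost and the
 * first pop returns zeros, matching the symptom described above. */
static uint32_t crt0_setup(void)
{
    struct heapinfo hi;
    sys_heapinfo(&hi);
    return hi.stack_base; /* becomes the initial SP */
}
```

So the key check is whether the stack_base QEMU reports actually falls inside a RAM region of the "virt" machine's memory map.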

Any suggestion?


Liviu







Re: [PATCH] pnv/xive2: Access direct mapped thread contexts from all chips

2022-06-02 Thread Daniel Henrique Barboza




On 6/2/22 14:06, Frederic Barrat wrote:



On 02/06/2022 19:00, Cédric Le Goater wrote:

On 6/2/22 18:53, Frederic Barrat wrote:

When accessing a thread context through the IC BAR, the offset of the
page in the BAR identifies the CPU. From that offset, we can compute
the PIR (processor ID register) of the CPU to do the data structure
lookup. On P10, the current code assumes an access for node 0 when
computing the PIR. Everything is almost in place to allow access for
other nodes though. So this patch reworks how the PIR value is
computed so that we can access all thread contexts through the IC BAR.

The PIR is already correct on P9, so no need to modify anything there.

Signed-off-by: Frederic Barrat 


Reviewed-by: Cédric Le Goater 

Is that a P10 bug ? If so, a fixes tag is needed.



Fixes: da71b7e3ed45 ("ppc/pnv: Add a XIVE2 controller to the POWER10 chip")

Daniel, good enough or you prefer a resend?


I can fixup the tag, don't worry about it.


Daniel



   Fred




Re: [PATCH 8/9] tests: add python3-venv to debian10.docker

2022-06-02 Thread John Snow
On Wed, Jun 1, 2022, 3:29 AM Thomas Huth  wrote:

> On 31/05/2022 20.28, John Snow wrote:
> > On Mon, May 30, 2022 at 3:33 AM Thomas Huth  wrote:
> >>
> >> On 26/05/2022 02.09, John Snow wrote:
> >>> This is needed to be able to add a venv-building step to 'make check';
> >>> the clang-user job in particular needs this to be able to run
> >>> check-unit.
> >>>
> >>> Signed-off-by: John Snow 
> >>> ---
> >>>tests/docker/dockerfiles/debian10.docker | 1 +
> >>>1 file changed, 1 insertion(+)
> >>>
> >>> diff --git a/tests/docker/dockerfiles/debian10.docker
> b/tests/docker/dockerfiles/debian10.docker
> >>> index b414af1b9f7..03be9230664 100644
> >>> --- a/tests/docker/dockerfiles/debian10.docker
> >>> +++ b/tests/docker/dockerfiles/debian10.docker
> >>> @@ -34,4 +34,5 @@ RUN apt update && \
> >>>python3 \
> >>>python3-sphinx \
> >>>python3-sphinx-rtd-theme \
> >>> +python3-venv \
> >>>$(apt-get -s build-dep --arch-only qemu | egrep ^Inst |
> fgrep '[all]' | cut -d\  -f2)
> >>
> >> Note that we'll (hopefully) drop the debian 10 container soon, since
> Debian
> >> 10 is EOL by the time we publish the next QEMU release.
> >>
> >
> > Noted -- do you think it'd be OK to sneak this change in first and
> > have you move the requisite to the new container? :)
>
> I don't mind - whatever comes first ... I just wanted to make you aware
> that
> there might be conflicts ;-)
>
>   Thomas
>

Yep, got it! No problem at all. Thanks ~~

>


Re: [PATCH 0/9] tests, python: prepare to expand usage of test venv

2022-06-02 Thread John Snow
On Wed, Jun 1, 2022, 6:06 AM Paolo Bonzini  wrote:

> On 5/27/22 16:27, John Snow wrote:
> > Paolo: I assume this falls under your jurisdiction...ish, unless Cleber
> > (avocado) or Alex (tests more broadly) have any specific inputs.
> >
> > I'm fine with waiting for reviews, but don't know whose bucket this goes
> to.
> >
>
> I thought it was yours, but I've queued it now.
>
> Paolo
>

I wanted to be polite since it was build system and tests as well - I don't
technically maintain most of these files :)

Thank you!

>


Re: [PATCH 4/5] gitlab: convert build/container jobs to .base_job_template

2022-06-02 Thread Thomas Huth

On 26/05/2022 13.07, Daniel P. Berrangé wrote:

This converts the main build and container jobs to use the
base job rules, defining the following new variables

  - QEMU_JOB_SKIPPED - jobs that are known to be currently
broken and should not be run. Can still be manually
launched if desired.

  - QEMU_JOB_AVOCADO - jobs that run the Avocado integration
test harness.

  - QEMU_JOB_PUBLISH - jobs that publish content after the
branch is merged upstream

Signed-off-by: Daniel P. Berrangé 
---
  .gitlab-ci.d/base.yml| 22 ++
  .gitlab-ci.d/buildtest-template.yml  | 16 
  .gitlab-ci.d/buildtest.yml   | 28 +---
  .gitlab-ci.d/container-cross.yml |  6 ++
  .gitlab-ci.d/container-template.yml  |  1 +
  .gitlab-ci.d/crossbuild-template.yml |  3 +++
  .gitlab-ci.d/windows.yml |  1 +
  docs/devel/ci-jobs.rst.inc   | 19 +++
  8 files changed, 65 insertions(+), 31 deletions(-)

...

diff --git a/.gitlab-ci.d/buildtest.yml b/.gitlab-ci.d/buildtest.yml
index e9620c3074..ecac3ec50c 100644
--- a/.gitlab-ci.d/buildtest.yml
+++ b/.gitlab-ci.d/buildtest.yml
@@ -360,12 +360,11 @@ build-cfi-aarch64:
  expire_in: 2 days
  paths:
- build
-  rules:
+  variables:
  # FIXME: This job is often failing, likely due to out-of-memory problems in
  # the constrained containers of the shared runners. Thus this is marked as
-# manual until the situation has been solved.
-- when: manual
-  allow_failure: true
+# skipped until the situation has been solved.
+QEMU_JOB_SKIPPED: 1
  
  check-cfi-aarch64:

extends: .native_test_job_template
@@ -402,12 +401,11 @@ build-cfi-ppc64-s390x:
  expire_in: 2 days
  paths:
- build
-  rules:
+  variables:
  # FIXME: This job is often failing, likely due to out-of-memory problems in
  # the constrained containers of the shared runners. Thus this is marked as
-# manual until the situation has been solved.
-- when: manual
-  allow_failure: true
+# skipped until the situation has been solved.
+QEMU_JOB_SKIPPED: 1


FYI, this patch broke the build-cfi-aarch64 and build-cfi-ppc64-s390x jobs 
since they've now got two "variables:" sections and apparently only the 
second one is taken into account...


 Thomas
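To illustrate the breakage Thomas describes (job and template names below are hypothetical, not copied from the tree): YAML keeps only the last occurrence of a duplicate key within a mapping, so a job that ends up with two `variables:` sections silently loses the first one.

```yaml
# Hypothetical sketch of the broken state: the patch added a second
# "variables:" mapping to a job that already had one. YAML parsers keep
# only the later duplicate key, so IMAGE and CONFIGURE_ARGS are dropped.
build-cfi-example:
  extends: .native_build_job_template
  variables:
    IMAGE: fedora
    CONFIGURE_ARGS: --enable-cfi
  variables:            # duplicate key: only this mapping survives
    QEMU_JOB_SKIPPED: 1

# The fix is a single merged mapping:
build-cfi-example-fixed:
  extends: .native_build_job_template
  variables:
    IMAGE: fedora
    CONFIGURE_ARGS: --enable-cfi
    QEMU_JOB_SKIPPED: 1
```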




Re: [PATCH] pnv/xive2: Access direct mapped thread contexts from all chips

2022-06-02 Thread Frederic Barrat




On 02/06/2022 19:00, Cédric Le Goater wrote:

On 6/2/22 18:53, Frederic Barrat wrote:

When accessing a thread context through the IC BAR, the offset of the
page in the BAR identifies the CPU. From that offset, we can compute
the PIR (processor ID register) of the CPU to do the data structure
lookup. On P10, the current code assumes an access for node 0 when
computing the PIR. Everything is almost in place to allow access for
other nodes though. So this patch reworks how the PIR value is
computed so that we can access all thread contexts through the IC BAR.

The PIR is already correct on P9, so no need to modify anything there.

Signed-off-by: Frederic Barrat 


Reviewed-by: Cédric Le Goater 

Is that a P10 bug ? If so, a fixes tag is needed.



Fixes: da71b7e3ed45 ("ppc/pnv: Add a XIVE2 controller to the POWER10 chip")

Daniel, good enough or you prefer a resend?

  Fred



Re: [PATCH] pnv/xive2: Access direct mapped thread contexts from all chips

2022-06-02 Thread Cédric Le Goater

On 6/2/22 18:53, Frederic Barrat wrote:

When accessing a thread context through the IC BAR, the offset of the
page in the BAR identifies the CPU. From that offset, we can compute
the PIR (processor ID register) of the CPU to do the data structure
lookup. On P10, the current code assumes an access for node 0 when
computing the PIR. Everything is almost in place to allow access for
other nodes though. So this patch reworks how the PIR value is
computed so that we can access all thread contexts through the IC BAR.

The PIR is already correct on P9, so no need to modify anything there.

Signed-off-by: Frederic Barrat 


Reviewed-by: Cédric Le Goater 

Is that a P10 bug ? If so, a fixes tag is needed.

Thanks,

C.


---
  hw/intc/pnv_xive2.c | 18 ++
  1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index a39e070e82..f31c53c28d 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -1574,6 +1574,12 @@ static const MemoryRegionOps pnv_xive2_ic_sync_ops = {
   * When the TM direct pages of the IC controller are accessed, the
   * target HW thread is deduced from the page offset.
   */
+static uint32_t pnv_xive2_ic_tm_get_pir(PnvXive2 *xive, hwaddr offset)
+{
+/* On P10, the node ID shift in the PIR register is 8 bits */
+return xive->chip->chip_id << 8 | offset >> xive->ic_shift;
+}
+
  static XiveTCTX *pnv_xive2_get_indirect_tctx(PnvXive2 *xive, uint32_t pir)
  {
  PnvChip *chip = xive->chip;
@@ -1596,10 +1602,12 @@ static uint64_t pnv_xive2_ic_tm_indirect_read(void *opaque, hwaddr offset,
unsigned size)
  {
  PnvXive2 *xive = PNV_XIVE2(opaque);
-uint32_t pir = offset >> xive->ic_shift;
-XiveTCTX *tctx = pnv_xive2_get_indirect_tctx(xive, pir);
+uint32_t pir;
+XiveTCTX *tctx;
  uint64_t val = -1;
  
+pir = pnv_xive2_ic_tm_get_pir(xive, offset);
+tctx = pnv_xive2_get_indirect_tctx(xive, pir);
  if (tctx) {
  val = xive_tctx_tm_read(NULL, tctx, offset, size);
  }
@@ -1611,9 +1619,11 @@ static void pnv_xive2_ic_tm_indirect_write(void *opaque, hwaddr offset,
 uint64_t val, unsigned size)
  {
  PnvXive2 *xive = PNV_XIVE2(opaque);
-uint32_t pir = offset >> xive->ic_shift;
-XiveTCTX *tctx = pnv_xive2_get_indirect_tctx(xive, pir);
+uint32_t pir;
+XiveTCTX *tctx;
  
+pir = pnv_xive2_ic_tm_get_pir(xive, offset);
+tctx = pnv_xive2_get_indirect_tctx(xive, pir);
  if (tctx) {
  xive_tctx_tm_write(NULL, tctx, offset, val, size);
  }




