[PATCH] m25p80: Add support for the GD25WQ32E flash

2024-03-30 Thread Giacomo Parmeggiani
This introduces the GigaDevice GD25WQ32E flash, including its SFDP table.

Signed-off-by: Giacomo Parmeggiani 
---
 hw/block/m25p80.c  |  2 ++
 hw/block/m25p80_sfdp.c | 40 ++++++++++++++++++++++++++++++++++++++++
 hw/block/m25p80_sfdp.h |  2 ++
 3 files changed, 44 insertions(+)

diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
index 8dec134832..6cc05b63e5 100644
--- a/hw/block/m25p80.c
+++ b/hw/block/m25p80.c
@@ -205,6 +205,8 @@ static const FlashPartInfo known_devices[] = {
 /* GigaDevice */
 { INFO("gd25q32", 0xc84016,  0,  64 << 10,  64, ER_4K) },
 { INFO("gd25q64", 0xc84017,  0,  64 << 10, 128, ER_4K) },
+{ INFO("gd25wq32e",   0xc86516,  0,  64 << 10,  64, ER_4K),
+  .sfdp_read = m25p80_sfdp_gd25wq32e },
 
 /* Intel/Numonyx -- xxxs33b */
 { INFO("160s33b", 0x898911,  0,  64 << 10,  32, 0) },
diff --git a/hw/block/m25p80_sfdp.c b/hw/block/m25p80_sfdp.c
index 6ee2cfaf11..cb0963328d 100644
--- a/hw/block/m25p80_sfdp.c
+++ b/hw/block/m25p80_sfdp.c
@@ -406,3 +406,43 @@ static const uint8_t sfdp_is25wp256[] = {
 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
 };
 define_sfdp_read(is25wp256);
+
+/*
+ * GigaDevice
+ */
+
+static const uint8_t sfdp_gd25wq32e[] = {
+0x53, 0x46, 0x44, 0x50, 0x06, 0x01, 0x01, 0xff,
+0x00, 0x06, 0x01, 0x10, 0x30, 0x00, 0x00, 0xff,
+0xc8, 0x00, 0x01, 0x03, 0x90, 0x00, 0x00, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xe5, 0x20, 0xf1, 0xff, 0xff, 0xff, 0xff, 0x01,
+0x44, 0xeb, 0x08, 0x6b, 0x08, 0x3b, 0x42, 0xbb,
+0xee, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0xff,
+0xff, 0xff, 0x00, 0xff, 0x0c, 0x20, 0x0f, 0x52,
+0x10, 0xd8, 0x00, 0xff, 0x63, 0x92, 0xfd, 0xfe,
+0x83, 0x2f, 0x26, 0x46, 0xec, 0x82, 0x18, 0x44,
+0x7a, 0x75, 0x7a, 0x75, 0x04, 0xbd, 0xd5, 0x5c,
+0x00, 0x06, 0x64, 0x00, 0x08, 0x10, 0x00, 0x00,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0x00, 0x36, 0x50, 0x16, 0x9e, 0xf9, 0x77, 0x64,
+0xfc, 0xcb, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
+};
+define_sfdp_read(gd25wq32e);
diff --git a/hw/block/m25p80_sfdp.h b/hw/block/m25p80_sfdp.h
index 1733b56950..7d1f60f2ee 100644
--- a/hw/block/m25p80_sfdp.h
+++ b/hw/block/m25p80_sfdp.h
@@ -29,4 +29,6 @@ uint8_t m25p80_sfdp_w25q01jvq(uint32_t addr);
 
 uint8_t m25p80_sfdp_is25wp256(uint32_t addr);
 
+uint8_t m25p80_sfdp_gd25wq32e(uint32_t addr);
+
 #endif
-- 
2.32.1 (Apple Git-133)
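
For context on how the table above gets used: each SFDP table in
m25p80_sfdp.c is exposed through a tiny accessor generated by
define_sfdp_read(). A minimal sketch of that pattern (an illustration of
the idea, not necessarily the exact upstream macro) — the 256-byte table
simply wraps on the read address:

    #include <stdint.h>

    #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

    /*
     * Sketch of the define_sfdp_read() pattern: emit a
     * uint8_t m25p80_sfdp_<model>(uint32_t addr) accessor that indexes the
     * power-of-two sized SFDP table, wrapping the address into it.
     */
    #define define_sfdp_read(model)                                      \
        uint8_t m25p80_sfdp_##model(uint32_t addr)                       \
        {                                                                \
            return sfdp_##model[addr & (ARRAY_SIZE(sfdp_##model) - 1)];  \
        }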




Re: [PATCH v10 17/23] hw/intc/arm_gicv3: Add NMI handling CPU interface registers

2024-03-30 Thread Peter Maydell
On Sat, 30 Mar 2024 at 02:44, Jinjie Ruan via  wrote:
>
>
>
> On 2024/3/28 22:50, Peter Maydell wrote:
> > The NMI bit also exists only in the AP1R0 bit, not in every AP
> > register. So you can check it before the for() loop, something like this:
> >
> > if (cs->gic->nmi_support) {
> > /*
> >  * If an NMI is active this takes precedence over anything else
> >  * for priority purposes; the NMI bit is only in the AP1R0 bit.
> >  * We return here the effective priority of the NMI, which is
> >  * either 0x0 or 0x80. Callers will need to check NMI again for
> >  * purposes of either setting the RPR register bits or for
> >  * prioritization of NMI vs non-NMI.
> >  */
> > prio = 0;
> > if (cs->icc_apr[GICV3_G1][0] & ICC_AP1R_EL1_NMI) {
> > return 0;
> > }
> > if (cs->icc_apr[GICV3_G1NS][0] & ICC_AP1R_EL1_NMI) {
> > return (cs->gic->gicd_ctlr & GICD_CTLR_DS) ? 0 : 0x80;
> > }
> > }
> >
> > Then in icc_rpr_read() we can pretty much directly write the same
> > logic that the pseudocode uses to determine whether to set the RPR
> > NMI bits, after the point where we do the shifting of the prio for
> > the NS view:
> >
> > if (cs->gic->nmi_support) {
> > /* NMI info is reported in the high bits of RPR */
> > if (arm_feature(env, ARM_FEATURE_EL3) && !arm_is_secure(env)) {
> > if (cs->icc_apr[GICV3_G1NS][0] & ICC_AP1R_EL1_NMI) {
> > prio |= ICC_RPR_EL1_NMI;
>
> It seems ICC_RPR_EL1_NSNMI in pseudocode:
>
> // GICv3.3
> if HaveNMIExt() then
> if HaveEL(EL3) && (IsNonSecure() || IsRealm()) then
> pPriority<63> = ICC_AP1R_EL1NS<63>;
> ~~~
> else
> pPriority<63> = ICC_AP1R_EL1S<63>;
> pPriority<62> = ICC_AP1R_EL1NS<63>;

I'm not sure what you have in mind here? For QEMU,
ICC_AP1R_EL1NS<63> is the ICC_AP1R_EL1_NMI bit in the
icc_apr[GICV3_G1NS][0] value, and ICC_RPR_EL1_NMI is bit 63,
so the C code seems to me to match up with the pseudocode line
that you highlight.

thanks
-- PMM
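
For concreteness, the full icc_rpr_read() NMI handling being discussed,
lined up with the pseudocode above, could look roughly like this
(ICC_RPR_EL1_NMI is bit 63; ICC_RPR_EL1_NSNMI is an assumed name for the
bit-62 mask, used here purely for illustration):

    if (cs->gic->nmi_support) {
        /* NMI info is reported in the high bits of RPR */
        if (arm_feature(env, ARM_FEATURE_EL3) && !arm_is_secure(env)) {
            /* Non-secure view: RPR<63> mirrors the NS AP1R0 NMI bit */
            if (cs->icc_apr[GICV3_G1NS][0] & ICC_AP1R_EL1_NMI) {
                prio |= ICC_RPR_EL1_NMI;
            }
        } else {
            /* Secure view: RPR<63> from Secure AP1R0, RPR<62> from NS AP1R0 */
            if (cs->icc_apr[GICV3_G1][0] & ICC_AP1R_EL1_NMI) {
                prio |= ICC_RPR_EL1_NMI;
            }
            if (cs->icc_apr[GICV3_G1NS][0] & ICC_AP1R_EL1_NMI) {
                prio |= ICC_RPR_EL1_NSNMI;   /* assumed mask for bit 62 */
            }
        }
    }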



Re: [PATCH v10 14/23] hw/intc/arm_gicv3: Add irq non-maskable property

2024-03-30 Thread Peter Maydell
On Sat, 30 Mar 2024 at 01:42, Jinjie Ruan  wrote:
>
>
>
> On 2024/3/28 22:54, Peter Maydell wrote:
> > On Mon, 25 Mar 2024 at 08:52, Jinjie Ruan  wrote:
> >>
> >> A SPI, PPI or SGI interrupt can have non-maskable property. So maintain
> >> non-maskable property in PendingIrq and GICR/GICD. Since add new device
> >> state, it also needs to be migrated, so also save NMI info in
> >> vmstate_gicv3_cpu and vmstate_gicv3.
> >>
> >> Signed-off-by: Jinjie Ruan 
> >> Acked-by: Richard Henderson 
> >> ---
> >> v10:
> >> - superprio -> nmi, gicr_isuperprio -> gicr_inmir0.
> >> - Save NMI state in vmstate_gicv3_cpu and vmstate_gicv3.
> >> - Update the commit message.
> >> v3:
> >> - Place this ahead of implement GICR_INMIR.
> >> - Add Acked-by.
> >> ---
> >>  hw/intc/arm_gicv3_common.c | 44 ++
> >>  include/hw/intc/arm_gicv3_common.h |  4 +++
> >>  2 files changed, 48 insertions(+)
> >>
> >> diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
> >> index 2d2cea6858..be76ae0be6 100644
> >> --- a/hw/intc/arm_gicv3_common.c
> >> +++ b/hw/intc/arm_gicv3_common.c
> >> @@ -164,6 +164,24 @@ const VMStateDescription vmstate_gicv3_gicv4 = {
> >>  }
> >>  };
> >>
> >> +static bool nmi_needed(void *opaque)
> >> +{
> >> +GICv3CPUState *cs = opaque;
> >> +
> >> +return cs->gic->nmi_support != 0;
> >
> > nmi_support is already a bool, so you can
> >return cs->gic->nmi_support;
> >
> >
> >> +}

> >> @@ -238,6 +260,24 @@ const VMStateDescription vmstate_gicv3_gicd_no_migration_shift_bug = {
> >>  }
> >>  };
> >>
> >> +static bool needed_nmi(void *opaque)
> >> +{
> >> +GICv3State *cs = opaque;
> >> +
> >> +return cs->nmi_support != 0;
> >> +}
> >
> > You already have nmi_needed() above, so you can use it
> > as the .needed function for both vmstate struct definitions.
>
> The input opaque pointer seems not same, one is "GICv3CPUState *", but
> another is "GICv3State *"

Oops, you're right. In that case let's give these two
functions names that hopefully guide the reader towards
the difference, like gic_cpu_state_nmi_needed() and
gic_state_nmi_needed().

thanks
-- PMM
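
Spelled out, the suggestion above amounts to something like the following
(a sketch with assumed subsection names, not the final patch):

    static bool gic_cpu_state_nmi_needed(void *opaque)
    {
        GICv3CPUState *cs = opaque;

        return cs->gic->nmi_support;
    }

    static bool gic_state_nmi_needed(void *opaque)
    {
        GICv3State *s = opaque;

        return s->nmi_support;
    }

    /* Each becomes the .needed hook of its own vmstate subsection, e.g.: */
    static const VMStateDescription vmstate_gicv3_cpu_nmi = {
        .name = "arm_gicv3_cpu/nmi",           /* name is an assumption */
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = gic_cpu_state_nmi_needed,
        .fields = (const VMStateField[]) {
            /* NMI state fields go here */
            VMSTATE_END_OF_LIST()
        }
    };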



[PULL for-9.0 2/2] qtest/virtio-9p-test.c: remove g_test_slow() gate

2024-03-30 Thread Christian Schoenebeck
From: Daniel Henrique Barboza 

Commit 558f5c42ef gated the local tests with g_test_slow() to skip them
in 'make check'. The reported issue back then was this following CI
problem:

https://lists.nongnu.org/archive/html/qemu-devel/2020-11/msg05510.html

This problem ended up being fixed after it was detected with the
recently added risc-v machine nodes [1]. virtio-9p-test.c is now
creating and removing temporary dirs for each test run, instead of
creating a single dir for the entire qos-test scope.

We're now able to run these tests with 'make check' in the CI, so let's
go ahead and re-enable them.

This reverts commit 558f5c42efded3e0d0b20a90bce2a9a14580d824.

[1] https://mail.gnu.org/archive/html/qemu-devel/2024-03/msg05807.html

Signed-off-by: Daniel Henrique Barboza 
Message-Id: <20240327142011.805728-3-dbarb...@ventanamicro.com>
Reviewed-by: Greg Kurz 
Reviewed-by: Christian Schoenebeck 
Tested-by: Thomas Huth 
Signed-off-by: Christian Schoenebeck 
---
 tests/qtest/virtio-9p-test.c | 9 -
 1 file changed, 9 deletions(-)

diff --git a/tests/qtest/virtio-9p-test.c b/tests/qtest/virtio-9p-test.c
index 0179b3a394..3c8cd235cf 100644
--- a/tests/qtest/virtio-9p-test.c
+++ b/tests/qtest/virtio-9p-test.c
@@ -746,15 +746,6 @@ static void register_virtio_9p_test(void)
 
 
 /* 9pfs test cases using the 'local' filesystem driver */
-
-/*
- * XXX: Until we are sure that these tests can run everywhere,
- * keep them as "slow" so that they aren't run with "make check".
- */
-if (!g_test_slow()) {
-return;
-}
-
 opts.before = assign_9p_local_driver;
 qos_add_test("local/config", "virtio-9p", pci_config,  );
 qos_add_test("local/create_dir", "virtio-9p", fs_create_dir, );
-- 
2.30.2




[PULL for-9.0 1/2] qtest/virtio-9p-test.c: create/remove temp dirs after each test

2024-03-30 Thread Christian Schoenebeck
From: Daniel Henrique Barboza 

The local 9p driver in virtio-9p-test.c creates its temporary dir right at the
start of qos-test (via virtio_9p_create_local_test_dir()) and only
deletes it after qos-test is finished (via
virtio_9p_remove_local_test_dir()).

This means that any qos-test machine that ends up running virtio-9p-test
local tests more than once will end up re-using the same temp dir. This
is what's happening in [1] after we introduced the riscv machine nodes:
if we enable slow tests with the '-m slow' flag using
qemu-system-riscv64, this is what happens:

- a temp dir is created;

- virtio-9p-device tests will run virtio-9p-test successfully;

- virtio-9p-pci tests will run virtio-9p-test and fail right at the
  first slow test, fs_create_dir(), because the "01" file was already
  created by the fs_create_dir() test when it ran with the virtio-9p-device.

The root cause is that we're creating a single temporary dir, via the
construct/destruct callbacks, and this temp dir is kept for the entire
qos-test run.

We could change each test to clean up after itself. That approach would
make the 'create' tests obsolete, since we would need to create and
delete dirs/files/symlinks for the cleanup, effectively turning them into
the 'unlinkat' tests that come right after.

We chose a different approach that handles the root cause: do not use
constructor/destructor to create the temp dir. Create one temp dir for
each test, and remove it after the test is complete. This is the
approach taken for other qtests like vhost-user-test.c where each test
requires a setup() and a subsequent cleanup(), all of those instantiated
in the .before callback.

[1] https://mail.gnu.org/archive/html/qemu-devel/2024-03/msg05807.html

Reported-by: Thomas Huth 
Signed-off-by: Daniel Henrique Barboza 
Message-Id: <20240327142011.805728-2-dbarb...@ventanamicro.com>
Reviewed-by: Greg Kurz 
Reviewed-by: Christian Schoenebeck 
Tested-by: Thomas Huth 
Signed-off-by: Christian Schoenebeck 
---
 tests/qtest/virtio-9p-test.c | 23 +++
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/tests/qtest/virtio-9p-test.c b/tests/qtest/virtio-9p-test.c
index 65e69491e5..0179b3a394 100644
--- a/tests/qtest/virtio-9p-test.c
+++ b/tests/qtest/virtio-9p-test.c
@@ -693,9 +693,20 @@ static void fs_unlinkat_hardlink(void *obj, void *data,
 g_assert(stat(real_file, &st_real) == 0);
 }
 
+static void cleanup_9p_local_driver(void *data)
+{
+/* remove previously created test dir when test is completed */
+virtio_9p_remove_local_test_dir();
+}
+
 static void *assign_9p_local_driver(GString *cmd_line, void *arg)
 {
+/* make sure test dir for the 'local' tests exists */
+virtio_9p_create_local_test_dir();
+
 virtio_9p_assign_local_driver(cmd_line, "security_model=mapped-xattr");
+
+g_test_queue_destroy(cleanup_9p_local_driver, NULL);
 return arg;
 }
 
@@ -759,15 +770,3 @@ static void register_virtio_9p_test(void)
 }
 
 libqos_init(register_virtio_9p_test);
-
-static void __attribute__((constructor)) construct_9p_test(void)
-{
-/* make sure test dir for the 'local' tests exists */
-virtio_9p_create_local_test_dir();
-}
-
-static void __attribute__((destructor)) destruct_9p_test(void)
-{
-/* remove previously created test dir when test suite completed */
-virtio_9p_remove_local_test_dir();
-}
-- 
2.30.2
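
The .before-callback pattern used above generalises to any qtest that needs
per-test scratch state; a minimal, self-contained sketch with illustrative
names (not the actual virtio-9p helpers):

    #include <glib.h>
    #include <glib/gstdio.h>

    /* Teardown queued for the current test only; runs when the test ends. */
    static void remove_scratch_dir(void *data)
    {
        char *dir = data;

        g_rmdir(dir);
        g_free(dir);
    }

    static void test_uses_scratch_dir(void)
    {
        char *dir = g_dir_make_tmp("qtest-scratch-XXXXXX", NULL);

        g_assert_nonnull(dir);
        /* glib calls remove_scratch_dir() automatically after this test */
        g_test_queue_destroy(remove_scratch_dir, dir);

        /* ... test body using 'dir' ... */
    }

    int main(int argc, char **argv)
    {
        g_test_init(&argc, &argv, NULL);
        g_test_add_func("/example/scratch-dir", test_uses_scratch_dir);
        return g_test_run();
    }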




[PULL for-9.0 0/2] 9p queue 2024-03-29

2024-03-30 Thread Christian Schoenebeck
The following changes since commit 5012e522aca161be5c141596c66e5cc6082538a9:

  Update version for v9.0.0-rc1 release (2024-03-26 19:46:55 +)

are available in the Git repository at:

  https://github.com/cschoenebeck/qemu.git tags/pull-9p-20240329

for you to fetch changes up to dcae75fba1084823d0fc87caa13f0ba6f32155f3:

  qtest/virtio-9p-test.c: remove g_test_slow() gate (2024-03-28 09:54:47 +0100)


Changes for 9p tests only:

* Fix 9p tests for riscv.

* Re-enable 9p 'local' tests for running in CI pipelines.


Daniel Henrique Barboza (2):
  qtest/virtio-9p-test.c: create/remove temp dirs after each test
  qtest/virtio-9p-test.c: remove g_test_slow() gate

 tests/qtest/virtio-9p-test.c | 32 +++-
 1 file changed, 11 insertions(+), 21 deletions(-)



[PATCH v11 22/23] target/arm: Add FEAT_NMI to max

2024-03-30 Thread Jinjie Ruan via
Enable FEAT_NMI on the 'max' CPU.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v3:
- Add Reviewed-by.
- Sorted to last.
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/tcg/cpu64.c| 1 +
 2 files changed, 2 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index 2a7bbb82dc..a9ae7ede9f 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -64,6 +64,7 @@ the following architecture extensions:
 - FEAT_MTE (Memory Tagging Extension)
 - FEAT_MTE2 (Memory Tagging Extension)
 - FEAT_MTE3 (MTE Asymmetric Fault Handling)
+- FEAT_NMI (Non-maskable Interrupt)
 - FEAT_NV (Nested Virtualization)
 - FEAT_NV2 (Enhanced nested virtualization support)
 - FEAT_PACIMP (Pointer authentication - IMPLEMENTATION DEFINED algorithm)
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
index 9f7a9f3d2c..62c4663512 100644
--- a/target/arm/tcg/cpu64.c
+++ b/target/arm/tcg/cpu64.c
@@ -1175,6 +1175,7 @@ void aarch64_max_tcg_initfn(Object *obj)
 t = FIELD_DP64(t, ID_AA64PFR1, RAS_FRAC, 0);  /* FEAT_RASv1p1 + FEAT_DoubleFault */
 t = FIELD_DP64(t, ID_AA64PFR1, SME, 1);   /* FEAT_SME */
 t = FIELD_DP64(t, ID_AA64PFR1, CSV2_FRAC, 0); /* FEAT_CSV2_2 */
+t = FIELD_DP64(t, ID_AA64PFR1, NMI, 1);   /* FEAT_NMI */
 cpu->isar.id_aa64pfr1 = t;
 
 t = cpu->isar.id_aa64mmfr0;
-- 
2.34.1




[PATCH v11 17/23] hw/intc/arm_gicv3: Add NMI handling CPU interface registers

2024-03-30 Thread Jinjie Ruan via
Add the NMIAR CPU interface registers which deal with acknowledging NMI.

When introducing the NMI interrupt, there are some updates to the semantics
of the ICC_IAR1_EL1 and ICC_HPPIR1_EL1 registers. The ICC_IAR1_EL1 register
should return 1022 if the INTID has the non-maskable property, and the
ICC_NMIAR1_EL1 register should return 1023 if the INTID does not have the
non-maskable property. However, these changes are not necessary for the
ICC_HPPIR1_EL1 register.

The APR and RPR registers also have NMI bits which need to be handled
correctly.
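
In other words, the acknowledge semantics described above boil down to the
following (a rough standalone sketch, not the patch's code; INTID_NMI here
simply names the architectural value 1022):

    #include <stdint.h>
    #include <stdbool.h>

    enum { INTID_NMI = 1022, INTID_SPURIOUS = 1023 };

    /* ICC_IAR1_EL1: never hand out an NMI here, point the OS at NMIAR1 */
    static uint64_t iar1_ack(uint64_t hpp_intid, bool hpp_is_nmi)
    {
        return hpp_is_nmi ? INTID_NMI : hpp_intid;
    }

    /* ICC_NMIAR1_EL1: only NMIs may be acknowledged here */
    static uint64_t nmiar1_ack(uint64_t hpp_intid, bool hpp_is_nmi)
    {
        return hpp_is_nmi ? hpp_intid : INTID_SPURIOUS;
    }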

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v11:
- Handle NMI priority in icc_highest_active_prio() and handle NMI RPR in
  icc_rpr_read() separately.
- Only set the NMI bit for an NMI and the ordinary priority bit for a
  non-NMI in icc_activate_irq().
- Only clear APR bit for AP1R0 in icc_drop_prio().
- Check special INTID_* in callers instead of passing two extra boolean args
  for ack functions.
- Handle NMI in icc_hppi_can_preempt() and icc_highest_active_group().
- Also check icc_hppi_can_preempt() for icc_nmiar1_read().
v10:
- is_nmi -> nmi.
- is_hppi -> hppi.
- Exchange the order of nmi and hppi parameters.
- superprio -> nmi.
- Handle APR and RPR NMI bits.
- Update the commit message, super priority -> non-maskable property.
v7:
- Add Reviewed-by.
v4:
- Define ICC_NMIAR1_EL1 only if FEAT_GICv3_NMI is implemented.
- Check sctrl_elx.SCTLR_NMI to return 1022 for icc_iar1_read().
- Add gicv3_icc_nmiar1_read() trace event.
- Do not check icc_hppi_can_preempt() for icc_nmiar1_read().
- Add icv_nmiar1_read() and call it when EL2Enabled() and HCR_EL2.IMO == '1'
---
 hw/intc/arm_gicv3_cpuif.c | 137 --
 hw/intc/gicv3_internal.h  |   5 ++
 hw/intc/trace-events  |   1 +
 3 files changed, 138 insertions(+), 5 deletions(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index e1a60d8c15..f99f2570a6 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -795,6 +795,13 @@ static uint64_t icv_iar_read(CPUARMState *env, const ARMCPRegInfo *ri)
 return intid;
 }
 
+static uint64_t icv_nmiar1_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+/* todo */
+uint64_t intid = INTID_SPURIOUS;
+return intid;
+}
+
 static uint32_t icc_fullprio_mask(GICv3CPUState *cs)
 {
 /*
@@ -832,6 +839,23 @@ static int icc_highest_active_prio(GICv3CPUState *cs)
  */
 int i;
 
+if (cs->gic->nmi_support) {
+/*
+ * If an NMI is active this takes precedence over anything else
+ * for priority purposes; the NMI bit is only in the AP1R0 bit.
+ * We return here the effective priority of the NMI, which is
+ * either 0x0 or 0x80. Callers will need to check NMI again for
+ * purposes of either setting the RPR register bits or for
+ * prioritization of NMI vs non-NMI.
+ */
+if (cs->icc_apr[GICV3_G1][0] & ICC_AP1R_EL1_NMI) {
+return 0;
+}
+if (cs->icc_apr[GICV3_G1NS][0] & ICC_AP1R_EL1_NMI) {
+return (cs->gic->gicd_ctlr & GICD_CTLR_DS) ? 0 : 0x80;
+}
+}
+
 for (i = 0; i < icc_num_aprs(cs); i++) {
 uint32_t apr = cs->icc_apr[GICV3_G0][i] |
 cs->icc_apr[GICV3_G1][i] | cs->icc_apr[GICV3_G1NS][i];
@@ -898,12 +922,24 @@ static bool icc_hppi_can_preempt(GICv3CPUState *cs)
  */
 int rprio;
 uint32_t mask;
+ARMCPU *cpu = ARM_CPU(cs->cpu);
CPUARMState *env = &cpu->env;
 
 if (icc_no_enabled_hppi(cs)) {
 return false;
 }
 
-if (cs->hppi.prio >= cs->icc_pmr_el1) {
+if (cs->gic->nmi_support && cs->hppi.nmi) {
+if (!(cs->gic->gicd_ctlr & GICD_CTLR_DS) &&
+cs->hppi.grp == GICV3_G1NS) {
+if (cs->icc_pmr_el1 < 0x80) {
+return false;
+}
+if (arm_is_secure(env) && cs->icc_pmr_el1 == 0x80) {
+return false;
+}
+}
+} else if (cs->hppi.prio >= cs->icc_pmr_el1) {
 /* Priority mask masks this interrupt */
 return false;
 }
@@ -923,6 +959,18 @@ static bool icc_hppi_can_preempt(GICv3CPUState *cs)
 return true;
 }
 
+if (cs->gic->nmi_support && cs->hppi.nmi &&
+(cs->hppi.prio & mask) == (rprio & mask)) {
+if ((cs->hppi.grp == GICV3_G1NS) &&
+!(cs->icc_apr[GICV3_G1NS][0] & ICC_AP1R_EL1_NMI)) {
+return true;
+}
+if ((cs->hppi.grp == GICV3_G1) &&
+!(cs->icc_apr[GICV3_G1][0] & ICC_AP1R_EL1_NMI)) {
+return true;
+}
+}
+
 return false;
 }
 
@@ -1044,8 +1092,13 @@ static void icc_activate_irq(GICv3CPUState *cs, int irq)
 int aprbit = prio >> (8 - cs->prebits);
 int regno = aprbit / 32;
 int regbit = aprbit % 32;
+bool nmi = cs->hppi.nmi;
 
-cs->icc_apr[cs->hppi.grp][regno] |= (1 << regbit);
+if (cs->gic->nmi_support && nmi) {
+cs->icc_apr[cs->hppi.grp][regno] |= ICC_AP1R_EL1_NMI;
+} else {

[PATCH v11 00/23] target/arm: Implement FEAT_NMI and FEAT_GICv3_NMI

2024-03-30 Thread Jinjie Ruan via
This patch set implements FEAT_NMI and FEAT_GICv3_NMI for ARMv8. These
introduce support for a new category of interrupts in the architecture
which we can use to provide NMI-like functionality.

There are two modes for using FEAT_NMI. When PSTATE.ALLINT is set, or when
PSTATE.SP and SCTLR_ELx.SPINTMASK are set, any entry to ELx causes all
interrupts, including those with superpriority, to be masked on entry to
ELx until the mask is explicitly removed by software or hardware.
PSTATE.ALLINT can be managed by software using the new register control
ALLINT.ALLINT. Independent controls are provided for this feature at each
EL; usage at EL1 should not disrupt EL2 or EL3.
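
As a rough sketch of that masking rule (illustrative only; the real check
is the allIntMask logic added to arm_excp_unmasked() in patch 06):

    #include <stdbool.h>

    /*
     * Interrupts with superpriority are masked at the target EL when
     * SCTLR_ELx.NMI is set and either PSTATE.ALLINT is set, or PSTATE.SP
     * is set together with SCTLR_ELx.SPINTMASK.
     */
    static bool allint_masked(bool sctlr_nmi, bool pstate_allint,
                              bool pstate_sp, bool sctlr_spintmask)
    {
        return sctlr_nmi && (pstate_allint || (pstate_sp && sctlr_spintmask));
    }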

I have tested it with the following linux patches which try to support
FEAT_NMI in linux kernel:


https://lore.kernel.org/linux-arm-kernel/Y4sH5qX5bK9xfEBp@lpieralisi/T/#mb4ba4a2c045bf72c10c2202c1dd1b82d3240dc88

In the test, SGI, PPI and SPI interrupts can all be given superpriority so
that they are delivered as hardware NMIs. The SGI is tested with the
kernel's IPI-as-NMI framework and the softlockup, hardlockup and kgdb test
cases; the PPI interrupt is tested with the "perf top" command with
hardware NMI enabled; and the SPI interrupt is tested with a custom test
module, in which NMI interrupts can be sent and received normally.

The Virtual NMI (VNMI) SGI, PPI and SPI interrupts have also been tested in
a nested QEMU virtual machine with host "virtualization=true". The SGI VNMI
is tested by accessing GICR_INMIR0 and GICR_ISPENDR0 with the devmem
command, as well as with the hardlockup and kgdb test cases. The PPI VNMI
is tested by accessing GICR_INMIR0 and GICR_ISPENDR0 with the devmem
command, as well as with the "perf top" command with hardware NMI enabled,
which works well. The SPI VNMI is tested with a custom test module, in
which SPI VNMIs can be sent from the GIC and received normally.

                  +---------------------+
                  |     Distributor     |
                  +---------------------+
                SPI | NMI       SPI | NMI
                   \/              \/
             +----------+    +----------+
             |  Redist  |    |  Redist  |
             +----------+    +----------+
            SGI | NMI     PPI | NMI
               \/            \/
      +---------------+       +---------------+
      | CPU Interface |  ...  | CPU Interface |
      +---------------+       +---------------+
              | NMI                  | NMI
             \/                     \/
          +------+               +------+
          |  PE  |               |  PE  |
          +------+               +------+

Changes in v11:
- Put vmstate_gicv3_cpu_nmi and vmstate_gicv3_gicd_nmi into existing list.
- Remove the excess != 0.
- Handle NMI priority in icc_highest_active_prio() and handle NMI RPR in
  icc_rpr_read() separately.
- Only set the NMI bit for an NMI and the ordinary priority bit for a
  non-NMI in icc_activate_irq().
- Only clear APR bit for AP1R0 in icc_drop_prio().
- Check special INTID_* in callers instead of passing two extra boolean args
  for ack functions.
- Handle NMI in icc_hppi_can_preempt() and icc_highest_active_group().
- Also check icc_hppi_can_preempt() for icc_nmiar1_read().
- Deal with NMI in the callers instead of ich_highest_active_virt_prio().
- Set either NMI or a group-priority bit, not both.
- Only set AP NMI bits in the 0 reg.
- Handle NMI in hppvi_index(), icv_hppi_can_preempt() and icv_eoir_write().
- Add Reviewed-by.

Changes in v10:
- Correct the exception_target_el(env) to 2 in msr_set_allint_el1 helper,
  since it is a hypervisor trap from EL1 to EL2.
- In arm_cpu_exec_interrupt(), if SCTLR_ELx.NMI is 0, NMI -> IRQ,
  VINMI -> VIRQ, VFNMI -> VFIQ.
- Make arm_cpu_update_virq() and arm_cpu_update_vfiq() check that it is not a
  VINMI/VFNMI, so only set 1 bit in interrupt_request, not 2.
- Adjust "hw/intc: Enable FEAT_GICv3_NMI Feature" to before "add irq
  non-maskable property".
- superprio -> nmi, gicr_isuperprio -> gicr_inmir0, is_nmi -> nmi,
  is_hppi -> hppi, has_superprio -> nmi, superpriority -> non-maskable property.
- Save NMI state in vmstate_gicv3_cpu and vmstate_gicv3.
- Exchange the order of nmi and hppi parameters.
- Handle APR and RPR NMI bits, rename ICH_AP1R_EL2_NMI to ICV_AP1R_EL1_NMI.
- Set ICV_RPR_EL1.NMI according to the ICV_AP1R_EL1.NMI in
  ich_highest_active_virt_prio()
- Update the commit message.

Changes in v9:
- Move nmi_reginfo and related functions inside an existing ifdef
  TARGET_AARCH64 to solve the --target-list=aarch64-softmmu,arm-softmmu
  compilation problem.
- Check 'isread' when writing to ALLINT.
- Update the GPIOs passed in the arm_cpu_kvm_set_irq, and update the comment.
- Definitely not merge VINMI and VFNMI into EXCP_VNMI.
- ARM_CPU_VNMI -> ARM_CPU_VINMI, 

[PATCH v11 10/23] hw/arm/virt: Wire NMI and VINMI irq lines from GIC to CPU

2024-03-30 Thread Jinjie Ruan via
Wire the new NMI and VINMI interrupt line from the GIC to each CPU.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v9:
- Rename ARM_CPU_VNMI to ARM_CPU_VINMI.
- Update the commit message.
v4:
- Add Reviewed-by.
v3:
- Also add VNMI wire.
---
 hw/arm/virt.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index a9a913aead..ef2e6c2c4d 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -821,7 +821,8 @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
 
 /* Wire the outputs from each CPU's generic timer and the GICv3
  * maintenance interrupt signal to the appropriate GIC PPI inputs,
- * and the GIC's IRQ/FIQ/VIRQ/VFIQ interrupt outputs to the CPU's inputs.
+ * and the GIC's IRQ/FIQ/VIRQ/VFIQ/NMI/VINMI interrupt outputs to the
+ * CPU's inputs.
  */
 for (i = 0; i < smp_cpus; i++) {
 DeviceState *cpudev = DEVICE(qemu_get_cpu(i));
@@ -865,6 +866,10 @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
qdev_get_gpio_in(cpudev, ARM_CPU_VIRQ));
 sysbus_connect_irq(gicbusdev, i + 3 * smp_cpus,
qdev_get_gpio_in(cpudev, ARM_CPU_VFIQ));
+sysbus_connect_irq(gicbusdev, i + 4 * smp_cpus,
+   qdev_get_gpio_in(cpudev, ARM_CPU_NMI));
+sysbus_connect_irq(gicbusdev, i + 5 * smp_cpus,
+   qdev_get_gpio_in(cpudev, ARM_CPU_VINMI));
 }
 
 fdt_add_gic_node(vms);
-- 
2.34.1
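
The multipliers above follow the order in which the GIC registers its
per-CPU sysbus output lines (IRQ, FIQ, VIRQ, VFIQ, then the new NMI and
VINMI; see patch 11). A small illustrative helper, not part of the patch,
makes the mapping explicit:

    /* Output kinds, in the order the GIC initialises its sysbus IRQ lines. */
    enum gic_output {
        GIC_OUT_IRQ, GIC_OUT_FIQ, GIC_OUT_VIRQ,
        GIC_OUT_VFIQ, GIC_OUT_NMI, GIC_OUT_VINMI,
    };

    /* sysbus output index for a given CPU and output kind */
    static int gic_sysbus_irq_index(enum gic_output kind, int cpu, int smp_cpus)
    {
        return cpu + (int)kind * smp_cpus;
    }

    /* e.g. gic_sysbus_irq_index(GIC_OUT_NMI, i, smp_cpus) == i + 4 * smp_cpus */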




[PATCH v11 23/23] hw/arm/virt: Add FEAT_GICv3_NMI feature support in virt GIC

2024-03-30 Thread Jinjie Ruan via
A PE that implements FEAT_NMI and FEAT_GICv3 also implements
FEAT_GICv3_NMI. A PE that does not implement FEAT_NMI does not implement
FEAT_GICv3_NMI.

So include support for the FEAT_GICv3_NMI feature as part of virt platform
GIC initialization if FEAT_NMI and FEAT_GICv3 are supported.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v4:
- Add Reviewed-by.
v3:
- Adjust to be the last after add FEAT_NMI to max.
- Check whether support FEAT_NMI and FEAT_GICv3 for FEAT_GICv3_NMI.
---
 hw/arm/virt.c | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index ef2e6c2c4d..63d9f5b553 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -729,6 +729,19 @@ static void create_v2m(VirtMachineState *vms)
 vms->msi_controller = VIRT_MSI_CTRL_GICV2M;
 }
 
+/*
+ * A PE that implements FEAT_NMI and FEAT_GICv3 also implements
+ * FEAT_GICv3_NMI. A PE that does not implement FEAT_NMI, does not implement
+ * FEAT_GICv3_NMI.
+ */
+static bool gicv3_nmi_present(VirtMachineState *vms)
+{
+ARMCPU *cpu = ARM_CPU(qemu_get_cpu(0));
+
+return cpu_isar_feature(aa64_nmi, cpu) &&
+   (vms->gic_version != VIRT_GIC_VERSION_2);
+}
+
 static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
 {
 MachineState *ms = MACHINE(vms);
@@ -802,6 +815,11 @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
   vms->virt);
 }
 }
+
+if (gicv3_nmi_present(vms)) {
+qdev_prop_set_bit(vms->gic, "has-nmi", true);
+}
+
 gicbusdev = SYS_BUS_DEVICE(vms->gic);
 sysbus_realize_and_unref(gicbusdev, _fatal);
 sysbus_mmio_map(gicbusdev, 0, vms->memmap[VIRT_GIC_DIST].base);
-- 
2.34.1




[PATCH v11 05/23] target/arm: Support MSR access to ALLINT

2024-03-30 Thread Jinjie Ruan via
Support ALLINT msr access as follows:
mrs <Xt>, ALLINT    // read allint
msr ALLINT, <Xt>    // write allint with imm

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v9:
- Move nmi_reginfo and related functions inside an existing ifdef
  TARGET_AARCH64 to solve the --target-list=aarch64-softmmu,arm-softmmu
  compilation problem.
- Check 'isread' when writing to ALLINT.
v5:
- Add Reviewed-by.
v4:
- Remove arm_is_el2_enabled() check in allint_check().
- Change to env->pstate instead of env->allint.
v3:
- Remove EL0 check in aa64_allint_access() which alreay checks in .access
  PL1_RW.
- Use arm_hcrx_el2_eff() in aa64_allint_access() instead of env->cp15.hcrx_el2.
- Make ALLINT msr access function controlled by aa64_nmi.
---
 target/arm/helper.c | 35 +++
 1 file changed, 35 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 7d6c6e9878..a65729af66 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -7497,6 +7497,37 @@ static const ARMCPRegInfo rme_mte_reginfo[] = {
   .opc0 = 1, .opc1 = 6, .crn = 7, .crm = 14, .opc2 = 5,
   .access = PL3_W, .type = ARM_CP_NOP },
 };
+
+static void aa64_allint_write(CPUARMState *env, const ARMCPRegInfo *ri,
+  uint64_t value)
+{
+env->pstate = (env->pstate & ~PSTATE_ALLINT) | (value & PSTATE_ALLINT);
+}
+
+static uint64_t aa64_allint_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+return env->pstate & PSTATE_ALLINT;
+}
+
+static CPAccessResult aa64_allint_access(CPUARMState *env,
+ const ARMCPRegInfo *ri, bool isread)
+{
+if (!isread && arm_current_el(env) == 1 &&
+(arm_hcrx_el2_eff(env) & HCRX_TALLINT)) {
+return CP_ACCESS_TRAP_EL2;
+}
+return CP_ACCESS_OK;
+}
+
+static const ARMCPRegInfo nmi_reginfo[] = {
+{ .name = "ALLINT", .state = ARM_CP_STATE_AA64,
+  .opc0 = 3, .opc1 = 0, .opc2 = 0, .crn = 4, .crm = 3,
+  .type = ARM_CP_NO_RAW,
+  .access = PL1_RW, .accessfn = aa64_allint_access,
+  .fieldoffset = offsetof(CPUARMState, pstate),
+  .writefn = aa64_allint_write, .readfn = aa64_allint_read,
+  .resetfn = arm_cp_reset_ignore },
+};
 #endif /* TARGET_AARCH64 */
 
 static void define_pmu_regs(ARMCPU *cpu)
@@ -9891,6 +9922,10 @@ void register_cp_regs_for_features(ARMCPU *cpu)
 if (cpu_isar_feature(aa64_nv2, cpu)) {
 define_arm_cp_regs(cpu, nv2_reginfo);
 }
+
+if (cpu_isar_feature(aa64_nmi, cpu)) {
+define_arm_cp_regs(cpu, nmi_reginfo);
+}
 #endif
 
 if (cpu_isar_feature(any_predinv, cpu)) {
-- 
2.34.1




[PATCH v11 06/23] target/arm: Add support for Non-maskable Interrupt

2024-03-30 Thread Jinjie Ruan via
This only implements the external delivery method via the GICv3.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v10:
- In arm_cpu_exec_interrupt(), if SCTLR_ELx.NMI is 0, NMI -> IRQ,
  VINMI -> VIRQ, VFNMI -> VFIQ.
- Make arm_cpu_update_virq() and arm_cpu_update_vfiq() check that it is not a
  VINMI/VFNMI, so only set 1 bit in interrupt_request, not 2.
v9:
- Update the GPIOs passed in the arm_cpu_kvm_set_irq, and update the comment.
- Definitely not merge VINMI and VFNMI into EXCP_VNMI.
- Update VINMI and VFNMI when writing HCR_EL2 or HCRX_EL2.
v8:
- Fix the rcu stall after sending a VNMI in qemu VM.
v7:
- Add Reviewed-by.
v6:
- env->cp15.hcr_el2 -> arm_hcr_el2_eff().
- env->cp15.hcrx_el2 -> arm_hcrx_el2_eff().
- Not include VF && VFNMI in CPU_INTERRUPT_VNMI.
v4:
- Accept NMI unconditionally for arm_cpu_has_work() but add comment.
- Change from & to && for EXCP_IRQ or EXCP_FIQ.
- Refator nmi mask in arm_excp_unmasked().
- Also handle VNMI in arm_cpu_exec_interrupt() and arm_cpu_set_irq().
- Rename virtual to Virtual.
v3:
- Not include CPU_INTERRUPT_NMI when FEAT_NMI not enabled
- Add ARM_CPU_VNMI.
- Refator nmi mask in arm_excp_unmasked().
- Test SCTLR_ELx.NMI for ALLINT mask for NMI.
---
 target/arm/cpu-qom.h   |   5 +-
 target/arm/cpu.c   | 146 +
 target/arm/cpu.h   |   6 ++
 target/arm/helper.c|  33 --
 target/arm/internals.h |  18 +
 5 files changed, 192 insertions(+), 16 deletions(-)

diff --git a/target/arm/cpu-qom.h b/target/arm/cpu-qom.h
index 8e032691db..b497667d61 100644
--- a/target/arm/cpu-qom.h
+++ b/target/arm/cpu-qom.h
@@ -36,11 +36,14 @@ DECLARE_CLASS_CHECKERS(AArch64CPUClass, AARCH64_CPU,
 #define ARM_CPU_TYPE_SUFFIX "-" TYPE_ARM_CPU
 #define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX)
 
-/* Meanings of the ARMCPU object's four inbound GPIO lines */
+/* Meanings of the ARMCPU object's seven inbound GPIO lines */
 #define ARM_CPU_IRQ 0
 #define ARM_CPU_FIQ 1
 #define ARM_CPU_VIRQ 2
 #define ARM_CPU_VFIQ 3
+#define ARM_CPU_NMI 4
+#define ARM_CPU_VINMI 5
+#define ARM_CPU_VFNMI 6
 
 /* For M profile, some registers are banked secure vs non-secure;
  * these are represented as a 2-element array where the first element
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index ab8d007a86..74eb573aeb 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -122,6 +122,13 @@ void arm_restore_state_to_opc(CPUState *cs,
 }
 #endif /* CONFIG_TCG */
 
+/*
+ * With SCTLR_ELx.NMI == 0, IRQ with Superpriority is masked identically with
+ * IRQ without Superpriority. Moreover, if the GIC is configured so that
+ * FEAT_GICv3_NMI is only set if FEAT_NMI is set, then we won't ever see
+ * CPU_INTERRUPT_*NMI anyway. So we might as well accept NMI here
+ * unconditionally.
+ */
 static bool arm_cpu_has_work(CPUState *cs)
 {
 ARMCPU *cpu = ARM_CPU(cs);
@@ -129,6 +136,7 @@ static bool arm_cpu_has_work(CPUState *cs)
 return (cpu->power_state != PSCI_OFF)
 && cs->interrupt_request &
 (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
+ | CPU_INTERRUPT_NMI | CPU_INTERRUPT_VINMI | CPU_INTERRUPT_VFNMI
  | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VSERR
  | CPU_INTERRUPT_EXITTB);
 }
@@ -668,6 +676,7 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
 CPUARMState *env = cpu_env(cs);
 bool pstate_unmasked;
 bool unmasked = false;
+bool allIntMask = false;
 
 /*
  * Don't take exceptions if they target a lower EL.
@@ -678,13 +687,36 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
 return false;
 }
 
+if (cpu_isar_feature(aa64_nmi, env_archcpu(env)) &&
+env->cp15.sctlr_el[target_el] & SCTLR_NMI && cur_el == target_el) {
+allIntMask = env->pstate & PSTATE_ALLINT ||
+ ((env->cp15.sctlr_el[target_el] & SCTLR_SPINTMASK) &&
+  (env->pstate & PSTATE_SP));
+}
+
 switch (excp_idx) {
+case EXCP_NMI:
+pstate_unmasked = !allIntMask;
+break;
+
+case EXCP_VINMI:
+if (!(hcr_el2 & HCR_IMO) || (hcr_el2 & HCR_TGE)) {
+/* VINMIs are only taken when hypervized.  */
+return false;
+}
+return !allIntMask;
+case EXCP_VFNMI:
+if (!(hcr_el2 & HCR_FMO) || (hcr_el2 & HCR_TGE)) {
+/* VFNMIs are only taken when hypervized.  */
+return false;
+}
+return !allIntMask;
 case EXCP_FIQ:
-pstate_unmasked = !(env->daif & PSTATE_F);
+pstate_unmasked = (!(env->daif & PSTATE_F)) && (!allIntMask);
 break;
 
 case EXCP_IRQ:
-pstate_unmasked = !(env->daif & PSTATE_I);
+pstate_unmasked = (!(env->daif & PSTATE_I)) && (!allIntMask);
 break;
 
 case EXCP_VFIQ:
@@ -692,13 +724,13 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
 /* VFIQs are only taken 

[PATCH v11 19/23] hw/intc/arm_gicv3: Implement NMI interrupt priority

2024-03-30 Thread Jinjie Ruan via
If the GICD_CTLR_DS bit is zero and the NMI is non-secure, the NMI priority
is higher than 0x80, otherwise it is higher than 0x0. Save the NMI
non-maskable information in hppi.nmi to deliver the NMI exception.
Since both the GICR and GICD can deliver an NMI, it is necessary to check
whether the pending irq is an NMI in both gicv3_redist_update_noirqset()
and gicv3_update_noirqset(). In irqbetter(), only a non-NMI with the same
priority and a smaller interrupt number can be preempted, not an NMI.
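
A worked version of that priority rule, as a standalone illustration (not
the patch's gicv3_get_priority() itself):

    #include <stdbool.h>
    #include <stdint.h>

    /* Effective priority of a pending NMI as seen by the CPU interface. */
    static uint8_t effective_nmi_prio(bool gicd_ctlr_ds, bool group1ns)
    {
        /* DS == 0 and the NMI is Non-secure Group 1: priority 0x80 */
        if (!gicd_ctlr_ds && group1ns) {
            return 0x80;
        }
        /* Otherwise (DS == 1, or a Secure NMI): priority 0x0 */
        return 0x0;
    }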

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v10:
- has_superprio -> nmi.
- superpriority -> non-maskable property.
- gicr_isuperprio -> gicr_inmir0.
- superprio -> nmi.
v8:
- Add Reviewed-by.
v7:
- Reorder the irqbetter() code for clarity.
- Eliminate the has_superprio local variable for gicv3_get_priority().
- false -> cs->hpplpi.superprio in gicv3_redist_update_noirqset().
- 0x0 -> false in arm_gicv3_common_reset_hold().
- Clear superprio in several places for hppi, hpplpi and hppvlpi.
v6:
- Put the "extract superprio info" logic into gicv3_get_priority().
- Update the comment in irqbetter().
- Reset the cs->hppi.superprio to 0x0.
- Set hppi.superprio to false for LPI.
v4:
- Replace is_nmi with has_superprio to not a mix NMI and superpriority.
- Update the comment in irqbetter().
- Extract gicv3_get_priority() to avoid code repeat.
---
v3:
- Add missing brace
---
 hw/intc/arm_gicv3.c| 67 +-
 hw/intc/arm_gicv3_common.c |  3 ++
 hw/intc/arm_gicv3_redist.c |  3 ++
 3 files changed, 64 insertions(+), 9 deletions(-)

diff --git a/hw/intc/arm_gicv3.c b/hw/intc/arm_gicv3.c
index 0b8f79a122..6704190d9d 100644
--- a/hw/intc/arm_gicv3.c
+++ b/hw/intc/arm_gicv3.c
@@ -21,7 +21,7 @@
 #include "hw/intc/arm_gicv3.h"
 #include "gicv3_internal.h"
 
-static bool irqbetter(GICv3CPUState *cs, int irq, uint8_t prio)
+static bool irqbetter(GICv3CPUState *cs, int irq, uint8_t prio, bool nmi)
 {
 /* Return true if this IRQ at this priority should take
  * precedence over the current recorded highest priority
@@ -30,14 +30,23 @@ static bool irqbetter(GICv3CPUState *cs, int irq, uint8_t prio)
  * is the same as this one (a property which the calling code
  * relies on).
  */
-if (prio < cs->hppi.prio) {
-return true;
+if (prio != cs->hppi.prio) {
+return prio < cs->hppi.prio;
+}
+
+/*
+ * An IRQ of the same priority but with the non-maskable property should
+ * be signalled to the CPU, as it effectively has a higher priority than
+ * the labelled 0x80 or 0x00.
+ */
+if (nmi != cs->hppi.nmi) {
+return nmi;
 }
+
 /* If multiple pending interrupts have the same priority then it is an
  * IMPDEF choice which of them to signal to the CPU. We choose to
  * signal the one with the lowest interrupt number.
  */
-if (prio == cs->hppi.prio && irq <= cs->hppi.irq) {
+if (irq <= cs->hppi.irq) {
 return true;
 }
 return false;
@@ -129,6 +138,40 @@ static uint32_t gicr_int_pending(GICv3CPUState *cs)
 return pend;
 }
 
+static bool gicv3_get_priority(GICv3CPUState *cs, bool is_redist,
+   uint8_t *prio, int irq)
+{
+uint32_t nmi = 0x0;
+
+if (is_redist) {
+nmi = extract32(cs->gicr_inmir0, irq, 1);
+} else {
+nmi = *gic_bmp_ptr32(cs->gic->nmi, irq);
+nmi = nmi & (1 << (irq & 0x1f));
+}
+
+if (nmi) {
+/* DS = 0 & Non-secure NMI */
+if (!(cs->gic->gicd_ctlr & GICD_CTLR_DS) &&
+((is_redist && extract32(cs->gicr_igroupr0, irq, 1)) ||
+ (!is_redist && gicv3_gicd_group_test(cs->gic, irq)))) {
+*prio = 0x80;
+} else {
+*prio = 0x0;
+}
+
+return true;
+}
+
+if (is_redist) {
+*prio = cs->gicr_ipriorityr[irq];
+} else {
+*prio = cs->gic->gicd_ipriority[irq];
+}
+
+return false;
+}
+
 /* Update the interrupt status after state in a redistributor
  * or CPU interface has changed, but don't tell the CPU i/f.
  */
@@ -141,6 +184,7 @@ static void gicv3_redist_update_noirqset(GICv3CPUState *cs)
 uint8_t prio;
 int i;
 uint32_t pend;
+bool nmi = false;
 
 /* Find out which redistributor interrupts are eligible to be
  * signaled to the CPU interface.
@@ -152,10 +196,11 @@ static void gicv3_redist_update_noirqset(GICv3CPUState *cs)
 if (!(pend & (1 << i))) {
 continue;
 }
-prio = cs->gicr_ipriorityr[i];
-if (irqbetter(cs, i, prio)) {
+nmi = gicv3_get_priority(cs, true, &prio, i);
+if (irqbetter(cs, i, prio, nmi)) {
 cs->hppi.irq = i;
 cs->hppi.prio = prio;
+cs->hppi.nmi = nmi;
 seenbetter = true;
 }
 }
@@ -168,9 +213,10 @@ static void gicv3_redist_update_noirqset(GICv3CPUState *cs)
 if ((cs->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) && 

[PATCH v11 16/23] hw/intc/arm_gicv3: Implement GICD_INMIR

2024-03-30 Thread Jinjie Ruan via
Add the GICD_INMIR and GICD_INMIRnE registers and support access to GICD_INMIR0.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
Reviewed-by: Peter Maydell 
---
v11:
- Add new Reviewed-by.
v10:
- superprio -> nmi.
v4:
- Make the GICD_INMIR implementation more clearer.
- Udpate the commit message.
v3:
- Add Reviewed-by.
---
 hw/intc/arm_gicv3_dist.c | 34 ++
 hw/intc/gicv3_internal.h |  2 ++
 2 files changed, 36 insertions(+)

diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c
index 22ddc0d666..d8207acb22 100644
--- a/hw/intc/arm_gicv3_dist.c
+++ b/hw/intc/arm_gicv3_dist.c
@@ -89,6 +89,29 @@ static int gicd_ns_access(GICv3State *s, int irq)
 return extract32(s->gicd_nsacr[irq / 16], (irq % 16) * 2, 2);
 }
 
+static void gicd_write_bitmap_reg(GICv3State *s, MemTxAttrs attrs,
+  uint32_t *bmp, maskfn *maskfn,
+  int offset, uint32_t val)
+{
+/*
+ * Helper routine to implement writing to a "set" register
+ * (GICD_INMIR, etc).
+ * Semantics implemented here:
+ * RAZ/WI for SGIs, PPIs, unimplemented IRQs
+ * Bits corresponding to Group 0 or Secure Group 1 interrupts RAZ/WI.
+ * offset should be the offset in bytes of the register from the start
+ * of its group.
+ */
+int irq = offset * 8;
+
+if (irq < GIC_INTERNAL || irq >= s->num_irq) {
+return;
+}
+val &= mask_group_and_nsacr(s, attrs, maskfn, irq);
+*gic_bmp_ptr32(bmp, irq) = val;
+gicv3_update(s, irq, 32);
+}
+
 static void gicd_write_set_bitmap_reg(GICv3State *s, MemTxAttrs attrs,
   uint32_t *bmp,
   maskfn *maskfn,
@@ -545,6 +568,11 @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
 /* RAZ/WI since affinity routing is always enabled */
 *data = 0;
 return true;
+case GICD_INMIR ... GICD_INMIR + 0x7f:
+*data = (!s->nmi_support) ? 0 :
+gicd_read_bitmap_reg(s, attrs, s->nmi, NULL,
+ offset - GICD_INMIR);
+return true;
 case GICD_IROUTER ... GICD_IROUTER + 0x1fdf:
 {
 uint64_t r;
@@ -754,6 +782,12 @@ static bool gicd_writel(GICv3State *s, hwaddr offset,
 case GICD_SPENDSGIR ... GICD_SPENDSGIR + 0xf:
 /* RAZ/WI since affinity routing is always enabled */
 return true;
+case GICD_INMIR ... GICD_INMIR + 0x7f:
+if (s->nmi_support) {
+gicd_write_bitmap_reg(s, attrs, s->nmi, NULL,
+  offset - GICD_INMIR, value);
+}
+return true;
 case GICD_IROUTER ... GICD_IROUTER + 0x1fdf:
 {
 uint64_t r;
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
index 21697ecf39..8d793243f4 100644
--- a/hw/intc/gicv3_internal.h
+++ b/hw/intc/gicv3_internal.h
@@ -52,6 +52,8 @@
 #define GICD_SGIR0x0F00
 #define GICD_CPENDSGIR   0x0F10
 #define GICD_SPENDSGIR   0x0F20
+#define GICD_INMIR   0x0F80
+#define GICD_INMIRnE 0x3B00
 #define GICD_IROUTER 0x6000
 #define GICD_IDREGS  0xFFD0
 
-- 
2.34.1
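
For reference, GICD_INMIR uses the usual GICv3 one-bit-per-interrupt bitmap
layout; a small sketch (illustrative helpers, assuming the 0x0f80 base
added above) of how an INTID maps onto a register and bit:

    #include <stdint.h>

    #define GICD_INMIR 0x0F80   /* base offset, as added in gicv3_internal.h */

    /* Byte offset of the GICD_INMIR<n> register covering a given INTID */
    static uint32_t gicd_inmir_offset(int intid)
    {
        return GICD_INMIR + (intid / 32) * 4;
    }

    /* Bit position of that INTID within the register */
    static int gicd_inmir_bit(int intid)
    {
        return intid % 32;
    }

    /*
     * SGIs and PPIs (INTID < 32) are RAZ/WI here; the redistributor's
     * GICR_INMIR0 covers them instead.
     */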




[PATCH v11 03/23] target/arm: Add support for FEAT_NMI, Non-maskable Interrupt

2024-03-30 Thread Jinjie Ruan via
Add support for FEAT_NMI. NMI (FEAT_NMI) is a mandatory feature in
ARMv8.8-A and ARMv9.3-A.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v3:
- Add Reviewed-by.
- Adjust to before the MSR patches.
---
 target/arm/internals.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index dd3da211a3..516e0584bf 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1229,6 +1229,9 @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
 if (isar_feature_aa64_mte(id)) {
 valid |= PSTATE_TCO;
 }
+if (isar_feature_aa64_nmi(id)) {
+valid |= PSTATE_ALLINT;
+}
 
 return valid;
 }
-- 
2.34.1




[PATCH v11 20/23] hw/intc/arm_gicv3: Report the NMI interrupt in gicv3_cpuif_update()

2024-03-30 Thread Jinjie Ruan via
In the CPU interface, if the IRQ has the non-maskable property, report an
NMI to the corresponding PE.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v10:
- superprio -> nmi.
- Update the commit message, superpriority -> non-maskable.
v6:
- Add Reviewed-by.
v4:
- Swap the ordering of the IFs.
v3:
- Remove handling nmi_is_irq flag.
---
 hw/intc/arm_gicv3_cpuif.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index a7bc44b30c..6a2491b2a6 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -1042,6 +1042,7 @@ void gicv3_cpuif_update(GICv3CPUState *cs)
 /* Tell the CPU about its highest priority pending interrupt */
 int irqlevel = 0;
 int fiqlevel = 0;
+int nmilevel = 0;
 ARMCPU *cpu = ARM_CPU(cs->cpu);
CPUARMState *env = &cpu->env;
 
@@ -1080,6 +1081,8 @@ void gicv3_cpuif_update(GICv3CPUState *cs)
 
 if (isfiq) {
 fiqlevel = 1;
+} else if (cs->hppi.nmi) {
+nmilevel = 1;
 } else {
 irqlevel = 1;
 }
@@ -1089,6 +1092,7 @@ void gicv3_cpuif_update(GICv3CPUState *cs)
 
 qemu_set_irq(cs->parent_fiq, fiqlevel);
 qemu_set_irq(cs->parent_irq, irqlevel);
+qemu_set_irq(cs->parent_nmi, nmilevel);
 }
 
 static uint64_t icc_pmr_read(CPUARMState *env, const ARMCPRegInfo *ri)
-- 
2.34.1




[PATCH v11 12/23] target/arm: Handle NMI in arm_cpu_do_interrupt_aarch64()

2024-03-30 Thread Jinjie Ruan via
According to Arm GIC section 4.6.3 "Interrupt superpriority", an interrupt
with superpriority is always an IRQ, never an FIQ, so the NMI exception trap
entry behaves like an IRQ. VINMI (vIRQ with superpriority) can be raised
from the GIC or come from the hcrx_el2 HCRX_VINMI bit, and VFNMI (vFIQ with
superpriority) comes from the hcrx_el2 HCRX_VFNMI bit.
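
For reference, the offsets used in the hunk below are the standard slots
within each 0x200-byte AArch64 vector table block (a summary sketch of the
architectural values):

    /* Offset within the selected vector table block (SP0/SPx, current/lower EL) */
    enum aarch64_vector_slot {
        VEC_SYNC   = 0x000,   /* synchronous exceptions            */
        VEC_IRQ    = 0x080,   /* IRQ / vIRQ -- and now NMI / VINMI */
        VEC_FIQ    = 0x100,   /* FIQ / vFIQ -- and now VFNMI       */
        VEC_SERROR = 0x180,   /* SError / vSError                  */
    };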

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v9:
- Update the commit message.
- Handle VINMI and VFNMI.
v7:
- Add Reviewed-by.
v6:
- Not combine VFNMI with CPU_INTERRUPT_VNMI.
v4:
- Also handle VNMI in arm_cpu_do_interrupt_aarch64().
v3:
- Remove the FIQ NMI handle.
---
 target/arm/helper.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 967e833ee8..eef37b801d 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -11650,10 +11650,13 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
 break;
 case EXCP_IRQ:
 case EXCP_VIRQ:
+case EXCP_NMI:
+case EXCP_VINMI:
 addr += 0x80;
 break;
 case EXCP_FIQ:
 case EXCP_VFIQ:
+case EXCP_VFNMI:
 addr += 0x100;
 break;
 case EXCP_VSERR:
-- 
2.34.1




[PATCH v11 01/23] target/arm: Handle HCR_EL2 accesses for bits introduced with FEAT_NMI

2024-03-30 Thread Jinjie Ruan via
FEAT_NMI defines another three new bits in HCRX_EL2: TALLINT, HCRX_VINMI and
HCRX_VFNMI. When the feature is enabled, allow these bits to be written in
HCRX_EL2.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v9:
- Declare cpu variable to reuse latter.
v4:
- Update the comment for FEAT_NMI in hcrx_write().
- Update the commit message, s/thress/three/g.
v3:
- Add Reviewed-by.
- Add HCRX_VINMI and HCRX_VFNMI support in HCRX_EL2.
- Upate the commit messsage.
---
 target/arm/cpu-features.h | 5 +
 target/arm/helper.c   | 9 -
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
index e5758d9fbc..b300d0446d 100644
--- a/target/arm/cpu-features.h
+++ b/target/arm/cpu-features.h
@@ -681,6 +681,11 @@ static inline bool isar_feature_aa64_sme(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SME) != 0;
 }
 
+static inline bool isar_feature_aa64_nmi(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, NMI) != 0;
+}
+
 static inline bool isar_feature_aa64_tgran4_lpa2(const ARMISARegisters *id)
 {
 return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 1;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 3f3a5b55d4..7d6c6e9878 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6183,13 +6183,20 @@ bool el_is_in_host(CPUARMState *env, int el)
 static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
uint64_t value)
 {
+ARMCPU *cpu = env_archcpu(env);
+
 uint64_t valid_mask = 0;
 
 /* FEAT_MOPS adds MSCEn and MCE2 */
-if (cpu_isar_feature(aa64_mops, env_archcpu(env))) {
+if (cpu_isar_feature(aa64_mops, cpu)) {
 valid_mask |= HCRX_MSCEN | HCRX_MCE2;
 }
 
+/* FEAT_NMI adds TALLINT, VINMI and VFNMI */
+if (cpu_isar_feature(aa64_nmi, cpu)) {
+valid_mask |= HCRX_TALLINT | HCRX_VINMI | HCRX_VFNMI;
+}
+
 /* Clear RES0 bits.  */
 env->cp15.hcrx_el2 = value & valid_mask;
 }
-- 
2.34.1




[PATCH v11 13/23] hw/intc: Enable FEAT_GICv3_NMI Feature

2024-03-30 Thread Jinjie Ruan via
Add properties to enable the FEAT_GICv3_NMI feature, and set up the
distributor and redistributor registers to indicate NMI support.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v10:
- Adjust to before add irq non-maskable property.
v4:
- Add Reviewed-by.
---
 hw/intc/arm_gicv3_common.c | 1 +
 hw/intc/arm_gicv3_dist.c   | 2 ++
 hw/intc/gicv3_internal.h   | 1 +
 include/hw/intc/arm_gicv3_common.h | 1 +
 4 files changed, 5 insertions(+)

diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
index c52f060026..2d2cea6858 100644
--- a/hw/intc/arm_gicv3_common.c
+++ b/hw/intc/arm_gicv3_common.c
@@ -569,6 +569,7 @@ static Property arm_gicv3_common_properties[] = {
 DEFINE_PROP_UINT32("num-irq", GICv3State, num_irq, 32),
 DEFINE_PROP_UINT32("revision", GICv3State, revision, 3),
 DEFINE_PROP_BOOL("has-lpi", GICv3State, lpi_enable, 0),
+DEFINE_PROP_BOOL("has-nmi", GICv3State, nmi_support, 0),
 DEFINE_PROP_BOOL("has-security-extensions", GICv3State, security_extn, 0),
 /*
  * Compatibility property: force 8 bits of physical priority, even
diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c
index 35e850685c..22ddc0d666 100644
--- a/hw/intc/arm_gicv3_dist.c
+++ b/hw/intc/arm_gicv3_dist.c
@@ -389,6 +389,7 @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
  *  by GICD_TYPER.IDbits)
  * MBIS == 0 (message-based SPIs not supported)
  * SecurityExtn == 1 if security extns supported
+ * NMI = 1 if Non-maskable interrupt property is supported
  * CPUNumber == 0 since for us ARE is always 1
  * ITLinesNumber == (((max SPI IntID + 1) / 32) - 1)
  */
@@ -402,6 +403,7 @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
 bool dvis = s->revision >= 4;
 
 *data = (1 << 25) | (1 << 24) | (dvis << 18) | (sec_extn << 10) |
+(s->nmi_support << GICD_TYPER_NMI_SHIFT) |
 (s->lpi_enable << GICD_TYPER_LPIS_SHIFT) |
 (0xf << 19) | itlinesnumber;
 return true;
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
index 29d5cdc1b6..8f4ebed2f4 100644
--- a/hw/intc/gicv3_internal.h
+++ b/hw/intc/gicv3_internal.h
@@ -68,6 +68,7 @@
 #define GICD_CTLR_E1NWF (1U << 7)
 #define GICD_CTLR_RWP   (1U << 31)
 
+#define GICD_TYPER_NMI_SHIFT   9
 #define GICD_TYPER_LPIS_SHIFT  17
 
 /* 16 bits EventId */
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
index 7324c7d983..4358c5319c 100644
--- a/include/hw/intc/arm_gicv3_common.h
+++ b/include/hw/intc/arm_gicv3_common.h
@@ -249,6 +249,7 @@ struct GICv3State {
 uint32_t num_irq;
 uint32_t revision;
 bool lpi_enable;
+bool nmi_support;
 bool security_extn;
 bool force_8bit_prio;
 bool irq_reset_nonsecure;
-- 
2.34.1




[PATCH v11 08/23] target/arm: Handle IS/FS in ISR_EL1 for NMI, VINMI and VFNMI

2024-03-30 Thread Jinjie Ruan via
Add the IS and FS bits in ISR_EL1 and handle the read. With
CPU_INTERRUPT_NMI or CPU_INTERRUPT_VINMI, both CPSR_I and ISR_IS must be
set. With CPU_INTERRUPT_VFNMI, both CPSR_F and ISR_FS must be set.
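
For reference, the ISR_EL1 bit positions involved look like this (a sketch;
the I/F positions are why the existing CPSR_I and CPSR_F masks can be
reused, and IS/FS are the new FEAT_NMI bits defined in the hunk below):

    enum {
        ISR_EL1_F  = 1u << 6,    /* FIQ pending (same position as CPSR_F) */
        ISR_EL1_I  = 1u << 7,    /* IRQ pending (same position as CPSR_I) */
        ISR_EL1_A  = 1u << 8,    /* SError pending */
        ISR_EL1_FS = 1u << 9,    /* FEAT_NMI: FIQ with superpriority pending */
        ISR_EL1_IS = 1u << 10,   /* FEAT_NMI: IRQ with superpriority pending */
    };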

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v9:
- CPU_INTERRUPT_VNMI -> CPU_INTERRUPT_VINMI.
- Handle CPSR_F and ISR_FS according to CPU_INTERRUPT_VFNMI instead of
  CPU_INTERRUPT_VFIQ and HCRX_EL2.VFNMI.
- Update the commit message.
v7:
- env->cp15.hcrx_el2 -> arm_hcrx_el2_eff().
- Add Reviewed-by.
v6:
- Verify that HCR_EL2.VF is set before checking VFNMI.
v4;
- Also handle VNMI.
v3:
- CPU_INTERRUPT_NMI do not set FIQ, so remove it.
- With CPU_INTERRUPT_NMI, both CPSR_I and ISR_IS must be set.
---
 target/arm/cpu.h|  2 ++
 target/arm/helper.c | 13 +
 2 files changed, 15 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 08a6bc50de..97997dbd08 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1398,6 +1398,8 @@ void pmu_init(ARMCPU *cpu);
 #define CPSR_N (1U << 31)
 #define CPSR_NZCV (CPSR_N | CPSR_Z | CPSR_C | CPSR_V)
 #define CPSR_AIF (CPSR_A | CPSR_I | CPSR_F)
+#define ISR_FS (1U << 9)
+#define ISR_IS (1U << 10)
 
 #define CPSR_IT (CPSR_IT_0_1 | CPSR_IT_2_7)
 #define CACHED_CPSR_BITS (CPSR_T | CPSR_AIF | CPSR_GE | CPSR_IT | CPSR_Q \
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 077c9a6923..b57114d35d 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -2021,16 +2021,29 @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
 if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
 ret |= CPSR_I;
 }
+if (cs->interrupt_request & CPU_INTERRUPT_VINMI) {
+ret |= ISR_IS;
+ret |= CPSR_I;
+}
 } else {
 if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
 ret |= CPSR_I;
 }
+
+if (cs->interrupt_request & CPU_INTERRUPT_NMI) {
+ret |= ISR_IS;
+ret |= CPSR_I;
+}
 }
 
 if (hcr_el2 & HCR_FMO) {
 if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
 ret |= CPSR_F;
 }
+if (cs->interrupt_request & CPU_INTERRUPT_VFNMI) {
+ret |= ISR_FS;
+ret |= CPSR_F;
+}
 } else {
 if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
 ret |= CPSR_F;
-- 
2.34.1




[PATCH v11 09/23] target/arm: Handle PSTATE.ALLINT on taking an exception

2024-03-30 Thread Jinjie Ruan via
Set or clear PSTATE.ALLINT on taking an exception to ELx according to the
SCTLR_ELx.SPINTMASK bit.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v9:
- Not check SCTLR_NMI in arm_cpu_do_interrupt_aarch64().
v3:
- Add Reviewed-by.
---
 target/arm/helper.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index b57114d35d..967e833ee8 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -11730,6 +11730,14 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
 }
 }
 
+if (cpu_isar_feature(aa64_nmi, cpu)) {
+if (!(env->cp15.sctlr_el[new_el] & SCTLR_SPINTMASK)) {
+new_mode |= PSTATE_ALLINT;
+} else {
+new_mode &= ~PSTATE_ALLINT;
+}
+}
+
 pstate_write(env, PSTATE_DAIF | new_mode);
 env->aarch64 = true;
 aarch64_restore_sp(env, new_el);
-- 
2.34.1




[PATCH v11 07/23] target/arm: Add support for NMI in arm_phys_excp_target_el()

2024-03-30 Thread Jinjie Ruan via
According to Arm GIC section 4.6.3 "Interrupt superpriority", an interrupt
with superpriority is always an IRQ, never an FIQ, so handle NMI the same
as IRQ in arm_phys_excp_target_el().

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v4:
- Add Reviewed-by.
v3:
- Remove nmi_is_irq flag in CPUARMState.
- Handle NMI same as IRQ in arm_phys_excp_target_el().
---
 target/arm/helper.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 1868235499..077c9a6923 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -10760,6 +10760,7 @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
 hcr_el2 = arm_hcr_el2_eff(env);
 switch (excp_idx) {
 case EXCP_IRQ:
+case EXCP_NMI:
 scr = ((env->cp15.scr_el3 & SCR_IRQ) == SCR_IRQ);
 hcr = hcr_el2 & HCR_IMO;
 break;
-- 
2.34.1




[PATCH v11 11/23] hw/intc/arm_gicv3: Add external IRQ lines for NMI

2024-03-30 Thread Jinjie Ruan via
Augment the GICv3's QOM device interface by adding a new set of sysbus
IRQ lines, to signal NMI to each CPU.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
Reviewed-by: Peter Maydell 
---
v11:
- Add new Reviewed-by.
v4:
- Add Reviewed-by.
v3:
- Add support for VNMI.
---
 hw/intc/arm_gicv3_common.c | 6 ++
 include/hw/intc/arm_gic_common.h   | 2 ++
 include/hw/intc/arm_gicv3_common.h | 2 ++
 3 files changed, 10 insertions(+)

diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
index cb55c72681..c52f060026 100644
--- a/hw/intc/arm_gicv3_common.c
+++ b/hw/intc/arm_gicv3_common.c
@@ -299,6 +299,12 @@ void gicv3_init_irqs_and_mmio(GICv3State *s, qemu_irq_handler handler,
 for (i = 0; i < s->num_cpu; i++) {
 sysbus_init_irq(sbd, >cpu[i].parent_vfiq);
 }
+for (i = 0; i < s->num_cpu; i++) {
+sysbus_init_irq(sbd, >cpu[i].parent_nmi);
+}
+for (i = 0; i < s->num_cpu; i++) {
+sysbus_init_irq(sbd, >cpu[i].parent_vnmi);
+}
 
 memory_region_init_io(>iomem_dist, OBJECT(s), ops, s,
   "gicv3_dist", 0x1);
diff --git a/include/hw/intc/arm_gic_common.h b/include/hw/intc/arm_gic_common.h
index 7080375008..97fea4102d 100644
--- a/include/hw/intc/arm_gic_common.h
+++ b/include/hw/intc/arm_gic_common.h
@@ -71,6 +71,8 @@ struct GICState {
 qemu_irq parent_fiq[GIC_NCPU];
 qemu_irq parent_virq[GIC_NCPU];
 qemu_irq parent_vfiq[GIC_NCPU];
+qemu_irq parent_nmi[GIC_NCPU];
+qemu_irq parent_vnmi[GIC_NCPU];
 qemu_irq maintenance_irq[GIC_NCPU];
 
 /* GICD_CTLR; for a GIC with the security extensions the NS banked version
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
index 4e2fb518e7..7324c7d983 100644
--- a/include/hw/intc/arm_gicv3_common.h
+++ b/include/hw/intc/arm_gicv3_common.h
@@ -155,6 +155,8 @@ struct GICv3CPUState {
 qemu_irq parent_fiq;
 qemu_irq parent_virq;
 qemu_irq parent_vfiq;
+qemu_irq parent_nmi;
+qemu_irq parent_vnmi;
 
 /* Redistributor */
 uint32_t level;  /* Current IRQ level */
-- 
2.34.1




[PATCH v11 18/23] hw/intc/arm_gicv3: Handle icv_nmiar1_read() for icc_nmiar1_read()

2024-03-30 Thread Jinjie Ruan via
Implement icv_nmiar1_read() for icc_nmiar1_read(), so add definition for
ICH_LR_EL2.NMI and ICH_AP1R_EL2.NMI bit.

If FEAT_GICv3_NMI is supported, ich_ap_write() should consider ICV_AP1R_EL1.NMI
bit. In icv_activate_irq() and icv_eoir_write(), the ICV_AP1R_EL1.NMI bit
should be set or clear according to the Non-maskable property. And the RPR
priority should also update the NMI bit according to the APR priority NMI bit.

By the way, add gicv3_icv_nmiar1_read trace event.

If the hpp irq is a NMI, the icv iar read should return 1022 and trap for
NMI again

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v11:
- Deal with NMI in the callers instead of ich_highest_active_virt_prio().
- Set either NMI or a group-priority bit, not both.
- Only set AP NMI bits in the 0 reg.
- Handle NMI in hppvi_index(), icv_hppi_can_preempt() and icv_eoir_write().
v10:
- Rename ICH_AP1R_EL2_NMI to ICV_AP1R_EL1_NMI.
- Add ICV_RPR_EL1_NMI definition.
- Set ICV_RPR_EL1.NMI according to the ICV_AP1R_EL1.NMI in
  ich_highest_active_virt_prio().
v9:
- Correct the INTID_NMI logic.
v8:
- Fix an unexpected interrupt bug when sending VNMI by running qemu VM.
v7:
- Add Reviewed-by.
v6:
- Implement icv_nmiar1_read().
---
 hw/intc/arm_gicv3_cpuif.c | 97 ++-
 hw/intc/gicv3_internal.h  |  4 ++
 hw/intc/trace-events  |  1 +
 3 files changed, 91 insertions(+), 11 deletions(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index f99f2570a6..a7bc44b30c 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -157,6 +157,10 @@ static int ich_highest_active_virt_prio(GICv3CPUState *cs)
 int i;
 int aprmax = ich_num_aprs(cs);
 
+if (cs->gic->nmi_support && cs->ich_apr[GICV3_G1NS][0] & ICV_AP1R_EL1_NMI) {
+return 0x80;
+}
+
 for (i = 0; i < aprmax; i++) {
 uint32_t apr = cs->ich_apr[GICV3_G0][i] |
 cs->ich_apr[GICV3_G1NS][i];
@@ -191,6 +195,7 @@ static int hppvi_index(GICv3CPUState *cs)
  * correct behaviour.
  */
 int prio = 0xff;
+bool nmi = false;
 
 if (!(cs->ich_vmcr_el2 & (ICH_VMCR_EL2_VENG0 | ICH_VMCR_EL2_VENG1))) {
 /* Both groups disabled, definitely nothing to do */
@@ -200,6 +205,11 @@ static int hppvi_index(GICv3CPUState *cs)
 for (i = 0; i < cs->num_list_regs; i++) {
 uint64_t lr = cs->ich_lr_el2[i];
 int thisprio;
+bool thisnmi = false;
+
+if (cs->gic->nmi_support) {
+thisnmi = lr & ICH_LR_EL2_NMI;
+}
 
 if (ich_lr_state(lr) != ICH_LR_EL2_STATE_PENDING) {
 /* Not Pending */
@@ -219,9 +229,13 @@ static int hppvi_index(GICv3CPUState *cs)
 
 thisprio = ich_lr_prio(lr);
 
-if (thisprio < prio) {
+if ((thisprio < prio) || ((thisprio == prio) && (thisnmi & (!nmi)))) {
 prio = thisprio;
 idx = i;
+
+if (cs->gic->nmi_support) {
+nmi = thisnmi;
+}
 }
 }
 
@@ -326,6 +340,12 @@ static bool icv_hppi_can_preempt(GICv3CPUState *cs, 
uint64_t lr)
 return true;
 }
 
+if ((prio & mask) == (rprio & mask) &&
+cs->gic->nmi_support && (lr & ICH_LR_EL2_NMI) &&
+(!(cs->ich_apr[GICV3_G1NS][0] & ICV_AP1R_EL1_NMI))) {
+return true;
+}
+
 return false;
 }
 
@@ -550,7 +570,11 @@ static void icv_ap_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 
 trace_gicv3_icv_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), 
value);
 
-cs->ich_apr[grp][regno] = value & 0xffffffffU;
+if (cs->gic->nmi_support) {
+cs->ich_apr[grp][regno] = value & (0xffffffffU | ICV_AP1R_EL1_NMI);
+} else {
+cs->ich_apr[grp][regno] = value & 0xffffffffU;
+}
 
 gicv3_cpuif_virt_irq_fiq_update(cs);
 return;
@@ -697,7 +721,12 @@ static void icv_ctlr_write(CPUARMState *env, const 
ARMCPRegInfo *ri,
 static uint64_t icv_rpr_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
 GICv3CPUState *cs = icc_cs_from_env(env);
-int prio = ich_highest_active_virt_prio(cs);
+uint64_t prio = ich_highest_active_virt_prio(cs);
+
+if (cs->gic->nmi_support &&
+cs->ich_apr[GICV3_G1NS][0] & ICV_AP1R_EL1_NMI) {
+prio |= ICV_RPR_EL1_NMI;
+}
 
 trace_gicv3_icv_rpr_read(gicv3_redist_affid(cs), prio);
 return prio;
@@ -736,13 +765,19 @@ static void icv_activate_irq(GICv3CPUState *cs, int idx, 
int grp)
  */
 uint32_t mask = icv_gprio_mask(cs, grp);
 int prio = ich_lr_prio(cs->ich_lr_el2[idx]) & mask;
+bool nmi = cs->ich_lr_el2[idx] & ICH_LR_EL2_NMI;
 int aprbit = prio >> (8 - cs->vprebits);
 int regno = aprbit / 32;
 int regbit = aprbit % 32;
 
 cs->ich_lr_el2[idx] &= ~ICH_LR_EL2_STATE_PENDING_BIT;
 cs->ich_lr_el2[idx] |= ICH_LR_EL2_STATE_ACTIVE_BIT;
-cs->ich_apr[grp][regno] |= (1 << regbit);
+
+if (cs->gic->nmi_support && nmi) {
+cs->ich_apr[grp][regno] |= ICV_AP1R_EL1_NMI;
+} else {
+
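
To make the flow described above concrete, here is a hedged sketch of the
guest-visible acknowledge sequence, using Linux-style sysreg helpers;
handle_nmi()/handle_irq() are placeholders and SYS_ICC_NMIAR1_EL1 is assumed
to be provided by the OS headers:

/*
 * Sketch only: 1022 is the special INTID indicating that the highest
 * priority pending interrupt is an NMI, which must then be acknowledged
 * through (ICC/ICV)_NMIAR1_EL1 rather than IAR1.
 */
static void gic_ack_and_handle(void)
{
    u64 intid = read_sysreg_s(SYS_ICC_IAR1_EL1); /* ICV_IAR1_EL1 when virtualised */

    if (intid == 1022) {
        intid = read_sysreg_s(SYS_ICC_NMIAR1_EL1); /* assumed definition */
        handle_nmi(intid);                         /* placeholder */
    } else {
        handle_irq(intid);                         /* placeholder */
    }
}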

[PATCH v11 14/23] hw/intc/arm_gicv3: Add irq non-maskable property

2024-03-30 Thread Jinjie Ruan via
An SPI, PPI or SGI interrupt can have the non-maskable property, so maintain
that property in PendingIrq and in the GICR/GICD state. Since this adds new
device state, it also needs to be migrated, so save the NMI information in
vmstate_gicv3_cpu and vmstate_gicv3.

Signed-off-by: Jinjie Ruan 
Acked-by: Richard Henderson 
---
v11:
- Put vmstate_gicv3_cpu_nmi and vmstate_gicv3_gicd_nmi into existing list.
- Remove the excess != 0.
v10:
- superprio -> nmi, gicr_isuperprio -> gicr_inmir0.
- Save NMI state in vmstate_gicv3_cpu and vmstate_gicv3.
- Update the commit message.
v3:
- Place this ahead of implement GICR_INMIR.
- Add Acked-by.
---
 hw/intc/arm_gicv3_common.c | 38 ++
 include/hw/intc/arm_gicv3_common.h |  4 
 2 files changed, 42 insertions(+)

diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
index 2d2cea6858..189258e1ca 100644
--- a/hw/intc/arm_gicv3_common.c
+++ b/hw/intc/arm_gicv3_common.c
@@ -164,6 +164,24 @@ const VMStateDescription vmstate_gicv3_gicv4 = {
 }
 };
 
+static bool nmi_needed(void *opaque)
+{
+GICv3CPUState *cs = opaque;
+
+return cs->gic->nmi_support;
+}
+
+static const VMStateDescription vmstate_gicv3_cpu_nmi = {
+.name = "arm_gicv3_cpu/nmi",
+.version_id = 1,
+.minimum_version_id = 1,
+.needed = nmi_needed,
+.fields = (const VMStateField[]) {
+VMSTATE_UINT32(gicr_inmir0, GICv3CPUState),
+VMSTATE_END_OF_LIST()
+}
+};
+
 static const VMStateDescription vmstate_gicv3_cpu = {
 .name = "arm_gicv3_cpu",
 .version_id = 1,
@@ -196,6 +214,7 @@ static const VMStateDescription vmstate_gicv3_cpu = {
 &vmstate_gicv3_cpu_virt,
 &vmstate_gicv3_cpu_sre_el1,
 &vmstate_gicv3_gicv4,
+&vmstate_gicv3_cpu_nmi,
 NULL
 }
 };
@@ -238,6 +257,24 @@ const VMStateDescription 
vmstate_gicv3_gicd_no_migration_shift_bug = {
 }
 };
 
+static bool needed_nmi(void *opaque)
+{
+GICv3State *cs = opaque;
+
+return cs->nmi_support;
+}
+
+const VMStateDescription vmstate_gicv3_gicd_nmi = {
+.name = "arm_gicv3/gicd_nmi",
+.version_id = 1,
+.minimum_version_id = 1,
+.needed = needed_nmi,
+.fields = (const VMStateField[]) {
+VMSTATE_UINT32_ARRAY(nmi, GICv3State, GICV3_BMP_SIZE),
+VMSTATE_END_OF_LIST()
+}
+};
+
 static const VMStateDescription vmstate_gicv3 = {
 .name = "arm_gicv3",
 .version_id = 1,
@@ -266,6 +303,7 @@ static const VMStateDescription vmstate_gicv3 = {
 },
 .subsections = (const VMStateDescription * const []) {
 &vmstate_gicv3_gicd_no_migration_shift_bug,
+&vmstate_gicv3_gicd_nmi,
 NULL
 }
 };
diff --git a/include/hw/intc/arm_gicv3_common.h 
b/include/hw/intc/arm_gicv3_common.h
index 4358c5319c..88533749eb 100644
--- a/include/hw/intc/arm_gicv3_common.h
+++ b/include/hw/intc/arm_gicv3_common.h
@@ -146,6 +146,7 @@ typedef struct {
 int irq;
 uint8_t prio;
 int grp;
+bool nmi;
 } PendingIrq;
 
 struct GICv3CPUState {
@@ -172,6 +173,7 @@ struct GICv3CPUState {
 uint32_t gicr_ienabler0;
 uint32_t gicr_ipendr0;
 uint32_t gicr_iactiver0;
+uint32_t gicr_inmir0;
 uint32_t edge_trigger; /* ICFGR0 and ICFGR1 even bits */
 uint32_t gicr_igrpmodr0;
 uint32_t gicr_nsacr;
@@ -275,6 +277,7 @@ struct GICv3State {
 GIC_DECLARE_BITMAP(active);   /* GICD_ISACTIVER */
 GIC_DECLARE_BITMAP(level);/* Current level */
 GIC_DECLARE_BITMAP(edge_trigger); /* GICD_ICFGR even bits */
+GIC_DECLARE_BITMAP(nmi);  /* GICD_INMIR */
 uint8_t gicd_ipriority[GICV3_MAXIRQ];
 uint64_t gicd_irouter[GICV3_MAXIRQ];
 /* Cached information: pointer to the cpu i/f for the CPUs specified
@@ -314,6 +317,7 @@ GICV3_BITMAP_ACCESSORS(pending)
 GICV3_BITMAP_ACCESSORS(active)
 GICV3_BITMAP_ACCESSORS(level)
 GICV3_BITMAP_ACCESSORS(edge_trigger)
+GICV3_BITMAP_ACCESSORS(nmi)
 
 #define TYPE_ARM_GICV3_COMMON "arm-gicv3-common"
 typedef struct ARMGICv3CommonClass ARMGICv3CommonClass;
-- 
2.34.1
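
As a hedged illustration of how this new state is consumed elsewhere in the
series, the accessor generated by GICV3_BITMAP_ACCESSORS(nmi) lets the
distributor code record whether the best pending interrupt is non-maskable;
the exact call site below is an assumption, not code from this patch:

/* Sketch only: latch the NMI property into the cached PendingIrq. */
static void record_hppi_nmi(GICv3CPUState *cs, int irq)
{
    if (irq < GICV3_MAXIRQ) {
        cs->hppi.nmi = gicv3_gicd_nmi_test(cs->gic, irq);
    }
}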




[PATCH v11 15/23] hw/intc/arm_gicv3_redist: Implement GICR_INMIR0

2024-03-30 Thread Jinjie Ruan via
Add the GICR_INMIR0 register and support accesses to it.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
Reviewed-by: Peter Maydell 
---
v11:
- Add new Reviewed-by.
v10:
- gicr_isuperprio -> gicr_inmir0.
v6:
- Add Reviewed-by.
v4:
- Make the GICR_INMIR0 implementation more clearer.
---
 hw/intc/arm_gicv3_redist.c | 19 +++
 hw/intc/gicv3_internal.h   |  1 +
 2 files changed, 20 insertions(+)

diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c
index 8153525849..ed1f9d1e44 100644
--- a/hw/intc/arm_gicv3_redist.c
+++ b/hw/intc/arm_gicv3_redist.c
@@ -35,6 +35,15 @@ static int gicr_ns_access(GICv3CPUState *cs, int irq)
 return extract32(cs->gicr_nsacr, irq * 2, 2);
 }
 
+static void gicr_write_bitmap_reg(GICv3CPUState *cs, MemTxAttrs attrs,
+  uint32_t *reg, uint32_t val)
+{
+/* Helper routine to implement writing to a "set" register */
+val &= mask_group(cs, attrs);
+*reg = val;
+gicv3_redist_update(cs);
+}
+
 static void gicr_write_set_bitmap_reg(GICv3CPUState *cs, MemTxAttrs attrs,
   uint32_t *reg, uint32_t val)
 {
@@ -406,6 +415,10 @@ static MemTxResult gicr_readl(GICv3CPUState *cs, hwaddr 
offset,
 *data = value;
 return MEMTX_OK;
 }
+case GICR_INMIR0:
+*data = cs->gic->nmi_support ?
+gicr_read_bitmap_reg(cs, attrs, cs->gicr_inmir0) : 0;
+return MEMTX_OK;
 case GICR_ICFGR0:
 case GICR_ICFGR1:
 {
@@ -555,6 +568,12 @@ static MemTxResult gicr_writel(GICv3CPUState *cs, hwaddr 
offset,
 gicv3_redist_update(cs);
 return MEMTX_OK;
 }
+case GICR_INMIR0:
+if (cs->gic->nmi_support) {
+gicr_write_bitmap_reg(cs, attrs, &cs->gicr_inmir0, value);
+}
+return MEMTX_OK;
+
 case GICR_ICFGR0:
 /* Register is all RAZ/WI or RAO/WI bits */
 return MEMTX_OK;
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
index 8f4ebed2f4..21697ecf39 100644
--- a/hw/intc/gicv3_internal.h
+++ b/hw/intc/gicv3_internal.h
@@ -110,6 +110,7 @@
 #define GICR_ICFGR1   (GICR_SGI_OFFSET + 0x0C04)
 #define GICR_IGRPMODR0(GICR_SGI_OFFSET + 0x0D00)
 #define GICR_NSACR(GICR_SGI_OFFSET + 0x0E00)
+#define GICR_INMIR0   (GICR_SGI_OFFSET + 0x0F80)
 
 /* VLPI redistributor registers, offsets from VLPI_base */
 #define GICR_VPROPBASER   (GICR_VLPI_OFFSET + 0x70)
-- 
2.34.1
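
For illustration, a hedged guest-side sketch of programming the new register;
the redistributor SGI_base address is board specific and the helper name is
made up, but the 0x0F80 offset matches the definition added above:

#include <stdint.h>

#define GICR_INMIR0_OFFSET 0x0F80

/* Sketch only: give SGI/PPI "intid" (0..31) the non-maskable property,
 * assuming the GIC implements FEAT_GICv3_NMI. */
static inline void gicr_mark_intid_nmi(volatile uint8_t *rd_sgi_base, int intid)
{
    volatile uint32_t *inmir0 =
        (volatile uint32_t *)(rd_sgi_base + GICR_INMIR0_OFFSET);

    *inmir0 |= 1u << intid;
}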




[PATCH v11 04/23] target/arm: Implement ALLINT MSR (immediate)

2024-03-30 Thread Jinjie Ruan via
Add ALLINT MSR (immediate) to decodetree; its CRm field is 0b000x. The EL0
check is always necessary for ALLINT, while the EL1 trap check is only
necessary when imm == 1, so implement the update inline for EL2/3, and for
EL1 with imm == 0. Avoid an unconditional write to pc by using
raise_exception_ra to unwind.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v10:
- Correct the exception_target_el(env) to 2, since it is a hypervisor trap
  from EL1 to EL2.
v7:
- Add Reviewed-by.
v6:
- Fix DISAS_TOO_MANY to DISAS_UPDATE_EXIT and add the comment.
v5:
- Drop the & 1 in trans_MSR_i_ALLINT().
- Simplify and merge msr_i_allint() and allint_check().
- Rename msr_i_allint() to msr_set_allint_el1().
v4:
- Fix the ALLINT MSR (immediate) decodetree implementation.
- Remove arm_is_el2_enabled() check in allint_check().
- Update env->allint to env->pstate.
- Only call allint_check() when imm == 1.
- Simplify the allint_check() to not pass "op" and extract.
- Implement it inline for EL2/3, or EL1 with imm==0.
- Pass (a->imm & 1) * PSTATE_ALLINT (i64) to simplfy the ALLINT set/clear.
v3:
- Remove EL0 check in allint_check().
- Add TALLINT check for EL1 in allint_check().
- Remove unnecessarily arm_rebuild_hflags() in msr_i_allint helper.
---
 target/arm/tcg/a64.decode  |  1 +
 target/arm/tcg/helper-a64.c| 12 
 target/arm/tcg/helper-a64.h|  1 +
 target/arm/tcg/translate-a64.c | 19 +++
 4 files changed, 33 insertions(+)

diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 8a20dce3c8..0e7656fd15 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -207,6 +207,7 @@ MSR_i_DIT       1101 0101 0000 0 011 0100 .... 010 11111 @msr_i
 MSR_i_TCO       1101 0101 0000 0 011 0100 .... 100 11111 @msr_i
 MSR_i_DAIFSET   1101 0101 0000 0 011 0100 .... 110 11111 @msr_i
 MSR_i_DAIFCLEAR 1101 0101 0000 0 011 0100 .... 111 11111 @msr_i
+MSR_i_ALLINT    1101 0101 0000 0 001 0100 000 imm:1 000 11111
 MSR_i_SVCR      1101 0101 0000 0 011 0100 0 mask:2 imm:1 011 11111
 
 # MRS, MSR (register), SYS, SYSL. These are all essentially the
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index ebaa7f00df..673e949422 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -66,6 +66,18 @@ void HELPER(msr_i_spsel)(CPUARMState *env, uint32_t imm)
 update_spsel(env, imm);
 }
 
+void HELPER(msr_set_allint_el1)(CPUARMState *env)
+{
+/* ALLINT update to PSTATE. */
+if (arm_hcrx_el2_eff(env) & HCRX_TALLINT) {
+raise_exception_ra(env, EXCP_UDEF,
+   syn_aa64_sysregtrap(0, 1, 0, 4, 1, 0x1f, 0), 2,
+   GETPC());
+}
+
+env->pstate |= PSTATE_ALLINT;
+}
+
 static void daif_check(CPUARMState *env, uint32_t op,
uint32_t imm, uintptr_t ra)
 {
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index 575a5dab7d..0518165399 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -22,6 +22,7 @@ DEF_HELPER_FLAGS_1(rbit64, TCG_CALL_NO_RWG_SE, i64, i64)
 DEF_HELPER_2(msr_i_spsel, void, env, i32)
 DEF_HELPER_2(msr_i_daifset, void, env, i32)
 DEF_HELPER_2(msr_i_daifclear, void, env, i32)
+DEF_HELPER_1(msr_set_allint_el1, void, env)
 DEF_HELPER_3(vfp_cmph_a64, i64, f16, f16, ptr)
 DEF_HELPER_3(vfp_cmpeh_a64, i64, f16, f16, ptr)
 DEF_HELPER_3(vfp_cmps_a64, i64, f32, f32, ptr)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 340265beb0..21758b290d 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -2036,6 +2036,25 @@ static bool trans_MSR_i_DAIFCLEAR(DisasContext *s, arg_i 
*a)
 return true;
 }
 
+static bool trans_MSR_i_ALLINT(DisasContext *s, arg_i *a)
+{
+if (!dc_isar_feature(aa64_nmi, s) || s->current_el == 0) {
+return false;
+}
+
+if (a->imm == 0) {
+clear_pstate_bits(PSTATE_ALLINT);
+} else if (s->current_el > 1) {
+set_pstate_bits(PSTATE_ALLINT);
+} else {
+gen_helper_msr_set_allint_el1(tcg_env);
+}
+
+/* Exit the cpu loop to re-evaluate pending IRQs. */
+s->base.is_jmp = DISAS_UPDATE_EXIT;
+return true;
+}
+
 static bool trans_MSR_i_SVCR(DisasContext *s, arg_MSR_i_SVCR *a)
 {
 if (!dc_isar_feature(aa64_sme, s) || a->mask == 0) {
-- 
2.34.1
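
For reference, a hedged sketch of how a guest might use the new instruction;
this assumes a toolchain that understands the FEAT_NMI "ALLINT" PSTATE field
(older assemblers would need the raw encoding via .inst), and it is not code
from this series:

/* Sketch only: MSR ALLINT is UNDEFINED at EL0, and writes with imm == 1
 * can be trapped to EL2 by HCRX_EL2.TALLINT, as the helper above shows. */
static inline void allint_set(void)
{
    __asm__ volatile("msr ALLINT, #1" ::: "memory"); /* mask interrupts at this EL */
}

static inline void allint_clear(void)
{
    __asm__ volatile("msr ALLINT, #0" ::: "memory");
}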




[PATCH v11 21/23] hw/intc/arm_gicv3: Report the VINMI interrupt

2024-03-30 Thread Jinjie Ruan via
In the vCPU interface, if the vIRQ has the non-maskable property, report
a vINMI to the corresponding vPE.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v10:
- Update the commit message, superpriority -> non-maskable.
v9:
- Update the commit subject and message, vNMI -> vINMI.
v6:
- Add Reviewed-by.
---
 hw/intc/arm_gicv3_cpuif.c | 14 --
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index 6a2491b2a6..fb811b856d 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -485,6 +485,7 @@ void gicv3_cpuif_virt_irq_fiq_update(GICv3CPUState *cs)
 int idx;
 int irqlevel = 0;
 int fiqlevel = 0;
+int nmilevel = 0;
 
 idx = hppvi_index(cs);
 trace_gicv3_cpuif_virt_update(gicv3_redist_affid(cs), idx,
@@ -502,9 +503,17 @@ void gicv3_cpuif_virt_irq_fiq_update(GICv3CPUState *cs)
 uint64_t lr = cs->ich_lr_el2[idx];
 
 if (icv_hppi_can_preempt(cs, lr)) {
-/* Virtual interrupts are simple: G0 are always FIQ, and G1 IRQ */
+/*
+ * Virtual interrupts are simple: G0 are always FIQ, and G1 are
+ * IRQ or NMI which depends on the ICH_LR_EL2.NMI to have
+ * non-maskable property.
+ */
 if (lr & ICH_LR_EL2_GROUP) {
-irqlevel = 1;
+if (cs->gic->nmi_support && (lr & ICH_LR_EL2_NMI)) {
+nmilevel = 1;
+} else {
+irqlevel = 1;
+}
 } else {
 fiqlevel = 1;
 }
@@ -514,6 +523,7 @@ void gicv3_cpuif_virt_irq_fiq_update(GICv3CPUState *cs)
 trace_gicv3_cpuif_virt_set_irqs(gicv3_redist_affid(cs), fiqlevel, 
irqlevel);
 qemu_set_irq(cs->parent_vfiq, fiqlevel);
 qemu_set_irq(cs->parent_virq, irqlevel);
+qemu_set_irq(cs->parent_vnmi, nmilevel);
 }
 
 static void gicv3_cpuif_virt_update(GICv3CPUState *cs)
-- 
2.34.1
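
To illustrate the path above, a hedged sketch of how a hypervisor model might
queue a virtual NMI through a list register; the field usage follows the
macros in gicv3_internal.h, but this helper is an assumption, not part of the
patch:

/* Sketch only: with ICH_LR_EL2_NMI set on a group 1 pending LR, the update
 * function above raises the VNMI line instead of VIRQ. */
static void inject_vnmi(GICv3CPUState *cs, int lr_idx, uint32_t vintid)
{
    uint64_t lr = 0;

    lr |= ICH_LR_EL2_STATE_PENDING_BIT;   /* pending */
    lr |= ICH_LR_EL2_GROUP;               /* group 1: IRQ/NMI rather than FIQ */
    lr |= ICH_LR_EL2_NMI;                 /* non-maskable property */
    lr |= vintid;                         /* vINTID lives in bits [31:0] */

    cs->ich_lr_el2[lr_idx] = lr;
    gicv3_cpuif_virt_irq_fiq_update(cs);
}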




[PATCH v11 02/23] target/arm: Add PSTATE.ALLINT

2024-03-30 Thread Jinjie Ruan via
When PSTATE.ALLINT is set, an IRQ or FIQ interrupt that is targeted to
ELx is masked, whether or not it has superpriority.

As Richard suggested, place the ALLINT bit of PSTATE in env->pstate.

With the change to pstate_read/write, exception entry
and return are automatically handled.

Signed-off-by: Jinjie Ruan 
Reviewed-by: Richard Henderson 
---
v5:
- Remove the ALLINT comment, as it is covered by "all other bits".
- Add Reviewed-by.
v4:
- Keep PSTATE.ALLINT in env->pstate but not env->allint.
- Update the commit message.
v3:
- Remove ALLINT dump in aarch64_cpu_dump_state().
- Update the commit message.
---
 target/arm/cpu.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index bc0c84873f..de740d223f 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1430,6 +1430,7 @@ void pmu_init(ARMCPU *cpu);
 #define PSTATE_D (1U << 9)
 #define PSTATE_BTYPE (3U << 10)
 #define PSTATE_SSBS (1U << 12)
+#define PSTATE_ALLINT (1U << 13)
 #define PSTATE_IL (1U << 20)
 #define PSTATE_SS (1U << 21)
 #define PSTATE_PAN (1U << 22)
-- 
2.34.1
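
The masking rule can be summarised with a hedged sketch; the SCTLR_SPINTMASK
name and the helper itself are assumptions (the series handles this logic in
the exception-masking code, not in a helper like this), and the check that
SCTLR_ELx.NMI is enabled is omitted for brevity:

/* Sketch only: an interrupt with superpriority targeted at the current EL
 * is masked by PSTATE.ALLINT, or by PSTATE.SP when SCTLR_ELx.SPINTMASK is
 * set; ordinary IRQ/FIQ remain additionally masked by PSTATE.I/F. */
static bool superpriority_intr_masked(CPUARMState *env, uint64_t sctlr)
{
    return (env->pstate & PSTATE_ALLINT) ||
           ((sctlr & SCTLR_SPINTMASK) && (env->pstate & PSTATE_SP));
}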




Re: Backdoor in xz, should we switch compression format for tarballs?

2024-03-30 Thread Stefan Hajnoczi
On Fri, 29 Mar 2024 at 14:00, Paolo Bonzini  wrote:
>
> For more info, see 
> https://lwn.net/ml/oss-security/20240329155126.kjjfduxw2yrlx...@awork3.anarazel.de/
>  but, essentially, xz was backdoored and it seems like upstream was directly 
> responsible for this.
>
> Based on this, should we switch our distribution from bz2+xz to bz2+zstd or 
> bz2+lzip?

I think it's reasonable to drop xz as a precaution due to the
long-term control the attacker may have had over the code base. I
haven't researched the alternatives though.

I CCed Michael Tokarev because he looked at compression formats for
distributing QEMU recently and may have thoughts on which alternative
is suitable.

For the record, I confirmed that the following QEMU servers do not
have xz-utils 5.6.0 or 5.6.1 packages installed:
- shell1.qemu.org
- node1.qemu.org
- ci1 at OSUOSL
- qemu2.osuosl.org

Stefan