Re: How to use designware-root-port and designware-root-host devices ?
On 20/06/2024 10.28, Arthur Tumanyan wrote:
> Hi all, My question may sound stupid, however...

Hi Arthur,

no worries, the question of how to use which device in QEMU can be quite tricky ;-)

> Currently I'm trying to make the designware-root-{port,host} devices
> available in Linux when I run it in QEMU. I try the following way to run:
>
>   qemu-system-arm -M virt -m 2G \
>     -kernel images/Image \
>     -append "rootwait root=/dev/vda ro" \
>     -drive file=images/rootfs.ext2,format=raw,id=hd0 \
>     -device designware-root-port,id=rp0,chassis=1,slot=0,bus=pcie.0,addr=1 \
>     -device e1000,netdev=net0,mac=52:54:00:12:34:56,bus=rp0,addr=0 \
>     -netdev user,id=net0
>
> but it seems the designware device is not enabled by default:
>
>   qemu-system-arm: -device designware-root-port,id=rp0,chassis=1,slot=0,bus=pcie.0,addr=1: 'designware-root-port' is not a valid device model name

Are you sure about the names?

  $ grep -r 'designware' *
  ...
  include/hw/pci-host/designware.h:#define TYPE_DESIGNWARE_PCIE_HOST "designware-pcie-host"
  include/hw/pci-host/designware.h:#define TYPE_DESIGNWARE_PCIE_ROOT "designware-pcie-root"

> When I enable it from Kconfig/meson.build it says the device is already
> registered and exits with abort(). On the other hand, the device is
> declared as non-pluggable: dc->user_creatable = false;

Well, that means that you cannot use those with "-device". They can only be instantiated via the code that creates the machine.

> Can you please help me to use the designware-root-host/port devices?

It seems like the i.MX7 SABRE machine is using this device, so instead of "-M virt", you could give "-M mcimx7d-sabre" (and a kernel that supports this machine) a try instead.

 HTH,
  Thomas
Re: [PATCH v2 09/14] include/hw: temporarily disable deletion of versioned machine types
On 20/06/2024 18.57, Daniel P. Berrangé wrote:
> The new deprecation and deletion policy for versioned machine types is
> being introduced in QEMU 9.1.0.
>
> Under the new policy a number of old machine types (any prior to 2.12)
> would be liable for immediate deletion, which would be a violation of
> our historical deprecation and removal policy.
>
> Thus automatic deletions (by skipping QOM registration) are temporarily
> gated on existence of the env variable "QEMU_DELETE_MACHINES" / QEMU
> version number >= 10.1.0. This allows opt-in testing of the automatic
> deletion logic, while activating it fully in QEMU >= 10.1.0.
>
> This whole commit should be reverted in the 10.1.0 dev cycle or shortly
> thereafter.
>
> Signed-off-by: Daniel P. Berrangé
> ---
>  include/hw/boards.h | 19 ++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/include/hw/boards.h b/include/hw/boards.h
> index 187ed76646..ef6f18f2c1 100644
> --- a/include/hw/boards.h
> +++ b/include/hw/boards.h
> @@ -686,11 +686,28 @@ struct MachineState {
>   * suitable period of time has passed, it will cause
>   * execution of the method to return, avoiding registration
>   * of the machine
> + *
> + * The new deprecation and deletion policy for versioned
> + * machine types was introduced in QEMU 9.1.0.
> + *
> + * Under the new policy a number of old machine types (any
> + * prior to 2.12) would be liable for immediate deletion
> + * which would be a violation of our historical deprecation
> + * and removal policy
> + *
> + * Thus deletions are temporarily gated on existence of
> + * the env variable "QEMU_DELETE_MACHINES" / QEMU version
> + * number >= 10.1.0. This gate can be deleted in the 10.1.0
> + * dev cycle
>   */
>  #define MACHINE_VER_DELETION(...) \
>      do { \
>          if (MACHINE_VER_SHOULD_DELETE(__VA_ARGS__)) { \
> -            return; \
> +            if (getenv("QEMU_DELETE_MACHINES") || \
> +                QEMU_VERSION_MAJOR > 10 || (QEMU_VERSION_MAJOR == 10 && \
> +                                            QEMU_VERSION_MINOR >= 1)) { \
> +                return; \
> +            } \
>          } \
>      } while (0)

Reviewed-by: Thomas Huth
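The gating condition inside MACHINE_VER_DELETION can be sketched as a plain C predicate. This is a simplified model for illustration only, not the actual QEMU macro: the version numbers are passed as parameters here instead of coming from the QEMU_VERSION_MAJOR/MINOR defines.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Model of the gate in MACHINE_VER_DELETION: deletion of a versioned
 * machine type only takes effect if the opt-in env variable is set,
 * or if the QEMU version is 10.1.0 or newer. */
static bool deletion_active(int major, int minor)
{
    if (getenv("QEMU_DELETE_MACHINES")) {
        return true;
    }
    return major > 10 || (major == 10 && minor >= 1);
}
```

With the env variable unset, deletion_active(9, 1) stays false while deletion_active(10, 1) becomes true, which is exactly the opt-in/opt-out split the commit message describes.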
Re: [PATCH v2 09/12] plugins: add migration blocker
On 20/06/2024 17.22, Alex Bennée wrote:
> If the plugin is controlling time there is some state that might be
> missing from the plugin tracking it. Migration is unlikely to work in
> this case so let's put a migration blocker in to let the user know if
> they try.
>
> Signed-off-by: Alex Bennée
> Suggested-by: "Dr. David Alan Gilbert"
> ---
>  plugins/api.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/plugins/api.c b/plugins/api.c
> index 4431a0ea7e..c4239153af 100644
> --- a/plugins/api.c
> +++ b/plugins/api.c
> @@ -47,6 +47,8 @@
>  #include "disas/disas.h"
>  #include "plugin.h"
>  #ifndef CONFIG_USER_ONLY
> +#include "qapi/error.h"
> +#include "migration/blocker.h"
>  #include "exec/ram_addr.h"
>  #include "qemu/plugin-memory.h"
>  #include "hw/boards.h"
> @@ -589,11 +591,17 @@ uint64_t qemu_plugin_u64_sum(qemu_plugin_u64 entry)
>   * Time control
>   */
>  static bool has_control;
> +Error *migration_blocker;
>
>  const void *qemu_plugin_request_time_control(void)
>  {
>      if (!has_control) {
>          has_control = true;
> +#ifdef CONFIG_SOFTMMU
> +        error_setg(&migration_blocker,
> +                   "TCG plugin time control does not support migration");
> +        migrate_add_blocker(&migration_blocker, NULL);
> +#endif
>          return &has_control;
>      }
>      return NULL;

Reviewed-by: Thomas Huth
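The first-caller-wins handle logic that the blocker hooks into can be modeled in a few lines of plain C. This is only a sketch of the pattern; the error/blocker calls are QEMU-internal APIs and are omitted here.

```c
#include <stdbool.h>
#include <stddef.h>

static bool has_control;

/* First caller wins and receives an opaque token; every later caller
 * gets NULL.  In QEMU this first-grant path is also where the
 * migration blocker is installed, since migrating plugin-scaled time
 * is unsupported. */
static const void *request_time_control(void)
{
    if (!has_control) {
        has_control = true;
        return &has_control;   /* opaque non-NULL token */
    }
    return NULL;
}
```

The token doubles as the "already granted" flag, so the blocker can never be registered twice.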
Re: [PATCH 5/5] s390x: Enable and document boot device fallback on panic
On 17/06/2024 16.49, Christian Borntraeger wrote:

On 05.06.24 at 15:37, Thomas Huth wrote:

On 29/05/2024 17.43, jro...@linux.ibm.com wrote:
> From: Jared Rossi
>
> On a panic during IPL (i.e. a device failed to boot) check for another
> device to boot from, as indicated by the presence of an unused IPLB.
> If an IPLB is successfully loaded, then jump to the start of BIOS,
> restarting IPL using the updated IPLB. Otherwise enter disabled wait.
>
> Signed-off-by: Jared Rossi
> ---
>  docs/system/bootindex.rst | 7 ++++---
>  docs/system/s390x/bootdevices.rst | 9 ++++++---
>  pc-bios/s390-ccw/s390-ccw.h | 6 ++++++
>  3 files changed, 16 insertions(+), 6 deletions(-)

Could you please split the documentation changes into a separate patch in v2? ... I think that would be cleaner.

> diff --git a/docs/system/bootindex.rst b/docs/system/bootindex.rst
> index 8b057f812f..de597561bd 100644
> --- a/docs/system/bootindex.rst
> +++ b/docs/system/bootindex.rst
> @@ -50,9 +50,10 @@ Limitations
>  Some firmware has limitations on which devices can be considered for
>  booting. For instance, the PC BIOS boot specification allows only one
> -disk to be bootable. If boot from disk fails for some reason, the BIOS
> -won't retry booting from other disk. It can still try to boot from
> -floppy or net, though.
> +disk to be bootable, except for on s390x machines. If boot from disk fails for
> +some reason, the BIOS won't retry booting from other disk. It can still try to
> +boot from floppy or net, though. In the case of s390x, the BIOS will try up to
> +8 total devices, any number of which may be disks.

Since the old text was already talking about "PC BIOS", I'd rather leave that paragraph as it is (maybe just replace "PC BIOS" with "x86 PC BIOS"), and add a separate paragraph afterwards about s390x instead.
> diff --git a/pc-bios/s390-ccw/s390-ccw.h b/pc-bios/s390-ccw/s390-ccw.h
> index c977a52b50..de3d1f0d5a 100644
> --- a/pc-bios/s390-ccw/s390-ccw.h
> +++ b/pc-bios/s390-ccw/s390-ccw.h
> @@ -43,6 +43,7 @@ typedef unsigned long long u64;
>  #include "iplb.h"
>
>  /* start.s */
> +extern char _start[];
>  void disabled_wait(void) __attribute__ ((__noreturn__));
>  void consume_sclp_int(void);
>  void consume_io_int(void);
> @@ -88,6 +89,11 @@ __attribute__ ((__noreturn__)) static inline void panic(const char *string)
>  {
>      sclp_print(string);
> +    if (load_next_iplb()) {
> +        sclp_print("\nTrying next boot device...");
> +        jump_to_IPL_code((long)_start);
> +    }
> +
>      disabled_wait();
>  }

Honestly, I am unsure whether this is a really cool idea or a very ugly hack ... but I think I tend towards the latter, sorry. Jumping back to the startup code might cause various problems, e.g. pre-initialized variables don't get their values reset, causing different behavior when the s390-ccw bios runs a function a second time this way.

We jump back to _start, and to me it looks like this code does the resetting of the bss segment. So anything that has a zero value should be fine. But static variables != 0 are indeed tricky. As far as I can see, we do have some of those :-( So instead of jumping, is there a way to remember somewhere which device we are at and then trigger a re-ipl to reload the BIOS?

If there is an easy way, this could maybe be an option, but in the long run, I'd really prefer if we'd merge the binaries and get rid of such tricks, since this makes the code flow quite hard to understand and maybe also more difficult to debug if you run into problems later.

 Thomas
Re: [PATCH 5/5] s390x: Enable and document boot device fallback on panic
On 17/06/2024 01.44, Jared Rossi wrote:

On 6/7/24 1:57 AM, Thomas Huth wrote:

On 05/06/2024 16.48, Jared Rossi wrote:
> diff --git a/pc-bios/s390-ccw/s390-ccw.h b/pc-bios/s390-ccw/s390-ccw.h
> index c977a52b50..de3d1f0d5a 100644
> --- a/pc-bios/s390-ccw/s390-ccw.h
> +++ b/pc-bios/s390-ccw/s390-ccw.h
> @@ -43,6 +43,7 @@ typedef unsigned long long u64;
>  #include "iplb.h"
>
>  /* start.s */
> +extern char _start[];
>  void disabled_wait(void) __attribute__ ((__noreturn__));
>  void consume_sclp_int(void);
>  void consume_io_int(void);
> @@ -88,6 +89,11 @@ __attribute__ ((__noreturn__)) static inline void panic(const char *string)
>  {
>      sclp_print(string);
> +    if (load_next_iplb()) {
> +        sclp_print("\nTrying next boot device...");
> +        jump_to_IPL_code((long)_start);
> +    }
> +
>      disabled_wait();
>  }

Honestly, I am unsure whether this is a really cool idea or a very ugly hack ... but I think I tend towards the latter, sorry. Jumping back to the startup code might cause various problems, e.g. pre-initialized variables don't get their values reset, causing different behavior when the s390-ccw bios runs a function a second time this way. Thus this sounds very fragile. Could we please try to get things cleaned up correctly, so that functions return with error codes instead of panicking when we can continue with another boot device? Even if it's more work right now, I think this will be much more maintainable in the future.

 Thomas

Thanks Thomas, I appreciate your insight. Your hesitation is perfectly understandable as well.

My initial design was like you suggest, where the functions return instead of panic, but the issue I ran into is that netboot uses a separate image, which we jump into at the start of IPL from a network device (see zipl_load() in pc-bios/s390-ccw/bootmap.c). I wasn't able to come up with a simple way to return to the main BIOS code if a netboot fails other than by jumping back. So, it seems to me that netboot kind of throws a monkey wrench into the basic idea of reworking the panics into returns.
I'm open to suggestions on a better way to recover from a failed netboot, and it's certainly possible I've overlooked something, but as far as I can tell a jump is necessary in that particular case at least. Netboot could perhaps be handled as a special case where the jump back is permitted whereas other device types return, but I don't think that actually solves the main issue. What are your thoughts on this?

Yes, I agree that jumping is currently required to get back from the netboot code. So if you could rework your patches in a way that limits the jumping to a failed netboot, that would be acceptable, I think.

Apart from that: We originally decided to put the netboot code into a separate binary since the required roms/SLOF module might not always have been checked out (it needed to be done manually), so we were not able to compile it in all cases. But nowadays this is handled in a much nicer way: the submodule is automatically checked out once you compile the s390x-softmmu target and have an s390x compiler available. So I wonder whether we should maybe do the next step and integrate the netboot code into the main s390-ccw.img now? Anybody got an opinion on this?

 Thomas

Hi Thomas,

I would generally defer the decision about integrating the netboot code to someone with more insight than me, but for what it's worth, I am of the opinion that if we want to rework all of the panics into returns, then it would make the most sense to also do the integration now, so that we can avoid using the jump altogether.

Unless I'm missing something simple, I don't think the panic/return conversion will be trivial; actually, I think it will be quite invasive, since there are dozens of calls to panic and assert that will need to be changed. It doesn't seem worthwhile to do all of these conversions in order to avoid using a jump, but then still be exposed to possible problems caused by jumping due to netboot requiring it anyway.
Agreed, we should either do it right and merge the two binaries, or it does not make too much sense to only partly convert the code. I can look into merging the two binaries, but it might take some time. So for the time being, I'm fine if we include the panic-jumping hack; we can still clean it up later.

 Thomas
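The hazard discussed in this thread — re-entering _start re-clears .bss but does not restore .data — can be illustrated with a small host-side C model. All names here are hypothetical; this simulates the restart behaviour rather than running the actual s390-ccw startup code.

```c
/* A zero-initialized static lives in .bss; an initialized one in .data. */
static int zero_init_counter;      /* .bss  */
static int preset_counter = 5;     /* .data */

/* Model of what re-entering _start does: the bss-clearing loop resets
 * zero-initialized variables, but nothing restores the compile-time
 * value of variables that were initialized to a non-zero value. */
static void simulated_restart(void)
{
    zero_init_counter = 0;         /* bss is cleared again */
    /* preset_counter is NOT touched - it keeps its runtime value */
}

static void boot_attempt(void)
{
    zero_init_counter++;
    preset_counter--;
}
```

After one boot attempt, a simulated restart, and a second attempt, the .bss counter looks like a fresh boot (value 1) while the .data counter carries stale state (3 instead of 4) — exactly the "static variables != 0 are tricky" concern raised above.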
Re: [PATCH] configure: detect --cpu=mipsisa64r6
On 19/06/2024 15.34, Paolo Bonzini wrote: On Wed, Jun 19, 2024 at 2:49 PM Thomas Huth wrote: On 19/06/2024 13.46, Paolo Bonzini wrote: Treat it as a MIPS64 machine. ... diff --git a/configure b/configure index d0703ea279d..3669eec86e5 100755 --- a/configure +++ b/configure @@ -452,7 +452,7 @@ case "$cpu" in linux_arch=loongarch ;; - mips64*) + mips64*|mipsisa64*) Maybe simply switch to mips*64*) ? Not sure if it's a good idea, since we know the exact prefixes. Fair point. Reviewed-by: Thomas Huth
Re: [PATCH] hw/core: Rename CpuTopology to CPUTopology
On 19/06/2024 16.49, Zhao Liu wrote: Hi maintainers, Per my communication with Markus, it seems this renaming matches the "local consistency" principle in (include/hw/boards.h). :-) So do you think this change is acceptable? I don't care too much, both ways of naming look acceptable to me... ... but in case somebody else wants to merge this, FWIW: s390x parts Acked-by: Thomas Huth On Mon, May 27, 2024 at 09:18:37PM +0800, Zhao Liu wrote: Date: Mon, 27 May 2024 21:18:37 +0800 From: Zhao Liu Subject: [PATCH] hw/core: Rename CpuTopology to CPUTopology X-Mailer: git-send-email 2.34.1 Use CPUTopology to honor the generic style of CPU capitalization abbreviations. Signed-off-by: Zhao Liu --- * Split from the previous SMP cache RFC: https://lore.kernel.org/qemu-devel/20240220092504.726064-2-zhao1@linux.intel.com/ --- hw/s390x/cpu-topology.c | 6 +++--- include/hw/boards.h | 8 include/hw/s390x/cpu-topology.h | 6 +++--- tests/unit/test-smp-parse.c | 14 +++--- 4 files changed, 17 insertions(+), 17 deletions(-)
[PATCH] hw/intc/s390_flic: Fix interrupt controller migration on s390x with TCG
Migration of a s390x guest with TCG was long known to be very unstable, so the tests in tests/qtest/migration-test.c are disabled if running with TCG instead of KVM. Nicholas Piggin did a great analysis of the problem:

"The flic pending state is not migrated, so if the machine is migrated while an interrupt is pending, it can be lost. This shows up in the qtest migration test: an extint is pending (due to console writes?) and the CPU waits via s390_cpu_set_psw and expects the interrupt to wake it. However, when the flic pending state is lost, s390_cpu_has_int returns false, so s390_cpu_exec_interrupt falls through to halting again."

Thus let's finally migrate the pending state, and to be on the safe side, also the other state variables of the QEMUS390FLICState structure.

Signed-off-by: Thomas Huth
---
 Once this has been merged, we can enable the migration-test again with Nicholas' patch here:
 https://lore.kernel.org/qemu-devel/20240525131241.378473-3-npig...@gmail.com/

 include/hw/s390x/s390_flic.h | 1 +
 hw/intc/s390_flic.c | 75 ++--
 hw/s390x/s390-virtio-ccw.c | 5 +++
 3 files changed, 78 insertions(+), 3 deletions(-)

diff --git a/include/hw/s390x/s390_flic.h b/include/hw/s390x/s390_flic.h
index 382d9833f1..4d66c5e42e 100644
--- a/include/hw/s390x/s390_flic.h
+++ b/include/hw/s390x/s390_flic.h
@@ -116,6 +116,7 @@ struct QEMUS390FLICState {
     uint8_t simm;
     uint8_t nimm;
     QLIST_HEAD(, QEMUS390FlicIO) io[8];
+    bool migrate_all_state;
 };

 uint32_t qemu_s390_flic_dequeue_service(QEMUS390FLICState *flic);

diff --git a/hw/intc/s390_flic.c b/hw/intc/s390_flic.c
index 6771645699..a91a4a47e8 100644
--- a/hw/intc/s390_flic.c
+++ b/hw/intc/s390_flic.c
@@ -361,15 +361,77 @@ bool ais_needed(void *opaque)
     return s->ais_supported;
 }

+static bool ais_needed_v(void *opaque, int version_id)
+{
+    return ais_needed(opaque);
+}
+
+static bool qemu_s390_flic_full_state_needed(void *opaque)
+{
+    QEMUS390FLICState *s = opaque;
+
+    return s->migrate_all_state;
+}
+
+static bool qemu_s390_flic_state_needed(void *opaque)
+{
+    return ais_needed(opaque) || qemu_s390_flic_full_state_needed(opaque);
+}
+
+static const VMStateDescription vmstate_qemu_s390_flic_io = {
+    .name = "qemu-s390-flic-io",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (const VMStateField[]) {
+        VMSTATE_UINT16(id, QEMUS390FlicIO),
+        VMSTATE_UINT16(nr, QEMUS390FlicIO),
+        VMSTATE_UINT32(parm, QEMUS390FlicIO),
+        VMSTATE_UINT32(word, QEMUS390FlicIO),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static const VMStateDescription vmstate_qemu_s390_flic_full = {
+    .name = "qemu-s390-flic-full",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = qemu_s390_flic_full_state_needed,
+    .fields = (const VMStateField[]) {
+        VMSTATE_UINT32(pending, QEMUS390FLICState),
+        VMSTATE_UINT32(service_param, QEMUS390FLICState),
+        VMSTATE_QLIST_V(io[0], QEMUS390FLICState, 1,
+                        vmstate_qemu_s390_flic_io, QEMUS390FlicIO, next),
+        VMSTATE_QLIST_V(io[1], QEMUS390FLICState, 1,
+                        vmstate_qemu_s390_flic_io, QEMUS390FlicIO, next),
+        VMSTATE_QLIST_V(io[2], QEMUS390FLICState, 1,
+                        vmstate_qemu_s390_flic_io, QEMUS390FlicIO, next),
+        VMSTATE_QLIST_V(io[3], QEMUS390FLICState, 1,
+                        vmstate_qemu_s390_flic_io, QEMUS390FlicIO, next),
+        VMSTATE_QLIST_V(io[4], QEMUS390FLICState, 1,
+                        vmstate_qemu_s390_flic_io, QEMUS390FlicIO, next),
+        VMSTATE_QLIST_V(io[5], QEMUS390FLICState, 1,
+                        vmstate_qemu_s390_flic_io, QEMUS390FlicIO, next),
+        VMSTATE_QLIST_V(io[6], QEMUS390FLICState, 1,
+                        vmstate_qemu_s390_flic_io, QEMUS390FlicIO, next),
+        VMSTATE_QLIST_V(io[7], QEMUS390FLICState, 1,
+                        vmstate_qemu_s390_flic_io, QEMUS390FlicIO, next),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static const VMStateDescription qemu_s390_flic_vmstate = {
     .name = "qemu-s390-flic",
     .version_id = 1,
     .minimum_version_id = 1,
-    .needed = ais_needed,
+    .needed = qemu_s390_flic_state_needed,
     .fields = (const VMStateField[]) {
-        VMSTATE_UINT8(simm, QEMUS390FLICState),
-        VMSTATE_UINT8(nimm, QEMUS390FLICState),
+        VMSTATE_UINT8_TEST(simm, QEMUS390FLICState, ais_needed_v),
+        VMSTATE_UINT8_TEST(nimm, QEMUS390FLICState, ais_needed_v),
         VMSTATE_END_OF_LIST()
+    },
+    .subsections = (const VMStateDescription * const []) {
+        &vmstate_qemu_s390_flic_full,
+        NULL
     }
 };

@@ -383,11 +445,18 @@ static void qemu_s390_flic_instance_init(Object *obj)
     }
 }

+static Property qemu_s390_flic_properties[] = {
+    DEFINE_PROP_BOOL("migrate-all-state", QEMUS390FLICState,
+
Re: [PATCH] configure: detect --cpu=mipsisa64r6
On 19/06/2024 13.46, Paolo Bonzini wrote:
> Treat it as a MIPS64 machine.

Where did you encounter it?

> Signed-off-by: Paolo Bonzini
> ---
>  configure | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/configure b/configure
> index d0703ea279d..3669eec86e5 100755
> --- a/configure
> +++ b/configure
> @@ -452,7 +452,7 @@ case "$cpu" in
>      linux_arch=loongarch
>      ;;
> -  mips64*)
> +  mips64*|mipsisa64*)

Maybe simply switch to mips*64*) ?

>      cpu=mips64
>      host_arch=mips
>      linux_arch=mips

 Thomas
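The effect of the two glob patterns can be checked with a tiny shell function mirroring the configure case statement (a hypothetical helper, just to exercise the patterns — the real configure script sets several more variables per branch):

```shell
#!/bin/sh
# Mirror of the configure snippet: map a --cpu value to a cpu name.
# "mips64*|mipsisa64*" must be tried before the catch-all "mips*".
map_cpu() {
    case "$1" in
        mips64*|mipsisa64*)
            echo mips64 ;;
        mips*)
            echo mips ;;
        *)
            echo unknown ;;
    esac
}
```

Note that `mipsisa64r6` would also match `mips*`, but the case statement takes the first matching branch, which is why the order of the branches matters (and why the broader `mips*64*` alternative suggested above would work too).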
Re: [kvm-unit-tests PATCH v10 15/15] powerpc/gitlab-ci: Enable more tests with Fedora 40
On 12/06/2024 07.23, Nicholas Piggin wrote: With Fedora 40 (QEMU 8.2), more tests can be enabled. Signed-off-by: Nicholas Piggin --- .gitlab-ci.yml| 2 +- powerpc/unittests.cfg | 17 - 2 files changed, 9 insertions(+), 10 deletions(-) diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index ffb3767ec..ee14330a3 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -110,7 +110,7 @@ build-ppc64le: extends: .intree_template image: fedora:40 script: - - dnf install -y qemu-system-ppc gcc-powerpc64-linux-gnu nmap-ncat + - dnf install -y qemu-system-ppc gcc-powerpc64-linux-gnu nmap-ncat jq Please mention the addition of jq in the patch description (why it is needed). Thanks, Thomas
Re: [kvm-unit-tests PATCH v10 13/15] powerpc: Add a panic test
On 12/06/2024 07.23, Nicholas Piggin wrote: This adds a simple panic test for pseries and powernv that works with TCG (unlike the s390x panic tests), making it easier to test this part of the harness code. Signed-off-by: Nicholas Piggin --- lib/powerpc/asm/rtas.h | 1 + lib/powerpc/rtas.c | 16 powerpc/run| 2 +- powerpc/selftest.c | 18 -- powerpc/unittests.cfg | 5 + 5 files changed, 39 insertions(+), 3 deletions(-) Reviewed-by: Thomas Huth
Re: [kvm-unit-tests PATCH v10 10/15] powerpc: Remove remnants of ppc64 directory and build structure
On 12/06/2024 07.23, Nicholas Piggin wrote:
> This merges the ppc64 directories and files into powerpc, and merges
> the 3 makefiles into one. The configure --arch=powerpc option is
> aliased to ppc64 for good measure.
>
> Acked-by: Thomas Huth
> Signed-off-by: Nicholas Piggin
> ---

Seems like this patch does not apply cleanly anymore, could you please rebase?

 Thanks,
  Thomas
Re: [kvm-unit-tests PATCH v10 14/15] powerpc/gitlab-ci: Upgrade powerpc to Fedora 40
On 12/06/2024 07.23, Nicholas Piggin wrote: QEMU has fixed a number of powerpc test fails in Fedora 40, so upgrade to that image. Other architectures seem to be okay with Fedora 40 except for x86-64, which fails some xsave and realmode tests, so only change powerpc to start with. Signed-off-by: Nicholas Piggin --- .gitlab-ci.yml | 2 ++ 1 file changed, 2 insertions(+) FYI, I've pushed now the generic patch to bump all jobs to F40, so I think you can drop this one here from your queue now. Thomas
Re: [PATCH 2/2] target/ppc/arch_dump: set prstatus pid to cpuid
On 19/06/2024 07.00, Omar Sandoval wrote:
> Every other architecture does this, and debuggers need it to be able
> to identify which prstatus note corresponds to which CPU.
>
> Signed-off-by: Omar Sandoval
> ---
>  target/ppc/arch_dump.c | 21 ++++++++++++---------
>  1 file changed, 12 insertions(+), 9 deletions(-)
>
> diff --git a/target/ppc/arch_dump.c b/target/ppc/arch_dump.c
> index a8315659d9..78b4205319 100644
> --- a/target/ppc/arch_dump.c
> +++ b/target/ppc/arch_dump.c
> @@ -47,9 +47,11 @@ struct PPCUserRegStruct {
>  } QEMU_PACKED;
>
>  struct PPCElfPrstatus {
> -    char pad1[112];
> +    char pad1[32];
> +    uint32_t pid;
> +    uint8_t pad2[76];
>      struct PPCUserRegStruct pr_reg;
> -    char pad2[40];
> +    char pad3[40];
>  } QEMU_PACKED;
>
> @@ -96,7 +98,7 @@ typedef struct NoteFuncArg {
>      DumpState *state;
>  } NoteFuncArg;
>
> -static void ppc_write_elf_prstatus(NoteFuncArg *arg, PowerPCCPU *cpu)
> +static void ppc_write_elf_prstatus(NoteFuncArg *arg, PowerPCCPU *cpu, int id)
>  {
>      int i;
>      reg_t cr;
> @@ -109,6 +111,7 @@ static void ppc_write_elf_prstatus(NoteFuncArg *arg, PowerPCCPU *cpu)
>      prstatus = &note->contents.prstatus;
>      memset(prstatus, 0, sizeof(*prstatus));
> +    prstatus->pid = cpu_to_dump32(s, id);
>      reg = &prstatus->pr_reg;
>
>      for (i = 0; i < 32; i++) {
> @@ -127,7 +130,7 @@ static void ppc_write_elf_prstatus(NoteFuncArg *arg, PowerPCCPU *cpu)
>      reg->ccr = cpu_to_dump_reg(s, cr);
>  }
>
> -static void ppc_write_elf_fpregset(NoteFuncArg *arg, PowerPCCPU *cpu)
> +static void ppc_write_elf_fpregset(NoteFuncArg *arg, PowerPCCPU *cpu, int id)
>  {
>      int i;
>      struct PPCElfFpregset *fpregset;
> @@ -146,7 +149,7 @@ static void ppc_write_elf_fpregset(NoteFuncArg *arg, PowerPCCPU *cpu)
>      fpregset->fpscr = cpu_to_dump_reg(s, cpu->env.fpscr);
>  }
>
> -static void ppc_write_elf_vmxregset(NoteFuncArg *arg, PowerPCCPU *cpu)
> +static void ppc_write_elf_vmxregset(NoteFuncArg *arg, PowerPCCPU *cpu, int id)
>  {
>      int i;
>      struct PPCElfVmxregset *vmxregset;
> @@ -178,7 +181,7 @@ static void ppc_write_elf_vmxregset(NoteFuncArg *arg, PowerPCCPU *cpu)
>      vmxregset->vscr.u32[3] = cpu_to_dump32(s, ppc_get_vscr(&cpu->env));
>  }
>
> -static void ppc_write_elf_vsxregset(NoteFuncArg *arg, PowerPCCPU *cpu)
> +static void ppc_write_elf_vsxregset(NoteFuncArg *arg, PowerPCCPU *cpu, int id)
>  {
>      int i;
>      struct PPCElfVsxregset *vsxregset;
> @@ -195,7 +198,7 @@ static void ppc_write_elf_vsxregset(NoteFuncArg *arg, PowerPCCPU *cpu)
>      }
>  }
>
> -static void ppc_write_elf_speregset(NoteFuncArg *arg, PowerPCCPU *cpu)
> +static void ppc_write_elf_speregset(NoteFuncArg *arg, PowerPCCPU *cpu, int id)
>  {
>      struct PPCElfSperegset *speregset;
>      Note *note = &arg->note;
> @@ -211,7 +214,7 @@ static void ppc_write_elf_speregset(NoteFuncArg *arg, PowerPCCPU *cpu)
>
>  static const struct NoteFuncDescStruct {
>      int contents_size;
> -    void (*note_contents_func)(NoteFuncArg *arg, PowerPCCPU *cpu);
> +    void (*note_contents_func)(NoteFuncArg *arg, PowerPCCPU *cpu, int id);
>  } note_func[] = {
>      {sizeof_field(Note, contents.prstatus), ppc_write_elf_prstatus},
>      {sizeof_field(Note, contents.fpregset), ppc_write_elf_fpregset},
> @@ -282,7 +285,7 @@ static int ppc_write_all_elf_notes(const char *note_name,
>      arg.note.hdr.n_descsz = cpu_to_dump32(s, nf->contents_size);
>      strncpy(arg.note.name, note_name, sizeof(arg.note.name));
> -    (*nf->note_contents_func)(&arg, cpu);
> +    (*nf->note_contents_func)(&arg, cpu, id);
>
>      note_size = sizeof(arg.note) - sizeof(arg.note.contents) + nf->contents_size;

Reviewed-by: Thomas Huth
Re: [PATCH 1/2] target/s390x/arch_dump: use correct byte order for pid
On 19/06/2024 07.00, Omar Sandoval wrote:
> The pid field of prstatus needs to be big endian like all of the other
> fields.
>
> Fixes: f738f296eaae ("s390x/arch_dump: pass cpuid into notes sections")
> Signed-off-by: Omar Sandoval
> ---
>  target/s390x/arch_dump.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/target/s390x/arch_dump.c b/target/s390x/arch_dump.c
> index 7e8a1b4fc0..029d91d93a 100644
> --- a/target/s390x/arch_dump.c
> +++ b/target/s390x/arch_dump.c
> @@ -102,7 +102,7 @@ static void s390x_write_elf64_prstatus(Note *note, S390CPU *cpu, int id)
>          regs->acrs[i] = cpu_to_be32(cpu->env.aregs[i]);
>          regs->gprs[i] = cpu_to_be64(cpu->env.regs[i]);
>      }
> -    note->contents.prstatus.pid = id;
> +    note->contents.prstatus.pid = cpu_to_be32(id);
>  }
>
>  static void s390x_write_elf64_fpregset(Note *note, S390CPU *cpu, int id)

Reviewed-by: Thomas Huth
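Why the conversion matters can be seen from what cpu_to_be32 amounts to: the s390x ELF prstatus note must store the pid in big-endian byte order, which on a little-endian host is not the raw in-memory value. A portable, endian-independent sketch of the required serialization (hand-rolled rather than using QEMU's helper):

```c
#include <stdint.h>

/* Store a 32-bit value in big-endian byte order, as the s390x ELF
 * prstatus pid field requires regardless of host endianness.  This
 * is the effect of QEMU's cpu_to_be32() for a big-endian target. */
static void store_be32(uint8_t out[4], uint32_t v)
{
    out[0] = v >> 24;
    out[1] = v >> 16;
    out[2] = v >> 8;
    out[3] = v;
}
```

Without the conversion, a little-endian host would emit pid 256 as the byte sequence 00 01 00 00 instead of the big-endian 00 00 01 00, and debuggers reading the s390x core dump would misidentify the CPU.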
[PATCH] MAINTAINERS: Cover all tests/qtest/migration-* files
Beside migration-test.c, there is nowadays migration-helpers.[ch], too, so update the entry in the migration section to also cover these files now. While we're at it, exclude these files in the common qtest section, since the migration test is well covered by the migration maintainers already. Since the test is under very active development, it was causing a lot of distraction to the generic qtest maintainers with regard to patches that need to be reviewed by the migration maintainers anyway.

Signed-off-by: Thomas Huth
---
 MAINTAINERS | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 0f63bcdc7d..32691302db 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3316,6 +3316,7 @@ F: tests/qtest/
 F: docs/devel/qgraph.rst
 F: docs/devel/qtest.rst
 X: tests/qtest/bios-tables-test*
+X: tests/qtest/migration-*

 Device Fuzzing
 M: Alexander Bulekov
@@ -3412,7 +3413,7 @@ F: include/qemu/userfaultfd.h
 F: migration/
 F: scripts/vmstate-static-checker.py
 F: tests/vmstate-static-checker-data/
-F: tests/qtest/migration-test.c
+F: tests/qtest/migration-*
 F: docs/devel/migration/
 F: qapi/migration.json
 F: tests/migration/
-- 
2.45.2
Re: [kvm-unit-tests PATCH v10 08/15] powerpc: add pmu tests
On 12/06/2024 07.23, Nicholas Piggin wrote:
> Add some initial PMU testing.
>
> - PMC5/6 tests
> - PMAE / PMI test
> - BHRB basic tests
>
> Signed-off-by: Nicholas Piggin
> ---
...
> diff --git a/powerpc/pmu.c b/powerpc/pmu.c
> new file mode 100644
> index 0..bdc45e167
> --- /dev/null
> +++ b/powerpc/pmu.c
> @@ -0,0 +1,562 @@
...
> +static void test_pmc5_with_ldat(void)
> +{
> +    unsigned long pmc5_1, pmc5_2;
> +    register unsigned long r4 asm("r4");
> +    register unsigned long r5 asm("r5");
> +    register unsigned long r6 asm("r6");
> +    uint64_t val;
> +
> +    reset_mmcr0();
> +    mtspr(SPR_PMC5, 0);
> +    mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) & ~(MMCR0_FC | MMCR0_FC56));
> +    asm volatile(".rep 20 ; nop ; .endr" ::: "memory");
> +    mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) | (MMCR0_FC | MMCR0_FC56));
> +    pmc5_1 = mfspr(SPR_PMC5);
> +
> +    val = 0xdeadbeef;
> +    r4 = 0;
> +    r5 = 0xdeadbeef;
> +    r6 = 100;
> +    reset_mmcr0();
> +    mtspr(SPR_PMC5, 0);
> +    mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) & ~(MMCR0_FC | MMCR0_FC56));
> +    asm volatile(".rep 10 ; nop ; .endr ; ldat %0,%3,0x10 ; .rep 10 ; nop ; .endr" : "=r"(r4), "+r"(r5), "+r"(r6) : "r"(&val) : "memory");

Looks like older versions of Clang do not like this instruction:

  /tmp/pmu-4fda98.s: Assembler messages:
  /tmp/pmu-4fda98.s:1685: Error: unrecognized opcode: `ldat'
  clang-13: error: assembler command failed with exit code 1 (use -v to see invocation)

Could you please work around that issue? Also, please break the very long line here. Thanks!

> +    mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) | (MMCR0_FC | MMCR0_FC56));
> +    pmc5_2 = mfspr(SPR_PMC5);
> +    assert(r4 == 0xdeadbeef);
> +    assert(val == 0xdeadbeef);
> +
> +    /* TCG does not count instructions around syscalls correctly */
> +    report_kfail(host_is_tcg, pmc5_1 != pmc5_2 + 1,
> +                 "PMC5 counts instructions with ldat");
> +}

 Thomas
Re: [kvm-unit-tests PATCH v10 08/15] powerpc: add pmu tests
On 12/06/2024 07.23, Nicholas Piggin wrote: Add some initial PMU testing. - PMC5/6 tests - PMAE / PMI test - BHRB basic tests Signed-off-by: Nicholas Piggin --- lib/powerpc/asm/processor.h | 2 + lib/powerpc/asm/reg.h | 9 + lib/powerpc/asm/setup.h | 1 + lib/powerpc/setup.c | 20 ++ powerpc/Makefile.common | 3 +- powerpc/pmu.c | 562 powerpc/unittests.cfg | 3 + 7 files changed, 599 insertions(+), 1 deletion(-) create mode 100644 powerpc/pmu.c Hi Nicholas! Something in this patch breaks compiling with Clang in the Travis-CI: https://app.travis-ci.com/github/huth/kvm-unit-tests/builds/271013764 Not sure what exactly it is, the log is not very helpful. Do you have a ppc64 host where you could test your patches with Clang? Thomas
Re: [PATCH v2 0/2] target/s390x: Fix tracing header path in TCG mem_helper.c
On 13/06/2024 12.44, Philippe Mathieu-Daudé wrote: In order to keep trace event headers local to their directory, introduce s390_skeys_get/s390_skeys_set helpers, fixing: In file included from ../../target/s390x/tcg/mem_helper.c:33: ../../target/s390x/tcg/trace.h:1:10: fatal error: 'trace/trace-target_s390x_tcg.h' file not found #include "trace/trace-target_s390x_tcg.h" ^~~~ 1 error generated. ninja: build stopped: subcommand failed. Philippe Mathieu-Daudé (2): hw/s390x: Introduce s390_skeys_get|set() helpers target/s390x: Use s390_skeys_get|set() helper include/hw/s390x/storage-keys.h | 10 ++ hw/s390x/s390-skeys.c | 27 +++ target/s390x/mmu_helper.c | 11 ++- target/s390x/tcg/mem_helper.c | 16 hw/s390x/trace-events | 4 target/s390x/trace-events | 4 6 files changed, 47 insertions(+), 25 deletions(-) Series Reviewed-by: Thomas Huth
[PATCH] hw/virtio: Fix the de-initialization of vhost-user devices
The unrealize functions of the various vhost-user devices are calling the corresponding vhost_*_set_status() functions with a status of 0 to shut down the device correctly. Now these vhost_*_set_status() functions all follow this scheme: bool should_start = virtio_device_should_start(vdev, status); if (vhost_dev_is_started(>vhost_dev) == should_start) { return; } if (should_start) { /* ... do the initialization stuff ... */ } else { /* ... do the cleanup stuff ... */ } The problem here is that virtio_device_should_start(vdev, 0) currently always returns "true" since it internally only looks at vdev->started instead of looking at the "status" parameter. Thus once the device got started once, virtio_device_should_start() always returns true and thus the vhost_*_set_status() functions return early, without ever doing any clean-up when being called with status == 0. This causes e.g. problems when trying to hot-plug and hot-unplug a vhost-user device multiple times since the de-initialization step is completely skipped during the unplug operation. This bug was introduced in commit 9f6bcfd99f ("hw/virtio: move vm_running check to virtio_device_started") which replaced should_start = status & VIRTIO_CONFIG_S_DRIVER_OK; with should_start = virtio_device_started(vdev, status); which later got replaced by virtio_device_should_start(). This blocked the possibility to set should_start to false in case the status flag VIRTIO_CONFIG_S_DRIVER_OK was not set. Fix it by adjusting the virtio_device_should_start() function to only consider the status flag instead of vdev->started. Since this function is only used in the various vhost_*_set_status() functions for exactly the same purpose, it should be fine to fix it in this central place without any risk of changing the behavior of other code. 
Fixes: 9f6bcfd99f ("hw/virtio: move vm_running check to virtio_device_started") Buglink: https://issues.redhat.com/browse/RHEL-40708 Signed-off-by: Thomas Huth --- include/hw/virtio/virtio.h | 8 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h index 7d5ffdc145..2eafad17b8 100644 --- a/include/hw/virtio/virtio.h +++ b/include/hw/virtio/virtio.h @@ -470,9 +470,9 @@ static inline bool virtio_device_started(VirtIODevice *vdev, uint8_t status) * @vdev - the VirtIO device * @status - the devices status bits * - * This is similar to virtio_device_started() but also encapsulates a - * check on the VM status which would prevent a device starting - * anyway. + * This is similar to virtio_device_started() but ignores vdev->started + * and also encapsulates a check on the VM status which would prevent a + * device from starting anyway. */ static inline bool virtio_device_should_start(VirtIODevice *vdev, uint8_t status) { @@ -480,7 +480,7 @@ static inline bool virtio_device_should_start(VirtIODevice *vdev, uint8_t status return false; } -return virtio_device_started(vdev, status); +return status & VIRTIO_CONFIG_S_DRIVER_OK; } static inline void virtio_set_started(VirtIODevice *vdev, bool started) -- 2.45.2
Re: [RFC PATCH v2 1/2] Add RISC-V CSR qtest support
On 18/06/2024 08.44, Ivan Klokov wrote: The RISC-V architecture supports the creation of custom CSR-mapped devices. It would be convenient to test them in the same way as MMIO-mapped devices. To do this, a new call has been added to read/write CSR registers. Signed-off-by: Ivan Klokov --- ... diff --git a/target/riscv/csr.c b/target/riscv/csr.c index 58ef7079dc..82540ae5dc 100644 --- a/target/riscv/csr.c +++ b/target/riscv/csr.c @@ -29,7 +29,7 @@ #include "sysemu/cpu-timers.h" #include "qemu/guest-random.h" #include "qapi/error.h" - +#include "tests/qtest/libqtest.h" /* CSR function table public API */ void riscv_get_csr_ops(int csrno, riscv_csr_operations *ops) @@ -4549,6 +4549,53 @@ static RISCVException write_jvt(CPURISCVState *env, int csrno, return RISCV_EXCP_NONE; } +static uint64_t csr_call(char *cmd, uint64_t cpu_num, int csrno, +uint64_t *val) +{ +RISCVCPU *cpu = RISCV_CPU(cpu_by_arch_id(cpu_num)); +CPURISCVState *env = >env; + +int ret = RISCV_EXCP_NONE; +if (strcmp(cmd, "get_csr") == 0) { +ret = riscv_csrrw(env, csrno, (target_ulong *)val, 0, 0); + +} else if (strcmp(cmd, "set_csr") == 0) { +ret = riscv_csrrw(env, csrno, NULL, *(target_ulong *)val, MAKE_64BIT_MASK(0, TARGET_LONG_BITS)); +} + +if (ret == RISCV_EXCP_NONE) { +ret = 0; +} Is there a reason for ignoring errors here? If not, I'd rather replace that final if-statement with: else { g_assert_not_reached(); } to make sure that mistakes in setting the right sub-command don't get ignored without any error message. 
+return ret; +} + +bool csr_qtest_callback(CharBackend *chr, gchar **words) +{ +if (strcmp(words[0], "csr") == 0) { + +uint64_t res, cpu; + +uint64_t val; +int rc, csr; + +rc = qemu_strtou64(words[2], NULL, 0, ); +g_assert(rc == 0); +rc = qemu_strtoi(words[3], NULL, 0, ); +g_assert(rc == 0); +rc = qemu_strtou64(words[4], NULL, 0, ); +g_assert(rc == 0); +res = csr_call(words[1], cpu, csr, ); + +qtest_send_prefix(chr); +qtest_sendf(chr, "OK %"PRIx64" "TARGET_FMT_lx"\n", res, (target_ulong)val); + +return true; +} + +return false; +} + /* * Control and Status Register function table * riscv_csr_operations::predicate() must be provided for an implemented CSR diff --git a/tests/qtest/libqos/csr.c b/tests/qtest/libqos/csr.c new file mode 100644 index 00..2dc52fc442 --- /dev/null +++ b/tests/qtest/libqos/csr.c @@ -0,0 +1,42 @@ +/* + * QTest RISC-V CSR driver + * + * Copyright (c) 2024 Syntacore + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. + */ + +#include "qemu/osdep.h" +#include "../libqtest.h" +#include "csr.h" + +static uint64_t qcsr_call(QTestState *qts, const char *name, uint64_t cpu, + int csrno, uint64_t *val) +{ +uint64_t res = 0; + +res = qtest_csr_call(qts, name, cpu, csrno, val); + +return res; +} + +int qcsr_get_csr(QTestState *qts, uint64_t cpu, +int csrno, uint64_t *val) +{ +int res; + +res = qcsr_call(qts, "get_csr", cpu, csrno, val); + +return res; +} + +int qcsr_set_csr(QTestState *qts, uint64_t cpu, +int csrno, uint64_t *val) +{ +int res; + +res = qcsr_call(qts, "set_csr", cpu, csrno, val); + +return res; +} Technically, there does not seem to be anything related to libqos in your patch set. libqos is a framework for executing tests on various buses, e.g. to test PCI devices on various host PCI bus implementations. All that is triggered via qos-test.c. 
Your CSR test does not seem to fit into that category, so please put that code directly in your riscv-csr-test.c file instead. (unless you want to use it in a lot of other tests in the future, too, then maybe you could move them as static inlines into the csr.h header instead). diff --git a/tests/qtest/libqos/csr.h b/tests/qtest/libqos/csr.h new file mode 100644 index 00..d953735fe8 --- /dev/null +++ b/tests/qtest/libqos/csr.h Again, not related to libqos, please move it up to the qtest folder itself. @@ -0,0 +1,16 @@ +/* + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. + */ + +#ifndef LIBQOS_CSR_H +#define LIBQOS_CSR_H + +int qcsr_get_csr(QTestState *qts, uint64_t cpu, +int csrno, uint64_t *val); + +int qcsr_set_csr(QTestState *qts, uint64_t cpu, +int csrno, uint64_t *val); + + +#endif /* LIBQOS_CSR_H */ diff --git a/tests/qtest/libqos/meson.build b/tests/qtest/libqos/meson.build index 558eb4c24b..a944febbd8 100644 --- a/tests/qtest/libqos/meson.build +++ b/tests/qtest/libqos/meson.build @@ -25,6 +25,9 @@ libqos_srcs = files( # usb 'usb.c', +#riscv csr +
[PATCH] Documentation: Remove the unused "topology_updates" from kernel-parameters.txt
The "topology_updates" switch has been removed four years ago in commit c30f931e891e ("powerpc/numa: remove ability to enable topology updates"), so let's remove this from the documentation, too. Signed-off-by: Thomas Huth --- Documentation/admin-guide/kernel-parameters.txt | 6 -- 1 file changed, 6 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index f58001338860..b75852f1a789 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -6600,12 +6600,6 @@ e.g. base its process migration decisions on it. Default is on. - topology_updates= [KNL, PPC, NUMA] - Format: {off} - Specify if the kernel should ignore (off) - topology updates sent by the hypervisor to this - LPAR. - torture.disable_onoff_at_boot= [KNL] Prevent the CPU-hotplug component of torturing until after init has spawned. -- 2.45.2
[PATCH] Documentation: Remove unused "nps_mtm_hs_ctr" from kernel-parameters.txt
The "nps_mtm_hs_ctr" parameter has been removed in commit dd7c7ab01a04 ("ARC: [plat-eznps]: Drop support for EZChip NPS platform"). Remove it from the documentation now, too. Signed-off-by: Thomas Huth --- Documentation/admin-guide/kernel-parameters.txt | 9 - 1 file changed, 9 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index dd8436c98735..f58001338860 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4143,15 +4143,6 @@ parameter, xsave area per process might occupy more memory on xsaves enabled systems. - nps_mtm_hs_ctr= [KNL,ARC] - This parameter sets the maximum duration, in - cycles, each HW thread of the CTOP can run - without interruptions, before HW switches it. - The actual maximum duration is 16 times this - parameter's value. - Format: integer between 1 and 255 - Default: 255 - nptcg= [IA-64] Override max number of concurrent global TLB purges which is reported from either PAL_VM_SUMMARY or SAL PALO. -- 2.45.2 ___ linux-snps-arc mailing list linux-snps-arc@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-snps-arc
[PATCH v2] target/s390x: Add a CONFIG switch to disable legacy CPUs
The oldest model that IBM still supports is the z13. Considering that each generation can "emulate" the previous two generations in hardware (via the "IBC" feature of the CPUs), this means that everything that is older than z114/196 is not an officially supported CPU model anymore. The Linux kernel still supports the z10, so if we also take this into account, everything older than that can definitely be considered as a legacy CPU model. For downstream builds of QEMU, we would like to be able to disable these legacy CPUs in the build. Thus add a CONFIG switch that can be used to disable them (and old machine types that use them by default). Signed-off-by: Thomas Huth --- v2: - Add comment in source code - Only consider z9 and older as legacy hw/s390x/s390-virtio-ccw.c | 5 + target/s390x/cpu_models.c | 9 + target/s390x/Kconfig | 5 + 3 files changed, 19 insertions(+) diff --git a/target/s390x/Kconfig b/target/s390x/Kconfig index d886be48b4..8a95f2bc3f 100644 --- a/target/s390x/Kconfig +++ b/target/s390x/Kconfig @@ -2,3 +2,8 @@ config S390X bool select PCI select S390_FLIC + +config S390X_LEGACY_CPUS +bool +default y +depends on S390X diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c index efb508cd2e..a6bafe153e 100644 --- a/target/s390x/cpu_models.c +++ b/target/s390x/cpu_models.c @@ -22,6 +22,7 @@ #include "qemu/module.h" #include "qemu/hw-version.h" #include "qemu/qemu-print.h" +#include CONFIG_DEVICES #ifndef CONFIG_USER_ONLY #include "sysemu/sysemu.h" #include "target/s390x/kvm/pv.h" @@ -47,6 +48,13 @@ * generation 15 one base feature and one optional feature have been deprecated. */ static S390CPUDef s390_cpu_defs[] = { +/* + * Linux requires at least z10 nowadays, and IBM only supports recent CPUs + * (see https://www.ibm.com/support/pages/ibm-mainframe-life-cycle-history), + * so we consider older CPUs as legacy that can optionally be disabled via + * the CONFIG_S390X_LEGACY_CPUS config switch. 
+ */ +#ifdef CONFIG_S390X_LEGACY_CPUS CPUDEF_INIT(0x2064, 7, 1, 38, 0xU, "z900", "IBM zSeries 900 GA1"), CPUDEF_INIT(0x2064, 7, 2, 38, 0xU, "z900.2", "IBM zSeries 900 GA2"), CPUDEF_INIT(0x2064, 7, 3, 38, 0xU, "z900.3", "IBM zSeries 900 GA3"), @@ -64,6 +72,7 @@ static S390CPUDef s390_cpu_defs[] = { CPUDEF_INIT(0x2096, 9, 2, 40, 0xU, "z9BC", "IBM System z9 BC GA1"), CPUDEF_INIT(0x2094, 9, 3, 40, 0xU, "z9EC.3", "IBM System z9 EC GA3"), CPUDEF_INIT(0x2096, 9, 3, 40, 0xU, "z9BC.2", "IBM System z9 BC GA2"), +#endif CPUDEF_INIT(0x2097, 10, 1, 43, 0xU, "z10EC", "IBM System z10 EC GA1"), CPUDEF_INIT(0x2097, 10, 2, 43, 0xU, "z10EC.2", "IBM System z10 EC GA2"), CPUDEF_INIT(0x2098, 10, 2, 43, 0xU, "z10BC", "IBM System z10 BC GA1"), diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c index 3d0bc3e7f2..cd063f8b64 100644 --- a/hw/s390x/s390-virtio-ccw.c +++ b/hw/s390x/s390-virtio-ccw.c @@ -47,6 +47,7 @@ #include "migration/blocker.h" #include "qapi/visitor.h" #include "hw/s390x/cpu-topology.h" +#include CONFIG_DEVICES static Error *pv_mig_blocker; @@ -1126,6 +1127,8 @@ static void ccw_machine_2_12_class_options(MachineClass *mc) } DEFINE_CCW_MACHINE(2_12, "2.12", false); +#ifdef CONFIG_S390X_LEGACY_CPUS + static void ccw_machine_2_11_instance_options(MachineState *machine) { static const S390FeatInit qemu_cpu_feat = { S390_FEAT_LIST_QEMU_V2_11 }; @@ -1272,6 +1275,8 @@ static void ccw_machine_2_4_class_options(MachineClass *mc) } DEFINE_CCW_MACHINE(2_4, "2.4", false); +#endif + static void ccw_machine_register_types(void) { type_register_static(_machine_info); -- 2.45.2
Re: [PATCH] target/s390x: Add a CONFIG switch to disable legacy CPUs
On 14/06/2024 10.17, Christian Borntraeger wrote: Am 14.06.24 um 09:15 schrieb Thomas Huth: On 14/06/2024 08.07, Christian Borntraeger wrote: Am 13.06.24 um 19:07 schrieb Thomas Huth: Old CPU models are not officially supported anymore by IBM, and for downstream builds of QEMU, we would like to be able to disable these CPUs in the build. Thus add a CONFIG switch that can be used to disable these CPUs (and old machine types that use them by default). Signed-off-by: Thomas Huth --- If you're interested, the PDF that can be downloaded from https://www.ibm.com/support/pages/ibm-mainframe-life-cycle-history shows the supported CPUs in a nice diagram z13 is still supported so the patch needs to be fixed at least. Oh, drat, I misread the diagram, indeed. 'should have looked at the table instead :-/ Furthermore, z14 has the IBC/VAL capability to behave like a z13, same for z15. (we do support VAL to N-2) Hmm, so if z13 is still supported, and also has the possibility to do N-2, I assume the z114/196 and z12 should still be considered as non-legacy, too? Yes. z9 and older is no longer relevant (only for people that collect old HW) but the upstream kernel has a minimum requirement for z10 so maybe we still want to support that for testing purposes. Ok, fair point, kernel support is a good hint, too. For upstream I prefer to keep the full list but I would be ok to hide those ancient things behind a config switch. That's what this patch is trying to do - by default, all CPUs are still enabled, you have to actively disable the switch to get rid of the old ones. Thomas
Re: [PATCH] target/s390x: Add a CONFIG switch to disable legacy CPUs
On 14/06/2024 08.07, Christian Borntraeger wrote: Am 13.06.24 um 19:07 schrieb Thomas Huth: Old CPU models are not officially supported anymore by IBM, and for downstream builds of QEMU, we would like to be able to disable these CPUs in the build. Thus add a CONFIG switch that can be used to disable these CPUs (and old machine types that use them by default). Signed-off-by: Thomas Huth --- If you're interested, the PDF that can be downloaded from https://www.ibm.com/support/pages/ibm-mainframe-life-cycle-history shows the supported CPUs in a nice diagram z13 is still supported so the patch needs to be fixed at least. Oh, drat, I misread the diagram, indeed. 'should have looked at the table instead :-/ Furthermore, z14 has the IBC/VAL capability to behave like a z13, same for z15. (we do support VAL to N-2) Hmm, so if z13 is still supported, and also has the possibility to do N-2, I assume the z114/196 and z12 should still be considered as non-legacy, too? I fail to see the value of this given how stable this code is. As mentioned in the patch description, it's meant for downstream builds of QEMU where we'd like to avoid legacy devices and CPUs in the builds (it doesn't make sense to have support for e.g. z900 CPUs there). Thomas
Re: [PATCH] target/s390x: Add a CONFIG switch to disable legacy CPUs
On 13/06/2024 19.17, Philippe Mathieu-Daudé wrote: Hi Thomas, On 13/6/24 19:07, Thomas Huth wrote: Old CPU models are not officially supported anymore by IBM, and for downstream builds of QEMU, we would like to be able to disable these CPUs in the build. Thus add a CONFIG switch that can be used to disable these CPUs (and old machine types that use them by default). Signed-off-by: Thomas Huth --- If you're interested, the PDF that can be downloaded from https://www.ibm.com/support/pages/ibm-mainframe-life-cycle-history shows the supported CPUs in a nice diagram I'd add this link ... hw/s390x/s390-virtio-ccw.c | 9 + target/s390x/cpu_models.c | 3 +++ target/s390x/Kconfig | 5 + 3 files changed, 17 insertions(+) diff --git a/target/s390x/Kconfig b/target/s390x/Kconfig index d886be48b4..8a95f2bc3f 100644 --- a/target/s390x/Kconfig +++ b/target/s390x/Kconfig @@ -2,3 +2,8 @@ config S390X bool select PCI select S390_FLIC + +config S390X_LEGACY_CPUS + bool + default y + depends on S390X diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c index efb508cd2e..ffae95dcb3 100644 --- a/target/s390x/cpu_models.c +++ b/target/s390x/cpu_models.c @@ -22,6 +22,7 @@ #include "qemu/module.h" #include "qemu/hw-version.h" #include "qemu/qemu-print.h" +#include CONFIG_DEVICES #ifndef CONFIG_USER_ONLY #include "sysemu/sysemu.h" #include "target/s390x/kvm/pv.h" @@ -47,6 +48,7 @@ * generation 15 one base feature and one optional feature have been deprecated. */ static S390CPUDef s390_cpu_defs[] = { +#ifdef CONFIG_S390X_LEGACY_CPUS ... here :) Can do ... let's just hope that the link is stable in the course of time! 
CPUDEF_INIT(0x2064, 7, 1, 38, 0xU, "z900", "IBM zSeries 900 GA1"), CPUDEF_INIT(0x2064, 7, 2, 38, 0xU, "z900.2", "IBM zSeries 900 GA2"), CPUDEF_INIT(0x2064, 7, 3, 38, 0xU, "z900.3", "IBM zSeries 900 GA3"), @@ -78,6 +80,7 @@ static S390CPUDef s390_cpu_defs[] = { CPUDEF_INIT(0x2964, 13, 1, 47, 0x0800U, "z13", "IBM z13 GA1"), CPUDEF_INIT(0x2964, 13, 2, 47, 0x0800U, "z13.2", "IBM z13 GA2"), CPUDEF_INIT(0x2965, 13, 2, 47, 0x0800U, "z13s", "IBM z13s GA1"), +#endif CPUDEF_INIT(0x3906, 14, 1, 47, 0x0800U, "z14", "IBM z14 GA1"), CPUDEF_INIT(0x3906, 14, 2, 47, 0x0800U, "z14.2", "IBM z14 GA2"), CPUDEF_INIT(0x3907, 14, 1, 47, 0x0800U, "z14ZR1", "IBM z14 Model ZR1 GA1"), diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c index 3d0bc3e7f2..7529d2fba8 100644 --- a/hw/s390x/s390-virtio-ccw.c +++ b/hw/s390x/s390-virtio-ccw.c @@ -47,6 +47,7 @@ #include "migration/blocker.h" #include "qapi/visitor.h" #include "hw/s390x/cpu-topology.h" +#include CONFIG_DEVICES static Error *pv_mig_blocker; @@ -603,6 +604,8 @@ static void s390_nmi(NMIState *n, int cpu_index, Error **errp) s390_cpu_restart(S390_CPU(cs)); } +#ifdef CONFIG_S390X_LEGACY_CPUS + static ram_addr_t s390_fixup_ram_size(ram_addr_t sz) { /* same logic as in sclp.c */ @@ -623,6 +626,8 @@ static ram_addr_t s390_fixup_ram_size(ram_addr_t sz) return newsz; } +#endif + static inline bool machine_get_aes_key_wrap(Object *obj, Error **errp) { S390CcwMachineState *ms = S390_CCW_MACHINE(obj); @@ -989,6 +994,8 @@ static void ccw_machine_6_1_class_options(MachineClass *mc) } DEFINE_CCW_MACHINE(6_1, "6.1", false); +#ifdef CONFIG_S390X_LEGACY_CPUS + static void ccw_machine_6_0_instance_options(MachineState *machine) { static const S390FeatInit qemu_cpu_feat = { S390_FEAT_LIST_QEMU_V6_0 }; Should we deprecate machines up to v6.0? I'm still hoping that Daniel will be able to get his auto-deprecation patches merged in this cycle - then we shouldn't derive from that, I think. By the way, what's up with your i440fx removal series? 
... it would be good to get this finally merged now...? Thomas
Re: [PATCH] tests/qtest/fuzz/virtio_net_fuzz.c: fix virtio_net_fuzz_multi
On 13/06/2024 13.59, Дмитрий Фролов wrote: On 13.06.2024 13:08, Thomas Huth wrote: On 23/05/2024 12.28, Dmitry Frolov wrote: If QTestState was already CLOSED due to error, calling qtest_clock_step() afterwards makes no sense and only raises false-crash with message: "assertion timer != NULL failed". Signed-off-by: Dmitry Frolov --- tests/qtest/fuzz/virtio_net_fuzz.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/tests/qtest/fuzz/virtio_net_fuzz.c b/tests/qtest/fuzz/virtio_net_fuzz.c index e239875e3b..2f57a8ddd8 100644 --- a/tests/qtest/fuzz/virtio_net_fuzz.c +++ b/tests/qtest/fuzz/virtio_net_fuzz.c @@ -81,6 +81,9 @@ static void virtio_net_fuzz_multi(QTestState *s, /* Run the main loop */ qtest_clock_step(s, 100); flush_events(s); + if (!qtest_probe_child(s)) { + return; + } According to your patch description, it rather sounds like the check should be done before the qtest_clock_step() ? ... or where does the QTestState get closed? During flush_events() ? To my understanding, the main loop is executed during flush_events(), where an error may occur. This behavior is legit and should not produce any crash report. Without the check, the test continues to wait on used descriptors, and finally fails with message: "assertion timer != NULL failed". Thus, any invalid input data produces a meaningless crash report. Ok, makes sense now, thanks! There seems to be another while loop with a flush_events() call later in this file, does it maybe need the same treatment, too? Thomas
[PATCH] target/s390x: Add a CONFIG switch to disable legacy CPUs
Old CPU models are not officially supported anymore by IBM, and for downstream builds of QEMU, we would like to be able to disable these CPUs in the build. Thus add a CONFIG switch that can be used to disable these CPUs (and old machine types that use them by default). Signed-off-by: Thomas Huth --- If you're interested, the PDF that can be downloaded from https://www.ibm.com/support/pages/ibm-mainframe-life-cycle-history shows the supported CPUs in a nice diagram hw/s390x/s390-virtio-ccw.c | 9 + target/s390x/cpu_models.c | 3 +++ target/s390x/Kconfig | 5 + 3 files changed, 17 insertions(+) diff --git a/target/s390x/Kconfig b/target/s390x/Kconfig index d886be48b4..8a95f2bc3f 100644 --- a/target/s390x/Kconfig +++ b/target/s390x/Kconfig @@ -2,3 +2,8 @@ config S390X bool select PCI select S390_FLIC + +config S390X_LEGACY_CPUS +bool +default y +depends on S390X diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c index efb508cd2e..ffae95dcb3 100644 --- a/target/s390x/cpu_models.c +++ b/target/s390x/cpu_models.c @@ -22,6 +22,7 @@ #include "qemu/module.h" #include "qemu/hw-version.h" #include "qemu/qemu-print.h" +#include CONFIG_DEVICES #ifndef CONFIG_USER_ONLY #include "sysemu/sysemu.h" #include "target/s390x/kvm/pv.h" @@ -47,6 +48,7 @@ * generation 15 one base feature and one optional feature have been deprecated. 
*/ static S390CPUDef s390_cpu_defs[] = { +#ifdef CONFIG_S390X_LEGACY_CPUS CPUDEF_INIT(0x2064, 7, 1, 38, 0xU, "z900", "IBM zSeries 900 GA1"), CPUDEF_INIT(0x2064, 7, 2, 38, 0xU, "z900.2", "IBM zSeries 900 GA2"), CPUDEF_INIT(0x2064, 7, 3, 38, 0xU, "z900.3", "IBM zSeries 900 GA3"), @@ -78,6 +80,7 @@ static S390CPUDef s390_cpu_defs[] = { CPUDEF_INIT(0x2964, 13, 1, 47, 0x0800U, "z13", "IBM z13 GA1"), CPUDEF_INIT(0x2964, 13, 2, 47, 0x0800U, "z13.2", "IBM z13 GA2"), CPUDEF_INIT(0x2965, 13, 2, 47, 0x0800U, "z13s", "IBM z13s GA1"), +#endif CPUDEF_INIT(0x3906, 14, 1, 47, 0x0800U, "z14", "IBM z14 GA1"), CPUDEF_INIT(0x3906, 14, 2, 47, 0x0800U, "z14.2", "IBM z14 GA2"), CPUDEF_INIT(0x3907, 14, 1, 47, 0x0800U, "z14ZR1", "IBM z14 Model ZR1 GA1"), diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c index 3d0bc3e7f2..7529d2fba8 100644 --- a/hw/s390x/s390-virtio-ccw.c +++ b/hw/s390x/s390-virtio-ccw.c @@ -47,6 +47,7 @@ #include "migration/blocker.h" #include "qapi/visitor.h" #include "hw/s390x/cpu-topology.h" +#include CONFIG_DEVICES static Error *pv_mig_blocker; @@ -603,6 +604,8 @@ static void s390_nmi(NMIState *n, int cpu_index, Error **errp) s390_cpu_restart(S390_CPU(cs)); } +#ifdef CONFIG_S390X_LEGACY_CPUS + static ram_addr_t s390_fixup_ram_size(ram_addr_t sz) { /* same logic as in sclp.c */ @@ -623,6 +626,8 @@ static ram_addr_t s390_fixup_ram_size(ram_addr_t sz) return newsz; } +#endif + static inline bool machine_get_aes_key_wrap(Object *obj, Error **errp) { S390CcwMachineState *ms = S390_CCW_MACHINE(obj); @@ -989,6 +994,8 @@ static void ccw_machine_6_1_class_options(MachineClass *mc) } DEFINE_CCW_MACHINE(6_1, "6.1", false); +#ifdef CONFIG_S390X_LEGACY_CPUS + static void ccw_machine_6_0_instance_options(MachineState *machine) { static const S390FeatInit qemu_cpu_feat = { S390_FEAT_LIST_QEMU_V6_0 }; @@ -1272,6 +1279,8 @@ static void ccw_machine_2_4_class_options(MachineClass *mc) } DEFINE_CCW_MACHINE(2_4, "2.4", false); +#endif + static void 
ccw_machine_register_types(void) { type_register_static(_machine_info); -- 2.45.2
Re: [PATCH] tests/qtest/fuzz/virtio_net_fuzz.c: fix virtio_net_fuzz_multi
On 23/05/2024 12.28, Dmitry Frolov wrote: If QTestState was already CLOSED due to error, calling qtest_clock_step() afterwards makes no sense and only raises false-crash with message: "assertion timer != NULL failed". Signed-off-by: Dmitry Frolov --- tests/qtest/fuzz/virtio_net_fuzz.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/tests/qtest/fuzz/virtio_net_fuzz.c b/tests/qtest/fuzz/virtio_net_fuzz.c index e239875e3b..2f57a8ddd8 100644 --- a/tests/qtest/fuzz/virtio_net_fuzz.c +++ b/tests/qtest/fuzz/virtio_net_fuzz.c @@ -81,6 +81,9 @@ static void virtio_net_fuzz_multi(QTestState *s, /* Run the main loop */ qtest_clock_step(s, 100); flush_events(s); +if (!qtest_probe_child(s)) { +return; +} According to your patch description, it rather sounds like the check should be done before the qtest_clock_step() ? ... or where does the QTestState get closed? During flush_events() ? Thomas
Re: [PATCH 2/2] QTest example for RISC-V CSR register
On 13/06/2024 11.56, Ivan Klokov wrote: Added demo for reading CSR register from qtest environment. Signed-off-by: Ivan Klokov --- tests/qtest/meson.build | 2 ++ tests/qtest/riscv-csr-test.c | 68 2 files changed, 70 insertions(+) create mode 100644 tests/qtest/riscv-csr-test.c diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build index 12792948ff..45d651da99 100644 --- a/tests/qtest/meson.build +++ b/tests/qtest/meson.build @@ -259,6 +259,8 @@ qtests_s390x = \ qtests_riscv32 = \ (config_all_devices.has_key('CONFIG_SIFIVE_E_AON') ? ['sifive-e-aon-watchdog-test'] : []) +qtests_riscv32 += ['riscv-csr-test'] + qos_test_ss = ss.source_set() qos_test_ss.add( 'ac97-test.c', diff --git a/tests/qtest/riscv-csr-test.c b/tests/qtest/riscv-csr-test.c new file mode 100644 index 00..715d5fe4b7 --- /dev/null +++ b/tests/qtest/riscv-csr-test.c @@ -0,0 +1,68 @@ +/* + * QTest testcase for RISC-V CSRs + * + * Copyright (c) 2024 Syntacore. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License as published by the + * Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License + * for more details. + */ + +#include "qemu/osdep.h" +#include "libqtest-single.h" +#include "qemu/error-report.h" + +#include "qapi/qmp/qdict.h" +#include "qapi/qmp/qjson.h" +#include "qapi/qmp/qlist.h" +#include "qapi/qmp/qstring.h" +#include "qapi/qmp/qobject.h" +#include "qapi/qobject-input-visitor.h" +#include "qapi/qobject-output-visitor.h" +#include "qom/object_interfaces.h" Do you really need all these headers for the short code below? Please double check and trim the list. 
+#include "qemu/osdep.h" +#include "qapi/error.h" +#include "qemu/error-report.h" +#include "hw/qdev-properties.h" + +#include "qemu/osdep.h" Duplicate include statement, please remove. +#include "qemu/cutils.h" +#include "libqtest.h" + +#include "libqos/csr.h" +#include "libqos/libqos.h" + +static void run_test_csr(void) +{ + +uint64_t res; +uint64_t val = 0; + +res = qcsr_get_csr(global_qtest, 0, 0xf11, ); + +g_assert_cmpint(res, ==, 0); +g_assert_cmpint(val, ==, 0x100); +} + +int main(int argc, char **argv) +{ +g_test_init(, , NULL); + +qtest_add_func("/cpu/csr", run_test_csr); + +qtest_start("--nographic -machine virt -cpu any,mvendorid=0x100"); You don't need --nographic, it's been taken care of by the libqtest framework already. +g_test_run(); + +qtest_quit(global_qtest); + +return 0; You should return the result of g_test_run() here, otherwise your test will look like it always succeeds. +} Thomas
Re: [PATCH] tests/qtest/fuzz: fix memleak in qos_fuzz.c
On 21/05/2024 12.31, Dmitry Frolov wrote: Found with fuzzing for qemu-8.2, but also relevant for master Signed-off-by: Dmitry Frolov --- tests/qtest/fuzz/qos_fuzz.c | 1 + 1 file changed, 1 insertion(+) diff --git a/tests/qtest/fuzz/qos_fuzz.c b/tests/qtest/fuzz/qos_fuzz.c index b71e945c5f..d3839bf999 100644 --- a/tests/qtest/fuzz/qos_fuzz.c +++ b/tests/qtest/fuzz/qos_fuzz.c @@ -180,6 +180,7 @@ static void walk_path(QOSGraphNode *orig_path, int len) fuzz_path_vec = path_vec; } else { +g_string_free(cmd_line, true); g_free(path_vec); } Reviewed-by: Thomas Huth
Re: Qemu License question
On 13/06/2024 07.22, Markus Armbruster wrote: Manos Pitsidianakis writes: On Thu, 13 Jun 2024 06:26, Peng Fan wrote: Hi All, The following files are marked as GPL-3.0-or-later. Will these Conflict with Qemu LICENSE? Should we update the files to GPL-2.0? ./tests/tcg/aarch64/semicall.h:7: * SPDX-License-Identifier: GPL-3.0-or-later ./tests/tcg/x86_64/system/boot.S:13: * SPDX-License-Identifier: GPL-3.0-or-later ./tests/tcg/riscv64/semicall.h:7: * SPDX-License-Identifier: GPL-3.0-or-later ./tests/tcg/multiarch/float_convs.c:6: * SPDX-License-Identifier: GPL-3.0-or-later ./tests/tcg/multiarch/float_helpers.h:6: * SPDX-License-Identifier: GPL-3.0-or-later ./tests/tcg/multiarch/libs/float_helpers.c:10: * SPDX-License-Identifier: GPL-3.0-or-later ./tests/tcg/multiarch/arm-compat-semi/semihosting.c:7: * SPDX-License-Identifier: GPL-3.0-or-later ./tests/tcg/multiarch/arm-compat-semi/semiconsole.c:7: * SPDX-License-Identifier: GPL-3.0-or-later ./tests/tcg/multiarch/float_convd.c:6: * SPDX-License-Identifier: GPL-3.0-or-later ./tests/tcg/multiarch/float_madds.c:6: * SPDX-License-Identifier: GPL-3.0-or-later ./tests/tcg/i386/system/boot.S:10: * SPDX-License-Identifier: GPL-3.0-or-later ./tests/tcg/arm/semicall.h:7: * SPDX-License-Identifier: GPL-3.0-or-later Thanks, Peng. Hello Peng, These are all actually GPL-2.0-or-later, in fact I can't find the string GPL-3.0-or-later in the current master at all. See commit 542b10bd148a (tests/tcg: update licenses to GPLv2 as intended). Maybe it could be included in the stable releases before 9.0, too? CC:-ing qemu-stable for this now. Thomas
Re: [PATCH 1/1] s390/virtio_ccw: fix config change notifications
On 11/06/2024 23.47, Halil Pasic wrote: Commit e3e9bda38e6d ("s390/virtio_ccw: use DMA handle from DMA API") broke configuration change notifications for virtio-ccw by putting the DMA address of *indicatorp directly into ccw->cda disregarding the fact that if !!(vcdev->is_thinint) then the function virtio_ccw_register_adapter_ind() will overwrite that ccw->cda value with the address of the virtio_thinint_area so it can actually set up the adapter interrupts via CCW_CMD_SET_IND_ADAPTER. Thus we end up pointing to the wrong object for both CCW_CMD_SET_IND if setting up the adapter interrupts fails, and for CCW_CMD_SET_CONF_IND regardless of whether it succeeds or fails. To fix this, let us save away the dma address of *indicatorp in a local variable, and copy it to ccw->cda after the "vcdev->is_thinint" branch. Reported-by: Boqiao Fu Reported-by: Sebastian Mitterle Fixes: e3e9bda38e6d ("s390/virtio_ccw: use DMA handle from DMA API") Signed-off-by: Halil Pasic --- I know that checkpatch.pl complains about a missing 'Closes' tag. Unfortunately I don't have an appropriate URL at hand. @Sebastian, @Boqiao: do you have any suggestions? Closes: https://issues.redhat.com/browse/RHEL-39983 ? Anyway, I've tested the patch and it indeed fixes the problem with virtio-balloon and the link state for me: Tested-by: Thomas Huth
[PULL 13/15] tests/unit/test-smp-parse: Test "modules" and "dies" combination case
From: Zhao Liu Since i386 PC machine supports both "modules" and "dies" in -smp, add the "modules" and "dies" combination test case to match the actual topology usage scenario. Signed-off-by: Zhao Liu Tested-by: Yongwei Ma Message-ID: <20240529061925.350323-8-zhao1@intel.com> Signed-off-by: Thomas Huth --- tests/unit/test-smp-parse.c | 103 1 file changed, 103 insertions(+) diff --git a/tests/unit/test-smp-parse.c b/tests/unit/test-smp-parse.c index 01832e5eda..2ca8530e93 100644 --- a/tests/unit/test-smp-parse.c +++ b/tests/unit/test-smp-parse.c @@ -445,6 +445,33 @@ static const struct SMPTestData data_with_dies_invalid[] = { }, }; +static const struct SMPTestData data_with_modules_dies_invalid[] = { +{ +/* + * config: -smp 200,sockets=3,dies=5,modules=2,cores=4,\ + * threads=2,maxcpus=200 + */ +.config = SMP_CONFIG_WITH_MODS_DIES(T, 200, T, 3, T, 5, T, +2, T, 4, T, 2, T, 200), +.expect_error = "Invalid CPU topology: " +"product of the hierarchy must match maxcpus: " +"sockets (3) * dies (5) * modules (2) * " +"cores (4) * threads (2) != maxcpus (200)", +}, { +/* + * config: -smp 242,sockets=3,dies=5,modules=2,cores=4,\ + * threads=2,maxcpus=240 + */ +.config = SMP_CONFIG_WITH_MODS_DIES(T, 242, T, 3, T, 5, T, +2, T, 4, T, 2, T, 240), +.expect_error = "Invalid CPU topology: " +"maxcpus must be equal to or greater than smp: " +"sockets (3) * dies (5) * modules (2) * " +"cores (4) * threads (2) " +"== maxcpus (240) < smp_cpus (242)", +}, +}; + static const struct SMPTestData data_with_clusters_invalid[] = { { /* config: -smp 16,sockets=2,clusters=2,cores=4,threads=2,maxcpus=16 */ @@ -905,6 +932,14 @@ static void machine_with_dies_class_init(ObjectClass *oc, void *data) mc->smp_props.dies_supported = true; } +static void machine_with_modules_dies_class_init(ObjectClass *oc, void *data) +{ +MachineClass *mc = MACHINE_CLASS(oc); + +mc->smp_props.modules_supported = true; +mc->smp_props.dies_supported = true; +} + static void machine_with_clusters_class_init(ObjectClass 
*oc, void *data) { MachineClass *mc = MACHINE_CLASS(oc); @@ -1082,6 +1117,67 @@ static void test_with_dies(const void *opaque) object_unref(obj); } +static void test_with_modules_dies(const void *opaque) +{ +const char *machine_type = opaque; +Object *obj = object_new(machine_type); +MachineState *ms = MACHINE(obj); +MachineClass *mc = MACHINE_GET_CLASS(obj); +SMPTestData data = {}; +unsigned int num_modules = 5, num_dies = 3; +int i; + +for (i = 0; i < ARRAY_SIZE(data_generic_valid); i++) { +data = data_generic_valid[i]; +unsupported_params_init(mc, ); + +/* + * when modules and dies parameters are omitted, they will + * be both set as 1. + */ +data.expect_prefer_sockets.modules = 1; +data.expect_prefer_sockets.dies = 1; +data.expect_prefer_cores.modules = 1; +data.expect_prefer_cores.dies = 1; + +smp_parse_test(ms, , true); + +/* when modules and dies parameters are both specified */ +data.config.has_modules = true; +data.config.modules = num_modules; +data.config.has_dies = true; +data.config.dies = num_dies; + +if (data.config.has_cpus) { +data.config.cpus *= num_modules * num_dies; +} +if (data.config.has_maxcpus) { +data.config.maxcpus *= num_modules * num_dies; +} + +data.expect_prefer_sockets.modules = num_modules; +data.expect_prefer_sockets.dies = num_dies; +data.expect_prefer_sockets.cpus *= num_modules * num_dies; +data.expect_prefer_sockets.max_cpus *= num_modules * num_dies; + +data.expect_prefer_cores.modules = num_modules; +data.expect_prefer_cores.dies = num_dies; +data.expect_prefer_cores.cpus *= num_modules * num_dies; +data.expect_prefer_cores.max_cpus *= num_modules * num_dies; + +smp_parse_test(ms, , true); +} + +for (i = 0; i < ARRAY_SIZE(data_with_modules_dies_invalid); i++) { +data = data_with_modules_dies_invalid[i]; +unsupported_params_init(mc, ); + +smp_parse_test(ms, , false); +} + +object_unref(obj); +} + static void test_with_clusters(const void *opaque) { const char *machine_type = opaque; @@ -1398,6 +1494,10 @@ static const TypeInfo 
smp_machine_types[] = {
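The two error messages these invalid cases expect boil down to a pair of arithmetic checks. The following is a minimal sketch of that logic — smp_check is a made-up name, not QEMU's actual -smp parser; it only mirrors the product comparison and the maxcpus comparison for the i386 sockets/dies/modules/cores/threads hierarchy used above.

```c
#include <assert.h>
#include <string.h>

/* Sketch of the two -smp consistency checks the invalid test data
 * above expects to trip; the real validation is in QEMU's SMP
 * parsing code, this only reproduces the arithmetic. */
static const char *smp_check(unsigned cpus, unsigned sockets, unsigned dies,
                             unsigned modules, unsigned cores,
                             unsigned threads, unsigned maxcpus)
{
    unsigned product = sockets * dies * modules * cores * threads;

    if (product != maxcpus) {
        return "product of the hierarchy must match maxcpus";
    }
    if (maxcpus < cpus) {
        return "maxcpus must be equal to or greater than smp";
    }
    return "ok";
}
```

For -smp 200,sockets=3,dies=5,modules=2,cores=4,threads=2,maxcpus=200 the product is 3 * 5 * 2 * 4 * 2 = 240 != 200, hence the first error; the -smp 242 case passes the product check (240 == maxcpus) but fails the second because 240 < 242.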
[PULL 03/15] tests/qtest/libqtest: add qtest_has_cpu_model() api
From: Ani Sinha Added a new test API qtest_has_cpu_model() in order to check availability of some cpu models in the current QEMU binary. The specific architecture of the QEMU binary is selected using the QTEST_QEMU_BINARY environment variable. This API would be useful to run tests against some older cpu models after checking if QEMU actually supports these models. Signed-off-by: Ani Sinha Reviewed-by: Daniel P. Berrangé Message-ID: <20240610155303.7933-3-anisi...@redhat.com> Signed-off-by: Thomas Huth --- tests/qtest/libqtest.h | 8 tests/qtest/libqtest.c | 83 ++ 2 files changed, 91 insertions(+) diff --git a/tests/qtest/libqtest.h b/tests/qtest/libqtest.h index 6e3d3525bf..beb96b18eb 100644 --- a/tests/qtest/libqtest.h +++ b/tests/qtest/libqtest.h @@ -949,6 +949,14 @@ bool qtest_has_machine(const char *machine); */ bool qtest_has_machine_with_env(const char *var, const char *machine); +/** + * qtest_has_cpu_model: + * @cpu: The cpu to look for + * + * Returns: true if the cpu is available in the target binary. 
+ */ +bool qtest_has_cpu_model(const char *cpu); + /** * qtest_has_device: * @device: The device to look for diff --git a/tests/qtest/libqtest.c b/tests/qtest/libqtest.c index d8f80d335e..18e2f7f282 100644 --- a/tests/qtest/libqtest.c +++ b/tests/qtest/libqtest.c @@ -37,6 +37,7 @@ #include "qapi/qmp/qjson.h" #include "qapi/qmp/qlist.h" #include "qapi/qmp/qstring.h" +#include "qapi/qmp/qbool.h" #define MAX_IRQ 256 @@ -1471,6 +1472,12 @@ struct MachInfo { char *alias; }; +struct CpuModel { +char *name; +char *alias_of; +bool deprecated; +}; + static void qtest_free_machine_list(struct MachInfo *machines) { if (machines) { @@ -1550,6 +1557,82 @@ static struct MachInfo *qtest_get_machines(const char *var) return machines; } +static struct CpuModel *qtest_get_cpu_models(void) +{ +static struct CpuModel *cpus; +QDict *response, *minfo; +QList *list; +const QListEntry *p; +QObject *qobj; +QString *qstr; +QBool *qbool; +QTestState *qts; +int idx; + +if (cpus) { +return cpus; +} + +silence_spawn_log = !g_test_verbose(); + +qts = qtest_init_with_env(NULL, "-machine none"); +response = qtest_qmp(qts, "{ 'execute': 'query-cpu-definitions' }"); +g_assert(response); +list = qdict_get_qlist(response, "return"); +g_assert(list); + +cpus = g_new0(struct CpuModel, qlist_size(list) + 1); + +for (p = qlist_first(list), idx = 0; p; p = qlist_next(p), idx++) { +minfo = qobject_to(QDict, qlist_entry_obj(p)); +g_assert(minfo); + +qobj = qdict_get(minfo, "name"); +g_assert(qobj); +qstr = qobject_to(QString, qobj); +g_assert(qstr); +cpus[idx].name = g_strdup(qstring_get_str(qstr)); + +qobj = qdict_get(minfo, "alias_of"); +if (qobj) { /* old machines do not report aliases */ +qstr = qobject_to(QString, qobj); +g_assert(qstr); +cpus[idx].alias_of = g_strdup(qstring_get_str(qstr)); +} else { +cpus[idx].alias_of = NULL; +} + +qobj = qdict_get(minfo, "deprecated"); +qbool = qobject_to(QBool, qobj); +g_assert(qbool); +cpus[idx].deprecated = qbool_get_bool(qbool); +} + +qtest_quit(qts); 
+qobject_unref(response); + +silence_spawn_log = false; + +return cpus; +} + +bool qtest_has_cpu_model(const char *cpu) +{ +struct CpuModel *cpus; +int i; + +cpus = qtest_get_cpu_models(); + +for (i = 0; cpus[i].name != NULL; i++) { +if (g_str_equal(cpu, cpus[i].name) || +(cpus[i].alias_of && g_str_equal(cpu, cpus[i].alias_of))) { +return true; +} +} + +return false; +} + void qtest_cb_for_every_machine(void (*cb)(const char *machine), bool skip_old_versioned) { -- 2.45.2
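The alias handling is the interesting part of qtest_has_cpu_model(): a requested model matches either a name or the alias_of reported by query-cpu-definitions. A standalone, glib-free sketch of that lookup (static sample data stands in for the live QMP query; the entry names below are made up):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct CpuModel {
    const char *name;
    const char *alias_of; /* NULL when the model is not an alias */
};

/* Same matching rule as the patch: accept a direct name match, or a
 * match against the canonical name an alias points to. */
static bool has_cpu_model(const struct CpuModel *cpus, const char *cpu)
{
    for (size_t i = 0; cpus[i].name != NULL; i++) {
        if (strcmp(cpu, cpus[i].name) == 0 ||
            (cpus[i].alias_of && strcmp(cpu, cpus[i].alias_of) == 0)) {
            return true;
        }
    }
    return false;
}

/* Made-up sample table; real entries come from query-cpu-definitions. */
static const struct CpuModel sample[] = {
    { "qemu64-v1", NULL },
    { "qemu64", "qemu64-v1" },
    { NULL, NULL },
};
```

The NULL-terminated array mirrors how the patch sizes its allocation (qlist_size(list) + 1), so callers can iterate without carrying a separate count.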
[PULL 15/15] tests/tcg/s390x: Allow specifying extra QEMU options on the command line
From: Ilya Leoshkevich The use case for this is `make check-tcg EXTFLAGS="-accel kvm"`, which allows validating the system TCG testcases on real hardware. EXTFLAGS name is borrowed from tests/tcg/xtensa/Makefile.softmmu-target. While at it, use += instead of = in order to be consistent with the other architectures. Signed-off-by: Ilya Leoshkevich Reviewed-by: Thomas Huth Message-ID: <20240522184116.35975-1-...@linux.ibm.com> Signed-off-by: Thomas Huth --- tests/tcg/s390x/Makefile.softmmu-target | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tests/tcg/s390x/Makefile.softmmu-target b/tests/tcg/s390x/Makefile.softmmu-target index 80159cccf5..4c8e15e625 100644 --- a/tests/tcg/s390x/Makefile.softmmu-target +++ b/tests/tcg/s390x/Makefile.softmmu-target @@ -1,6 +1,6 @@ S390X_SRC=$(SRC_PATH)/tests/tcg/s390x VPATH+=$(S390X_SRC) -QEMU_OPTS=-action panic=exit-failure -nographic -kernel +QEMU_OPTS+=-action panic=exit-failure -nographic $(EXTFLAGS) -kernel LINK_SCRIPT=$(S390X_SRC)/softmmu.ld CFLAGS+=-ggdb -O0 LDFLAGS=-nostdlib -static -- 2.45.2
[PULL 04/15] tests/qtest/x86: check for availability of older cpu models before running tests
From: Ani Sinha It is better to check if some older cpu models like 486, athlon, pentium, penryn, phenom, core2duo etc are available before running their corresponding tests. Some downstream distributions may no longer support these older cpu models. Signature of add_feature_test() has been modified to return void as FeatureTestArgs* was not used by the caller. One minor correction. Replaced 'phenom' with '486' in the test 'x86/cpuid/auto-level/phenom/arat' matching the cpu used. Signed-off-by: Ani Sinha Reviewed-by: Daniel P. Berrangé Message-ID: <20240610155303.7933-4-anisi...@redhat.com> Signed-off-by: Thomas Huth --- tests/qtest/test-x86-cpuid-compat.c | 170 ++-- 1 file changed, 108 insertions(+), 62 deletions(-) diff --git a/tests/qtest/test-x86-cpuid-compat.c b/tests/qtest/test-x86-cpuid-compat.c index 6a39454fce..b9e7e5ef7b 100644 --- a/tests/qtest/test-x86-cpuid-compat.c +++ b/tests/qtest/test-x86-cpuid-compat.c @@ -67,10 +67,29 @@ static void test_cpuid_prop(const void *data) g_free(path); } -static void add_cpuid_test(const char *name, const char *cmdline, +static void add_cpuid_test(const char *name, const char *cpu, + const char *cpufeat, const char *machine, const char *property, int64_t expected_value) { CpuidTestArgs *args = g_new0(CpuidTestArgs, 1); +char *cmdline; +char *save; + +if (!qtest_has_cpu_model(cpu)) { +return; +} +cmdline = g_strdup_printf("-cpu %s", cpu); + +if (cpufeat) { +save = cmdline; +cmdline = g_strdup_printf("%s,%s", cmdline, cpufeat); +g_free(save); +} +if (machine) { +save = cmdline; +cmdline = g_strdup_printf("-machine %s %s", machine, cmdline); +g_free(save); +} args->cmdline = cmdline; args->property = property; args->expected_value = expected_value; @@ -149,12 +168,24 @@ static void test_feature_flag(const void *data) * either "feature-words" or "filtered-features", when running QEMU * using cmdline */ -static FeatureTestArgs *add_feature_test(const char *name, const char *cmdline, - uint32_t eax, uint32_t ecx, - const 
char *reg, int bitnr, - bool expected_value) +static void add_feature_test(const char *name, const char *cpu, + const char *cpufeat, uint32_t eax, + uint32_t ecx, const char *reg, + int bitnr, bool expected_value) { FeatureTestArgs *args = g_new0(FeatureTestArgs, 1); +char *cmdline; + +if (!qtest_has_cpu_model(cpu)) { +return; +} + +if (cpufeat) { +cmdline = g_strdup_printf("-cpu %s,%s", cpu, cpufeat); +} else { +cmdline = g_strdup_printf("-cpu %s", cpu); +} + args->cmdline = cmdline; args->in_eax = eax; args->in_ecx = ecx; @@ -162,13 +193,17 @@ static FeatureTestArgs *add_feature_test(const char *name, const char *cmdline, args->bitnr = bitnr; args->expected_value = expected_value; qtest_add_data_func(name, args, test_feature_flag); -return args; +return; } static void test_plus_minus_subprocess(void) { char *path; +if (!qtest_has_cpu_model("pentium")) { +return; +} + /* Rules: * 1)"-foo" overrides "+foo" * 2) "[+-]foo" overrides "foo=..." @@ -198,6 +233,10 @@ static void test_plus_minus_subprocess(void) static void test_plus_minus(void) { +if (!qtest_has_cpu_model("pentium")) { +return; +} + g_test_trap_subprocess("/x86/cpuid/parsing-plus-minus/subprocess", 0, 0); g_test_trap_assert_passed(); g_test_trap_assert_stderr("*Ambiguous CPU model string. " @@ -217,99 +256,105 @@ int main(int argc, char **argv) /* Original level values for CPU models: */ add_cpuid_test("x86/cpuid/phenom/level", - "-cpu phenom", "level", 5); + "phenom", NULL, NULL, "level", 5); add_cpuid_test("x86/cpuid/Conroe/level", - "-cpu Conroe", "level", 10); + "Conroe", NULL, NULL, "level", 10); add_cpuid_test("x86/cpuid/SandyBridge/level", - "-cpu SandyBridge", "level", 0xd); + "SandyBridge", NULL, NULL, "level", 0xd); add_cpuid_test("x86/cpuid/486/xlevel", - "-cpu 486", "xlevel", 0); + "486", NULL, NULL, "xlevel", 0); add_cpuid_test("x86/cpuid/co
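The reworked helpers assemble the command line from the separate cpu/cpufeat/machine pieces instead of taking a prebuilt string. The same assembly can be sketched without glib (snprintf and a static buffer instead of g_strdup_printf; build_cmdline is a made-up name, not part of the patch):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Builds "-machine M -cpu CPU,FEAT", skipping the optional parts,
 * in the same order the patched add_cpuid_test() produces. */
static const char *build_cmdline(const char *cpu, const char *cpufeat,
                                 const char *machine)
{
    static char buf[256];
    size_t off = 0;

    buf[0] = '\0';
    if (machine) {
        off += snprintf(buf + off, sizeof(buf) - off, "-machine %s ", machine);
    }
    off += snprintf(buf + off, sizeof(buf) - off, "-cpu %s", cpu);
    if (cpufeat) {
        snprintf(buf + off, sizeof(buf) - off, ",%s", cpufeat);
    }
    return buf;
}
```

Splitting the arguments this way is what makes the early `if (!qtest_has_cpu_model(cpu)) return;` check possible: the helper now knows which cpu model a test depends on before any command line exists.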
[PULL 14/15] tests/unit/test-smp-parse: Test the full 8-levels topology hierarchy
From: Zhao Liu With module level, QEMU now support 8-levels topology hierarchy. Cover "modules" in SMP_CONFIG_WITH_FULL_TOPO related cases. Signed-off-by: Zhao Liu Tested-by: Yongwei Ma Message-ID: <20240529061925.350323-9-zhao1@intel.com> Signed-off-by: Thomas Huth --- tests/unit/test-smp-parse.c | 129 1 file changed, 85 insertions(+), 44 deletions(-) diff --git a/tests/unit/test-smp-parse.c b/tests/unit/test-smp-parse.c index 2ca8530e93..f9bccb56ab 100644 --- a/tests/unit/test-smp-parse.c +++ b/tests/unit/test-smp-parse.c @@ -94,11 +94,11 @@ } /* - * Currently QEMU supports up to a 7-level topology hierarchy, which is the + * Currently QEMU supports up to a 8-level topology hierarchy, which is the * QEMU's unified abstract representation of CPU topology. - * -drawers/books/sockets/dies/clusters/cores/threads + * -drawers/books/sockets/dies/clusters/modules/cores/threads */ -#define SMP_CONFIG_WITH_FULL_TOPO(a, b, c, d, e, f, g, h, i)\ +#define SMP_CONFIG_WITH_FULL_TOPO(a, b, c, d, e, f, g, h, i, j) \ { \ .has_cpus = true, .cpus = a,\ .has_drawers = true, .drawers = b,\ @@ -106,9 +106,10 @@ .has_sockets = true, .sockets = d,\ .has_dies = true, .dies = e,\ .has_clusters = true, .clusters = f,\ -.has_cores= true, .cores= g,\ -.has_threads = true, .threads = h,\ -.has_maxcpus = true, .maxcpus = i,\ +.has_modules = true, .modules = g,\ +.has_cores= true, .cores= h,\ +.has_threads = true, .threads = i,\ +.has_maxcpus = true, .maxcpus = j,\ } /** @@ -336,10 +337,10 @@ static const struct SMPTestData data_generic_valid[] = { /* * Unsupported parameters are always allowed to be set to '1' * config: - * -smp 8,drawers=1,books=1,sockets=2,dies=1,clusters=1,cores=2,\ - *threads=2,maxcpus=8 + * -smp 8,drawers=1,books=1,sockets=2,dies=1,clusters=1,modules=1,\ + *cores=2,threads=2,maxcpus=8 * expect: cpus=8,sockets=2,cores=2,threads=2,maxcpus=8 */ -.config = SMP_CONFIG_WITH_FULL_TOPO(8, 1, 1, 2, 1, 1, 2, 2, 8), +.config = SMP_CONFIG_WITH_FULL_TOPO(8, 1, 1, 2, 1, 1, 1, 2, 2, 8), 
.expect_prefer_sockets = CPU_TOPOLOGY_GENERIC(8, 2, 2, 2, 8), .expect_prefer_cores = CPU_TOPOLOGY_GENERIC(8, 2, 2, 2, 8), }, @@ -561,32 +562,37 @@ static const struct SMPTestData data_full_topo_invalid[] = { { /* * config: -smp 200,drawers=3,books=5,sockets=2,dies=4,\ - * clusters=2,cores=7,threads=2,maxcpus=200 + * clusters=2,modules=3,cores=7,threads=2,\ + * maxcpus=200 */ -.config = SMP_CONFIG_WITH_FULL_TOPO(200, 3, 5, 2, 4, 2, 7, 2, 200), +.config = SMP_CONFIG_WITH_FULL_TOPO(200, 3, 5, 2, 4, 2, 3, 7, 2, 200), .expect_error = "Invalid CPU topology: " "product of the hierarchy must match maxcpus: " "drawers (3) * books (5) * sockets (2) * dies (4) * " -"clusters (2) * cores (7) * threads (2) " +"clusters (2) * modules (3) * cores (7) * threads (2) " "!= maxcpus (200)", }, { /* - * config: -smp 3361,drawers=3,books=5,sockets=2,dies=4,\ - * clusters=2,cores=7,threads=2,maxcpus=3360 + * config: -smp 2881,drawers=3,books=5,sockets=2,dies=4,\ + * clusters=2,modules=3,cores=2,threads=2, + * maxcpus=2880 */ -.config = SMP_CONFIG_WITH_FULL_TOPO(3361, 3, 5, 2, 4, 2, 7, 2, 3360), +.config = SMP_CONFIG_WITH_FULL_TOPO(2881, 3, 5, 2, 4, +2, 3, 2, 2, 2880), .expect_error = "Invalid CPU topology: " "maxcpus must be equal to or greater than smp: " -"drawers (3) * books (5) * sockets (2) * dies (4) * " -"clusters (2) * cores (7) * threads (2) " -"== maxcpus (3360) < smp_cpus (3361)", +"drawers (3) * books (5) * sockets (2) * " +"dies (4) * clusters (2) * modules (3) * " +"cores (2) * threads (2) == maxcpus (2880) " +"< smp
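The invalid full-topology cases exercise two arithmetic checks over all eight levels in QEMU's canonical order drawers/books/sockets/dies/clusters/modules/cores/threads. A sketch with a made-up function name, reproducing only the arithmetic behind the expected error strings:

```c
#include <assert.h>
#include <string.h>

/* Sketch of the full 8-level -smp consistency check; the real logic
 * lives in QEMU's SMP parsing code, this mirrors only the math. */
static const char *full_topo_check(unsigned cpus, unsigned drawers,
                                   unsigned books, unsigned sockets,
                                   unsigned dies, unsigned clusters,
                                   unsigned modules, unsigned cores,
                                   unsigned threads, unsigned maxcpus)
{
    unsigned product = drawers * books * sockets * dies *
                       clusters * modules * cores * threads;

    if (product != maxcpus) {
        return "product of the hierarchy must match maxcpus";
    }
    if (maxcpus < cpus) {
        return "maxcpus must be equal to or greater than smp";
    }
    return "ok";
}
```

The first invalid case above fails the product check (3 * 5 * 2 * 4 * 2 * 3 * 7 * 2 = 10080 != 200); the second has a consistent product (2880) but smp_cpus 2881 exceeds maxcpus 2880.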
[PULL 12/15] tests/unit/test-smp-parse: Test "modules" parameter in -smp
From: Zhao Liu Cover the module cases in test-smp-parse. Signed-off-by: Zhao Liu Tested-by: Yongwei Ma Message-ID: <20240529061925.350323-7-zhao1@intel.com> Signed-off-by: Thomas Huth --- tests/unit/test-smp-parse.c | 112 +--- 1 file changed, 103 insertions(+), 9 deletions(-) diff --git a/tests/unit/test-smp-parse.c b/tests/unit/test-smp-parse.c index 2214e47ba9..01832e5eda 100644 --- a/tests/unit/test-smp-parse.c +++ b/tests/unit/test-smp-parse.c @@ -48,17 +48,19 @@ } /* - * Currently a 4-level topology hierarchy is supported on PC machines - * -sockets/dies/cores/threads + * Currently a 5-level topology hierarchy is supported on PC machines + * -sockets/dies/modules/cores/threads */ -#define SMP_CONFIG_WITH_DIES(ha, a, hb, b, hc, c, hd, d, he, e, hf, f) \ +#define SMP_CONFIG_WITH_MODS_DIES(ha, a, hb, b, hc, c, hd, d, \ + he, e, hf, f, hg, g)\ { \ .has_cpus= ha, .cpus= a, \ .has_sockets = hb, .sockets = b, \ .has_dies= hc, .dies= c, \ -.has_cores = hd, .cores = d, \ -.has_threads = he, .threads = e, \ -.has_maxcpus = hf, .maxcpus = f, \ +.has_modules = hd, .modules = d, \ +.has_cores = he, .cores = e, \ +.has_threads = hf, .threads = f, \ +.has_maxcpus = hg, .maxcpus = g, \ } /* @@ -345,8 +347,14 @@ static const struct SMPTestData data_generic_valid[] = { static const struct SMPTestData data_generic_invalid[] = { { +/* config: -smp 2,modules=2 */ +.config = SMP_CONFIG_WITH_MODS_DIES(T, 2, F, 0, F, 0, T, 2, +F, 0, F, 0, F, 0), +.expect_error = "modules > 1 not supported by this machine's CPU topology", +}, { /* config: -smp 2,dies=2 */ -.config = SMP_CONFIG_WITH_DIES(T, 2, F, 0, T, 2, F, 0, F, 0, F, 0), +.config = SMP_CONFIG_WITH_MODS_DIES(T, 2, F, 0, T, 2, F, 0, +F, 0, F, 0, F, 0), .expect_error = "dies > 1 not supported by this machine's CPU topology", }, { /* config: -smp 2,clusters=2 */ @@ -397,17 +405,39 @@ static const struct SMPTestData data_generic_invalid[] = { }, }; +static const struct SMPTestData data_with_modules_invalid[] = { +{ +/* config: -smp 
16,sockets=2,modules=2,cores=4,threads=2,maxcpus=16 */ +.config = SMP_CONFIG_WITH_MODS_DIES(T, 16, T, 2, F, 0, T, 2, +T, 4, T, 2, T, 16), +.expect_error = "Invalid CPU topology: " +"product of the hierarchy must match maxcpus: " +"sockets (2) * modules (2) * cores (4) * threads (2) " +"!= maxcpus (16)", +}, { +/* config: -smp 34,sockets=2,modules=2,cores=4,threads=2,maxcpus=32 */ +.config = SMP_CONFIG_WITH_MODS_DIES(T, 34, T, 2, F, 0, T, 2, +T, 4, T, 2, T, 32), +.expect_error = "Invalid CPU topology: " +"maxcpus must be equal to or greater than smp: " +"sockets (2) * modules (2) * cores (4) * threads (2) " +"== maxcpus (32) < smp_cpus (34)", +}, +}; + static const struct SMPTestData data_with_dies_invalid[] = { { /* config: -smp 16,sockets=2,dies=2,cores=4,threads=2,maxcpus=16 */ -.config = SMP_CONFIG_WITH_DIES(T, 16, T, 2, T, 2, T, 4, T, 2, T, 16), +.config = SMP_CONFIG_WITH_MODS_DIES(T, 16, T, 2, T, 2, F, 0, +T, 4, T, 2, T, 16), .expect_error = "Invalid CPU topology: " "product of the hierarchy must match maxcpus: " "sockets (2) * dies (2) * cores (4) * threads (2) " "!= maxcpus (16)", }, { /* config: -smp 34,sockets=2,dies=2,cores=4,threads=2,maxcpus=32 */ -.config = SMP_CONFIG_WITH_DIES(T, 34, T, 2, T, 2, T, 4, T, 2, T, 32), +.config = SMP_CONFIG_WITH_MODS_DIES(T, 34, T, 2, T, 2, F, 0, +T, 4, T, 2, T, 32), .expect_error = "Invalid CPU topology: " "maxcpus must be equal to or greater than smp: " "sockets (2) * dies (2) * cores (4) * threads (2) " @@ -861,6 +891,13 @@ static void machine_generic_invalid_class_init(ObjectClass
[PULL 00/15] CPU-related test updates
The following changes since commit 80e8f0602168f451a93e71cbb1d59e93d745e62e: Merge tag 'bsd-user-misc-2024q2-pull-request' of gitlab.com:bsdimp/qemu into staging (2024-06-09 11:21:55 -0700) are available in the Git repository at: https://gitlab.com/thuth/qemu.git tags/pull-request-2024-06-12 for you to fetch changes up to 26a09ead7351f117ae780781b347f014da03c20b: tests/tcg/s390x: Allow specifying extra QEMU options on the command line (2024-06-12 12:12:28 +0200) * Fix loongarch64 avocado test * Make qtests more flexible with regards to non-available CPU models * Improvements for the test-smp-parse unit test Ani Sinha (3): qtest/x86/numa-test: do not use the obsolete 'pentium' cpu tests/qtest/libqtest: add qtest_has_cpu_model() api tests/qtest/x86: check for availability of older cpu models before running tests Ilya Leoshkevich (1): tests/tcg/s390x: Allow specifying extra QEMU options on the command line Song Gao (1): tests/avocado: Update LoongArch bios file Zhao Liu (8): tests/unit/test-smp-parse: Fix comments of drawers and books case tests/unit/test-smp-parse: Fix comment of parameters=1 case tests/unit/test-smp-parse: Fix an invalid topology case tests/unit/test-smp-parse: Use default parameters=0 when not set in -smp tests/unit/test-smp-parse: Make test cases aware of module level tests/unit/test-smp-parse: Test "modules" parameter in -smp tests/unit/test-smp-parse: Test "modules" and "dies" combination case tests/unit/test-smp-parse: Test the full 8-levels topology hierarchy Zhenwei Pi (2): meson: Remove libibumad dependence test: Remove libibumad dependence meson.build| 4 +- tests/qtest/libqtest.h | 8 + tests/qtest/libqtest.c | 83 + tests/qtest/numa-test.c| 3 +- tests/qtest/test-x86-cpuid-compat.c| 170 ++ tests/unit/test-smp-parse.c| 373 + scripts/ci/setup/ubuntu/ubuntu-2204-aarch64.yaml | 1 - scripts/ci/setup/ubuntu/ubuntu-2204-s390x.yaml | 1 - tests/avocado/machine_loongarch.py | 8 +- tests/docker/dockerfiles/debian-amd64-cross.docker | 1 - 
tests/docker/dockerfiles/debian-arm64-cross.docker | 1 - tests/docker/dockerfiles/debian-armel-cross.docker | 1 - tests/docker/dockerfiles/debian-armhf-cross.docker | 1 - tests/docker/dockerfiles/debian-i686-cross.docker | 1 - .../dockerfiles/debian-mips64el-cross.docker | 1 - .../docker/dockerfiles/debian-mipsel-cross.docker | 1 - .../docker/dockerfiles/debian-ppc64el-cross.docker | 1 - tests/docker/dockerfiles/debian-s390x-cross.docker | 1 - tests/docker/dockerfiles/debian.docker | 1 - tests/docker/dockerfiles/ubuntu2204.docker | 1 - tests/lcitool/projects/qemu.yml| 1 - tests/tcg/s390x/Makefile.softmmu-target| 2 +- 22 files changed, 518 insertions(+), 147 deletions(-)
[PULL 08/15] tests/unit/test-smp-parse: Fix comment of parameters=1 case
From: Zhao Liu SMP_CONFIG_WITH_FULL_TOPO doesn't support the module level yet, so the parameter should indicate the "clusters". Additionally, reorder the parameters of -smp to match the topology hierarchy order. Signed-off-by: Zhao Liu Reviewed-by: Thomas Huth Tested-by: Yongwei Ma Message-ID: <20240529061925.350323-3-zhao1@intel.com> Signed-off-by: Thomas Huth --- tests/unit/test-smp-parse.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/tests/unit/test-smp-parse.c b/tests/unit/test-smp-parse.c index fa8e7d83a7..c9cbc89c21 100644 --- a/tests/unit/test-smp-parse.c +++ b/tests/unit/test-smp-parse.c @@ -333,7 +333,9 @@ static const struct SMPTestData data_generic_valid[] = { }, { /* * Unsupported parameters are always allowed to be set to '1' - * config: -smp 8,books=1,drawers=1,sockets=2,modules=1,dies=1,cores=2,threads=2,maxcpus=8 + * config: + * -smp 8,drawers=1,books=1,sockets=2,dies=1,clusters=1,cores=2,\ + *threads=2,maxcpus=8 * expect: cpus=8,sockets=2,cores=2,threads=2,maxcpus=8 */ .config = SMP_CONFIG_WITH_FULL_TOPO(8, 1, 1, 2, 1, 1, 2, 2, 8), .expect_prefer_sockets = CPU_TOPOLOGY_GENERIC(8, 2, 2, 2, 8), -- 2.45.2
[PULL 02/15] qtest/x86/numa-test: do not use the obsolete 'pentium' cpu
From: Ani Sinha The 'pentium' cpu is old and obsolete and should be avoided for running tests if it's not strictly needed. Use the 'max' cpu instead for the generic, non-cpu-specific numa test. Reviewed-by: Thomas Huth Reviewed-by: Igor Mammedov Tested-by: Mario Casquero Signed-off-by: Ani Sinha Message-ID: <20240610155303.7933-2-anisi...@redhat.com> Signed-off-by: Thomas Huth --- tests/qtest/numa-test.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c index 5518f6596b..ede418963c 100644 --- a/tests/qtest/numa-test.c +++ b/tests/qtest/numa-test.c @@ -125,7 +125,8 @@ static void pc_numa_cpu(const void *data) QTestState *qts; g_autofree char *cli = NULL; -cli = make_cli(data, "-cpu pentium -machine smp.cpus=8,smp.sockets=2,smp.cores=2,smp.threads=2 " +cli = make_cli(data, +"-cpu max -machine smp.cpus=8,smp.sockets=2,smp.cores=2,smp.threads=2 " "-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 " "-numa cpu,node-id=1,socket-id=0 " "-numa cpu,node-id=0,socket-id=1,core-id=0 " -- 2.45.2
[PULL 11/15] tests/unit/test-smp-parse: Make test cases aware of module level
From: Zhao Liu Currently, -smp supports module level. It is necessary to consider the effects of module in the test cases to ensure that the calculations are correct. This is also the preparation to add module test cases. Signed-off-by: Zhao Liu Reviewed-by: Thomas Huth Tested-by: Yongwei Ma Message-ID: <20240529061925.350323-6-zhao1@intel.com> Signed-off-by: Thomas Huth --- tests/unit/test-smp-parse.c | 13 +++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/tests/unit/test-smp-parse.c b/tests/unit/test-smp-parse.c index e3a0a9d12d..2214e47ba9 100644 --- a/tests/unit/test-smp-parse.c +++ b/tests/unit/test-smp-parse.c @@ -629,6 +629,7 @@ static char *smp_config_to_string(const SMPConfiguration *config) ".has_sockets = %5s, sockets = %" PRId64 ",\n" ".has_dies = %5s, dies = %" PRId64 ",\n" ".has_clusters = %5s, clusters = %" PRId64 ",\n" +".has_modules = %5s, modules = %" PRId64 ",\n" ".has_cores= %5s, cores= %" PRId64 ",\n" ".has_threads = %5s, threads = %" PRId64 ",\n" ".has_maxcpus = %5s, maxcpus = %" PRId64 ",\n" @@ -639,6 +640,7 @@ static char *smp_config_to_string(const SMPConfiguration *config) config->has_sockets ? "true" : "false", config->sockets, config->has_dies ? "true" : "false", config->dies, config->has_clusters ? "true" : "false", config->clusters, +config->has_modules ? "true" : "false", config->modules, config->has_cores ? "true" : "false", config->cores, config->has_threads ? "true" : "false", config->threads, config->has_maxcpus ? 
"true" : "false", config->maxcpus); @@ -679,6 +681,7 @@ static char *cpu_topology_to_string(const CpuTopology *topo, ".sockets= %u,\n" ".dies = %u,\n" ".clusters = %u,\n" +".modules= %u,\n" ".cores = %u,\n" ".threads= %u,\n" ".max_cpus = %u,\n" @@ -688,8 +691,8 @@ static char *cpu_topology_to_string(const CpuTopology *topo, "}", topo->cpus, topo->drawers, topo->books, topo->sockets, topo->dies, topo->clusters, -topo->cores, topo->threads, topo->max_cpus, -threads_per_socket, cores_per_socket, +topo->modules, topo->cores, topo->threads, +topo->max_cpus, threads_per_socket, cores_per_socket, has_clusters ? "true" : "false"); } @@ -732,6 +735,7 @@ static void check_parse(MachineState *ms, const SMPConfiguration *config, (ms->smp.sockets == expect_topo->sockets) && (ms->smp.dies == expect_topo->dies) && (ms->smp.clusters == expect_topo->clusters) && +(ms->smp.modules == expect_topo->modules) && (ms->smp.cores == expect_topo->cores) && (ms->smp.threads == expect_topo->threads) && (ms->smp.max_cpus == expect_topo->max_cpus) && @@ -812,6 +816,11 @@ static void smp_parse_test(MachineState *ms, SMPTestData *data, bool is_valid) /* The parsed results of the unsupported parameters should be 1 */ static void unsupported_params_init(const MachineClass *mc, SMPTestData *data) { +if (!mc->smp_props.modules_supported) { +data->expect_prefer_sockets.modules = 1; +data->expect_prefer_cores.modules = 1; +} + if (!mc->smp_props.dies_supported) { data->expect_prefer_sockets.dies = 1; data->expect_prefer_cores.dies = 1; -- 2.45.2
[PULL 07/15] tests/unit/test-smp-parse: Fix comments of drawers and books case
From: Zhao Liu Fix the comments to match the actual configurations. Signed-off-by: Zhao Liu Reviewed-by: Thomas Huth Tested-by: Yongwei Ma Message-ID: <20240529061925.350323-2-zhao1@intel.com> Signed-off-by: Thomas Huth --- tests/unit/test-smp-parse.c | 8 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/tests/unit/test-smp-parse.c b/tests/unit/test-smp-parse.c index 9fdba24fce..fa8e7d83a7 100644 --- a/tests/unit/test-smp-parse.c +++ b/tests/unit/test-smp-parse.c @@ -474,8 +474,8 @@ static const struct SMPTestData data_with_drawers_invalid[] = { static const struct SMPTestData data_with_drawers_books_invalid[] = { { /* - * config: -smp 200,drawers=2,books=2,sockets=2,cores=4,\ - * threads=2,maxcpus=200 + * config: -smp 200,drawers=3,books=5,sockets=2,cores=4,\ + * threads=2,maxcpus=200 */ .config = SMP_CONFIG_WITH_BOOKS_DRAWERS(T, 200, T, 3, T, 5, T, 2, T, 4, T, 2, T, 200), @@ -485,8 +485,8 @@ static const struct SMPTestData data_with_drawers_books_invalid[] = { "cores (4) * threads (2) != maxcpus (200)", }, { /* - * config: -smp 242,drawers=2,books=2,sockets=2,cores=4,\ - * threads=2,maxcpus=240 + * config: -smp 242,drawers=3,books=5,sockets=2,cores=4,\ + * threads=2,maxcpus=240 */ .config = SMP_CONFIG_WITH_BOOKS_DRAWERS(T, 242, T, 3, T, 5, T, 2, T, 4, T, 2, T, 240), -- 2.45.2
[PULL 01/15] tests/avocado: Update LoongArch bios file
From: Song Gao The VM uses old bios to boot up only 1 cpu, causing the test case to fail. Update the bios to solve this problem. Reported-by: Thomas Huth Signed-off-by: Song Gao Message-ID: <20240604030058.2327145-1-gaos...@loongson.cn> Signed-off-by: Thomas Huth --- tests/avocado/machine_loongarch.py | 8 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/tests/avocado/machine_loongarch.py b/tests/avocado/machine_loongarch.py index 7d8a3c1fa5..8de308f2d6 100644 --- a/tests/avocado/machine_loongarch.py +++ b/tests/avocado/machine_loongarch.py @@ -27,18 +27,18 @@ def test_loongarch64_devices(self): """ kernel_url = ('https://github.com/yangxiaojuan-loongson/qemu-binary/' - 'releases/download/binary-files/vmlinuz.efi') + 'releases/download/2024-05-30/vmlinuz.efi') kernel_hash = '951b485b16e3788b6db03a3e1793c067009e31a2' kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash) initrd_url = ('https://github.com/yangxiaojuan-loongson/qemu-binary/' - 'releases/download/binary-files/ramdisk') + 'releases/download/2024-05-30/ramdisk') initrd_hash = 'c67658d9b2a447ce7db2f73ba3d373c9b2b90ab2' initrd_path = self.fetch_asset(initrd_url, asset_hash=initrd_hash) bios_url = ('https://github.com/yangxiaojuan-loongson/qemu-binary/' -'releases/download/binary-files/QEMU_EFI.fd') -bios_hash = ('dfc1bfba4853cd763b9d392d0031827e8addbca8') +'releases/download/2024-05-30/QEMU_EFI.fd') +bios_hash = ('f4d0966b5117d4cd82327c050dd668741046be69') bios_path = self.fetch_asset(bios_url, asset_hash=bios_hash) self.vm.set_console() -- 2.45.2
[PULL 09/15] tests/unit/test-smp-parse: Fix an invalid topology case
From: Zhao Liu Adjust the "cpus" parameter to match the comment configuration. Signed-off-by: Zhao Liu Reviewed-by: Thomas Huth Tested-by: Yongwei Ma Message-ID: <20240529061925.350323-4-zhao1@intel.com> Signed-off-by: Thomas Huth --- tests/unit/test-smp-parse.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tests/unit/test-smp-parse.c b/tests/unit/test-smp-parse.c index c9cbc89c21..5d99e0d923 100644 --- a/tests/unit/test-smp-parse.c +++ b/tests/unit/test-smp-parse.c @@ -528,7 +528,7 @@ static const struct SMPTestData data_full_topo_invalid[] = { * config: -smp 1,drawers=3,books=5,sockets=2,dies=4,\ * clusters=2,cores=7,threads=3,maxcpus=5040 */ -.config = SMP_CONFIG_WITH_FULL_TOPO(3361, 3, 5, 2, 4, 2, 7, 3, 5040), +.config = SMP_CONFIG_WITH_FULL_TOPO(1, 3, 5, 2, 4, 2, 7, 3, 5040), .expect_error = "Invalid SMP CPUs 5040. The max CPUs supported " "by machine '" SMP_MACHINE_NAME "' is 4096", }, -- 2.45.2
[PULL 05/15] meson: Remove libibumad dependence
From: zhenwei pi RDMA-based migration has no dependence on libumad; libibverbs and librdmacm are enough. libumad was used by rdmacm-mux, which has already been removed, so the dependency was left behind by mistake. Fixes: 1dfd42c4264b ("hw/rdma: Remove deprecated pvrdma device and rdmacm-mux helper") Cc: Philippe Mathieu-Daudé Signed-off-by: zhenwei pi Reviewed-by: Philippe Mathieu-Daudé Message-ID: <20240611105427.61395-2-pizhen...@bytedance.com> Signed-off-by: Thomas Huth --- meson.build | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/meson.build b/meson.build index ec59effca2..226b97ea26 100644 --- a/meson.build +++ b/meson.build @@ -1885,11 +1885,9 @@ endif rdma = not_found if not get_option('rdma').auto() or have_system - libumad = cc.find_library('ibumad', required: get_option('rdma')) rdma_libs = [cc.find_library('rdmacm', has_headers: ['rdma/rdma_cma.h'], required: get_option('rdma')), - cc.find_library('ibverbs', required: get_option('rdma')), - libumad] + cc.find_library('ibverbs', required: get_option('rdma'))] rdma = declare_dependency(dependencies: rdma_libs) foreach lib: rdma_libs if not lib.found() -- 2.45.2
[PULL 10/15] tests/unit/test-smp-parse: Use default parameters=0 when not set in -smp
From: Zhao Liu Since -smp accepts parameters=1 regardless of whether the machine supports that topology level, a parameter that silently defaults to 1 can mask errors in the test scenarios; therefore, explicitly set undesired parameters to 0. Signed-off-by: Zhao Liu Reviewed-by: Thomas Huth Tested-by: Yongwei Ma Message-ID: <20240529061925.350323-5-zhao1@intel.com> Signed-off-by: Thomas Huth --- tests/unit/test-smp-parse.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/tests/unit/test-smp-parse.c b/tests/unit/test-smp-parse.c index 5d99e0d923..e3a0a9d12d 100644 --- a/tests/unit/test-smp-parse.c +++ b/tests/unit/test-smp-parse.c @@ -436,7 +436,7 @@ static const struct SMPTestData data_with_clusters_invalid[] = { static const struct SMPTestData data_with_books_invalid[] = { { /* config: -smp 16,books=2,sockets=2,cores=4,threads=2,maxcpus=16 */ -.config = SMP_CONFIG_WITH_BOOKS_DRAWERS(T, 16, F, 1, T, 2, T, +.config = SMP_CONFIG_WITH_BOOKS_DRAWERS(T, 16, F, 0, T, 2, T, 2, T, 4, T, 2, T, 16), .expect_error = "Invalid CPU topology: " "product of the hierarchy must match maxcpus: " @@ -444,7 +444,7 @@ static const struct SMPTestData data_with_books_invalid[] = { "!= maxcpus (16)", }, { /* config: -smp 34,books=2,sockets=2,cores=4,threads=2,maxcpus=32 */ -.config = SMP_CONFIG_WITH_BOOKS_DRAWERS(T, 34, F, 1, T, 2, T, +.config = SMP_CONFIG_WITH_BOOKS_DRAWERS(T, 34, F, 0, T, 2, T, 2, T, 4, T, 2, T, 32), .expect_error = "Invalid CPU topology: " "maxcpus must be equal to or greater than smp: " @@ -456,7 +456,7 @@ static const struct SMPTestData data_with_books_invalid[] = { static const struct SMPTestData data_with_drawers_invalid[] = { { /* config: -smp 16,drawers=2,sockets=2,cores=4,threads=2,maxcpus=16 */ -.config = SMP_CONFIG_WITH_BOOKS_DRAWERS(T, 16, T, 2, F, 1, T, +.config = SMP_CONFIG_WITH_BOOKS_DRAWERS(T, 16, T, 2, F, 0, T, 2, T, 4, T, 2, T, 16), .expect_error = "Invalid CPU topology: " "product of the hierarchy must match maxcpus: " @@ -464,7 +464,7 @@ static const 
struct SMPTestData data_with_drawers_invalid[] = { "!= maxcpus (16)", }, { /* config: -smp 34,drawers=2,sockets=2,cores=4,threads=2,maxcpus=32 */ -.config = SMP_CONFIG_WITH_BOOKS_DRAWERS(T, 34, T, 2, F, 1, T, +.config = SMP_CONFIG_WITH_BOOKS_DRAWERS(T, 34, T, 2, F, 0, T, 2, T, 4, T, 2, T, 32), .expect_error = "Invalid CPU topology: " "maxcpus must be equal to or greater than smp: " -- 2.45.2
[PULL 06/15] test: Remove libibumad dependence
From: zhenwei pi Remove libibumad dependence from the test environment. Suggested-by: Philippe Mathieu-Daudé Signed-off-by: zhenwei pi Reviewed-by: Philippe Mathieu-Daudé Message-ID: <20240611105427.61395-3-pizhen...@bytedance.com> Signed-off-by: Thomas Huth --- scripts/ci/setup/ubuntu/ubuntu-2204-aarch64.yaml | 1 - scripts/ci/setup/ubuntu/ubuntu-2204-s390x.yaml| 1 - tests/docker/dockerfiles/debian-amd64-cross.docker| 1 - tests/docker/dockerfiles/debian-arm64-cross.docker| 1 - tests/docker/dockerfiles/debian-armel-cross.docker| 1 - tests/docker/dockerfiles/debian-armhf-cross.docker| 1 - tests/docker/dockerfiles/debian-i686-cross.docker | 1 - tests/docker/dockerfiles/debian-mips64el-cross.docker | 1 - tests/docker/dockerfiles/debian-mipsel-cross.docker | 1 - tests/docker/dockerfiles/debian-ppc64el-cross.docker | 1 - tests/docker/dockerfiles/debian-s390x-cross.docker| 1 - tests/docker/dockerfiles/debian.docker| 1 - tests/docker/dockerfiles/ubuntu2204.docker| 1 - tests/lcitool/projects/qemu.yml | 1 - 14 files changed, 14 deletions(-) diff --git a/scripts/ci/setup/ubuntu/ubuntu-2204-aarch64.yaml b/scripts/ci/setup/ubuntu/ubuntu-2204-aarch64.yaml index 8d7d8725fb..fd5489cd82 100644 --- a/scripts/ci/setup/ubuntu/ubuntu-2204-aarch64.yaml +++ b/scripts/ci/setup/ubuntu/ubuntu-2204-aarch64.yaml @@ -49,7 +49,6 @@ packages: - libglusterfs-dev - libgnutls28-dev - libgtk-3-dev - - libibumad-dev - libibverbs-dev - libiscsi-dev - libjemalloc-dev diff --git a/scripts/ci/setup/ubuntu/ubuntu-2204-s390x.yaml b/scripts/ci/setup/ubuntu/ubuntu-2204-s390x.yaml index 16050a5058..afa04502cf 100644 --- a/scripts/ci/setup/ubuntu/ubuntu-2204-s390x.yaml +++ b/scripts/ci/setup/ubuntu/ubuntu-2204-s390x.yaml @@ -49,7 +49,6 @@ packages: - libglusterfs-dev - libgnutls28-dev - libgtk-3-dev - - libibumad-dev - libibverbs-dev - libiscsi-dev - libjemalloc-dev diff --git a/tests/docker/dockerfiles/debian-amd64-cross.docker b/tests/docker/dockerfiles/debian-amd64-cross.docker index f8c61d1191..8058695979 
100644 --- a/tests/docker/dockerfiles/debian-amd64-cross.docker +++ b/tests/docker/dockerfiles/debian-amd64-cross.docker @@ -105,7 +105,6 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ libglusterfs-dev:amd64 \ libgnutls28-dev:amd64 \ libgtk-3-dev:amd64 \ - libibumad-dev:amd64 \ libibverbs-dev:amd64 \ libiscsi-dev:amd64 \ libjemalloc-dev:amd64 \ diff --git a/tests/docker/dockerfiles/debian-arm64-cross.docker b/tests/docker/dockerfiles/debian-arm64-cross.docker index 6510872279..15457d7657 100644 --- a/tests/docker/dockerfiles/debian-arm64-cross.docker +++ b/tests/docker/dockerfiles/debian-arm64-cross.docker @@ -105,7 +105,6 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ libglusterfs-dev:arm64 \ libgnutls28-dev:arm64 \ libgtk-3-dev:arm64 \ - libibumad-dev:arm64 \ libibverbs-dev:arm64 \ libiscsi-dev:arm64 \ libjemalloc-dev:arm64 \ diff --git a/tests/docker/dockerfiles/debian-armel-cross.docker b/tests/docker/dockerfiles/debian-armel-cross.docker index f227d42987..c26ffc2e9e 100644 --- a/tests/docker/dockerfiles/debian-armel-cross.docker +++ b/tests/docker/dockerfiles/debian-armel-cross.docker @@ -108,7 +108,6 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ libglusterfs-dev:armel \ libgnutls28-dev:armel \ libgtk-3-dev:armel \ - libibumad-dev:armel \ libibverbs-dev:armel \ libiscsi-dev:armel \ libjemalloc-dev:armel \ diff --git a/tests/docker/dockerfiles/debian-armhf-cross.docker b/tests/docker/dockerfiles/debian-armhf-cross.docker index 58bdf07223..8f87656d89 100644 --- a/tests/docker/dockerfiles/debian-armhf-cross.docker +++ b/tests/docker/dockerfiles/debian-armhf-cross.docker @@ -105,7 +105,6 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ libglusterfs-dev:armhf \ libgnutls28-dev:armhf \ libgtk-3-dev:armhf \ - libibumad-dev:armhf \ libibverbs-dev:armhf \ libiscsi-dev:armhf \ libjemalloc-dev:armhf \ diff --git a/tests/docker/dockerfiles/debian-i686-cross.docker b/tests/docker/dockerfiles/debian-i686-cross.docker index 9f4102be8f..f1e5b0b877 100644 --- 
a/tests/docker/dockerfiles/debian-i686-cross.docker +++ b/tests/docker/dockerfiles/debian-i686-cross.docker @@ -108,7 +108,6 @@ RUN export DEBIAN_FRONTEND=noninteractive &a
Re: [PATCH 5/8] tests/unit/test-smp-parse: Make test cases aware of module level
On 29/05/2024 08.19, Zhao Liu wrote: Currently, -smp supports module level. It is necessary to consider the effects of module in the test cases to ensure that the calculations are correct. This is also the preparation to add module test cases. Signed-off-by: Zhao Liu --- tests/unit/test-smp-parse.c | 13 +++-- 1 file changed, 11 insertions(+), 2 deletions(-) Reviewed-by: Thomas Huth
Re: [PATCH 4/8] tests/unit/test-smp-parse: Use default parameters=0 when not set in -smp
On 29/05/2024 08.19, Zhao Liu wrote: Since -smp accepts parameters=1 regardless of whether the machine supports that topology level, a parameter that silently defaults to 1 can mask errors in the test scenarios; therefore, explicitly set undesired parameters to 0. Signed-off-by: Zhao Liu --- tests/unit/test-smp-parse.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) Reviewed-by: Thomas Huth
Re: [PATCH 3/8] tests/unit/test-smp-parse: Fix an invalid topology case
On 29/05/2024 08.19, Zhao Liu wrote: Adjust the "cpus" parameter to match the comment configuration. Signed-off-by: Zhao Liu --- tests/unit/test-smp-parse.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) Reviewed-by: Thomas Huth
Re: [PATCH 2/8] tests/unit/test-smp-parse: Fix comment of parameters=1 case
On 29/05/2024 08.19, Zhao Liu wrote: SMP_CONFIG_WITH_FULL_TOPO hasn't support module level, so the parameter should indicate the "clusters". Additionally, reorder the parameters of -smp to match the topology hierarchy order. Signed-off-by: Zhao Liu --- tests/unit/test-smp-parse.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) Reviewed-by: Thomas Huth
Re: [PATCH 1/8] tests/unit/test-smp-parse: Fix comments of drawers and books case
On 29/05/2024 08.19, Zhao Liu wrote: Fix the comments to match the actual configurations. Signed-off-by: Zhao Liu --- tests/unit/test-smp-parse.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) Reviewed-by: Thomas Huth
Re: [PATCH 0/5] s390x: Add Full Boot Order Support
On 06/06/2024 21.22, Jared Rossi wrote: On 6/5/24 4:02 AM, Thomas Huth wrote: On 29/05/2024 17.43, jro...@linux.ibm.com wrote: From: Jared Rossi This patch set primarily adds support for the specification of multiple boot devices, allowing for the guest to automatically use an alternative device on a failed boot without needing to be reconfigured. It additionally provides the ability to define the loadparm attribute on a per-device basis, which allows boot devices to use different loadparm values if needed. In brief, an IPLB is generated for each designated boot device (up to a maximum of 8) and stored in guest memory immediately before BIOS. If a device fails to boot, the next IPLB is retrieved and we jump back to the start of BIOS. Devices can be specified using the standard qemu device tag "bootindex" as with other architectures. Lower number indices are tried first, with "bootindex=0" indicating the first device to try. Is this supposed to work with multiple scsi-hd devices, too? I tried to boot a guest with two scsi disks (attached to a single virtio-scsi-ccw adapter) where only the second disk had a bootable installation, but that failed...? Thomas Hi Thomas, Yes, I would expect that to work. I tried to reproduce this using a non-bootable scsi disk as the first boot device and then a known-good bootable scsi disk as the second boot device, with one controller. In my instance the BIOS was not able to identify the first disk as bootable and so that device failed to IPL, but it did move on to the next disk after that, and the guest successfully IPL'd from the second device. When you say it failed, do you mean the first disk failed to boot (as expected), but then the guest died without attempting to boot from the second disk? Or did something else happen? I am either not understanding your configuration or I am not understanding your error. 
I did this: $ ./qemu-system-s390x -bios pc-bios/s390-ccw/s390-ccw.img -accel kvm \ -device virtio-scsi-ccw -drive if=none,id=d2,file=/tmp/bad.qcow2 \ -device scsi-hd,drive=d2,bootindex=2 \ -drive if=none,id=d8,file=/tmp/good.qcow2 \ -device scsi-hd,drive=d8,bootindex=3 -m 4G -nographic LOADPARM=[] Using virtio-scsi. Using guessed DASD geometry. Using ECKD scheme (block size 512), CDL No zIPL section in IPL2 record. zIPL load failed. Trying next boot device... LOADPARM=[] Using virtio-scsi. Using guessed DASD geometry. Using ECKD scheme (block size 512), CDL No zIPL section in IPL2 record. zIPL load failed. So it claims to try to load from the second disk, but it fails. If I change the "bootindex=3" of the second disk to "bootindex=1", it boots perfectly fine, so I'm sure that the installation on good.qcow2 is working fine. Thomas
Re: [PATCH 3/5] s390x: Build IPLB chain for multiple boot devices
On 05/06/2024 22.01, Jared Rossi wrote: On 6/4/24 2:26 PM, Thomas Huth wrote: On 29/05/2024 17.43, jro...@linux.ibm.com wrote: From: Jared Rossi Write a chain of IPLBs into memory for future use. The IPLB chain is placed immediately before the BIOS in memory at the highest unused page boundary providing sufficient space to fit the chain. Because this is not a fixed address, the location of the next IPLB and number of remaining boot devices is stored in the QIPL global variable for later access. At this stage the IPLB chain is not accessed by the guest during IPL. Signed-off-by: Jared Rossi --- hw/s390x/ipl.h | 1 + include/hw/s390x/ipl/qipl.h | 4 +- hw/s390x/ipl.c | 129 +++- 3 files changed, 103 insertions(+), 31 deletions(-) diff --git a/hw/s390x/ipl.h b/hw/s390x/ipl.h index 1dcb8984bb..4f098d3a81 100644 --- a/hw/s390x/ipl.h +++ b/hw/s390x/ipl.h @@ -20,6 +20,7 @@ #include "qom/object.h" #define DIAG308_FLAGS_LP_VALID 0x80 +#define MAX_IPLB_CHAIN 7 void s390_ipl_set_loadparm(char *ascii_lp, uint8_t *ebcdic_lp); void s390_ipl_fmt_loadparm(uint8_t *loadparm, char *str, Error **errp); diff --git a/include/hw/s390x/ipl/qipl.h b/include/hw/s390x/ipl/qipl.h index a6ce6ddfe3..481c459a53 100644 --- a/include/hw/s390x/ipl/qipl.h +++ b/include/hw/s390x/ipl/qipl.h @@ -34,7 +34,9 @@ struct QemuIplParameters { uint8_t reserved1[3]; uint64_t netboot_start_addr; uint32_t boot_menu_timeout; - uint8_t reserved2[12]; + uint8_t reserved2[2]; + uint16_t num_iplbs; + uint64_t next_iplb; } QEMU_PACKED; typedef struct QemuIplParameters QemuIplParameters; diff --git a/hw/s390x/ipl.c b/hw/s390x/ipl.c index 2d4f5152b3..79429acabd 100644 --- a/hw/s390x/ipl.c +++ b/hw/s390x/ipl.c @@ -55,6 +55,13 @@ static bool iplb_extended_needed(void *opaque) return ipl->iplbext_migration; } +/* Start IPLB chain from the boundary of the first unused page before BIOS */ I'd maybe say "upper boundary" to make it clear that this is at the end of the page, not at the beginning? 
The chain does start at the beginning of a page. That being said, the comment still needs to be reworded, I'm just not sure exactly how. "Start the IPLB chain from the nearest page boundary providing sufficient space before BIOS?" Basically because each IPLB is 4K, the chain will occupy the N unused pages before the start of BIOS, where N is the number of chained IPLBS (assuming 4K pages). Ah, right, I missed that sizeof(IplParameterBlock) == 4096 (I guess I was looking at the old version in pc-bios/s390-ccw/iplb.h that does not seem to have the padding), sorry for the confusion! It's really good that you now unify the headers in your first patch! Thomas
Re: [PATCH 5/5] s390x: Enable and document boot device fallback on panic
On 05/06/2024 16.48, Jared Rossi wrote: diff --git a/pc-bios/s390-ccw/s390-ccw.h b/pc-bios/s390-ccw/s390-ccw.h index c977a52b50..de3d1f0d5a 100644 --- a/pc-bios/s390-ccw/s390-ccw.h +++ b/pc-bios/s390-ccw/s390-ccw.h @@ -43,6 +43,7 @@ typedef unsigned long long u64; #include "iplb.h" /* start.s */ +extern char _start[]; void disabled_wait(void) __attribute__ ((__noreturn__)); void consume_sclp_int(void); void consume_io_int(void); @@ -88,6 +89,11 @@ __attribute__ ((__noreturn__)) static inline void panic(const char *string) { sclp_print(string); + if (load_next_iplb()) { + sclp_print("\nTrying next boot device..."); + jump_to_IPL_code((long)_start); + } + disabled_wait(); } Honestly, I am unsure whether this is a really cool idea or a very ugly hack ... but I think I tend towards the latter, sorry. Jumping back to the startup code might cause various problems, e.g. pre-initialized variables don't get their values reset, causing different behavior when the s390-ccw bios runs a function a second time this way. Thus this sounds very fragile. Could we please try to get things cleaned up correctly, so that functions return with error codes instead of panicking when we can continue with another boot device? Even if it's more work right now, I think this will be much more maintainable in the future. Thomas Thanks Thomas, I appreciate your insight. Your hesitation is perfectly understandable as well. My initial design was as you suggest, where the functions return instead of panic, but the issue I ran into is that netboot uses a separate image, which we jump into at the start of IPL from a network device (see zipl_load() in pc-bios/s390-ccw/bootmap.c). I wasn't able to come up with a simple way to return to the main BIOS code if a netboot fails other than by jumping back. So, it seems to me that netboot kind of throws a monkeywrench into the basic idea of reworking the panics into returns. 
I'm open to suggestions on a better way to recover from a failed netboot, and it's certainly possible I've overlooked something, but as far as I can tell a jump is necessary in that particular case at least. Netboot could perhaps be handled as a special case where the jump back is permitted whereas other device types return, but I don't think that actually solves the main issue. What are your thoughts on this? Yes, I agree that jumping is currently required to get back from the netboot code. So if you could rework your patches in a way that limits the jumping to a failed netboot, that would be acceptable, I think. Apart from that: We originally decided to put the netboot code into a separate binary since the required roms/SLOF module might not always have been checked out (it needed to be done manually), so we were not able to compile it in all cases. But nowadays, this is handled in a much nicer way, the submodule is automatically checked out once you compile the s390x-softmmu target and have a s390x compiler available, so I wonder whether we should maybe do the next step and integrate the netboot code into the main s390-ccw.img now? Anybody got an opinion on this? Thomas
Re: [PATCH 5/5] s390x: Enable and document boot device fallback on panic
On 29/05/2024 17.43, jro...@linux.ibm.com wrote: From: Jared Rossi On a panic during IPL (i.e. a device failed to boot) check for another device to boot from, as indicated by the presence of an unused IPLB. If an IPLB is successfully loaded, then jump to the start of BIOS, restarting IPL using the updated IPLB. Otherwise enter disabled wait. Signed-off-by: Jared Rossi --- docs/system/bootindex.rst | 7 --- docs/system/s390x/bootdevices.rst | 9 ++--- pc-bios/s390-ccw/s390-ccw.h | 6 ++ 3 files changed, 16 insertions(+), 6 deletions(-) Could you please split the documentation changes into a separate patch in v2 ? ... I think that would be cleaner. diff --git a/docs/system/bootindex.rst b/docs/system/bootindex.rst index 8b057f812f..de597561bd 100644 --- a/docs/system/bootindex.rst +++ b/docs/system/bootindex.rst @@ -50,9 +50,10 @@ Limitations Some firmware has limitations on which devices can be considered for booting. For instance, the PC BIOS boot specification allows only one -disk to be bootable. If boot from disk fails for some reason, the BIOS -won't retry booting from other disk. It can still try to boot from -floppy or net, though. +disk to be bootable, except for on s390x machines. If boot from disk fails for +some reason, the BIOS won't retry booting from other disk. It can still try to +boot from floppy or net, though. In the case of s390x, the BIOS will try up to +8 total devices, any number of which may be disks. Since the old text was already talking about "PC BIOS", I'd rather leave that paragraph as it is (maybe just replace "PC BIOS" with "x86 PC BIOS"), and add a separate paragraph afterwards about s390x instead. 
diff --git a/pc-bios/s390-ccw/s390-ccw.h b/pc-bios/s390-ccw/s390-ccw.h index c977a52b50..de3d1f0d5a 100644 --- a/pc-bios/s390-ccw/s390-ccw.h +++ b/pc-bios/s390-ccw/s390-ccw.h @@ -43,6 +43,7 @@ typedef unsigned long long u64; #include "iplb.h" /* start.s */ +extern char _start[]; void disabled_wait(void) __attribute__ ((__noreturn__)); void consume_sclp_int(void); void consume_io_int(void); @@ -88,6 +89,11 @@ __attribute__ ((__noreturn__)) static inline void panic(const char *string) { sclp_print(string); +if (load_next_iplb()) { +sclp_print("\nTrying next boot device..."); +jump_to_IPL_code((long)_start); +} + disabled_wait(); } Honestly, I am unsure whether this is a really cool idea or a very ugly hack ... but I think I tend towards the latter, sorry. Jumping back to the startup code might cause various problems, e.g. pre-initialized variables don't get their values reset, causing different behavior when the s390-ccw bios runs a function a second time this way. Thus this sounds very fragile. Could we please try to get things cleaned up correctly, so that functions return with error codes instead of panicking when we can continue with another boot device? Even if it's more work right now, I think this will be much more maintainable in the future. Thomas
Re: [PATCH 4/5] s390x: Add boot device fallback infrastructure
On 05/06/2024 10.20, Thomas Huth wrote: On 29/05/2024 17.43, jro...@linux.ibm.com wrote: From: Jared Rossi Add a routine for loading the next IPLB if a device fails to boot. This includes some minor changes to the List-Directed IPL routine so that the failing device may be retried using the legacy boot pointers before moving on to the next device. Signed-off-by: Jared Rossi --- ... diff --git a/pc-bios/s390-ccw/bootmap.c b/pc-bios/s390-ccw/bootmap.c index a2137449dc..69391557fa 100644 --- a/pc-bios/s390-ccw/bootmap.c +++ b/pc-bios/s390-ccw/bootmap.c @@ -144,7 +144,10 @@ static block_number_t load_eckd_segments(block_number_t blk, bool ldipl, bool more_data; memset(_bprs, FREE_SPACE_FILLER, sizeof(_bprs)); - read_block(blk, bprs, "BPRS read failed"); + if (!read_block_nonfatal(blk, bprs)) { + IPL_assert(ldipl, "BPRS read failed"); + return -1; + } do { more_data = false; @@ -188,7 +191,10 @@ static block_number_t load_eckd_segments(block_number_t blk, bool ldipl, * I.e. the next ptr must point to the unused memory area */ memset(_bprs, FREE_SPACE_FILLER, sizeof(_bprs)); - read_block(block_nr, bprs, "BPRS continuation read failed"); + if (!read_block_nonfatal(block_nr, bprs)) { + IPL_assert(ldipl, "BPRS continuation read failed"); + break; + } more_data = true; break; } @@ -197,7 +203,10 @@ static block_number_t load_eckd_segments(block_number_t blk, bool ldipl, * to memory (address). 
*/ rc = virtio_read_many(block_nr, (void *)(*address), count + 1); - IPL_assert(rc == 0, "code chunk read failed"); + if (rc != 0) { + IPL_assert(ldipl, "code chunk read failed"); + break; + } *address += (count + 1) * virtio_get_block_size(); } @@ -295,13 +304,22 @@ static void run_eckd_boot_script(block_number_t bmt_block_nr, " maximum number of boot entries allowed"); memset(sec, FREE_SPACE_FILLER, sizeof(sec)); - read_block(bmt_block_nr, sec, "Cannot read Boot Map Table"); + if (!read_block_nonfatal(bmt_block_nr, sec)) { + IPL_assert(ldipl, "Cannot read Boot Map Table"); + return; + } block_nr = gen_eckd_block_num(>entry[loadparm].xeckd, ldipl); - IPL_assert(block_nr != -1, "Cannot find Boot Map Table Entry"); + if (block_nr == -1) { + IPL_assert(ldipl, "Cannot find Boot Map Table Entry"); + return; + } memset(sec, FREE_SPACE_FILLER, sizeof(sec)); - read_block(block_nr, sec, "Cannot read Boot Map Script"); + if (!read_block_nonfatal(block_nr, sec)) { + IPL_assert(ldipl, "Cannot read Boot Map Script"); + return; + } for (i = 0; bms->entry[i].type == BOOT_SCRIPT_LOAD || bms->entry[i].type == BOOT_SCRIPT_SIGNATURE; i++) { @@ -319,13 +337,10 @@ static void run_eckd_boot_script(block_number_t bmt_block_nr, } while (block_nr != -1); } - if (ldipl && bms->entry[i].type != BOOT_SCRIPT_EXEC) { - /* Abort LD-IPL and retry as CCW-IPL */ + if (bms->entry[i].type != BOOT_SCRIPT_EXEC) { + IPL_assert(ldipl, "Unknown script entry type"); return; } - - IPL_assert(bms->entry[i].type == BOOT_SCRIPT_EXEC, - "Unknown script entry type"); write_reset_psw(bms->entry[i].address.load_address); /* no return */ jump_to_IPL_code(0); /* no return */ } @@ -492,7 +507,7 @@ static void ipl_eckd(void) /* LD-IPL does not use the S1B bock, just make it NULL */ run_eckd_boot_script(ldipl_bmt, NULL_BLOCK_NR); /* Only return in error, retry as CCW-IPL */ - sclp_print("Retrying IPL "); + sclp_print("LD-IPL failed, retrying device\n"); print_eckd_msg(); } memset(sec, FREE_SPACE_FILLER, sizeof(sec)); 
@@ -944,5 +959,5 @@ void zipl_load(void) panic("\n! Unknown IPL device type !\n"); } - sclp_print("zIPL load failed.\n"); + panic("zIPL load failed.\n"); Why replacing the sclp_print() here? Wouldn't it be nicer to continue panicking on the calling site instead? Ok, after looking at the 5th patch, I think I understand it now: panic() is not fatal anymore and might restart with the next boot device... not sure whether I like that, but let's discuss that on patch 5 instead... Thomas
Re: [PATCH 4/5] s390x: Add boot device fallback infrastructure
On 29/05/2024 17.43, jro...@linux.ibm.com wrote: From: Jared Rossi Add a routine for loading the next IPLB if a device fails to boot. This includes some minor changes to the List-Directed IPL routine so that the failing device may be retried using the legacy boot pointers before moving on to the next device. Signed-off-by: Jared Rossi --- ... diff --git a/pc-bios/s390-ccw/bootmap.c b/pc-bios/s390-ccw/bootmap.c index a2137449dc..69391557fa 100644 --- a/pc-bios/s390-ccw/bootmap.c +++ b/pc-bios/s390-ccw/bootmap.c @@ -144,7 +144,10 @@ static block_number_t load_eckd_segments(block_number_t blk, bool ldipl, bool more_data; memset(_bprs, FREE_SPACE_FILLER, sizeof(_bprs)); -read_block(blk, bprs, "BPRS read failed"); +if (!read_block_nonfatal(blk, bprs)) { +IPL_assert(ldipl, "BPRS read failed"); +return -1; +} do { more_data = false; @@ -188,7 +191,10 @@ static block_number_t load_eckd_segments(block_number_t blk, bool ldipl, * I.e. the next ptr must point to the unused memory area */ memset(_bprs, FREE_SPACE_FILLER, sizeof(_bprs)); -read_block(block_nr, bprs, "BPRS continuation read failed"); +if (!read_block_nonfatal(block_nr, bprs)) { +IPL_assert(ldipl, "BPRS continuation read failed"); +break; +} more_data = true; break; } @@ -197,7 +203,10 @@ static block_number_t load_eckd_segments(block_number_t blk, bool ldipl, * to memory (address). 
*/ rc = virtio_read_many(block_nr, (void *)(*address), count + 1); -IPL_assert(rc == 0, "code chunk read failed"); +if (rc != 0) { +IPL_assert(ldipl, "code chunk read failed"); +break; +} *address += (count + 1) * virtio_get_block_size(); } @@ -295,13 +304,22 @@ static void run_eckd_boot_script(block_number_t bmt_block_nr, " maximum number of boot entries allowed"); memset(sec, FREE_SPACE_FILLER, sizeof(sec)); -read_block(bmt_block_nr, sec, "Cannot read Boot Map Table"); +if (!read_block_nonfatal(bmt_block_nr, sec)) { +IPL_assert(ldipl, "Cannot read Boot Map Table"); +return; +} block_nr = gen_eckd_block_num(>entry[loadparm].xeckd, ldipl); -IPL_assert(block_nr != -1, "Cannot find Boot Map Table Entry"); +if (block_nr == -1) { +IPL_assert(ldipl, "Cannot find Boot Map Table Entry"); +return; +} memset(sec, FREE_SPACE_FILLER, sizeof(sec)); -read_block(block_nr, sec, "Cannot read Boot Map Script"); +if (!read_block_nonfatal(block_nr, sec)) { +IPL_assert(ldipl, "Cannot read Boot Map Script"); +return; +} for (i = 0; bms->entry[i].type == BOOT_SCRIPT_LOAD || bms->entry[i].type == BOOT_SCRIPT_SIGNATURE; i++) { @@ -319,13 +337,10 @@ static void run_eckd_boot_script(block_number_t bmt_block_nr, } while (block_nr != -1); } -if (ldipl && bms->entry[i].type != BOOT_SCRIPT_EXEC) { -/* Abort LD-IPL and retry as CCW-IPL */ +if (bms->entry[i].type != BOOT_SCRIPT_EXEC) { +IPL_assert(ldipl, "Unknown script entry type"); return; } - -IPL_assert(bms->entry[i].type == BOOT_SCRIPT_EXEC, - "Unknown script entry type"); write_reset_psw(bms->entry[i].address.load_address); /* no return */ jump_to_IPL_code(0); /* no return */ } @@ -492,7 +507,7 @@ static void ipl_eckd(void) /* LD-IPL does not use the S1B bock, just make it NULL */ run_eckd_boot_script(ldipl_bmt, NULL_BLOCK_NR); /* Only return in error, retry as CCW-IPL */ -sclp_print("Retrying IPL "); +sclp_print("LD-IPL failed, retrying device\n"); print_eckd_msg(); } memset(sec, FREE_SPACE_FILLER, sizeof(sec)); @@ -944,5 +959,5 @@ void 
zipl_load(void) panic("\n! Unknown IPL device type !\n"); } -sclp_print("zIPL load failed.\n"); +panic("zIPL load failed.\n"); Why replacing the sclp_print() here? Wouldn't it be nicer to continue panicking on the calling site instead? } diff --git a/pc-bios/s390-ccw/main.c b/pc-bios/s390-ccw/main.c index 3e51d698d7..248ed5a410 100644 --- a/pc-bios/s390-ccw/main.c +++ b/pc-bios/s390-ccw/main.c @@ -53,6 +53,12 @@ unsigned int get_loadparm_index(void) return atoui(loadparm_str); } +static void copy_qipl(void) +{ +QemuIplParameters *early_qipl = (QemuIplParameters *)QIPL_ADDRESS; +memcpy(, early_qipl, sizeof(QemuIplParameters)); +} You could move this function as a static inline into iplb.h ... ... diff --git a/pc-bios/s390-ccw/netmain.c b/pc-bios/s390-ccw/netmain.c index 5cd619b2d6..65cee15fef 100644 ---
Re: [PATCH 0/5] s390x: Add Full Boot Order Support
On 29/05/2024 17.43, jro...@linux.ibm.com wrote: From: Jared Rossi This patch set primarily adds support for the specification of multiple boot devices, allowing for the guest to automatically use an alternative device on a failed boot without needing to be reconfigured. It additionally provides the ability to define the loadparm attribute on a per-device basis, which allows boot devices to use different loadparm values if needed. In brief, an IPLB is generated for each designated boot device (up to a maximum of 8) and stored in guest memory immediately before BIOS. If a device fails to boot, the next IPLB is retrieved and we jump back to the start of BIOS. Devices can be specified using the standard qemu device tag "bootindex" as with other architectures. Lower number indices are tried first, with "bootindex=0" indicating the first device to try. Is this supposed to work with multiple scsi-hd devices, too? I tried to boot a guest with two scsi disks (attached to a single virtio-scsi-ccw adapter) where only the second disk had a bootable installation, but that failed...? Thomas
Re: [PATCH 2/5] s390x: Add loadparm to CcwDevice
On 29/05/2024 17.43, jro...@linux.ibm.com wrote: From: Jared Rossi Add a loadparm property to the CcwDevice object so that different loadparms can be defined on a per-device basis when using multiple boot devices. The machine/global loadparm is still supported. If both a global and per-device loadparm are defined, the per-device value will override the global value for that device, but any other devices that do not specify a per-device loadparm will still use the global loadparm. Assigning a loadparm to a non-boot device is invalid and will be rejected. Signed-off-by: Jared Rossi --- ... diff --git a/hw/s390x/ccw-device.c b/hw/s390x/ccw-device.c index fb8c1acc64..143e085279 100644 --- a/hw/s390x/ccw-device.c +++ b/hw/s390x/ccw-device.c @@ -13,6 +13,10 @@ #include "ccw-device.h" #include "hw/qdev-properties.h" #include "qemu/module.h" +#include "ipl.h" +#include "qapi/visitor.h" +#include "qemu/ctype.h" +#include "qapi/error.h" static void ccw_device_refill_ids(CcwDevice *dev) { @@ -36,10 +40,55 @@ static void ccw_device_realize(CcwDevice *dev, Error **errp) ccw_device_refill_ids(dev); } +static void ccw_device_get_loadparm(Object *obj, Visitor *v, + const char *name, void *opaque, + Error **errp) +{ +CcwDevice *dev = CCW_DEVICE(obj); +char *str = g_strndup((char *) dev->loadparm, sizeof(dev->loadparm)); + +visit_type_str(v, name, &str, errp); +g_free(str); +} + +static void ccw_device_set_loadparm(Object *obj, Visitor *v, + const char *name, void *opaque, + Error **errp) +{ +CcwDevice *dev = CCW_DEVICE(obj); +char *val; +int index; + +index = object_property_get_int(obj, "bootindex", &error_abort); The error_abort is rather unfortunate here; it can be used to terminate QEMU unexpectedly: $ ./qemu-system-s390x -no-shutdown -nographic -serial none -monitor stdio QEMU 9.0.50 monitor - type 'help' for more information (qemu) device_add virtio-rng-ccw,loadparm=1 Unexpected error in object_property_find_err() at ../../devel/qemu/qom/object.c:1358: Property 
'virtio-rng-ccw.bootindex' not found
Aborted (core dumped)

Since you check for the error via "index" afterwards anyway, I think it's likely OK to replace &error_abort with NULL here.

But apart from that, it's also a bit ugly that each and every ccw device gets a loadparm property now. Would it be possible to add it only to the devices that can be used for booting? Thomas
Re: [PATCH 0/5] s390x: Add Full Boot Order Support
On 29/05/2024 17.43, jro...@linux.ibm.com wrote:
From: Jared Rossi

This patch set primarily adds support for the specification of multiple boot devices, allowing the guest to automatically use an alternative device on a failed boot without needing to be reconfigured. It additionally provides the ability to define the loadparm attribute on a per-device basis, which allows boot devices to use different loadparm values if needed.

In brief, an IPLB is generated for each designated boot device (up to a maximum of 8) and stored in guest memory immediately before the BIOS. If a device fails to boot, the next IPLB is retrieved and we jump back to the start of the BIOS. Devices can be specified using the standard QEMU device tag "bootindex" as with other architectures. Lower number indices are tried first, with "bootindex=0" indicating the first device to try.

A subsequent libvirt patch will be necessary to allow assignment of per-device loadparms in the guest XML.

Jared Rossi (5):
  Create include files for s390x IPL definitions
  Add loadparm to CcwDevice
  Build IPLB chain for multiple boot devices
  Add boot device fallback infrastructure
  Enable and document boot device fallback on panic

 docs/system/bootindex.rst         |   7 +-
 docs/system/s390x/bootdevices.rst |   9 +-
 hw/s390x/ccw-device.h             |   2 +
 hw/s390x/ipl.h                    | 117 +---
 include/hw/s390x/ipl/qipl.h       | 128 ++
 pc-bios/s390-ccw/bootmap.h        |   5 +
 pc-bios/s390-ccw/iplb.h           | 108 +--
 pc-bios/s390-ccw/s390-ccw.h       |   6 ++
 hw/s390x/ccw-device.c             |  49 +
 hw/s390x/ipl.c                    | 170 ++
 hw/s390x/s390-virtio-ccw.c        |  18 +---
 hw/s390x/sclp.c                   |   3 +-
 pc-bios/s390-ccw/bootmap.c        |  41 ---
 pc-bios/s390-ccw/main.c           |  25 +++--
 pc-bios/s390-ccw/netmain.c        |   4 +
 pc-bios/s390-ccw/Makefile         |   2 +-

Hi Jared!

For v2, could you please add at least two tests: one that checks booting from the second disk and one that checks booting from the last boot disk when the previous ones are invalid?
I could think of two easy ways for adding such tests, up to you what you prefer:
- Extend the tests/qtest/cdrom-test.c - see add_s390x_tests() there
- Add an avocado test - see "grep -l s390 tests/avocado/*.py" for examples.

Thomas
Re: [PATCH 3/5] s390x: Build IPLB chain for multiple boot devices
On 29/05/2024 17.43, jro...@linux.ibm.com wrote: From: Jared Rossi Write a chain of IPLBs into memory for future use. The IPLB chain is placed immediately before the BIOS in memory at the highest unused page boundary providing sufficient space to fit the chain. Because this is not a fixed address, the location of the next IPLB and number of remaining boot devices is stored in the QIPL global variable for later access. At this stage the IPLB chain is not accessed by the guest during IPL. Signed-off-by: Jared Rossi --- hw/s390x/ipl.h | 1 + include/hw/s390x/ipl/qipl.h | 4 +- hw/s390x/ipl.c | 129 +++- 3 files changed, 103 insertions(+), 31 deletions(-) diff --git a/hw/s390x/ipl.h b/hw/s390x/ipl.h index 1dcb8984bb..4f098d3a81 100644 --- a/hw/s390x/ipl.h +++ b/hw/s390x/ipl.h @@ -20,6 +20,7 @@ #include "qom/object.h" #define DIAG308_FLAGS_LP_VALID 0x80 +#define MAX_IPLB_CHAIN 7 void s390_ipl_set_loadparm(char *ascii_lp, uint8_t *ebcdic_lp); void s390_ipl_fmt_loadparm(uint8_t *loadparm, char *str, Error **errp); diff --git a/include/hw/s390x/ipl/qipl.h b/include/hw/s390x/ipl/qipl.h index a6ce6ddfe3..481c459a53 100644 --- a/include/hw/s390x/ipl/qipl.h +++ b/include/hw/s390x/ipl/qipl.h @@ -34,7 +34,9 @@ struct QemuIplParameters { uint8_t reserved1[3]; uint64_t netboot_start_addr; uint32_t boot_menu_timeout; -uint8_t reserved2[12]; +uint8_t reserved2[2]; +uint16_t num_iplbs; +uint64_t next_iplb; } QEMU_PACKED; typedef struct QemuIplParameters QemuIplParameters; diff --git a/hw/s390x/ipl.c b/hw/s390x/ipl.c index 2d4f5152b3..79429acabd 100644 --- a/hw/s390x/ipl.c +++ b/hw/s390x/ipl.c @@ -55,6 +55,13 @@ static bool iplb_extended_needed(void *opaque) return ipl->iplbext_migration; } +/* Start IPLB chain from the boundary of the first unused page before BIOS */ I'd maybe say "upper boundary" to make it clear that this is at the end of the page, not at the beginning? 
+static uint64_t find_iplb_chain_addr(uint64_t bios_addr, uint16_t count)
+{
+    return (bios_addr & TARGET_PAGE_MASK)
+           - (count * sizeof(IplParameterBlock));
+}
+
 static const VMStateDescription vmstate_iplb_extended = {
 .name = "ipl/iplb_extended",
 .version_id = 0,
@@ -391,6 +398,17 @@ static CcwDevice *s390_get_ccw_device(DeviceState *dev_st, int *devtype)
 return ccw_dev;
 }

+static void s390_ipl_map_iplb_chain(IplParameterBlock *iplb_chain)
+{
+    S390IPLState *ipl = get_ipl_device();
+    uint16_t count = ipl->qipl.num_iplbs;
+    uint64_t len = sizeof(IplParameterBlock) * count;
+    uint64_t chain_addr = find_iplb_chain_addr(ipl->bios_start_addr, count);
+
+    cpu_physical_memory_write(chain_addr, iplb_chain, be32_to_cpu(len));

The be32_to_cpu looks wrong here, since you just computed len in native endianness.

+    ipl->qipl.next_iplb = chain_addr;

Just a matter of taste, but I'd prefer to set ipl->qipl.next_iplb in the same function where you set ipl->qipl.num_iplbs ... so I'd rather return chain_addr here and then do this on the calling site: ipl->qipl.next_iplb = s390_ipl_map_iplb_chain(...);

+}
+
 void s390_ipl_fmt_loadparm(uint8_t *loadparm, char *str, Error **errp)
 {
 int i;
@@ -422,54 +440,51 @@ void s390_ipl_set_loadparm(char *ascii_lp, uint8_t *ebcdic_lp)
 }
 }

-static bool s390_gen_initial_iplb(S390IPLState *ipl)
+static bool s390_build_iplb(DeviceState *dev_st, IplParameterBlock *iplb)
 {
-    DeviceState *dev_st;
+    S390IPLState *ipl = get_ipl_device();
     CcwDevice *ccw_dev = NULL;
     SCSIDevice *sd;
     int devtype;
     uint8_t *lp;

-    dev_st = get_boot_device(0);
-    if (dev_st) {
-        ccw_dev = s390_get_ccw_device(dev_st, &devtype);
-    }
-
     /*
      * Currently allow IPL only from CCW devices.
      */
+    ccw_dev = s390_get_ccw_device(dev_st, &devtype);
     if (ccw_dev) {
         lp = ccw_dev->loadparm;

-        switch (devtype) {
-        case CCW_DEVTYPE_SCSI:
+    switch (devtype) {
+    case CCW_DEVTYPE_SCSI:
         sd = SCSI_DEVICE(dev_st);
-        ipl->iplb.len = cpu_to_be32(S390_IPLB_MIN_QEMU_SCSI_LEN);
-        ipl->iplb.blk0_len =
+        iplb->len = cpu_to_be32(S390_IPLB_MIN_QEMU_SCSI_LEN);
+        iplb->blk0_len =
             cpu_to_be32(S390_IPLB_MIN_QEMU_SCSI_LEN - S390_IPLB_HEADER_LEN);
-        ipl->iplb.pbt = S390_IPL_TYPE_QEMU_SCSI;
-        ipl->iplb.scsi.lun = cpu_to_be32(sd->lun);
-        ipl->iplb.scsi.target = cpu_to_be16(sd->id);
-        ipl->iplb.scsi.channel = cpu_to_be16(sd->channel);
-        ipl->iplb.scsi.devno = cpu_to_be16(ccw_dev->sch->devno);
-        ipl->iplb.scsi.ssid = ccw_dev->sch->ssid & 3;
+        iplb->pbt = S390_IPL_TYPE_QEMU_SCSI;
+        iplb->scsi.lun = cpu_to_be32(sd->lun);
+        iplb->scsi.target = cpu_to_be16(sd->id);
+
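Both review points can be checked with a small standalone sketch. The page mask, the 4 KiB IplParameterBlock size, and the helper names below are assumptions for illustration only, standing in for QEMU's actual definitions: the length stays in native byte order because the write helper takes a plain host length, and the chain address is returned so that num_iplbs and next_iplb can be set side by side at the call site.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TARGET_PAGE_MASK (~0xfffULL)  /* assumed 4 KiB pages */
#define IPLB_SIZE        4096ULL      /* stand-in for sizeof(IplParameterBlock) */

/* Start the IPLB chain at the upper boundary of the first unused page
 * below the BIOS load address. */
static uint64_t find_iplb_chain_addr(uint64_t bios_addr, uint16_t count)
{
    return (bios_addr & TARGET_PAGE_MASK) - count * IPLB_SIZE;
}

/* Corrected mapping helper: len is used in native endianness (no
 * be32_to_cpu), and the chain address is returned so the caller can do
 * "next_iplb = map_iplb_chain(...);" right next to setting num_iplbs. */
static uint64_t map_iplb_chain(uint8_t *guest_mem, uint64_t bios_addr,
                               const uint8_t *chain, uint16_t count)
{
    uint64_t len = count * IPLB_SIZE;
    uint64_t chain_addr = find_iplb_chain_addr(bios_addr, count);

    /* stands in for cpu_physical_memory_write(chain_addr, chain, len) */
    memcpy(guest_mem + chain_addr, chain, len);
    return chain_addr;
}
```

With the erroneous be32_to_cpu(), a little-endian host would byte-swap the length and write a wildly wrong number of bytes into guest memory; keeping the length native avoids that.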
Re: [PATCH 2/5] s390x: Add loadparm to CcwDevice
On 04/06/2024 18.27, Jared Rossi wrote: Hi Thomas, Firstly, thanks for the reviews and I agree with your suggestions about function names, info messages, simplifications, etc... I will make those changes. As for the DIAG308_FLAGS_LP_VALID flag... [snip...] @@ -438,40 +473,20 @@ static bool s390_gen_initial_iplb(S390IPLState *ipl) break; } - if (!s390_ipl_set_loadparm(ipl->iplb.loadparm)) { - ipl->iplb.flags |= DIAG308_FLAGS_LP_VALID; + /* If the device loadparm is empty use the global machine loadparm */ + if (memcmp(lp, "\0\0\0\0\0\0\0\0", 8) == 0) { + lp = S390_CCW_MACHINE(qdev_get_machine())->loadparm; } + s390_ipl_set_loadparm((char *)lp, ipl->iplb.loadparm); + ipl->iplb.flags |= DIAG308_FLAGS_LP_VALID; That means DIAG308_FLAGS_LP_VALID is now always set, even if no loadparm has been specified? Shouldn't it be omitted if the user did not specify the loadparm property? No, I don't think it should be omitted if the loadparm hasn't been specified because it is optional and uses a default value if not set. My understanding is that the flag should, actually, always be set here. As best as I can tell, the existing check is a redundant validation that cannot fail and therefore is not needed. Currently the only way s390_ipl_set_loadparm() can return non-zero is if object_property_get_str() itself returns NULL pointer when getting the loadparm; however, the loadparm is already validated at this point because the string is initialized when the loadparm property is set. If there were a problem with the loadparm string an error would have already been thrown during initialization. Furthermore, object_property_get_str() is no longer used here. As such, s390_ipl_set_loadparm() is changed to a void function and the check is removed. Ok, makes sense thanks for the explanation! Thomas
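The fallback semantics agreed on above can be condensed into a standalone sketch. The helper name and the raw uint8_t[8] representation are illustrative, not QEMU's actual API: an all-zero per-device loadparm means "unset", so the machine/global loadparm is used, and the caller then always sets DIAG308_FLAGS_LP_VALID since "unset" just means "use the default".

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define LOADPARM_LEN 8

/* If the device loadparm is all-zero (never set), fall back to the
 * machine/global loadparm.  Whichever value wins, it is always valid,
 * so the caller unconditionally sets DIAG308_FLAGS_LP_VALID. */
static const uint8_t *select_loadparm(const uint8_t *dev_lp,
                                      const uint8_t *machine_lp)
{
    static const uint8_t zero[LOADPARM_LEN];

    return memcmp(dev_lp, zero, LOADPARM_LEN) == 0 ? machine_lp : dev_lp;
}
```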
Re: [PATCH 2/5] s390x: Add loadparm to CcwDevice
On 29/05/2024 17.43, jro...@linux.ibm.com wrote:
From: Jared Rossi

Add a loadparm property to the CcwDevice object so that different loadparms can be defined on a per-device basis when using multiple boot devices. The machine/global loadparm is still supported. If both a global and per-device loadparm are defined, the per-device value will override the global value for that device, but any other devices that do not specify a per-device loadparm will still use the global loadparm. Assigning a loadparm to a non-boot device is invalid and will be rejected.

Signed-off-by: Jared Rossi
---
...
diff --git a/hw/s390x/ccw-device.c b/hw/s390x/ccw-device.c
index fb8c1acc64..143e085279 100644
--- a/hw/s390x/ccw-device.c
+++ b/hw/s390x/ccw-device.c
@@ -13,6 +13,10 @@
 #include "ccw-device.h"
 #include "hw/qdev-properties.h"
 #include "qemu/module.h"
+#include "ipl.h"
+#include "qapi/visitor.h"
+#include "qemu/ctype.h"
+#include "qapi/error.h"

 static void ccw_device_refill_ids(CcwDevice *dev)
 {
@@ -36,10 +40,55 @@ static void ccw_device_realize(CcwDevice *dev, Error **errp)
 ccw_device_refill_ids(dev);
 }

+static void ccw_device_get_loadparm(Object *obj, Visitor *v,
+                                    const char *name, void *opaque,
+                                    Error **errp)
+{
+    CcwDevice *dev = CCW_DEVICE(obj);
+    char *str = g_strndup((char *) dev->loadparm, sizeof(dev->loadparm));
+
+    visit_type_str(v, name, &str, errp);
+    g_free(str);
+}
+
+static void ccw_device_set_loadparm(Object *obj, Visitor *v,
+                                    const char *name, void *opaque,
+                                    Error **errp)
+{
+    CcwDevice *dev = CCW_DEVICE(obj);
+    char *val;
+    int index;
+
+    index = object_property_get_int(obj, "bootindex", &error_abort);
+
+    if (index < 0) {
+        error_setg(errp, "LOADPARM: non-boot device");

If the user is not very experienced, this error message is not very helpful. Maybe: "loadparm is only supported on devices with bootindex property"?
+    }
+
+    if (!visit_type_str(v, name, &val, errp)) {
+        return;
+    }
+
+    s390_ipl_fmt_loadparm(dev->loadparm, val, errp);
+}
+
+static const PropertyInfo ccw_loadparm = {
+    .name = "ccw_loadparm",
+    .description = "Up to 8 chars in set of [A-Za-z0-9. ] (lower case"
+        " chars converted to upper case) to pass to machine loader,"
+        " boot manager, and guest kernel",

I know, we're using the same text for the description of the machine property already, but maybe we could do better here: The string is very long, it wraps around unless you use a very huge console window when running QEMU with e.g. "-device virtio-blk-ccw,help". What do you think about using something like this:

.description = "Up to 8 chars in set of [A-Za-z0-9. ] to pass to the guest loader/kernel";

?

+    .get = ccw_device_get_loadparm,
+    .set = ccw_device_set_loadparm,
+};
+
+#define DEFINE_PROP_CCW_LOADPARM(_n, _s, _f) \
+    DEFINE_PROP(_n, _s, _f, ccw_loadparm, typeof(uint8_t[8]))

Not sure if it's worth the effort to create a macro just for this if you only need it once below. I'd maybe rather inline the DEFINE_PROP below.
 static Property ccw_device_properties[] = {
 DEFINE_PROP_CSS_DEV_ID("devno", CcwDevice, devno),
 DEFINE_PROP_CSS_DEV_ID_RO("dev_id", CcwDevice, dev_id),
 DEFINE_PROP_CSS_DEV_ID_RO("subch_id", CcwDevice, subch_id),
+DEFINE_PROP_CCW_LOADPARM("loadparm", CcwDevice, loadparm),
 DEFINE_PROP_END_OF_LIST(),
 };

diff --git a/hw/s390x/ipl.c b/hw/s390x/ipl.c
index e934bf89d1..2d4f5152b3 100644
--- a/hw/s390x/ipl.c
+++ b/hw/s390x/ipl.c
@@ -34,6 +34,7 @@
 #include "qemu/config-file.h"
 #include "qemu/cutils.h"
 #include "qemu/option.h"
+#include "qemu/ctype.h"
 #include "standard-headers/linux/virtio_ids.h"

 #define KERN_IMAGE_START 0x01UL
@@ -390,12 +391,44 @@ static CcwDevice *s390_get_ccw_device(DeviceState *dev_st, int *devtype)
 return ccw_dev;
 }

+void s390_ipl_fmt_loadparm(uint8_t *loadparm, char *str, Error **errp)
+{
+    int i;
+
+    /* Initialize the loadparm with spaces */
+    memset(loadparm, ' ', LOADPARM_LEN);
+    for (i = 0; i < LOADPARM_LEN && str[i]; i++) {
+        uint8_t c = qemu_toupper(str[i]); /* mimic HMC */
+
+        if (('A' <= c && c <= 'Z') || ('0' <= c && c <= '9') || (c == '.') ||
+            (c == ' ')) {

You could simplify it to: if (qemu_isalnum(c) || c == '.' || c == ' ')

+            loadparm[i] = c;
+        } else {
+            error_setg(errp, "LOADPARM: invalid character '%c' (ASCII 0x%02x)",
+                       c, c);
+            return;
+        }
+    }
+}
+
+void s390_ipl_set_loadparm(char *ascii_lp, uint8_t *ebcdic_lp)

I'd maybe rename this function now, maybe s390_ipl_convert_loadparm() or something similar?

+{
+    int i;
+
+    /* Initialize the loadparm with EBCDIC spaces (0x40) */
+    memset(ebcdic_lp, '@',
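As a standalone illustration of that validation loop, here is the same logic with plain ctype.h in place of QEMU's qemu/ctype.h helpers and an int return instead of Error ** (both substitutions are for the sketch only). Note that any simplification must keep digits valid, i.e. an isalnum-style check rather than isalpha, since loadparms may contain [A-Za-z0-9. ].

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

#define LOADPARM_LEN 8

/* Pad with spaces, upper-case like the HMC does, and accept only
 * [A-Za-z0-9. ].  Returns 0 on success, -1 on an invalid character. */
static int fmt_loadparm(unsigned char *loadparm, const char *str)
{
    int i;

    memset(loadparm, ' ', LOADPARM_LEN);
    for (i = 0; i < LOADPARM_LEN && str[i]; i++) {
        unsigned char c = toupper((unsigned char)str[i]);

        if (isalnum(c) || c == '.' || c == ' ') {
            loadparm[i] = c;
        } else {
            return -1;
        }
    }
    return 0;
}
```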
Re: [kvm-unit-tests PATCH v9 31/31] powerpc: gitlab CI update
On 04/05/2024 14.28, Nicholas Piggin wrote: This adds testing for the powernv machine, and adds a gitlab-ci test group instead of specifying all tests in .gitlab-ci.yml, and adds a few new tests (smp, atomics) that are known to work in CI. Signed-off-by: Nicholas Piggin --- .gitlab-ci.yml| 30 -- powerpc/unittests.cfg | 32 ++-- 2 files changed, 34 insertions(+), 28 deletions(-) diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index 23bb69e24..31a2a4e34 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -97,17 +97,10 @@ build-ppc64be: - cd build - ../configure --arch=ppc64 --endian=big --cross-prefix=powerpc64-linux-gnu- - make -j2 - - ACCEL=tcg ./run_tests.sh - selftest-setup - selftest-migration - selftest-migration-skip - spapr_hcall - rtas-get-time-of-day - rtas-get-time-of-day-base - rtas-set-time-of-day - emulator - | tee results.txt - - if grep -q FAIL results.txt ; then exit 1 ; fi + - ACCEL=tcg MAX_SMP=8 ./run_tests.sh -g gitlab-ci | tee results.txt + - grep -q PASS results.txt && ! grep -q FAIL results.txt + - ACCEL=tcg MAX_SMP=8 MACHINE=powernv ./run_tests.sh -g gitlab-ci | tee results.txt + - grep -q PASS results.txt && ! grep -q FAIL results.txt build-ppc64le: extends: .intree_template @@ -115,17 +108,10 @@ build-ppc64le: - dnf install -y qemu-system-ppc gcc-powerpc64-linux-gnu nmap-ncat - ./configure --arch=ppc64 --endian=little --cross-prefix=powerpc64-linux-gnu- - make -j2 - - ACCEL=tcg ./run_tests.sh - selftest-setup - selftest-migration - selftest-migration-skip - spapr_hcall - rtas-get-time-of-day - rtas-get-time-of-day-base - rtas-set-time-of-day - emulator - | tee results.txt - - if grep -q FAIL results.txt ; then exit 1 ; fi + - ACCEL=tcg MAX_SMP=8 ./run_tests.sh -g gitlab-ci | tee results.txt + - grep -q PASS results.txt && ! grep -q FAIL results.txt + - ACCEL=tcg MAX_SMP=8 MACHINE=powernv ./run_tests.sh -g gitlab-ci | tee results.txt + - grep -q PASS results.txt && ! 
grep -q FAIL results.txt # build-riscv32: # Fedora doesn't package a riscv32 compiler for QEMU. Oh, well. diff --git a/powerpc/unittests.cfg b/powerpc/unittests.cfg index d767f5d68..6fae688a8 100644 --- a/powerpc/unittests.cfg +++ b/powerpc/unittests.cfg @@ -16,17 +16,25 @@ file = selftest.elf smp = 2 extra_params = -m 1g -append 'setup smp=2 mem=1024' -groups = selftest +groups = selftest gitlab-ci [selftest-migration] file = selftest-migration.elf machine = pseries groups = selftest migration +# QEMU 7.0 (Fedora 37) in gitlab CI has known migration bugs in TCG, so +# make a kvm-only version for CI +[selftest-migration-ci] +file = selftest-migration.elf +machine = pseries +groups = nodefault selftest migration gitlab-ci +accel = kvm + [selftest-migration-skip] file = selftest-migration.elf machine = pseries -groups = selftest migration +groups = selftest migration gitlab-ci extra_params = -append "skip" [migration-memory] @@ -37,6 +45,7 @@ groups = migration [spapr_hcall] file = spapr_hcall.elf machine = pseries +groups = gitlab-ci [spapr_vpa] file = spapr_vpa.elf @@ -47,38 +56,43 @@ file = rtas.elf machine = pseries timeout = 5 extra_params = -append "get-time-of-day date=$(date +%s)" -groups = rtas +groups = rtas gitlab-ci [rtas-get-time-of-day-base] file = rtas.elf machine = pseries timeout = 5 extra_params = -rtc base="2006-06-17" -append "get-time-of-day date=$(date --date="2006-06-17 UTC" +%s)" -groups = rtas +groups = rtas gitlab-ci [rtas-set-time-of-day] file = rtas.elf machine = pseries extra_params = -append "set-time-of-day" timeout = 5 -groups = rtas +groups = rtas gitlab-ci [emulator] file = emulator.elf +groups = gitlab-ci +# QEMU 7.0 (Fedora 37) in gitlab CI fails this [interrupts] file = interrupts.elf +# QEMU 7.0 (Fedora 37) in gitlab CI fails this [mmu] file = mmu.elf smp = $MAX_SMP +# QEMU 7.0 (Fedora 37) in gitlab CI fails this [pmu] file = pmu.elf [smp] file = smp.elf smp = 2 +groups = gitlab-ci [smp-smt] file = smp.elf @@ -92,16 +106,19 @@ 
accel = tcg,thread=single [atomics] file = atomics.elf +groups = gitlab-ci [atomics-migration] file = atomics.elf machine = pseries extra_params = -append "migration -m" -groups = migration +groups = migration gitlab-ci +# QEMU 7.0 (Fedora 37) in gitlab CI fails this [timebase] file = timebase.elf +# QEMU 7.0 (Fedora 37) in gitlab CI fails this [timebase-icount] file = timebase.elf accel = tcg @@ -115,14 +132,17 @@ smp = 2,threads=2 extra_params = -machine cap-htm=on -append "h_cede_tm" groups = h_cede_tm +# QEMU 7.0 (Fedora 37) in gitlab CI fails this [sprs] file = sprs.elf +# QEMU 7.0 (Fedora 37) in gitlab CI fails this [sprs-migration] file = sprs.elf machine = pseries
Re: [kvm-unit-tests PATCH v9 30/31] powerpc: Add facility to query TCG or KVM host
On 04/05/2024 14.28, Nicholas Piggin wrote: Use device tree properties to determine whether KVM or TCG is in use. Logically these are not the inverse of one another, because KVM can be used on top of a TCG processor (if TCG is emulating HV mode, or if it provides a nested hypervisor interface with spapr). This can be a problem because some issues relate to TCG CPU emulation, and some to the spapr hypervisor implementation. At the moment there is no way to determine TCG is running a KVM host that is running the tests, but the two independent variables are added in case that is able to be determined in future. For now that case is just incorrectly considered to be kvm && !tcg. Use this facility to restrict some of the known test failures to TCG. Signed-off-by: Nicholas Piggin --- lib/powerpc/asm/processor.h | 3 +++ lib/powerpc/setup.c | 25 + powerpc/atomics.c | 2 +- powerpc/interrupts.c| 6 -- powerpc/mmu.c | 2 +- powerpc/pmu.c | 6 +++--- powerpc/sprs.c | 2 +- powerpc/timebase.c | 4 ++-- powerpc/tm.c| 2 +- 9 files changed, 41 insertions(+), 11 deletions(-) As mentioned elsewhere, it would be nice to have this earlier in the series so you could use the conditions in the earlier patches already (but if it is too cumbersome to rework, I don't insist on that). Reviewed-by: Thomas Huth
Re: [kvm-unit-tests PATCH v9 29/31] powerpc: Remove remnants of ppc64 directory and build structure
On 04/05/2024 14.28, Nicholas Piggin wrote:
This merges the ppc64 directories and files into powerpc, and merges the 3 makefiles into one. The configure --arch=powerpc option is aliased to ppc64 for good measure.

Signed-off-by: Nicholas Piggin
---
...
diff --git a/powerpc/Makefile b/powerpc/Makefile
index 8a007ab54..e4b5312a2 100644
--- a/powerpc/Makefile
+++ b/powerpc/Makefile
@@ -1 +1,111 @@
-include $(SRCDIR)/$(TEST_DIR)/Makefile.$(ARCH)
+#
+# powerpc makefile
+#
+# Authors: Andrew Jones

I'd maybe drop that e-mail address now since it is not valid anymore. Andrew, do you want to see your new mail address here?

Apart from that:
Acked-by: Thomas Huth
Re: [kvm-unit-tests PATCH v9 27/31] powerpc: add pmu tests
On 04/05/2024 14.28, Nicholas Piggin wrote: Add some initial PMU testing. - PMC5/6 tests - PMAE / PMI test - BHRB basic tests Signed-off-by: Nicholas Piggin --- ... diff --git a/lib/powerpc/setup.c b/lib/powerpc/setup.c index a4ff678ce..8ff4939e2 100644 --- a/lib/powerpc/setup.c +++ b/lib/powerpc/setup.c @@ -33,6 +33,7 @@ u32 initrd_size; u32 cpu_to_hwid[NR_CPUS] = { [0 ... NR_CPUS-1] = (~0U) }; int nr_cpus_present; uint64_t tb_hz; +uint64_t cpu_hz; struct mem_region mem_regions[NR_MEM_REGIONS]; phys_addr_t __physical_start, __physical_end; @@ -42,6 +43,7 @@ struct cpu_set_params { unsigned icache_bytes; unsigned dcache_bytes; uint64_t tb_hz; + uint64_t cpu_hz; }; static void cpu_set(int fdtnode, u64 regval, void *info) @@ -95,6 +97,22 @@ static void cpu_set(int fdtnode, u64 regval, void *info) data = (u32 *)prop->data; params->tb_hz = fdt32_to_cpu(*data); + prop = fdt_get_property(dt_fdt(), fdtnode, + "ibm,extended-clock-frequency", NULL); + if (prop) { + data = (u32 *)prop->data; + params->cpu_hz = fdt32_to_cpu(*data); + params->cpu_hz <<= 32; + data = (u32 *)prop->data + 1; + params->cpu_hz |= fdt32_to_cpu(*data); Why don't you simply cast to (u64 *) and use fdt64_to_cpu() here instead? ... diff --git a/powerpc/pmu.c b/powerpc/pmu.c new file mode 100644 index 0..8b13ee4cd --- /dev/null +++ b/powerpc/pmu.c @@ -0,0 +1,403 @@ ... 
+static void test_pmc5_with_fault(void) +{ + unsigned long pmc5_1, pmc5_2; + + handle_exception(0x700, _handler, NULL); + handle_exception(0xe40, _handler, NULL); + + reset_mmcr0(); + mtspr(SPR_PMC5, 0); + mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) & ~(MMCR0_FC | MMCR0_FC56)); + asm volatile(".long 0x0" ::: "memory"); + mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) | (MMCR0_FC | MMCR0_FC56)); + assert(got_interrupt); + got_interrupt = false; + pmc5_1 = mfspr(SPR_PMC5); + + reset_mmcr0(); + mtspr(SPR_PMC5, 0); + mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) & ~(MMCR0_FC | MMCR0_FC56)); + asm volatile(".rep 20 ; nop ; .endr ; .long 0x0" ::: "memory"); + mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) | (MMCR0_FC | MMCR0_FC56)); + assert(got_interrupt); + got_interrupt = false; + pmc5_2 = mfspr(SPR_PMC5); + + /* TCG and POWER9 do not count instructions around faults correctly */ + report_kfail(true, pmc5_1 + 20 == pmc5_2, "PMC5 counts instructions with fault"); It would be nice to have the TCG detection patch before this patch, so you could use the right condition here right from the start. 
+ handle_exception(0x700, NULL, NULL); + handle_exception(0xe40, NULL, NULL); +} + +static void test_pmc5_with_sc(void) +{ + unsigned long pmc5_1, pmc5_2; + + handle_exception(0xc00, _handler, NULL); + + reset_mmcr0(); + mtspr(SPR_PMC5, 0); + mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) & ~(MMCR0_FC | MMCR0_FC56)); + asm volatile("sc 0" ::: "memory"); + mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) | (MMCR0_FC | MMCR0_FC56)); + assert(got_interrupt); + got_interrupt = false; + pmc5_1 = mfspr(SPR_PMC5); + + reset_mmcr0(); + mtspr(SPR_PMC5, 0); + mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) & ~(MMCR0_FC | MMCR0_FC56)); + asm volatile(".rep 20 ; nop ; .endr ; sc 0" ::: "memory"); + mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) | (MMCR0_FC | MMCR0_FC56)); + assert(got_interrupt); + got_interrupt = false; + pmc5_2 = mfspr(SPR_PMC5); + + /* TCG does not count instructions around syscalls correctly */ + report_kfail(true, pmc5_1 + 20 == pmc5_2, "PMC5 counts instructions with syscall"); dito + handle_exception(0xc00, NULL, NULL); +} + +static void test_pmc56(void) +{ + unsigned long tmp; + + report_prefix_push("pmc56"); + + reset_mmcr0(); + mtspr(SPR_PMC5, 0); + mtspr(SPR_PMC6, 0); + report(mfspr(SPR_PMC5) == 0, "PMC5 zeroed"); + report(mfspr(SPR_PMC6) == 0, "PMC6 zeroed"); + mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) & ~MMCR0_FC); + msleep(100); + report(mfspr(SPR_PMC5) == 0, "PMC5 frozen"); + report(mfspr(SPR_PMC6) == 0, "PMC6 frozen"); + mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) & ~MMCR0_FC56); + mdelay(100); + mtspr(SPR_MMCR0, mfspr(SPR_MMCR0) | (MMCR0_FC | MMCR0_FC56)); + report(mfspr(SPR_PMC5) != 0, "PMC5 counting"); + report(mfspr(SPR_PMC6) != 0, "PMC6 counting"); + + /* Dynamic frequency scaling could cause to be out, so don't fail. */ + tmp = mfspr(SPR_PMC6); + report(true, "PMC6 ratio to reported clock frequency is %ld%%", tmp * 1000 / cpu_hz); report(true, ...) looks weird. Use report_info() instead? Thomas
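The fdt64_to_cpu() suggestion for reading ibm,extended-clock-frequency can be sanity-checked in isolation. The byte-swap helpers below are stand-ins for libfdt's accessors (device-tree cells are big-endian); the function names are made up for this sketch, and the point is simply that one 64-bit big-endian read equals the patch's two 32-bit reads stitched together.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-ins for libfdt's fdt32_to_cpu()/fdt64_to_cpu(): device-tree
 * cells are stored big-endian regardless of host endianness. */
static uint32_t fdt32_to_cpu_x(uint32_t v)
{
    const uint8_t *b = (const uint8_t *)&v;
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8) | b[3];
}

static uint64_t fdt64_to_cpu_x(uint64_t v)
{
    const uint8_t *b = (const uint8_t *)&v;
    uint64_t r = 0;
    int i;

    for (i = 0; i < 8; i++) {
        r = (r << 8) | b[i];
    }
    return r;
}

/* The patch's two-step read: two u32 cells combined by hand... */
static uint64_t read_hz_two_u32(const void *prop_data)
{
    uint32_t hi, lo;

    memcpy(&hi, prop_data, 4);
    memcpy(&lo, (const uint8_t *)prop_data + 4, 4);
    return (uint64_t)fdt32_to_cpu_x(hi) << 32 | fdt32_to_cpu_x(lo);
}

/* ...and the suggested single fdt64 read, which gives the same result. */
static uint64_t read_hz_u64(const void *prop_data)
{
    uint64_t v;

    memcpy(&v, prop_data, 8);
    return fdt64_to_cpu_x(v);
}
```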
Re: [kvm-unit-tests PATCH v9 26/31] powerpc: add usermode support
On 04/05/2024 14.28, Nicholas Piggin wrote: The biggest difficulty for user mode is MMU support. Otherwise it is a simple matter of setting and clearing MSR[PR] with rfid and sc respectively. Some common harness operations will fail in usermode, so some workarounds are reqiured (e.g., puts() can't be used directly). A usermode privileged instruction interrupt test is added. Signed-off-by: Nicholas Piggin --- Reviewed-by: Thomas Huth
Re: [kvm-unit-tests PATCH v9 25/31] powerpc: Add sieve.c common test
On 04/05/2024 14.28, Nicholas Piggin wrote: Now that sieve copes with lack of MMU support, it can be run by powerpc. Signed-off-by: Nicholas Piggin --- powerpc/Makefile.common | 1 + powerpc/sieve.c | 1 + powerpc/unittests.cfg | 3 +++ 3 files changed, 5 insertions(+) create mode 12 powerpc/sieve.c Reviewed-by: Thomas Huth
Re: [kvm-unit-tests PATCH v9 24/31] common/sieve: Support machines without MMU
On 04/05/2024 14.28, Nicholas Piggin wrote: Not all powerpc CPUs provide MMU support. Define vm_available() that is true by default but archs can override it. Use this to run VM tests. Cc: Paolo Bonzini Cc: Thomas Huth Cc: k...@vger.kernel.org Reviewed-by: Andrew Jones Signed-off-by: Nicholas Piggin --- Reviewed-by: Thomas Huth
Re: [PATCH] qtest/x86/numa-test: do not use the obsolete 'pentium' cpu
On 04/06/2024 08.21, Ani Sinha wrote:
The 'pentium' CPU is old and obsolete and should be avoided for running tests if it's not strictly needed. Use the 'max' CPU instead for this generic, non-CPU-specific NUMA test.

CC: th...@redhat.com
Signed-off-by: Ani Sinha
---
 tests/qtest/numa-test.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
index 7aa262dbb9..f01f19592d 100644
--- a/tests/qtest/numa-test.c
+++ b/tests/qtest/numa-test.c
@@ -125,7 +125,8 @@ static void pc_numa_cpu(const void *data)
 QTestState *qts;
 g_autofree char *cli = NULL;
-cli = make_cli(data, "-cpu pentium -machine smp.cpus=8,smp.sockets=2,smp.cores=2,smp.threads=2 "
+cli = make_cli(data,
+"-cpu max -machine smp.cpus=8,smp.sockets=2,smp.cores=2,smp.threads=2 "
 "-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
 "-numa cpu,node-id=1,socket-id=0 "
 "-numa cpu,node-id=0,socket-id=1,core-id=0 "

Reviewed-by: Thomas Huth
Re: [kvm-unit-tests PATCH v9 23/31] common/sieve: Use vmalloc.h for setup_mmu definition
On 04/05/2024 14.28, Nicholas Piggin wrote: There is no good reason to put setup_vm in libcflat.h when it's defined in vmalloc.h. Cc: Paolo Bonzini Cc: Thomas Huth Cc: Janosch Frank Cc: Claudio Imbrenda Cc: Nico Böhr Cc: David Hildenbrand Cc: k...@vger.kernel.org Cc: linux-s...@vger.kernel.org Acked-by: Andrew Jones Signed-off-by: Nicholas Piggin --- Reviewed-by: Thomas Huth
Re: [kvm-unit-tests PATCH v9 22/31] powerpc: Add MMU support
On 04/05/2024 14.28, Nicholas Piggin wrote: Add support for radix MMU, 4kB and 64kB pages. This also adds MMU interrupt test cases, and runs the interrupts test entirely with MMU enabled if it is available (aside from machine check tests). Acked-by: Andrew Jones (configure changes) Signed-off-by: Nicholas Piggin --- ... diff --git a/lib/ppc64/mmu.c b/lib/ppc64/mmu.c new file mode 100644 index 0..5307cd862 --- /dev/null +++ b/lib/ppc64/mmu.c @@ -0,0 +1,281 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Radix MMU support + * + * Copyright (C) 2024, IBM Inc, Nicholas Piggin + * + * Derived from Linux kernel MMU code. + */ +#include +#include +#include +#include +#include +#include +#include + +#include "alloc_page.h" +#include "vmalloc.h" +#include +#include + +#include + +static pgd_t *identity_pgd; + +bool vm_available(void) +{ + return cpu_has_radix; +} + +bool mmu_enabled(void) +{ + return current_cpu()->pgtable != NULL; +} + +void mmu_enable(pgd_t *pgtable) +{ + struct cpu *cpu = current_cpu(); + + if (!pgtable) + pgtable = identity_pgd; + + cpu->pgtable = pgtable; + + mtmsr(mfmsr() | (MSR_IR|MSR_DR)); +} + +void mmu_disable(void) +{ + struct cpu *cpu = current_cpu(); + + cpu->pgtable = NULL; + + mtmsr(mfmsr() & ~(MSR_IR|MSR_DR)); +} + +static inline void tlbie(unsigned long rb, unsigned long rs, int ric, int prs, int r) +{ + asm volatile(".machine push ; .machine power9; ptesync ; tlbie %0,%1,%2,%3,%4 ; eieio ; tlbsync ; ptesync ; .machine pop" :: "r"(rb), "r"(rs), "i"(ric), "i"(prs), "i"(r) : "memory"); That's a very long line, please split it up after every assembly instruction (using \n for new lines). +} ... diff --git a/powerpc/mmu.c b/powerpc/mmu.c new file mode 100644 index 0..fef790506 --- /dev/null +++ b/powerpc/mmu.c @@ -0,0 +1,283 @@ +/* SPDX-License-Identifier: LGPL-2.0-only */ +/* + * MMU Tests + * + * Copyright 2024 Nicholas Piggin, IBM Corp. 
+ */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static inline void tlbie(unsigned long rb, unsigned long rs, int ric, int prs, int r) +{ + asm volatile(".machine push ; .machine power9; ptesync ; tlbie %0,%1,%2,%3,%4 ; eieio ; tlbsync ; ptesync ; .machine pop" :: "r"(rb), "r"(rs), "i"(ric), "i"(prs), "i"(r) : "memory"); +} Same function again? Maybe it could go into mmu.h instead? +static inline void tlbiel(unsigned long rb, unsigned long rs, int ric, int prs, int r) +{ + asm volatile(".machine push ; .machine power9; ptesync ; tlbiel %0,%1,%2,%3,%4 ; ptesync ; .machine pop" :: "r"(rb), "r"(rs), "i"(ric), "i"(prs), "i"(r) : "memory"); +} Please also split up the above long line. It would also be cool if you could get one of the other ppc guys at IBM to review this patch, since I don't have a clue about this MMU stuff at all. Thanks, Thomas
Re: [kvm-unit-tests PATCH v9 21/31] powerpc: Add timebase tests
On 04/05/2024 14.28, Nicholas Piggin wrote:
This has a known failure on QEMU TCG machines where the decrementer interrupt is not lowered when the DEC wraps from -ve to +ve.

Would it then make sense to mark the test with accel = kvm to avoid the test failure when running with TCG?

diff --git a/powerpc/timebase.c b/powerpc/timebase.c
new file mode 100644
index 0..02a4e33c0
--- /dev/null
+++ b/powerpc/timebase.c
@@ -0,0 +1,331 @@
+/* SPDX-License-Identifier: LGPL-2.0-only */
+/*
+ * Test Timebase
+ *
+ * Copyright 2024 Nicholas Piggin, IBM Corp.
+ *
+ * This contains tests of timebase facility, TB, DEC, etc.
+ */
+#include +#include +#include +#include +#include +#include +#include +#include +#include +#include
+
+static int dec_bits = 0;
+
+static void cpu_dec_bits(int fdtnode, u64 regval __unused, void *arg __unused)
+{
+    const struct fdt_property *prop;
+    int plen;
+
+    prop = fdt_get_property(dt_fdt(), fdtnode, "ibm,dec-bits", &plen);
+    if (!prop) {
+        dec_bits = 32;
+        return;
+    }
+
+    /* Sanity check for the property layout (first two bytes are header) */
+    assert(plen == 4);
+
+    dec_bits = fdt32_to_cpu(*(uint32_t *)prop->data);
+}
+
+/* Check amount of CPUs nodes that have the TM flag */
+static int find_dec_bits(void)
+{
+    int ret;
+
+    ret = dt_for_each_cpu_node(cpu_dec_bits, NULL);

What sense does it make to run this for each CPU node if the cpu_dec_bits function always overwrites the global dec_bits variable? Wouldn't it be sufficient to run this for the first node only? Or should the cpu_dec_bits function maybe check that all nodes have the same value?
+    if (ret < 0)
+        return ret;
+
+    return dec_bits;
+}
+
+
+static bool do_migrate = false;
+static volatile bool got_interrupt;
+static volatile struct pt_regs recorded_regs;
+
+static uint64_t dec_max;
+static uint64_t dec_min;
+
+static void test_tb(int argc, char **argv)
+{
+    uint64_t tb;
+
+    tb = get_tb();
+    if (do_migrate)
+        migrate();
+    report(get_tb() >= tb, "timebase is incrementing");

If you use >= for testing, it could also mean that the TB stays at the same value, so "timebase is incrementing" sounds misleading. Maybe rather "timebase is not decreasing"? Or wait a little bit, then check with ">" only?

+}
+
+static void dec_stop_handler(struct pt_regs *regs, void *data)
+{
+    mtspr(SPR_DEC, dec_max);
+}
+
+static void dec_handler(struct pt_regs *regs, void *data)
+{
+    got_interrupt = true;
+    memcpy((void *)&recorded_regs, regs, sizeof(struct pt_regs));
+    regs->msr &= ~MSR_EE;
+}
+
+static void test_dec(int argc, char **argv)
+{
+    uint64_t tb1, tb2, dec;
+    int i;
+
+    handle_exception(0x900, &dec_handler, NULL);
+
+    for (i = 0; i < 100; i++) {
+        tb1 = get_tb();
+        mtspr(SPR_DEC, dec_max);
+        dec = mfspr(SPR_DEC);
+        tb2 = get_tb();
+        if (tb2 - tb1 < dec_max - dec)
+            break;
+    }
+    /* POWER CPUs can have a slight (few ticks) variation here */
+    report_kfail(true, tb2 - tb1 >= dec_max - dec, "decrementer remains within TB after mtDEC");
+
+    tb1 = get_tb();
+    mtspr(SPR_DEC, dec_max);
+    mdelay(1000);
+    dec = mfspr(SPR_DEC);
+    tb2 = get_tb();
+    report(tb2 - tb1 >= dec_max - dec, "decrementer remains within TB after 1s");
+
+    mtspr(SPR_DEC, dec_max);
+    local_irq_enable();
+    local_irq_disable();
+    if (mfspr(SPR_DEC) <= dec_max) {
+        report(!got_interrupt, "no interrupt on decrementer positive");
+    }
+    got_interrupt = false;
+
+    mtspr(SPR_DEC, 1);
+    mdelay(100); /* Give the timer a chance to run */
+    if (do_migrate)
+        migrate();
+    local_irq_enable();
+    local_irq_disable();
+    report(got_interrupt, "interrupt on decrementer underflow");
+    got_interrupt = false;
+
+    if (do_migrate)
+
migrate(); + local_irq_enable(); + local_irq_disable(); + report(got_interrupt, "interrupt on decrementer still underflown"); + got_interrupt = false; + + mtspr(SPR_DEC, 0); + mdelay(100); /* Give the timer a chance to run */ + if (do_migrate) + migrate(); + local_irq_enable(); + local_irq_disable(); + report(got_interrupt, "DEC deal with set to 0"); + got_interrupt = false; + + /* Test for level-triggered decrementer */ + mtspr(SPR_DEC, -1ULL); + if (do_migrate) + migrate(); + local_irq_enable(); + local_irq_disable(); + report(got_interrupt, "interrupt on decrementer write MSB"); + got_interrupt = false; + + mtspr(SPR_DEC, dec_max); + local_irq_enable(); + if (do_migrate) + migrate(); + mtspr(SPR_DEC, -1); +
Re: [kvm-unit-tests PATCH v9 20/31] powerpc: Add atomics tests
On 04/05/2024 14.28, Nicholas Piggin wrote: Signed-off-by: Nicholas Piggin --- Please provide at least a short patch description about what is being tested here! Thanks, Thomas
Re: [kvm-unit-tests PATCH v9 19/31] powerpc: Avoid using larx/stcx. in spinlocks when only one CPU is running
On 04/05/2024 14.28, Nicholas Piggin wrote: The test harness uses spinlocks if they are implemented with larx/stcx. it can prevent some test scenarios such as testing migration of a reservation. I'm having a hard time to understand that patch description. Maybe you could rephrase it / elaborate what's the exact problem here? Thanks, Thomas Signed-off-by: Nicholas Piggin --- lib/powerpc/asm/smp.h| 1 + lib/powerpc/smp.c| 5 + lib/powerpc/spinlock.c | 29 + lib/ppc64/asm/spinlock.h | 7 ++- powerpc/Makefile.common | 1 + 5 files changed, 42 insertions(+), 1 deletion(-) create mode 100644 lib/powerpc/spinlock.c diff --git a/lib/powerpc/asm/smp.h b/lib/powerpc/asm/smp.h index c45988bfa..66188b9dd 100644 --- a/lib/powerpc/asm/smp.h +++ b/lib/powerpc/asm/smp.h @@ -15,6 +15,7 @@ struct cpu { extern int nr_cpus_present; extern int nr_cpus_online; +extern bool multithreaded; extern struct cpu cpus[]; register struct cpu *__current_cpu asm("r13"); diff --git a/lib/powerpc/smp.c b/lib/powerpc/smp.c index 27b169841..73c0ef214 100644 --- a/lib/powerpc/smp.c +++ b/lib/powerpc/smp.c @@ -276,6 +276,8 @@ static void start_each_secondary(int fdtnode, u64 regval __unused, void *info) start_core(fdtnode, datap->entry); } +bool multithreaded = false; + /* * Start all stopped cpus on the guest at entry with register 3 set to r3 * We expect that we come in with only one thread currently started @@ -290,6 +292,7 @@ bool start_all_cpus(secondary_entry_fn entry) assert(nr_cpus_online == 1); assert(nr_started == 1); + multithreaded = true; ret = dt_for_each_cpu_node(start_each_secondary, ); assert(ret == 0); assert(nr_started == nr_cpus_present); @@ -361,10 +364,12 @@ static void wait_each_secondary(int fdtnode, u64 regval __unused, void *info) void stop_all_cpus(void) { + assert(multithreaded); while (nr_cpus_online > 1) cpu_relax(); dt_for_each_cpu_node(wait_each_secondary, NULL); mb(); nr_started = 1; + multithreaded = false; } diff --git a/lib/powerpc/spinlock.c b/lib/powerpc/spinlock.c new 
file mode 100644 index 0..623a1f2c1 --- /dev/null +++ b/lib/powerpc/spinlock.c @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: LGPL-2.0 */ +#include +#include + +/* + * Skip the atomic when single-threaded, which helps avoid larx/stcx. in + * the harness when testing tricky larx/stcx. sequences (e.g., migration + * vs reservation). + */ +void spin_lock(struct spinlock *lock) +{ + if (!multithreaded) { + assert(lock->v == 0); + lock->v = 1; + } else { + while (__sync_lock_test_and_set(&lock->v, 1)) + ; + } +} + +void spin_unlock(struct spinlock *lock) +{ + assert(lock->v == 1); + if (!multithreaded) { + lock->v = 0; + } else { + __sync_lock_release(&lock->v); + } +} diff --git a/lib/ppc64/asm/spinlock.h b/lib/ppc64/asm/spinlock.h index f59eed191..b952386da 100644 --- a/lib/ppc64/asm/spinlock.h +++ b/lib/ppc64/asm/spinlock.h @@ -1,6 +1,11 @@ #ifndef _ASMPPC64_SPINLOCK_H_ #define _ASMPPC64_SPINLOCK_H_ -#include +struct spinlock { + unsigned int v; +}; + +void spin_lock(struct spinlock *lock); +void spin_unlock(struct spinlock *lock); #endif /* _ASMPPC64_SPINLOCK_H_ */ diff --git a/powerpc/Makefile.common b/powerpc/Makefile.common index b98f71c2f..1ee9c25d6 100644 --- a/powerpc/Makefile.common +++ b/powerpc/Makefile.common @@ -49,6 +49,7 @@ cflatobjs += lib/powerpc/rtas.o cflatobjs += lib/powerpc/processor.o cflatobjs += lib/powerpc/handlers.o cflatobjs += lib/powerpc/smp.o +cflatobjs += lib/powerpc/spinlock.o OBJDIRS += lib/powerpc
Re: [kvm-unit-tests PATCH v9 18/31] powerpc: Permit ACCEL=tcg,thread=single
On 04/05/2024 14.28, Nicholas Piggin wrote: Modify run script to permit single vs mttcg threading, add a thread=single smp case to unittests.cfg. Signed-off-by: Nicholas Piggin --- powerpc/run | 4 ++-- powerpc/unittests.cfg | 6 ++ 2 files changed, 8 insertions(+), 2 deletions(-) Reviewed-by: Thomas Huth
Re: [kvm-unit-tests PATCH v9 16/31] powerpc: add SMP and IPI support
On 04/05/2024 14.28, Nicholas Piggin wrote: powerpc SMP support is very primitive and does not set up a first-class runtime environment for secondary CPUs. This reworks SMP support, and provides a complete C and harness environment for the secondaries, including interrupt handling, as well as IPI support. Signed-off-by: Nicholas Piggin I now skimmed through the patch and it looks fine so far: Acked-by: Thomas Huth
Re: [PATCH 3/5] s390x: Build IPLB chain for multiple boot devices
On 29/05/2024 17.43, jro...@linux.ibm.com wrote: From: Jared Rossi Write a chain of IPLBs into memory for future use. The IPLB chain is placed immediately before the BIOS in memory at the highest unused page boundary providing sufficient space to fit the chain. Because this is not a fixed address, the location of the next IPLB and number of remaining boot devices is stored in the QIPL global variable for later access. At this stage the IPLB chain is not accessed by the guest during IPL. Signed-off-by: Jared Rossi --- ... @@ -422,54 +440,51 @@ void s390_ipl_set_loadparm(char *ascii_lp, uint8_t *ebcdic_lp) } } -static bool s390_gen_initial_iplb(S390IPLState *ipl) +static bool s390_build_iplb(DeviceState *dev_st, IplParameterBlock *iplb) { -DeviceState *dev_st; +S390IPLState *ipl = get_ipl_device(); CcwDevice *ccw_dev = NULL; SCSIDevice *sd; int devtype; uint8_t *lp; -dev_st = get_boot_device(0); -if (dev_st) { -ccw_dev = s390_get_ccw_device(dev_st, &devtype); -} - /* * Currently allow IPL only from CCW devices. */ +ccw_dev = s390_get_ccw_device(dev_st, &devtype); if (ccw_dev) { lp = ccw_dev->loadparm; -switch (devtype) { -case CCW_DEVTYPE_SCSI: + switch (devtype) { + case CCW_DEVTYPE_SCSI: Bad indentation? 
sd = SCSI_DEVICE(dev_st); -ipl->iplb.len = cpu_to_be32(S390_IPLB_MIN_QEMU_SCSI_LEN); -ipl->iplb.blk0_len = +iplb->len = cpu_to_be32(S390_IPLB_MIN_QEMU_SCSI_LEN); +iplb->blk0_len = cpu_to_be32(S390_IPLB_MIN_QEMU_SCSI_LEN - S390_IPLB_HEADER_LEN); -ipl->iplb.pbt = S390_IPL_TYPE_QEMU_SCSI; -ipl->iplb.scsi.lun = cpu_to_be32(sd->lun); -ipl->iplb.scsi.target = cpu_to_be16(sd->id); -ipl->iplb.scsi.channel = cpu_to_be16(sd->channel); -ipl->iplb.scsi.devno = cpu_to_be16(ccw_dev->sch->devno); -ipl->iplb.scsi.ssid = ccw_dev->sch->ssid & 3; +iplb->pbt = S390_IPL_TYPE_QEMU_SCSI; +iplb->scsi.lun = cpu_to_be32(sd->lun); +iplb->scsi.target = cpu_to_be16(sd->id); +iplb->scsi.channel = cpu_to_be16(sd->channel); +iplb->scsi.devno = cpu_to_be16(ccw_dev->sch->devno); +iplb->scsi.ssid = ccw_dev->sch->ssid & 3; break; case CCW_DEVTYPE_VFIO: -ipl->iplb.len = cpu_to_be32(S390_IPLB_MIN_CCW_LEN); -ipl->iplb.pbt = S390_IPL_TYPE_CCW; -ipl->iplb.ccw.devno = cpu_to_be16(ccw_dev->sch->devno); -ipl->iplb.ccw.ssid = ccw_dev->sch->ssid & 3; +iplb->len = cpu_to_be32(S390_IPLB_MIN_CCW_LEN); +iplb->pbt = S390_IPL_TYPE_CCW; +iplb->ccw.devno = cpu_to_be16(ccw_dev->sch->devno); +iplb->ccw.ssid = ccw_dev->sch->ssid & 3; break; case CCW_DEVTYPE_VIRTIO_NET: +/* The S390IPLState netboot is ture if ANY IPLB may use netboot */ Typo: ture --> true Thomas
Re: [PATCH 1/5] s390x: Create include files for s390x IPL definitions
Hi Jared! On 29/05/2024 17.43, jro...@linux.ibm.com wrote: From: Jared Rossi Currently, stuctures defined in both hw/s390x/ipl.h and pc-bios/s390-ccw/iplb.h Typo: s/stuctures/structures/ must be kept in sync, which is prone to error. Instead, create a new directory at include/hw/s390x/ipl/ to contain the definitions that must be shared. Signed-off-by: Jared Rossi --- hw/s390x/ipl.h | 113 +--- include/hw/s390x/ipl/qipl.h | 126 pc-bios/s390-ccw/iplb.h | 84 ++-- pc-bios/s390-ccw/Makefile | 2 +- 4 files changed, 133 insertions(+), 192 deletions(-) create mode 100644 include/hw/s390x/ipl/qipl.h ... diff --git a/pc-bios/s390-ccw/Makefile b/pc-bios/s390-ccw/Makefile index acfcd1e71a..a771439acf 100644 --- a/pc-bios/s390-ccw/Makefile +++ b/pc-bios/s390-ccw/Makefile @@ -3,7 +3,7 @@ all: build-all @true include config-host.mak -CFLAGS = -O2 -g +CFLAGS = -O2 -g -I $(SRC_PATH)/../..//include/hw/s390x/ipl Duplicate slash ---^ Apart from these two nits, patch looks fine to me. Thomas
Re: [PATCH] io/channel-socket: Fix -fsanitize=undefined problem with latest Clang
On 03/06/2024 17.52, Daniel P. Berrangé wrote: On Mon, Jun 03, 2024 at 04:12:34PM +0100, Peter Maydell wrote: On Mon, 3 Jun 2024 at 15:58, Peter Maydell wrote: On Mon, 3 Jun 2024 at 15:49, Daniel P. Berrangé wrote: We can't rely on the sanitizers to catch all cases where we're casting functions, as we don't have good enough code coverage in tests to identify all places that way. Unless there's a warning flag we can use to get diagnosis of this problem at compile time and then fix all reported issues, I won't have any confidence in our ability to remove -fsanitize-cfi-icall-generalize-pointers for CFI. You might think that -Wcast-function-type would detect them at compile time, but that has two loopholes: 1. void(*) (void) matches everything 2. any parameter of pointer type matches any other pointer type It seems to me rather inconsistent that the sanitizers do not match up with the warning semantics here. I think I would vote for raising that with the compiler folks -- either the sanitizer should be made looser so it matches the -Wcast-function-type semantics, or else a new tighter warning option that matches the sanitizer should be provided. Ideally both, I think. But it's definitely silly to have a runtime check that flags up things that the compiler perfectly well could detect at compile time but is choosing not to. Slightly further investigation suggests clang 16 and later have -Wcast-function-type-strict for the "report all the same casts that the sanitizer does". gcc doesn't I think have that yet. I didn't spot any option in the sanitizer to loosen the set of things it considers mismatched function pointers. 
I just tried that with CC=clang ./configure --target-list=x86_64-softmmu --extra-cflags="-Wcast-function-type-strict" --disable-werror and it rather shows the futility of the task, picking one reported warning that is repeated over & over in different contexts: In file included from qapi/qapi-types-block-core.c:15: qapi/qapi-types-block-core.h:3620:1: warning: cast from 'void (*)(DummyBlockCoreForceArrays *)' (aka 'void (*)(struct DummyBlockCoreForceArrays *)') to 'void (*)(void)' converts to incompatible function type [-Wcast-function-type-strict] 3620 | G_DEFINE_AUTOPTR_CLEANUP_FUNC(DummyBlockCoreForceArrays, qapi_free_DummyBlockCoreForceArrays) | ^ /usr/include/glib-2.0/glib/gmacros.h:1372:3: note: expanded from macro 'G_DEFINE_AUTOPTR_CLEANUP_FUNC' 1372 | _GLIB_DEFINE_AUTOPTR_CLEANUP_FUNCS(TypeName, TypeName, func) | ^~~~ /usr/include/glib-2.0/glib/gmacros.h:1364:57: note: expanded from macro '_GLIB_DEFINE_AUTOPTR_CLEANUP_FUNCS' 1364 | { if (*_q) g_queue_free_full (*_q, (GDestroyNotify) (void(*)(void)) cleanup); } \ | ^~~ IOW, the GLib headers themselves have bad casts in macros which we rely on. So we'll never be cast safe until GLib changes its own code...if it ever does. Ok, thanks for checking! So the ultimate answer to the problem (at least right now) is: Let's use -fno-sanitize=function in QEMU. Thomas
Re: [PATCH 4/4] update-linux-headers: import linux/kvm_para.h header
On 03/06/2024 15.11, Paolo Bonzini wrote: Right now QEMU is importing arch/x86/include/uapi/asm/kvm_para.h because it includes definitions for kvmclock and for KVM CPUID bits. However, other definitions for KVM hypercall values and return codes are included in include/uapi/linux/kvm_para.h and they will be used by SEV-SNP. To ensure that it is possible to include both <linux/kvm_para.h> and "standard-headers/asm-x86/kvm_para.h" without conflicts, include linux/kvm_para.h as a portable header too, and forward linux-headers/ files to those in include/standard-headers. Note that <linux/kvm_para.h> will include architecture-specific definitions as well, but "standard-headers/linux/kvm_para.h" will not because it can be used in architecture-independent files. This could easily be extended to other architectures, but right now they do not need any symbol in their specific kvm_para.h files. Signed-off-by: Paolo Bonzini --- include/standard-headers/linux/kvm_para.h | 38 +++ linux-headers/asm-x86/kvm_para.h | 1 + linux-headers/linux/kvm_para.h| 2 ++ scripts/update-linux-headers.sh | 22 - 4 files changed, 62 insertions(+), 1 deletion(-) create mode 100644 include/standard-headers/linux/kvm_para.h create mode 100644 linux-headers/asm-x86/kvm_para.h create mode 100644 linux-headers/linux/kvm_para.h Reviewed-by: Thomas Huth
Re: [PATCH 2/4] update-linux-headers: move pvpanic.h to correct directory
On 03/06/2024 15.11, Paolo Bonzini wrote: Linux has <misc/pvpanic.h>, not <linux/pvpanic.h>. Use the same directory for QEMU's include/standard-headers/ copy. Signed-off-by: Paolo Bonzini --- include/standard-headers/{linux => misc}/pvpanic.h | 0 hw/misc/pvpanic-isa.c | 2 +- hw/misc/pvpanic-pci.c | 2 +- hw/misc/pvpanic.c | 2 +- scripts/update-linux-headers.sh| 6 -- 5 files changed, 7 insertions(+), 5 deletions(-) rename include/standard-headers/{linux => misc}/pvpanic.h (100%) Reviewed-by: Thomas Huth
Re: [PATCH 1/4] update-linux-headers: fix forwarding to asm-generic headers
On 03/06/2024 15.11, Paolo Bonzini wrote: After commit 3efc75ad9d9 ("scripts/update-linux-headers.sh: Remove temporary directory inbetween", 2024-05-29), updating linux-headers/ results in errors such as cp: cannot stat '/tmp/tmp.1A1Eejh1UE/headers/include/asm/bitsperlong.h': No such file or directory Oops, sorry, I was pretty sure the update was working for me when I tested the patch ... maybe I was on an older branch that didn't have loongarch support yet. diff --git a/scripts/update-linux-headers.sh b/scripts/update-linux-headers.sh index 23afe8c08ad..ae34d18572b 100755 --- a/scripts/update-linux-headers.sh +++ b/scripts/update-linux-headers.sh @@ -118,7 +118,14 @@ for arch in $ARCHLIST; do rm -rf "$output/linux-headers/asm-$arch" mkdir -p "$output/linux-headers/asm-$arch" for header in kvm.h unistd.h bitsperlong.h mman.h; do -cp "$hdrdir/include/asm/$header" "$output/linux-headers/asm-$arch" +if test -f "$hdrdir/include/asm/$header"; then +cp "$hdrdir/include/asm/$header" "$output/linux-headers/asm-$arch" +elif test -f "$hdrdir/include/asm-generic/$header"; then +# not installed as <asm/$header>, but used as such in kernel sources Maybe change the comment to talk about <asm-generic/$header> instead of <asm/$header>? +cat <<EOF >$output/linux-headers/asm-$arch/$header +#include <asm-generic/$header> +EOF +fi done if [ $arch = mips ]; then Reviewed-by: Thomas Huth
Re: [PATCH] io/channel-socket: Fix -fsanitize=undefined problem with latest Clang
On 03/06/2024 14.48, Daniel P. Berrangé wrote: On Wed, May 29, 2024 at 02:53:37PM +0100, Peter Maydell wrote: On Wed, 29 May 2024 at 14:32, Thomas Huth wrote: Casting function pointers from one type to another causes undefined behavior errors when compiling with -fsanitize=undefined with Clang v18: $ QTEST_QEMU_BINARY=./qemu-system-mips64 tests/qtest/netdev-socket TAP version 13 # random seed: R02S4424f4f460de783fdd3d72c5571d3adc 1..10 # Start of mips64 tests # Start of netdev tests # Start of stream tests # starting QEMU: exec ./qemu-system-mips64 -qtest unix:/tmp/qtest-1213196.sock -qtest-log /dev/null -chardev socket,path=/tmp/qtest-1213196.qmp,id=char0 -mon chardev=char0,mode=control -display none -audio none -nodefaults -M none -netdev stream,id=st0,addr.type=fd,addr.str=3 -accel qtest ../io/task.c:78:13: runtime error: call to function qapi_free_SocketAddress through pointer to incorrect function type 'void (*)(void *)' /tmp/qemu-sanitize/qapi/qapi-types-sockets.c:170: note: qapi_free_SocketAddress defined here SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior ../io/task.c:78:13 It's a pity the sanitizer error doesn't tell you the actual function type as well as the incorrect one it got cast to (especially since in this case the function and its declaration are both in generated code in the build tree not conveniently findable with 'git grep'.) In this case the function being called is: void qapi_free_SocketAddress(SocketAddress *obj) and it's cast to a GDestroyNotify, which is typedef void(*GDestroyNotify) (gpointer data); (and gpointer is void*) and although you can pass a foo* to a function expecting void*, you can't treat a pointer to a function taking foo* as if it was a pointer to a function taking void*, just in case the compiler needs to do some clever trickery with the pointer value. 
So the wrapper function looks like it doesn't do anything, but it's doing the permitted implicit-cast of the argument I guess that's the letter of the law in C, but does that actually matter in practice, historically ? The use of "(GDestroyNotify)blah" casts is standard practice across any application using GLib, and even in QEMU this is far from the only place that does such a cast: ... I presume this means the rest of the code merely isn't exercised by the CI job test case this is fixing, but if we're saying this needs fixing, then we should be fixing all usage. There are more spots discovered by the CI, I just didn't tackle them yet. Another problem is e.g. the call_rcu macro from include/qemu/rcu.h... So overall, I'm not in favour of this patch unless there's a compelling functional reason why just this 1 case is special and all the others can be safely ignored. We'd need to tackle them all. I thought it might be a good idea since we then could also switch to a more strict CFI mode (i.e. stop using -fsanitize-cfi-icall-generalize-pointers) ... but yes, looking at the amount of spots that need fixes, it feels like tilting at windmills. Maybe best if we really just add -fno-sanitize=function to the CFLAGS when we're compiling with the sanitizer enabled. Thomas
Re: [PATCH] target/i386: fix xsave.flat from kvm-unit-tests
On 03/06/2024 12.04, Paolo Bonzini wrote: xsave.flat checks that "executing the XSETBV instruction causes a general- protection fault (#GP) if ECX = 0 and EAX[2:1] has the value 10b". QEMU allows that option, so the test fails. Add the condition. Cc: qemu-sta...@nongnu.org Fixes: 892544317fe ("target/i386: implement XSAVE and XRSTOR of AVX registers", 2022-10-18) Reported-by: Thomas Huth Signed-off-by: Paolo Bonzini --- target/i386/tcg/fpu_helper.c | 5 + 1 file changed, 5 insertions(+) diff --git a/target/i386/tcg/fpu_helper.c b/target/i386/tcg/fpu_helper.c index e322293371c..e1b850f3fc2 100644 --- a/target/i386/tcg/fpu_helper.c +++ b/target/i386/tcg/fpu_helper.c @@ -3142,6 +3142,11 @@ void helper_xsetbv(CPUX86State *env, uint32_t ecx, uint64_t mask) goto do_gpf; } +/* SSE can be disabled, but only if AVX is disabled too. */ +if ((mask & (XSTATE_SSE_MASK | XSTATE_YMM_MASK)) == XSTATE_YMM_MASK) { +goto do_gpf; +} That fixes the kvm-unit-test, indeed! Thanks! Tested-by: Thomas Huth
Re: [kvm-unit-tests PATCH v9 15/31] powerpc: Enable page alloc operations
On 04/05/2024 14.28, Nicholas Piggin wrote: These will be used for stack allocation for secondary CPUs. Signed-off-by: Nicholas Piggin --- lib/powerpc/setup.c | 8 powerpc/Makefile.common | 1 + 2 files changed, 9 insertions(+) Reviewed-by: Thomas Huth
Re: [kvm-unit-tests PATCH v9 14/31] powerpc: Remove broken SMP exception stack setup
On 04/05/2024 14.28, Nicholas Piggin wrote: The exception stack setup does not work correctly for SMP, because it is the boot processor that calls cpu_set() which sets SPRG2 to the exception stack, not the target CPU itself. So secondaries never got their SPRG2 set to a valid exception stack. So secondary CPUs currently must not run into any exceptions? Remove the SMP code and just set an exception stack for the boot processor. Make the stack 64kB while we're here, to match the size of the regular stack. Signed-off-by: Nicholas Piggin --- lib/powerpc/setup.c | 16 1 file changed, 8 insertions(+), 8 deletions(-) Reviewed-by: Thomas Huth
[PATCH 4/5] tests/lcitool: Install mingw-w64-tools for the Windows cross-builds
Beside g++ we also need the mingw-w64-tools for properly building the code in qga/vss-win32/ , so let's install that package now, too. Signed-off-by: Thomas Huth --- tests/lcitool/projects/qemu-win-installer.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/tests/lcitool/projects/qemu-win-installer.yml b/tests/lcitool/projects/qemu-win-installer.yml index 86aa22297c..f3663ba030 100644 --- a/tests/lcitool/projects/qemu-win-installer.yml +++ b/tests/lcitool/projects/qemu-win-installer.yml @@ -2,3 +2,4 @@ --- packages: - g++ + - mingw-w64-tools -- 2.45.1
[PATCH 5/5] tests/docker/dockerfiles: Run lcitool-refresh to update Fedora and Alpine
Run "make lcitool-refresh" to update the dockerfiles with the recent changes to the lcitool. Signed-off-by: Thomas Huth --- tests/docker/dockerfiles/alpine.docker | 4 ++-- tests/docker/dockerfiles/fedora-win64-cross.docker | 6 -- tests/docker/dockerfiles/fedora.docker | 5 +++-- 3 files changed, 9 insertions(+), 6 deletions(-) diff --git a/tests/docker/dockerfiles/alpine.docker b/tests/docker/dockerfiles/alpine.docker index 554464f31e..b079a83fe2 100644 --- a/tests/docker/dockerfiles/alpine.docker +++ b/tests/docker/dockerfiles/alpine.docker @@ -1,10 +1,10 @@ # THIS FILE WAS AUTO-GENERATED # -# $ lcitool dockerfile --layers all alpine-318 qemu +# $ lcitool dockerfile --layers all alpine-319 qemu # # https://gitlab.com/libvirt/libvirt-ci -FROM docker.io/library/alpine:3.18 +FROM docker.io/library/alpine:3.19 RUN apk update && \ apk upgrade && \ diff --git a/tests/docker/dockerfiles/fedora-win64-cross.docker b/tests/docker/dockerfiles/fedora-win64-cross.docker index 0f78711876..007e1574bd 100644 --- a/tests/docker/dockerfiles/fedora-win64-cross.docker +++ b/tests/docker/dockerfiles/fedora-win64-cross.docker @@ -1,10 +1,10 @@ # THIS FILE WAS AUTO-GENERATED # -# $ lcitool dockerfile --layers all --cross-arch mingw64 fedora-38 qemu,qemu-win-installer +# $ lcitool dockerfile --layers all --cross-arch mingw64 fedora-40 qemu,qemu-win-installer # # https://gitlab.com/libvirt/libvirt-ci -FROM registry.fedoraproject.org/fedora:38 +FROM registry.fedoraproject.org/fedora:40 RUN dnf install -y nosync && \ printf '#!/bin/sh\n\ @@ -51,6 +51,7 @@ exec "$@"\n' > /usr/bin/nosync && \ python3-pip \ python3-sphinx \ python3-sphinx_rtd_theme \ + python3-zombie-imp \ sed \ socat \ sparse \ @@ -74,6 +75,7 @@ ENV NINJA "/usr/bin/ninja" ENV PYTHON "/usr/bin/python3" RUN nosync dnf install -y \ + mingw-w64-tools \ mingw32-nsis \ mingw64-SDL2 \ mingw64-SDL2_image \ diff --git a/tests/docker/dockerfiles/fedora.docker b/tests/docker/dockerfiles/fedora.docker index 098c894d10..44f239c088 100644 
--- a/tests/docker/dockerfiles/fedora.docker +++ b/tests/docker/dockerfiles/fedora.docker @@ -1,10 +1,10 @@ # THIS FILE WAS AUTO-GENERATED # -# $ lcitool dockerfile --layers all fedora-38 qemu +# $ lcitool dockerfile --layers all fedora-40 qemu # # https://gitlab.com/libvirt/libvirt-ci -FROM registry.fedoraproject.org/fedora:38 +FROM registry.fedoraproject.org/fedora:40 RUN dnf install -y nosync && \ printf '#!/bin/sh\n\ @@ -110,6 +110,7 @@ exec "$@"\n' > /usr/bin/nosync && \ python3-pip \ python3-sphinx \ python3-sphinx_rtd_theme \ + python3-zombie-imp \ rdma-core-devel \ sed \ snappy-devel \ -- 2.45.1
[PATCH 0/5] tests: Update Fedora and Alpine containers via lcitool
According to QEMU's support policy, Fedora 38 and Alpine 3.18 are not supported anymore, so let's bump the containers to a newer version. Both Alpine 3.20 and Fedora 40 ship with Python 3.12, which breaks our old version of Avocado since the "imp" module has been removed there. To work around this problem, Fedora fortunately still ships a separate "python3-zombie-imp" package that we can install. And with regards to Alpine, we only upgrade to 3.19, which is still using Python 3.11. Another problem are improvements in the undefined-behavior sanitizer of the latest versions of Clang that we use in the "clang-system" Fedora container: We now need to compile with -fno-sanitize=function there until all spots in the source code have been fixed (and that might take a while since many of the issues are not trivial). Thomas Huth (5): tests/lcitool: Delete obsolete centos-stream-8.yml file tests/lcitool: Bump to latest libvirt-ci and update Fedora and Alpine version .gitlab-ci.d/buildtest.yml: Use -fno-sanitize=function in the clang-system job tests/lcitool: Install mingw-w64-tools for the Windows cross-builds tests/docker/dockerfiles: Run lcitool-refresh to update Fedora and Alpine .gitlab-ci.d/buildtest.yml | 1 + tests/docker/dockerfiles/alpine.docker | 4 ++-- tests/docker/dockerfiles/fedora-win64-cross.docker | 6 -- tests/docker/dockerfiles/fedora.docker | 5 +++-- tests/lcitool/libvirt-ci | 2 +- tests/lcitool/projects/qemu-win-installer.yml | 1 + tests/lcitool/projects/qemu.yml| 1 + tests/lcitool/refresh | 6 +++--- tests/lcitool/targets/centos-stream-8.yml | 3 --- 9 files changed, 16 insertions(+), 13 deletions(-) delete mode 100644 tests/lcitool/targets/centos-stream-8.yml -- 2.45.1
[PATCH 2/5] tests/lcitool: Bump to latest libvirt-ci and update Fedora and Alpine version
Update to the latest version of lcitool. It dropped support for Fedora 38 and Alpine 3.18, so we have to update these to newer versions here, too. Python 3.12 dropped the "imp" module which we still need for running Avocado. Fortunately Fedora 40 still ships with a work-around package that we can use until somebody updates our Avocado to a newer version. Signed-off-by: Thomas Huth --- tests/lcitool/libvirt-ci| 2 +- tests/lcitool/projects/qemu.yml | 1 + tests/lcitool/refresh | 6 +++--- 3 files changed, 5 insertions(+), 4 deletions(-) diff --git a/tests/lcitool/libvirt-ci b/tests/lcitool/libvirt-ci index cec6703971..0e9490cebc 16 --- a/tests/lcitool/libvirt-ci +++ b/tests/lcitool/libvirt-ci @@ -1 +1 @@ -Subproject commit cec67039719becbfbab866f9c23574f389cf9559 +Subproject commit 0e9490cebc726ef772b6c9e27dac32e7ae99f9b2 diff --git a/tests/lcitool/projects/qemu.yml b/tests/lcitool/projects/qemu.yml index 7511ec7ccb..070d7f4706 100644 --- a/tests/lcitool/projects/qemu.yml +++ b/tests/lcitool/projects/qemu.yml @@ -89,6 +89,7 @@ packages: - pkg-config - pulseaudio - python3 + - python3-imp - python3-numpy - python3-opencv - python3-pillow diff --git a/tests/lcitool/refresh b/tests/lcitool/refresh index 789acefb75..9d8e9c6a4a 100755 --- a/tests/lcitool/refresh +++ b/tests/lcitool/refresh @@ -124,11 +124,11 @@ try: # # Standard native builds # -generate_dockerfile("alpine", "alpine-318") +generate_dockerfile("alpine", "alpine-319") generate_dockerfile("centos9", "centos-stream-9") generate_dockerfile("debian", "debian-12", trailer="".join(debian12_extras)) -generate_dockerfile("fedora", "fedora-38") +generate_dockerfile("fedora", "fedora-40") generate_dockerfile("opensuse-leap", "opensuse-leap-15") generate_dockerfile("ubuntu2204", "ubuntu-2204") @@ -191,7 +191,7 @@ try: trailer=cross_build("s390x-linux-gnu-", "s390x-softmmu,s390x-linux-user")) -generate_dockerfile("fedora-win64-cross", "fedora-38", +generate_dockerfile("fedora-win64-cross", "fedora-40", 
project='qemu,qemu-win-installer', cross="mingw64", trailer=cross_build("x86_64-w64-mingw32-", -- 2.45.1
[PATCH 1/5] tests/lcitool: Delete obsolete centos-stream-8.yml file
We missed deleting this file when removing support for CentOS 8. Since the current upstream version of lcitool has now removed support for CentOS 8, too, we have to remove the file before updating. Signed-off-by: Thomas Huth --- tests/lcitool/targets/centos-stream-8.yml | 3 --- 1 file changed, 3 deletions(-) delete mode 100644 tests/lcitool/targets/centos-stream-8.yml diff --git a/tests/lcitool/targets/centos-stream-8.yml b/tests/lcitool/targets/centos-stream-8.yml deleted file mode 100644 index 6b11160fd1..00 --- a/tests/lcitool/targets/centos-stream-8.yml +++ /dev/null @@ -1,3 +0,0 @@ -paths: - pip3: /usr/bin/pip3.8 - python: /usr/bin/python3.8 -- 2.45.1
[PATCH 3/5] .gitlab-ci.d/buildtest.yml: Use -fno-sanitize=function in the clang-system job
The latest version of Clang (version 18 from Fedora 40) now reports bad function pointer casts as undefined behavior. Unfortunately, we are still doing this in quite a lot of places in the QEMU code and some of them are not easy to fix. So for the time being, temporarily switch this off in the failing clang-system job until all spots in the QEMU sources have been tackled. Signed-off-by: Thomas Huth --- .gitlab-ci.d/buildtest.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/.gitlab-ci.d/buildtest.yml b/.gitlab-ci.d/buildtest.yml index 91c57efded..0eec570310 100644 --- a/.gitlab-ci.d/buildtest.yml +++ b/.gitlab-ci.d/buildtest.yml @@ -432,6 +432,7 @@ clang-system: IMAGE: fedora CONFIGURE_ARGS: --cc=clang --cxx=clang++ --extra-cflags=-fsanitize=undefined --extra-cflags=-fno-sanitize-recover=undefined + --extra-cflags=-fno-sanitize=function TARGETS: alpha-softmmu arm-softmmu m68k-softmmu mips64-softmmu s390x-softmmu MAKE_CHECK_ARGS: check-qtest check-tcg -- 2.45.1
Re: [PATCH 0/4] hw/s390x: Alias @dump-skeys -> @dump-s390-skey and deprecate
On 31/05/2024 16.02, Dr. David Alan Gilbert wrote: * Thomas Huth (th...@redhat.com) wrote: On 30/05/2024 09.45, Philippe Mathieu-Daudé wrote: We are trying to unify all qemu-system-FOO to a single binary. In order to do that we need to remove QAPI target specific code. @dump-skeys is only available on qemu-system-s390x. This series rename it as @dump-s390-skey, making it available on other binaries. We take care of backward compatibility via deprecation. Philippe Mathieu-Daudé (4): hw/s390x: Introduce the @dump-s390-skeys QMP command hw/s390x: Introduce the 'dump_s390_skeys' HMP command hw/s390x: Deprecate the HMP 'dump_skeys' command hw/s390x: Deprecate the QMP @dump-skeys command Why do we have to rename the command? Just for the sake of it? I think renaming HMP commands is maybe ok, but breaking the API in QMP is something you should consider twice. And even if we decide to rename ... maybe we should discuss whether it makes sense to come up with a generic command instead: As far as I know, ARM also has something similar, called MTE. Maybe we also want to dump MTE keys one day? So the new command should maybe be called "dump-memory-keys" instead? I think there are at least two different concepts; but I agree it would be nice to keep a single command for matching concepts across different architectures; I can't say I know the details of any, but: a) Page table things - I think x86 PKRU/PKEY (???) is a page table thing where pages marked a special way are associated with keys. That sounds similar to what the skeys are??? Sounds a little bit similar, but s390 storage keys are independent from page tables. It's rather that each page (4096 bytes) of RAM has an additional 7-bit value that contains the storage key and some additional bits. It's also usable when page tables are still disabled. > I'm not sure the two fit in the same command. Does it make sense to dump all the MTE or x86 keys all at once? If so, we could maybe come up with an unified command. 
Otherwise it might not make sense, indeed. Thomas