Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Fri, Nov 4, 2022 at 02:02 Michael S. Tsirkin wrote:
> On Thu, Nov 03, 2022 at 11:14:21PM +0530, Ani Sinha wrote:
> >
> > On Thu, Nov 3, 2022 at 23:11 Daniel P. Berrangé wrote:
> > > On Thu, Nov 03, 2022 at 10:26:26PM +0530, Ani Sinha wrote:
> > > > On Thu, Nov 3, 2022 at 10:18 PM Ani Sinha wrote:
> > > > >
> > > > > On Thu, Nov 3, 2022 at 10:17 PM Ani Sinha wrote:
> > > > > >
> > > > > > On Thu, Nov 3, 2022 at 9:12 PM Ani Sinha wrote:
> > > > > > >
> > > > > > > > To pull this image:
> > > > > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest
> > > > > > >
> > > > > > > Actually the URL is:
> > > > > > >
> > > > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest
> > > > > > >
> > > > > > > > (or to be sure to pull the very same:)
> > > > > > > > $ docker pull
> > > > > > > > registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
> > > > > > >
> > > > > > > Same here,
> > > > > > >
> > > > > > > registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
> > > > > >
> > > > > > I pulled this container,
> > > >
> > > > This is fc35, the same mst is using:
> > > >
> > > > # cat /etc/fedora-release
> > > > Fedora release 35 (Thirty Five)
> > > >
> > > > Hmm. Something else is going on in the gitlab specific environment.
> > >
> > > Or it is a non-deterministic race condition and the chance of hitting
> > > it varies based on your hardware and/or CPU load.
> >
> > Can we kick off the same CI job again? Does it pass this time?
>
> It's completely deterministic on gitlab. Stefan also reproduced on
> his F36 box.

Then this means it's not enough to simply use the same container as the
CI and the same configure line to reproduce all the issues.
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, Nov 03, 2022 at 09:29:56AM -0400, Stefan Hajnoczi wrote: > On Thu, 3 Nov 2022 at 08:14, Michael S. Tsirkin wrote: > > On Wed, Nov 02, 2022 at 03:47:43PM -0400, Stefan Hajnoczi wrote: > > > On Wed, Nov 02, 2022 at 12:02:14PM -0400, Michael S. Tsirkin wrote: > > > > Changes from v1: > > > > > > > > Applied and squashed fixes by Igor, Lei He, Hesham Almatary for > > > > bugs that tripped up the pipeline. > > > > Updated expected files for core-count test. > > > > > > Several "make check" CI failures have occurred. They look like they are > > > related. Here is one (see the URLs at the bottom of this email for more > > > details): > > > > > > 17/106 ERROR:../tests/qtest/qos-test.c:191:subprocess_run_one_test: child > > > process > > > (/arm/virt/virtio-mmio/virtio-bus/virtio-net-device/virtio-net/virtio-net-tests/vhost-user/flags-mismatch/subprocess > > > [8609]) failed unexpectedly ERROR > > > 17/106 qemu:qtest+qtest-arm / qtest-arm/qos-test > > > ERROR 31.44s killed by signal 6 SIGABRT > > > >>> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh > > > >>> MALLOC_PERTURB_=49 QTEST_QEMU_IMG=./qemu-img > > > >>> QTEST_QEMU_BINARY=./qemu-system-arm > > > >>> QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon > > > >>> /builds/qemu-project/qemu/build/tests/qtest/qos-test --tap -k > > > ― ✀ > > > ― > > > stderr: > > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 20. > > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument > > > (22) > > > qemu-system-arm: Failed to set msg fds. > > > qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument > > > (22) > > > qemu-system-arm: -chardev > > > socket,id=chr-reconnect,path=/tmp/vhost-test-6PT2U1/reconnect.sock,server=on: > > > info: QEMU waiting for connection on: > > > disconnected:unix:/tmp/vhost-test-6PT2U1/reconnect.sock,server=on > > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 20. 
> > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument > > > (22) > > > qemu-system-arm: Failed to set msg fds. > > > qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument > > > (22) > > > qemu-system-arm: -chardev > > > socket,id=chr-connect-fail,path=/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on: > > > info: QEMU waiting for connection on: > > > disconnected:unix:/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on > > > qemu-system-arm: -netdev > > > vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: Failed to read > > > msg header. Read 0 instead of 12. Original request 1. > > > qemu-system-arm: -netdev > > > vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: > > > vhost_backend_init failed: Protocol error > > > qemu-system-arm: -netdev > > > vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: failed to init > > > vhost_net for queue 0 > > > qemu-system-arm: -netdev > > > vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: info: QEMU > > > waiting for connection on: > > > disconnected:unix:/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on > > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 20. > > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument > > > (22) > > > qemu-system-arm: Failed to set msg fds. > > > qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument > > > (22) > > > qemu-system-arm: -chardev > > > socket,id=chr-flags-mismatch,path=/tmp/vhost-test-94UYU1/flags-mismatch.sock,server=on: > > > info: QEMU waiting for connection on: > > > disconnected:unix:/tmp/vhost-test-94UYU1/flags-mismatch.sock,server=on > > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 52. > > > qemu-system-arm: vhost_set_mem_table failed: Invalid argument (22) > > > qemu-system-arm: Failed to set msg fds. 
> > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument > > > (22) > > > UndefinedBehaviorSanitizer:DEADLYSIGNAL > > > ==8618==ERROR: UndefinedBehaviorSanitizer: SEGV on unknown address > > > 0x (pc 0x55e34deccab0 bp 0x sp 0x7ffc94894710 > > > T8618) > > > ==8618==The signal is caused by a READ memory access. > > > ==8618==Hint: address points to the zero page. > > > #0 0x55e34deccab0 in ldl_he_p > > > /builds/qemu-project/qemu/include/qemu/bswap.h:301:5 > > > #1 0x55e34deccab0 in ldn_he_p > > > /builds/qemu-project/qemu/include/qemu/bswap.h:440:1 > > > #2 0x55e34deccab0 in flatview_write_continue > > > /builds/qemu-project/qemu/build/../softmmu/physmem.c:2824:19 > > > #3 0x55e34dec9f21 in flatview_write > > > /builds/qemu-project/qemu/build/../softmmu/physmem.c:2867:12 > > > #4 0x55e34dec9f21 in address_space_write > > > /builds/qemu-project/qemu/build/../softmmu/physmem.c:2963:18 > > > #5 0x55e34decace7 in address_space_unmap > > > /builds/qemu-pro
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, Nov 03, 2022 at 11:14:21PM +0530, Ani Sinha wrote:
>
> On Thu, Nov 3, 2022 at 23:11 Daniel P. Berrangé wrote:
> > On Thu, Nov 03, 2022 at 10:26:26PM +0530, Ani Sinha wrote:
> > > On Thu, Nov 3, 2022 at 10:18 PM Ani Sinha wrote:
> > > >
> > > > On Thu, Nov 3, 2022 at 10:17 PM Ani Sinha wrote:
> > > > >
> > > > > On Thu, Nov 3, 2022 at 9:12 PM Ani Sinha wrote:
> > > > > >
> > > > > > > To pull this image:
> > > > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest
> > > > > >
> > > > > > Actually the URL is:
> > > > > >
> > > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest
> > > > > >
> > > > > > > (or to be sure to pull the very same:)
> > > > > > > $ docker pull
> > > > > > > registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
> > > > > >
> > > > > > Same here,
> > > > > >
> > > > > > registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
> > > > >
> > > > > I pulled this container,
> > >
> > > This is fc35, the same mst is using:
> > >
> > > # cat /etc/fedora-release
> > > Fedora release 35 (Thirty Five)
> > >
> > > Hmm. Something else is going on in the gitlab specific environment.
> >
> > Or it is a non-deterministic race condition and the chance of hitting
> > it varies based on your hardware and/or CPU load.
>
> Can we kick off the same CI job again? Does it pass this time?

It's completely deterministic on gitlab. Stefan also reproduced on
his F36 box.

--
MST
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, Nov 3, 2022 at 23:11 Daniel P. Berrangé wrote:
> On Thu, Nov 03, 2022 at 10:26:26PM +0530, Ani Sinha wrote:
> > On Thu, Nov 3, 2022 at 10:18 PM Ani Sinha wrote:
> > >
> > > On Thu, Nov 3, 2022 at 10:17 PM Ani Sinha wrote:
> > > >
> > > > On Thu, Nov 3, 2022 at 9:12 PM Ani Sinha wrote:
> > > > >
> > > > > > To pull this image:
> > > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest
> > > > >
> > > > > Actually the URL is:
> > > > >
> > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest
> > > > >
> > > > > > (or to be sure to pull the very same:)
> > > > > > $ docker pull
> > > > > > registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
> > > > >
> > > > > Same here,
> > > > >
> > > > > registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
> > > >
> > > > I pulled this container,
> >
> > This is fc35, the same mst is using:
> >
> > # cat /etc/fedora-release
> > Fedora release 35 (Thirty Five)
> >
> > Hmm. Something else is going on in the gitlab specific environment.
>
> Or it is a non-deterministic race condition and the chance of hitting
> it varies based on your hardware and/or CPU load.

Can we kick off the same CI job again? Does it pass this time?
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, Nov 03, 2022 at 10:26:26PM +0530, Ani Sinha wrote:
> On Thu, Nov 3, 2022 at 10:18 PM Ani Sinha wrote:
> >
> > On Thu, Nov 3, 2022 at 10:17 PM Ani Sinha wrote:
> > >
> > > On Thu, Nov 3, 2022 at 9:12 PM Ani Sinha wrote:
> > > >
> > > > > To pull this image:
> > > > > $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest
> > > >
> > > > Actually the URL is:
> > > >
> > > > $ docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest
> > > >
> > > > > (or to be sure to pull the very same:)
> > > > > $ docker pull
> > > > > registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
> > > >
> > > > Same here,
> > > >
> > > > registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
> > >
> > > I pulled this container,
>
> This is fc35, the same mst is using:
>
> # cat /etc/fedora-release
> Fedora release 35 (Thirty Five)
>
> Hmm. Something else is going on in the gitlab specific environment.

Or it is a non-deterministic race condition and the chance of hitting
it varies based on your hardware and/or CPU load.

With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, Nov 3, 2022, 12:49 Daniel P. Berrangé wrote:
> On Thu, Nov 03, 2022 at 04:47:03PM +, Peter Maydell wrote:
> > On Thu, 3 Nov 2022 at 16:38, Daniel P. Berrangé wrote:
> > > On Thu, Nov 03, 2022 at 12:25:49PM -0400, Stefan Hajnoczi wrote:
> > > > 2. The GitLab output does not contain the full command lines because
> > > > environment variables are hidden (e.g. $QEMU_CONFIGURE_OPTS).
> > >
> > > Note, $QEMU_CONFIGURE_OPTS is set by the container image itself, so
> > > there's no need to know that one.
> > >
> > > $CONFIGURE_ARGS meanwhile is set in the build-X template and
> > > easy to find.
> >
> > Not all that easy if you're looking at some specific gitlab
> > job output... it would be helpful if the scripts
> > echoed the exact configure command line before running it,
> > then you wouldn't need to go ferreting around in the gitlab
> > config files and hoping you've found the right bit.
>
> That's easy enough to do, I'll send a patch.

Awesome, thank you!

Stefan
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, Nov 3, 2022 at 10:18 PM Ani Sinha wrote:
>
> On Thu, Nov 3, 2022 at 10:17 PM Ani Sinha wrote:
> >
> > On Thu, Nov 3, 2022 at 9:12 PM Ani Sinha wrote:
> > >
> > > > To pull this image:
> > > > $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest
> > >
> > > Actually the URL is:
> > >
> > > $ docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest
> > >
> > > > (or to be sure to pull the very same:)
> > > > $ docker pull
> > > > registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
> > >
> > > Same here,
> > >
> > > registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
> >
> > I pulled this container,

This is fc35, the same one mst is using:

# cat /etc/fedora-release
Fedora release 35 (Thirty Five)

Hmm. Something else is going on in the gitlab specific environment.
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, Nov 03, 2022 at 04:47:03PM +, Peter Maydell wrote:
> On Thu, 3 Nov 2022 at 16:38, Daniel P. Berrangé wrote:
> > On Thu, Nov 03, 2022 at 12:25:49PM -0400, Stefan Hajnoczi wrote:
> > > 2. The GitLab output does not contain the full command lines because
> > > environment variables are hidden (e.g. $QEMU_CONFIGURE_OPTS).
> >
> > Note, $QEMU_CONFIGURE_OPTS is set by the container image itself, so
> > there's no need to know that one.
> >
> > $CONFIGURE_ARGS meanwhile is set in the build-X template and
> > easy to find.
>
> Not all that easy if you're looking at some specific gitlab
> job output... it would be helpful if the scripts
> echoed the exact configure command line before running it,
> then you wouldn't need to go ferreting around in the gitlab
> config files and hoping you've found the right bit.

That's easy enough to do, I'll send a patch.

With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, Nov 3, 2022 at 10:17 PM Ani Sinha wrote:
>
> On Thu, Nov 3, 2022 at 9:12 PM Ani Sinha wrote:
> >
> > > To pull this image:
> > > $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest
> >
> > Actually the URL is:
> >
> > $ docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest
> >
> > > (or to be sure to pull the very same:)
> > > $ docker pull
> > > registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
> >
> > Same here,
> >
> > registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
>
> I pulled this container, used the configure line Stefan mentioned
> earlier in the thread and re-ran make check-qtest and still could not
> repro the crash. All tests pass.

[root@6089e5581e63 build]# git status
On branch master
Your branch is ahead of 'origin/master' by 82 commits.
  (use "git push" to publish your local commits)
nothing to commit, working tree clean
[root@6089e5581e63 build]# git log --oneline -1
77dd1e2b09 (HEAD -> master, tag: for_upstream, tag: for_autotest_next, tag: for_autotest, mst/pci, mst/next) intel-iommu: PASID support
[root@6089e5581e63 build]# git log --oneline -5
77dd1e2b09 (HEAD -> master, tag: for_upstream, tag: for_autotest_next, tag: for_autotest, mst/pci, mst/next) intel-iommu: PASID support
a0f831c879 intel-iommu: convert VTD_PE_GET_FPD_ERR() to be a function
840d70c49b intel-iommu: drop VTDBus
c89dbf5551 intel-iommu: don't warn guest errors when getting rid2pasid entry
d8ebe4ce22 vfio: move implement of vfio_get_xlat_addr() to memory.c
[root@6089e5581e63 build]#
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, 3 Nov 2022 at 16:38, Daniel P. Berrangé wrote:
> On Thu, Nov 03, 2022 at 12:25:49PM -0400, Stefan Hajnoczi wrote:
> > 2. The GitLab output does not contain the full command lines because
> > environment variables are hidden (e.g. $QEMU_CONFIGURE_OPTS).
>
> Note, $QEMU_CONFIGURE_OPTS is set by the container image itself, so
> there's no need to know that one.
>
> $CONFIGURE_ARGS meanwhile is set in the build-X template and
> easy to find.

Not all that easy if you're looking at some specific gitlab
job output... it would be helpful if the scripts
echoed the exact configure command line before running it,
then you wouldn't need to go ferreting around in the gitlab
config files and hoping you've found the right bit.

-- PMM
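[Editorial note: a minimal sketch of what Peter is asking for. The wrapper function and the `EXEC:` prefix are made up for illustration and are not the patch Daniel later sent; the idea is just that the CI script prints the exact configure invocation before executing it.]

```shell
# Hypothetical wrapper: print the exact configure invocation before
# running it, so the full command line can be copied verbatim out of a
# CI job's log instead of reassembled from the YAML config.
run_configure() {
    echo "EXEC: ../configure $*"
    # ../configure "$@"    # the real call, omitted in this sketch
}

# In CI, $CONFIGURE_ARGS would come from the build-X job template.
run_configure --enable-werror $CONFIGURE_ARGS
```

With that in place, the log line starting `EXEC:` is the command to paste into a local shell.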
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, Nov 3, 2022 at 9:12 PM Ani Sinha wrote:
>
> > To pull this image:
> > $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest
>
> Actually the URL is:
>
> $ docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest
>
> > (or to be sure to pull the very same:)
> > $ docker pull
> > registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26
>
> Same here,
>
> registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26

I pulled this container, used the configure line Stefan mentioned
earlier in the thread and re-ran make check-qtest and still could not
repro the crash. All tests pass.

/usr/bin/meson test --no-rebuild -t 0 --num-processes 1 --print-errorlogs --suite qtest

 1/31 qemu:qtest+qtest-arm / qtest-arm/qom-test                      OK  293.59s   85 subtests passed
 2/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_pwm-test              OK   96.69s   24 subtests passed
 3/31 qemu:qtest+qtest-arm / qtest-arm/test-hmp                      OK   56.11s   86 subtests passed
 4/31 qemu:qtest+qtest-arm / qtest-arm/boot-serial-test              OK    0.45s    3 subtests passed
 5/31 qemu:qtest+qtest-arm / qtest-arm/qos-test                      OK   20.50s  115 subtests passed
 6/31 qemu:qtest+qtest-arm / qtest-arm/sse-timer-test                OK    0.29s    3 subtests passed
 7/31 qemu:qtest+qtest-arm / qtest-arm/cmsdk-apb-dualtimer-test      OK    0.20s    2 subtests passed
 8/31 qemu:qtest+qtest-arm / qtest-arm/cmsdk-apb-timer-test          OK    0.22s    1 subtests passed
 9/31 qemu:qtest+qtest-arm / qtest-arm/cmsdk-apb-watchdog-test       OK    0.25s    2 subtests passed
10/31 qemu:qtest+qtest-arm / qtest-arm/pflash-cfi02-test             OK    4.31s    4 subtests passed
11/31 qemu:qtest+qtest-arm / qtest-arm/aspeed_hace-test              OK   22.36s   16 subtests passed
12/31 qemu:qtest+qtest-arm / qtest-arm/aspeed_smc-test               OK  144.47s   10 subtests passed
13/31 qemu:qtest+qtest-arm / qtest-arm/aspeed_gpio-test              OK    0.21s    2 subtests passed
14/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_adc-test              OK    1.88s    6 subtests passed
15/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_gpio-test             OK    0.24s   18 subtests passed
16/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_rng-test              OK    0.26s    2 subtests passed
17/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_sdhci-test            OK    0.97s    3 subtests passed
18/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_smbus-test            OK   11.23s   40 subtests passed
19/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_timer-test            OK    1.91s  180 subtests passed
20/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_watchdog_timer-test   OK   20.69s   15 subtests passed
21/31 qemu:qtest+qtest-arm / qtest-arm/npcm7xx_emc-test              OK    0.90s    6 subtests passed
22/31 qemu:qtest+qtest-arm / qtest-arm/arm-cpu-features              OK    0.15s    1 subtests passed
23/31 qemu:qtest+qtest-arm / qtest-arm/microbit-test                 OK    4.46s    5 subtests passed
24/31 qemu:qtest+qtest-arm / qtest-arm/test-arm-mptimer              OK    0.20s   61 subtests passed
25/31 qemu:qtest+qtest-arm / qtest-arm/hexloader-test                OK    0.14s    1 subtests passed
26/31 qemu:qtest+qtest-arm / qtest-arm/cdrom-test                    OK    1.06s    9 subtests passed
27/31 qemu:qtest+qtest-arm / qtest-arm/device-introspect-test        OK    3.18s    6 subtests passed
28/31 qemu:qtest+qtest-arm / qtest-arm/machine-none-test             OK    0.09s    1 subtests passed
29/31 qemu:qtest+qtest-arm / qtest-arm/qmp-test                      OK    0.34s    4 subtests passed
30/31 qemu:qtest+qtest-arm / qtest-arm/qmp-cmd-test                  OK    7.80s   62 subtests passed
31/31 qemu:qtest+qtest-arm / qtest-arm/readconfig-test               OK    0.22s    2 subtests passed

Ok:                 31
Expected Fail:      0
Fail:               0
Unexpected Pass:    0
Skipped:            0
Timeout:            0

Full log written to /qemu/qemu/build/meson-logs/testlog.txt
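[Editorial note: the failing job's log quoted elsewhere in this thread shows the environment the qtest harness sets up, so the single failing suite can also be rerun directly instead of through meson. The sketch below only assembles and prints the command, since running it needs a finished QEMU build tree; the env-var names are taken from the CI log.]

```shell
# Untested sketch: rerun just the arm qos-test suite the way the CI
# harness invokes it (env vars taken from the failing job's log).
# The command is assembled and printed, not executed.
QOS_TEST_CMD='cd build && QTEST_QEMU_BINARY=./qemu-system-arm \
QTEST_QEMU_IMG=./qemu-img \
QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon \
./tests/qtest/qos-test --tap -k'
echo "$QOS_TEST_CMD"
```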
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, Nov 03, 2022 at 04:38:35PM +, Daniel P. Berrangé wrote:
> On Thu, Nov 03, 2022 at 12:25:49PM -0400, Stefan Hajnoczi wrote:
> > On Thu, 3 Nov 2022 at 11:59, Daniel P. Berrangé wrote:
> > >
> > > On Thu, Nov 03, 2022 at 11:49:21AM -0400, Stefan Hajnoczi wrote:
> > > > gitlab-runner can run locally with minimal setup:
> > > > https://bagong.gitlab.io/posts/run-gitlab-ci-locally/
> > > >
> > > > I haven't tried it yet, but that seems like the most reliable (and
> > > > easiest) way to reproduce the CI environment.
> > >
> > > IMHO that is total overkill.
> > >
> > > Just running the containers directly is what I'd recommend for any
> > > attempt to reproduce problems. There isn't actually anything gitlab
> > > specific in our CI environment, gitlab merely provides the harness
> > > for invoking jobs. This is good as it means we can move our CI to
> > > another systems if we find Gitlab no longer meets our needs, and
> > > our actual build env won't change, as it'll be the same containers
> > > still.
> > >
> > > I wouldn't recommend QEMU contributors to tie their local workflow
> > > into the use of gitlab-runner, when they can avoid that dependency.
> >
> > If there was a complete list of commands to run I would agree with
> > you. Unfortunately there is no easy way to run the container locally:
> > 1. The container image path is hidden in the GitLab output and easy to
> > get wrong (see Ani's reply).
>
> That is bizarre
>
>    Pulling docker image
>    registry.gitlab.com/qemu-project/[MASKED]/fedora:latest ...
>
> I've not seen any other gitlab project where the paths are 'MASKED' in
> this way. Makes me wonder if there's some setting in the QEMU gitlab
> project causing this, as its certainly not expected behaviour.

Spoke with Peter on IRC, and we had a variable set CIRRUS_GITHUB_REPO
with value 'qemu/qemu' that was marked as 'masked'. This caused gitlab
to scrub that string from the build logs.

We've unmasked that now, so the container URLs should be intact from
the next CI pipeline onwards. Masking is only needed for security
sensitive variables like tokens, passwords, etc.

With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, Nov 03, 2022 at 12:25:49PM -0400, Stefan Hajnoczi wrote:
> On Thu, 3 Nov 2022 at 11:59, Daniel P. Berrangé wrote:
> >
> > On Thu, Nov 03, 2022 at 11:49:21AM -0400, Stefan Hajnoczi wrote:
> > > gitlab-runner can run locally with minimal setup:
> > > https://bagong.gitlab.io/posts/run-gitlab-ci-locally/
> > >
> > > I haven't tried it yet, but that seems like the most reliable (and
> > > easiest) way to reproduce the CI environment.
> >
> > IMHO that is total overkill.
> >
> > Just running the containers directly is what I'd recommend for any
> > attempt to reproduce problems. There isn't actually anything gitlab
> > specific in our CI environment, gitlab merely provides the harness
> > for invoking jobs. This is good as it means we can move our CI to
> > another systems if we find Gitlab no longer meets our needs, and
> > our actual build env won't change, as it'll be the same containers
> > still.
> >
> > I wouldn't recommend QEMU contributors to tie their local workflow
> > into the use of gitlab-runner, when they can avoid that dependency.
>
> If there was a complete list of commands to run I would agree with
> you. Unfortunately there is no easy way to run the container locally:
> 1. The container image path is hidden in the GitLab output and easy to
> get wrong (see Ani's reply).

That is bizarre:

   Pulling docker image
   registry.gitlab.com/qemu-project/[MASKED]/fedora:latest ...

I've not seen any other gitlab project where the paths are 'MASKED' in
this way. It makes me wonder if there's some setting in the QEMU gitlab
project causing this, as it's certainly not expected behaviour.

Grabbing the container URL from line 8 of the build log is my standard
go-to approach.

> 2. The GitLab output does not contain the full command lines because
> environment variables are hidden (e.g. $QEMU_CONFIGURE_OPTS).

Note, $QEMU_CONFIGURE_OPTS is set by the container image itself, so
there's no need to know that one.

$CONFIGURE_ARGS meanwhile is set in the build-X template and
easy to find.

> 3. The .gitlab-ci.d/ is non-trivial (uses YAML templates and who knows
> what else GitLab CI does when running the YAML).

You shouldn't need to understand that to reproduce problems. At most
you just need to find the $CONFIGURE_ARGS and $MAKE_CHECK_ARGS settings
for the build-XXX job at hand.

> When doing what you suggested, how easy is it and how confident are
> you that you're reproducing the same environment? Unless I missed
> something it doesn't work very well.

Running the containers directly in docker/podman is how I reproduce
pretty much everything locally, and it's been pretty straightforward IME.

With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, 3 Nov 2022 at 11:59, Michael S. Tsirkin wrote:
>
> On Thu, Nov 03, 2022 at 11:49:21AM -0400, Stefan Hajnoczi wrote:
> > gitlab-runner can run locally with minimal setup:
> > https://bagong.gitlab.io/posts/run-gitlab-ci-locally/
> >
> > I haven't tried it yet, but that seems like the most reliable (and
> > easiest) way to reproduce the CI environment.
> >
> > Stefan
>
> How does one pass in variables do you know? Environment?

Haven't tried it yet, sorry.

Stefan
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, 3 Nov 2022 at 11:59, Daniel P. Berrangé wrote:
>
> On Thu, Nov 03, 2022 at 11:49:21AM -0400, Stefan Hajnoczi wrote:
> > gitlab-runner can run locally with minimal setup:
> > https://bagong.gitlab.io/posts/run-gitlab-ci-locally/
> >
> > I haven't tried it yet, but that seems like the most reliable (and
> > easiest) way to reproduce the CI environment.
>
> IMHO that is total overkill.
>
> Just running the containers directly is what I'd recommend for any
> attempt to reproduce problems. There isn't actually anything gitlab
> specific in our CI environment, gitlab merely provides the harness
> for invoking jobs. This is good as it means we can move our CI to
> another systems if we find Gitlab no longer meets our needs, and
> our actual build env won't change, as it'll be the same containers
> still.
>
> I wouldn't recommend QEMU contributors to tie their local workflow
> into the use of gitlab-runner, when they can avoid that dependency.

If there was a complete list of commands to run I would agree with
you. Unfortunately there is no easy way to run the container locally:

1. The container image path is hidden in the GitLab output and easy to
   get wrong (see Ani's reply).
2. The GitLab output does not contain the full command lines because
   environment variables are hidden (e.g. $QEMU_CONFIGURE_OPTS).
3. The .gitlab-ci.d/ is non-trivial (uses YAML templates and who knows
   what else GitLab CI does when running the YAML).

When doing what you suggested, how easy is it and how confident are
you that you're reproducing the same environment? Unless I missed
something it doesn't work very well.

Stefan
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, Nov 03, 2022 at 11:49:21AM -0400, Stefan Hajnoczi wrote:
> gitlab-runner can run locally with minimal setup:
> https://bagong.gitlab.io/posts/run-gitlab-ci-locally/
>
> I haven't tried it yet, but that seems like the most reliable (and
> easiest) way to reproduce the CI environment.

IMHO that is total overkill.

Just running the containers directly is what I'd recommend for any
attempt to reproduce problems. There isn't actually anything gitlab
specific in our CI environment; gitlab merely provides the harness
for invoking jobs. This is good as it means we can move our CI to
another system if we find Gitlab no longer meets our needs, and
our actual build env won't change, as it'll be the same containers
still.

I wouldn't recommend QEMU contributors tie their local workflow
to the use of gitlab-runner when they can avoid that dependency.

With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
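[Editorial note: a rough sketch of the "run the container directly" workflow Daniel describes. The bind-mount path, out-of-tree build layout, and make targets are assumptions, and the docker command is only assembled and printed, since executing it needs the image and a QEMU checkout. $QEMU_CONFIGURE_OPTS is baked into the image; $CONFIGURE_ARGS would be copied from the job definition in .gitlab-ci.d/.]

```shell
# Sketch: reproduce a CI build by running the CI container directly.
# The image tag is the one discussed in this thread.
IMG="registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest"
# Build steps run inside the container; $QEMU_CONFIGURE_OPTS and
# $CONFIGURE_ARGS are left unexpanded on purpose (single quotes).
BUILD_CMD='mkdir -p build && cd build && ../configure $QEMU_CONFIGURE_OPTS $CONFIGURE_ARGS && make -j$(nproc) && make check'
echo "docker run -it -v \$PWD:/src -w /src $IMG sh -c '$BUILD_CMD'"
```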
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, Nov 03, 2022 at 11:49:21AM -0400, Stefan Hajnoczi wrote:
> gitlab-runner can run locally with minimal setup:
> https://bagong.gitlab.io/posts/run-gitlab-ci-locally/
>
> I haven't tried it yet, but that seems like the most reliable (and
> easiest) way to reproduce the CI environment.
>
> Stefan

How does one pass in variables, do you know? Environment?

--
MST
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
gitlab-runner can run locally with minimal setup:
https://bagong.gitlab.io/posts/run-gitlab-ci-locally/

I haven't tried it yet, but that seems like the most reliable (and
easiest) way to reproduce the CI environment.

Stefan
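[Editorial note: an unverified sketch of what the linked post boils down to, gitlab-runner's "exec" mode. The job name below is a guess, the variable values are placeholders, and exec mode has known limitations with YAML includes/templates, so it may not cope with QEMU's split .gitlab-ci.d/ config. The command is only assembled and printed.]

```shell
# Sketch: run one job from a .gitlab-ci.yml locally via gitlab-runner;
# --env is how variables are passed in. "build-system-fedora" and the
# variable values are hypothetical.
LOCAL_CI_CMD='gitlab-runner exec docker build-system-fedora --env "CONFIGURE_ARGS=--enable-debug" --env "MAKE_CHECK_ARGS=check"'
echo "$LOCAL_CI_CMD"
```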
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
> To pull this image:
> $ docker pull registry.gitlab.com/qemu-project/qemu/fedora:latest

Actually the URL is:

$ docker pull registry.gitlab.com/qemu-project/qemu/qemu/fedora:latest

> (or to be sure to pull the very same:)
> $ docker pull
> registry.gitlab.com/qemu-project/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26

Same here,

registry.gitlab.com/qemu-project/qemu/qemu/fedora:d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26

See https://gitlab.com/qemu-project/qemu/container_registry/1215910
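[Editorial note: the corrected references from this message, assembled once in a small sketch so the doubled "qemu/qemu" path component, the part that is easy to get wrong, is spelled out in a single place. The registry path and digest tag are taken verbatim from the thread; the pull itself is left commented out.]

```shell
# Build the two working image references: the moving "latest" tag and
# the digest-pinned tag that matches the CI run exactly.
REGISTRY="registry.gitlab.com/qemu-project/qemu/qemu"
IMAGE="fedora"
DIGEST_TAG="d6d20c1c6aede3a652eb01b781530cc10392de2764503c84f9bf4eb1d7a89d26"

LATEST_REF="$REGISTRY/$IMAGE:latest"
PINNED_REF="$REGISTRY/$IMAGE:$DIGEST_TAG"

echo "$LATEST_REF"
echo "$PINNED_REF"

# docker pull "$LATEST_REF"    # or "$PINNED_REF" for the exact CI image
```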
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, Nov 03, 2022 at 09:29:56AM -0400, Stefan Hajnoczi wrote:
> On Thu, 3 Nov 2022 at 08:14, Michael S. Tsirkin wrote:
> > On Wed, Nov 02, 2022 at 03:47:43PM -0400, Stefan Hajnoczi wrote:
> > > On Wed, Nov 02, 2022 at 12:02:14PM -0400, Michael S. Tsirkin wrote:
> > > > Changes from v1:
> > > >
> > > > Applied and squashed fixes by Igor, Lei He, Hesham Almatary for
> > > > bugs that tripped up the pipeline.
> > > > Updated expected files for core-count test.
> > >
> > > Several "make check" CI failures have occurred. They look like they are
> > > related. Here is one (see the URLs at the bottom of this email for more
> > > details):
> > >
> > > 17/106 ERROR:../tests/qtest/qos-test.c:191:subprocess_run_one_test: child process (/arm/virt/virtio-mmio/virtio-bus/virtio-net-device/virtio-net/virtio-net-tests/vhost-user/flags-mismatch/subprocess [8609]) failed unexpectedly ERROR
> > > 17/106 qemu:qtest+qtest-arm / qtest-arm/qos-test ERROR 31.44s killed by signal 6 SIGABRT
> > > >>> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh MALLOC_PERTURB_=49 QTEST_QEMU_IMG=./qemu-img QTEST_QEMU_BINARY=./qemu-system-arm QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon /builds/qemu-project/qemu/build/tests/qtest/qos-test --tap -k
> > > ― ✀ ―
> > > stderr:
> > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 20.
> > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
> > > qemu-system-arm: Failed to set msg fds.
> > > qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22)
> > > qemu-system-arm: -chardev socket,id=chr-reconnect,path=/tmp/vhost-test-6PT2U1/reconnect.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-6PT2U1/reconnect.sock,server=on
> > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 20.
> > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
> > > qemu-system-arm: Failed to set msg fds.
> > > qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22)
> > > qemu-system-arm: -chardev socket,id=chr-connect-fail,path=/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on
> > > qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: Failed to read msg header. Read 0 instead of 12. Original request 1.
> > > qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: vhost_backend_init failed: Protocol error
> > > qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: failed to init vhost_net for queue 0
> > > qemu-system-arm: -netdev vhost-user,id=hs0,chardev=chr-connect-fail,vhostforce=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-H8G7U1/connect-fail.sock,server=on
> > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 20.
> > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
> > > qemu-system-arm: Failed to set msg fds.
> > > qemu-system-arm: vhost VQ 1 ring restore failed: -22: Invalid argument (22)
> > > qemu-system-arm: -chardev socket,id=chr-flags-mismatch,path=/tmp/vhost-test-94UYU1/flags-mismatch.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-94UYU1/flags-mismatch.sock,server=on
> > > qemu-system-arm: Failed to write msg. Wrote -1 instead of 52.
> > > qemu-system-arm: vhost_set_mem_table failed: Invalid argument (22)
> > > qemu-system-arm: Failed to set msg fds.
> > > qemu-system-arm: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
> > > UndefinedBehaviorSanitizer:DEADLYSIGNAL
> > > ==8618==ERROR: UndefinedBehaviorSanitizer: SEGV on unknown address 0x (pc 0x55e34deccab0 bp 0x sp 0x7ffc94894710 T8618)
> > > ==8618==The signal is caused by a READ memory access.
> > > ==8618==Hint: address points to the zero page.
> > > #0 0x55e34deccab0 in ldl_he_p /builds/qemu-project/qemu/include/qemu/bswap.h:301:5
> > > #1 0x55e34deccab0 in ldn_he_p /builds/qemu-project/qemu/include/qemu/bswap.h:440:1
> > > #2 0x55e34deccab0 in flatview_write_continue /builds/qemu-project/qemu/build/../softmmu/physmem.c:2824:19
> > > #3 0x55e34dec9f21 in flatview_write /builds/qemu-project/qemu/build/../softmmu/physmem.c:2867:12
> > > #4 0x55e34dec9f21 in address_space_write /builds/qemu-project/qemu/build/../softmmu/physmem.c:2963:18
> > > #5 0x55e34decace7 in address_space_unmap /builds/qemu-pro
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On 3/11/22 13:13, Michael S. Tsirkin wrote:
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Thu, 3 Nov 2022 at 08:14, Michael S. Tsirkin wrote:
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Wed, Nov 02, 2022 at 03:47:43PM -0400, Stefan Hajnoczi wrote:
Re: [PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
On Wed, Nov 02, 2022 at 12:02:14PM -0400, Michael S. Tsirkin wrote:
> Changes from v1:
>
> Applied and squashed fixes by Igor, Lei He, Hesham Almatary for
> bugs that tripped up the pipeline.
> Updated expected files for core-count test.

Several "make check" CI failures have occurred. They look like they are
related. Here is one (see the URLs at the bottom of this email for more
details):

17/106 ERROR:../tests/qtest/qos-test.c:191:subprocess_run_one_test: child process (/arm/virt/virtio-mmio/virtio-bus/virtio-net-device/virtio-net/virtio-net-tests/vhost-user/flags-mismatch/subprocess [8609]) failed unexpectedly ERROR
17/106 qemu:qtest+qtest-arm / qtest-arm/qos-test ERROR 31.44s killed by signal 6 SIGABRT
[PULL v2 00/82] pci,pc,virtio: features, tests, fixes, cleanups
Changes from v1:

Applied and squashed fixes by Igor, Lei He, Hesham Almatary for
bugs that tripped up the pipeline.
Updated expected files for core-count test.

The following changes since commit a11f65ec1b8adcb012b89c92819cbda4dc25aaf1:

  Merge tag 'block-pull-request' of https://gitlab.com/stefanha/qemu into staging (2022-11-01 13:49:33 -0400)

are available in the Git repository at:

  https://git.kernel.org/pub/scm/virt/kvm/mst/qemu.git tags/for_upstream

for you to fetch changes up to 77dd1e2b092bb92978a2d68bed7d048ed74a5d23:

  intel-iommu: PASID support (2022-11-02 07:55:26 -0400)

pci,pc,virtio: features, tests, fixes, cleanups

lots of acpi rework
first version of biosbits infrastructure
ASID support in vhost-vdpa
core_count2 support in smbios
PCIe DOE emulation
virtio vq reset
HMAT support
part of infrastructure for viommu support in vhost-vdpa
VTD PASID support
fixes, tests all over the place

Signed-off-by: Michael S. Tsirkin

Akihiko Odaki (1):
      msix: Assert that specified vector is in range

Alex Bennée (1):
      virtio: re-order vm_running and use_started checks

Ani Sinha (7):
      hw/i386/e820: remove legacy reserved entries for e820
      acpi/tests/avocado/bits: initial commit of test scripts that are run by biosbits
      acpi/tests/avocado/bits: disable acpi PSS tests that are failing in biosbits
      acpi/tests/avocado/bits: add biosbits config file for running bios tests
      acpi/tests/avocado/bits: add acpi and smbios avocado tests that uses biosbits
      acpi/tests/avocado/bits/doc: add a doc file to describe the acpi bits test
      MAINTAINERS: add myself as the maintainer for acpi biosbits avocado tests

Bernhard Beschow (3):
      hw/i386/acpi-build: Remove unused struct
      hw/i386/acpi-build: Resolve redundant attribute
      hw/i386/acpi-build: Resolve north rather than south bridges

Brice Goglin (4):
      hmat acpi: Don't require initiator value in -numa
      tests: acpi: add and whitelist *.hmat-noinitiator expected blobs
      tests: acpi: q35: add test for hmat nodes without initiators
      tests: acpi: q35: update expected blobs *.hmat-noinitiators expected HMAT:

Christian A. Ehrhardt (1):
      hw/acpi/erst.c: Fix memory handling issues

Cindy Lu (1):
      vfio: move implement of vfio_get_xlat_addr() to memory.c

David Daney (1):
      virtio-rng-pci: Allow setting nvectors, so we can use MSI-X

Eric Auger (1):
      hw/virtio/virtio-iommu-pci: Enforce the device is plugged on the root bus

Gregory Price (1):
      hw/i386/pc.c: CXL Fixed Memory Window should not reserve e820 in bios

Hesham Almatary (3):
      tests: Add HMAT AArch64/virt empty table files
      tests: acpi: aarch64/virt: add a test for hmat nodes with no initiators
      tests: virt: Update expected *.acpihmatvirt tables

Huai-Cheng Kuo (3):
      hw/pci: PCIe Data Object Exchange emulation
      hw/cxl/cdat: CXL CDAT Data Object Exchange implementation
      hw/mem/cxl-type3: Add CXL CDAT Data Object Exchange

Igor Mammedov (11):
      acpi: pc: vga: use AcpiDevAmlIf interface to build VGA device descriptors
      tests: acpi: whitelist DSDT before generating PCI-ISA bridge AML automatically
      acpi: pc/q35: drop ad-hoc PCI-ISA bridge AML routines and let bus ennumeration generate AML
      tests: acpi: update expected DSDT after ISA bridge is moved directly under PCI host bridge
      tests: acpi: whitelist DSDT before generating ICH9_SMB AML automatically
      acpi: add get_dev_aml_func() helper
      acpi: enumerate SMB bridge automatically along with other PCI devices
      tests: acpi: update expected blobs
      tests: acpi: pc/q35 whitelist DSDT before \_GPE cleanup
      acpi: pc/35: sanitize _GPE declaration order
      tests: acpi: update expected blobs

Jason Wang (4):
      intel-iommu: don't warn guest errors when getting rid2pasid entry
      intel-iommu: drop VTDBus
      intel-iommu: convert VTD_PE_GET_FPD_ERR() to be a function
      intel-iommu: PASID support

Jonathan Cameron (2):
      hw/mem/cxl-type3: Add MSIX support
      hw/pci-bridge/cxl-upstream: Add a CDAT table access DOE

Julia Suvorova (5):
      hw/smbios: add core_count2 to smbios table type 4
      bios-tables-test: teach test to use smbios 3.0 tables
      tests/acpi: allow changes for core_count2 test
      bios-tables-test: add test for number of cores > 255
      tests/acpi: update tables for new core count test

Kangjie Xu (10):
      virtio: introduce virtio_queue_enable()
      virtio: core: vq reset feature negotation support
      virtio-pci: support queue enable
      vhost: expose vhost_virtqueue_start()
      vhost: expose vhost_virtqueue_stop()
      vhost-net: vhost-kernel: introduce vhost_net_virtqueue_reset()
      vhost-net: vhost-kernel: introduce vhost_net_virtqueue_restart()
      virtio-net: introduce flush_or_purge_queued_packets()