Re: [BUG] Unable to boot Xen 4.11 (shipped with Ubuntu) on Intel 10i3 CPU

2020-12-28 Thread Ondrej Balaz
Thanks for the explanation! I'll stick with older HW until this is out.

On Tue, 29 Dec 2020 at 03:49, Andrew Cooper wrote:

> On 28/12/2020 18:08, Ondrej Balaz wrote:
> > [original bug report and boot log quoted in full; snipped]
>
> Yes, we're aware of it.  It is because modern Intel systems no longer
> have a legacy PIT configured by default, and Xen depends on this.  (The
> error message is misleading.  It's not checking for a timer so much as
> checking that interrupts work, and it depends on the legacy PIT
> "working" as the source of interrupts.)
>
> I'm working on a fix.
>
> ~Andrew
>


[linux-linus test] 157945: regressions - FAIL

2020-12-28 Thread osstest service owner
flight 157945 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157945/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine   6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt   7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-pair 10 xen-install/src_host fail REGR. vs. 152332
 test-amd64-i386-pair 11 xen-install/dst_host fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-raw7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow 7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail REGR. vs. 152332
 test-amd64-amd64-xl  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start   fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start   fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-installfail REGR. vs. 152332
 test-amd64-amd64-libvirt 14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian  fail REGR. vs. 152332
 test-arm64-arm64-xl  12 debian-install   fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-pair25 guest-start/debian   fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start  fail REGR. vs. 152332
 test-arm64-arm64-examine 13 examine-iommufail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop  fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop   fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim 7 xen-install  fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start  fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm   8 xen-boot fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 157937 REGR. vs. 152332
 test-arm64-arm64-xl-credit1  12 debian-install fail in 157937 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl   10 host-ping-check-xen fail in 157937 pass in 157945
 test-arm64-arm64-examine  8 reboot   fail in 157937 pass in 157945
 test-arm64-arm64-libvirt-xsm  8 xen-boot   fail pass in 157937
 test-arm64-arm64-xl-credit2   8 xen-boot   

[qemu-mainline test] 157943: regressions - FAIL

2020-12-28 Thread osstest service owner
flight 157943 qemu-mainline real [real]
flight 157948 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157943/
http://logs.test-lab.xenproject.org/osstest/logs/157948/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd 17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stopfail like 152631
 test-armhf-armhf-libvirt 16 saverestore-support-checkfail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-checkfail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stopfail like 152631
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeatfail  like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim14 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  15 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  16 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 15 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-checkfail  never pass

version targeted for testing:
 qemuu a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu 1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  130 days
Failing since152659  2020-08-21 14:07:39 Z  129 days  267 attempts
Testing same since   157670  2020-12-18 13:57:58 Z   10 days   20 attempts


323 people touched revisions under test,
not listing them all

jobs:
 

[linux-linus test] 157937: regressions - FAIL

2020-12-28 Thread osstest service owner
flight 157937 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157937/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine   6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt   7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-pair 10 xen-install/src_host fail REGR. vs. 152332
 test-amd64-i386-pair 11 xen-install/dst_host fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-raw7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow 7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail REGR. vs. 152332
 test-amd64-amd64-xl  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen  fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start   fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start   fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-installfail REGR. vs. 152332
 test-amd64-amd64-libvirt 14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian  fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  12 debian-install   fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-pair25 guest-start/debian   fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop  fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop   fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim 7 xen-install  fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start  fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm   8 xen-boot fail REGR. vs. 152332
 test-arm64-arm64-xl  12 debian-install fail in 157930 REGR. vs. 152332
 test-arm64-arm64-examine 13 examine-iommu  fail in 157930 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle 10 host-ping-check-xen fail in 157930 pass in 157937
 test-arm64-arm64-xl-credit1   8 xen-boot fail in 157930 pass in 157937
 test-arm64-arm64-libvirt-xsm  8 xen-boot fail in 157930 pass in 157937
 test-arm64-arm64-xl-credit2   8 xen-boot fail in 

Re: [BUG] Unable to boot Xen 4.11 (shipped with Ubuntu) on Intel 10i3 CPU

2020-12-28 Thread Andrew Cooper
On 28/12/2020 18:08, Ondrej Balaz wrote:
> Hi,
> I recently updated my home server running Ubuntu 20.04 (Focal) with
> Xen hypervisor 4.11 (installed using Ubuntu packages). Before the
> upgrade everything ran fine and both dom0 and all domUs booted fine.
> The upgrade was literally moving the hard drive from a 6th-gen Intel
> CPU system to a 10th-gen one and redoing the EFI entries from an
> Ubuntu live USB.
>
> After doing so, standalone Ubuntu (without Xen multiboot) boots just
> fine, but Ubuntu as dom0 with Xen fails pretty early on with the
> following error (hand-copied from phone snaps I took with loglvl=all,
> as this is a barebone system without a serial port and I don't know
> how to dump full logs in case of a panic):
>
> (XEN) ACPI: IOAPIC (id[0x02] address[0xfec0] gsi_base[01])
> (XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec0, GSI 0-119
> (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
> (XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
> (XEN) ACPI: IRQ0 used by override.
> (XEN) ACPI: IRQ2 used by override 
> (XEN) ACPI: IRQ9 used by override
> (XEN) Enabling APIC mode: Flat.  Using 1 I/O APICs
> (XEN) ACPI: HPET id: 0x8086a201 base: 0xfed0
> (XEN) ERST table was not found
> (XEN) ACPI: BGRT: invalidating v1 image at 0x7d7c1018
> (XEN) Using ACPI (MADT) for SMP configuration information
> ...
> (XEN) Switched to APIC driver x2apic_cluster
> ...  
> (XEN) Initing memory sharing.
> (XEN) alt table 82d08042b840 -> 82d08042d7ce
> ...
> (XEN) Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB.
> (XEN) Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB.
> (XEN) Intel VT-d Snoop Control not enabled 
> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled 
> (XEN) Intel VT-d Queued Invalidation enabled 
> (XEN) Intel VT-d Interrupt Remapping enabled
> (XEN) Intel VT-d Posted Interrupt not enabled  
> (XEN) Intel VT-d Shared EPT tables enabled
> (XEN) I/O virtualisation enabled
> (XEN)  - Dom0 mode: Relaxed
> (XEN) Interrupt remapping enabled
> (XEN) nr_sockets: 1
> (XEN) Enabled directed EOI with ioapic_ack_old on!
> (XEN) ENABLING IO-APIC IRQs
> (XEN)  -> Using old ACK method
> (XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
> (XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
> (XEN) ...trying to set up timer (IRQ0) through the 8259A ... failed.
> (XEN) ...trying to set up timer as Virtual Wire IRQ... failed.
> (XEN) ...trying to set up timer as ExtINT IRQ...spurious 8259A
> interrupt IRQ7.
> (XEN) CPU0: No irq handler for vector e7 (IRQ -8)
> (XEN) IRQ7 a=0001[0001,] v=60[] t=IO-APIC-edge s=0002
> (XEN)  failed :(.
> (XEN)
> (XEN) ***
> (XEN) Panic on CPU 0:
> (XEN) IO-APIC + timer doesn't work!  Boot with apic_verbosity=debug
> and send report.  Then try booting with the `noapic` option
> (XEN) ***
>
> I suspected that the drive migration could be the problem, so I took
> an empty SSD, installed fresh Ubuntu and added the Xen hypervisor;
> after reboot I ended up with the same panic. I tried booting with
> noapic (gave a general page fault) and iommu=0 (said it needs
> iommu=required/force). Booting this exact fresh install on an older
> (6th-gen) Intel CPU succeeded. I happen to have access to one more
> system with a 10th-gen Intel CPU (a Lenovo laptop) and had no luck
> booting Xen there either, with the same panic in the end.
>
> Back on my barebone I tried to match the BIOS settings between the
> working and non-working systems, but it didn't help. Virtualization
> is enabled, both systems are from the same maker (Intel NUC
> barebones), and both are EFI-enabled with secure boot disabled (the
> latter one doesn't seem to have an option to disable EFI boot and
> boot using MBR).
>
> Is this a known issue? Are there any boot options that can
> potentially fix this?
>
> Any help (including how to dump full Xen boot logs without serial)
> appreciated.

Yes, we're aware of it.  It is because modern Intel systems no longer
have a legacy PIT configured by default, and Xen depends on this.  (The
error message is misleading.  It's not checking for a timer so much as
checking that interrupts work, and it depends on the legacy PIT
"working" as the source of interrupts.)

I'm working on a fix.

~Andrew



Re: [PATCH 1/5] x86/vPCI: tolerate (un)masking a disabled MSI-X entry

2020-12-28 Thread Roger Pau Monné
On Mon, Dec 07, 2020 at 11:36:38AM +0100, Jan Beulich wrote:
> None of the four reasons causing vpci_msix_arch_mask_entry() to get
> called (there's just a single call site) are impossible or illegal prior
> to an entry actually having got set up:
> - the entry may remain masked (in this case, however, a prior masked ->
>   unmasked transition would already not have worked),
> - MSI-X may not be enabled,
> - the global mask bit may be set,
> - the entry may not otherwise have been updated.
> Hence the function asserting that the entry was previously set up was
> simply wrong. Since the caller tracks the masked state (and setting up
> of an entry would only be effected when that software bit is clear),
> it's okay to skip both masking and unmasking requests in this case.

In the original approach I just added this because I had convinced
myself that scenario was impossible. I think we could also do:

diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
index 64dd0a929c..509cf3962c 100644
--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -357,7 +357,11 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len,
          * so that it picks the new state.
          */
         entry->masked = new_masked;
-        if ( !new_masked && msix->enabled && !msix->masked && entry->updated )
+
+        if ( !msix->enabled )
+            break;
+
+        if ( !new_masked && !msix->masked && entry->updated )
         {
             /*
              * If MSI-X is enabled, the function mask is not active, the entry
@@ -470,6 +474,7 @@ static int init_msix(struct pci_dev *pdev)
     for ( i = 0; i < pdev->vpci->msix->max_entries; i++)
     {
         pdev->vpci->msix->entries[i].masked = true;
+        pdev->vpci->msix->entries[i].updated = true;
         vpci_msix_arch_init_entry(&pdev->vpci->msix->entries[i]);
     }

In order to solve the issue.

As pointed out in another patch, regardless of what we end up doing
with the issue at hand, we might have to consider setting updated to
true in init_msix in case we want to somehow support enabling an entry
that has its address and data fields set to 0.

> 
> Fixes: d6281be9d0145 ('vpci/msix: add MSI-X handlers')
> Reported-by: Manuel Bouyer 
> Signed-off-by: Jan Beulich 

Reviewed-by: Roger Pau Monné 

Manuel, can we get confirmation that this fixes your issue?

Thanks, Roger.



PVH mode PCI passthrough status

2020-12-28 Thread tosher 1
Hi,

As of Xen 4.10, PCI passthrough support was not available in PVH mode. I
was wondering whether PCI passthrough support was added in a later version.

It would be great to know the latest status of PCI passthrough support for
the Xen PVH mode. Please let me know if you have any updates on this.

 Thanks,
Mehrab



[BUG] Unable to boot Xen 4.11 (shipped with Ubuntu) on Intel 10i3 CPU

2020-12-28 Thread Ondrej Balaz
Hi,
I recently updated my home server running Ubuntu 20.04 (Focal) with Xen
hypervisor 4.11 (installed using Ubuntu packages). Before the upgrade
everything ran fine and both dom0 and all domUs booted fine. The upgrade
was literally moving the hard drive from a 6th-gen Intel CPU system to a
10th-gen one and redoing the EFI entries from an Ubuntu live USB.

After doing so, standalone Ubuntu (without Xen multiboot) boots just fine,
but Ubuntu as dom0 with Xen fails pretty early on with the following error
(hand-copied from phone snaps I took with loglvl=all, as this is a
barebone system without a serial port and I don't know how to dump full
logs in case of a panic):

(XEN) ACPI: IOAPIC (id[0x02] address[0xfec0] gsi_base[01])
(XEN) IOAPIC[0]: apic_id 2, version 32, address 0xfec0, GSI 0-119
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override
(XEN) ACPI: IRQ9 used by override
(XEN) Enabling APIC mode: Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x8086a201 base: 0xfed0
(XEN) ERST table was not found
(XEN) ACPI: BGRT: invalidating v1 image at 0x7d7c1018
(XEN) Using ACPI (MADT) for SMP configuration information
...
(XEN) Switched to APIC driver x2apic_cluster
...
(XEN) Initing memory sharing.
(XEN) alt table 82d08042b840 -> 82d08042d7ce
...
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB, 2MB, 1GB.
(XEN) Intel VT-d iommu 1 supported page sizes: 4kB, 2MB, 1GB.
(XEN) Intel VT-d Snoop Control not enabled
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled
(XEN) Intel VT-d Queued Invalidation enabled
(XEN) Intel VT-d Interrupt Remapping enabled
(XEN) Intel VT-d Posted Interrupt not enabled
(XEN) Intel VT-d Shared EPT tables enabled
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) nr_sockets: 1
(XEN) Enabled directed EOI with ioapic_ack_old on!
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using old ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) ..MP-BIOS bug: 8254 timer not connected to IO-APIC
(XEN) ...trying to set up timer (IRQ0) through the 8259A ... failed.
(XEN) ...trying to set up timer as Virtual Wire IRQ... failed.
(XEN) ...trying to set up timer as ExtINT IRQ...spurious 8259A interrupt
IRQ7.
(XEN) CPU0: No irq handler for vector e7 (IRQ -8)
(XEN) IRQ7 a=0001[0001,] v=60[] t=IO-APIC-edge s=0002
(XEN)  failed :(.
(XEN)
(XEN) ***
(XEN) Panic on CPU 0:
(XEN) IO-APIC + timer doesn't work!  Boot with apic_verbosity=debug and
send report.  Then try booting with the `noapic` option
(XEN) ***

I suspected that the drive migration could be the problem, so I took an
empty SSD, installed fresh Ubuntu and added the Xen hypervisor; after
reboot I ended up with the same panic. I tried booting with noapic (gave a
general page fault) and iommu=0 (said it needs iommu=required/force).
Booting this exact fresh install on an older (6th-gen) Intel CPU
succeeded. I happen to have access to one more system with a 10th-gen
Intel CPU (a Lenovo laptop) and had no luck booting Xen there either, with
the same panic in the end.

Back on my barebone I tried to match the BIOS settings between the working
and non-working systems, but it didn't help. Virtualization is enabled,
both systems are from the same maker (Intel NUC barebones), and both are
EFI-enabled with secure boot disabled (the latter one doesn't seem to have
an option to disable EFI boot and boot using MBR).

Is this a known issue? Are there any boot options that can potentially
fix this?

Any help (including how to dump full Xen boot logs without serial)
appreciated.

Thanks,
Ondrej


Re: [PATCH 5/5] vPCI/MSI-X: tidy init_msix()

2020-12-28 Thread Roger Pau Monné
On Mon, Dec 07, 2020 at 11:38:42AM +0100, Jan Beulich wrote:
> First of all, introduce a local variable for the to-be-allocated struct.
> The compiler can't CSE all the occurrences (I'm observing 80 bytes of
> code saved with gcc 10). Additionally, while the caller can cope and
> there was no memory leak, globally "announce" the struct only once done
> initializing it. This also removes the dependency of the function on
> the caller cleaning up after it in case of an error.
> 
> Signed-off-by: Jan Beulich 

Reviewed-by: Roger Pau Monné 

Just a couple of comments.

> ---
> I was heavily tempted to also move up the call to vpci_add_register(),
> such that there would be no pointless init done in case of an error
> coming back from there.

Feel free to do so.

> 
> --- a/xen/drivers/vpci/msix.c
> +++ b/xen/drivers/vpci/msix.c
> @@ -436,6 +436,7 @@ static int init_msix(struct pci_dev *pde
>      uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
>      unsigned int msix_offset, i, max_entries;
>      uint16_t control;
> +    struct vpci_msix *msix;
>      int rc;
>  
>      msix_offset = pci_find_cap_offset(pdev->seg, pdev->bus, slot, func,
> @@ -447,34 +448,37 @@ static int init_msix(struct pci_dev *pde
>  
>      max_entries = msix_table_size(control);
>  
> -    pdev->vpci->msix = xzalloc_flex_struct(struct vpci_msix, entries,
> -                                           max_entries);
> -    if ( !pdev->vpci->msix )
> +    msix = xzalloc_flex_struct(struct vpci_msix, entries, max_entries);
> +    if ( !msix )
>          return -ENOMEM;
>  
> -    pdev->vpci->msix->max_entries = max_entries;
> -    pdev->vpci->msix->pdev = pdev;
> +    msix->max_entries = max_entries;
> +    msix->pdev = pdev;
>  
> -    pdev->vpci->msix->tables[VPCI_MSIX_TABLE] =
> +    msix->tables[VPCI_MSIX_TABLE] =
>          pci_conf_read32(pdev->sbdf, msix_table_offset_reg(msix_offset));
> -    pdev->vpci->msix->tables[VPCI_MSIX_PBA] =
> +    msix->tables[VPCI_MSIX_PBA] =
>          pci_conf_read32(pdev->sbdf, msix_pba_offset_reg(msix_offset));
>  
> -    for ( i = 0; i < pdev->vpci->msix->max_entries; i++)
> +    for ( i = 0; i < msix->max_entries; i++)

Feel free to just use max_entries directly here.

>      {
> -        pdev->vpci->msix->entries[i].masked = true;
> -        vpci_msix_arch_init_entry(&pdev->vpci->msix->entries[i]);
> +        msix->entries[i].masked = true;

I think we should also set msix->entries[i].updated = true; for
correctness? Albeit this will never lead to a working configuration,
as the address field will be 0 and thus cause an error to trigger if
enabled without prior setup.

Maybe on a different patch anyway.

Thanks, Roger.



Re: [PATCH 4/5] vPCI/MSI-X: make use of xzalloc_flex_struct()

2020-12-28 Thread Roger Pau Monné
On Mon, Dec 07, 2020 at 11:38:21AM +0100, Jan Beulich wrote:
> ... instead of effectively open-coding it in a type-unsafe way.
> 
> Signed-off-by: Jan Beulich 

Reviewed-by: Roger Pau Monné 

Thanks.



Re: [PATCH 3/5] vPCI/MSI-X: fold clearing of entry->updated

2020-12-28 Thread Roger Pau Monné
On Mon, Dec 07, 2020 at 11:37:51AM +0100, Jan Beulich wrote:
> Both call sites clear the flag after a successful call to
> update_entry(). This can be simplified by moving the clearing into the
> function, onto its success path.

The point of returning a value was to set the updated field, as there
was no failure log message printed by the callers.

> Signed-off-by: Jan Beulich 
> ---
> As a result it becomes clear that the return value of the function is of
> no interest to either of the callers. I'm not sure whether ditching it
> is the right thing to do, or whether this rather hints at some problem.

I think you should make the function void as part of this change,
there's a log message printed by update_entry in the failure case
which IMO should be enough.

There's not much else callers can do AFAICT.

> --- a/xen/drivers/vpci/msix.c
> +++ b/xen/drivers/vpci/msix.c
> @@ -64,6 +64,8 @@ static int update_entry(struct vpci_msix
>  return rc;
>  }
>  
> +entry->updated = false;
> +
>  return 0;
>  }
>  
> @@ -92,13 +94,8 @@ static void control_write(const struct p
>  if ( new_enabled && !new_masked && (!msix->enabled || msix->masked) )
>  {
>  for ( i = 0; i < msix->max_entries; i++ )
> -{
> -if ( msix->entries[i].masked || !msix->entries[i].updated ||
> - update_entry(&msix->entries[i], pdev, i) )
> -continue;
> -
> -msix->entries[i].updated = false;
> -}
> +if ( !msix->entries[i].masked && msix->entries[i].updated )
> +update_entry(&msix->entries[i], pdev, i);
>  }
>  else if ( !new_enabled && msix->enabled )
>  {
> @@ -365,10 +362,7 @@ static int msix_write(struct vcpu *v, un
>   * data fields Xen needs to disable and enable the entry in order
>   * to pick up the changes.
>   */
> -if ( update_entry(entry, pdev, vmsix_entry_nr(msix, entry)) )
> -break;
> -
> -entry->updated = false;
> +update_entry(entry, pdev, vmsix_entry_nr(msix, entry));
>  }

You could also drop these braces now if you feel like it.

Thanks, Roger.



Re: [PATCH 2/5] x86/vPCI: check address in vpci_msi_update()

2020-12-28 Thread Roger Pau Monné
On Mon, Dec 07, 2020 at 11:37:22AM +0100, Jan Beulich wrote:
> If the upper address bits don't match the interrupt delivery address
> space window, entirely different behavior would need to be implemented.
> Refuse such requests for the time being.
> 
> Replace adjacent hard tabs while introducing MSI_ADDR_BASE_MASK.
> 
> Signed-off-by: Jan Beulich 

Reviewed-by: Roger Pau Monné 

Thanks, Roger.



[qemu-mainline test] 157936: regressions - FAIL

2020-12-28 Thread osstest service owner
flight 157936 qemu-mainline real [real]
flight 157942 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/157936/
http://logs.test-lab.xenproject.org/osstest/logs/157942/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd 17 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 152631
 test-armhf-armhf-libvirt 16 saverestore-support-check fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim 14 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail   never pass
 test-amd64-i386-libvirt  15 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl  16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt 15 migrate-support-check fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail  never pass

version targeted for testing:
 qemuu a05f8ecd88f15273d033b6f044b850a8af84a5b8
baseline version:
 qemuu 1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  130 days
Failing since    152659  2020-08-21 14:07:39 Z  129 days  266 attempts
Testing same since   157670  2020-12-18 13:57:58 Z   10 days   19 attempts


323 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm 

Re: [PATCH 5/5] x86: don't build unused entry code when !PV32

2020-12-28 Thread Roger Pau Monné
On Wed, Nov 25, 2020 at 09:51:33AM +0100, Jan Beulich wrote:
> Except for the initial part of cstar_enter, compat/entry.S is all dead
> code in this case. Further, along the lines of the PV conditionals we
> already have in entry.S, make code PV32-conditional there too (to a
> fair part because this code actually references compat/entry.S).
> 
> Signed-off-by: Jan Beulich 
> ---
> TBD: I'm on the fence of whether (in a separate patch) to also make
>  conditional struct pv_domain's is_32bit field.
> 
> --- a/xen/arch/x86/x86_64/asm-offsets.c
> +++ b/xen/arch/x86/x86_64/asm-offsets.c
> @@ -9,7 +9,7 @@
>  #include 
>  #endif
>  #include 
> -#ifdef CONFIG_PV
> +#ifdef CONFIG_PV32
>  #include 
>  #endif
>  #include 
> @@ -102,19 +102,21 @@ void __dummy__(void)
>  BLANK();
>  #endif
>  
> -#ifdef CONFIG_PV
> +#ifdef CONFIG_PV32
>  OFFSET(DOMAIN_is_32bit_pv, struct domain, arch.pv.is_32bit);
>  BLANK();
>  
> -OFFSET(VCPUINFO_upcall_pending, struct vcpu_info, evtchn_upcall_pending);
> -OFFSET(VCPUINFO_upcall_mask, struct vcpu_info, evtchn_upcall_mask);
> -BLANK();
> -
>  OFFSET(COMPAT_VCPUINFO_upcall_pending, struct compat_vcpu_info, evtchn_upcall_pending);
>  OFFSET(COMPAT_VCPUINFO_upcall_mask, struct compat_vcpu_info, evtchn_upcall_mask);
>  BLANK();
>  #endif
>  
> +#ifdef CONFIG_PV
> +OFFSET(VCPUINFO_upcall_pending, struct vcpu_info, evtchn_upcall_pending);
> +OFFSET(VCPUINFO_upcall_mask, struct vcpu_info, evtchn_upcall_mask);
> +BLANK();
> +#endif
> +
>  OFFSET(CPUINFO_guest_cpu_user_regs, struct cpu_info, guest_cpu_user_regs);
>  OFFSET(CPUINFO_verw_sel, struct cpu_info, verw_sel);
>  OFFSET(CPUINFO_current_vcpu, struct cpu_info, current_vcpu);
> --- a/xen/arch/x86/x86_64/compat/entry.S
> +++ b/xen/arch/x86/x86_64/compat/entry.S
> @@ -29,8 +29,6 @@ ENTRY(entry_int82)
>  mov   %rsp, %rdi
>  call  do_entry_int82
>  
> -#endif /* CONFIG_PV32 */
> -
>  /* %rbx: struct vcpu */
>  ENTRY(compat_test_all_events)
>  ASSERT_NOT_IN_ATOMIC
> @@ -197,6 +195,8 @@ ENTRY(cr4_pv32_restore)
>  xor   %eax, %eax
>  ret
>  
> +#endif /* CONFIG_PV32 */

I've also wondered, it feels weird to add CONFIG_PV32 gates to the
compat entry.S, since that's supposed to be only used when there's
support for 32bit PV guests?

Wouldn't this file only get built when such support is enabled?

> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -328,8 +328,10 @@ UNLIKELY_END(sysenter_gpf)
>  movq  VCPU_domain(%rbx),%rdi
>  movq  %rax,TRAPBOUNCE_eip(%rdx)
>  movb  %cl,TRAPBOUNCE_flags(%rdx)
> +#ifdef CONFIG_PV32
>  cmpb  $0, DOMAIN_is_32bit_pv(%rdi)
>  jne   compat_sysenter
> +#endif
>  jmp   .Lbounce_exception
>  
>  ENTRY(int80_direct_trap)
> @@ -370,6 +372,7 @@ UNLIKELY_END(msi_check)
>  mov0x80 * TRAPINFO_sizeof + TRAPINFO_eip(%rsi), %rdi
>  movzwl 0x80 * TRAPINFO_sizeof + TRAPINFO_cs (%rsi), %ecx
>  
> +#ifdef CONFIG_PV32
>  mov   %ecx, %edx
>  and   $~3, %edx
>  
> @@ -378,6 +381,10 @@ UNLIKELY_END(msi_check)
>  
>  test  %rdx, %rdx
>  jzint80_slow_path
> +#else
> +test  %rdi, %rdi
> +jzint80_slow_path
> +#endif
>  
>  /* Construct trap_bounce from trap_ctxt[0x80]. */
>  lea   VCPU_trap_bounce(%rbx), %rdx
> @@ -390,8 +397,10 @@ UNLIKELY_END(msi_check)
>  lea   (, %rcx, TBF_INTERRUPT), %ecx
>  mov   %cl, TRAPBOUNCE_flags(%rdx)
>  
> +#ifdef CONFIG_PV32
>  cmpb  $0, DOMAIN_is_32bit_pv(%rax)
>  jne   compat_int80_direct_trap
> +#endif
>  
>  call  create_bounce_frame
>  jmp   test_all_events
> @@ -541,12 +550,16 @@ ENTRY(dom_crash_sync_extable)
>  GET_STACK_END(ax)
>  leaq  STACK_CPUINFO_FIELD(guest_cpu_user_regs)(%rax),%rsp
>  # create_bounce_frame() temporarily clobbers CS.RPL. Fix up.
> +#ifdef CONFIG_PV32
>  movq  STACK_CPUINFO_FIELD(current_vcpu)(%rax), %rax
>  movq  VCPU_domain(%rax),%rax
>  cmpb  $0, DOMAIN_is_32bit_pv(%rax)
>  sete  %al
>  leal  (%rax,%rax,2),%eax
>  orb   %al,UREGS_cs(%rsp)
> +#else
> +orb   $3, UREGS_cs(%rsp)
> +#endif
>  xorl  %edi,%edi
>  jmp   asm_domain_crash_synchronous /* Does not return */
>  .popsection
> @@ -562,11 +575,15 @@ ENTRY(ret_from_intr)
>  GET_CURRENT(bx)
>  testb $3, UREGS_cs(%rsp)
>  jzrestore_all_xen
> +#ifdef CONFIG_PV32
>  movq  VCPU_domain(%rbx), %rax
>  cmpb  $0, DOMAIN_is_32bit_pv(%rax)
>  jetest_all_events
>  jmp   compat_test_all_events
>  #else
> +jmp   test_all_events
> +#endif
> +#else
>  ASSERT_CONTEXT_IS_XEN
>  jmp   restore_all_xen
>  #endif
> @@ -652,7 +669,7 @@ handle_exception_saved:
>  testb $X86_EFLAGS_IF>>8,UREGS_eflags+1(%rsp)
>  jz

Re: [PATCH 4/5] x86: hypercall vector is unused when !PV32

2020-12-28 Thread Roger Pau Monné
On Wed, Nov 25, 2020 at 09:50:51AM +0100, Jan Beulich wrote:
> This vector can be used as an ordinary interrupt handling one in this
> case. To be sure no references are left, make the #define itself
> conditional.
> 
> Signed-off-by: Jan Beulich 

Acked-by: Roger Pau Monné 

Thanks, Roger.



Re: [PATCH 3/5] x86/build: restrict contents of asm-offsets.h when !HVM / !PV

2020-12-28 Thread Roger Pau Monné
On Wed, Nov 25, 2020 at 09:49:54AM +0100, Jan Beulich wrote:
> This file has a long dependencies list (through asm-offsets.[cs]) and a
> long list of dependents. IOW if any of the former changes, all of the
> latter will be rebuilt, even if there's no actual change to the
> generated file. Therefore avoid producing symbols we don't actually
> need, depending on configuration.
> 
> Signed-off-by: Jan Beulich 

I think that answers my question on the previous patch, hence you can
add:

Acked-by: Roger Pau Monné 

To both.

Thanks, Roger.



Re: [PATCH 2/5] x86/build: limit #include-ing by asm-offsets.c

2020-12-28 Thread Roger Pau Monné
On Wed, Nov 25, 2020 at 09:49:21AM +0100, Jan Beulich wrote:
> This file has a long dependencies list and asm-offsets.h, generated from
> it, has a long list of dependents. IOW if any of the former changes, all
> of the latter will be rebuilt, even if there's no actual change to the
> generated file. Therefore avoid including headers we don't actually need
> (generally or configuration dependent).
> 
> Signed-off-by: Jan Beulich 
> 
> --- a/xen/arch/x86/x86_64/asm-offsets.c
> +++ b/xen/arch/x86/x86_64/asm-offsets.c
> @@ -5,11 +5,13 @@
>   */
>  #define COMPILE_OFFSETS
>  
> +#ifdef CONFIG_PERF_COUNTERS
>  #include 
> +#endif
>  #include 
> -#include 
> +#ifdef CONFIG_PV
>  #include 
> -#include 
> +#endif
>  #include 
>  #include 
>  #include 
> @@ -101,7 +103,6 @@ void __dummy__(void)
>  #ifdef CONFIG_PV
>  OFFSET(DOMAIN_is_32bit_pv, struct domain, arch.pv.is_32bit);
>  BLANK();
> -#endif
>  
>  OFFSET(VCPUINFO_upcall_pending, struct vcpu_info, evtchn_upcall_pending);
>  OFFSET(VCPUINFO_upcall_mask, struct vcpu_info, evtchn_upcall_mask);
> @@ -110,6 +111,7 @@ void __dummy__(void)
>  OFFSET(COMPAT_VCPUINFO_upcall_pending, struct compat_vcpu_info, evtchn_upcall_pending);
>  OFFSET(COMPAT_VCPUINFO_upcall_mask, struct compat_vcpu_info, evtchn_upcall_mask);
>  BLANK();
> +#endif

Since you are playing with this, the TRAPINFO/TRAPBOUNCE also seem
like ones to gate with CONFIG_PV. And the VCPU_svm/vmx could be gated
on CONFIG_HVM AFAICT?

Thanks, Roger.



[libvirt test] 157933: regressions - FAIL

2020-12-28 Thread osstest service owner
flight 157933 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157933/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt   6 libvirt-build fail REGR. vs. 151777
 build-i386-libvirt 6 libvirt-build fail REGR. vs. 151777
 build-arm64-libvirt   6 libvirt-build fail REGR. vs. 151777
 build-armhf-libvirt   6 libvirt-build fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a

version targeted for testing:
 libvirt  bed50bcbbb2796aa88f1c85a2424fe9bd7944f89
baseline version:
 libvirt  2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  171 days
Failing since    151818  2020-07-11 04:18:52 Z  170 days  165 attempts
Testing same since   157715  2020-12-19 04:19:22 Z9 days   10 attempts


People who touched revisions under test:
  Adolfo Jayme Barrientos 
  Aleksandr Alekseev 
  Andika Triwidada 
  Andrea Bolognani 
  Balázs Meskó 
  Barrett Schonefeld 
  Bastien Orivel 
  Bihong Yu 
  Binfeng Wu 
  Boris Fiuczynski 
  Brian Turek 
  Christian Ehrhardt 
  Christian Schoenebeck 
  Cole Robinson 
  Collin Walling 
  Cornelia Huck 
  Côme Borsoi 
  Daniel Henrique Barboza 
  Daniel Letai 
  Daniel P. Berrange 
  Daniel P. Berrangé 
  Erik Skultety 
  Fabian Affolter 
  Fabian Freyer 
  Fangge Jin 
  Farhan Ali 
  Fedora Weblate Translation 
  Guoyi Tu
  Göran Uddeborg 
  Halil Pasic 
  Han Han 
  Hao Wang 
  Ian Wienand 
  Jamie Strandboge 
  Jamie Strandboge 
  Jean-Baptiste Holcroft 
  Jianan Gao 
  Jim Fehlig 
  Jin Yan 
  Jiri Denemark 
  John Ferlan 
  Jonathan Watt 
  Jonathon Jongsma 
  Julio Faracco 
  Ján Tomko 
  Kashyap Chamarthy 
  Kevin Locke 
  Laine Stump 
  Liao Pingfang 
  Lin Ma 
  Lin Ma 
  Lin Ma 
  Marc Hartmayer 
  Marc-André Lureau 
  Marek Marczykowski-Górecki 
  Markus Schade 
  Martin Kletzander 
  Masayoshi Mizuma 
  Matt Coleman 
  Matt Coleman 
  Mauro Matteo Cascella 
  Michal Privoznik 
  Michał Smyk 
  Milo Casagrande 
  Neal Gompa 
  Nico Pache 
  Nikolay Shirokovskiy 
  Olaf Hering 
  Olesya Gerasimenko 
  Orion Poplawski 
  Patrick Magauran 
  Paulo de Rezende Pinatti 
  Pavel Hrdina 
  Peter Krempa 
  Pino Toscano 
  Pino Toscano 
  Piotr Drąg 
  Prathamesh Chavan 
  Ricky Tigg 
  Roman Bogorodskiy 
  Roman Bolshakov 
  Ryan Gahagan 
  Ryan Schmidt 
  Sam Hartman 
  Scott Shambarger 
  Sebastian Mitterle 
  Shalini Chellathurai Saroja 
  Shaojun Yang 
  Shi Lei 
  Simon Gaiser 
  Stefan Bader 
  Stefan Berger 
  Szymon Scholz 
  Thomas Huth 
  Tim Wiederhake 
  Tomáš Golembiovský 
  Tuguoyi 
  Wang Xin 
  Weblate 
  Yang Hang 
  Yanqiu Zhang 
  Yi Li 
  Yi Wang 
  Yuri Chornoivan 
  Zheng Chuan 
  zhenwei pi 
  Zhenyu Zheng 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  fail
 build-arm64-libvirt  fail
 build-armhf-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 

Re: [PATCH 1/5] x86/build: limit rebuilding of asm-offsets.h

2020-12-28 Thread Roger Pau Monné
On Wed, Nov 25, 2020 at 09:45:56AM +0100, Jan Beulich wrote:
> This file has a long dependencies list (through asm-offsets.s) and a
> long list of dependents. IOW if any of the former changes, all of the
> latter will be rebuilt, even if there's no actual change to the
> generated file. This is the primary scenario we have the move-if-changed
> macro for.
> 
> Since debug information may easily cause the file contents to change in
> benign ways, also avoid emitting this into the output file.
> 
> Finally already before this change *.new files needed including in what
> gets removed by the "clean" target.
> 
> Signed-off-by: Jan Beulich 

Acked-by: Roger Pau Monné 

> ---
> Perhaps Arm would want doing the same. In fact perhaps the rules should
> be unified by moving to common code?

Having the rule in common code would be my preference, the
prerequisites are slightly different, but I think we can sort this
out?

Thanks, Roger.
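For context, the move-if-changed idiom the patch description relies on can be sketched as a small shell function. This is an illustration of the general technique only; the names and the demo files here are made up, and Xen's actual macro lives in its build system.

```shell
#!/bin/sh
# Sketch of the move-if-changed idiom: only replace the target when the
# freshly generated file differs, so the target's timestamp (and hence
# everything that depends on it) stays untouched on a no-op regeneration.
move_if_changed() {
    new=$1; target=$2
    if cmp -s "$new" "$target"; then
        rm -f "$new"            # identical: discard, keep old timestamp
    else
        mv -f "$new" "$target"  # changed: install the new version
    fi
}

# Demo: regenerating identical content leaves the target alone...
printf '#define FOO 1\n' > target.h
cp target.h target.h.new
move_if_changed target.h.new target.h
[ ! -e target.h.new ] && echo "identical: target kept"

# ...while real changes are installed.
printf '#define FOO 2\n' > target.h.new
move_if_changed target.h.new target.h
grep -q 'FOO 2' target.h && echo "changed: target updated"
```

Running the demo prints "identical: target kept" followed by "changed: target updated".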



[xen-unstable test] 157931: tolerable FAIL

2020-12-28 Thread osstest service owner
flight 157931 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157931/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop fail like 157918
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 157918
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop fail like 157918
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop fail like 157918
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop fail like 157918
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 157918
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail  like 157918
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop fail like 157918
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 157918
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 157918
 test-armhf-armhf-libvirt 16 saverestore-support-check fail  like 157918
 test-amd64-i386-xl-pvshim 14 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail   never pass
 test-amd64-amd64-libvirt 15 migrate-support-check fail   never pass
 test-amd64-i386-libvirt  15 migrate-support-check fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl  16 saverestore-support-check fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-xsm  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-xsm  16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-vhd  14 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  15 saverestore-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt 15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-check fail   never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail   never pass

version targeted for testing:
 xen  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc
baseline version:
 xen  98d4d6d8a6329ea3a8dcf8aab65acdd70c6397fc

Last test of basis   157931  2020-12-28 01:52:33 Z0 days
Testing same since  (not found) 0 attempts

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass
 build-arm64

Re: [PATCH] x86/CPUID: suppress IOMMU related hypervisor leaf data

2020-12-28 Thread Roger Pau Monné
On Mon, Nov 09, 2020 at 11:54:09AM +0100, Jan Beulich wrote:
> Now that the IOMMU for guests can't be enabled "on demand" anymore,
> there's also no reason to expose the related CPUID bit "just in case".
> 
> Signed-off-by: Jan Beulich 

I'm not sure this is helpful from a guest PoV.

How does the guest know whether it has pass through devices, and thus
whether it needs to check if this flag is present or not in order to
safely pass foreign mapped pages (or grants) to the underlying devices?

Ie: prior to this change I would just check whether the flag is
present in CPUID to know whether FreeBSD needs to use a bounce buffer
in blkback and netback when running as a domU. If this is now
conditionally set only when the IOMMU is enabled for the guest I
also need to figure a way to know whether the domU has any passed
through device or not, which doesn't seem trivial.

Roger.



[linux-linus test] 157930: regressions - FAIL

2020-12-28 Thread osstest service owner
flight 157930 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/157930/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-examine   6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt   7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair 10 xen-install/src_host fail REGR. vs. 152332
 test-amd64-i386-pair 11 xen-install/dst_host fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-raw7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow 7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-installfail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install   fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-amd64-xl-pvshim   14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-xl  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-intel 14 guest-start   fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start   fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  14 guest-start  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-installfail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen  fail REGR. vs. 152332
 test-amd64-amd64-xl-shadow   14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-libvirt 14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-xl-xsm  14 guest-start  fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 14 guest-start fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 25 guest-start/debian  fail REGR. vs. 152332
 test-arm64-arm64-xl  12 debian-install   fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm   8 xen-boot fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 14 guest-start fail REGR. vs. 152332
 test-amd64-coresched-amd64-xl 14 guest-start fail REGR. vs. 152332
 test-amd64-amd64-pair25 guest-start/debian   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot fail REGR. vs. 152332
 test-amd64-amd64-libvirt-xsm 14 guest-start  fail REGR. vs. 152332
 test-arm64-arm64-examine 13 examine-iommufail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop  fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop   fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim 7 xen-install  fail REGR. vs. 152332
 test-amd64-amd64-xl-credit1  14 guest-start  fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds 14 guest-start  fail REGR. vs. 152332

Tests which did not succeed, but are not blocking: