Re: [PATCH fsgsbase v2 4/4] x86/fsgsbase: Fix Xen PV support

2020-06-28 Thread Jürgen Groß

On 26.06.20 19:24, Andy Lutomirski wrote:

On Xen PV, SWAPGS doesn't work.  Teach __rdfsbase_inactive() and
__wrgsbase_inactive() to use rdmsrl()/wrmsrl() on Xen PV.  The Xen
pvop code will understand this and issue the correct hypercalls.

Cc: Boris Ostrovsky 
Cc: Juergen Gross 
Cc: Stefano Stabellini 
Cc: xen-devel@lists.xenproject.org
Signed-off-by: Andy Lutomirski 
---
  arch/x86/kernel/process_64.c | 20 ++--
  1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index cb8e37d3acaa..457d02aa10d8 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -163,9 +163,13 @@ static noinstr unsigned long __rdgsbase_inactive(void)
 
 	lockdep_assert_irqs_disabled();
 
-	native_swapgs();
-	gsbase = rdgsbase();
-	native_swapgs();
+	if (!static_cpu_has(X86_FEATURE_XENPV)) {
+		native_swapgs();
+		gsbase = rdgsbase();
+		native_swapgs();
+	} else {
+		rdmsrl(MSR_KERNEL_GS_BASE, gsbase);
+	}
 
 	return gsbase;
 }
@@ -182,9 +186,13 @@ static noinstr void __wrgsbase_inactive(unsigned long gsbase)
 {
 	lockdep_assert_irqs_disabled();
 
-	native_swapgs();
-	wrgsbase(gsbase);
-	native_swapgs();
+	if (!static_cpu_has(X86_FEATURE_XENPV)) {
+		native_swapgs();
+		wrgsbase(gsbase);
+		native_swapgs();
+	} else {
+		wrmsrl(MSR_KERNEL_GS_BASE, gsbase);
+	}
 }
 
 /*




Another possibility would be to just do (I'm fine either way):

diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index acc49fa6a097..b78dd373adbf 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -318,6 +318,8 @@ static void __init xen_init_capabilities(void)
 		setup_clear_cpu_cap(X86_FEATURE_XSAVE);
 		setup_clear_cpu_cap(X86_FEATURE_OSXSAVE);
 	}
+
+	setup_clear_cpu_cap(X86_FEATURE_FSGSBASE);
 }
 
 static void xen_set_debugreg(int reg, unsigned long val)


Juergen
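
For context on the "pvop code will understand this" remark in the commit
message above: on a Xen PV guest the MSR-write pvop can turn an FS/GS base
MSR access into an explicit segment-base hypercall instead of a trapping
WRMSR. The sketch below is illustrative only, assuming the usual
HYPERVISOR_set_segment_base() hypercall wrapper and the SEGBASE_* selectors;
it is not a quote of arch/x86/xen/enlighten_pv.c:

/*
 * Illustrative sketch, not the actual kernel code: routing a write to
 * MSR_KERNEL_GS_BASE (and friends) to a Xen hypercall on a PV guest.
 */
#include <linux/errno.h>
#include <linux/types.h>
#include <asm/msr-index.h>        /* MSR_FS_BASE, MSR_GS_BASE, MSR_KERNEL_GS_BASE */
#include <asm/xen/hypercall.h>    /* HYPERVISOR_set_segment_base() */
#include <asm/xen/interface.h>    /* SEGBASE_* selectors (assumed location) */

static int example_xen_write_segment_base_msr(unsigned int msr, u64 val)
{
	int which;

	switch (msr) {
	case MSR_FS_BASE:
		which = SEGBASE_FS;        /* %fs base */
		break;
	case MSR_KERNEL_GS_BASE:
		which = SEGBASE_GS_USER;   /* the "inactive" (user) %gs base */
		break;
	case MSR_GS_BASE:
		which = SEGBASE_GS_KERNEL; /* the active (kernel) %gs base */
		break;
	default:
		return -EINVAL;            /* not a segment-base MSR */
	}

	/* Ask the hypervisor to update the segment base for this vCPU. */
	return HYPERVISOR_set_segment_base(which, val);
}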



[qemu-mainline test] 151435: regressions - FAIL

2020-06-28 Thread osstest service owner
flight 151435 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151435/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt-xsm  12 guest-start  fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian   fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt 12 guest-start  fail REGR. vs. 151065
 test-arm64-arm64-xl-seattle   7 xen-boot fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-startfail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail 
REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start  fail REGR. vs. 151065
 test-amd64-i386-libvirt  12 guest-start  fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 
151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail 
REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow210 debian-di-installfail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install 
fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install 
fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 
151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 
151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail 
REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail 
REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-installfail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-installfail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. 
vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. 
vs. 151065
 test-armhf-armhf-xl-vhd  10 debian-di-installfail REGR. vs. 151065
 test-armhf-armhf-libvirt 12 guest-start  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 17 guest-start.2   fail blocked in 151065
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-cubietruck 13 

RE: [PATCH] xen: introduce xen_vring_use_dma

2020-06-28 Thread Peng Fan
> Subject: Re: [PATCH] xen: introduce xen_vring_use_dma
> 
> On Thu, Jun 25, 2020 at 10:31:27AM -0700, Stefano Stabellini wrote:
> > On Wed, 24 Jun 2020, Michael S. Tsirkin wrote:
> > > On Wed, Jun 24, 2020 at 02:53:54PM -0700, Stefano Stabellini wrote:
> > > > On Wed, 24 Jun 2020, Michael S. Tsirkin wrote:
> > > > > On Wed, Jun 24, 2020 at 10:59:47AM -0700, Stefano Stabellini wrote:
> > > > > > On Wed, 24 Jun 2020, Michael S. Tsirkin wrote:
> > > > > > > On Wed, Jun 24, 2020 at 05:17:32PM +0800, Peng Fan wrote:
> > > > > > > > Export xen_swiotlb for all platforms using xen swiotlb
> > > > > > > >
> > > > > > > > Use xen_swiotlb to determine when vring should use dma
> > > > > > > > APIs to map the
> > > > > > > > ring: when xen_swiotlb is enabled the dma API is required.
> > > > > > > > When it is disabled, it is not required.
> > > > > > > >
> > > > > > > > Signed-off-by: Peng Fan 
> > > > > > >
> > > > > > > Isn't there some way to use VIRTIO_F_IOMMU_PLATFORM for this?
> > > > > > > Xen was there first, but everyone else is using that now.
> > > > > >
> > > > > > Unfortunately it is complicated and it is not related to
> > > > > > VIRTIO_F_IOMMU_PLATFORM :-(
> > > > > >
> > > > > >
> > > > > > The Xen subsystem in Linux uses dma_ops via swiotlb_xen to
> > > > > > translate foreign mappings (memory coming from other VMs) to
> > > > > > physical addresses.
> > > > > > On x86, it also uses dma_ops to translate Linux's idea of a
> > > > > > physical address into a real physical address (this is
> > > > > > unneeded on ARM.)
> > > > > >
> > > > > >
> > > > > > So regardless of VIRTIO_F_IOMMU_PLATFORM, dma_ops should be
> > > > > > used on Xen/x86 always and on Xen/ARM if Linux is Dom0
> > > > > > (because it has foreign
> > > > > > mappings.) That is why we have the if (xen_domain) return
> > > > > > true; in vring_use_dma_api.
> > > > >
> > > > > VIRTIO_F_IOMMU_PLATFORM makes guest always use DMA ops.
> > > > >
> > > > > Xen hack predates VIRTIO_F_IOMMU_PLATFORM so it *also* forces
> > > > > DMA ops even if VIRTIO_F_IOMMU_PLATFORM is clear.
> > > > >
> > > > > Unfortunately as a result Xen never got around to properly
> > > > > setting VIRTIO_F_IOMMU_PLATFORM.
> > > >
> > > > I don't think VIRTIO_F_IOMMU_PLATFORM would be correct for this
> > > > because the usage of swiotlb_xen is not a property of virtio,
> > >
> > >
> > > Basically any device without VIRTIO_F_ACCESS_PLATFORM (that is it's
> > > name in latest virtio spec, VIRTIO_F_IOMMU_PLATFORM is what linux
> > > calls it) is declared as "special, don't follow normal rules for
> > > access".
> > >
> > > So yes swiotlb_xen is not a property of virtio, but what *is* a
> > > property of virtio is that it's not special, just a regular device
> > > from DMA POV.
> >
> > I am trying to understand what you meant but I think I am missing
> > something.
> >
> > Are you saying that modern virtio should always have
> > VIRTIO_F_ACCESS_PLATFORM, hence use normal dma_ops as any other
> > devices?
> 
> I am saying it's a safe default. Clear VIRTIO_F_ACCESS_PLATFORM if you have
> some special needs e.g. you are very sure it's ok to bypass DMA ops, or you
> need to support a legacy guest (produced in the window between virtio 1
> support in 2014 and support for VIRTIO_F_ACCESS_PLATFORM in 2016).
> 
> 
> > If that is the case, how is it possible that virtio breaks on ARM
> > using the default dma_ops? The breakage is not Xen related (except
> > that Xen turns dma_ops on). The original message from Peng was:
> >
> >   vring_map_one_sg -> vring_use_dma_api
> >     -> dma_map_page
> >       -> __swiotlb_map_page
> >         -> swiotlb_map_page
> >           -> __dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
> >   However we are using per device dma area for rpmsg, phys_to_virt
> >   could not return a correct virtual address for virtual address in
> >   vmalloc area. Then kernel panic.
> >
> > I must be missing something. Maybe it is because it has to do with RPMesg?
> 
> I think it's an RPMesg bug, yes

The rpmsg bug is a separate issue: it should not use dma_alloc_coherent for the
reserved area, and it should use vmalloc_to_page instead.

Anyway, using the dma api here will also trigger an issue.

> 
> >
> > > > > > You might have noticed that I missed one possible case above:
> > > > > > Xen/ARM DomU :-)
> > > > > >
> > > > > > Xen/ARM domUs don't need swiotlb_xen, it is not even
> > > > > > initialized. So if
> > > > > > (xen_domain) return true; would give the wrong answer in that case.
> > > > > > Linux would end up calling the "normal" dma_ops, not
> > > > > > swiotlb-xen, and the "normal" dma_ops fail.
> > > > > >
> > > > > >
> > > > > > The solution I suggested was to make the check in
> > > > > > vring_use_dma_api more flexible by returning true if the
> > > > > > swiotlb_xen is supposed to be used, not in general for all Xen
> > > > > > domains, because that is what the check was really meant to do.
> > > > >
> > > > 
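
To make the check being debated in this thread concrete, here is a simplified
sketch of a vring_use_dma_api()-style helper containing the unconditional Xen
branch discussed above. It is a shape sketch only, assuming the
virtio_has_iommu_quirk() and xen_domain() helpers available in kernels of this
era; the real function in drivers/virtio/virtio_ring.c differs in detail:

#include <linux/virtio_config.h>   /* virtio_has_iommu_quirk(); pulls in struct virtio_device */
#include <xen/xen.h>               /* xen_domain() */

/* Simplified sketch of the check discussed above, not the exact upstream code. */
static bool example_vring_use_dma_api(const struct virtio_device *vdev)
{
	/*
	 * VIRTIO_F_IOMMU_PLATFORM (VIRTIO_F_ACCESS_PLATFORM in the spec) set
	 * means the device is "not special": go through the normal DMA API
	 * like any other device.
	 */
	if (!virtio_has_iommu_quirk(vdev))
		return true;

	/*
	 * The Xen special case predates the feature bit: force the DMA API
	 * for *all* Xen domains so that swiotlb-xen can translate foreign
	 * and physical addresses.  This is the "if (xen_domain()) return
	 * true;" the thread is about, and it is too broad for a Xen/ARM
	 * DomU that never initializes swiotlb-xen.
	 */
	if (xen_domain())
		return true;

	return false;
}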

RE: [PATCH] xen: introduce xen_vring_use_dma

2020-06-28 Thread Peng Fan
> Subject: Re: [PATCH] xen: introduce xen_vring_use_dma
> 
> On Wed, 24 Jun 2020, Michael S. Tsirkin wrote:
> > On Wed, Jun 24, 2020 at 02:53:54PM -0700, Stefano Stabellini wrote:
> > > On Wed, 24 Jun 2020, Michael S. Tsirkin wrote:
> > > > On Wed, Jun 24, 2020 at 10:59:47AM -0700, Stefano Stabellini wrote:
> > > > > On Wed, 24 Jun 2020, Michael S. Tsirkin wrote:
> > > > > > On Wed, Jun 24, 2020 at 05:17:32PM +0800, Peng Fan wrote:
> > > > > > > Export xen_swiotlb for all platforms using xen swiotlb
> > > > > > >
> > > > > > > Use xen_swiotlb to determine when vring should use dma APIs
> > > > > > > to map the
> > > > > > > ring: when xen_swiotlb is enabled the dma API is required.
> > > > > > > When it is disabled, it is not required.
> > > > > > >
> > > > > > > Signed-off-by: Peng Fan 
> > > > > >
> > > > > > Isn't there some way to use VIRTIO_F_IOMMU_PLATFORM for this?
> > > > > > Xen was there first, but everyone else is using that now.
> > > > >
> > > > > Unfortunately it is complicated and it is not related to
> > > > > VIRTIO_F_IOMMU_PLATFORM :-(
> > > > >
> > > > >
> > > > > The Xen subsystem in Linux uses dma_ops via swiotlb_xen to
> > > > > translate foreign mappings (memory coming from other VMs) to
> > > > > physical addresses.
> > > > > On x86, it also uses dma_ops to translate Linux's idea of a
> > > > > physical address into a real physical address (this is unneeded
> > > > > on ARM.)
> > > > >
> > > > >
> > > > > So regardless of VIRTIO_F_IOMMU_PLATFORM, dma_ops should be used
> > > > > on Xen/x86 always and on Xen/ARM if Linux is Dom0 (because it
> > > > > has foreign
> > > > > mappings.) That is why we have the if (xen_domain) return true;
> > > > > in vring_use_dma_api.
> > > >
> > > > VIRTIO_F_IOMMU_PLATFORM makes guest always use DMA ops.
> > > >
> > > > Xen hack predates VIRTIO_F_IOMMU_PLATFORM so it *also* forces DMA
> > > > ops even if VIRTIO_F_IOMMU_PLATFORM is clear.
> > > >
> > > > Unfortunately as a result Xen never got around to properly setting
> > > > VIRTIO_F_IOMMU_PLATFORM.
> > >
> > > I don't think VIRTIO_F_IOMMU_PLATFORM would be correct for this
> > > because the usage of swiotlb_xen is not a property of virtio,
> >
> >
> > Basically any device without VIRTIO_F_ACCESS_PLATFORM (that is it's
> > name in latest virtio spec, VIRTIO_F_IOMMU_PLATFORM is what linux
> > calls it) is declared as "special, don't follow normal rules for
> > access".
> >
> > So yes swiotlb_xen is not a property of virtio, but what *is* a
> > property of virtio is that it's not special, just a regular device from DMA 
> > POV.
> 
> I am trying to understand what you meant but I think I am missing something.
> 
> Are you saying that modern virtio should always have
> VIRTIO_F_ACCESS_PLATFORM, hence use normal dma_ops as any other
> devices?
> 
> If that is the case, how is it possible that virtio breaks on ARM using the
> default dma_ops? The breakage is not Xen related (except that Xen turns
> dma_ops on). The original message from Peng was:
> 
>   vring_map_one_sg -> vring_use_dma_api
>     -> dma_map_page
>       -> __swiotlb_map_page
>         -> swiotlb_map_page
>           -> __dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
>   However we are using per device dma area for rpmsg, phys_to_virt
>   could not return a correct virtual address for virtual address in
>   vmalloc area. Then kernel panic.
> 
> I must be missing something. Maybe it is because it has to do with RPMesg?

I am not going to fix the rpmsg issue with this patch. The case here is an ARM
DomU Android OS communicating with the secure-world Trusty OS over virtio:
vring_use_dma_api will return true for a Xen DomU, but I do not need it to
return true and fall into swiotlb.

Thanks,
Peng.

> 
> 
> > > > > You might have noticed that I missed one possible case above:
> > > > > Xen/ARM DomU :-)
> > > > >
> > > > > Xen/ARM domUs don't need swiotlb_xen, it is not even
> > > > > initialized. So if
> > > > > (xen_domain) return true; would give the wrong answer in that case.
> > > > > Linux would end up calling the "normal" dma_ops, not
> > > > > swiotlb-xen, and the "normal" dma_ops fail.
> > > > >
> > > > >
> > > > > The solution I suggested was to make the check in
> > > > > vring_use_dma_api more flexible by returning true if the
> > > > > swiotlb_xen is supposed to be used, not in general for all Xen
> > > > > domains, because that is what the check was really meant to do.
> > > >
> > > > Why not fix DMA ops so they DTRT (nop) on Xen/ARM DomU? What is
> > > > wrong with that?
> > >
> > > swiotlb-xen is not used on Xen/ARM DomU, the default dma_ops are the
> > > ones that are used. So you are saying, why don't we fix the default
> > > dma_ops to work with virtio?
> > >
> > > It is bad that the default dma_ops crash with virtio, so yes I think
> > > it would be good to fix that. However, even if we fixed that, the if
> > > (xen_domain()) check in vring_use_dma_api is still 
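
Based on the patch subject and the suggestion above, a hypothetical sketch of
the more targeted check: key the decision on whether swiotlb-xen is actually
in use for this domain rather than on xen_domain() alone. The helper name is
taken from the subject line; its body and the exported xen_swiotlb flag are
assumptions about the proposed approach, not a quote of the submitted patch:

#include <xen/xen.h>               /* xen_domain() */

extern int xen_swiotlb;            /* assumed exported for all Xen platforms by the patch */

/* Hypothetical helper: only force the DMA API when swiotlb-xen will back it. */
static inline bool xen_vring_use_dma(void)
{
	return xen_domain() && xen_swiotlb;
}

/*
 * vring_use_dma_api() would then call this instead of the bare xen_domain()
 * test, so a Xen/ARM DomU that never sets up swiotlb-xen keeps using plain
 * virtual addresses for the ring.
 */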

[linux-linus test] 151429: regressions - FAIL

2020-06-28 Thread osstest service owner
flight 151429 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151429/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 151214
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 151214
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass

version targeted for testing:
 linux                719fdd32921fb7e3208db8832d32ae1c2d68900f
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   10 days
Failing since        151236  2020-06-19 19:10:35 Z    9 days   10 attempts
Testing same since   151429  2020-06-28 09:40:49 Z    0 days    1 attempts


[xen-unstable test] 151416: regressions - FAIL

2020-06-28 Thread osstest service owner
flight 151416 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151416/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine  4 memdisk-try-append   fail REGR. vs. 151382

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-multivcpu  7 xen-boot  fail pass in 151402

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail in 151402 like 
151382
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail in 151402 never 
pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail in 151402 
never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 151382
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 151382
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 151382
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 151382
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 151382
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 151382
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 151382
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 151382
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 151382
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen  88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
baseline version:
 xen  d3688bf60f798074bf38d712a3e15c88cfb81ed4

Last test of basis   151382  2020-06-26 15:39:39 Z    2 days
Testing same since   151402  2020-06-27 10:19:44 Z    1 days    2 attempts


[libvirt test] 151417: tolerable all pass - PUSHED

2020-06-28 Thread osstest service owner
flight 151417 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151417/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 151396
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 151396
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-checkfail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-checkfail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass

version targeted for testing:
 libvirt  d482cf6bef484e697f1dbb99f2504e7d67b149e7
baseline version:
 libvirt  f6f745297d884453ef4ed65643d267069f778517

Last test of basis   151396  2020-06-27 04:19:57 Z    1 days
Testing same since   151417  2020-06-28 04:19:04 Z    0 days    1 attempts


People who touched revisions under test:
  Daniel Henrique Barboza 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-arm64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-libvirt-xsm pass
 test-arm64-arm64-libvirt-xsm pass
 test-amd64-i386-libvirt-xsm  pass
 test-amd64-amd64-libvirt pass
 test-arm64-arm64-libvirt pass
 test-armhf-armhf-libvirt pass
 test-amd64-i386-libvirt  pass
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair pass
 test-arm64-arm64-libvirt-qcow2   pass
 test-armhf-armhf-libvirt-raw pass
 test-amd64-amd64-libvirt-vhd pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   f6f745297d..d482cf6bef  d482cf6bef484e697f1dbb99f2504e7d67b149e7 -> 
xen-tested-master



[qemu-mainline test] 151414: regressions - FAIL

2020-06-28 Thread osstest service owner
flight 151414 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151414/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt-xsm  12 guest-start  fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian   fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt 12 guest-start  fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-startfail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail 
REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start  fail REGR. vs. 151065
 test-amd64-i386-libvirt  12 guest-start  fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 
151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail 
REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow210 debian-di-installfail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install 
fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install 
fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 
151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 
151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail 
REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail 
REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-installfail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-installfail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. 
vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. 
vs. 151065
 test-armhf-armhf-xl-vhd  10 debian-di-installfail REGR. vs. 151065
 test-armhf-armhf-libvirt 12 guest-start  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-cubietruck 13 

[qemu-mainline bisection] complete test-amd64-i386-qemuu-rhel6hvm-intel

2020-06-28 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-qemuu-rhel6hvm-intel
testid redhat-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151431/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster 
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
  qdev: Assert devices are plugged into a bus that can take them
  
  This would have caught some of the bugs I just fixed.
  
  Signed-off-by: Markus Armbruster 
  Reviewed-by: Mark Cave-Ayland 
  Message-Id: <20200609122339.937862-23-arm...@redhat.com>


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-qemuu-rhel6hvm-intel.redhat-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-qemuu-rhel6hvm-intel.redhat-install
 --summary-out=tmp/151431.bisection-summary --basis-template=151065 
--blessings=real,real-bisect qemu-mainline test-amd64-i386-qemuu-rhel6hvm-intel 
redhat-install
Searching for failure / basis pass:
 151399 fail [host=chardonnay0] / 151149 [host=huxelrebe0] 151101 
[host=albana1] 151065 [host=albana0] 151047 [host=huxelrebe1] 150970 
[host=huxelrebe0] 150930 [host=elbling0] 150916 [host=elbling1] 150909 
[host=italia0] 150895 [host=fiano0] 150831 [host=fiano1] 150694 ok.
Failure / basis pass flights: 151399 / 150694
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
a4a2258a1fec5481b0bd929b049921cb07a0 
3c659044118e34603161457db9934a34f816d78b 
553cf5d7c47bee05a3dec9461c1f8430316d516b 
d11c75185276ded944f2ea0277532b7fee849bbc 
fde76f895d0aa817a1207d844d793239c6639bc6
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
bb78cfbec07eda45118b630a09b0af549b43a135 
3c659044118e34603161457db9934a34f816d78b 
66234fee9c2d37bfbc523aa8d0ae5300a14cc10e 
2e3de6253422112ae43e608661ba94ea6b345694 
1497e78068421d83956f8e82fb6e1bf1fc3b1199
Generating revisions with ./adhoc-revtuple-generator  
git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784
 
git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
 
git://xenbits.xen.org/osstest/ovmf.git#bb78cfbec07eda45118b630a09b0af549b43a135-a4a2258a1fec5481b0bd929b049921cb07a0
 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db9934a34f816d78b-3c659044118e34603161457db9934a34f816d78b 
git://git.qemu.org/qemu.git#66234fee9c2d37bfbc523aa8d0ae5300a14cc10e-553cf5d7c47bee05a3dec9461c1f8430316d516b
 
git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-d11c75185276ded944f2ea0277532b7fee849bbc
 
git://xenbits.xen.org/xen.git#1497e78068421d83956f8e82fb6e1bf1fc3b1199-fde76f895d0aa817a1207d844d793239c6639bc6
Use of uninitialized value $parents in array dereference at 
./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at 
./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at 
./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at 
./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at 
./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at 
./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at 
./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at 
./adhoc-revtuple-generator line 465.
Loaded 55319 nodes in revision graph
Searching for test results:
 150694 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 

[xen-unstable-coverity test] 151428: all pass - PUSHED

2020-06-28 Thread osstest service owner
flight 151428 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151428/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen  88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
baseline version:
 xen  fde76f895d0aa817a1207d844d793239c6639bc6

Last test of basis   151271  2020-06-21 09:18:26 Z    7 days
Testing same since   151428  2020-06-28 09:23:57 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Grzegorz Uriasz 
  Ian Jackson 
  Jan Beulich 
  Jason Andryuk 
  Tim Deegan 
  Wei Liu 

jobs:
 coverity-amd64   pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   fde76f895d..88cfd062e8  88cfd062e8318dfeb67c7d2eb50b6cd224b0738a -> 
coverity-tested/smoke



[linux-linus test] 151410: regressions - FAIL

2020-06-28 Thread osstest service owner
flight 151410 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151410/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 test-amd64-amd64-examine  4 memdisk-try-append   fail REGR. vs. 151214
 test-armhf-armhf-libvirt  7 xen-boot fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 151214
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass

version targeted for testing:
 linux                42afe7d1c6ef77212250af3459e549d1a944cc8a
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   10 days
Failing since        151236  2020-06-19 19:10:35 Z    8 days    9 attempts
Testing same since   151410  2020-06-27 20:39:04 Z    0 days    1 attempts


Re: [PATCH] efi: avoid error message when booting under Xen

2020-06-28 Thread Jürgen Groß

Ping?

On 10.06.20 16:10, Juergen Gross wrote:

efifb_probe() will issue an error message in case the kernel is booted
as Xen dom0 from UEFI as EFI_MEMMAP won't be set in this case. Avoid
that message by calling efi_mem_desc_lookup() only if EFI_PARAVIRT
isn't set.

Fixes: 38ac0287b7f4 ("fbdev/efifb: Honour UEFI memory map attributes when mapping the FB")
Signed-off-by: Juergen Gross 
---
  drivers/video/fbdev/efifb.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
index 65491ae74808..f5eccd1373e9 100644
--- a/drivers/video/fbdev/efifb.c
+++ b/drivers/video/fbdev/efifb.c
@@ -453,7 +453,7 @@ static int efifb_probe(struct platform_device *dev)
 	info->apertures->ranges[0].base = efifb_fix.smem_start;
 	info->apertures->ranges[0].size = size_remap;
 
-	if (efi_enabled(EFI_BOOT) &&
+	if (efi_enabled(EFI_BOOT) && !efi_enabled(EFI_PARAVIRT) &&
 	    !efi_mem_desc_lookup(efifb_fix.smem_start, &md)) {
 		if ((efifb_fix.smem_start + efifb_fix.smem_len) >
 		    (md.phys_addr + (md.num_pages << EFI_PAGE_SHIFT))) {






Questions about PVH memory layout

2020-06-28 Thread joshua_peter
Hello everyone,

I hope this is the right forum for these kinds of questions (the website
states no "technical support queries"; I'm not sure if this qualifies).
If not, sorry for the disturbance; just simply direct me elsewhere then.

Anyway, I'm currently trying to get into how Xen works in detail, so
for one I've been reading a lot of code, but also I dumped the P2M table
of my PVH guest to get a feel for how things are laid out in memory. I
mean there is the usual stuff, such as lots of RAM, and the APIC is
mapped at 0xFEE00000 and the ACPI tables at 0xFC000000 onwards. But two
things stuck out to me, which for the life of me I couldn't figure out
from just reading the code. The first one is, there are a few pages at
the end of the 32bit address space (from 0xFEFF8000 to 0xFEFFF000),
which according to the E820 is declared simply as "reserved". The other
thing is, the first 512 pages at the beginning of the address space are
mapped linearly, which usually leads to them being mapped as a single
2MB page. But there is this one page at 0x1000 that sticks out
completely. By that I mean (to make things more concrete), in my PVH
guest the page at 0x0 maps to 0x13C200000, 0x2000 maps to
0x13C202000, 0x3000 maps to 0x13C203000, etc. But 0x1000 maps
to 0xB8DBD000, which seems very odd to me (at least from simply looking
at the addresses). My initial guess was that this is some bootloader
related stuff, but Google didn't turn up any info related to that
memory area, and most of the x86/PC boot stuff seems to happen below
the 0x1000 mark.

Would someone be so kind to tell me what those two things are? Many
thanks in advance.

(Btw. I'm running Xen 4.13.1 on Intel x64, booting my PVH guest with
PyGRUB, if it's relevant.)

Best regards,
Peter