Re: [Xen-devel] [iGVT-g] [vfio-users] [PATCH v3 00/11] igd passthrough chipset tweaks

2016-01-31 Thread Alex Williamson
On Sat, 2016-01-30 at 01:18 +, Kay, Allen M wrote:
> 
> > -Original Message-
> > From: iGVT-g [mailto:igvt-g-boun...@lists.01.org] On Behalf Of Alex
> > Williamson
> > Sent: Friday, January 29, 2016 10:00 AM
> > To: Gerd Hoffmann
> > Cc: igv...@ml01.01.org; xen-de...@lists.xensource.com; Eduardo Habkost;
> > Stefano Stabellini; qemu-de...@nongnu.org; Cao jin; vfio-
> > us...@redhat.com
> > Subject: Re: [iGVT-g] [vfio-users] [PATCH v3 00/11] igd passthrough chipset
> > tweaks
> > 
> > Do guest drivers depend on IGD appearing at 00:02.0?  I'm currently testing
> > for any Intel VGA device, but I wonder if I should only be enabling anything
> > opregion if it also appears at a specific address.
> > 
> 
> No.  Both Windows and Linux IGD drivers should work at any PCI slot.  We have
> seen 0:5.0 in the guest and the driver works.

Thanks Allen.  Another question: when I boot a VM with an assigned HD
P4000 GPU, my console streams with IOMMU faults like:

DMAR: DMAR:[DMA Write] Request device [00:02.0] fault addr 9fa3 
DMAR: DMAR:[DMA Write] Request device [00:02.0] fault addr 9fa3 
DMAR: DMAR:[DMA Write] Request device [00:02.0] fault addr 9fa3 
DMAR: DMAR:[DMA Write] Request device [00:02.0] fault addr 9fa3 
DMAR: DMAR:[DMA Write] Request device [00:02.0] fault addr 9fa3 

All of these fall within the host RMRR range for the device:

DMAR: Setting RMRR:
DMAR: Setting identity map for device :00:02.0 [0x9f80 - 0xaf9f]
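
(For reference, one way to pull the host's RMRR ranges back out of the
kernel log for comparison with the fault addresses above, assuming the
DMAR messages are still in the ring buffer:

    dmesg | grep -iE 'rmrr|identity map'
)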

A while back, we excluded devices using RMRRs from participating in
IOMMU API domains because they may continue to DMA to these reserved
regions after assignment, possibly corrupting VM memory
(c875d2c1b808).  Intel later decided this exclusion shouldn't apply to
graphics devices (18436afdc11a).  Don't the above IOMMU faults show
that exactly the problem we're trying to prevent by the general
exclusion of RMRR-encumbered devices from the IOMMU API is actually
occurring?  If I
were to have VM memory within the RMRR address range, I wouldn't be
seeing these faults, I'd be having the GPU corrupt my VM memory.

David notes in the latter commit above:

"We should be able to successfully assign graphics devices to guests
too, as long as the initial handling of stolen memory is reconfigured
appropriately."

What code is supposed to be doing that reconfiguration when a device is
assigned?  Clearly we don't have it yet, making assignment of these
devices very unsafe.  It seems like vfio or IOMMU code in the kernel
needs device-specific code to clear these settings to make it safe for
userspace, then perhaps VM BIOS support to reallocate.  Is there any
consistency across IGD revisions for doing this?  Is there a spec?
Thanks,

Alex




Re: [Xen-devel] [PATCH v5 09/10] vring: Use the DMA API on Xen

2016-01-31 Thread Andy Lutomirski
On Sun, Jan 31, 2016 at 12:09 PM, Michael S. Tsirkin  wrote:
> On Fri, Jan 29, 2016 at 10:34:59AM +, David Vrabel wrote:
>> On 29/01/16 02:31, Andy Lutomirski wrote:
>> > Signed-off-by: Andy Lutomirski 
>> > ---
>> >  drivers/virtio/virtio_ring.c | 12 
>> >  1 file changed, 12 insertions(+)
>> >
>> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
>> > index c169c6444637..305c05cc249a 100644
>> > --- a/drivers/virtio/virtio_ring.c
>> > +++ b/drivers/virtio/virtio_ring.c
>> > @@ -47,6 +47,18 @@
>> >
>> >  static bool vring_use_dma_api(void)
>> >  {
>> > +#if defined(CONFIG_X86) && defined(CONFIG_XEN)
>> > +   /*
>> > +* In theory, it's possible to have a buggy QEMU-supposed
>> > +* emulated Q35 IOMMU and Xen enabled at the same time.  On
>> > +* such a configuration, virtio has never worked and will
>> > +* not work without an even larger kludge.  Instead, enable
>> > +* the DMA API if we're a Xen guest, which at least allows
>> > +* all of the sensible Xen configurations to work correctly.
>> > +*/
>> > +   return static_cpu_has(X86_FEATURE_XENPV);
>>
>> You want:
>>
>> if (xen_domain())
>> return true;
>>
>> Without the #if so we use the DMA API for all types of Xen guest on all
>> architectures.
>>
>> David
>
> I doubt HVM domains can have virtio devices.
>

They certainly can under nested virt (L0 provides virtio device, L1 is
Xen, and L2 is Linux).  Of course, this won't work given the current
QEMU situation unless Xen can pass things through to dom0 without an
IOMMU, which seems plausible to me.

But yes, xen_domain() sounds right to me.  I just failed to find that
function when I wrote this patch.

Michael, if you like the rest of the series, I'd be okay if you
changed this patch to use xen_domain() when you apply it.  If I send a
v2, I'll fix it up.

--Andy
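
(For concreteness, a minimal sketch of the function with David's
suggestion folded in, assuming xen_domain() from <xen/xen.h>; not
necessarily the exact code that will land in v2:

    #include <xen/xen.h>

    static bool vring_use_dma_api(void)
    {
        /*
         * On Xen, guest-physical addresses are not necessarily
         * machine addresses, so every Xen guest type, on every
         * architecture, wants the DMA API for virtio buffers.
         */
        if (xen_domain())
            return true;

        return false;
    }
)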



Re: [Xen-devel] [PATCH v5 00/10] virtio DMA API, yet again

2016-01-31 Thread Andy Lutomirski
On Sun, Jan 31, 2016 at 12:12 PM, Michael S. Tsirkin  wrote:
> On Thu, Jan 28, 2016 at 06:31:13PM -0800, Andy Lutomirski wrote:
>> This switches virtio to use the DMA API on Xen and if requested by
>> module option.
>>
>> This fixes virtio on Xen, and it shouldn't break anything because it's
>> off by default on everything except Xen PV on x86.
>>
>> To the Xen people: is this okay?  If it doesn't work on other Xen
>> variants (PVH? HVM?), can you submit follow-up patches to fix it?
>>
>> To everyone else: we've waffled on this for way too long.  I think
>> we should get the DMA API implementation in with a conservative
>> policy like this rather than waiting until we achieve perfection.
>> I'm tired of carrying these patches around.
>
> I agree, thanks for working on this!
>
>> Michael, if these survive review, can you stage these in your tree?
>
> Yes, I'll stage everything except 10/10. I'd rather not maintain a
> module option like this, things work for now and I'm working on a
> clean solution for things like dpdk within guest.

The module option was mainly for testing, but patching in a "return
true" works just as well.

I ran the code through the DMA API debugging stuff and swiotlb=force
with the module option set under KVM (no Xen), and everything seemed
to work.

--Andy



[Xen-devel] [linux-4.1 test] 79527: regressions - FAIL

2016-01-31 Thread osstest service owner
flight 79527 linux-4.1 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/79527/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-rumpuserxen       6 xen-build                 fail REGR. vs. 66399
 build-i386-rumpuserxen        6 xen-build                 fail REGR. vs. 66399
 test-armhf-armhf-xl-credit2  15 guest-start/debian.repeat fail REGR. vs. 66399
 test-armhf-armhf-xl-xsm      15 guest-start/debian.repeat fail REGR. vs. 66399
 test-armhf-armhf-xl-cubietruck 15 guest-start/debian.repeat fail in 79387 REGR. vs. 66399
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop          running in 79090

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds      3 host-install(3)    broken in 79329 pass in 79527
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 9 debian-hvm-install fail in 79090 pass in 79527
 test-armhf-armhf-xl-credit2  11 guest-start        fail in 79090 pass in 79527
 test-armhf-armhf-xl-xsm      11 guest-start        fail in 79090 pass in 79527
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 9 debian-hvm-install fail in 79090 pass in 79527
 test-armhf-armhf-xl          15 guest-start/debian.repeat fail in 79387 pass in 79090
 test-armhf-armhf-xl-multivcpu 15 guest-start/debian.repeat fail pass in 79329
 test-armhf-armhf-xl          11 guest-start        fail pass in 79387
 test-armhf-armhf-xl-cubietruck 11 guest-start      fail pass in 79387
 test-armhf-armhf-libvirt      6 xen-boot           fail pass in 79430
 test-armhf-armhf-xl-arndale   6 xen-boot           fail pass in 79430

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-rumpuserxen-i386 10 guest-start     fail in 79090 like 66399
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop  fail in 79090 like 66399
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop  fail like 66399
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop   fail like 66399
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeat fail like 66399
 test-armhf-armhf-xl-vhd   9 debian-di-install      fail like 66399

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)  blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)  blocked  n/a
 test-armhf-armhf-xl  12 migrate-support-check       fail in 79387 never pass
 test-armhf-armhf-xl  13 saverestore-support-check   fail in 79387 never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail in 79387 never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail in 79387 never pass
 test-armhf-armhf-libvirt 14 guest-saverestore       fail in 79430 never pass
 test-armhf-armhf-libvirt 12 migrate-support-check   fail in 79430 never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail in 79430 never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check fail in 79430 never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start         fail   never pass
 test-amd64-amd64-xl-pvh-intel 14 guest-saverestore  fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore   fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check   fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt  12 migrate-support-check   fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check   fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check   fail   never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-install   fail   never pass

version targeted for testing:
 linux

[Xen-devel] [xen-unstable test] 79502: regressions - FAIL

2016-01-31 Thread osstest service owner
flight 79502 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/79502/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    5 xen-build            fail REGR. vs. 79422

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 11 guest-start   fail   like 79379
 build-amd64-rumpuserxen   6 xen-build     fail   like 79422

Tests which did not succeed, but are not blocking:
 build-i386-libvirt            1 build-check(1)   blocked  n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 build-i386-rumpuserxen        1 build-check(1)   blocked  n/a
 test-amd64-i386-xl            1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-pair          1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvh-amd  11 guest-start          fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start         fail   never pass
 test-armhf-armhf-libvirt 14 guest-saverestore        fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore  fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-xl  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail  never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-check    fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore    fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore    fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-check fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check fail   never pass


Re: [Xen-devel] [PATCH v5 09/10] vring: Use the DMA API on Xen

2016-01-31 Thread Andy Lutomirski
On Sun, Jan 31, 2016 at 12:18 PM, Michael S. Tsirkin  wrote:
> On Sun, Jan 31, 2016 at 12:13:58PM -0800, Andy Lutomirski wrote:
>> On Sun, Jan 31, 2016 at 12:09 PM, Michael S. Tsirkin  wrote:
>> > On Fri, Jan 29, 2016 at 10:34:59AM +, David Vrabel wrote:
>> >> On 29/01/16 02:31, Andy Lutomirski wrote:
>> >> > Signed-off-by: Andy Lutomirski 
>> >> > ---
>> >> >  drivers/virtio/virtio_ring.c | 12 
>> >> >  1 file changed, 12 insertions(+)
>> >> >
>> >> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
>> >> > index c169c6444637..305c05cc249a 100644
>> >> > --- a/drivers/virtio/virtio_ring.c
>> >> > +++ b/drivers/virtio/virtio_ring.c
>> >> > @@ -47,6 +47,18 @@
>> >> >
>> >> >  static bool vring_use_dma_api(void)
>> >> >  {
>> >> > +#if defined(CONFIG_X86) && defined(CONFIG_XEN)
>> >> > +   /*
>> >> > +* In theory, it's possible to have a buggy QEMU-supposed
>> >> > +* emulated Q35 IOMMU and Xen enabled at the same time.  On
>> >> > +* such a configuration, virtio has never worked and will
>> >> > +* not work without an even larger kludge.  Instead, enable
>> >> > +* the DMA API if we're a Xen guest, which at least allows
>> >> > +* all of the sensible Xen configurations to work correctly.
>> >> > +*/
>> >> > +   return static_cpu_has(X86_FEATURE_XENPV);
>> >>
>> >> You want:
>> >>
>> >> if (xen_domain())
>> >> return true;
>> >>
>> >> Without the #if so we use the DMA API for all types of Xen guest on all
>> >> architectures.
>> >>
>> >> David
>> >
>> > I doubt HVM domains can have virtio devices.
>> >
>>
>> They certainly can under nested virt (L0 provides virtio device, L1 is
>> Xen, and L2 is Linux).  Of course, this won't work given the current
>> QEMU situation unless Xen can pass things through to dom0 without an
>> IOMMU, which seems plausible to me.
>>
>> But yes, xen_domain() sounds right to me.  I just failed to find that
>> function when I wrote this patch.
>>
>> Michael, if you like the rest of the series, I'd be okay if you
>> changed this patch to use xen_domain() when you apply it.  If I send a
>> v2, I'll fix it up.
>>
>> --Andy
>
> I'd rather you just posted a tested v2 of 9/10 for now as I don't test
> Xen.  It seems easy but I had more than my share of obvious fixes
> failing spectacularly.
>

In that case, let me test for real.  Can you point me to a git tree
when you have patches 1-8 staged and I'll spin patch 9 v2 and test it
with the real context?

--Andy



[Xen-devel] Clarifying PVH mode requirements

2016-01-31 Thread PGNet Dev

I run Xen 4.6 Dom0

rpm -qa | egrep -i "kernel-default-4|xen-4"
kernel-default-devel-4.4.0-8.1.g9f68b90.x86_64
xen-4.6.0_08-405.1.x86_64

My guests are currently HVM in PVHVM mode; I'm exploring PVH.

IIUC, for 4.6, this doc

http://xenbits.xen.org/docs/4.6-testing/misc/pvh-readme.txt

instructs the following necessary changes:

@ GRUB cfg

-   GRUB_CMDLINE_XEN=" ..."
+   GRUB_CMDLINE_XEN=" dom0pvh ..."

&, @ guest.cfg

+   pvh = 1

For my guest.cfg, currently in PVHVM mode, I have

builder = 'hvm'
xen_platform_pci = 1
device_model_version="qemu-xen"
hap = 1
...

Q:
Do any of these^^ params need to also change with the addition of

pvh = 1



"At the moment HAP is required for PVH."


As above, I've 'hap = 1' enabled.

But checking cpu,

hwinfo --cpu | egrep "Arch|Model"
  Arch: X86-64
  Model: 6.60.3 "Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz"

neither 'hap' nor 'ept' is specifically called out,

egrep -wo 'vmx|lm|aes' /proc/cpuinfo | sort | uniq \
 | sed -e 's/aes/Hardware encryption=Yes (&)/g' \
       -e 's/lm/64 bit cpu=Yes (&)/g' \
       -e 's/vmx/Intel hardware virtualization=Yes (&)/g'

Hardware encryption=Yes (aes)
64 bit cpu=Yes (lm)

egrep -wo 'hap|vmx|ept|vpid|npt|tpr_shadow|flexpriority|vnmi|lm|aes' /proc/cpuinfo | sort | uniq

aes
lm

IIUC, Intel introduced EPT with the Nehalem arch, which precedes
Haswell by ~5 years.


Q:
	Am I out of luck re: PVH with this more modern Haswell? Or is there
a different check I should be running?
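
One data point that may help: a PV dom0 kernel does not necessarily see
the raw vmx/ept CPUID bits, so /proc/cpuinfo may simply be the wrong
place to look. Asking the hypervisor directly should be more telling; a
sketch, assuming the standard xl tooling:

    # capabilities Xen itself detected on this hardware
    xl info | grep -i virt_caps

    # Xen's boot log states whether VMX and HAP/EPT were found
    xl dmesg | grep -iE 'vmx|hap|ept'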



"At present the only PVH guest is an x86 64bit PV linux."


Is this still current/true info?



Re: [Xen-devel] [PATCH v5 09/10] vring: Use the DMA API on Xen

2016-01-31 Thread Michael S. Tsirkin
On Sun, Jan 31, 2016 at 12:13:58PM -0800, Andy Lutomirski wrote:
> On Sun, Jan 31, 2016 at 12:09 PM, Michael S. Tsirkin  wrote:
> > On Fri, Jan 29, 2016 at 10:34:59AM +, David Vrabel wrote:
> >> On 29/01/16 02:31, Andy Lutomirski wrote:
> >> > Signed-off-by: Andy Lutomirski 
> >> > ---
> >> >  drivers/virtio/virtio_ring.c | 12 
> >> >  1 file changed, 12 insertions(+)
> >> >
> >> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> >> > index c169c6444637..305c05cc249a 100644
> >> > --- a/drivers/virtio/virtio_ring.c
> >> > +++ b/drivers/virtio/virtio_ring.c
> >> > @@ -47,6 +47,18 @@
> >> >
> >> >  static bool vring_use_dma_api(void)
> >> >  {
> >> > +#if defined(CONFIG_X86) && defined(CONFIG_XEN)
> >> > +   /*
> >> > +* In theory, it's possible to have a buggy QEMU-supposed
> >> > +* emulated Q35 IOMMU and Xen enabled at the same time.  On
> >> > +* such a configuration, virtio has never worked and will
> >> > +* not work without an even larger kludge.  Instead, enable
> >> > +* the DMA API if we're a Xen guest, which at least allows
> >> > +* all of the sensible Xen configurations to work correctly.
> >> > +*/
> >> > +   return static_cpu_has(X86_FEATURE_XENPV);
> >>
> >> You want:
> >>
> >> if (xen_domain())
> >> return true;
> >>
> >> Without the #if so we use the DMA API for all types of Xen guest on all
> >> architectures.
> >>
> >> David
> >
> > I doubt HVM domains can have virtio devices.
> >
> 
> They certainly can under nested virt (L0 provides virtio device, L1 is
> Xen, and L2 is Linux).  Of course, this won't work given the current
> QEMU situation unless Xen can pass things through to dom0 without an
> IOMMU, which seems plausible to me.
> 
> But yes, xen_domain() sounds right to me.  I just failed to find that
> function when I wrote this patch.
> 
> Michael, if you like the rest of the series, I'd be okay if you
> changed this patch to use xen_domain() when you apply it.  If I send a
> v2, I'll fix it up.
> 
> --Andy

I'd rather you just posted a tested v2 of 9/10 for now as I don't test
Xen.  It seems easy but I had more than my share of obvious fixes
failing spectacularly.

-- 
MST



Re: [Xen-devel] [PATCH v5 00/10] virtio DMA API, yet again

2016-01-31 Thread Michael S. Tsirkin
On Thu, Jan 28, 2016 at 06:31:13PM -0800, Andy Lutomirski wrote:
> This switches virtio to use the DMA API on Xen and if requested by
> module option.
> 
> This fixes virtio on Xen, and it shouldn't break anything because it's
> off by default on everything except Xen PV on x86.
> 
> To the Xen people: is this okay?  If it doesn't work on other Xen
> variants (PVH? HVM?), can you submit follow-up patches to fix it?
> 
> To everyone else: we've waffled on this for way too long.  I think
> we should get the DMA API implementation in with a conservative
> policy like this rather than waiting until we achieve perfection.
> I'm tired of carrying these patches around.

I agree, thanks for working on this!

> Michael, if these survive review, can you stage these in your tree?

Yes, I'll stage everything except 10/10. I'd rather not maintain a
module option like this, things work for now and I'm working on a
clean solution for things like dpdk within guest.

So far I saw some comments on 9/10.

> Can you also take a look at tools/virtio?  I probably broke it, but I
> couldn't get it to build without these patches either, so I'm stuck.

Will do.

> Changes from v4:
>  - Bake vring_use_dma_api in from the beginning.
>  - Automatically enable only on Xen.
>  - Add module parameter.
>  - Add s390 and alpha DMA API implementations.
>  - Rebase to 4.5-rc1.
> 
> Changes from v3:
>  - More big-endian fixes.
>  - Added better virtio-ring APIs that handle allocation and use them in
>virtio-mmio and virtio-pci.
>  - Switch to Michael's virtio-net patch.
> 
> Changes from v2:
>  - Fix vring_mapping_error incorrect argument
> 
> Changes from v1:
>  - Fix an endian conversion error causing a BUG to hit.
>  - Fix a DMA ordering issue (swiotlb=force works now).
>  - Minor cleanups.
> 
> Andy Lutomirski (7):
>   vring: Introduce vring_use_dma_api()
>   virtio_ring: Support DMA APIs
>   virtio: Add improved queue allocation API
>   virtio_mmio: Use the DMA API if enabled
>   virtio_pci: Use the DMA API if enabled
>   vring: Use the DMA API on Xen
>   vring: Add a module parameter to force-enable the DMA API
> 
> Christian Borntraeger (3):
>   dma: Provide simple noop dma ops
>   alpha/dma: use common noop dma ops
>   s390/dma: Allow per device dma ops
> 
>  arch/alpha/kernel/pci-noop.c|  46 +---
>  arch/s390/Kconfig   |   6 +-
>  arch/s390/include/asm/device.h  |   6 +-
>  arch/s390/include/asm/dma-mapping.h |   6 +-
>  arch/s390/pci/pci.c |   1 +
>  arch/s390/pci/pci_dma.c |   4 +-
>  drivers/virtio/Kconfig  |   2 +-
>  drivers/virtio/virtio_mmio.c|  67 ++
>  drivers/virtio/virtio_pci_common.h  |   6 -
>  drivers/virtio/virtio_pci_legacy.c  |  42 ++--
>  drivers/virtio/virtio_pci_modern.c  |  61 ++
>  drivers/virtio/virtio_ring.c| 412 ++--
>  include/linux/dma-mapping.h |   2 +
>  include/linux/virtio.h  |  23 +-
>  include/linux/virtio_ring.h |  35 +++
>  lib/Makefile|   1 +
>  lib/dma-noop.c  |  75 +++
>  tools/virtio/linux/dma-mapping.h|  17 ++
>  18 files changed, 568 insertions(+), 244 deletions(-)
>  create mode 100644 lib/dma-noop.c
>  create mode 100644 tools/virtio/linux/dma-mapping.h
> 
> -- 
> 2.5.0



Re: [Xen-devel] [PATCH v5 00/10] virtio DMA API, yet again

2016-01-31 Thread Christoph Hellwig
On Fri, Jan 29, 2016 at 11:01:00AM +, David Woodhouse wrote:
> Also, wasn't Christoph looking at making per-device DMA ops more
> generic instead of an 'archdata' thing on basically every platform? Or
> did I just imagine that part?

What I've done for 4.5 is to switch all architectures to use DMA ops.
This should make it fairly easy to have a generic dma ops pointer in the
device structure, but I have no need for that yet, and thus no short-term
plans to do that work myself.



[Xen-devel] [RFC Design Doc] Add vNVDIMM support for Xen

2016-01-31 Thread Haozhong Zhang
Hi,

The following document describes the design of adding vNVDIMM support
for Xen. Any comments are welcome.

Thanks,
Haozhong


Content
===
1. Background
 1.1 Access Mechanisms: Persistent Memory and Block Window
 1.2 ACPI Support
  1.2.1 NFIT
  1.2.2 _DSM and _FIT
 1.3 Namespace
 1.4 clwb/clflushopt/pcommit
2. NVDIMM/vNVDIMM Support in Linux Kernel/KVM/QEMU
 2.1 NVDIMM Driver in Linux Kernel
 2.2 vNVDIMM Implementation in KVM/QEMU
3. Design of vNVDIMM in Xen
 3.1 Guest clwb/clflushopt/pcommit Enabling
 3.2 Address Mapping
  3.2.1 My Design
  3.2.2 Alternative Design
 3.3 Guest ACPI Emulation
  3.3.1 My Design
  3.3.2 Alternative Design 1: switching to QEMU
  3.3.3 Alternative Design 2: keeping in Xen
References


Non-Volatile DIMM or NVDIMM is a type of RAM device that provides
persistent storage and retains data across reboot and even power
failures. This document describes the design to support virtual NVDIMM
devices or vNVDIMM in Xen. 

The rest of this document is organized as below.
 - Section 1 briefly introduces the background knowledge of NVDIMM
   hardware, which is used by other parts of this document.

 - Section 2 briefly introduces the current/future NVDIMM/vNVDIMM
   support in Linux kernel/KVM/QEMU. They will affect the vNVDIMM
   design in Xen.

 - Section 3 proposes design details of vNVDIMM in Xen. Several
   alternatives are also listed in this section.



1. Background

1.1 Access Mechanisms: Persistent Memory and Block Window

 NVDIMM provides two access mechanisms: byte-addressable persistent
 memory (pmem) and block window (pblk). An NVDIMM can contain multiple
 ranges and each range can be accessed through either pmem or pblk
 (but not both).

 Byte-addressable persistent memory mechanism (pmem) maps NVDIMM or
 ranges of NVDIMM into the system physical address (SPA) space, so
 that software can access NVDIMM via normal memory loads and
 stores. If a virtual address is used, the MMU will translate it to
 the physical address.

 In the virtualization circumstance, we can pass through a pmem range,
 or part of one, to a guest by mapping it in EPT (i.e. mapping the
 guest vNVDIMM physical address to the host NVDIMM physical address),
 so that guest accesses go directly to the host NVDIMM device without
 the hypervisor's interception.

 Block window mechanism (pblk) provides one or multiple block windows
 (BW).  Each BW is composed of a command register, a status register
 and an 8 KB aperture register. Software fills in the direction of the
 transfer (read/write), the start address (LBA) and the size of the
 transfer on the NVDIMM. If nothing goes wrong, the transferred data
 can be read/written via the aperture register, and the status and
 errors of the transfer can be read from the status register. Other
 vendor-specific commands and status can be implemented for BW as
 well. Details of the block window access mechanism can be found in [3].
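
 (As a rough illustration, the per-BW register set described above
 amounts to something like the following sketch; the struct and field
 names here are ours, not the NVDIMM spec's:

    #include <stdint.h>

    /* One block window: register-sized command/status, plus a fixed
     * 8 KB aperture through which the data moves. */
    struct block_window {
        volatile uint64_t command;        /* direction, LBA, size */
        volatile uint64_t status;         /* completion, error bits */
        volatile uint8_t  aperture[8192]; /* 8 KB data window */
    };
 )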

 In the virtualization circumstance, different pblk regions on a
 single NVDIMM device may be accessed by different guests, so the
 hypervisor needs to emulate BW, which would introduce a high overhead
 for I/O intensive workload.

 Therefore, we are going to only implement pmem for vNVDIMM. The rest
 of this document will mostly concentrate on pmem.


1.2 ACPI Support

 ACPI provides two factors of support for NVDIMM. First, NVDIMM
 devices are described by firmware (BIOS/EFI) to OS via ACPI-defined
 NVDIMM Firmware Interface Table (NFIT). Second, several functions of
 NVDIMM, including operations on namespace labels, S.M.A.R.T and
 hotplug, are provided by ACPI methods (_DSM and _FIT).

1.2.1 NFIT

 NFIT is a new system description table added in ACPI v6 with
 signature "NFIT". It contains a set of structures.

 - System Physical Address Range Structure
   (SPA Range Structure)

   SPA range structure describes system physical address ranges
   occupied by NVDIMMs and types of regions.

   If Address Range Type GUID field of a SPA range structure is "Byte
   Addressable Persistent Memory (PM) Region", then the structure
   describes a NVDIMM region that is accessed via pmem. The System
   Physical Address Range Base and Length fields describe the start
   system physical address and the length that is occupied by that
   NVDIMM region.

   A SPA range structure is identified by a non-zero SPA range
   structure index.

   Note: [1] reserves E820 type 7: OSPM must comprehend this memory as
         having non-volatile attributes and handle it distinct from
         conventional volatile memory (in Table 15-312 of [1]). The
         memory region supports byte-addressable non-volatility. E820
         type 12 (OEM defined) may also be used for legacy NVDIMM
         prior to ACPI v6.

   Note: Besides the OS, EFI firmware may also parse NFIT for booting
         drives (Section 9.3.6.9 of [5]).
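
   (For concreteness, Linux's ACPICA headers lay this structure out
   roughly as below; see struct acpi_nfit_system_address in
   include/acpi/actbl1.h. u8/u16/u32/u64 are the kernel's fixed-width
   integer types, and this is a sketch, not the normative spec text:

      struct acpi_nfit_system_address {
          struct acpi_nfit_header header;  /* structure type and length */
          u16 range_index;                 /* non-zero SPA range index */
          u16 flags;
          u32 reserved;
          u32 proximity_domain;            /* NUMA locality */
          u8  range_guid[16];              /* address range type GUID */
          u64 address;                     /* SPA range base */
          u64 length;                      /* SPA range length */
          u64 memory_mapping;              /* memory mapping attribute */
      };
   )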

 - Memory Device to System Physical Address Range Mapping Structure
   (Range Mapping Structure)

   An NVDIMM region described by a SPA range structure can 

Re: [Xen-devel] [PATCH v4 00/10] Add VMX TSC scaling support

2016-01-31 Thread Haozhong Zhang
Hi Jan,

On 01/18/16 05:58, Haozhong Zhang wrote:
> This patchset adds support for VMX TSC scaling feature which is
> available on Intel Skylake Server CPU. The specification of VMX TSC
> scaling can be found at
> http://www.intel.com/content/www/us/en/processors/timestamp-counter-scaling-virtualization-white-paper.html
> 
> VMX TSC scaling allows the guest TSC, which is read by guest rdtsc(p)
> instructions, to increase at a rate that is customized by the hypervisor
> and can be different from the host TSC frequency. Basically, VMX TSC
> scaling adds a 64-bit field called TSC multiplier in VMCS so that, if
> VMX TSC scaling is enabled, TSC read by guest rdtsc(p) instructions
> will be calculated by the following formula:
> 
>   guest EDX:EAX = ((Host TSC * TSC multiplier) >> 48) + VMX TSC Offset
> 
> where, Host TSC = Host MSR_IA32_TSC + Host MSR_IA32_TSC_ADJUST.
> 
> If the destination host supports VMX TSC scaling, this patchset allows
> guest programs in a HVM container in the default TSC mode or PVRDTSCP
> (native_paravirt) TSC mode to observe the same TSC frequency across
> the migration.
> 
> Changes in v4:
>  * v3 patch 1&2 have been committed so they are not included in v4.
>  * v3 patch 11 "x86/hvm: Detect TSC scaling through hvm_funcs" is merged
>early into v4 patch 4 "x86/hvm: Collect information of TSC scaling ratio".
>  * v4 patch 1 - 8 correspond to v3 patch 3 - 10.
>v4 patch 9 - 10 correspond to v3 patch 12 - 13.
>  * Other changes are logged in each patch respectively.
> 
> Changes in v3:
>  * v2 patch 1&2 have been merged so they do not appear in v3.
>  * Patch 1 - 6 correspond to v2 patch 3 - 8. Patch 7 is new.
>Patch 8 - 13 correspond to v2 patch 9 - 14.
>  * Other changes are logged in each patch respectively.
> 
> Changes in v2:
>  * Remove unnecessary v1 patch 1&13.
>  * Add and move all bug-fix patches to the beginning of this series.
>(Patch 1 - 6)
>  * Update changes in tsc_set_info() and tsc_get_info() to make both
>functions consistent with each other. (Patch 2 - 4)
>  * Move a part of scaling logic out of [vmx|svm]_set_tsc_offset().
>(Patch 7)
>  * Remove redundant hvm_funcs.tsc_scaling_ratio_rsvd. (Patch 8)
>  * Reimplement functions that calculate TSC ratio and scale TSC.
>(Patch 9&10)
>  * Merge setting VMX TSC multiplier into patch 13.
>  * Move initialing tsc_scaling_ratio in VMX ahead to
>vmx_vcpu_initialise() so as to make construct_vmcs() naturally
>use this field instead of a constant. (Patch 13)
>  * Update documents related to tsc_mode.
>  * Other code cleanup and style fixes.
> 
> Haozhong Zhang (10):
>   x86/hvm: Scale host TSC when setting/getting guest TSC
>   x86/time.c: Scale host TSC in pvclock properly
>   svm: Remove redundant TSC scaling in svm_set_tsc_offset()
>   x86/hvm: Collect information of TSC scaling ratio
>   x86: Add functions for 64-bit integer arithmetic
>   x86/hvm: Setup TSC scaling ratio
>   x86/hvm: Replace architecture TSC scaling by a common function
>   x86/hvm: Move saving/loading vcpu's TSC to common code
>   vmx: Add VMX RDTSC(P) scaling support
>   docs: Add descriptions of TSC scaling in xl.cfg and tscmode.txt
> 
>  docs/man/xl.cfg.pod.5  | 14 +++-
>  docs/misc/tscmode.txt  | 21 
>  xen/arch/x86/hvm/hvm.c | 69 --
>  xen/arch/x86/hvm/svm/svm.c | 36 +---
>  xen/arch/x86/hvm/vmx/vmcs.c| 12 +--
>  xen/arch/x86/hvm/vmx/vmx.c | 25 +++---
>  xen/arch/x86/time.c| 38 -
>  xen/include/asm-x86/hvm/hvm.h  | 20 +++
>  xen/include/asm-x86/hvm/svm/svm.h  |  3 --
>  xen/include/asm-x86/hvm/vcpu.h |  2 ++
>  xen/include/asm-x86/hvm/vmx/vmcs.h |  7 
>  xen/include/asm-x86/math64.h   | 47 ++
>  12 files changed, 242 insertions(+), 52 deletions(-)
>  create mode 100644 xen/include/asm-x86/math64.h
> 
> -- 
> 2.7.0
> 

I notice that the first three of this patch series have been
committed. Any comments on other ones?

Thanks,
Haozhong
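
(Spelled out, the formula quoted above takes a 128-bit product of the
host TSC and the multiplier, keeps bits 48 and up, then adds the TSC
offset. A minimal sketch with illustrative names, not Xen's actual
code:

    #include <stdint.h>

    static uint64_t guest_tsc(uint64_t host_tsc, uint64_t tsc_multiplier,
                              uint64_t tsc_offset)
    {
        /* 128-bit intermediate so the 64x64 multiply cannot overflow */
        unsigned __int128 product =
            (unsigned __int128)host_tsc * tsc_multiplier;

        return (uint64_t)(product >> 48) + tsc_offset;
    }
)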



Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.

2016-01-31 Thread Jan Beulich
>>> On 30.01.16 at 15:38,  wrote:

> On 1/30/2016 12:33 AM, Jan Beulich wrote:
> On 29.01.16 at 11:45,  wrote:
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -940,6 +940,8 @@ static int hvm_ioreq_server_alloc_rangesets(struct 
>>> hvm_ioreq_server *s,
>>>   {
>>>   unsigned int i;
>>>   int rc;
>>> +unsigned int max_wp_ram_ranges =
>>> +s->domain->arch.hvm_domain.params[HVM_PARAM_MAX_WP_RAM_RANGES];
>>
>> You're still losing the upper 32 bits here. Iirc you agreed to range
>> check the value before storing into params[]...
> 
> Thanks, Jan. :)
> In this version, the check is added in routine parse_config_data().
> If option 'max_wp_ram_ranges' is configured with an unreasonable value,
> the xl will terminate, before calling xc_hvm_param_set(). Does this
> change meet your requirement? Or maybe did I have some misunderstanding
> on this issue?

Checking in the tools is desirable, but the hypervisor shouldn't rely
on any tool side checking.

Jan
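
(A sketch of the kind of hypervisor-side check meant here, with a
hypothetical name and bound; the point is to reject out-of-range values
at set time rather than silently truncating them at use time:

    #include <stdint.h>
    #include <errno.h>

    #define MAX_WP_RAM_RANGES_BOUND 8192ULL   /* hypothetical limit */

    static int check_max_wp_ram_ranges(uint64_t value)
    {
        /* refuse anything a later unsigned int read-back of params[]
         * could not represent faithfully */
        if ( value == 0 || value > MAX_WP_RAM_RANGES_BOUND )
            return -EINVAL;

        return 0;
    }
)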




Re: [Xen-devel] [PATCH RFC] Remove PV superpage support (v1).

2016-01-31 Thread Jan Beulich
>>> On 01.02.16 at 05:54,  wrote:
> On 29/01/16 17:46, Jan Beulich wrote:
> On 29.01.16 at 17:26,  wrote:
>>> On Fri, Jan 29, 2016 at 09:00:15AM -0700, Jan Beulich wrote:
>>> On 29.01.16 at 16:30,  wrote:
> I am hoping the maintainers can guide me in how they would like:
>  - Deal with documentation? I removed the allowsuperpage from 
> documentation
>but perhaps it should just mention deprecated?

 Since you delete the option, deleting it from doc seems fine to me.

>  - I left put_superpage as put_page_from_l2e calls it - but I can't see
>how the _PAGE_PSE bit would be set as you won't be able to even
>put that bit on (after my patch). Perhaps just make it an
>ASSERT in put_page_from_l2e?

 No, that can't be done. Did you check why it is the way it is
 (having to do with the alternative initial P2M placement, which
 does use superpages despite the guest itself not being able to
 create any)?
>>>
>>> If I recall - this was done for P2M array when put in a different virtual
>>> address space? And this being only the initial domain - would this ..
>>> Oh this can be done for normal guests as well I presume?
>> 
>> Iirc Jürgen enabled this for DomU too not so long ago.
> 
> I did. The Xen tools won't use superpages for the p2m array, however.

That's unfortunate. Is there a reason for this?

Jan



Re: [Xen-devel] [PATCH v10 3/9] libxc: allow creating domains without emulated devices.

2016-01-31 Thread Olaf Hering
On Mon, Dec 07, Roger Pau Monne wrote:

> Introduce a new flag in xc_dom_image that turns on and off the emulated
> devices. This prevents creating the VGA hole, the hvm_info page and the
> ioreq server pages. libxl unconditionally sets it to true for all HVM
> domains at the moment.

> @@ -1428,8 +1434,9 @@ static int meminit_hvm(struct xc_dom_image *dom)
>   * Under 2MB mode, we allocate pages in batches of no more than 8MB to 
>   * ensure that we can be preempted and hence dom0 remains responsive.
>   */
> -rc = xc_domain_populate_physmap_exact(
> -xch, domid, 0xa0, 0, memflags, &dom->p2m_host[0x00]);
> +if ( dom->device_model )
> +rc = xc_domain_populate_physmap_exact(
> +xch, domid, 0xa0, 0, memflags, &dom->p2m_host[0x00]);

I think this causes a build failure when building tools/ with -O1: 

[  149s] xc_dom_x86.c: In function 'meminit_hvm':
[  149s] xc_dom_x86.c:1262: error: 'rc' may be used uninitialized in this 
function
[  149s] make[4]: *** [xc_dom_x86.o] Error 1
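
(One plausible fix, sketched here rather than taken from any commit:
give rc a defined value on the path where no device model is present,
e.g.

    int rc = 0;  /* success when the VGA hole is intentionally skipped */

    if ( dom->device_model )
        rc = xc_domain_populate_physmap_exact(
            xch, domid, 0xa0, 0, memflags, &dom->p2m_host[0x00]);

which would also satisfy -O1's may-be-uninitialized analysis.)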


Olaf



[Xen-devel] [PATCH] x86: shrink 'struct domain', was already PAGE_SIZE

2016-01-31 Thread Corneliu ZUZU
The X86 domain structure already occupied PAGE_SIZE (4096).

Looking @ the memory layout of the structure, we could see that
overall most was occupied by (used the pahole tool on domain.o):
 * sizeof(domain.arch) = sizeof(arch_domain) = 3328 bytes.
 * sizeof(domain.arch.hvm_domain) = 2224 bytes.
 * sizeof(domain.arch.hvm_domain.pl_time) = 1088 bytes.
This patch attempts to free some space, by making the pl_time
field in hvm_domain dynamically allocated.
We xzalloc/xfree it @ hvm_domain_initialise/hvm_domain_destroy.

After this change, the domain structure shrank by 1152 bytes (>1K!).

Signed-off-by: Corneliu ZUZU 
---
 xen/arch/x86/hvm/hpet.c  |  5 ++---
 xen/arch/x86/hvm/hvm.c   |  9 -
 xen/arch/x86/hvm/pmtimer.c   | 18 +-
 xen/arch/x86/hvm/rtc.c   |  5 ++---
 xen/arch/x86/hvm/vpt.c   | 10 +-
 xen/arch/x86/time.c  |  2 +-
 xen/include/asm-x86/hvm/domain.h |  2 +-
 xen/include/asm-x86/hvm/vpt.h|  3 +++
 8 files changed, 31 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 5e020ae..de5c088 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -26,10 +26,9 @@
 #include 
 #include 
 
-#define domain_vhpet(x) (&(x)->arch.hvm_domain.pl_time.vhpet)
+#define domain_vhpet(x) (&(x)->arch.hvm_domain.pl_time->vhpet)
 #define vcpu_vhpet(x)   (domain_vhpet((x)->domain))
-#define vhpet_domain(x) (container_of((x), struct domain, \
-  arch.hvm_domain.pl_time.vhpet))
+#define vhpet_domain(x) (container_of((x), struct pl_time, vhpet)->domain)
 #define vhpet_vcpu(x)   (pt_global_vcpu_target(vhpet_domain(x)))
 
 #define HPET_BASE_ADDRESS   0xfed0ULL
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 674feea..cc667ae 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1574,13 +1574,18 @@ int hvm_domain_initialise(struct domain *d)
 if ( rc != 0 )
 goto fail0;
 
+d->arch.hvm_domain.pl_time = xzalloc(struct pl_time);
 d->arch.hvm_domain.params = xzalloc_array(uint64_t, HVM_NR_PARAMS);
 d->arch.hvm_domain.io_handler = xzalloc_array(struct hvm_io_handler,
   NR_IO_HANDLERS);
 rc = -ENOMEM;
-if ( !d->arch.hvm_domain.params || !d->arch.hvm_domain.io_handler )
+if ( !d->arch.hvm_domain.pl_time ||
+ !d->arch.hvm_domain.params  || !d->arch.hvm_domain.io_handler )
 goto fail1;
 
+/* need link to containing domain */
+d->arch.hvm_domain.pl_time->domain = d;
+
 /* Set the default IO Bitmap. */
 if ( is_hardware_domain(d) )
 {
@@ -1637,6 +1642,7 @@ int hvm_domain_initialise(struct domain *d)
 xfree(d->arch.hvm_domain.io_bitmap);
 xfree(d->arch.hvm_domain.io_handler);
 xfree(d->arch.hvm_domain.params);
+xfree(d->arch.hvm_domain.pl_time);
  fail0:
 hvm_destroy_cacheattr_region_list(d);
 return rc;
@@ -1667,6 +1673,7 @@ void hvm_domain_destroy(struct domain *d)
 {
 xfree(d->arch.hvm_domain.io_handler);
 xfree(d->arch.hvm_domain.params);
+xfree(d->arch.hvm_domain.pl_time);
 
 hvm_destroy_cacheattr_region_list(d);
 
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 9c2e4bd..b1a5565 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -66,7 +66,7 @@ static void pmt_update_sci(PMTState *s)
 
 void hvm_acpi_power_button(struct domain *d)
 {
-PMTState *s = &d->arch.hvm_domain.pl_time.vpmt;
+PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
 
 if ( !has_vpm(d) )
 return;
@@ -79,7 +79,7 @@ void hvm_acpi_power_button(struct domain *d)
 
 void hvm_acpi_sleep_button(struct domain *d)
 {
-PMTState *s = &d->arch.hvm_domain.pl_time.vpmt;
+PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
 
 if ( !has_vpm(d) )
 return;
@@ -152,7 +152,7 @@ static int handle_evt_io(
 int dir, unsigned int port, unsigned int bytes, uint32_t *val)
 {
 struct vcpu *v = current;
-PMTState *s = &v->domain->arch.hvm_domain.pl_time.vpmt;
+PMTState *s = &v->domain->arch.hvm_domain.pl_time->vpmt;
 uint32_t addr, data, byte;
 int i;
 
@@ -215,7 +215,7 @@ static int handle_pmt_io(
 int dir, unsigned int port, unsigned int bytes, uint32_t *val)
 {
 struct vcpu *v = current;
-PMTState *s = &v->domain->arch.hvm_domain.pl_time.vpmt;
+PMTState *s = &v->domain->arch.hvm_domain.pl_time->vpmt;
 
 if ( bytes != 4 )
 {
@@ -251,7 +251,7 @@ static int handle_pmt_io(
 
 static int pmtimer_save(struct domain *d, hvm_domain_context_t *h)
 {
-PMTState *s = &d->arch.hvm_domain.pl_time.vpmt;
+PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
 uint32_t x, msb = s->pm.tmr_val & TMR_VAL_MSB;
 int rc;
 
@@ -282,7 +282,7 @@ static int pmtimer_save(struct domain *d, hvm_domain_context_t *h)
 
 static int pmtimer_load(struct domain *d, hvm_domain_context_t *h)
 {
-PMTState *s = 

Re: [Xen-devel] [PATCH v4 00/10] Add VMX TSC scaling support

2016-01-31 Thread Jan Beulich
>>> On 01.02.16 at 06:50,  wrote:
> I notice that the first three of this patch series have been
> committed. Any comments on other ones?

They haven't been forgotten, but I didn't get around to look at
(and perhaps commit) them yet.

Jan




[Xen-devel] [linux-mingo-tip-master test] 79547: regressions - FAIL

2016-01-31 Thread osstest service owner
flight 79547 linux-mingo-tip-master real [real]
http://logs.test-lab.xenproject.org/osstest/logs/79547/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-rumpuserxen        6 xen-build            fail REGR. vs. 60684
 build-amd64-rumpuserxen       6 xen-build            fail REGR. vs. 60684
 test-amd64-amd64-xl-credit2  15 guest-localmigrate   fail REGR. vs. 60684
 test-amd64-amd64-xl-xsm      15 guest-localmigrate   fail REGR. vs. 60684
 test-amd64-amd64-xl          15 guest-localmigrate   fail REGR. vs. 60684
 test-amd64-amd64-libvirt-xsm 15 guest-saverestore.2  fail REGR. vs. 60684
 test-amd64-amd64-xl-multivcpu 15 guest-localmigrate  fail REGR. vs. 60684
 test-amd64-amd64-pair  22 guest-migrate/dst_host/src_host fail REGR. vs. 60684
 test-amd64-amd64-libvirt 15 guest-saverestore.2     fail REGR. vs. 60684
 test-amd64-i386-xl-qemuu-debianhvm-amd64 20 leak-check/check fail REGR. vs. 60684

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds 15 guest-localmigrate      fail REGR. vs. 60684
 test-amd64-amd64-libvirt-pair 22 guest-migrate/dst_host/src_host fail blocked in 60684
 test-amd64-i386-libvirt  15 guest-saverestore.2     fail blocked in 60684
 test-amd64-i386-xl-xsm   15 guest-localmigrate      fail blocked in 60684
 test-amd64-i386-libvirt-xsm  15 guest-saverestore.2 fail blocked in 60684
 test-amd64-i386-xl       15 guest-localmigrate      fail blocked in 60684
 test-amd64-i386-libvirt-pair 22 guest-migrate/dst_host/src_host fail blocked in 60684
 test-amd64-i386-pair  22 guest-migrate/dst_host/src_host fail blocked in 60684
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop   fail like 60684

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)  blocked  n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)    blocked  n/a
 test-amd64-amd64-xl-pvh-intel 14 guest-saverestore   fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start          fail  never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail  never pass
 test-amd64-amd64-libvirt 12 migrate-support-check    fail  never pass
 test-amd64-amd64-qemuu-nested-amd 13 xen-boot/l1     fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail  never pass
 test-amd64-i386-libvirt  12 migrate-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check fail  never pass
 test-amd64-amd64-qemuu-nested-intel 13 xen-boot/l1   fail  never pass

version targeted for testing:
 linux                b90ae94db220e220366081b4c32303c846da6327
baseline version:
 linux                69f75ebe3b1d1e636c4ce0a0ee248edacc69cbe0

Last test of basis    60684  2015-08-13 04:21:46 Z  171 days
Failing since         60712  2015-08-15 18:33:48 Z  169 days  119 attempts
Testing same since    79547  2016-01-31 04:23:28 Z    0 days    1 attempts

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 build-amd64-rumpuserxen  fail
 build-i386-rumpuserxen   fail
 test-amd64-amd64-xl  fail
 test-amd64-i386-xl   fail
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm pass
 test-amd64-amd64-libvirt-xsm fail
 test-amd64-i386-libvirt-xsm  fail
 test-amd64-amd64-xl-xsm  fail
 test-amd64-i386-xl-xsm 

[Xen-devel] [PATCH v4] xen: sched: convert RTDS from time to event driven model

2016-01-31 Thread Tianyang Chen
v4 is meant for discussion on the addition of replq.

Changes since v3:
removed running queue.
added repl queue to keep track of repl events.
timer is now per scheduler.
timer is init on a valid cpu in a cpupool.

Bugs to be fixed: Cpupool and locks. When a pcpu is removed from a
pool and added to another, the lock equality assert in free_pdata()
fails when Pool-0 is using rtds.

This patch is based on master branch after commit 2e46e3
x86/mce: fix misleading indentation in init_nonfatal_mce_checker()

Signed-off-by: Tianyang Chen 
Signed-off-by: Meng Xu 
Signed-off-by: Dagaen Golomb 
---
 xen/common/sched_rt.c |  262 -
 1 file changed, 192 insertions(+), 70 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 2e5430f..c36e5de 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -16,6 +16,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -87,7 +88,7 @@
 #define RTDS_DEFAULT_BUDGET (MICROSECS(4000))
 
 #define UPDATE_LIMIT_SHIFT  10
-#define MAX_SCHEDULE(MILLISECS(1))
+
 /*
  * Flags
  */
@@ -142,6 +143,9 @@ static cpumask_var_t *_cpumask_scratch;
  */
 static unsigned int nr_rt_ops;
 
+/* handler for the replenishment timer */
+static void repl_handler(void *data);
+
 /*
  * Systme-wide private data, include global RunQueue/DepletedQ
  * Global lock is referenced by schedule_data.schedule_lock from all
@@ -152,7 +156,9 @@ struct rt_private {
 struct list_head sdom;  /* list of availalbe domains, used for dump */
 struct list_head runq;  /* ordered list of runnable vcpus */
 struct list_head depletedq; /* unordered list of depleted vcpus */
+struct list_head replq; /* ordered list of vcpus that need repl */
 cpumask_t tickled;  /* cpus been tickled */
+struct timer *repl_timer;   /* replenishment timer */
 };
 
 /*
@@ -160,6 +166,7 @@ struct rt_private {
  */
 struct rt_vcpu {
 struct list_head q_elem;/* on the runq/depletedq list */
+struct list_head p_elem;/* on the repl event list */
 
 /* Up-pointers */
 struct rt_dom *sdom;
@@ -213,8 +220,14 @@ static inline struct list_head *rt_depletedq(const struct scheduler *ops)
 return &rt_priv(ops)->depletedq;
 }
 
+static inline struct list_head *rt_replq(const struct scheduler *ops)
+{
+return &rt_priv(ops)->replq;
+}
+
 /*
- * Queue helper functions for runq and depletedq
+ * Queue helper functions for runq, depletedq
+ * and repl event q
  */
 static int
 __vcpu_on_q(const struct rt_vcpu *svc)
@@ -228,6 +241,18 @@ __q_elem(struct list_head *elem)
 return list_entry(elem, struct rt_vcpu, q_elem);
 }
 
+static struct rt_vcpu *
+__p_elem(struct list_head *elem)
+{
+return list_entry(elem, struct rt_vcpu, p_elem);
+}
+
+static int
+__vcpu_on_p(const struct rt_vcpu *svc)
+{
+   return !list_empty(&svc->p_elem);
+}
+
 /*
  * Debug related code, dump vcpu/cpu information
  */
@@ -387,6 +412,13 @@ __q_remove(struct rt_vcpu *svc)
 list_del_init(&svc->q_elem);
 }
 
+static inline void
+__p_remove(struct rt_vcpu *svc)
+{
+if ( __vcpu_on_p(svc) )
+list_del_init(&svc->p_elem);
+}
+
 /*
  * Insert svc with budget in RunQ according to EDF:
  * vcpus with smaller deadlines go first.
@@ -421,6 +453,32 @@ __runq_insert(const struct scheduler *ops, struct rt_vcpu 
*svc)
 }
 
 /*
+ * Insert svc into the repl event list:
+ * vcpus that need to be replenished earlier go first.
+ */
+static void
+__replq_insert(const struct scheduler *ops, struct rt_vcpu *svc)
+{
+struct rt_private *prv = rt_priv(ops);
+struct list_head *replq = rt_replq(ops);
+struct list_head *iter;
+
+ASSERT( spin_is_locked(&prv->lock) );
+
+ASSERT( !__vcpu_on_p(svc) );
+
+list_for_each(iter, replq)
+{
+struct rt_vcpu * iter_svc = __p_elem(iter);
+if ( svc->cur_deadline <= iter_svc->cur_deadline )
+break;
+}
+
+list_add_tail(&svc->p_elem, iter);
+}
+
+
+/*
  * Init/Free related code
  */
 static int
@@ -449,6 +507,7 @@ rt_init(struct scheduler *ops)
 INIT_LIST_HEAD(&prv->sdom);
 INIT_LIST_HEAD(&prv->runq);
 INIT_LIST_HEAD(&prv->depletedq);
+INIT_LIST_HEAD(&prv->replq);
 
 cpumask_clear(&prv->tickled);
 
@@ -473,6 +532,9 @@ rt_deinit(const struct scheduler *ops)
 xfree(_cpumask_scratch);
 _cpumask_scratch = NULL;
 }
+
+kill_timer(prv->repl_timer);
+
 xfree(prv);
 }
 
@@ -586,6 +648,7 @@ rt_alloc_vdata(const struct scheduler *ops, struct vcpu 
*vc, void *dd)
 return NULL;
 
 INIT_LIST_HEAD(&svc->q_elem);
+INIT_LIST_HEAD(&svc->p_elem);
 svc->flags = 0U;
 svc->sdom = dd;
 svc->vcpu = vc;
@@ -618,6 +681,10 @@ static void
 rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 {
 struct rt_vcpu *svc = rt_vcpu(vc);
+struct rt_private *prv = rt_priv(ops);
+struct timer *repl_timer;
+

Re: [Xen-devel] Clarifying PVH mode requirements

2016-01-31 Thread PGNet Dev

In any case, the 1st issue: prior to any guest being launched, simply adding


@ GRUB cfg

-GRUB_CMDLINE_XEN=" ..."
+GRUB_CMDLINE_XEN=" dom0pvh ..."


causes boot fail,

...
(XEN) [2016-01-31 19:28:09] d0v0 EPT violation 0x1aa (-w-/r-x) gpa 0x00f100054c mfn 0xf15
(XEN) [2016-01-31 19:28:09] d0v0 Walking EPT tables for GFN f1000:
(XEN) [2016-01-31 19:28:09] d0v0  epte 800845108107
(XEN) [2016-01-31 19:28:09] d0v0  epte 80085b680107
(XEN) [2016-01-31 19:28:09] d0v0  epte 800844af7107
(XEN) [2016-01-31 19:28:09] d0v0  epte 8050f1000905
(XEN) [2016-01-31 19:28:09] d0v0  --- GLA 0xc96a254c
(XEN) [2016-01-31 19:28:09] domain_crash called from vmx.c:2685
(XEN) [2016-01-31 19:28:09] Domain 0 (vcpu#0) crashed on cpu#0:
(XEN) [2016-01-31 19:28:09] [ Xen-4.6.0_08-405  x86_64  debug=n  Tainted:C ]
(XEN) [2016-01-31 19:28:09] CPU:0
(XEN) [2016-01-31 19:28:09] RIP:0010:[]
(XEN) [2016-01-31 19:28:09] RFLAGS: 00010246   CONTEXT: hvm guest (d0v0)
(XEN) [2016-01-31 19:28:09] rax: 000d   rbx: f100054c   rcx: 9e9f
(XEN) [2016-01-31 19:28:09] rdx:    rsi: 0100   rdi: 81e0
(XEN) [2016-01-31 19:28:09] rbp: 880164b57908   rsp: 880164b578d8   r8:  88016d88
(XEN) [2016-01-31 19:28:09] r9:  0241   r10:    r11: 0001
(XEN) [2016-01-31 19:28:09] r12: 0020   r13: 88016453aec0   r14: c96c
(XEN) [2016-01-31 19:28:09] r15: 880164b57a20   cr0: 80050033   cr4: 
(XEN) [2016-01-31 19:28:09] cr3: 01e0b000   cr2: 
(XEN) [2016-01-31 19:28:09] ds:    es:    fs:    gs:    ss:    cs: 0010
(XEN) [2016-01-31 19:28:09] Guest stack trace from rsp=880164b578d8:
(XEN) [2016-01-31 19:28:09]   Fault while accessing guest memory.
(XEN) [2016-01-31 19:28:09] Hardware Dom0 crashed: rebooting machine in 5 seconds.

...

Removing the dom0pvh gets me back up & running.





Re: [Xen-devel] [PATCH v7 12/18] tools/libx{l, c}: add back channel to libxc

2016-01-31 Thread Wen Congyang
On 01/30/2016 12:38 AM, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 29, 2016 at 01:27:28PM +0800, Wen Congyang wrote:
>> In COLO mode, both VMs are running, and are considered in sync if the
>> visible network traffic is identical.  After some time, they fall out of
>> sync.
>>
>> At this point, the two VMs have definitely diverged.  Lets call the
>> primary dirty bitmap set A, while the secondary dirty bitmap set B.
>>
>> Sets A and B are different.
>>
>> Under normal migration, the page data for set A will be sent from the
>> primary to the secondary.
>>
>> However, the set difference B - A (the one in B but not in A, lets
>> call this C) is out-of-date on the secondary (with respect to the
>> primary) and will not be sent by the primary (to secondary), as it
>> was not memory dirtied by the primary. The secondary needs C page data
>> to reconstruct an exact copy of the primary at the checkpoint.
>>
>> The secondary cannot calculate C as it doesn't know A.  Instead, the
>> secondary must send B to the primary, at which point the primary
>> calculates the union of A and B (lets call this D) which is all the
>> pages dirtied by both the primary and the secondary, and sends all page
>> data covered by D.
>>
>> In the general case, D is a superset of both A and B.  Without the
>> backchannel dirty bitmap, a COLO checkpoint can't reconstruct a valid
>> copy of the primary.
>>
>> We transfer the dirty bitmap on libxc side, so we need to introduce back
>> channel to libxc.
>>
>> Note: it is different from the paper. We change the original design to
>> the current one, according to our following concerns:
>> 1. The original design needs extra memory on Secondary host. When there's
>>multiple backups on one host, the memory cost is high.
>> 2. The memory cache code will be another 1k+, it will make the review
>>more time consuming.
>>
>> Note: the back channel will be used in the patch
>>  libxc/restore: send dirty pfn list to primary when checkpoint under COLO
>> to send dirty pfn list from secondary to primary. The patch is posted in
>> another series.
>>
>> Signed-off-by: Yang Hongyang 
>> Signed-off-by: Andrew Cooper 
>> CC: Ian Campbell 
>> CC: Ian Jackson 
>> CC: Wei Liu 
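As an illustration, the set arithmetic described in the commit message
reduces to two plain bitmap operations. A minimal sketch in C - the bitmap
layout and helper names are hypothetical, not the actual libxc code:

/* One bit per pfn; layout and names are hypothetical, not libxc's. */
#define BITS_PER_LONG   (8 * sizeof(unsigned long))
#define BITMAP_WORDS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* D = A | B: every page dirtied by either side since the last checkpoint;
 * the primary must send page data for all of D. */
static void bitmap_union(unsigned long *d, const unsigned long *a,
                         const unsigned long *b, unsigned long nr_pfns)
{
    unsigned long i;

    for ( i = 0; i < BITMAP_WORDS(nr_pfns); i++ )
        d[i] = a[i] | b[i];
}

/* C = B & ~A: pages only the secondary dirtied.  These are stale on the
 * secondary, yet plain migration would never resend them - hence the
 * back channel carrying B to the primary. */
static void bitmap_diff(unsigned long *c, const unsigned long *a,
                        const unsigned long *b, unsigned long nr_pfns)
{
    unsigned long i;

    for ( i = 0; i < BITMAP_WORDS(nr_pfns); i++ )
        c[i] = b[i] & ~a[i];
}

The need for the back channel falls out directly: the primary can only
compute D once it has received B from the secondary.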
> 
> It is a bit confusing to have 'back_fd' and then 'send_fd'. 
> 
> Could you change the 'send_fd' (in this patch) to be called 
> 'send_back_fd' so that the connection between:
>  tools/libxl: Add back channel to allow migration target send data back
> and this patch is clear?
> 
> Or perhaps also mention in the commit description that you are using
> the 'send_fd' provided by 'tools/libxl: Add back channel to allow
> migration target send data back'.

Before this series:
In libxl:
we have send_fd/recv_fd (libxl_domain_remus_start()), and only
restore_fd (libxl_domain_create_restore()).
In libxc:
we have io_fd (xc_domain_save()/xc_domain_restore()).
The fd in libxc is provided by libxl.

I think after this series, we can add the following fds (sketched below):
1. add a send_back_fd to libxl_domain_create_restore()
2. add a recv_fd to xc_domain_save()
3. add a send_back_fd to xc_domain_restore()
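
A rough sketch of where those three fds would sit - the prototypes below
are heavily abridged and hypothetical; the real functions take many more
parameters:

/* Hypothetical, abridged prototypes; the fds marked "new" are the ones
 * proposed above.  All other parameters are elided for brevity. */
int libxl_domain_create_restore(libxl_ctx *ctx,
                                libxl_domain_config *d_config,
                                uint32_t *domid,
                                int restore_fd,
                                int send_back_fd   /* new: item 1 */
                                /* , ... */);

int xc_domain_save(xc_interface *xch,
                   int io_fd,
                   int recv_fd        /* new: item 2 */
                   /* , uint32_t dom, ... */);

int xc_domain_restore(xc_interface *xch,
                      int io_fd,
                      int send_back_fd   /* new: item 3 */
                      /* , uint32_t dom, ... */);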

What about this?

Thanks
Wen Congyang

> 
> Otherwise: Reviewed-by: Konrad Rzeszutek Wilk 
> 
> 
> .
> 




___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [linux-3.14 test] 79562: regressions - FAIL

2016-01-31 Thread osstest service owner
flight 79562 linux-3.14 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/79562/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-rumpuserxen  6 xen-build fail REGR. vs. 78986
 build-amd64-rumpuserxen   6 xen-build fail REGR. vs. 78986

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 9 debian-hvm-install fail pass in 79452

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop    fail in 79452 like 78986
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 78986
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 78986

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux  757bcff73ad4726504a3f40d12a970a593249350
baseline version:
 linux  e9977508d75a36c78c2167800bc9d19d174f7585

Last test of basis    78986  2016-01-25 05:54:32 Z    6 days
Testing same since    79397  2016-01-29 06:14:12 Z    2 days    3 attempts


People who touched revisions under test:
  Acked-by: David Howells 
  Alexandra Yates 
  Andrew Morton 
  Andrey Ryabinin 
  Andy Gospodarek 
  Arnd Bergmann 
  Ashish Panwar 
  Ben Hutchings 
  Boqun Feng 
  Boris Ostrovsky 
  Catalin Marinas 
  Charles Keepax 
  Charles Ouyang 
  Christoffer Dall 
  Chunfeng Yun 
  Cong Wang 
  Corey Minyard 
  Dan Carpenter 
  Darren Hart 
  David Henningsson 
  David S. Miller 
  David Vrabel 
  Dmitry V. Levin 
  Dmitry Vyukov 
  Eric Dumazet 
  Eric W. Biederman 
  Evan Jones 
  Florian Westphal 
  Francesco Ruggeri 
  Francesco Ruggeri 
  Greg Kroah-Hartman 
  Guenter Roeck 
  H. Peter Anvin 
  H.J. Lu 
  Hannes Frederic Sowa 
  Helge Deller 
  Herbert Xu 
  Ido Schimmel 
  Ingo Molnar 
  Ivaylo Dimitrov 
  Jan Stancek 
  Jay Vosburgh 
  Jiri Kosina 
  Jiri Pirko 
  Johan Hovold 
  John Blackwood 
  Karl Heiss 
  Linus Torvalds 
  Mans Rullgard 
  Marc Zyngier 
  Marcelo Ricardo Leitner 
  Mario Kleiner 
  Mark Brown 
  Mathias Nyman 
  Michael Ellerman 
  Michael Neuling 
  Mikulas Patocka 
  Neal Cardwell 
  Nicolas Boichat 
  Nikesh Oswal 
  Oliver Freyermuth 
  Oliver Neukum 
  Ouyang Zhaowei (Charles) 
  Paul Mackerras 
  Paul Mackerras 
  Peter Zijlstra (Intel) 
  Richard 

Re: [Xen-devel] [PATCH RFC] Remove PV superpage support (v1).

2016-01-31 Thread Juergen Gross
On 29/01/16 17:46, Jan Beulich wrote:
>>>> On 29.01.16 at 17:26,  wrote:
>> On Fri, Jan 29, 2016 at 09:00:15AM -0700, Jan Beulich wrote:
>> >>> On 29.01.16 at 16:30,  wrote:
>>>> I am hoping the maintainers can guide me on how they would like me to:
>>>>  - Deal with documentation? I removed allowsuperpage from the
>>>>    documentation, but perhaps it should just be marked deprecated?
>>>
>>> Since you delete the option, deleting it from doc seems fine to me.
>>>
>>>>  - I left put_superpage in, as put_page_from_l2e calls it - but I can't
>>>>    see how the _PAGE_PSE bit could be set, since you won't even be able
>>>>    to turn that bit on (after my patch). Perhaps just make it an
>>>>    ASSERT in put_page_from_l2e?
>>>
>>> No, that can't be done. Did you check why it is the way it is
>>> (having to do with the alternative initial P2M placement, which
>>> does use superpages despite the guest itself not being able to
>>> create any)?
>>
>> If I recall, this was done for the P2M array when put in a different
>> virtual address space? And this being only the initial domain - would
>> this .. Oh, this can be done for normal guests as well, I presume?
> 
> Iirc Jürgen enabled this for DomU too not so long ago.

I did. The Xen tools won't use superpages for the p2m array, however.
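
For readers following along, a condensed sketch of the path under
discussion - simplified from memory, not the actual xen/arch/x86/mm.c -
showing why the superpage case has to stay a real call rather than an
ASSERT():

static int put_page_from_l2e(l2_pgentry_t l2e, unsigned long pfn)
{
    if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) || (l2e_get_pfn(l2e) == pfn) )
        return 1;

    if ( l2e_get_flags(l2e) & _PAGE_PSE )
    {
        /*
         * Not ASSERT()-able: guests can no longer create superpage
         * mappings once allowsuperpage is gone, but the initial P2M
         * placement may still have used them, and those mappings must
         * be dropped correctly when torn down.
         */
        put_superpage(l2e_get_pfn(l2e));
        return 0;
    }

    put_page_and_type(l2e_get_page(l2e));
    return 0;
}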


Juergen

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [PATCH] MAINTAINERS: cover non-x86 vm_event files

2016-01-31 Thread Razvan Cojocaru
This patch widens the covered path to xen/arch/*/vm_event.c, in order
to include ARM vm_event maintainership.

Signed-off-by: Razvan Cojocaru 
---
 MAINTAINERS | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 7c1bf82..b36d9be 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -362,7 +362,7 @@ F:  xen/common/vm_event.c
 F: xen/common/mem_access.c
 F: xen/arch/x86/hvm/event.c
 F: xen/arch/x86/monitor.c
-F: xen/arch/x86/vm_event.c
+F: xen/arch/*/vm_event.c
 F: tools/tests/xen-access
 
 VTPM
-- 
2.7.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [linux-linus test] 79450: regressions - FAIL

2016-01-31 Thread osstest service owner
flight 79450 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/79450/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-rumpuserxen   6 xen-build fail REGR. vs. 59254
 build-i386-rumpuserxen  6 xen-build fail REGR. vs. 59254
 test-armhf-armhf-xl-xsm   6 xen-boot  fail REGR. vs. 59254
 test-armhf-armhf-xl-cubietruck  6 xen-boot    fail REGR. vs. 59254
 test-armhf-armhf-xl   6 xen-boot  fail REGR. vs. 59254
 test-armhf-armhf-xl-credit2   6 xen-boot  fail REGR. vs. 59254
 test-amd64-i386-xl   15 guest-localmigrate    fail REGR. vs. 59254
 test-armhf-armhf-xl-multivcpu  6 xen-boot fail REGR. vs. 59254
 test-amd64-amd64-xl-xsm  15 guest-localmigrate    fail REGR. vs. 59254
 test-amd64-i386-xl-xsm   15 guest-localmigrate    fail REGR. vs. 59254
 test-amd64-amd64-xl  15 guest-localmigrate    fail REGR. vs. 59254
 test-amd64-amd64-xl-credit2  15 guest-localmigrate    fail REGR. vs. 59254
 test-amd64-amd64-xl-multivcpu 15 guest-localmigrate   fail REGR. vs. 59254
 test-amd64-i386-pair   22 guest-migrate/dst_host/src_host fail REGR. vs. 59254
 test-amd64-amd64-pair  22 guest-migrate/dst_host/src_host fail REGR. vs. 59254

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds 11 guest-start   fail REGR. vs. 59254
 test-amd64-amd64-xl-rtds 15 guest-localmigrate    fail REGR. vs. 59254
 test-armhf-armhf-xl-vhd   6 xen-boot    fail baseline untested
 test-amd64-amd64-libvirt-pair 22 guest-migrate/dst_host/src_host fail baseline untested
 test-amd64-i386-libvirt-pair 22 guest-migrate/dst_host/src_host fail baseline untested
 test-amd64-i386-libvirt  15 guest-saverestore.2  fail blocked in 59254
 test-amd64-i386-libvirt-xsm  15 guest-saverestore.2  fail blocked in 59254
 test-amd64-amd64-libvirt-xsm 15 guest-saverestore.2  fail blocked in 59254
 test-amd64-amd64-libvirt 15 guest-saverestore.2  fail blocked in 59254
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop  fail like 59254

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check    fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore    fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check    fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore    fail never pass
 test-armhf-armhf-libvirt 14 guest-saverestore    fail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore    fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check    fail   never pass
 test-amd64-amd64-xl-pvh-intel 14 guest-saverestore    fail  never pass
 test-amd64-i386-libvirt  12 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-intel 13 xen-boot/l1 fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 13 xen-boot/l1   fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-check    fail   never pass

version targeted for testing:
 linux  ad0b40fa944628d6f30b40266a599b285d70a266
baseline version:
 linux  45820c294fe1b1a9df495d57f40585ef2d069a39

Last test of basis    59254  2015-07-09 04:20:48 Z  206 days
Failing since         59348  2015-07-10 04:24:05 Z  205 days  138 attempts
Testing same since    79450  2016-01-30 04:29:49 Z    1 days    1 attempts


4001 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 

Re: [Xen-devel] [PATCH] MAINTAINERS: cover non-x86 vm_event files

2016-01-31 Thread Tamas K Lengyel
On Sun, Jan 31, 2016 at 2:07 AM, Razvan Cojocaru  wrote:

> This patch covers modifications to xen/arch/*/vm_event.c, in order
> to include ARM vm_event maintainership.
>
> Signed-off-by: Razvan Cojocaru 
>

Once vm_event.c is added to ARM:

Acked-by: Tamas K Lengyel 


> ---
>  MAINTAINERS | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 7c1bf82..b36d9be 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -362,7 +362,7 @@ F:  xen/common/vm_event.c
>  F: xen/common/mem_access.c
>  F: xen/arch/x86/hvm/event.c
>  F: xen/arch/x86/monitor.c
> -F: xen/arch/x86/vm_event.c
> +F: xen/arch/*/vm_event.c
>  F: tools/tests/xen-access
>
>  VTPM
> --
> 2.7.0
>
>
___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


[Xen-devel] [linux-3.10 test] 79465: regressions - FAIL

2016-01-31 Thread osstest service owner
flight 79465 linux-3.10 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/79465/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-rumpuserxen  6 xen-build fail REGR. vs. 78980
 build-amd64-rumpuserxen   6 xen-build fail REGR. vs. 78980

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail REGR. vs. 78980
 test-amd64-amd64-libvirt-vhd 16 guest-start/debian.repeat    fail   like 78923
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop  fail like 78980

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)   blocked n/a
 test-amd64-amd64-xl-pvh-intel 11 guest-start  fail  never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start  fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt  12 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt 12 migrate-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass

version targeted for testing:
 linux  e14ca734b547e3187713441909897aefdf4e4016
baseline version:
 linux  14b58660bc26be42d272f7fb0d153ed8fc0a0c4e

Last test of basis    78980  2016-01-25 04:55:41 Z    6 days
Testing same since    79398  2016-01-29 06:14:14 Z    2 days    2 attempts


People who touched revisions under test:
  Acked-by: David Howells 
  Alexandra Yates 
  Andrew Morton 
  Andrey Ryabinin 
  Arnd Bergmann 
  Ashish Panwar 
  Ben Hutchings 
  Boqun Feng 
  Boris Ostrovsky 
  Catalin Marinas 
  Charles Keepax 
  Charles Ouyang 
  Chunfeng Yun 
  Cong Wang 
  Corey Minyard 
  Dan Carpenter 
  Darren Hart 
  David Henningsson 
  David S. Miller 
  David Vrabel 
  Dmitry V. Levin 
  Dmitry Vyukov 
  Eric Dumazet 
  Eric W. Biederman 
  Evan Jones 
  Florian Westphal 
  Francesco Ruggeri 
  Francesco Ruggeri 
  Greg Kroah-Hartman 
  Guenter Roeck 
  H. Peter Anvin 
  H.J. Lu 
  Hannes Frederic Sowa 
  Helge Deller 
  Ido Schimmel 
  Ingo Molnar 
  Ivaylo Dimitrov 
  Jan Stancek 
  Jiri Kosina 
  Jiri Pirko 
  Johan Hovold 
  John Blackwood 
  Linus Torvalds 
  Marcelo Ricardo Leitner 
  Mario Kleiner 
  Mark Brown 
  Mathias Nyman 
  Michael Ellerman 
  Michael Neuling 
  Mikulas Patocka 
  Neal Cardwell 
  Nicolas Boichat 
  Nikesh Oswal 
  Oliver Freyermuth 
  Oliver Neukum 
  Ouyang Zhaowei (Charles) 
  Paul Mackerras 
  Paul Mackerras 
  Peter Zijlstra (Intel) 
  Richard Purdie 
  Sachin Pandhare 
  Steven Rostedt 
  Takashi Iwai 
  Thomas Gleixner 
  Tony Camuso 
  Ulrich Weigand 
  Vijay Pandurangan 
  Vinod Koul