[Xen-devel] [xen-4.9-testing test] 116409: trouble: broken/fail/pass

2017-11-21 Thread osstest service owner
flight 116409 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116409/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-xsm  broken
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm broken
 test-xtf-amd64-amd64-2   broken
 test-amd64-amd64-pygrub  broken
 test-amd64-i386-qemuu-rhel6hvm-amd broken
 test-amd64-i386-qemuu-rhel6hvm-amd  4 host-install(4)  broken REGR. vs. 116234
 test-amd64-i386-xl-qemut-debianhvm-amd64  broken in 116378
 test-amd64-amd64-livepatch   broken  in 116378
 test-xtf-amd64-amd64-5   broken  in 116378
 test-amd64-i386-migrupgrade  broken  in 116378

Tests which are failing intermittently (not blocking):
 test-amd64-i386-migrupgrade 5 host-install/dst_host(5) broken in 116378 pass in 116409
 test-xtf-amd64-amd64-5 4 host-install(4) broken in 116378 pass in 116409
 test-amd64-amd64-livepatch 4 host-install(4) broken in 116378 pass in 116409
 test-amd64-i386-xl-qemut-debianhvm-amd64 4 host-install(4) broken in 116378 pass in 116409
 test-xtf-amd64-amd64-2 4 host-install(4) broken pass in 116378
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 4 host-install(4) broken pass in 116378
 test-amd64-amd64-xl-xsm 4 host-install(4) broken pass in 116378
 test-amd64-amd64-pygrub 4 host-install(4) broken pass in 116378
 test-armhf-armhf-xl-credit2 16 guest-start/debian.repeat fail in 116378 pass in 116409
 test-armhf-armhf-xl-vhd 6 xen-install fail pass in 116378
 test-amd64-i386-xl-qemuu-ws16-amd64 16 guest-localmigrate/x10 fail pass in 116378

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail blocked in 116234
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail in 116378 like 116220
 test-amd64-amd64-xl-qemuu-ws16-amd64 14 guest-localmigrate fail in 116378 like 116234
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail in 116378 like 116234
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail in 116378 like 116234
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail in 116378 never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail in 116378 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 116220
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116220
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 116220
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116234
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail like 116234
 test-amd64-amd64-xl-rtds 10 debian-install fail like 116234
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail like 116234
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 tes

[Xen-devel] [linux-linus test] 116398: regressions - trouble: broken/fail/pass

2017-11-21 Thread osstest service owner
flight 116398 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116398/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd broken
 test-amd64-amd64-xl-xsm  broken
 test-amd64-amd64-libvirt-pair broken
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm broken
 test-amd64-amd64-xl-multivcpu broken
 test-amd64-i386-qemuu-rhel6hvm-amd broken
 test-amd64-amd64-xl-xsm 4 host-install(4) broken REGR. vs. 115643
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 4 host-install(4) broken REGR. vs. 115643
 test-amd64-i386-qemuu-rhel6hvm-amd 4 host-install(4) broken REGR. vs. 115643
 test-amd64-i386-qemut-rhel6hvm-amd 4 host-install(4) broken REGR. vs. 115643
 test-amd64-amd64-libvirt-pair 5 host-install/dst_host(5) broken REGR. vs. 115643
 test-amd64-amd64-xl-multivcpu 4 host-install(4) broken REGR. vs. 115643
 test-amd64-amd64-xl-qemuu-win10-i386 7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-pygrub 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-freebsd10-amd64 7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-qemut-debianhvm-amd64 7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-pvhv2-amd 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-freebsd10-i386 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-pair 10 xen-boot/src_host fail REGR. vs. 115643
 test-amd64-i386-pair 11 xen-boot/dst_host fail REGR. vs. 115643
 test-amd64-i386-xl-xsm 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-libvirt-xsm 7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-qcow2 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-rumprun-i386 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-raw 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-libvirt-qcow2 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-examine 8 reboot fail REGR. vs. 115643
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-credit2 7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-i386-pvgrub 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-qemuu-ovmf-amd64 7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-qemut-win7-amd64 7 xen-boot fail REGR. vs. 115643

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail like 115643
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 115643
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 115643
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 115643
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail like 115643
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115643
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail like 115643
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 115643
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2  

[Xen-devel] [PATCH 21/30] xen: deprecate pci_get_bus_and_slot()

2017-11-21 Thread Sinan Kaya
pci_get_bus_and_slot() is restrictive in that it assumes the PCI device
is present in domain 0. This prevents device drivers from being reused
for other domain numbers.

Use pci_get_domain_bus_and_slot() with a domain number of 0 where we
can't extract the domain number. Elsewhere, use the actual domain number
from the device.
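
In short, the conversion pattern (sketched here; the variable names are
illustrative only) is:

    /* Before: domain 0 is implicitly assumed. */
    pcidev = pci_get_bus_and_slot(bus, devfn);

    /* After: the domain is explicit; pass 0 only where it genuinely
     * cannot be recovered from the device. */
    pcidev = pci_get_domain_bus_and_slot(domain, bus, devfn);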

Signed-off-by: Sinan Kaya 
---
 drivers/pci/xen-pcifront.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index 8fc2e95..94b25b5 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -595,6 +595,7 @@ static pci_ers_result_t pcifront_common_process(int cmd,
struct pci_driver *pdrv;
int bus = pdev->sh_info->aer_op.bus;
int devfn = pdev->sh_info->aer_op.devfn;
+   int domain = pdev->sh_info->aer_op.domain;
struct pci_dev *pcidev;
int flag = 0;
 
@@ -603,7 +604,7 @@ static pci_ers_result_t pcifront_common_process(int cmd,
cmd, bus, devfn);
result = PCI_ERS_RESULT_NONE;
 
-   pcidev = pci_get_bus_and_slot(bus, devfn);
+   pcidev = pci_get_domain_bus_and_slot(domain, bus, devfn);
if (!pcidev || !pcidev->driver) {
dev_err(&pdev->xdev->dev, "device or AER driver is NULL\n");
pci_dev_put(pcidev);
-- 
1.9.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [xen-4.7-testing test] 116399: regressions - trouble: broken/fail/pass

2017-11-21 Thread osstest service owner
flight 116399 xen-4.7-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116399/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt broken
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm broken
 test-amd64-amd64-amd64-pvgrub broken
 test-xtf-amd64-amd64-2 broken
 test-amd64-amd64-rumprun-amd64 broken in 116377
 test-amd64-amd64-libvirt broken in 116377
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm broken in 116377
 test-amd64-amd64-xl-qcow2 broken in 116377
 test-xtf-amd64-amd64-3 49 xtf/test-hvm64-lbr-tsx-vmentry fail REGR. vs. 116348
 build-armhf-xsm 6 xen-build fail in 116377 REGR. vs. 116348

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 4 host-install(4) broken in 116377 pass in 116399
 test-amd64-amd64-xl-qcow2 4 host-install(4) broken in 116377 pass in 116399
 test-amd64-amd64-libvirt 4 host-install(4) broken in 116377 pass in 116399
 test-amd64-amd64-rumprun-amd64 4 host-install(4) broken in 116377 pass in 116399
 test-xtf-amd64-amd64-2 4 host-install(4) broken pass in 116377
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 4 host-install(4) broken pass in 116377
 test-amd64-i386-libvirt 4 host-install(4) broken pass in 116377
 test-amd64-amd64-amd64-pvgrub 4 host-install(4) broken pass in 116377

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 1 build-check(1) blocked in 116377 n/a
 test-armhf-armhf-xl-xsm 1 build-check(1) blocked in 116377 n/a
 test-armhf-armhf-xl-rtds 12 guest-start fail in 116377 like 116219
 test-xtf-amd64-amd64-2 49 xtf/test-hvm64-lbr-tsx-vmentry fail in 116377 like 116348
 test-xtf-amd64-amd64-1 49 xtf/test-hvm64-lbr-tsx-vmentry fail in 116377 like 116348
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail in 116377 like 116348
 test-amd64-i386-libvirt 13 migrate-support-check fail in 116377 never pass
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 116321
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 116321
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail like 116321
 test-xtf-amd64-amd64-5 49 xtf/test-hvm64-lbr-tsx-vmentry fail like 116348
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116348
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail like 116348
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 116348
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail like 116348
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116348
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 116348
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail like 116348
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail like 116348
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 116348
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-

Re: [Xen-devel] Xen PV seems to be broken on Linus' tree

2017-11-21 Thread Andy Lutomirski
On Tue, Nov 21, 2017 at 8:11 PM, Andy Lutomirski  wrote:
> On Tue, Nov 21, 2017 at 7:33 PM, Andy Lutomirski  wrote:
>> I'm doing:
>>
>> /usr/bin/qemu-system-x86_64 -machine accel=kvm:tcg -cpu host -net none
>> -nographic -kernel xen-4.8.2 -initrd './arch/x86/boot/bzImage' -m 2G
>> -smp 2 -append console=com1
>>
>> With Linus' commit c8a0739b185d11d6e2ca7ad9f5835841d1cfc765 and the
>> attached config.
>>
>> It dies with a bunch of sensible log lines and then:
>>
>> (XEN) d0v0 Unhandled invalid opcode fault/trap [#6, ec=]
>> (XEN) domain_crash_sync called from entry.S: fault at 82d08023961a
>> entry.o#create_bounce_frame+0x137/0x146
>> (XEN) Domain 0 (vcpu#0) crashed on cpu#0:
>> (XEN) [ Xen-4.8.2  x86_64  debug=n   Not tainted ]
>> (XEN) CPU:0
>> (XEN) RIP:e033:[]
>> (XEN) RFLAGS: 0296   EM: 1   CONTEXT: pv guest (d0v0)
>> (XEN) rax: 002f   rbx: 81e65a48   rcx: 81e71288
>> (XEN) rdx: 81e27500   rsi: 0001   rdi: 81133f88
>> (XEN) rbp:    rsp: 81e03e78   r8:  
>> (XEN) r9:  0001   r10:    r11: 
>> (XEN) r12:    r13: 0001   r14: 0001
>> (XEN) r15:    cr0: 8005003b   cr4: 003506e0
>> (XEN) cr3: 7b0b3000   cr2: 
>> (XEN) ds:    es:    fs:    gs:    ss: e02b   cs: e033
>> (XEN) Guest stack trace from rsp=81e03e78:
>> (XEN)81e71288  811226eb 0001e030
>> (XEN)00010096 81e03eb8 e02b 811226eb
>> (XEN)81122c2e 0200  
>> (XEN)0030 81c69cf5 81080b20 81080560
>> (XEN) 810d3741 8107b420 81094660
>>
>> Is this familiar?
>>
>> I'll feel really dumb if it ends up being my fault.
>
> Nah, it's broken at least back to v4.13, and I suspect it's config
> related.  objdump gives me this:
>
> 8112b0e1:   e9 e8 fe ff ff          jmpq   8112afce
> 8112b0e6:   48 c7 c6 2d f8 c8 81    mov    $0x81c8f82d,%rsi
> 8112b0ed:   48 c7 c7 58 b9 c8 81    mov    $0x81c8b958,%rdi
> 8112b0f4:   e8 13 2d 01 00          callq  8113de0c
> 8112b0f9:   0f ff                   (bad)   <-- crash here
>
> That's "ud0", which is used by WARN.  So we're probably hitting an
> early warning and Xen probably has something busted with early
> exception handling.
>
> Anyone want to debug it and fix it?

Well, I think I debugged it.  x86_64 has a shiny function
idt_setup_early_handler(), and Xen doesn't call it.  Fixing the
problem may be as simple as calling it at an appropriate time and
doing whatever asm magic is needed to deal with Xen's weird IDT
calling convention.
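
A minimal sketch of that idea (the placement below is an assumption for
illustration, not a tested fix; the asm glue for Xen's IDT calling
convention is the part left out):

    /* arch/x86/xen/enlighten_pv.c -- hypothetical placement */
    asmlinkage __visible void __init xen_start_kernel(void)
    {
        /* ... existing early PV setup ... */

        /* Install the early exception handlers so that an early
         * WARN/BUG is reported instead of killing dom0 with an
         * unhandled invalid-opcode fault. */
        idt_setup_early_handler();

        /* ... continue booting ... */
    }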

--Andy

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Xen PV seems to be broken on Linus' tree

2017-11-21 Thread Andy Lutomirski
On Tue, Nov 21, 2017 at 7:33 PM, Andy Lutomirski  wrote:
> I'm doing:
>
> /usr/bin/qemu-system-x86_64 -machine accel=kvm:tcg -cpu host -net none
> -nographic -kernel xen-4.8.2 -initrd './arch/x86/boot/bzImage' -m 2G
> -smp 2 -append console=com1
>
> With Linus' commit c8a0739b185d11d6e2ca7ad9f5835841d1cfc765 and the
> attached config.
>
> It dies with a bunch of sensible log lines and then:
>
> (XEN) d0v0 Unhandled invalid opcode fault/trap [#6, ec=]
> (XEN) domain_crash_sync called from entry.S: fault at 82d08023961a
> entry.o#create_bounce_frame+0x137/0x146
> (XEN) Domain 0 (vcpu#0) crashed on cpu#0:
> (XEN) [ Xen-4.8.2  x86_64  debug=n   Not tainted ]
> (XEN) CPU:0
> (XEN) RIP:e033:[]
> (XEN) RFLAGS: 0296   EM: 1   CONTEXT: pv guest (d0v0)
> (XEN) rax: 002f   rbx: 81e65a48   rcx: 81e71288
> (XEN) rdx: 81e27500   rsi: 0001   rdi: 81133f88
> (XEN) rbp:    rsp: 81e03e78   r8:  
> (XEN) r9:  0001   r10:    r11: 
> (XEN) r12:    r13: 0001   r14: 0001
> (XEN) r15:    cr0: 8005003b   cr4: 003506e0
> (XEN) cr3: 7b0b3000   cr2: 
> (XEN) ds:    es:    fs:    gs:    ss: e02b   cs: e033
> (XEN) Guest stack trace from rsp=81e03e78:
> (XEN)81e71288  811226eb 0001e030
> (XEN)00010096 81e03eb8 e02b 811226eb
> (XEN)81122c2e 0200  
> (XEN)0030 81c69cf5 81080b20 81080560
> (XEN) 810d3741 8107b420 81094660
>
> Is this familiar?
>
> I'll feel really dumb if it ends up being my fault.

Nah, it's broken at least back to v4.13, and I suspect it's config
related.  objdump gives me this:

8112b0e1:   e9 e8 fe ff ff          jmpq   8112afce
8112b0e6:   48 c7 c6 2d f8 c8 81    mov    $0x81c8f82d,%rsi
8112b0ed:   48 c7 c7 58 b9 c8 81    mov    $0x81c8b958,%rdi
8112b0f4:   e8 13 2d 01 00          callq  8113de0c
8112b0f9:   0f ff                   (bad)   <-- crash here

That's "ud0", which is used by WARN.  So we're probably hitting an
early warning and Xen probably has something busted with early
exception handling.
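
A self-contained illustration of the pattern that ends up at such an
instruction (nothing Xen-specific here; 'cond' is a stand-in condition):

    #include <linux/bug.h>

    /* On x86, WARN_ON() compiles to a ud0 (0f ff) plus a __bug_table
     * entry; the invalid-opcode handler normally turns it into a
     * printed warning.  With no early #UD handler installed, the same
     * ud0 is a fatal fault, which is what Xen reported above. */
    static void example(bool cond)
    {
        WARN_ON(cond);  /* reaches the 0f ff instruction when cond is true */
    }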

Anyone want to debug it and fix it?

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] Xen PV seems to be broken on Linus' tree

2017-11-21 Thread Andy Lutomirski
I'm doing:

/usr/bin/qemu-system-x86_64 -machine accel=kvm:tcg -cpu host -net none
-nographic -kernel xen-4.8.2 -initrd './arch/x86/boot/bzImage' -m 2G
-smp 2 -append console=com1

With Linus' commit c8a0739b185d11d6e2ca7ad9f5835841d1cfc765 and the
attached config.

It dies with a bunch of sensible log lines and then:

(XEN) d0v0 Unhandled invalid opcode fault/trap [#6, ec=]
(XEN) domain_crash_sync called from entry.S: fault at 82d08023961a
entry.o#create_bounce_frame+0x137/0x146
(XEN) Domain 0 (vcpu#0) crashed on cpu#0:
(XEN) [ Xen-4.8.2  x86_64  debug=n   Not tainted ]
(XEN) CPU:0
(XEN) RIP:e033:[]
(XEN) RFLAGS: 0296   EM: 1   CONTEXT: pv guest (d0v0)
(XEN) rax: 002f   rbx: 81e65a48   rcx: 81e71288
(XEN) rdx: 81e27500   rsi: 0001   rdi: 81133f88
(XEN) rbp:    rsp: 81e03e78   r8:  
(XEN) r9:  0001   r10:    r11: 
(XEN) r12:    r13: 0001   r14: 0001
(XEN) r15:    cr0: 8005003b   cr4: 003506e0
(XEN) cr3: 7b0b3000   cr2: 
(XEN) ds:    es:    fs:    gs:    ss: e02b   cs: e033
(XEN) Guest stack trace from rsp=81e03e78:
(XEN)81e71288  811226eb 0001e030
(XEN)00010096 81e03eb8 e02b 811226eb
(XEN)81122c2e 0200  
(XEN)0030 81c69cf5 81080b20 81080560
(XEN) 810d3741 8107b420 81094660

Is this familiar?

I'll feel really dumb if it ends up being my fault.


.config
Description: Binary data
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [linux-linus bisection] complete test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm

2017-11-21 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm
testid xen-boot

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_gnulib git://git.sv.gnu.org/gnulib.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux 
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
  Bug introduced:  0192f17529fa3f8d78ca0181a2b2aaa7cbb0784d
  Bug not present: 0b07194bb55ed836c2cc7c22e866b87a14681984
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/116428/


  (Revision log too long, omitted.)


For bisection revision-tuple graph see:
 http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-linus/test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/linux-linus/test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm.xen-boot
 --summary-out=tmp/116428.bisection-summary --basis-template=115643 
--blessings=real,real-bisect linux-linus 
test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm xen-boot
Searching for failure / basis pass:
 116343 fail [host=pinot1] / 116316 [host=nobling1] 116268 [host=italia0] 
116226 [host=huxelrebe1] 116136 [host=fiano1] 116119 [host=elbling0] 116103 
[host=baroque0] 115718 [host=godello0] 115690 [host=huxelrebe0] 115678 
[host=merlot1] 115643 [host=merlot0] 115628 [host=chardonnay0] 115615 
[host=nobling0] 115599 [host=nocera0] 115573 [host=elbling1] 115543 
[host=pinot0] 115487 [host=chardonnay1] 115475 [host=italia1] 115469 
[host=fiano0] 115459 [host=italia0] 115438 [host=huxelrebe1] 115414 
[host=godello1] 115387 ok.
Failure / basis pass flights: 116343 / 115387
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_gnulib git://git.sv.gnu.org/gnulib.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest ce3b585bc4cec7db88da010651fe9fad15bf7173 
5e9abf87163ad4aeaefef0b02961f8674b0a4879 
7bf5710b22aa8d58b7eeaaf3dc6960c26cade4f0 
0192f17529fa3f8d78ca0181a2b2aaa7cbb0784d 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
c8ea0457495342c417c3dc033bba25148b279f60 
b79708a8ed1b3d18bee67baeaf33b3fa529493e2 
b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
Basis pass dc162adb9094b5f0e5c847ce2da726b7ab5e2068 
5e9abf87163ad4aeaefef0b02961f8674b0a4879 
7bf5710b22aa8d58b7eeaaf3dc6960c26cade4f0 
0b07194bb55ed836c2cc7c22e866b87a14681984 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
c8ea0457495342c417c3dc033bba25148b279f60 
5cd7ce5dde3f228b3b669ed9ca432f588947bd40 
24fb44e971a62b345c7b6ca3c03b454a1e150abe
Generating revisions with ./adhoc-revtuple-generator  
git://xenbits.xen.org/libvirt.git#dc162adb9094b5f0e5c847ce2da726b7ab5e2068-ce3b585bc4cec7db88da010651fe9fad15bf7173
 
git://git.sv.gnu.org/gnulib.git#5e9abf87163ad4aeaefef0b02961f8674b0a4879-5e9abf87163ad4aeaefef0b02961f8674b0a4879
 
https://gitlab.com/keycodemap/keycodemapdb.git#7bf5710b22aa8d58b7eeaaf3dc6960c26cade4f0-7bf5710b22aa8d58b7eeaaf3dc6960c26cade4f0
 
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git#0b07194bb55ed836c2cc7c22e866b87a14681984-0192f17529fa3f8d78ca0181a2b2aaa7cbb0784d
 
git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
 
git://xenbits.xen.org/qemu-xen-traditional.git#c8ea0457495342c417c3dc033bba25148b279f60-c8ea0457495342c417c3dc033bba25148b279f60
 
git://xenbits.xen.org/qemu-xen.git#5cd7ce5dde3f228b3b669ed9ca432f588947bd40-b79708a8ed1b3d18bee67baeaf33b3fa529493e2
 
git://xenbits.xen.org/xen.git#24fb44e971a62b345c7b6ca3c03b454a1e150abe-b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
adhoc-revtuple-generator: tree discontiguous: linux-2.6
Loaded 3006 nodes in revision graph
Searching for test results:
 115321 [host=nobling1]
 115302 [host=nobling0]
 115338 [host=elbling0]
 115353 [host=baroque0]
 115387 pass dc162adb9094b5f0e5c847ce2da726b7ab5e2068 
5e9abf87163ad4aeaefef0b02961f8674b0a4879 
7bf5710b22aa8d58b7eeaaf3dc6960c26cade4f0 
0b07194bb55ed836c2cc7c22e866b87a14681984 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
c8ea0457495342c417c3dc033bba25148b279f60 
5cd7ce5dde3f228b3b669ed9ca432f58

Re: [Xen-devel] [RFC v2 7/7] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver

2017-11-21 Thread Goel, Sameer


On 11/20/2017 7:25 AM, Julien Grall wrote:
> Hi Sameer,
> 
> On 19/11/17 07:45, Goel, Sameer wrote:
>> On 10/12/2017 10:36 AM, Julien Grall wrote:
>>>> +
>>>> +typedef paddr_t phys_addr_t;
>>>> +typedef paddr_t dma_addr_t;
>>>> +
>>>> +/* Alias to Xen device tree helpers */
>>>> +#define device_node dt_device_node
>>>> +#define of_phandle_args dt_phandle_args
>>>> +#define of_device_id dt_device_match
>>>> +#define of_match_node dt_match_node
>>>> +#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, out))
>>>> +#define of_property_read_bool dt_property_read_bool
>>>> +#define of_parse_phandle_with_args dt_parse_phandle_with_args
>>>> +#define mutex spinlock_t
>>>> +#define mutex_init spin_lock_init
>>>> +#define mutex_lock spin_lock
>>>> +#define mutex_unlock spin_unlock
>>>
>>> mutex and spinlock are not the same. The former is sleeping whilst the
>>> latter is not.
>>>
>>> Can you please explain why this is fine and possibly add that in a comment?
>>>
>> The mutex is used to protect access to the smmu device's internal data
>> structures when setting up the s2 config and installing STEs for a given
>> device in Linux. The STE programming operation can be comparatively long,
>> but in the current testing I did not see it blocking for too long. I will
>> put in a comment.
> 
> Well, I don't think that this is a justification. You tested on one
> platform and did not explain how you performed the tests.
>
> If I understand correctly, that mutex is only used when assigning a
> device, so it might be ok to switch to a spinlock. But that's not because
> the operation is short; it's because it would only be performed by the
> toolstack (domctl) and will not be issued by a guest.
Ok. I agree and will update the comment.
> 
>>
>>>> +
>>>> +/* Xen: Helpers to get device MMIO and IRQs */
>>>> +struct resource {
>>>> +    u64 addr;
>>>> +    u64 size;
>>>> +    unsigned int type;
>>>> +};
>>>
>>> Likely we want a compat header for defining Linux helpers. This would avoid 
>>> replicating it everywhere.
>> Agreed.
>>
> That should be
>>>
>>>> +
>>>> +#define resource_size(res) ((res)->size)
>>>> +
>>>> +#define platform_device device
>>>> +
>>>> +#define IORESOURCE_MEM 0
>>>> +#define IORESOURCE_IRQ 1
>>>> +
>>>> +static struct resource *platform_get_resource(struct platform_device *pdev,
>>>> +  unsigned int type,
>>>> +  unsigned int num)
>>>> +{
>>>> +    /*
>>>> + * The resource is only used between 2 calls of platform_get_resource.
>>>> + * It's quite ugly but it avoids adding too much code to the part
>>>> + * imported from Linux
>>>> + */
>>>> +    static struct resource res;
>>>> +    struct acpi_iort_node *iort_node;
>>>> +    struct acpi_iort_smmu_v3 *node_smmu_data;
>>>> +    int ret = 0;
>>>> +
>>>> +    res.type = type;
>>>> +
>>>> +    switch (type) {
>>>> +    case IORESOURCE_MEM:
>>>> +    if (pdev->type == DEV_ACPI) {
>>>> +    ret = 1;
>>>> +    iort_node = pdev->acpi_node;
>>>> +    node_smmu_data =
>>>> +    (struct acpi_iort_smmu_v3 *)iort_node->node_data;
>>>> +
>>>> +    if (node_smmu_data != NULL) {
>>>> +    res.addr = node_smmu_data->base_address;
>>>> +    res.size = SZ_128K;
>>>> +    ret = 0;
>>>> +    }
>>>> +    } else {
>>>> +    ret = dt_device_get_address(dev_to_dt(pdev), num,
>>>> +    &res.addr, &res.size);
>>>> +    }
>>>> +
>>>> +    return ((ret) ? NULL : &res);
>>>> +
>>>> +    case IORESOURCE_IRQ:
>>>> +    ret = platform_get_irq(dev_to_dt(pdev), num);
>>>
>>> No IRQ for ACPI?
>> For IRQs the code calls platform_get_irq_byname. So, the IORESOURCE_IRQ 
>> implementation is not needed at all. (DT or ACPI)
> 
> Please document it then.
Ok.
> 
> [...]
> 
>>>
>>>> +    udelay(sleep_us); \
>>>> +    } \
>>>> +    (cond) ? 0 : -ETIMEDOUT; \
>>>> +})
>>>> +
>>>> +#define readl_relaxed_poll_timeout(addr, val, cond, delay_us, timeout_us) \
>>>> +    readx_poll_timeout(readl_relaxed, addr, val, cond, delay_us, timeout_us)
 +
>>>> +/* Xen: Helpers for IRQ functions */
>>>> +#define request_irq(irq, func, flags, name, dev) request_irq(irq, flags, func, name, dev)
>>>> +#define free_irq release_irq
>>>> +
>>>> +enum irqreturn {
>>>> +    IRQ_NONE    = (0 << 0),
>>>> +    IRQ_HANDLED    = (1 << 0),
>>>> +};
>>>> +
>>>> +typedef enum irqreturn irqreturn_t;
>>>> +
>>>> +/* Device logger functions */
>>>> +#define dev_print(dev, lvl, fmt, ...)    \
>>>> + printk(lvl "smmu: " fmt, ## __VA_ARGS__)
>>>> +
>>>> +#define dev_dbg(dev, fmt, ...) dev_print(dev, XENLOG_DEBUG, fmt, ## __VA_ARGS__)
>>>> +#define dev_notice(dev, fmt, ...) dev_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__)
>>>> +#define dev_warn(dev, fmt, ...) dev_print(dev, XENLOG_WARNING, fmt, ## __VA_ARGS__)
>>>> +#define dev_err

[Xen-devel] [distros-debian-snapshot test] 72475: tolerable FAIL

2017-11-21 Thread Platform Team regression test user
flight 72475 distros-debian-snapshot real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72475/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-i386-weekly-netinst-pygrub 10 debian-di-install fail like 72445
 test-amd64-amd64-amd64-weekly-netinst-pygrub 10 debian-di-install fail like 72445
 test-amd64-amd64-amd64-current-netinst-pygrub 10 debian-di-install fail like 72445
 test-amd64-i386-amd64-weekly-netinst-pygrub 10 debian-di-install fail like 72445
 test-armhf-armhf-armhf-daily-netboot-pygrub 10 debian-di-install fail like 72445
 test-amd64-i386-amd64-daily-netboot-pygrub 10 debian-di-install fail like 72445
 test-amd64-amd64-amd64-daily-netboot-pvgrub 10 debian-di-install fail like 72445
 test-amd64-amd64-i386-daily-netboot-pygrub 10 debian-di-install fail like 72445
 test-amd64-i386-i386-daily-netboot-pvgrub 10 debian-di-install fail like 72445
 test-amd64-i386-i386-weekly-netinst-pygrub 10 debian-di-install fail like 72445
 test-amd64-i386-i386-current-netinst-pygrub 10 debian-di-install fail like 72445
 test-amd64-i386-amd64-current-netinst-pygrub 10 debian-di-install fail like 72445
 test-amd64-amd64-i386-current-netinst-pygrub 10 debian-di-install fail like 72445

baseline version:
 flight   72445

jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-amd64-daily-netboot-pvgrub  fail
 test-amd64-i386-i386-daily-netboot-pvgrubfail
 test-amd64-i386-amd64-daily-netboot-pygrub   fail
 test-armhf-armhf-armhf-daily-netboot-pygrub  fail
 test-amd64-amd64-i386-daily-netboot-pygrub   fail
 test-amd64-amd64-amd64-current-netinst-pygrubfail
 test-amd64-i386-amd64-current-netinst-pygrub fail
 test-amd64-amd64-i386-current-netinst-pygrub fail
 test-amd64-i386-i386-current-netinst-pygrub  fail
 test-amd64-amd64-amd64-weekly-netinst-pygrub fail
 test-amd64-i386-amd64-weekly-netinst-pygrub  fail
 test-amd64-amd64-i386-weekly-netinst-pygrub  fail
 test-amd64-i386-i386-weekly-netinst-pygrub   fail



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [linux-4.9 test] 116395: trouble: broken/fail/pass

2017-11-21 Thread osstest service owner
flight 116395 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116395/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-pvhv2-amd broken
 test-amd64-i386-xl-qemuu-debianhvm-amd64 broken
 test-amd64-i386-libvirt-xsm broken
 test-amd64-i386-libvirt-xsm 4 host-install(4) broken REGR. vs. 116332
 test-amd64-amd64-xl-pvhv2-amd 4 host-install(4) broken REGR. vs. 116332
 test-amd64-i386-xl-qemuu-debianhvm-amd64 4 host-install(4) broken REGR. vs. 116332

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116332
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 116332
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116332
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 116332
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail like 116332
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux                563c24f65f4fb009047cf4702dd16c7c592fd2b2
baseline version:
 linux                ea88d5c5f41140cd531dab9cf718282b10996235

Last test of basis   116332  2017-11-19 08:43:09 Z   2 days
Testing same since   116395  2017-11-21 09:03:03 Z   0 days   1 attempts


People who touched revisions under test:
  Aaron Brown 
  Aaron Sierra 
  Alan Stern 
  Alexander Duyck 
  Alexandre Belloni 
  Alexey Khoroshilov 
  Andrew Bowers 
  Andrew Gabbasov 
  Andrey Konovalov 
  Ard Biesheuvel 
  Arvind Yadav 
  Bernhard Rosenkraenzer 
  Borislav Petkov 
  Chanwoo Choi 
  Chris J Arges 
  Daniel Bristot de Oliveira 
  Daniel Vetter 
  David S. Miller 
  Dick Kennedy 
  Dmitry V. Levin 
  Don Skidmore 
  Douglas Fischer 
  Emil T

[Xen-devel] [seabios test] 116396: regressions - trouble: broken/fail/pass

2017-11-21 Thread osstest service owner
flight 116396 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116396/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64 broken
 test-amd64-i386-qemuu-rhel6hvm-amd broken in 116373
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail REGR. vs. 115539

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd 4 host-install(4) broken in 116373 pass in 116396
 test-amd64-i386-xl-qemuu-debianhvm-amd64 4 host-install(4) broken pass in 116373

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115539
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 115539
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 115539
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass

version targeted for testing:
 seabios  df46d10c8a7b88eb82f3ceb2aa31782dee15593d
baseline version:
 seabios  0ca6d6277dfafc671a5b3718cbeb5c78e2a888ea

Last test of basis   115539  2017-11-03 20:48:58 Z   18 days
Failing since        115733  2017-11-10 17:19:59 Z   11 days   18 attempts
Testing same since   116211  2017-11-16 00:20:45 Z    5 days    8 attempts


People who touched revisions under test:
  Kevin O'Connor 
  Stefan Berger 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-qemuu-nested-amdfail
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 broken  
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-ws16-amd64 fail
 test-amd64-i386-xl-qemuu-ws16-amd64  fail
 test-amd64-amd64-xl-qemuu-win10-i386 fail
 test-amd64-i386-xl-qemuu-win10-i386  fail
 test-amd64-amd64-qemuu-nested-intel  pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 broken
broken-step test-amd64-i386-xl-qemuu-debianhvm-amd64 host-install(4)
broken-job test-amd64-i386-qemuu-rhel6hvm-amd broken

Not pushing.


commit df46d10c8a7b88eb82f3ceb2aa31782dee15593d
Author: Stefan Berger 
Date:   Tue Nov 14 15:03:47 2017 -0500

tpm: Add support for TPM2 ACPI table

Add support for the TPM2 ACPI table. If we find it and its
of the appropriate size, we can get the log_area_start_address
and log_area_minimum_size from it.

The latest version of the spec can be found here:

https://trustedcomputinggroup.org/tcg-acpi-specification/

Signed-off-by: Stefan Berger 
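
    For orientation, the table layout being parsed is roughly the
    following (a sketch from the TCG ACPI specification; the field and
    type names here are assumptions, not copied from the SeaBIOS patch):

        struct tpm2_acpi_table {
            struct acpi_table_header hdr;   /* signature "TPM2" */
            u16 platform_class;
            u16 reserved;
            u64 control_area_pa;
            u32 start_method;
            u8  start_method_params[12];
            u32 log_area_minimum_size;      /* LAML; larger tables only */
            u64 log_area_start_address;     /* LASA; larger tables only */
        };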

commit 0541f2f0f246e77d7c726926976920e8072d1119
Author: Kevin O'Connor 
Date:   Fri N

Re: [Xen-devel] [PATCH] mini-os: add config item for printing via hypervisor

2017-11-21 Thread Samuel Thibault
Hello,

Juergen Gross, on Tue 21 Nov 2017 16:15:09 +0100, wrote:
> Today Mini-OS will print all console output via the hypervisor, too.
> 
> Make this behavior configurable instead and default it to "off".

> -/* Copies all print output to the Xen emergency console apart
> -   of standard dom0 handled console */

Please keep this comment somewhere, probably here:

>  CONFIG_BALLOON ?= n
> +CONFIG_USE_XEN_CONSOLE ?= n

apart from that,

Acked-by: Samuel Thibault 

Samuel
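
For reference, re-enabling the output at build time would then
presumably be a one-liner, assuming the knob lands in Config.mk as
proposed above:

    make CONFIG_USE_XEN_CONSOLE=y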

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [linux-3.18 test] 116394: trouble: broken/fail/pass

2017-11-21 Thread osstest service owner
flight 116394 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116394/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-raw broken
 test-amd64-amd64-xl-xsm broken
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 broken
 test-amd64-amd64-xl-qemuu-ws16-amd64 broken
 test-amd64-i386-qemuu-rhel6hvm-amd broken
 test-amd64-amd64-xl-qemuu-ws16-amd64 4 host-install(4) broken REGR. vs. 116308
 test-amd64-i386-xl-raw 4 host-install(4) broken REGR. vs. 116308
 test-amd64-amd64-xl-xsm 4 host-install(4) broken REGR. vs. 116308
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 4 host-install(4) broken REGR. vs. 116308
 test-amd64-i386-qemuu-rhel6hvm-amd 4 host-install(4) broken REGR. vs. 116308

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 116308
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 116308
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 116308
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 116308
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116308
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116308
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail like 116308
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 14 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux                c35c375efa4e2c832946a04e83155f928135e8f6
baseline version:
 linux                2f95dcc30a114dae9c0e54cdb451050e9261fdc6

Last test of basis   116308  2017-11-18 10:20:01 Z   3 days
Testing same since   116394  2017-11-21 08:20:33 Z   0 days   1 attempts


People who touched revisions under test:
  Aaron Brown 
  Aaron Sierra 
  Alan Stern 
  Ale

[Xen-devel] [xen-unstable test] 116388: regressions - trouble: blocked/broken/fail/pass

2017-11-21 Thread osstest service owner
flight 116388 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116388/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64  broken
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm   broken
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm broken
 build-amd64   4 host-install(4)broken REGR. vs. 116214
 test-armhf-armhf-xl-xsm  broken  in 116366
 test-armhf-armhf-xl-credit2  17 guest-start.2  fail in 116337 REGR. vs. 116214

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-xsm 4 host-install(4) broken in 116366 pass in 116388
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 4 host-install(4) broken pass in 116366
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 4 host-install(4) broken pass in 116366
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-saverestore fail in 116366 pass in 116337
 test-armhf-armhf-libvirt-raw 7 xen-boot fail in 116366 pass in 116388
 test-armhf-armhf-xl-credit2 16 guest-start/debian.repeat fail pass in 116337
 test-armhf-armhf-xl 6 xen-install fail pass in 116366

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-i386 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-intel 1 build-check(1) blocked n/a
 test-xtf-amd64-amd64-1 1 build-check(1) blocked n/a
 test-amd64-amd64-migrupgrade 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win10-i386 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win10-i386 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-multivcpu 1 build-check(1) blocked n/a
 test-amd64-i386-examine 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-win10-i386 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-pair 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win10-i386 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1) blocked n/a
 test-amd64-amd64-amd64-pvgrub 1 build-check(1) blocked n/a
 test-xtf-amd64-amd64-2 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked n/a
 build-amd64-rumprun 1 build-check(1) blocked n/a
 test-amd64-i386-xl 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair 1 build-check(1) blocked n/a
 test-amd64-i386-livepatch 1 build-check(1) blocked n/a
 test-xtf-amd64-amd64-4 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-credit2 1 build-check(1) blocked n/a
 build-amd64-libvirt 1 build-check(1) blocked n/a
 test-amd64-amd64-rumprun-amd64 1 build-check(1) blocked n/a
 test-xtf-amd64-amd64-3 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-xl-raw 1 build-check(1) blocked n/a
 test-amd64-amd64-i386-pvgrub 1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64 1 build-check(1) blocked n/a
 test-xtf-amd64-amd64-5 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-chec

Re: [Xen-devel] [PATCH 13/16] SUPPORT.md: Add secondary memory management features

2017-11-21 Thread Andrew Cooper
On 13/11/17 15:41, George Dunlap wrote:
> Signed-off-by: George Dunlap 
> ---
> CC: Ian Jackson 
> CC: Wei Liu 
> CC: Andrew Cooper 
> CC: Jan Beulich 
> CC: Stefano Stabellini 
> CC: Konrad Wilk 
> CC: Tim Deegan 
> CC: Tamas K Lengyel 
> ---
>  SUPPORT.md | 31 +++
>  1 file changed, 31 insertions(+)
>
> diff --git a/SUPPORT.md b/SUPPORT.md
> index 0f7426593e..3e352198ce 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -187,6 +187,37 @@ Export hypervisor coverage data suitable for analysis by 
> gcov or lcov.
>  
>  Status: Supported
>  
> +### Memory Sharing
> +
> +Status, x86 HVM: Tech Preview
> +Status, ARM: Tech Preview
> +
> +Allow sharing of identical pages between guests

"Tech Preview" should imply there is any kind of `xl dedup-these-domains
$X $Y` functionality.

The only thing we appears to have an example wrapper around the libxc
interface, which requires the user to nominate individual frames, and
this doesn't qualify as "functionally complete" IMO.

There also doesn't appear to be any ARM support in the slightest. 
mem_sharing_{memop,domctl}() are only implemented for x86.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-mortem

2017-11-21 Thread Andrew Cooper
On 21/11/17 19:05, Ian Jackson wrote:
> George Dunlap writes ("Re: [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-mortem"):
>> gdbsx security support: Someone may want to debug an untrusted guest,
>> so I think we should say 'yes' here.
I think running gdb on a potentially hostile program is foolish.
>
>> I don't have a strong opinion on gdbsx; I'd call it 'supported', but if
>> you think we need to exclude it from security support I'm happy with
>> that as well.
> gdbsx itself is probably simple enough to be fine but I would rather
> not call it security supported because that might encourage people to
> use it with gdb.
>
> If someone wants to use gdbsx with something that's not gdb then they
> might want to ask us to revisit that.

If gdbsx chooses (or gets tricked into using) DOMID_XEN, then it gets
arbitrary read/write access over hypervisor virtual address space, due
to the behaviour of the hypercalls it uses.

As a tool, it mostly functions (there are some rather sharp corners
which I've not had time to fix so far), but it is definitely not
something I would trust in a hostile environment.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-mortem

2017-11-21 Thread Ian Jackson
George Dunlap writes ("Re: [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-mortem"):
> gdbsx security support: Someone may want to debug an untrusted guest,
> so I think we should say 'yes' here.

I think running gdb on a potentially hostile program is foolish.

> I don't have a strong opinion on gdbsx; I'd call it 'supported', but if
> you think we need to exclude it from security support I'm happy with
> that as well.

gdbsx itself is probably simple enough to be fine but I would rather
not call it security supported because that might encourage people to
use it with gdb.

If someone wants to use gdbsx with something that's not gdb then they
might want to ask us to revisit that.

Ian.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-mortem

2017-11-21 Thread George Dunlap
On 11/21/2017 08:48 AM, Jan Beulich wrote:
>>> On 13.11.17 at 16:41,  wrote:
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -152,6 +152,35 @@ Output of information in machine-parseable JSON format
>>  
>>  Status: Supported, Security support external
>>  
>> +## Debugging, analysis, and crash post-mortem
>> +
>> +### gdbsx
>> +
>> +Status, x86: Supported
>> +
>> +Debugger to debug ELF guests
>> +
>> +### Soft-reset for PV guests
>> +
>> +Status: Supported
>> +
>> +Soft-reset allows a new kernel to start 'from scratch' with a fresh VM 
>> state, 
>> +but with all the memory from the previous state of the VM intact.
>> +This is primarily designed to allow "crash kernels", 
>> +which can do core dumps of memory to help with debugging in the event of a 
>> crash.
>> +
>> +### xentrace
>> +
>> +Status, x86: Supported
>> +
>> +Tool to capture Xen trace buffer data
>> +
>> +### gcov
>> +
>> +Status: Supported, Not security supported
> 
> I agree with excluding security support here, but why wouldn't the
> same be the case for gdbsx and xentrace?

From my initial post:

---

gdbsx security support: Someone may want to debug an untrusted guest,
so I think we should say 'yes' here.

xentrace: Users may want to trace guests in production environments,
so I think we should say 'yes'.

gcov: No good reason to run a gcov hypervisor in a production
environment.  May be ways for a rogue guest to DoS.

---

xentrace I would argue for security support; I've asked customers to
send me xentrace data as part of analysis before.  I also know enough
about it that I'm reasonably confident the risk of an attack vector is
pretty low.

I don't have a strong opinion on gdbsx; I'd call it 'supported', but if
you think we need to exclude it from security support I'm happy with
that as well.

 -George

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 08/16] SUPPORT.md: Add x86-specific virtual hardware

2017-11-21 Thread George Dunlap
On 11/21/2017 08:39 AM, Jan Beulich wrote:
 On 13.11.17 at 16:41,  wrote:
>> +### x86/Nested PV
>> +
>> +Status, x86 HVM: Tech Preview
>> +
>> +This means running a Xen hypervisor inside an HVM domain,
>> +with support for PV L2 guests only
>> +(i.e., hardware virtualization extensions not provided
>> +to the guest).
>> +
>> +This works, but has performance limitations
>> +because the L1 dom0 can only access emulated L1 devices.
> 
> So is this explicitly meaning Xen-on-Xen? Xen-on-KVM, for example,
> could be considered "nested PV", too. IOW I think it needs to be
> spelled out whether this means the host side of things here, the
> guest one, or both.

Yes, that's true.  But I forget: Can a Xen dom0 use virtio guest
drivers?  I'm pretty sure Stefano tried it at some point but I don't
remember what the result was.

>> +### x86/Nested HVM
>> +
>> +Status, x86 HVM: Experimental
>> +
>> +This means running a Xen hypervisor inside an HVM domain,
>> +with support for running both PV and HVM L2 guests
>> +(i.e., hardware virtualization extensions provided
>> +to the guest).
> 
> "Nested HVM" generally means more than using Xen as the L1
> hypervisor. If this is really to mean just L1 Xen, I think the title
> should already say so, not just the description.

Yes, I mean any sort of nested guest support here.

>> +### x86/Advanced Vector eXtension
>> +
>> +Status: Supported
> 
> As indicated before, I think this either needs to be dropped or
> be extended by an entry for virtually every CPUID bit exposed
> to guests. Furthermore, in this isolated fashion it is not clear
> what derived features (e.g. FMA, FMA4, AVX2, or even AVX-512)
> it is meant to imply. If any of them are implied, "with caveats"
> would need to be added as long as the instruction emulator isn't
> capable of handling the instructions, yet.

Adding a section for CPUID bits supported (and to what level) sounds
like a useful thing to do, perhaps in the next release.

>> +### x86/HVM EFI
>> +
>> +Status: Supported
>> +
>> +Booting a guest via guest EFI firmware
> 
> Shouldn't this say OVMF, to avoid covering possible other
> implementations?

I don't expect that we'll ever need more than one EFI implementation in
the tree.  If a time comes when it makes sense to have two, we can
adjust the entry accordingly.

 -George


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 06/16] SUPPORT.md: Add scalability features

2017-11-21 Thread George Dunlap
On 11/21/2017 05:31 PM, Julien Grall wrote:
> Hi George,
> 
> On 11/21/2017 04:43 PM, George Dunlap wrote:
>> On 11/16/2017 03:19 PM, Julien Grall wrote:
>>> On 13/11/17 15:41, George Dunlap wrote:
 Signed-off-by: George Dunlap 
 ---
 CC: Ian Jackson 
 CC: Wei Liu 
 CC: Andrew Cooper 
 CC: Jan Beulich 
 CC: Stefano Stabellini 
 CC: Konrad Wilk 
 CC: Tim Deegan 
 CC: Julien Grall 
 ---
    SUPPORT.md | 21 +
    1 file changed, 21 insertions(+)

 diff --git a/SUPPORT.md b/SUPPORT.md
 index c884fac7f5..a8c56d13dd 100644
 --- a/SUPPORT.md
 +++ b/SUPPORT.md
 @@ -195,6 +195,27 @@ on embedded platforms.
      Enables NUMA aware scheduling in Xen
    +## Scalability
 +
 +### 1GB/2MB super page support
 +
 +    Status, x86 HVM/PVH: Supported
 +    Status, ARM: Supported
 +
 +NB that this refers to the ability of guests
 +to have higher-level page table entries point directly to memory,
 +improving TLB performance.
 +This is independent of the ARM "page granularity" feature (see below).
>>>
>>> I am not entirely sure about this paragraph for Arm. I understood this
>>> section as support for stage-2 page-tables (aka EPT on x86), but the
>>> paragraph led me to believe it is for the guest.
>>>
>>> The size of a guest's superpages will depend on the page granularity
>>> used by the guest itself and the format of its page-tables (e.g. LPAE
>>> vs short-descriptor). We have no control over that.
>>>
>>> What we do control is the size of the mappings used for the stage-2
>>> page-tables.
>>
>> Stepping back from the document for a minute: would it make sense to use
>> "hardware assisted paging" (HAP) for Intel EPT, AMD RVI (previously
>> NPT), and ARM stage-2 pagetables?  HAP was already a general term used
>> to describe the two x86 technologies; and I think the description makes
>> sense, because if we didn't have hardware-assisted stage 2 pagetables
>> we'd need Xen-provided shadow pagetables.
> 
> I think using the term "hardware assisted paging" should be fine to
> refer to the three technologies.

OK, great.

[snip]

> Short-descriptor always uses 4KB granularity and supports 16MB, 1MB and
> 64KB superpages.
> 
> LPAE supports 4KB, 16KB and 64KB granularities, each with different
> superpage sizes.

Yes, that's why I started saying "L2 and L3 superpages", to mean
"Superpage entries in L2 or L3 pagetables", instead of 2MiB or 1GiB.
(Let me know if you can think of a better way to describe that.)
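
(For anyone trying to keep the numbers straight: each translation table
holds granule/8 eight-byte entries, so every level above the leaf
multiplies the mapping size by that factor.  The sketch below just
computes the block sizes one and two levels above the leaf; note that
not every computed size is an architected block size on every CPU, it
only shows why the sizes differ per granularity.)

#include <stdio.h>

int main(void)
{
    static const unsigned long long granule[] = { 4096, 16384, 65536 };
    unsigned int i;

    for ( i = 0; i < sizeof(granule) / sizeof(granule[0]); i++ )
    {
        unsigned long long entries = granule[i] / 8; /* 8-byte entries */
        unsigned long long one_up = granule[i] * entries;
        unsigned long long two_up = one_up * entries;

        /* 4K -> 2MiB / 1GiB; 16K -> 32MiB / 64GiB; 64K -> 512MiB / 4TiB */
        printf("%3lluK granule: blocks of %llu MiB and %llu GiB\n",
               granule[i] >> 10, one_up >> 20, two_up >> 30);
    }

    return 0;
}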

>> 3. Whether Xen provides the *interface* for a guest to use L2 or L3
>> superpages (for 4k page granularity, 2MiB or 1GiB respectively) in its
>> own pagetables.  I *think* HAP on x86 provides the interface whenever
>> the underlying hardware does.  I assume it's the same for ARM?  In the
>> case of shadow mode, we only provide the interface for 2MiB pagetables.
> 
> See above. We have no way to control that in the guest.

We don't control whether the guest uses *any* features.  Should we not
mention PV disks or SMMUv2 or whatever because we don't know if the
guest will use them?

Of course not.  This document describes whether the guest *has the
features available to use*, either provided by the hardware or emulated
by Xen.

It sounds like you may not have ever thought about whether an ARM guest
has L2 or L3 superpages available, because it's always had all of them;
but it's different on x86.

[snip]

>> 2. Whether Xen uses superpage mappings for HAP.  Xen uses this on x86
>> when hardware support is available -- I take it Xen does this on ARM
>> as well?
>
> The size of the superpages supported will depend on the page-table
> format (short-descriptor vs LPAE) and the granularity used.
>
> Supersections (16MB) are optional for short-descriptor, but mandatory
> when the processor supports LPAE. LPAE is mandatory with
> virtualization, so all superpage sizes are supported.
>
> Note that stage-2 page-tables can only use the LPAE format.
>
> I would also rather avoid mentioning any superpage sizes for Arm in
> SUPPORT.md, as there are a lot.

So it sounds like basically everything supported on native was supported
in virtualization (and under Xen) from the start, so it's probably less
important to mention.  But since we *will* need to do that for x86, we
probably need to say *something* in case people want to know.

Let me see what I can come up with.

 -George

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86

2017-11-21 Thread George Dunlap
On 11/21/2017 08:29 AM, Jan Beulich wrote:
>> +### QEMU backend hotplugging for xl
>> +
>> +Status: Supported
> 
> Wouldn't this more appropriately be
> 
> ### QEMU backend hotplugging
> 
> Status, xl: Supported

You mean, for this whole section (i.e., everything here that says 'for
xl')?  If not, why this one in particular?

>> +## Virtual driver support, guest side
>> +
>> +### Blkfront
>> +
>> +Status, Linux: Supported
>> +Status, FreeBSD: Supported, Security support external
>> +Status, NetBSD: Supported, Security support external
>> +Status, Windows: Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV block protocol
>> +
>> +### Netfront
>> +
>> +Status, Linux: Supported
>> +Status, Windows: Supported
>> +Status, FreeBSD: Supported, Security support external
>> +Status, NetBSD: Supported, Security support external
>> +Status, OpenBSD: Supported, Security support external
> 
> Seeing the difference in OSes between the two (with the variance
> increasing in entries further down) - what does the absence of an
> OS on one list, but its presence on another mean? While not
> impossible, I would find it surprising if e.g. OpenBSD had netfront
> but not even a basic blkfront.

Actually -- at least according to the paper presenting PV frontends for
OpenBSD in 2016 [1], they implemented xenstore and netfront frontends,
but not (at least at that point) a disk frontend.

However, blkfront does appear as a feature in OpenBSD 6.1, released in
April [2]; so I'll add that one in.  (Perhaps Roger hadn't heard that it
had been implemented.)

[1] https://www.openbsd.org/papers/asiabsdcon2016-xen-paper.pdf

[2] https://www.openbsd.org/61.html

 -George

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 06/16] SUPPORT.md: Add scalability features

2017-11-21 Thread Julien Grall

Hi George,

On 11/21/2017 04:43 PM, George Dunlap wrote:

> On 11/16/2017 03:19 PM, Julien Grall wrote:
>> On 13/11/17 15:41, George Dunlap wrote:
>>> Signed-off-by: George Dunlap 
>>> ---
>>> CC: Ian Jackson 
>>> CC: Wei Liu 
>>> CC: Andrew Cooper 
>>> CC: Jan Beulich 
>>> CC: Stefano Stabellini 
>>> CC: Konrad Wilk 
>>> CC: Tim Deegan 
>>> CC: Julien Grall 
>>> ---
>>>    SUPPORT.md | 21 +
>>>    1 file changed, 21 insertions(+)
>>>
>>> diff --git a/SUPPORT.md b/SUPPORT.md
>>> index c884fac7f5..a8c56d13dd 100644
>>> --- a/SUPPORT.md
>>> +++ b/SUPPORT.md
>>> @@ -195,6 +195,27 @@ on embedded platforms.
>>>      Enables NUMA aware scheduling in Xen
>>>    +## Scalability
>>> +
>>> +### 1GB/2MB super page support
>>> +
>>> +    Status, x86 HVM/PVH: Supported
>>> +    Status, ARM: Supported
>>> +
>>> +NB that this refers to the ability of guests
>>> +to have higher-level page table entries point directly to memory,
>>> +improving TLB performance.
>>> +This is independent of the ARM "page granularity" feature (see below).
>>
>> I am not entirely sure about this paragraph for Arm. I understood this
>> section as support for stage-2 page-tables (aka EPT on x86), but the
>> paragraph led me to believe it is for the guest.
>>
>> The size of a guest's superpages will depend on the page granularity
>> used by the guest itself and the format of its page-tables (e.g. LPAE
>> vs short-descriptor). We have no control over that.
>>
>> What we do control is the size of the mappings used for the stage-2
>> page-tables.
>
> Stepping back from the document for a minute: would it make sense to use
> "hardware assisted paging" (HAP) for Intel EPT, AMD RVI (previously
> NPT), and ARM stage-2 pagetables?  HAP was already a general term used
> to describe the two x86 technologies; and I think the description makes
> sense, because if we didn't have hardware-assisted stage 2 pagetables
> we'd need Xen-provided shadow pagetables.

I think using the term "hardware assisted paging" should be fine to
refer to the three technologies.

> Back to the question at hand, there are four different things:
>
> 1. Whether Xen itself uses superpage mappings for its virtual address
> space.  (Not sure if Xen does this or not.)

Xen tries to use superpage mappings for itself whenever possible.

> 2. Whether Xen uses superpage mappings for HAP.  Xen uses this on x86
> when hardware support is available -- I take it Xen does this on ARM
> as well?

The size of the superpages supported will depend on the page-table
format (short-descriptor vs LPAE) and the granularity used.

Supersections (16MB) are optional for short-descriptor, but mandatory
when the processor supports LPAE. LPAE is mandatory with
virtualization, so all superpage sizes are supported.

Note that stage-2 page-tables can only use the LPAE format.

I would also rather avoid mentioning any superpage sizes for Arm in
SUPPORT.md, as there are a lot.

Short-descriptor always uses 4KB granularity and supports 16MB, 1MB and
64KB superpages.

LPAE supports 4KB, 16KB and 64KB granularities, each with different
superpage sizes.

> 3. Whether Xen provides the *interface* for a guest to use L2 or L3
> superpages (for 4k page granularity, 2MiB or 1GiB respectively) in its
> own pagetables.  I *think* HAP on x86 provides the interface whenever
> the underlying hardware does.  I assume it's the same for ARM?  In the
> case of shadow mode, we only provide the interface for 2MiB pagetables.

See above. We have no way to control that in the guest.

> 4. Whether a guest using L2 or L3 superpages will actually have
> superpages, or whether it's "only emulated".  As Jan said, for shadow
> pagetables on x86, the underlying pagetables still only have 4k pages,
> so the guest will get no benefit from using L2 superpages in its
> pagetables (either in terms of reduced memory reads on a tlb miss, or in
> terms of larger effectiveness of each TLB entry).
>
> #3 and #4 are probably the most pertinent to users, with #2 being next
> on the list, and #1 being least.
>
> Does that make sense?


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Ping: [PATCH] VMX: sync CPU state upon vCPU destruction

2017-11-21 Thread Jan Beulich
>>> On 21.11.17 at 18:00,  wrote:
> On Tue, 2017-11-21 at 08:29 -0700, Jan Beulich wrote:
>> > > > On 21.11.17 at 15:07,  wrote:
>> > 
>> > On 21/11/17 13:22, Jan Beulich wrote:
>> > > > > > On 09.11.17 at 15:49,  wrote:
>> > > > 
>> > > > See the code comment being added for why we need this.
>> > > > 
>> > > > Reported-by: Igor Druzhinin 
>> > > > Signed-off-by: Jan Beulich 
>> > > 
>> > > I realize we aren't settled yet on where to put the sync call. The
>> > > discussion appears to have stalled, though. Just to recap,
>> > > alternatives to the placement below are
>> > > - at the top of complete_domain_destroy(), being the specific
>> > >   RCU callback exhibiting the problem (others are unlikely to
>> > >   touch guest state)
>> > > - in rcu_do_batch(), paralleling the similar call from
>> > >   do_tasklet_work()
>> > 
>> > rcu_do_batch() sounds better to me. As I said before I think that the
>> > problem is general for the hypervisor (not for VMX only) and might
>> > appear in other places as well.
>> 
>> The question here is: In what other cases do we expect an RCU
>> callback to possibly touch guest state? I think the common use is
>> to merely free some memory in a delayed fashion.
>> 
>> > Those choices that you outlined appear to be different in terms of
>> > whether we solve the general problem and probably have some minor
>> > performance impact, or we solve the ad-hoc problem but make the system
>> > more entangled. Here I'm more inclined to the first choice because in
>> > this particular scenario the performance impact should be negligible.
>> 
>> For the problem at hand there's no question about a
>> performance effect. The question is whether doing this for _other_
>> RCU callbacks would introduce a performance drop in certain cases.
> 
> So what are performance implications of my original suggestion of
> removing !v->is_running check from vmx_ctxt_switch_from() ?
> From what I can see:
> 
> 1. Another field in struct vcpu will be checked instead (vmcs_pa)
> 2. Additionally this_cpu(current_vmcs) will be loaded, which shouldn't
>be terrible, given how heavy a context switch already is.

There are no performance implications afaict; I'm simply of the
opinion this is not the way the issue should be addressed. The
sync approach seems much more natural to me.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [qemu-mainline test] 116390: regressions - trouble: broken/fail/pass

2017-11-21 Thread osstest service owner
flight 116390 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116390/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-pvhv2-amd broken
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsmbroken
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm   broken
 test-amd64-amd64-xl-pvhv2-amd  4 host-install(4)   broken REGR. vs. 116190
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 4 host-install(4) broken REGR. 
vs. 116190
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 4 host-install(4) broken 
REGR. vs. 116190
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs. 
116190

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116190
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116190
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116190
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116190
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116190
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 qemuub2996bb405e2806725a341c72d80be9e77ed8b82
baseline version:
 qemuu1fa0f627d03cd0d0755924247cafeb42969016bf

 Last test of basis   116190  2017-11-15 06:53:12 Z    6 days
 Failing since        116227  2017-11-16 13:17:17 Z    5 days    6 attempts
 Testing same since   116390  2017-11-21 04:19:53 Z    0 days    1 attempts


People who touched revisions under test:
  "Daniel P. Berrange" 
  Alex Bennée 
  Alexey Kardashevskiy 
  Anton Nefedov 
  BALATON Zoltan 
  Christian Borntraeger 
  Daniel Henrique Barboza 
  Daniel P. Berrange 
  Dariusz Stojaczyk 
  David Gibson 
  Dou Liyang 
  Dr. David Alan Gilbert 
  Ed Swierk 
  Emilio G. Cota 
  Eric Blake 
  Gerd Hoffmann 
  Greg Kurz 
  Jason Wang 
  Jindrich Makovicka 
  Kevin Wolf 
  linzhecheng 
  Mao Zhongyi 
  Marc-André Lureau 
  Marcel Apfelbaum 
  Maria Klimushenkova 
  Max Reitz 
  Michael S. Tsirkin 
  Mike Nawrocki 
  Paolo Bonzini 
  Pavel Dovgalyuk 
  Peter Maydell 
  Philippe Mathieu-Daudé 
  Richard Henderson 
  Stefan Berger 
  Stefan Hajnoczi 
  Stefan Weil 
  Suraj Jitindar Singh 
  Thomas Huth 
  Vladimir Sementsov-Ogievskiy 
  Wang Guang 
  Wang Yong 
  Wanpeng Li 
  Yongbok Kim 

jobs:
 build-amd

Re: [Xen-devel] [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86

2017-11-21 Thread George Dunlap
On 11/21/2017 11:41 AM, Jan Beulich wrote:
 On 21.11.17 at 11:56,  wrote:
>> On 11/21/2017 08:29 AM, Jan Beulich wrote:
>> On 13.11.17 at 16:41,  wrote:
 +### PV USB support for xl
 +
 +Status: Supported
 +
 +### PV 9pfs support for xl
 +
 +Status: Tech Preview
>>>
>>> Why are these two being called out, but xl support for other device
>>> types isn't?
>>
>> Do you see how big this document is? :-)  If you think something else
>> needs to be covered, don't ask why I didn't mention it, just say what
>> you think I missed.
> 
> Well, (not very) implicitly here: The same for all other PV protocols.

Oh, I see -- you didn't read my comment below the `---` pointing this
out.  :-)

Yes, I wasn't quite sure what to do here.  We already list all the PV
protocols in at least 2 places (frontend and backend support); it seemed
a bit redundant to list them all again in xl and/or libxl support.

Except, of course, that there are a number of protocols *not* plumbed
through the toolstack yet -- PVSCSI being one example.

Any suggestions would be welcome.

 -George

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Ping: [PATCH] VMX: sync CPU state upon vCPU destruction

2017-11-21 Thread Sergey Dyasli
On Tue, 2017-11-21 at 08:29 -0700, Jan Beulich wrote:
> > > > On 21.11.17 at 15:07,  wrote:
> > 
> > On 21/11/17 13:22, Jan Beulich wrote:
> > > > > > On 09.11.17 at 15:49,  wrote:
> > > > 
> > > > See the code comment being added for why we need this.
> > > > 
> > > > Reported-by: Igor Druzhinin 
> > > > Signed-off-by: Jan Beulich 
> > > 
> > > I realize we aren't settled yet on where to put the sync call. The
> > > discussion appears to have stalled, though. Just to recap,
> > > alternatives to the placement below are
> > > - at the top of complete_domain_destroy(), being the specific
> > >   RCU callback exhibiting the problem (others are unlikely to
> > >   touch guest state)
> > > - in rcu_do_batch(), paralleling the similar call from
> > >   do_tasklet_work()
> > 
> > rcu_do_batch() sounds better to me. As I said before I think that the
> > problem is general for the hypervisor (not for VMX only) and might
> > appear in other places as well.
> 
> The question here is: In what other cases do we expect an RCU
> callback to possibly touch guest state? I think the common use is
> to merely free some memory in a delayed fashion.
> 
> > Those choices that you outlined appear to be different in terms of
> > whether we solve the general problem and probably have some minor
> > performance impact, or we solve the ad-hoc problem but make the system
> > more entangled. Here I'm more inclined to the first choice because in
> > this particular scenario the performance impact should be negligible.
> 
> For the problem at hand there's no question about a
> performance effect. The question is whether doing this for _other_
> RCU callbacks would introduce a performance drop in certain cases.

So what are performance implications of my original suggestion of
removing !v->is_running check from vmx_ctxt_switch_from() ?
From what I can see:

1. Another field in struct vcpu will be checked instead (vmcs_pa)
2. Additionally this_cpu(current_vmcs) will be loaded, which shouldn't
   be terrible, given how heavy a context switch already is.
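
Roughly, the sketch below is what I have in mind -- the field and
helper names follow the current VMX code, but the exact upstream
function differs, so treat it as illustrative only:

static void vmx_ctxt_switch_from(struct vcpu *v)
{
    /*
     * Nothing to save if this vCPU no longer owns a VMCS, or if its
     * VMCS is not the one currently loaded on this pCPU.
     */
    if ( !v->arch.hvm_vmx.vmcs_pa ||
         v->arch.hvm_vmx.vmcs_pa != this_cpu(current_vmcs) )
        return;

    vmx_save_guest_msrs(v);
    vmx_restore_host_msrs();
    vmx_save_dr(v);
}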

-- 
Thanks,
Sergey
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 04/16] SUPPORT.md: Add core ARM features

2017-11-21 Thread Julien Grall

Hi George,

On 11/21/2017 10:45 AM, George Dunlap wrote:
> On 11/21/2017 08:11 AM, Jan Beulich wrote:
>> On 13.11.17 at 16:41,  wrote:
>>> +### ARM/SMMUv1
>>> +
>>> +Status: Supported
>>> +
>>> +### ARM/SMMUv2
>>> +
>>> +Status: Supported
>>
>> Do these belong here, when IOMMU isn't part of the corresponding
>> x86 patch?
>
> Since there was recently a time when these weren't supported, I think
> it's useful to have them in here.  (Julien, let me know if you think
> otherwise.)

I think it is useful to keep them. There are other IOMMUs existing on
Arm (e.g SMMUv3, IPMMU-VMSA) that we don't yet support in Xen.


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Ping: [PATCH] VMX: sync CPU state upon vCPU destruction

2017-11-21 Thread George Dunlap
On 11/21/2017 04:42 PM, Dario Faggioli wrote:
> On Tue, 2017-11-21 at 08:29 -0700, Jan Beulich wrote:
> On 21.11.17 at 15:07,  wrote:
>>>
>> The question here is: In what other cases do we expect an RCU
>> callback to possibly touch guest state? I think the common use is
>> to merely free some memory in a delayed fashion.
>>
>>> Those choices that you outlined appear to be different in terms
>>> whether
>>> we solve the general problem and probably have some minor
>>> performance
>>> impact or we solve the ad-hoc problem but make the system more
>>> entangled. Here I'm more inclined to the first choice because this
>>> particular scenario the performance impact should be negligible.
>>
>> For the problem at hand there's no question about a
>> performance effect. The question is whether doing this for _other_
>> RCU callbacks would introduce a performance drop in certain cases.
>>
> Well, I personally favour the approach of making the piece of code that
> plays with the context responsible for not messing up when doing so.
> 
> And (replying to Igor's comment above), I don't think that syncing
> context before RCU handlers solves the general problem --as you're
> calling it-- of "VMX code asynchronously messing up with the context".
> In fact, it solves the specific problem of "VMX code called via RCU,
> asynchronously messing up with the context".
> There may be other places where (VMX?) code messes with context, *not*
> from within an RCU handler, and that would still be an issue.

Yes, to expand on what I said earlier: Given that we cannot (at least
between now and the release) make it so that developers *never* have to
think about syncing state, it seems like the best thing to do is to make
coders *always* think about syncing state.  Syncing always in the RCU
handler means coders can get away sometimes without syncing, which makes
it more likely we'll forget in some other circumstance where it matters.

But that's my take on general principles; like Dario I wouldn't argue
too strongly if someone felt differently.

 -George

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [libvirt test] 116391: trouble: broken/pass

2017-11-21 Thread osstest service owner
flight 116391 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116391/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt-pair broken
 test-amd64-i386-libvirt-pair 5 host-install/dst_host(5) broken REGR. vs. 116362

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116362
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116362
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116362
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass

version targeted for testing:
 libvirt  0d110277c0b57d934415a2ea29d3e6cbf9f0f200
baseline version:
 libvirt  3343ab0cd99c04761c17a36d9af354536df9e741

 Last test of basis   116362  2017-11-20 04:20:14 Z    1 days
 Testing same since   116391  2017-11-21 04:30:03 Z    0 days    1 attempts


People who touched revisions under test:
  Andrea Bolognani 
  Michal Privoznik 
  Nikolay Shirokovskiy 
  Pino Toscano 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-libvirt-xsm pass
 test-armhf-armhf-libvirt-xsm pass
 test-amd64-i386-libvirt-xsm  pass
 test-amd64-amd64-libvirt pass
 test-armhf-armhf-libvirt pass
 test-amd64-i386-libvirt  pass
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair broken  
 test-amd64-i386-libvirt-qcow2pass
 test-armhf-armhf-libvirt-raw pass
 test-amd64-amd64-libvirt-vhd pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-libvirt-pair broken
broken-step test-amd64-i386-libvirt-pair host-install/dst_host(5)

Not pushing.


commit 0d110277c0b57d934415a2ea29d3e6cbf9f0f200
Author: Nikolay Shirokovskiy 
Date:   Fri Nov 17 16:17:38 2017 +0300

tests: fix typo

Signed-off-by: Michal Privoznik 
Reviewed-by: Daniel P. Berrange 

commit 937f319536723fec57ad472b002a159d0f67a77c
Author: Michal Privoznik 
Date:   Tue Nov 14 17:19:58 2017 +0100

qemuBuildDeviceAddressStr: Prefer default alias for PCI bus

ht

Re: [Xen-devel] [PATCH 06/16] SUPPORT.md: Add scalability features

2017-11-21 Thread George Dunlap
On 11/16/2017 03:19 PM, Julien Grall wrote:
> Hi George,
> 
> On 13/11/17 15:41, George Dunlap wrote:
>> Superpage support and PVHVM.
>>
>> Signed-off-by: George Dunlap 
>> ---
>> CC: Ian Jackson 
>> CC: Wei Liu 
>> CC: Andrew Cooper 
>> CC: Jan Beulich 
>> CC: Stefano Stabellini 
>> CC: Konrad Wilk 
>> CC: Tim Deegan 
>> CC: Julien Grall 
>> ---
>>   SUPPORT.md | 21 +
>>   1 file changed, 21 insertions(+)
>>
>> diff --git a/SUPPORT.md b/SUPPORT.md
>> index c884fac7f5..a8c56d13dd 100644
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -195,6 +195,27 @@ on embedded platforms.
>>     Enables NUMA aware scheduling in Xen
>>   +## Scalability
>> +
>> +### 1GB/2MB super page support
>> +
>> +    Status, x86 HVM/PVH: Supported
>> +    Status, ARM: Supported
>> +
>> +NB that this refers to the ability of guests
>> +to have higher-level page table entries point directly to memory,
>> +improving TLB performance.
>> +This is independent of the ARM "page granularity" feature (see below).
> 
> I am not entirely sure about this paragraph for Arm. I understood this
> section as support for stage-2 page-tables (aka EPT on x86), but the
> paragraph led me to believe it is for the guest.
> 
> The size of a guest's superpages will depend on the page granularity
> used by the guest itself and the format of its page-tables (e.g. LPAE
> vs short-descriptor). We have no control over that.
> 
> What we do control is the size of the mappings used for the stage-2
> page-tables.

Stepping back from the document for a minute: would it make sense to use
"hardware assisted paging" (HAP) for Intel EPT, AMD RVI (previously
NPT), and ARM stage-2 pagetables?  HAP was already a general term used
to describe the two x86 technologies; and I think the description makes
sense, because if we didn't have hardware-assisted stage 2 pagetables
we'd need Xen-provided shadow pagetables.

Back to the question at hand, there are four different things:

1. Whether Xen itself uses superpage mappings for its virtual address
space.  (Not sure if Xen does this or not.)

2. Whether Xen uses superpage mappings for HAP.  Xen uses this on x86
when hardware support is available -- I take it Xen does this on ARM as well?

3. Whether Xen provides the *interface* for a guest to use L2 or L3
superpages (for 4k page granularity, 2MiB or 1GiB respectively) in its
own pagetables.  I *think* HAP on x86 provides the interface whenever
the underlying hardware does.  I assume it's the same for ARM?  In the
case of shadow mode, we only provide the interface for 2MiB pagetables.

4. Whether a guest using L2 or L3 superpages will actually have
superpages, or whether it's "only emulated".  As Jan said, for shadow
pagetables on x86, the underlying pagetables still only have 4k pages,
so the guest will get no benefit from using L2 superpages in its
pagetables (either in terms of reduced memory reads on a tlb miss, or in
terms of larger effectiveness of each TLB entry).

#3 and #4 are probably the most pertinent to users, with #2 being next
on the list, and #1 being least.

Does that make sense?

 -George

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Ping: [PATCH] VMX: sync CPU state upon vCPU destruction

2017-11-21 Thread Dario Faggioli
On Tue, 2017-11-21 at 08:29 -0700, Jan Beulich wrote:
> > > > On 21.11.17 at 15:07,  wrote:
> > 
> The question here is: In what other cases do we expect an RCU
> callback to possibly touch guest state? I think the common use is
> to merely free some memory in a delayed fashion.
> 
> > Those choices that you outlined appear to be different in terms
> > whether
> > we solve the general problem and probably have some minor
> > performance
> > impact or we solve the ad-hoc problem but make the system more
> > entangled. Here I'm more inclined to the first choice because this
> > particular scenario the performance impact should be negligible.
> 
> For the problem at hand there's no question about a
> performance effect. The question is whether doing this for _other_
> RCU callbacks would introduce a performance drop in certain cases.
> 
Well, I personally favour the approach of making the piece of code that
plays with the context responsible for not messing up when doing so.

And (replying to Igor's comment above), I don't think that syncing
context before RCU handlers solves the general problem --as you're
calling it-- of "VMX code asynchronously messing up with the context".
In fact, it solves the specific problem of "VMX code called via RCU,
asynchronously messing up with the context".
There may be other places where (VMX?) code messes with context, *not*
from within an RCU handler, and that would still be an issue.

All that being said, given the nature of RCUs themselves, and given the
"precedent" we have for tasklets, I don't think it's a problem to sync
the state in rcu_do_batch().

Looking at users of call_rcu() (and trying to follow the call chains),
I think the only occasion where there may be an impact on perf would
be when it's used in del_msixtbl_entry() (e.g., when that is called by
msixtbl_pt_unregister())... but I'm not familiar with that area of
code, so I may very well be wrong.

So, to summarize, if it were me doing this, I'd sync either in
vmx_vcpu_destroy() or in complete_domain_destroy(). But (for what it's
worth) I'm fine with it happening in rcu_do_batch().
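
For reference, the rcu_do_batch() variant under discussion would look
something like this (a sketch only -- the real function in
xen/common/rcupdate.c also does batch limiting and queue accounting,
omitted here):

static void rcu_do_batch(struct rcu_data *rdp)
{
    struct rcu_head *list = rdp->donelist, *next;

    /*
     * Callbacks (e.g. complete_domain_destroy()) may touch guest
     * state; flush any lazily-retained vCPU context first, mirroring
     * the existing sync_local_execstate() call in do_tasklet_work().
     */
    sync_local_execstate();

    while ( list )
    {
        next = list->next;
        list->func(list);
        list = next;
    }

    rdp->donelist = NULL;
}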

Regards,
Dario
-- 
<> (Raistlin Majere)
-
Dario Faggioli, Ph.D, http://about.me/dario.faggioli

signature.asc
Description: This is a digitally signed message part
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [xen-4.6-testing baseline-only test] 72473: regressions - FAIL

2017-11-21 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 72473 xen-4.6-testing real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72473/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-xtf-amd64-amd64-121 xtf/test-hvm32-invlpg~shadow fail REGR. vs. 72351
 test-xtf-amd64-amd64-1 36 xtf/test-hvm32pae-invlpg~shadow fail REGR. vs. 72351
 test-xtf-amd64-amd64-148 xtf/test-hvm64-invlpg~shadow fail REGR. vs. 72351
 test-amd64-i386-xl-qemut-ws16-amd64 10 windows-installfail REGR. vs. 72351
 test-armhf-armhf-libvirt-raw 16 guest-start.2 fail REGR. vs. 72351
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs. 
72351
 test-amd64-amd64-xl-qemuu-win10-i386 16 guest-localmigrate/x10 fail REGR. vs. 
72351

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1   fail blocked in 72351
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail   like 72351
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   like 72351
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   like 72351
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail like 72351
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail like 72351
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 72351
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-xtf-amd64-amd64-3   73 xtf/test-pv32pae-xsa-194 fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 10 windows-installfail never pass
 test-xtf-amd64-amd64-4   73 xtf/test-pv32pae-xsa-194 fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-xtf-amd64-amd64-5   73 xtf/test-pv32pae-xsa-194 fail   never pass
 test-xtf-amd64-amd64-2   73 xtf/test-pv32pae-xsa-194 fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-installfail never pass
 test-xtf-amd64-amd64-1   73 xtf/test-pv32pae-xsa-194 fail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-midway   13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-midway   14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 17 guest-stop  fail never pass

version targeted for testing:
 xen  9b0c2a223132a07f06f0be8e85da390defe998f5
baseline version:
 xen  9454e3030ae0835c11aa66471238a9e09db5074e

Last test of basis    72351  2017-10-25 10:43:49 Z   27 days
Testing same since    72473  2017-11-20 16:16:35 Z    1 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  George Dunlap 
  Jan Beulich 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm

Re: [Xen-devel] [PATCH] VMX: sync CPU state upon vCPU destruction

2017-11-21 Thread Igor Druzhinin
On 09/11/17 14:49, Jan Beulich wrote:
> See the code comment being added for why we need this.
> 
> Reported-by: Igor Druzhinin 
> Signed-off-by: Jan Beulich 
> 
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -479,7 +479,13 @@ static void vmx_vcpu_destroy(struct vcpu
>   * we should disable PML manually here. Note that vmx_vcpu_destroy is 
> called
>   * prior to vmx_domain_destroy so we need to disable PML for each vcpu
>   * separately here.
> + *
> + * Before doing that though, flush all state for the vCPU previously 
> having
> + * run on the current CPU, so that this flushing of state won't happen 
> from
> + * the TLB flush IPI handler behind the back of a vmx_vmcs_enter() /
> + * vmx_vmcs_exit() section.
>   */
> +sync_local_execstate();
>  vmx_vcpu_disable_pml(v);
>  vmx_destroy_vmcs(v);
>  passive_domain_destroy(v);
> 

Reviewed-by: Igor Druzhinin 

Igor

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v2 2/5] xen: Provide XEN_DMOP_add_to_physmap

2017-11-21 Thread Jan Beulich
>>> On 23.10.17 at 11:05,  wrote:

First of all, instead of xen: please consider using something more
specific, like x86/hvm:.

> --- a/xen/include/public/hvm/dm_op.h
> +++ b/xen/include/public/hvm/dm_op.h
> @@ -368,6 +368,22 @@ struct xen_dm_op_remote_shutdown {
> /* (Other reason values are not blocked) */
>  };
>  
> +/*
> + * XEN_DMOP_add_to_physmap : Sets the GPFNs at which a page range appears in
> + *   the specified guest's pseudophysical address

Talking of "pseudophysical" is at least confusing for HVM guests. So
far it was my understanding that such exists for PV guests only.

> + *   space. Identical to XENMEM_add_to_physmap with
> + *   space == XENMAPSPACE_gmfn_range.
> + */
> +#define XEN_DMOP_add_to_physmap 17
> +
> +struct xen_dm_op_add_to_physmap {
> +uint16_t size; /* Number of GMFNs to process. */

Why limit this to 16 bits?

> +uint16_t pad0;
> +uint32_t pad1;
> +uint64_aligned_t idx;  /* Index into GMFN space. */

Why would you call this "idx"? The other interface and its naming
should have no significance here. So perhaps "src_gfn" and ...

> +uint64_aligned_t gpfn; /* Starting GPFN where the GMFNs should appear. */

... "dst_gfn"?

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Ping: [PATCH] VMX: sync CPU state upon vCPU destruction

2017-11-21 Thread George Dunlap
On 11/21/2017 01:22 PM, Jan Beulich wrote:
 On 09.11.17 at 15:49,  wrote:
>> See the code comment being added for why we need this.
>>
>> Reported-by: Igor Druzhinin 
>> Signed-off-by: Jan Beulich 
> 
> I realize we aren't settled yet on where to put the sync call. The
> discussion appears to have stalled, though. Just to recap,
> alternatives to the placement below are
> - at the top of complete_domain_destroy(), being the specific
>   RCU callback exhibiting the problem (others are unlikely to
>   touch guest state)
> - in rcu_do_batch(), paralleling the similar call from
>   do_tasklet_work()

I read through the discussion yesterday without digging into the code.
At the moment, I'd say that specific code needing to touch potentially
non-sync'd state should be marked to sync it, rather than syncing it all
the time.  But I don't have a strong opinion (particularly as I haven't
dug into the code).

 -George

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v2 1/5] xen/mm: Make xenmem_add_to_physmap global

2017-11-21 Thread Jan Beulich
>>> On 23.10.17 at 11:05,  wrote:
Make it global in preparation for it to be called by a new dmop.
> 
> Signed-off-by: Ross Lagerwall 
> 
> ---
> Reviewed-by: Paul Durrant 

Misplaced tag.

I'd prefer it if the function were made non-static in the patch which
needs it to be so, but anyway
Acked-by: Jan Beulich 

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Ping: [PATCH] VMX: sync CPU state upon vCPU destruction

2017-11-21 Thread Igor Druzhinin
On 21/11/17 15:29, Jan Beulich wrote:
 On 21.11.17 at 15:07,  wrote:
>> On 21/11/17 13:22, Jan Beulich wrote:
>> On 09.11.17 at 15:49,  wrote:
 See the code comment being added for why we need this.

 Reported-by: Igor Druzhinin 
 Signed-off-by: Jan Beulich 
>>>
>>> I realize we aren't settled yet on where to put the sync call. The
>>> discussion appears to have stalled, though. Just to recap,
>>> alternatives to the placement below are
>>> - at the top of complete_domain_destroy(), being the specific
>>>   RCU callback exhibiting the problem (others are unlikely to
>>>   touch guest state)
>>> - in rcu_do_batch(), paralleling the similar call from
>>>   do_tasklet_work()
>>
>> rcu_do_batch() sounds better to me. As I said before I think that the
>> problem is general for the hypervisor (not for VMX only) and might
>> appear in other places as well.
> 
> The question here is: In what other cases do we expect an RCU
> callback to possibly touch guest state? I think the common use is
> to merely free some memory in a delayed fashion.
> 

I don't know for sure what the common scenario is for Xen but drawing
parallels between Linux - you're probably right.

>> Those choices that you outlined appear to be different in terms of
>> whether we solve the general problem and probably have some minor
>> performance impact, or we solve the ad-hoc problem but make the system
>> more entangled. Here I'm more inclined to the first choice because in
>> this particular scenario the performance impact should be negligible.
> 
> For the problem at hand there's no question about a
> performance effect. The question is whether doing this for _other_
> RCU callbacks would introduce a performance drop in certain cases.
> 

Yes, right. In that case this placement would mean we are going to lose
the partial context each time we run RCU callbacks in idle, is this
correct? If so, that sounds like a common scenario to me and means there
will be some performance degradation, although I don't know how common
it really is.

Anyway, if you're in favor of the previous approach I have no
objections, as my understanding of the Xen codebase is still partial.

Igor


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Ping: [PATCH] VMX: sync CPU state upon vCPU destruction

2017-11-21 Thread Jan Beulich
>>> On 21.11.17 at 15:07,  wrote:
> On 21/11/17 13:22, Jan Beulich wrote:
> On 09.11.17 at 15:49,  wrote:
>>> See the code comment being added for why we need this.
>>>
>>> Reported-by: Igor Druzhinin 
>>> Signed-off-by: Jan Beulich 
>> 
>> I realize we aren't settled yet on where to put the sync call. The
>> discussion appears to have stalled, though. Just to recap,
>> alternatives to the placement below are
>> - at the top of complete_domain_destroy(), being the specific
>>   RCU callback exhibiting the problem (others are unlikely to
>>   touch guest state)
>> - in rcu_do_batch(), paralleling the similar call from
>>   do_tasklet_work()
> 
> rcu_do_batch() sounds better to me. As I said before I think that the
> problem is general for the hypervisor (not for VMX only) and might
> appear in other places as well.

The question here is: In what other cases do we expect an RCU
callback to possibly touch guest state? I think the common use is
to merely free some memory in a delayed fashion.

> Those choices that you outlined appear to be different in terms of
> whether we solve the general problem and probably have some minor
> performance impact, or we solve the ad-hoc problem but make the system
> more entangled. Here I'm more inclined to the first choice because in
> this particular scenario the performance impact should be negligible.

For the problem at hand there's no question about a
performance effect. The question is whether doing this for _other_
RCU callbacks would introduce a performance drop in certain cases.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v6 13/16 RESEND] rbtree: place easiest case first in rb_erase()

2017-11-21 Thread Praveen Kumar
From: Michel Lespinasse 

In rb_erase, move the easy case (node to erase has no more than
1 child) first. I feel the code reads easier that way.

Signed-off-by: Michel Lespinasse 
Reviewed-by: Rik van Riel 
Cc: Peter Zijlstra 
Cc: Andrea Arcangeli 
Cc: David Woodhouse 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit 60670b8034d6e2ba860af79c9379b7788d09db73]

Ported to Xen.

Signed-off-by: Praveen Kumar 
---
 xen/common/rbtree.c | 35 ++-
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index 8d836cef81..13a622326d 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -368,17 +368,28 @@ static void __rb_erase_color(struct rb_node *node, struct 
rb_node *parent,
 
 void rb_erase(struct rb_node *node, struct rb_root *root)
 {
-   struct rb_node *child, *parent;
+   struct rb_node *child = node->rb_right, *tmp = node->rb_left;
+   struct rb_node *parent;
int color;
 
-   if (!node->rb_left)
-   child = node->rb_right;
-   else if (!node->rb_right)
-   child = node->rb_left;
-   else {
+   if (!tmp) {
+   case1:
+   /* Case 1: node to erase has no more than 1 child (easy!) */
+
+   parent = rb_parent(node);
+   color = rb_color(node);
+
+   if (child)
+   rb_set_parent(child, parent);
+   __rb_change_child(node, child, parent, root);
+   } else if (!child) {
+   /* Still case 1, but this time the child is node->rb_left */
+   child = tmp;
+   goto case1;
+   } else {
struct rb_node *old = node, *left;
 
-   node = node->rb_right;
+   node = child;
while ((left = node->rb_left) != NULL)
node = left;
 
@@ -402,18 +413,8 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
node->__rb_parent_color = old->__rb_parent_color;
node->rb_left = old->rb_left;
rb_set_parent(old->rb_left, node);
-
-   goto color;
}
 
-   parent = rb_parent(node);
-   color = rb_color(node);
-
-   if (child)
-   rb_set_parent(child, parent);
-   __rb_change_child(node, child, parent, root);
-
-color:
if (color == RB_BLACK)
__rb_erase_color(child, parent, root);
 }
-- 
2.13.1
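
For orientation, this is how the API touched by this series is normally
consumed -- a hypothetical caller embedding rb_node in its own
structure (struct item and item_insert() are invented for
illustration):

struct item {
    struct rb_node node;
    int key;
};

static void item_insert(struct rb_root *root, struct item *it)
{
    struct rb_node **link = &root->rb_node, *parent = NULL;

    while ( *link )
    {
        struct item *cur = container_of(*link, struct item, node);

        parent = *link;
        link = it->key < cur->key ? &(*link)->rb_left
                                  : &(*link)->rb_right;
    }

    rb_link_node(&it->node, parent, link); /* splice in as a leaf */
    rb_insert_color(&it->node, root);      /* recolor/rebalance */
}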


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v6 16/16 RESEND] rbtree: fix typo in comment of rb_insert_color

2017-11-21 Thread Praveen Kumar
From: Wei Yang 

In case 1, it passes down the BLACK color from G to p and u, and maintains
the color of n.  By doing so, it maintains the black height of the sub-tree.

While in the comment, it marks the color of n as BLACK.  This is a typo
and not consistent with the code.

This patch fixes the typo in the comment.

Signed-off-by: Wei Yang 
Acked-by: Michel Lespinasse 
Cc: Xiao Guangrong 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit 1b9c53e849aa65776d4f611d99aa09f856518dad]

Ported to Xen for rb_insert_color API.

Signed-off-by: Praveen Kumar 
---
 xen/common/rbtree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index 5c4e239c24..8977aea487 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -135,7 +135,7 @@ void rb_insert_color(struct rb_node *node, struct rb_root 
*root)
 *  / \  / \
 * p   u  -->   P   U
 *//
-*   nN
+*   nn
 *
 * However, since g's parent might be red, and
 * 4) does not allow this, we need to recurse
-- 
2.13.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v6 12/16 RESEND] rbtree: add __rb_change_child() helper function

2017-11-21 Thread Praveen Kumar
From: Michel Lespinasse 

Add __rb_change_child() as an inline helper function to replace code that
would otherwise be duplicated 4 times in the source.

No changes to binary size or speed.

Signed-off-by: Michel Lespinasse 
Reviewed-by: Rik van Riel 
Cc: Peter Zijlstra 
Cc: Andrea Arcangeli 
Cc: David Woodhouse 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit 7abc704ae399fcb9c51ca200b0456f8a975a8011]

Ported to Xen.

Signed-off-by: Praveen Kumar 
---
 xen/common/rbtree.c | 46 +-
 1 file changed, 17 insertions(+), 29 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index 07b0875227..8d836cef81 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -66,6 +66,19 @@ static inline struct rb_node *rb_red_parent(struct rb_node 
*red)
return (struct rb_node *)red->__rb_parent_color;
 }
 
+static inline void
+__rb_change_child(struct rb_node *old, struct rb_node *new,
+ struct rb_node *parent, struct rb_root *root)
+{
+   if (parent) {
+   if (parent->rb_left == old)
+   parent->rb_left = new;
+   else
+   parent->rb_right = new;
+   } else
+   root->rb_node = new;
+}
+
 /*
  * Helper function for rotations:
  * - old's parent and color get assigned to new
@@ -78,13 +91,7 @@ __rb_rotate_set_parents(struct rb_node *old, struct rb_node 
*new,
struct rb_node *parent = rb_parent(old);
new->__rb_parent_color = old->__rb_parent_color;
rb_set_parent_color(old, new, color);
-   if (parent) {
-   if (parent->rb_left == old)
-   parent->rb_left = new;
-   else
-   parent->rb_right = new;
-   } else
-   root->rb_node = new;
+   __rb_change_child(old, new, parent, root);
 }
 
 void rb_insert_color(struct rb_node *node, struct rb_root *root)
@@ -375,13 +382,7 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
while ((left = node->rb_left) != NULL)
node = left;
 
-   if (rb_parent(old)) {
-   if (rb_parent(old)->rb_left == old)
-   rb_parent(old)->rb_left = node;
-   else
-   rb_parent(old)->rb_right = node;
-   } else
-   root->rb_node = node;
+   __rb_change_child(old, node, rb_parent(old), root);
 
child = node->rb_right;
parent = rb_parent(node);
@@ -410,13 +411,7 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 
if (child)
rb_set_parent(child, parent);
-   if (parent) {
-   if (parent->rb_left == node)
-   parent->rb_left = child;
-   else
-   parent->rb_right = child;
-   } else
-   root->rb_node = child;
+   __rb_change_child(node, child, parent, root);
 
 color:
if (color == RB_BLACK)
@@ -520,14 +515,7 @@ void rb_replace_node(struct rb_node *victim, struct 
rb_node *new,
struct rb_node *parent = rb_parent(victim);
 
/* Set the surrounding nodes to point to the replacement */
-   if (parent) {
-   if (victim == parent->rb_left)
-   parent->rb_left = new;
-   else
-   parent->rb_right = new;
-   } else {
-   root->rb_node = new;
-   }
+   __rb_change_child(victim, new, parent, root);
if (victim->rb_left)
rb_set_parent(victim->rb_left, new);
if (victim->rb_right)
-- 
2.13.1


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v6 06/16 RESEND] rbtree: low level optimizations in rb_insert_color()

2017-11-21 Thread Praveen Kumar
From: Michel Lespinasse 

- Use the newly introduced rb_set_parent_color() function to flip the color
  of nodes whose parent is already known.
- Optimize rb_parent() when the node is known to be red - there is no need
  to mask out the color in that case.
- Flipping gparent's color to red requires us to fetch its rb_parent_color
  field, so we can reuse it as the parent value for the next loop iteration.
- Do not use __rb_rotate_left() and __rb_rotate_right() to handle tree
  rotations: we already have pointers to all relevant nodes, and know their
  colors (either because we want to adjust it, or because we've tested it,
  or we can deduce it as black due to the node proximity to a known red node).
  So we can generate more efficient code by making use of the node pointers
  we already have, and setting both the parent and color attributes for
  nodes all at once. Also in Case 2, some node attributes don't have to
  be set because we know another tree rotation (Case 3) will always follow
  and override them.

Signed-off-by: Michel Lespinasse 
Cc: Andrea Arcangeli 
Acked-by: David Woodhouse 
Cc: Rik van Riel 
Cc: Peter Zijlstra 
Cc: Daniel Santos 
Cc: Jens Axboe 
Cc: "Eric W. Biederman" 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit 5bc9188aa207dafd47eab57df7c4fe5b3d3f636a]

Ported to Xen.

Signed-off-by: Praveen Kumar 
---
 xen/common/rbtree.c | 166 +---
 1 file changed, 131 insertions(+), 35 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index 244f1d8818..72dfcf9acb 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -23,6 +23,25 @@
 #include 
 #include 
 
+/*
+ * red-black trees properties:  http://en.wikipedia.org/wiki/Rbtree
+ *
+ *  1) A node is either red or black
+ *  2) The root is black
+ *  3) All leaves (NULL) are black
+ *  4) Both children of every red node are black
+ *  5) Every simple path from root to leaves contains the same number
+ * of black nodes.
+ *
+ *  4 and 5 give the O(log n) guarantee, since 4 implies you cannot have two
+ *  consecutive red nodes in a path and every red node is therefore followed by
+ *  a black. So if B is the number of black nodes on every simple path (as per
+ *  5), then the longest possible path due to 4 is 2B.
+ *
+ *  We shall indicate color with case, where black nodes are uppercase and red
+ *  nodes will be lowercase.
+ */
+
 #defineRB_RED  0
 #defineRB_BLACK1
 
@@ -41,6 +60,17 @@ static inline void rb_set_color(struct rb_node *rb, int color)
rb->__rb_parent_color = (rb->__rb_parent_color & ~1) | color;
 }
 
+static inline void rb_set_parent_color(struct rb_node *rb,
+ struct rb_node *p, int color)
+{
+   rb->__rb_parent_color = (unsigned long)p | color;
+}
+
+static inline struct rb_node *rb_red_parent(struct rb_node *red)
+{
+   return (struct rb_node *)red->__rb_parent_color;
+}
+
 static void __rb_rotate_left(struct rb_node *node, struct rb_root *root)
 {
struct rb_node *right = node->rb_right;
@@ -87,9 +117,30 @@ static void __rb_rotate_right(struct rb_node *node, struct rb_root *root)
rb_set_parent(node, left);
 }
 
+/*
+ * Helper function for rotations:
+ * - old's parent and color get assigned to new
+ * - old gets assigned new as a parent and 'color' as a color.
+ */
+static inline void
+__rb_rotate_set_parents(struct rb_node *old, struct rb_node *new,
+   struct rb_root *root, int color)
+{
+   struct rb_node *parent = rb_parent(old);
+   new->__rb_parent_color = old->__rb_parent_color;
+   rb_set_parent_color(old, new, color);
+   if (parent) {
+   if (parent->rb_left == old)
+   parent->rb_left = new;
+   else
+   parent->rb_right = new;
+   } else
+   root->rb_node = new;
+}
+
 void rb_insert_color(struct rb_node *node, struct rb_root *root)
 {
-   struct rb_node *parent, *gparent;
+   struct rb_node *parent = rb_red_parent(node), *gparent, *tmp;
 
while (true) {
/*
@@ -99,59 +150,104 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
 * Otherwise, take some corrective action as we don't
 * want a red root or two consecutive red nodes.
 */
-   parent = rb_parent(node);
if (!parent) {
-   rb_set_black(node);
+   rb_set_parent_color(node, NULL, RB_BLACK);
break;
} else if (rb_is_black(parent))
break;
 
-   gparent = rb_parent(parent);
-
-   if (parent == gparent->rb_left)
-   {
-   {
-   register struct rb_node *uncle = gparent->rb_right;
-   if (uncle && rb_is

[Xen-devel] [PATCH v6 14/16] rbtree: handle 1-child recoloring in rb_erase() instead of rb_erase_color()

2017-11-21 Thread Praveen Kumar
From: Michel Lespinasse 

An interesting observation for rb_erase() is that when a node has
exactly one child, the node must be black and the child must be red.
An interesting consequence is that removing such a node can be done by
simply replacing it with its child and making the child black,
which we can do efficiently in rb_erase(). __rb_erase_color() then
only needs to handle the no-child case and can be modified accordingly.
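
[Editorial illustration, not part of the patch: the shortcut described above
amounts to the following, where erase_one_child() is a hypothetical name and
__rb_change_child() comes from an earlier patch in this series:]

    /*
     * node has exactly one child: by property 4) node must be black and
     * its child red, so replacing node with the child and recoloring the
     * child black restores property 5) with no further rebalancing.
     */
    static void erase_one_child(struct rb_node *node, struct rb_node *child,
                                struct rb_root *root)
    {
        struct rb_node *parent = rb_parent(node);

        __rb_change_child(node, child, parent, root);   /* splice node out */
        rb_set_parent_color(child, parent, RB_BLACK);   /* fix black count */
    }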

Signed-off-by: Michel Lespinasse 
Acked-by: Rik van Riel 
Cc: Peter Zijlstra 
Cc: Andrea Arcangeli 
Cc: David Woodhouse 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit 46b6135a7402ac23c5b25f2bd79b03bab8f98278]

Ported to Xen.

Signed-off-by: Praveen Kumar 
---
Removed a newline from the previous patch to align with the Linux code base
---
 xen/common/rbtree.c | 105 +++-
 1 file changed, 62 insertions(+), 43 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index 13a622326d..e7df273800 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -2,7 +2,8 @@
   Red Black Trees
   (C) 1999  Andrea Arcangeli 
   (C) 2002  David Woodhouse 
-  
+  (C) 2012  Michel Lespinasse 
+
   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 2 of the License, or
@@ -50,6 +51,11 @@
 #define rb_is_red(r)   (!rb_color(r))
 #define rb_is_black(r) rb_color(r)
 
+static inline void rb_set_black(struct rb_node *rb)
+{
+   rb->__rb_parent_color |= RB_BLACK;
+}
+
 static inline void rb_set_parent(struct rb_node *rb, struct rb_node *p)
 {
rb->__rb_parent_color = rb_color(rb) | (unsigned long)p;
@@ -214,27 +220,18 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
 }
 EXPORT_SYMBOL(rb_insert_color);
 
-static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
-struct rb_root *root)
+static void __rb_erase_color(struct rb_node *parent, struct rb_root *root)
 {
-   struct rb_node *sibling, *tmp1, *tmp2;
+   struct rb_node *node = NULL, *sibling, *tmp1, *tmp2;
 
while (true) {
/*
-* Loop invariant: all leaf paths going through node have a
-* black node count that is 1 lower than other leaf paths.
-*
-* If node is red, we can flip it to black to adjust.
-* If node is the root, all leaf paths go through it.
-* Otherwise, we need to adjust the tree through color flips
-* and tree rotations as per one of the 4 cases below.
+* Loop invariants:
+* - node is black (or NULL on first iteration)
+* - node is not the root (parent is not NULL)
+* - All leaf paths going through parent and node have a
+*   black node count that is 1 lower than other leaf paths.
 */
-   if (node && rb_is_red(node)) {
-   rb_set_parent_color(node, parent, RB_BLACK);
-   break;
-   } else if (!parent) {
-   break;
-   }
sibling = parent->rb_right;
if (node != sibling) {  /* node == parent->rb_left */
if (rb_is_red(sibling)) {
@@ -268,17 +265,22 @@ static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
*  / \   / \
* Sl  SrSl  Sr
*
-   * This leaves us violating 5), so
-   * recurse at p. If p is red, the
-   * recursion will just flip it to black
-   * and exit. If coming from Case 1,
-   * p is known to be red.
+   * This leaves us violating 5) which
+   * can be fixed by flipping p to black
+   * if it was red, or by recursing at p.
+   * p is red when coming from Case 1.
*/
rb_set_parent_color(sibling, parent,
RB_RED);
-   node = parent;
-   parent = rb_parent(node);
-   continue;
+   if (rb_is_red(parent))
+   rb_set_black(parent);
+   else {
+   node = parent;
+   

[Xen-devel] [PATCH v6 15/16 RESEND] rbtree: low level optimizations in rb_erase()

2017-11-21 Thread Praveen Kumar
From: Michel Lespinasse 

Various minor optimizations in rb_erase():
- Avoid multiple loading of node->__rb_parent_color when computing parent
  and color information (possibly not in close sequence, as there might
  be further branches in the algorithm)
- In the 1-child subcase of case 1, copy the __rb_parent_color field from
  the erased node to the child instead of recomputing it from the desired
  parent and color
- When searching for the erased node's successor, differentiate between
  cases 2 and 3 based on whether any left links were followed. This avoids
  a test further down.
- In case 3, keep a pointer to the erased node's right child so we don't
  have to refetch it later to adjust its parent.
- In the no-child subcase of cases 2 and 3, place the rebalance assignment
  last so that the compiler can remove the following if (rebalance) test.

Also, added some comments to illustrate cases 2 and 3.
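
[Editorial sketch of the first two points, mirroring the code in the diff
below: __rb_parent_color is loaded once into 'pc' and reused for both the
parent pointer and the color test:]

    unsigned long pc = node->__rb_parent_color;         /* single load */
    struct rb_node *parent = __rb_parent(pc);           /* parent from pc */

    __rb_change_child(node, child, parent, root);
    if (child) {
        child->__rb_parent_color = pc;   /* copy instead of recomputing */
        rebalance = NULL;
    } else
        rebalance = __rb_is_black(pc) ? parent : NULL;  /* color from pc */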

Signed-off-by: Michel Lespinasse 
Acked-by: Rik van Riel 
Cc: Peter Zijlstra 
Cc: Andrea Arcangeli 
Cc: David Woodhouse 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit 4f035ad67f4633c233cb3642711d49b4efc9c82d]

Ported to Xen.

Signed-off-by: Praveen Kumar 
---
 xen/common/rbtree.c | 98 ++---
 1 file changed, 64 insertions(+), 34 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index e7df273800..5c4e239c24 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -47,9 +47,14 @@
 #defineRB_RED  0
 #defineRB_BLACK1
 
-#define rb_color(r)   ((r)->__rb_parent_color & 1)
-#define rb_is_red(r)   (!rb_color(r))
-#define rb_is_black(r) rb_color(r)
+#define __rb_parent(pc)((struct rb_node *)(pc & ~3))
+
+#define __rb_color(pc) ((pc) & 1)
+#define __rb_is_black(pc)  __rb_color(pc)
+#define __rb_is_red(pc)(!__rb_color(pc))
+#define rb_color(rb)   __rb_color((rb)->__rb_parent_color)
+#define rb_is_red(rb)  __rb_is_red((rb)->__rb_parent_color)
+#define rb_is_black(rb)__rb_is_black((rb)->__rb_parent_color)
 
 static inline void rb_set_black(struct rb_node *rb)
 {
@@ -378,6 +383,7 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 {
struct rb_node *child = node->rb_right, *tmp = node->rb_left;
struct rb_node *parent, *rebalance;
+   unsigned long pc;
 
if (!tmp) {
/*
@@ -387,51 +393,75 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 * and node must be black due to 4). We adjust colors locally
 * so as to bypass __rb_erase_color() later on.
 */
-
-   parent = rb_parent(node);
+   pc = node->__rb_parent_color;
+   parent = __rb_parent(pc);
__rb_change_child(node, child, parent, root);
if (child) {
-   rb_set_parent_color(child, parent, RB_BLACK);
+   child->__rb_parent_color = pc;
rebalance = NULL;
-   } else {
-   rebalance = rb_is_black(node) ? parent : NULL;
-   }
+   } else
+   rebalance = __rb_is_black(pc) ? parent : NULL;
} else if (!child) {
/* Still case 1, but this time the child is node->rb_left */
-   parent = rb_parent(node);
+   tmp->__rb_parent_color = pc = node->__rb_parent_color;
+   parent = __rb_parent(pc);
__rb_change_child(node, tmp, parent, root);
-   rb_set_parent_color(tmp, parent, RB_BLACK);
rebalance = NULL;
} else {
-   struct rb_node *old = node, *left;
-
-   node = child;
-   while ((left = node->rb_left) != NULL)
-   node = left;
-
-   __rb_change_child(old, node, rb_parent(old), root);
-
-   child = node->rb_right;
-   parent = rb_parent(node);
-
-   if (parent == old) {
-   parent = node;
+   struct rb_node *successor = child, *child2;
+   tmp = child->rb_left;
+   if (!tmp) {
+   /*
+* Case 2: node's successor is its right child
+*
+*(n)  (s)
+*/ \  / \
+*  (x) (s)  ->  (x) (c)
+*\
+*(c)
+*/
+   parent = child;
+   child2 = child->rb_right;
} else {
-   parent->rb_left = child;
-
-   node->rb_right = old->rb_right;
-   rb_set_parent(old->rb_right, node);
+   /*
+* Case 3: node's successor is leftmost under
+  

[Xen-devel] [PATCH v6 08/16 RESEND] rbtree: optimize case selection logic in __rb_erase_color()

2017-11-21 Thread Praveen Kumar
From: Michel Lespinasse 

In __rb_erase_color(), we have to select one of 3 cases depending on the
color of the 'other' node's children.  If both children are black, we flip a
few node colors and iterate.  Otherwise, we do either one or two tree
rotations, depending on the color of the 'other' child opposite to 'node',
and then we are done.

The corresponding logic had duplicate checks for the color of the 'other'
child opposite to 'node'.  It was checking it first to determine if both
children are black, and then to determine how many tree rotations are
required.  Rearrange the logic to avoid that extra check.
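
[Editorial sketch of the left-side case, with the recoloring/rotation bodies
reduced to comments; the right-side case is symmetric:]

    /* before: other->rb_right is tested twice */
    if ((!other->rb_left || rb_is_black(other->rb_left)) &&
        (!other->rb_right || rb_is_black(other->rb_right))) {
        /* both children black: recolor and iterate */
    } else {
        if (!other->rb_right || rb_is_black(other->rb_right)) {
            /* far child black: rotate right at 'other' first */
        }
        /* rotate left at parent, then done */
    }

    /* after: other->rb_right is tested only once */
    if (!other->rb_right || rb_is_black(other->rb_right)) {
        if (!other->rb_left || rb_is_black(other->rb_left)) {
            /* both children black: recolor and iterate */
            continue;
        }
        /* far child black: rotate right at 'other' first */
    }
    /* rotate left at parent, then done */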

Signed-off-by: Michel Lespinasse 
Cc: Andrea Arcangeli 
Acked-by: David Woodhouse 
Cc: Rik van Riel 
Cc: Peter Zijlstra 
Cc: Daniel Santos 
Cc: Jens Axboe 
Cc: "Eric W. Biederman" 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit e125d1471a4f8f1bf7ea9a83deb8d23cb40bd712]

Ported to Xen.

Signed-off-by: Praveen Kumar 
---
 xen/common/rbtree.c | 68 +++--
 1 file changed, 30 insertions(+), 38 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index 5d44533f57..462662886a 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -283,28 +283,24 @@ static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
__rb_rotate_left(parent, root);
other = parent->rb_right;
}
-   if ((!other->rb_left || rb_is_black(other->rb_left)) &&
-   (!other->rb_right || rb_is_black(other->rb_right)))
-   {
-   rb_set_red(other);
-   node = parent;
-   parent = rb_parent(node);
-   }
-   else
-   {
-   if (!other->rb_right || rb_is_black(other->rb_right))
-   {
-   rb_set_black(other->rb_left);
+   if (!other->rb_right || rb_is_black(other->rb_right)) {
+   if (!other->rb_left ||
+   rb_is_black(other->rb_left)) {
rb_set_red(other);
-   __rb_rotate_right(other, root);
-   other = parent->rb_right;
+   node = parent;
+   parent = rb_parent(node);
+   continue;
}
-   rb_set_color(other, rb_color(parent));
-   rb_set_black(parent);
-   rb_set_black(other->rb_right);
-   __rb_rotate_left(parent, root);
-   break;
+   rb_set_black(other->rb_left);
+   rb_set_red(other);
+   __rb_rotate_right(other, root);
+   other = parent->rb_right;
}
+   rb_set_color(other, rb_color(parent));
+   rb_set_black(parent);
+   rb_set_black(other->rb_right);
+   __rb_rotate_left(parent, root);
+   break;
} else {
other = parent->rb_left;
if (rb_is_red(other))
@@ -314,28 +310,24 @@ static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
__rb_rotate_right(parent, root);
other = parent->rb_left;
}
-   if ((!other->rb_left || rb_is_black(other->rb_left)) &&
-   (!other->rb_right || rb_is_black(other->rb_right)))
-   {
-   rb_set_red(other);
-   node = parent;
-   parent = rb_parent(node);
-   }
-   else
-   {
-   if (!other->rb_left || rb_is_black(other->rb_left))
-   {
-   rb_set_black(other->rb_right);
+   if (!other->rb_left || rb_is_black(other->rb_left)) {
+   if (!other->rb_right ||
+   rb_is_black(other->rb_right)) {
rb_set_red(other);
-   __rb_rotate_left(other, root);
-   other = parent->rb_left;
+   node = parent;
+   parent = rb_parent(

[Xen-devel] [PATCH v6 10/16 RESEND] rbtree: coding style adjustments

2017-11-21 Thread Praveen Kumar
From: Michel Lespinasse 

Set the comment and indentation style to be consistent with the Linux
coding style and the rest of the file, as suggested by Peter Zijlstra.

Signed-off-by: Michel Lespinasse 
Cc: Andrea Arcangeli 
Acked-by: David Woodhouse 
Cc: Rik van Riel 
Cc: Peter Zijlstra 
Cc: Daniel Santos 
Cc: Jens Axboe 
Cc: "Eric W. Biederman" 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit 7ce6ff9e5de99e7b72019c7de82fb438fe1dc5a0]

Ported to Xen.

Signed-off-by: Praveen Kumar 
---
 xen/common/rbtree.c | 42 +++---
 1 file changed, 23 insertions(+), 19 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index 0ad1a1455d..b964171bee 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -363,8 +363,7 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
child = node->rb_right;
else if (!node->rb_right)
child = node->rb_left;
-   else
-   {
+   else {
struct rb_node *old = node, *left;
 
node = node->rb_right;
@@ -406,17 +405,15 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 
if (child)
rb_set_parent(child, parent);
-   if (parent)
-   {
+   if (parent) {
if (parent->rb_left == node)
parent->rb_left = child;
else
parent->rb_right = child;
-   }
-   else
+   } else
root->rb_node = child;
 
- color:
+color:
if (color == RB_BLACK)
__rb_erase_color(child, parent, root);
 }
@@ -458,8 +455,10 @@ struct rb_node *rb_next(const struct rb_node *node)
if (RB_EMPTY_NODE(node))
return NULL;
 
-   /* If we have a right-hand child, go down and then left as far
-  as we can. */
+   /*
+* If we have a right-hand child, go down and then left as far
+* as we can.
+*/
if (node->rb_right) {
node = node->rb_right;
while (node->rb_left)
@@ -467,12 +466,13 @@ struct rb_node *rb_next(const struct rb_node *node)
return (struct rb_node *)node;
}
 
-   /* No right-hand children.  Everything down and left is
-  smaller than us, so any 'next' node must be in the general
-  direction of our parent. Go up the tree; any time the
-  ancestor is a right-hand child of its parent, keep going
-  up. First time it's a left-hand child of its parent, said
-  parent is our 'next' node. */
+   /*
+* No right-hand children. Everything down and left is smaller than us,
+* so any 'next' node must be in the general direction of our parent.
+* Go up the tree; any time the ancestor is a right-hand child of its
+* parent, keep going up. First time it's a left-hand child of its
+* parent, said parent is our 'next' node.
+*/
while ((parent = rb_parent(node)) && node == parent->rb_right)
node = parent;
 
@@ -487,8 +487,10 @@ struct rb_node *rb_prev(const struct rb_node *node)
if (RB_EMPTY_NODE(node))
return NULL;
 
-   /* If we have a left-hand child, go down and then right as far
-  as we can. */
+   /*
+* If we have a left-hand child, go down and then right as far
+* as we can.
+*/
if (node->rb_left) {
node = node->rb_left;
while (node->rb_right)
@@ -496,8 +498,10 @@ struct rb_node *rb_prev(const struct rb_node *node)
return (struct rb_node *)node;
}
 
-   /* No left-hand children. Go up till we find an ancestor which
-  is a right-hand child of its parent */
+   /*
+* No left-hand children. Go up till we find an ancestor which
+* is a right-hand child of its parent
+*/
while ((parent = rb_parent(node)) && node == parent->rb_left)
node = parent;
 
-- 
2.13.1




[Xen-devel] [PATCH v6 11/16 RESEND] rbtree: optimize fetching of sibling node

2017-11-21 Thread Praveen Kumar
From: Michel Lespinasse 

When looking to fetch a node's sibling, we went through a sequence of:
- check if node is the parent's left child
- if it is, then fetch the parent's right child

This can be replaced with:
- fetch the parent's right child as an assumed sibling
- check that node is NOT the fetched child

This avoids fetching the parent's left child when node is actually
that child. Saves a bit on code size, though it doesn't seem to make
a large difference in speed.
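
[Editorial sketch of the two forms:]

    /* before: test first, then fetch */
    if (parent->rb_left == node) {
        sibling = parent->rb_right;
        /* ... */
    }

    /* after: fetch speculatively, then test */
    sibling = parent->rb_right;
    if (node != sibling) {      /* implies node == parent->rb_left */
        /* ... */
    }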

Signed-off-by: Michel Lespinasse 
Cc: Andrea Arcangeli 
Cc: David Woodhouse 
Acked-by: Rik van Riel 
Cc: Peter Zijlstra 
Cc: Daniel Santos 
Cc: Jens Axboe 
Cc: "Eric W. Biederman" 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit 59633abf34e2f44b8e772a2c12a92132aa7c2220]

Ported to Xen.

Signed-off-by: Praveen Kumar 
---
 xen/common/rbtree.c | 21 +
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index b964171bee..07b0875227 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -107,8 +107,8 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
 
gparent = rb_red_parent(parent);
 
-   if (parent == gparent->rb_left) {
-   tmp = gparent->rb_right;
+   tmp = gparent->rb_right;
+   if (parent != tmp) {/* parent == gparent->rb_left */
if (tmp && rb_is_red(tmp)) {
/*
 * Case 1 - color flips
@@ -131,7 +131,8 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
continue;
}
 
-   if (parent->rb_right == node) {
+   tmp = parent->rb_right;
+   if (node == tmp) {
/*
 * Case 2 - left rotate at parent
 *
@@ -151,6 +152,7 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
RB_BLACK);
rb_set_parent_color(parent, node, RB_RED);
parent = node;
+   tmp = node->rb_right;
}
 
/*
@@ -162,7 +164,7 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
 * / \
 *n   U
 */
-   gparent->rb_left = tmp = parent->rb_right;
+   gparent->rb_left = tmp;  /* == parent->rb_right */
parent->rb_right = gparent;
if (tmp)
rb_set_parent_color(tmp, gparent, RB_BLACK);
@@ -180,7 +182,8 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
continue;
}
 
-   if (parent->rb_left == node) {
+   tmp = parent->rb_left;
+   if (node == tmp) {
/* Case 2 - right rotate at parent */
parent->rb_left = tmp = node->rb_right;
node->rb_right = parent;
@@ -189,10 +192,11 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
RB_BLACK);
rb_set_parent_color(parent, node, RB_RED);
parent = node;
+   tmp = node->rb_left;
}
 
/* Case 3 - left rotate at gparent */
-   gparent->rb_right = tmp = parent->rb_left;
+   gparent->rb_right = tmp;  /* == parent->rb_left */
parent->rb_left = gparent;
if (tmp)
rb_set_parent_color(tmp, gparent, RB_BLACK);
@@ -223,8 +227,9 @@ static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
break;
} else if (!parent) {
break;
-   } else if (parent->rb_left == node) {
-   sibling = parent->rb_right;
+   }
+   sibling = parent->rb_right;
+   if (node != sibling) {  /* node == parent->rb_left */
if (rb_is_red(sibling)) {
/*
 * Case 1 - left rotate at parent
-- 
2.13.1




[Xen-devel] [PATCH v6 05/16 RESEND] rbtree: adjust root color in rb_insert_color() only when necessary

2017-11-21 Thread Praveen Kumar
From: Michel Lespinasse 

The root node of an rbtree must always be black.  However,
rb_insert_color() only needs to maintain this invariant when it has been
broken - that is, when it exits the loop due to the current (red) node
being the root.  In all other cases (exiting after tree rotations, or
exiting due to an existing black parent) the invariant is already
satisfied, so there is no need to adjust the root node color.
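
[Editorial sketch of the resulting loop structure, with the rebalancing
bodies elided:]

    while (true) {
        parent = rb_parent(node);
        if (!parent) {
            rb_set_black(node);   /* red root: the one case needing recolor */
            break;
        } else if (rb_is_black(parent))
            break;                /* invariants already hold */
        /* red parent: color flips and/or rotations, as before */
    }
    /* no unconditional rb_set_black(root->rb_node) after the loop */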

Signed-off-by: Michel Lespinasse 
Cc: Andrea Arcangeli 
Acked-by: David Woodhouse 
Cc: Rik van Riel 
Cc: Peter Zijlstra 
Cc: Daniel Santos 
Cc: Jens Axboe 
Cc: "Eric W. Biederman" 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit 6d58452dc066db61acdff7b84671db1b11a3de1c]

Ported to Xen.

Signed-off-by: Praveen Kumar 
---
 xen/common/rbtree.c | 19 +++
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index 9dc296e0d8..244f1d8818 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -91,8 +91,21 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
 {
struct rb_node *parent, *gparent;
 
-   while ((parent = rb_parent(node)) && rb_is_red(parent))
-   {
+   while (true) {
+   /*
+* Loop invariant: node is red
+*
+* If there is a black parent, we are done.
+* Otherwise, take some corrective action as we don't
+* want a red root or two consecutive red nodes.
+*/
+   parent = rb_parent(node);
+   if (!parent) {
+   rb_set_black(node);
+   break;
+   } else if (rb_is_black(parent))
+   break;
+
gparent = rb_parent(parent);
 
if (parent == gparent->rb_left)
@@ -142,8 +155,6 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
break;
}
}
-
-   rb_set_black(root->rb_node);
 }
 EXPORT_SYMBOL(rb_insert_color);
 
-- 
2.13.1




[Xen-devel] [PATCH v6 09/16 RESEND] rbtree: low level optimizations in __rb_erase_color()

2017-11-21 Thread Praveen Kumar
From: Michel Lespinasse 

In __rb_erase_color(), we often already have pointers to the nodes being
rotated and/or know what their colors must be, so we can generate more
efficient code than the generic __rb_rotate_left() and __rb_rotate_right()
functions.

Also when the current node is red or when flipping the sibling's color,
the parent is already known so we can use the more efficient
rb_set_parent_color() function to set the desired color.
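
[Editorial illustration of both points, along the lines of the Case 1
handling in the diff below: a red sibling implies a black parent and black,
non-NULL sibling children, so the left rotation at parent can be open-coded
with pointers and colors set directly:]

    sibling = parent->rb_right;
    if (rb_is_red(sibling)) {
        /* parent is black; sibling->rb_left is a black non-leaf node */
        parent->rb_right = tmp1 = sibling->rb_left;
        sibling->rb_left = parent;
        rb_set_parent_color(tmp1, parent, RB_BLACK);
        __rb_rotate_set_parents(parent, sibling, root, RB_RED);
        sibling = tmp1;
    }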

Signed-off-by: Michel Lespinasse 
Cc: Andrea Arcangeli 
Acked-by: David Woodhouse 
Cc: Rik van Riel 
Cc: Peter Zijlstra 
Cc: Daniel Santos 
Cc: Jens Axboe 
Cc: "Eric W. Biederman" 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit 6280d2356fd8ad0936a63c10dc1e6accf48d0c61]

Ported to Xen.

Signed-off-by: Praveen Kumar 
---
 xen/common/rbtree.c | 208 +---
 1 file changed, 115 insertions(+), 93 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index 462662886a..0ad1a1455d 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -39,7 +39,8 @@
  *  5), then the longest possible path due to 4 is 2B.
  *
  *  We shall indicate color with case, where black nodes are uppercase and red
- *  nodes will be lowercase.
+ *  nodes will be lowercase. Unknown color nodes shall be drawn as red within
+ *  parentheses and have some accompanying text comment.
  */
 
 #defineRB_RED  0
@@ -48,17 +49,11 @@
 #define rb_color(r)   ((r)->__rb_parent_color & 1)
 #define rb_is_red(r)   (!rb_color(r))
 #define rb_is_black(r) rb_color(r)
-#define rb_set_red(r)  do { (r)->__rb_parent_color &= ~1; } while (0)
-#define rb_set_black(r)  do { (r)->__rb_parent_color |= 1; } while (0)
 
 static inline void rb_set_parent(struct rb_node *rb, struct rb_node *p)
 {
rb->__rb_parent_color = rb_color(rb) | (unsigned long)p;
 }
-static inline void rb_set_color(struct rb_node *rb, int color)
-{
-   rb->__rb_parent_color = (rb->__rb_parent_color & ~1) | color;
-}
 
 static inline void rb_set_parent_color(struct rb_node *rb,
  struct rb_node *p, int color)
@@ -71,52 +66,6 @@ static inline struct rb_node *rb_red_parent(struct rb_node *red)
return (struct rb_node *)red->__rb_parent_color;
 }
 
-static void __rb_rotate_left(struct rb_node *node, struct rb_root *root)
-{
-   struct rb_node *right = node->rb_right;
-   struct rb_node *parent = rb_parent(node);
-
-   if ((node->rb_right = right->rb_left))
-   rb_set_parent(right->rb_left, node);
-   right->rb_left = node;
-
-   rb_set_parent(right, parent);
-
-   if (parent)
-   {
-   if (node == parent->rb_left)
-   parent->rb_left = right;
-   else
-   parent->rb_right = right;
-   }
-   else
-   root->rb_node = right;
-   rb_set_parent(node, right);
-}
-
-static void __rb_rotate_right(struct rb_node *node, struct rb_root *root)
-{
-   struct rb_node *left = node->rb_left;
-   struct rb_node *parent = rb_parent(node);
-
-   if ((node->rb_left = left->rb_right))
-   rb_set_parent(left->rb_right, node);
-   left->rb_right = node;
-
-   rb_set_parent(left, parent);
-
-   if (parent)
-   {
-   if (node == parent->rb_right)
-   parent->rb_right = left;
-   else
-   parent->rb_left = left;
-   }
-   else
-   root->rb_node = left;
-   rb_set_parent(node, left);
-}
-
 /*
  * Helper function for rotations:
  * - old's parent and color get assigned to new
@@ -257,7 +206,7 @@ EXPORT_SYMBOL(rb_insert_color);
 static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
 struct rb_root *root)
 {
-   struct rb_node *other;
+   struct rb_node *sibling, *tmp1, *tmp2;
 
while (true) {
/*
@@ -270,63 +219,136 @@ static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
 * and tree rotations as per one of the 4 cases below.
 */
if (node && rb_is_red(node)) {
-   rb_set_black(node);
+   rb_set_parent_color(node, parent, RB_BLACK);
break;
} else if (!parent) {
break;
} else if (parent->rb_left == node) {
-   other = parent->rb_right;
-   if (rb_is_red(other))
-   {
-   rb_set_black(other);
-   rb_set_red(parent);
-   __rb_rotate_left(parent, root);
-   other = parent->rb_right;
+   sibling = parent->rb_right;
+   if (rb_is_red(sibling)) {
+   /*
+  

[Xen-devel] [PATCH v6 07/16 RESEND] rbtree: adjust node color in __rb_erase_color() only when necessary

2017-11-21 Thread Praveen Kumar
From: Michel Lespinasse 

In __rb_erase_color(), we were always setting a node to black after
exiting the main loop.  And in one case, after fixing up the tree to
satisfy all rbtree invariants, we were setting the current node to root
just to guarantee a loop exit, at which point the root would be set to
black.  However this is not necessary, as the root of an rbtree is already
known to be black.  The only case where the color flip is required is when
we exit the loop due to the current node being red, and it's easiest to
just do the flip at that point instead of doing it after the loop.
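
[Editorial sketch of the resulting exit logic, with the four rebalancing
cases elided:]

    while (true) {
        if (node && rb_is_red(node)) {
            rb_set_black(node);   /* flip exactly at the exit point */
            break;
        } else if (!parent)
            break;                /* reached the root: already black */
        /* ... one of the 4 rebalancing cases ... */
    }
    /* the trailing 'if (node) rb_set_black(node);' is no longer needed */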

[adrian.hun...@intel.com: perf tools: fix build for another rbtree.c change]
Signed-off-by: Michel Lespinasse 
Cc: Andrea Arcangeli 
Acked-by: David Woodhouse 
Cc: Rik van Riel 
Cc: Peter Zijlstra 
Cc: Daniel Santos 
Cc: Jens Axboe 
Cc: "Eric W. Biederman" 
Signed-off-by: Adrian Hunter 
Cc: Alexander Shishkin 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit d6ff1273928ebf15466a85b7e1810cd00e72998b]

Ported only rbtree.c to Xen.

Signed-off-by: Praveen Kumar 
---
 xen/common/rbtree.c | 28 +---
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index 72dfcf9acb..5d44533f57 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -259,10 +259,22 @@ static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
 {
struct rb_node *other;
 
-   while ((!node || rb_is_black(node)) && node != root->rb_node)
-   {
-   if (parent->rb_left == node)
-   {
+   while (true) {
+   /*
+* Loop invariant: all leaf paths going through node have a
+* black node count that is 1 lower than other leaf paths.
+*
+* If node is red, we can flip it to black to adjust.
+* If node is the root, all leaf paths go through it.
+* Otherwise, we need to adjust the tree through color flips
+* and tree rotations as per one of the 4 cases below.
+*/
+   if (node && rb_is_red(node)) {
+   rb_set_black(node);
+   break;
+   } else if (!parent) {
+   break;
+   } else if (parent->rb_left == node) {
other = parent->rb_right;
if (rb_is_red(other))
{
@@ -291,12 +303,9 @@ static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
rb_set_black(parent);
rb_set_black(other->rb_right);
__rb_rotate_left(parent, root);
-   node = root->rb_node;
break;
}
-   }
-   else
-   {
+   } else {
other = parent->rb_left;
if (rb_is_red(other))
{
@@ -325,13 +334,10 @@ static void __rb_erase_color(struct rb_node *node, struct rb_node *parent,
rb_set_black(parent);
rb_set_black(other->rb_left);
__rb_rotate_right(parent, root);
-   node = root->rb_node;
break;
}
}
}
-   if (node)
-   rb_set_black(node);
 }
 
 void rb_erase(struct rb_node *node, struct rb_root *root)
-- 
2.13.1




[Xen-devel] [PATCH v6 04/16 RESEND] rbtree: break out of rb_insert_color loop after tree rotation

2017-11-21 Thread Praveen Kumar
From: Michel Lespinasse 

It is a well known property of rbtrees that insertion never requires more
than two tree rotations.  In our implementation, after one loop iteration
identified one or two necessary tree rotations, we would iterate and look
for more.  However at that point the node's parent would always be black,
which would cause us to exit the loop.

We can make the code flow more obvious by just adding a break statement
after the tree rotations, where we know we are done.  Additionally, in the
cases where two tree rotations are necessary, we don't have to update the
'node' pointer as it wouldn't be used until the next loop iteration, which
we now avoid due to this break statement.
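
[Editorial sketch of the left-side case after the change; the mirror case is
analogous:]

    if (parent->rb_right == node) {
        __rb_rotate_left(parent, root);   /* first rotation */
        parent = node;                    /* 'node' needs no update */
    }
    rb_set_black(parent);
    rb_set_red(gparent);
    __rb_rotate_right(gparent, root);     /* second rotation */
    break;                                /* at most two rotations: done */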

Signed-off-by: Michel Lespinasse 
Cc: Andrea Arcangeli 
Acked-by: David Woodhouse 
Cc: Rik van Riel 
Cc: Peter Zijlstra 
Cc: Daniel Santos 
Cc: Jens Axboe 
Cc: "Eric W. Biederman" 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit 1f0528653e41ec230c60f5738820e8a544731399]

Ported to Xen.

Signed-off-by: Praveen Kumar 
---
 xen/common/rbtree.c | 14 --
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index a75b336ba2..9dc296e0d8 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -109,18 +109,15 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
}
}
 
-   if (parent->rb_right == node)
-   {
-   register struct rb_node *tmp;
+   if (parent->rb_right == node) {
__rb_rotate_left(parent, root);
-   tmp = parent;
parent = node;
-   node = tmp;
}
 
rb_set_black(parent);
rb_set_red(gparent);
__rb_rotate_right(gparent, root);
+   break;
} else {
{
register struct rb_node *uncle = gparent->rb_left;
@@ -134,18 +131,15 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
}
}
 
-   if (parent->rb_left == node)
-   {
-   register struct rb_node *tmp;
+   if (parent->rb_left == node) {
__rb_rotate_right(parent, root);
-   tmp = parent;
parent = node;
-   node = tmp;
}
 
rb_set_black(parent);
rb_set_red(gparent);
__rb_rotate_left(gparent, root);
+   break;
}
}
 
-- 
2.13.1




[Xen-devel] [PATCH v6 03/16 RESEND] rbtree: move some implementation details from rbtree.h to rbtree.c

2017-11-21 Thread Praveen Kumar
From: Michel Lespinasse 

rbtree users must use the documented APIs to manipulate the tree
structure.  Low-level helpers to manipulate node colors and parenthood are
not part of that API, so move them to lib/rbtree.c

Signed-off-by: Michel Lespinasse 
Cc: Andrea Arcangeli 
Acked-by: David Woodhouse 
Cc: Rik van Riel 
Cc: Peter Zijlstra 
Cc: Daniel Santos 
Cc: Jens Axboe 
Cc: "Eric W. Biederman" 
Signed-off-by: David Woodhouse 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit bf7ad8eeab995710c766df49c9c69a8592ca0216]

Ported to Xen.

Signed-off-by: Praveen Kumar 
---
 xen/common/rbtree.c  | 20 +++-
 xen/include/xen/rbtree.h | 34 +-
 2 files changed, 28 insertions(+), 26 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index 76f009f5a9..a75b336ba2 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -23,6 +23,24 @@
 #include 
 #include 
 
+#defineRB_RED  0
+#defineRB_BLACK1
+
+#define rb_color(r)   ((r)->__rb_parent_color & 1)
+#define rb_is_red(r)   (!rb_color(r))
+#define rb_is_black(r) rb_color(r)
+#define rb_set_red(r)  do { (r)->__rb_parent_color &= ~1; } while (0)
+#define rb_set_black(r)  do { (r)->__rb_parent_color |= 1; } while (0)
+
+static inline void rb_set_parent(struct rb_node *rb, struct rb_node *p)
+{
+   rb->__rb_parent_color = rb_color(rb) | (unsigned long)p;
+}
+static inline void rb_set_color(struct rb_node *rb, int color)
+{
+   rb->__rb_parent_color = (rb->__rb_parent_color & ~1) | color;
+}
+
 static void __rb_rotate_left(struct rb_node *node, struct rb_root *root)
 {
struct rb_node *right = node->rb_right;
@@ -255,7 +273,7 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
rb_set_parent(old->rb_right, node);
}
 
-   node->rb_parent_color = old->rb_parent_color;
+   node->__rb_parent_color = old->__rb_parent_color;
node->rb_left = old->rb_left;
rb_set_parent(old->rb_left, node);
 
diff --git a/xen/include/xen/rbtree.h b/xen/include/xen/rbtree.h
index e947e3800f..1b72590e4e 100644
--- a/xen/include/xen/rbtree.h
+++ b/xen/include/xen/rbtree.h
@@ -94,36 +94,18 @@ static inline struct page * rb_insert_page_cache(struct inode * inode,
 #ifndef __RBTREE_H__
 #define __RBTREE_H__
 
-struct rb_node
-{
-   unsigned long  rb_parent_color;
-#defineRB_RED  0
-#defineRB_BLACK1
+struct rb_node {
+   unsigned long  __rb_parent_color;
struct rb_node *rb_right;
struct rb_node *rb_left;
 } __attribute__((aligned(sizeof(long;
 /* The alignment might seem pointless, but allegedly CRIS needs it */
 
-struct rb_root
-{
+struct rb_root {
struct rb_node *rb_node;
 };
 
-#define rb_parent(r)   ((struct rb_node *)((r)->rb_parent_color & ~3))
-#define rb_color(r)   ((r)->rb_parent_color & 1)
-#define rb_is_red(r)   (!rb_color(r))
-#define rb_is_black(r) rb_color(r)
-#define rb_set_red(r)  do { (r)->rb_parent_color &= ~1; } while (0)
-#define rb_set_black(r)  do { (r)->rb_parent_color |= 1; } while (0)
-
-static inline void rb_set_parent(struct rb_node *rb, struct rb_node *p)
-{
-   rb->rb_parent_color = (rb->rb_parent_color & 3) | (unsigned long)p;
-}
-static inline void rb_set_color(struct rb_node *rb, int color)
-{
-   rb->rb_parent_color = (rb->rb_parent_color & ~1) | color;
-}
+#define rb_parent(r)   ((struct rb_node *)((r)->__rb_parent_color & ~3))
 
 #define RB_ROOT(struct rb_root) { NULL, }
 #definerb_entry(ptr, type, member) container_of(ptr, type, member)
@@ -131,8 +113,10 @@ static inline void rb_set_color(struct rb_node *rb, int color)
 #define RB_EMPTY_ROOT(root)  ((root)->rb_node == NULL)
 
 /* 'empty' nodes are nodes that are known not to be inserted in an rbtree */
-#define RB_EMPTY_NODE(node)  ((node)->rb_parent_color == (unsigned long)(node))
-#define RB_CLEAR_NODE(node)  ((node)->rb_parent_color = (unsigned long)(node))
+#define RB_EMPTY_NODE(node)  \
+   ((node)->__rb_parent_color == (unsigned long)(node))
+#define RB_CLEAR_NODE(node)  \
+   ((node)->__rb_parent_color = (unsigned long)(node))
 
 extern void rb_insert_color(struct rb_node *, struct rb_root *);
 extern void rb_erase(struct rb_node *, struct rb_root *);
@@ -150,7 +134,7 @@ extern void rb_replace_node(struct rb_node *victim, struct rb_node *new,
 static inline void rb_link_node(struct rb_node * node, struct rb_node * parent,
struct rb_node ** rb_link)
 {
-   node->rb_parent_color = (unsigned long )parent;
+   node->__rb_parent_color = (unsigned long )parent;
node->rb_left = node->rb_right = NULL;
 
*rb_link = node;
-- 
2.13.1




[Xen-devel] [PATCH v6 02/16 RESEND] rbtree: empty nodes have no color

2017-11-21 Thread Praveen Kumar
From: Michel Lespinasse 

Empty nodes have no color.  We can make use of this property to simplify
the code emitted by the RB_EMPTY_NODE and RB_CLEAR_NODE macros.  Also,
we can get rid of the rb_init_node function which had been introduced by
commit 88d19cf37952 ("timers: Add rb_init_node() to allow for stack
allocated rb nodes") to avoid some issue with the empty node's color not
being initialized.
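
[Editorial illustration of the simplified macros in use:]

    struct rb_node n;

    RB_CLEAR_NODE(&n);         /* n.rb_parent_color = (unsigned long)&n */
    if (RB_EMPTY_NODE(&n))     /* one word compare, no color masking */
        ;                      /* n is known not to be in any tree */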

I'm not sure what the RB_EMPTY_NODE checks in rb_prev() / rb_next() are
doing there, though.  axboe introduced them in commit 10fd48f2376d
("rbtree: fixed reversed RB_EMPTY_NODE and rb_next/prev").  The way I
see it, the 'empty node' abstraction is only used by rbtree users to
flag nodes that they haven't inserted in any rbtree, so asking the
predecessor or successor of such nodes doesn't make any sense.

One final rb_init_node() caller was recently added in sysctl code to
implement faster sysctl name lookups.  This code doesn't make use of
RB_EMPTY_NODE at all, and from what I could see it only called
rb_init_node() under the mistaken assumption that such initialization was
required before node insertion.

[s...@canb.auug.org.au: fix net/ceph/osd_client.c build]
Signed-off-by: Michel Lespinasse 
Cc: Andrea Arcangeli 
Acked-by: David Woodhouse 
Cc: Rik van Riel 
Cc: Peter Zijlstra 
Cc: Daniel Santos 
Cc: Jens Axboe 
Cc: "Eric W. Biederman" 
Cc: John Stultz 
Signed-off-by: Stephen Rothwell 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit 4c199a93a2d36b277a9fd209a0f2793f8460a215]

Ported rbtree.h and rbtree.c changes which are relevant to Xen.

Signed-off-by: Praveen Kumar 
---
 xen/common/rbtree.c  | 4 ++--
 xen/include/xen/rbtree.h | 8 +---
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index 62e6387dcd..76f009f5a9 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -316,7 +316,7 @@ struct rb_node *rb_next(const struct rb_node *node)
 {
struct rb_node *parent;
 
-   if (rb_parent(node) == node)
+   if (RB_EMPTY_NODE(node))
return NULL;
 
/* If we have a right-hand child, go down and then left as far
@@ -345,7 +345,7 @@ struct rb_node *rb_prev(const struct rb_node *node)
 {
struct rb_node *parent;
 
-   if (rb_parent(node) == node)
+   if (RB_EMPTY_NODE(node))
return NULL;
 
/* If we have a left-hand child, go down and then right as far
diff --git a/xen/include/xen/rbtree.h b/xen/include/xen/rbtree.h
index 9496f099f8..e947e3800f 100644
--- a/xen/include/xen/rbtree.h
+++ b/xen/include/xen/rbtree.h
@@ -128,9 +128,11 @@ static inline void rb_set_color(struct rb_node *rb, int color)
 #define RB_ROOT(struct rb_root) { NULL, }
 #definerb_entry(ptr, type, member) container_of(ptr, type, member)
 
-#define RB_EMPTY_ROOT(root)((root)->rb_node == NULL)
-#define RB_EMPTY_NODE(node)(rb_parent(node) == node)
-#define RB_CLEAR_NODE(node)(rb_set_parent(node, node))
+#define RB_EMPTY_ROOT(root)  ((root)->rb_node == NULL)
+
+/* 'empty' nodes are nodes that are known not to be inserted in an rbtree */
+#define RB_EMPTY_NODE(node)  ((node)->rb_parent_color == (unsigned long)(node))
+#define RB_CLEAR_NODE(node)  ((node)->rb_parent_color = (unsigned long)(node))
 
 extern void rb_insert_color(struct rb_node *, struct rb_root *);
 extern void rb_erase(struct rb_node *, struct rb_root *);
-- 
2.13.1




[Xen-devel] [PATCH v6 00/16] xen: common: rbtree: ported updates from Linux tree

2017-11-21 Thread Praveen Kumar
Hi All,

This series imports changes and updates to the rbtree implementation
from the Linux tree. The only current user is tmem.c, which I am not
very familiar with, so I was unable to test the changes thoroughly.
Having said that, I plan to add further code that uses rbtrees in the
credit2 scheduler, and that will help in testing this work further.

I have not imported the augmented-rbtree, RCU, and new-functionality
patches, as there is no specific requirement for them in the currently
planned implementation.

Below are the Linux commits which were not imported, by category:

Augmented rbtree:
14b94af0b251a2c80885b60538166fb7d04a642e
9d9e6f9703bbd642f3f2f807e6aaa642a4cbcec9
9c079add0d0f45220f4bb37febf0621137ec2d38
3cb7a56344ca45ee56d71c5f8fe9f922306bff1f
f231aebfc4cae2f6ed27a46a31e2630909513d77


Add postorder iteration functions:
9dee5c51516d2c3fff22633c1272c5652e68075a

RCU-related implementation:
d72da4a4d973d8a0a0d3c97e7cdebf287fbe3a99
c1adf20052d80f776849fa2c1acb472cdeb7786c
ce093a04543c403d52c1a5788d8cb92e47453aba

Please share your inputs. Thanks in advance.

Regards,

~Praveen.

Praveen Kumar (16):
  rbtree: remove redundant if()-condition in rb_erase()
  rbtree: empty nodes have no color
  rbtree: move some implementation details from rbtree.h to rbtree.c
  rbtree: break out of rb_insert_color loop after tree rotation
  rbtree: adjust root color in rb_insert_color() only when necessary
  rbtree: low level optimizations in rb_insert_color()
  rbtree: adjust node color in __rb_erase_color() only when necessary
  rbtree: optimize case selection logic in __rb_erase_color()
  rbtree: low level optimizations in __rb_erase_color()
  rbtree: coding style adjustments
  rbtree: optimize fetching of sibling node
  rbtree: add __rb_change_child() helper function
  rbtree: place easiest case first in rb_erase()
  rbtree: handle 1-child recoloring in rb_erase() instead of
rb_erase_color()
  rbtree: low level optimizations in rb_erase()
  rbtree: fix typo in comment of rb_insert_color

 xen/common/rbtree.c  | 646 ++-
 xen/include/xen/rbtree.h |  38 +--
 2 files changed, 428 insertions(+), 256 deletions(-)

---
Updated set of changes addressing the review comments.
2.13.1




[Xen-devel] [PATCH v6 01/16] rbtree: remove redundant if()-condition in rb_erase()

2017-11-21 Thread Praveen Kumar
From: Wolfram Strepp 

Furthermore, notice that the initial checks:

if (!node->rb_left)
child = node->rb_right;
else if (!node->rb_right)
child = node->rb_left;
else
{
...
}
guarantee that old->rb_right is set in the final else branch, therefore
we can omit checking that again.

Signed-off-by: Wolfram Strepp 
Signed-off-by: Peter Zijlstra 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
[Linux commit 4b324126e0c6c3a5080ca3ec0981e8766ed6f1ee]

Ported to Xen.

Signed-off-by: Praveen Kumar 
---
Removed a newline from the previous patch to sync the changes completely
with the Linux code base.
---
 xen/common/rbtree.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/common/rbtree.c b/xen/common/rbtree.c
index 167ebfdc4d..62e6387dcd 100644
--- a/xen/common/rbtree.c
+++ b/xen/common/rbtree.c
@@ -250,15 +250,15 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
if (child)
rb_set_parent(child, parent);
parent->rb_left = child;
+
+   node->rb_right = old->rb_right;
+   rb_set_parent(old->rb_right, node);
}
 
node->rb_parent_color = old->rb_parent_color;
-   node->rb_right = old->rb_right;
node->rb_left = old->rb_left;
-
rb_set_parent(old->rb_left, node);
-   if (old->rb_right)
-   rb_set_parent(old->rb_right, node);
+
goto color;
}
 
-- 
2.13.1




[Xen-devel] [PATCH] mini-os: add config item for printing via hypervisor

2017-11-21 Thread Juergen Gross
Today Mini-OS will print all console output via the hypervisor, too.

Make this behavior configurable instead and default it to "off".

Signed-off-by: Juergen Gross 
---
 Config.mk | 2 ++
 arch/x86/testbuild/all-no | 1 +
 arch/x86/testbuild/all-yes| 1 +
 arch/x86/testbuild/newxen-yes | 1 +
 console/console.c | 7 +--
 5 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/Config.mk b/Config.mk
index 0baedd1..586dce0 100644
--- a/Config.mk
+++ b/Config.mk
@@ -180,6 +180,7 @@ CONFIG_XENBUS ?= y
 CONFIG_XC ?=y
 CONFIG_LWIP ?= $(lwip)
 CONFIG_BALLOON ?= n
+CONFIG_USE_XEN_CONSOLE ?= n
 
 # Export config items as compiler directives
 DEFINES-$(CONFIG_PARAVIRT) += -DCONFIG_PARAVIRT
@@ -197,6 +198,7 @@ DEFINES-$(CONFIG_FBFRONT) += -DCONFIG_FBFRONT
 DEFINES-$(CONFIG_CONSFRONT) += -DCONFIG_CONSFRONT
 DEFINES-$(CONFIG_XENBUS) += -DCONFIG_XENBUS
 DEFINES-$(CONFIG_BALLOON) += -DCONFIG_BALLOON
+DEFINES-$(CONFIG_USE_XEN_CONSOLE) += -DCONFIG_USE_XEN_CONSOLE
 
 DEFINES-y += -D__XEN_INTERFACE_VERSION__=$(XEN_INTERFACE_VERSION)
 
diff --git a/arch/x86/testbuild/all-no b/arch/x86/testbuild/all-no
index 78720c3..1c50bba 100644
--- a/arch/x86/testbuild/all-no
+++ b/arch/x86/testbuild/all-no
@@ -16,3 +16,4 @@ CONFIG_XENBUS = n
 CONFIG_XC = n
 CONFIG_LWIP = n
 CONFIG_BALLOON = n
+CONFIG_USE_XEN_CONSOLE = n
diff --git a/arch/x86/testbuild/all-yes b/arch/x86/testbuild/all-yes
index 303c56b..8732e69 100644
--- a/arch/x86/testbuild/all-yes
+++ b/arch/x86/testbuild/all-yes
@@ -17,3 +17,4 @@ CONFIG_XC = y
 # LWIP is special: it needs support from outside
 CONFIG_LWIP = n
 CONFIG_BALLOON = y
+CONFIG_USE_XEN_CONSOLE = y
diff --git a/arch/x86/testbuild/newxen-yes b/arch/x86/testbuild/newxen-yes
index 907a8a0..9c30c00 100644
--- a/arch/x86/testbuild/newxen-yes
+++ b/arch/x86/testbuild/newxen-yes
@@ -17,4 +17,5 @@ CONFIG_XC = y
 # LWIP is special: it needs support from outside
 CONFIG_LWIP = n
 CONFIG_BALLOON = y
+CONFIG_USE_XEN_CONSOLE = y
 XEN_INTERFACE_VERSION=__XEN_LATEST_INTERFACE_VERSION__
diff --git a/console/console.c b/console/console.c
index 2e04552..6a0b923 100644
--- a/console/console.c
+++ b/console/console.c
@@ -45,11 +45,6 @@
 #include 
 
 
-/* Copies all print output to the Xen emergency console apart
-   of standard dom0 handled console */
-#define USE_XEN_CONSOLE
-
-
 /* If console not initialised the printk will be sent to xen serial line 
NOTE: you need to enable verbose in xen/Rules.mk for it to work. */
 static int console_initialised = 0;
@@ -135,7 +130,7 @@ void print(int direct, const char *fmt, va_list args)
 (void)HYPERVISOR_console_io(CONSOLEIO_write, strlen(buf), buf);
 return;
 } else {
-#ifndef USE_XEN_CONSOLE
+#ifndef CONFIG_USE_XEN_CONSOLE
 if(!console_initialised)
 #endif
 (void)HYPERVISOR_console_io(CONSOLEIO_write, strlen(buf), buf);
-- 
2.12.3




Re: [Xen-devel] Next Xen Arm Community call - Wednesday 22nd November

2017-11-21 Thread Julien Grall
Hi all,

Quick reminder, the call will be tomorrow (Wednesday 22nd) at 5pm GMT.

The details to join the call are:

Call+44 1223 406065 (Local dial in)
and enter the access code below followed by # key.
Participant code: 4915191

Mobile Auto Dial:
VoIP: voip://+441223406065;4915191#
iOS devices: +44 1223 406065,4915191 and press #
Other devices: +44 1223 406065x4915191#

Additional Calling Information:

UK +44 1142828002
US CA +1 4085761502
US TX +1 5123141073
JP +81 453455355
DE +49 8945604050
NO +47 73187518
SE +46 46313131
FR +33 497235101
TW +886 35657119
HU +36 13275600
IE +353 91337900

Toll Free

UK 0800 1412084
US +1 8668801148
CN +86 4006782367
IN 0008009868365
IN +918049282778
TW 08000 22065
HU 0680981587
IE 1800800022
KF +972732558877

Cheers,

On 16 November 2017 at 11:54, Julien Grall  wrote:
> Hi all,
>
> Apologies I was meant to organize the call earlier.
>
> I would suggest to have the next community call on Wednesday 22nd November
> 5pm GMT. Does it sound good?
>
> Do you have any specific topic you would like to discuss?
>
> Cheers,
>
> --
> Julien Grall



Re: [Xen-devel] Ping: [PATCH] VMX: sync CPU state upon vCPU destruction

2017-11-21 Thread Igor Druzhinin
On 21/11/17 13:22, Jan Beulich wrote:
 On 09.11.17 at 15:49,  wrote:
>> See the code comment being added for why we need this.
>>
>> Reported-by: Igor Druzhinin 
>> Signed-off-by: Jan Beulich 
> 
> I realize we aren't settled yet on where to put the sync call. The
> discussion appears to have stalled, though. Just to recap,
> alternatives to the placement below are
> - at the top of complete_domain_destroy(), being the specific
>   RCU callback exhibiting the problem (others are unlikely to
>   touch guest state)
> - in rcu_do_batch(), paralleling the similar call from
>   do_tasklet_work()

rcu_do_batch() sounds better to me. As I said before, I think the
problem is general to the hypervisor (not VMX-only) and might appear in
other places as well.

The choices you outlined differ in whether we solve the general problem
(probably with some minor performance impact) or solve the ad-hoc
problem but make the system more entangled. I'm inclined towards the
first choice, because in this particular scenario the performance
impact should be negligible.
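
[Editorial sketch only; no such patch has been posted. The rcu_do_batch()
placement would parallel the existing sync in do_tasklet_work(), with the
callback invocation itself left as it is today:]

    /* xen/common/rcupdate.c (assumed shape, abbreviated) */
    static void rcu_do_batch(struct rcu_data *rdp)
    {
        /*
         * Flush the lazily-saved state of the vCPU that last ran on this
         * CPU before invoking callbacks, e.g. complete_domain_destroy(),
         * that may touch guest state.
         */
        sync_local_execstate();

        /* ... invoke the queued RCU callbacks as before ... */
    }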

Igor


> 
> Jan
> 
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -479,7 +479,13 @@ static void vmx_vcpu_destroy(struct vcpu
>>   * we should disable PML manually here. Note that vmx_vcpu_destroy is 
>> called
>>   * prior to vmx_domain_destroy so we need to disable PML for each vcpu
>>   * separately here.
>> + *
>> + * Before doing that though, flush all state for the vCPU previously 
>> having
>> + * run on the current CPU, so that this flushing of state won't happen 
>> from
>> + * the TLB flush IPI handler behind the back of a vmx_vmcs_enter() /
>> + * vmx_vmcs_exit() section.
>>   */
>> +sync_local_execstate();
>>  vmx_vcpu_disable_pml(v);
>>  vmx_destroy_vmcs(v);
>>  passive_domain_destroy(v);
>>
>>
>>
>>
> 
> 
> 



[Xen-devel] [xen-unstable-smoke test] 116406: tolerable all pass - PUSHED

2017-11-21 Thread osstest service owner
flight 116406 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116406/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  d2f86bf604698806d311cc251c1b66fbb752673c
baseline version:
 xen  eb0660c6950e08e44fdfeca3e29320382e2a1554

Last test of basis   116232  2017-11-16 18:03:56 Z    4 days
Testing same since   116406  2017-11-21 12:01:35 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To osst...@xenbits.xen.org:/home/xen/git/xen.git
   eb0660c..d2f86bf  d2f86bf604698806d311cc251c1b66fbb752673c -> smoke



Re: [Xen-devel] Ping#2: [PATCH] x86emul: keep compiler from using {x, y, z}mm registers itself

2017-11-21 Thread Andrew Cooper
On 21/11/17 13:26, Jan Beulich wrote:
 On 06.11.17 at 16:04,  wrote:
>> On 11/06/2017 11:59 AM, Jan Beulich wrote:
>> On 16.10.17 at 14:42,  wrote:
>>> On 16.10.17 at 14:37,  wrote:
> On 16/10/17 13:32, Jan Beulich wrote:
>> Since the emulator acts on the live hardware registers, we need to
>> prevent the compiler from using them e.g. for inlined memcpy() /
>> memset() (as gcc7 does). We can't, however, set this from the command
>> line, as otherwise the 64-bit build would face issues with functions
>> returning floating point values and being declared in standard headers.
>>
>> As the pragma isn't available prior to gcc6, we need to invoke it
>> conditionally. Luckily up to gcc6 we haven't seen generated code access
>> SIMD registers beyond what our asm()s do.
>>
>> Reported-by: George Dunlap 
>> Signed-off-by: Jan Beulich 
>> ---
>> While this doesn't affect core functionality, I think it would still be
>> nice for it to be allowed in for 4.10.
> Agreed.
>
> Has this been tested with Clang?
 Sorry, no - still haven't got around to set up a suitable Clang
 locally.

>  It stands a good chance of being
> compatible, but we may need an && !defined(__clang__) included.
 Should non-gcc silently ignore "#pragma GCC ..." it doesn't
 recognize, or not define __GNUC__ in the first place if it isn't
 sufficiently compatible? I.e. if anything I'd expect we need
 "#elif defined(__clang__)" to achieve the same for Clang by
 some different pragma (if such exists).
>>> Not having received any reply so far, I'm wondering whether
>>> being able to build the test harness with clang is more
>>> important than for it to work correctly when built with gcc. I
>>> can't predict when I would get around to set up a suitable
>>> clang on my dev systems.
>> I agree with the argument you make above.  On the unlikely chance
>> there's a problem Travis should catch it, and someone who actually has a
>> clang setup can help sort it out.
> I'm still lacking an ack, before it being sensible to check with Julien
> whether this is still fine to go in at this late stage.

Acked-by: Andrew Cooper 



[Xen-devel] Ping#2: [PATCH] x86emul: keep compiler from using {x, y, z}mm registers itself

2017-11-21 Thread Jan Beulich
>>> On 06.11.17 at 16:04,  wrote:
> On 11/06/2017 11:59 AM, Jan Beulich wrote:
> On 16.10.17 at 14:42,  wrote:
>> On 16.10.17 at 14:37,  wrote:
 On 16/10/17 13:32, Jan Beulich wrote:
> Since the emulator acts on the live hardware registers, we need to
> prevent the compiler from using them e.g. for inlined memcpy() /
> memset() (as gcc7 does). We can't, however, set this from the command
> line, as otherwise the 64-bit build would face issues with functions
> returning floating point values and being declared in standard headers.
>
> As the pragma isn't available prior to gcc6, we need to invoke it
> conditionally. Luckily up to gcc6 we haven't seen generated code access
> SIMD registers beyond what our asm()s do.
>
> Reported-by: George Dunlap 
> Signed-off-by: Jan Beulich 
> ---
> While this doesn't affect core functionality, I think it would still be
> nice for it to be allowed in for 4.10.

 Agreed.

 Has this been tested with Clang?
>>>
>>> Sorry, no - still haven't got around to set up a suitable Clang
>>> locally.
>>>
  It stands a good chance of being
 compatible, but we may need an && !defined(__clang__) included.
>>>
>>> Should non-gcc silently ignore "#pragma GCC ..." it doesn't
>>> recognize, or not define __GNUC__ in the first place if it isn't
>>> sufficiently compatible? I.e. if anything I'd expect we need
>>> "#elif defined(__clang__)" to achieve the same for Clang by
>>> some different pragma (if such exists).
>> 
>> Not having received any reply so far, I'm wondering whether
>> being able to build the test harness with clang is more
>> important than having it work correctly when built with gcc. I
>> can't predict when I would get around to setting up a suitable
>> clang on my dev systems.
> 
> I agree with the argument you make above.  On the unlikely chance
> there's a problem Travis should catch it, and someone who actually has a
> clang setup can help sort it out.

I'm still lacking an ack; without one it isn't sensible to check with Julien
whether this is still fine to go in at this late stage.

Jan




[Xen-devel] Ping: [PATCH] VMX: sync CPU state upon vCPU destruction

2017-11-21 Thread Jan Beulich
>>> On 09.11.17 at 15:49,  wrote:
> See the code comment being added for why we need this.
> 
> Reported-by: Igor Druzhinin 
> Signed-off-by: Jan Beulich 

I realize we aren't settled yet on where to put the sync call. The
discussion appears to have stalled, though. Just to recap,
alternatives to the placement below are
- at the top of complete_domain_destroy(), being the specific
  RCU callback exhibiting the problem (others are unlikely to
  touch guest state; see the sketch below)
- in rcu_do_batch(), paralleling the similar call from
  do_tasklet_work()
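
A rough sketch of the first alternative, for concreteness (hypothetical
placement only, not a proposed patch):

    static void complete_domain_destroy(struct rcu_head *head)
    {
        struct domain *d = container_of(head, struct domain, rcu);

        /* Flush state of the vCPU that last ran on this CPU before any
         * of the teardown below can touch guest state.  (Being common
         * code, this would in practice need an arch-neutral wrapper for
         * the x86-only sync_local_execstate().) */
        sync_local_execstate();

        /* ... existing teardown unchanged ... */
    }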

Jan

> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -479,7 +479,13 @@ static void vmx_vcpu_destroy(struct vcpu
>   * we should disable PML manually here. Note that vmx_vcpu_destroy is called
>   * prior to vmx_domain_destroy so we need to disable PML for each vcpu
>   * separately here.
> + *
> + * Before doing that though, flush all state for the vCPU previously having
> + * run on the current CPU, so that this flushing of state won't happen from
> + * the TLB flush IPI handler behind the back of a vmx_vmcs_enter() /
> + * vmx_vmcs_exit() section.
>   */
> +    sync_local_execstate();
>      vmx_vcpu_disable_pml(v);
>      vmx_destroy_vmcs(v);
>      passive_domain_destroy(v);


Re: [Xen-devel] [PATCH 04/16] SUPPORT.md: Add core ARM features

2017-11-21 Thread Jan Beulich
>>> On 21.11.17 at 13:39,  wrote:
> What about something like this?
> 
> ### IOMMU
> 
> Status, AMD IOMMU: Supported
> Status, Intel VT-d: Supported
> Status, ARM SMMUv1: Supported
> Status, ARM SMMUv2: Supported

Fine with me, as it makes things explicit.

Jan




Re: [Xen-devel] [PATCH 03/16] SUPPORT.md: Add some x86 features

2017-11-21 Thread Jan Beulich
>>> On 21.11.17 at 13:24,  wrote:
>> On Nov 21, 2017, at 11:35 AM, Jan Beulich 
>> Much depends on whether you think "guest" == "DomU". To me
>> Dom0 is a guest, too.
> 
> That’s not how I’ve ever understood those terms.
> 
> A guest at a hotel is someone who is served, and who does not have (legal) 
> access to the internals of the system.  The maids who clean the room and the 
> janitors who sweep the floors are hosts, because they have (to various 
> degrees) extra access designed to help them serve the guests.
> 
> A “guest” is a virtual machine that does not have access to the internals of 
> the system; that is the “target” of virtualization.  As such, the dom0 kernel 
> and all the toolstack / emulation code running in domain 0 are part of the 
> “host”.
> 
> Domain 0 is a domain and a VM, but only domUs are guests.

Okay then; just FTR I've always been considering "domain" ==
"guest" == "VM".

Jan



Re: [Xen-devel] [PATCH v2] tools/libxl: mark special pages as reserved in e820 map for PVH

2017-11-21 Thread Jan Beulich
>>> On 21.11.17 at 12:48,  wrote:
> On 21/11/17 12:27, Jan Beulich wrote:
> On 21.11.17 at 12:06,  wrote:
>>> The "special pages" for PVH guests include the frames for console and
>>> Xenstore ring buffers. Those have to be marked as "Reserved" in the
>>> guest's E820 map, as otherwise conflicts might arise later e.g. when
>>> hotplugging memory into the guest.
>> 
>> Afaict this differs from v1 only in no longer adding the extra entry
>> for HVM. How does this address the concerns raised on v1 wrt spec
>> compliance? v1 having caused problems with hvmloader should not
>> have resulted in simply excluding HVM here. That's even more so
>> because we mean HVM and PVH to converge in the long run - I'd
>> expect that to mean that no clear type distinction would exist
>> anymore on libxl.
> 
> The difference is that for HVM, hvmloader creates the additional
> "Reserved" entry.
> 
>> If you want to reserve Xenstore ring buffer and console page,
>> why don't you reserve just the two (provided of course they
>> live outside of any [fake] PCI device BAR), which then ought to
>> also be compatible with plain HVM?
> 
> For PVH the "mmio" area starts at the LAPIC and extends up
> to 4GB.

Oh, I see - that's probably okay then (or at least as okay as
libxl having knowledge of the LAPIC base address in the first
place).

Jan




Re: [Xen-devel] [PATCH 04/16] SUPPORT.md: Add core ARM features

2017-11-21 Thread George Dunlap


On Nov 21, 2017, at 11:37 AM, Jan Beulich <jbeul...@suse.com> wrote:

On 21.11.17 at 11:45, <george.dun...@citrix.com> wrote:
On 11/21/2017 08:11 AM, Jan Beulich wrote:
On 13.11.17 at 16:41, <george.dun...@citrix.com> wrote:
+### ARM/SMMUv1
+
+Status: Supported
+
+### ARM/SMMUv2
+
+Status: Supported

Do these belong here, when IOMMU isn't part of the corresponding
x86 patch?

Since there was recently a time when these weren't supported, I think
it's useful to have them in here.  (Julien, let me know if you think
otherwise.)

Do you think it would be useful to include an IOMMU line for x86?

At this point of the series I would surely have said "yes". The
later PCI passthrough additions state this implicitly at least (by
requiring an IOMMU for passthrough to be supported at all).
But even then saying so explicitly may be better.

How much do we specifically need to break down?  AMD / Intel?

What about something like this?

### IOMMU

Status, AMD IOMMU: Supported
Status, Intel VT-d: Supported
Status, ARM SMMUv1: Supported
Status, ARM SMMUv2: Supported

 -George


Re: [Xen-devel] [PATCH 03/16] SUPPORT.md: Add some x86 features

2017-11-21 Thread Ian Jackson
Jan Beulich writes ("Re: [PATCH 03/16] SUPPORT.md: Add some x86 features"):
> Much depends on whether you think "guest" == "DomU". To me
> Dom0 is a guest, too.

Not to me.  I'm with George.  (As far as I can make out his message,
which I think was sent with HTML-style quoting which some Citrix thing
has stripped out, so I can't see who said what.)

But I don't think this is important and I would like to see this
document go in.

Ian.



Re: [Xen-devel] [PATCH 03/16] SUPPORT.md: Add some x86 features

2017-11-21 Thread George Dunlap


On Nov 21, 2017, at 11:35 AM, Jan Beulich <jbeul...@suse.com> wrote:

On 21.11.17 at 11:42, <george.dun...@citrix.com> wrote:
On 11/21/2017 08:09 AM, Jan Beulich wrote:
On 13.11.17 at 16:41, <george.dun...@citrix.com> wrote:
+### x86/PVH guest
+
+Status: Supported
+
+PVH is a next-generation paravirtualized mode
+designed to take advantage of hardware virtualization support when possible.
+During development this was sometimes called HVMLite or PVHv2.
+
+Requires hardware virtualisation support (Intel VMX / AMD SVM)

I think it needs to be said that only DomU is considered supported.
Dom0 is perhaps not even experimental at this point, considering
the panic() in dom0_construct_pvh().

Indeed, that's why dom0 PVH isn't in the list, and why this says 'PVH
guest', and is in the 'Guest Type' section.  We generally don't say,
"Oh, and we don't have this feature at all".

If you think it's important we could add a sentence here explicitly
stating that dom0 PVH isn't supported, but I sort of feel like it isn't
necessary.

Much depends on whether you think "guest" == "DomU". To me
Dom0 is a guest, too.

That’s not how I’ve ever understood those terms.

A guest at a hotel is someone who is served, and who does not have (legal) 
access to the internals of the system.  The maids who clean the room and the 
janitors who sweep the floors are hosts, because they have (to various degrees) 
extra access designed to help them serve the guests.

A “guest” is a virtual machine that does not have access to the internals of 
the system; that is the “target” of virtualization.  As such, the dom0 kernel 
and all the toolstack / emulation code running in domain 0 are part of the 
“host”.

Domain 0 is a domain and a VM, but only domUs are guests.

Any other opinions on this?  Do we need to add these to the terms defined at 
the bottom?

 -George


[Xen-devel] [xen-4.9-testing test] 116378: regressions - trouble: broken/fail/pass

2017-11-21 Thread osstest service owner
flight 116378 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116378/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64broken
 test-amd64-amd64-livepatch   broken
 test-xtf-amd64-amd64-5   broken
 test-amd64-i386-migrupgrade  broken
 test-amd64-i386-qemuu-rhel6hvm-amd broken
 test-amd64-i386-migrupgrade 5 host-install/dst_host(5) broken REGR. vs. 116234
 test-amd64-i386-qemuu-rhel6hvm-amd  4 host-install(4)  broken REGR. vs. 116234
 test-xtf-amd64-amd64-54 host-install(4)broken REGR. vs. 116234
 test-amd64-amd64-livepatch4 host-install(4)broken REGR. vs. 116234
 test-amd64-i386-xl-qemut-debianhvm-amd64 4 host-install(4) broken REGR. vs. 116234
 test-armhf-armhf-xl-credit2 16 guest-start/debian.repeat fail REGR. vs. 116234

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 116220
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 116220
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116220
 test-amd64-amd64-xl-qemuu-ws16-amd64 14 guest-localmigratefail like 116234
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeatfail  like 116234
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116234
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 116234
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 116234
 test-amd64-amd64-xl-rtds 10 debian-install   fail  like 116234
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail like 116234
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 xen  ae34ab8c5d2e977f6d8081c2ce4494875232f563
baseline version:
 xen  d6ce860bbdf9dbdc88e4f2692e16776a622b2949

Last test of basis   116234  2017-11-16 19:03:58 Z4 days
Testing same since   116378  2017-11-20 15:15:44 Z0 days1 attempts

-

Re: [Xen-devel] [PATCH v2] tools/libxl: mark special pages as reserved in e820 map for PVH

2017-11-21 Thread Juergen Gross
On 21/11/17 12:27, Jan Beulich wrote:
 On 21.11.17 at 12:06,  wrote:
>> The "special pages" for PVH guests include the frames for console and
>> Xenstore ring buffers. Those have to be marked as "Reserved" in the
>> guest's E820 map, as otherwise conflicts might arise later e.g. when
>> hotplugging memory into the guest.
> 
> Afaict this differs from v1 only in no longer adding the extra entry
> for HVM. How does this address the concerns raised on v1 wrt spec
> compliance? v1 having caused problems with hvmloader should not
> have resulted in simply excluding HVM here. That's even more so
> because we mean HVM and PVH to converge in the long run - I'd
> expect that to mean that no clear type distinction would exist
> anymore on libxl.

The difference is that for HVM, hvmloader creates the additional
"Reserved" entry.

> If you want to reserve Xenstore ring buffer and console page,
> why don't you reserve just the two (provided of course they
> live outside of any [fake] PCI device BAR), which then ought to
> also be compatible with plain HVM?

For PVH the "mmio" area starts at the LAPIC and extends up
to 4GB. And all of this area conflicts with hvmloader:

(d11) HVM Loader
(d11) Detected Xen v4.10-unstable
(d11) Fail to setup memory map due to conflict on dynamic reserved memory range.
(d11) *** HVMLoader bug at e820.c:52
(d11) *** HVMLoader crashed.

I guess this should be fixed, but I wanted the patch to be as small
as possible to minimize the risk for 4.10.


Juergen



Re: [Xen-devel] [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86

2017-11-21 Thread Jan Beulich
>>> On 21.11.17 at 11:56,  wrote:
> On 11/21/2017 08:29 AM, Jan Beulich wrote:
> On 13.11.17 at 16:41,  wrote:
>>> +### PV USB support for xl
>>> +
>>> +Status: Supported
>>> +
>>> +### PV 9pfs support for xl
>>> +
>>> +Status: Tech Preview
>> 
>> Why are these two being called out, but xl support for other device
>> types isn't?
> 
> Do you see how big this document is? :-)  If you think something else
> needs to be covered, don't ask why I didn't mention it, just say what
> you think I missed.

Well, (not very) implicitly here: The same for all other PV protocols.

Jan




Re: [Xen-devel] [PATCH 04/16] SUPPORT.md: Add core ARM features

2017-11-21 Thread Jan Beulich
>>> On 21.11.17 at 11:45,  wrote:
> On 11/21/2017 08:11 AM, Jan Beulich wrote:
> On 13.11.17 at 16:41,  wrote:
>>> +### ARM/SMMUv1
>>> +
>>> +Status: Supported
>>> +
>>> +### ARM/SMMUv2
>>> +
>>> +Status: Supported
>> 
>> Do these belong here, when IOMMU isn't part of the corresponding
>> x86 patch?
> 
> Since there was recently a time when these weren't supported, I think
> it's useful to have them in here.  (Julien, let me know if you think
> otherwise.)
> 
> Do you think it would be useful to include an IOMMU line for x86?

At this point of the series I would surely have said "yes". The
later PCI passthrough additions state this implicitly at least (by
requiring an IOMMU for passthrough to be supported at all).
But even then saying so explicitly may be better.

Jan




Re: [Xen-devel] [PATCH 03/16] SUPPORT.md: Add some x86 features

2017-11-21 Thread Jan Beulich
>>> On 21.11.17 at 11:42,  wrote:
> On 11/21/2017 08:09 AM, Jan Beulich wrote:
> On 13.11.17 at 16:41,  wrote:
>>> +### x86/PVH guest
>>> +
>>> +Status: Supported
>>> +
>>> +PVH is a next-generation paravirtualized mode 
>>> +designed to take advantage of hardware virtualization support when 
>>> possible.
>>> +During development this was sometimes called HVMLite or PVHv2.
>>> +
>>> +Requires hardware virtualisation support (Intel VMX / AMD SVM)
>> 
>> I think it needs to be said that only DomU is considered supported.
>> Dom0 is perhaps not even experimental at this point, considering
>> the panic() in dom0_construct_pvh().
> 
> Indeed, that's why dom0 PVH isn't in the list, and why this says 'PVH
> guest', and is in the 'Guest Type' section.  We generally don't say,
> "Oh, and we don't have this feature at all".
> 
> If you think it's important we could add a sentence here explicitly
> stating that dom0 PVH isn't supported, but I sort of feel like it isn't
> necessary.

Much depends on whether you think "guest" == "DomU". To me
Dom0 is a guest, too.

Jan




Re: [Xen-devel] [PATCH 02/16] SUPPORT.md: Add core functionality

2017-11-21 Thread Jan Beulich
>>> On 21.11.17 at 11:36,  wrote:
> On 11/21/2017 08:03 AM, Jan Beulich wrote:
> On 13.11.17 at 16:41,  wrote:
>>> --- a/SUPPORT.md
>>> +++ b/SUPPORT.md
>>> @@ -16,6 +16,65 @@ for the definitions of the support status levels etc.
>>>  
>>>  # Feature Support
>>>  
>>> +## Memory Management
>>> +
>>> +### Memory Ballooning
>>> +
>>> +Status: Supported
>> 
>> Is this a proper feature in the context we're talking about? To me
>> it's meaningful in guest OS context only. I also wouldn't really
>> consider it "core", but placement within the series clearly is a minor
>> aspect.
>> 
>> I'd prefer this to be dropped altogether as a feature, but
> 
> This doesn't make any sense to me.  Allowing a guest to modify its own
> memory requires a *lot* of support, spread throughout the hypervisor;
> and there are a huge number of recent security holes that would have
> been much more difficult to exploit if guests didn't have the ability to
> balloon up or down.
> 
> If what you mean is *specifically* the technique of making a "memory
> balloon" to trick the guest OS into handing back memory without knowing
> it, then it's just a matter of semantics.  We could call this "dynamic
> memory control" or something like that if you prefer (although we'd have
> to mention ballooning in the description to make sure people can find it).

Indeed I'd prefer the alternative naming: Outside of p2m-pod.c there's
no mention of the term "balloon" in any of the hypervisor source files.
Furthermore this "dynamic memory control" can be used for things other
than ballooning, all of which I think is (to be) supported.
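
(For context, a minimal guest-kernel-side sketch of the mechanism under
discussion -- returning one page to Xen. The XENMEM_decrease_reservation
hypercall and xen_memory_reservation structure are real; the function
and its surrounding scaffolding are hypothetical, not code from this
thread:)

    static void balloon_out_one_page(xen_pfn_t gfn)
    {
        struct xen_memory_reservation reservation = {
            .nr_extents   = 1,
            .extent_order = 0,          /* a single 4k page */
            .domid        = DOMID_SELF,
        };

        /* 'gfn' must already be unmapped from the guest's page tables. */
        set_xen_guest_handle(reservation.extent_start, &gfn);
        /* Returns the number of extents actually released (here: 0 or 1). */
        HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
    }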

Jan




Re: [Xen-devel] [PATCH v2] tools/libxl: mark special pages as reserved in e820 map for PVH

2017-11-21 Thread Jan Beulich
>>> On 21.11.17 at 12:06,  wrote:
> The "special pages" for PVH guests include the frames for console and
> Xenstore ring buffers. Those have to be marked as "Reserved" in the
> guest's E820 map, as otherwise conflicts might arise later e.g. when
> hotplugging memory into the guest.

Afaict this differs from v1 only in no longer adding the extra entry
for HVM. How does this address the concerns raised on v1 wrt spec
compliance? v1 having caused problems with hvmloader should not
have resulted in simply excluding HVM here. That's even more so
because we mean HVM and PVH to converge in the long run - I'd
expect that to mean that no clear type distinction would exist
anymore on libxl.

If you want to reserve Xenstore ring buffer and console page,
why don't you reserve just the two (provided of course they
live outside of any [fake] PCI device BAR), which then ought to
also be compatible with plain HVM?

Jan




Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-21 Thread Juergen Gross
On 21/11/17 11:42, Andrew Cooper wrote:
> On 21/11/17 07:44, Jan Beulich wrote:
> On 20.11.17 at 17:59,  wrote:
>>> On 11/20/2017 11:43 AM, Jan Beulich wrote:
>>> On 20.11.17 at 17:28,  wrote:
> On 11/20/2017 11:26 AM, Jan Beulich wrote:
> On 20.11.17 at 17:14,  wrote:
>>> What could cause grub2 to fail to find space for the pointer in the
>>> first page? Will we ever have anything in EBDA (which is one of the
>>> possible RSDP locations)?
>> Well, the EBDA (see the B in its name) is again something that's
>> meaningless without there being a BIOS.
> Exactly. So it should always be available for grub to copy the pointer
> there.
 But what use would it be if grub copied it there? It just shouldn't
 be there, neither before nor after grub (just like grub doesn't
 introduce firmware into the system).
>>> So that the guest can find it using standard methods. If Xen can't
>>> guarantee ACPI-compliant placement of the pointer then someone has to
>>> help the guest find it in the expected place. We can do it with a
>>> dedicated entry point by setting the pointer explicitly (although
>>> admittedly this is not done correctly now) or we need to have firmware
>>> (grub2) place it in the "right" location.
>>>
>>> (It does look a bit hacky though)
>> Indeed. Of course ACPI without any actual firmware is sort of odd,
>> too. As to dedicated entry point and its alternatives: Xen itself
>> tells grub (aiui we're talking about a flavor of it running PVH itself)
>> where the RSDP is. Why can't grub forward that information in a
>> suitable way (e.g. via a new tag, or - for Linux - as a new entry
>> in the Linux boot header)?
> 
> Or if the worst comes to the worst, fabricate an acpi_rsdp= command line
> parameter?

This would be easy: just replace the #ifdef CONFIG_KEXEC in
drivers/acpi/osl.c of the Linux kernel with:

#if defined(CONFIG_KEXEC) || defined(CONFIG_XEN_PVH)

and the parameter is usable and active.

Another possibility would be to let grub copy the RSDP to below 1MB and
add the E820 entry for it.

In any case it seems safe to let Xen place the RSDP just below 4GB,
together with console and Xenstore pages (so this area just expands).
grub can handle this either on its own or together with the kernel.

Let's see how Roger's plans with BSD look.


Juergen



[Xen-devel] [PATCH v2] tools/libxl: mark special pages as reserved in e820 map for PVH

2017-11-21 Thread Juergen Gross
The "special pages" for PVH guests include the frames for console and
Xenstore ring buffers. Those have to be marked as "Reserved" in the
guest's E820 map, as otherwise conflicts might arise later e.g. when
hotplugging memory into the guest.

Signed-off-by: Juergen Gross 
---
This is a bugfix for PVH guests. Please consider for 4.10.
---
 tools/libxl/libxl_x86.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 5f91fe4f92..d82013f6ed 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -530,6 +530,9 @@ int libxl__arch_domain_construct_memmap(libxl__gc *gc,
         if (d_config->rdms[i].policy != LIBXL_RDM_RESERVE_POLICY_INVALID)
             e820_entries++;
 
+    /* Add mmio entry for PVH. */
+    if (dom->mmio_size && d_config->b_info.type == LIBXL_DOMAIN_TYPE_PVH)
+        e820_entries++;
 
     /* If we should have a highmem range. */
     if (highmem_size)
@@ -564,6 +567,14 @@ int libxl__arch_domain_construct_memmap(libxl__gc *gc,
         nr++;
     }
 
+    /* mmio area */
+    if (dom->mmio_size && d_config->b_info.type == LIBXL_DOMAIN_TYPE_PVH) {
+        e820[nr].addr = dom->mmio_start;
+        e820[nr].size = dom->mmio_size;
+        e820[nr].type = E820_RESERVED;
+        nr++;
+    }
+
     for (i = 0; i < MAX_ACPI_MODULES; i++) {
         if (dom->acpi_modules[i].length) {
             e820[nr].addr = dom->acpi_modules[i].guest_addr_out & ~(page_size - 1);
-- 
2.12.3




Re: [Xen-devel] [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86

2017-11-21 Thread George Dunlap
On 11/21/2017 08:29 AM, Jan Beulich wrote:
 On 13.11.17 at 16:41,  wrote:
>> +### PV USB support for xl
>> +
>> +Status: Supported
>> +
>> +### PV 9pfs support for xl
>> +
>> +Status: Tech Preview
> 
> Why are these two being called out, but xl support for other device
> types isn't?

Do you see how big this document is? :-)  If you think something else
needs to be covered, don't ask why I didn't mention it, just say what
you think I missed.

> 
>> +### QEMU backend hotplugging for xl
>> +
>> +Status: Supported
> 
> Wouldn't this more appropriately be
> 
> ### QEMU backend hotplugging
> 
> Status, xl: Supported

Maybe -- let me think about it.

> 
> ?
> 
>> +## Virtual driver support, guest side
>> +
>> +### Blkfront
>> +
>> +Status, Linux: Supported
>> +Status, FreeBSD: Supported, Security support external
>> +Status, NetBSD: Supported, Security support external
>> +Status, Windows: Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV block protocol
>> +
>> +### Netfront
>> +
>> +Status, Linux: Supported
>> Status, Windows: Supported
>> +Status, FreeBSD: Supported, Security support external
>> +Status, NetBSD: Supported, Security support external
>> +Status, OpenBSD: Supported, Security support external
> 
> Seeing the difference in OSes between the two (with the variance
> increasing in entries further down) - what does the absence of an
> OS on one list, but its presence on another mean? While not
> impossible, I would find it surprising if e.g. OpenBSD had netfront
> but not even a basic blkfront.

Good catch.  Roger suggested that I add the OpenBSD Netfront; he's away
so I'll have to see if I can figure out if they have blkfront support or
not.

>> +Guest-side driver capable of speaking the Xen PV networking protocol
>> +
>> +### PV Framebuffer (frontend)
>> +
>> +Status, Linux (xen-fbfront): Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV Framebuffer protocol
>> +
>> +### PV Console (frontend)
>> +
>> +Status, Linux (hvc_xen): Supported
>> +Status, Windows: Supported
>> +Status, FreeBSD: Supported, Security support external
>> +Status, NetBSD: Supported, Security support external
>> +
>> +Guest-side driver capable of speaking the Xen PV console protocol
>> +
>> +### PV keyboard (frontend)
>> +
>> +Status, Linux (xen-kbdfront): Supported
>> +Status, Windows: Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV keyboard protocol
> 
> Are these three active/usable in guests regardless of whether the
> guest is being run PV, PVH, or HVM? If not, wouldn't this need
> spelling out?

In theory I think they could be used; I suspect it's just that they
aren't used.  Let me see if I can think of a way to concisely express that.

>> +## Virtual device support, host side
>> +
>> +### Blkback
>> +
>> +Status, Linux (blkback): Supported
> 
> Strictly speaking, if the driver name is to be spelled out here in
> the first place, it's xen-blkback here and ...
> 
>> +Status, FreeBSD (blkback): Supported, Security support external
>> Status, NetBSD (xbdback): Supported, Security support external
>> +Status, QEMU (xen_disk): Supported
>> +Status, Blktap2: Deprecated
>> +
>> +Host-side implementations of the Xen PV block protocol
>> +
>> +### Netback
>> +
>> +Status, Linux (netback): Supported
> 
> ... xen-netback here for the upstream kernels.

Ack.


>> +### PV USB (backend)
>> +
>> +Status, Linux: Experimental
> 
> What existing/upstream code does this refer to?

I guess a bunch of patches posted to a mailing list?  Yeah, that's
probably something we should take out.

 -George



Re: [Xen-devel] [PATCH 04/16] SUPPORT.md: Add core ARM features

2017-11-21 Thread George Dunlap
On 11/21/2017 08:11 AM, Jan Beulich wrote:
 On 13.11.17 at 16:41,  wrote:
>> +### ARM/SMMUv1
>> +
>> +Status: Supported
>> +
>> +### ARM/SMMUv2
>> +
>> +Status: Supported
> 
> Do these belong here, when IOMMU isn't part of the corresponding
> x86 patch?

Since there was recently a time when these weren't supported, I think
it's useful to have them in here.  (Julien, let me know if you think
otherwise.)

Do you think it would be useful to include an IOMMU line for x86?

 -George



Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-21 Thread Andrew Cooper
On 21/11/17 09:37, Juergen Gross wrote:
> On 21/11/17 09:46, Jan Beulich wrote:
> On 21.11.17 at 09:13,  wrote:
>>> On 21/11/17 08:50, Jan Beulich wrote:
>>> On 20.11.17 at 19:28,  wrote:
> On 20/11/17 17:14, Jan Beulich wrote:
> On 20.11.17 at 16:24,  wrote:
>>> So without my patch the RSDP table is loaded e.g. at about 6.5MB when
>>> I'm using grub2 (the loaded grub image is about 5.5MB in size and it
>>> is being loaded at 1MB).
>>>
>>> When I'm using the PVH Linux kernel directly RSDP is just below 1MB
>>> due to pure luck (the bzImage loader is still using the PV specific
>>> ELF notes and this results in the loader believing RSDP is loadable
>>> at this address, which is true, but the tests used to come to this
>>> conclusion are just not applicable for PVH).
>>>
>>> So in your opinion we should revoke the PVH support from Xen 4.10,
>>> Linux and maybe BSD because RSDP is loaded in middle of RAM of the
>>> guest?
>> So what's wrong with it being put wherever the next free memory
>> location is being determined to be by the loader, just like is being
>> done for other information, including modules (if any)?
> The RSDP table is marked as "Reserved" in the memory map. So putting it
> somewhere in the middle of the guest's memory will force the guest to
> use 4kB pages instead of 2MB or even 1GB pages. I'd really like to avoid
> this problem, as we've been hit by the very same in HVM guests before
> causing quite measurable performance drops.
 This is a valid point.

> So I'd rather put it in the first MB as most kernels have to deal with
> small pages at beginning of RAM today. An alternative would be to put
> it just below 4GB where e.g. the console and Xenstore page are located.
 Putting it in the first Mb implies that mappings there will continue to
 be 4k ones. I can't, however, see why for PVH that should be
 necessary: There's no BIOS and nothing legacy that needs to live
 there, so other than HVM it could benefit from using a 1Gb mapping
 even at address zero (even if this might be something that can't
 be achieved right away). So yes, if anything, the allocation should
 be made top down starting from 4Gb. Otoh, I don't see a strict
 need for this area to live below 4Gb in the first place.
>>> The physical RSDP address in the PVH start info block is 32 bits
>>> only. So it can't be above 4GB.
>> struct hvm_start_info {
>>     uint32_t magic;             /* Contains the magic value 0x336ec578       */
>>                                 /* ("xEn3" with the 0x80 bit of the "E" set).*/
>>     uint32_t version;           /* Version of this structure.                */
>>     uint32_t flags;             /* SIF_xxx flags.                            */
>>     uint32_t nr_modules;        /* Number of modules passed to the kernel.   */
>>     uint64_t modlist_paddr;     /* Physical address of an array of           */
>>                                 /* hvm_modlist_entry.                        */
>>     uint64_t cmdline_paddr;     /* Physical address of the command line.     */
>>     uint64_t rsdp_paddr;        /* Physical address of the RSDP ACPI data    */
>>                                 /* structure.                                */
>> };
> Oh, it seems I have been looking at an outdated document. Thanks for
> the correction.
>
>> Granted a comment a few lines up in the public header says "NB: Xen
>> on x86 will always try to place all the data below the 4GiB boundary."
> Okay.

The 4GB limit is specifically because we don't know a priori whether it
is a 32 or 64bit guest, and we agreed not to put the tables anywhere you
couldn't reach with paging disabled.

~Andrew



Re: [Xen-devel] [PATCH 03/16] SUPPORT.md: Add some x86 features

2017-11-21 Thread George Dunlap
On 11/21/2017 08:09 AM, Jan Beulich wrote:
 On 13.11.17 at 16:41,  wrote:
>> +### x86/PVH guest
>> +
>> +Status: Supported
>> +
>> +PVH is a next-generation paravirtualized mode 
>> +designed to take advantage of hardware virtualization support when possible.
>> +During development this was sometimes called HVMLite or PVHv2.
>> +
>> +Requires hardware virtualisation support (Intel VMX / AMD SVM)
> 
> I think it needs to be said that only DomU is considered supported.
> Dom0 is perhaps not even experimental at this point, considering
> the panic() in dom0_construct_pvh().

Indeed, that's why dom0 PVH isn't in the list, and why this says 'PVH
guest', and is in the 'Guest Type' section.  We generally don't say,
"Oh, and we don't have this feature at all".

If you think it's important we could add a sentence here explicitly
stating that dom0 PVH isn't supported, but I sort of feel like it isn't
necessary.

>> +### Host ACPI (via Domain 0)
>> +
>> +Status, x86 PV: Supported
>> +Status, x86 PVH: Tech preview
>
> Are we this far already? Preview implies functional completeness,
> but I'm not sure about all ACPI related parts actually having been
> implemented (and see also below). But perhaps things like P and C
> state handling come as individual features later on.

Hmm, yeah, it doesn't make much sense to say that we have "Tech preview"
status for a feature with a PVH dom0, when PVH dom0 itself isn't even
'experimental' yet.  I'll remove this (unless Roger or Wei want to object).

 -George




Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-21 Thread Andrew Cooper
On 21/11/17 07:44, Jan Beulich wrote:
 On 20.11.17 at 17:59,  wrote:
>> On 11/20/2017 11:43 AM, Jan Beulich wrote:
>> On 20.11.17 at 17:28,  wrote:
 On 11/20/2017 11:26 AM, Jan Beulich wrote:
 On 20.11.17 at 17:14,  wrote:
>> What could cause grub2 to fail to find space for the pointer in the
>> first page? Will we ever have anything in EBDA (which is one of the
>> possible RSDP locations)?
> Well, the EBDA (see the B in its name) is again something that's
> meaningless without there being a BIOS.
 Exactly. So it should always be available for grub to copy the pointer
 there.
>>> But what use would it be if grub copied it there? It just shouldn't
>>> be there, neither before nor after grub (just like grub doesn't
>>> introduce firmware into the system).
>> So that the guest can find it using standard methods. If Xen can't
>> guarantee ACPI-compliant placement of the pointer then someone has to
>> help the guest find it in the expected place. We can do it with a
>> dedicated entry point by setting the pointer explicitly (although
>> admittedly this is not done correctly now) or we need to have firmware
>> (grub2) place it in the "right" location.
>>
>> (It does look a bit hacky though)
> Indeed. Of course ACPI without any actual firmware is sort of odd,
> too. As to dedicated entry point and its alternatives: Xen itself
> tells grub (aiui we're talking about a flavor of it running PVH itself)
> where the RSDP is. Why can't grub forward that information in a
> suitable way (e.g. via a new tag, or - for Linux - as a new entry
> in the Linux boot header)?

Or if the worst comes to the worst, fabricate an acpi_rsdp= command line
parameter?

~Andrew



Re: [Xen-devel] [PATCH for-4.10] x86/hvm: Don't corrupt the HVM context stream when writing the MSR record

2017-11-21 Thread Julien Grall

Hi,

On 11/16/2017 10:45 PM, Andrew Cooper wrote:

Ever since it was introduced in c/s bd1f0b45ff, hvm_save_cpu_msrs() has had a
bug whereby it corrupts the HVM context stream if some, but fewer than the
maximum number of MSRs are written.

_hvm_init_entry() creates an hvm_save_descriptor with length for
msr_count_max, but in the case that we write fewer than max, h->cur only moves
forward by the amount of space used, causing the subsequent
hvm_save_descriptor to be written within the bounds of the previous one.
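
(Concretely: the descriptor claims HVM_CPU_MSR_SIZE(msr_count_max) bytes,
but h->cur then advances by only HVM_CPU_MSR_SIZE(ctxt->count), so the next
descriptor lands inside the extent the previous one claimed.)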

To resolve this, reduce the length reported by the descriptor to match the
actual number of bytes used.

A typical failure on the destination side looks like:

 (XEN) HVM4 restore: CPU_MSR 0
 (XEN) HVM4.0 restore: not enough data left to read 56 MSR bytes
 (XEN) HVM4 restore: failed to load entry 20/0

Signed-off-by: Andrew Cooper 
---
CC: Jan Beulich 
CC: Wei Liu 
CC: Julien Grall 

This wants backporting to all stable trees, so should also be considered for
inclusion into 4.10 at this point.


Release-acked-by: Julien Grall 

Cheers,


---
  xen/arch/x86/hvm/hvm.c | 6 ++
  1 file changed, 6 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 0af498a..c5e8467 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1330,6 +1330,7 @@ static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
 
     for_each_vcpu ( d, v )
     {
+        struct hvm_save_descriptor *d = _p(&h->data[h->cur]);
         struct hvm_msr *ctxt;
         unsigned int i;
 
@@ -1348,8 +1349,13 @@ static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
             ctxt->msr[i]._rsvd = 0;
 
         if ( ctxt->count )
+        {
+            /* Rewrite length to indicate how much space we actually used. */
+            d->length = HVM_CPU_MSR_SIZE(ctxt->count);
             h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+        }
         else
+            /* or rewind and remove the descriptor from the stream. */
             h->cur -= sizeof(struct hvm_save_descriptor);
     }
  





Re: [Xen-devel] [PATCH for-4.10] tools/libxc: Fix restoration of PV MSRs after migrate

2017-11-21 Thread Julien Grall

Hi,

On 11/16/2017 09:13 PM, Andrew Cooper wrote:

There are two bugs in process_vcpu_msrs() which clearly demonstrate that I
didn't test this bit of Migration v2 very well when writing it...

vcpu->msrsz is always expected to be a multiple of xen_domctl_vcpu_msr_t
records in a spec-compliant stream, so the modulo yields 0 for the msr_count,
rather than the actual number sent in the stream.
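
(Concretely: a spec-compliant stream carrying N records has
vcpu->msrsz == N * sizeof(xen_domctl_vcpu_msr_t), so the modulo yields 0
while the intended division yields N.)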

Passing 0 for the msr_count causes the hypercall to exit early, and hides the
fact that the guest handle is inserted into the wrong field in the domctl
union.

The reason that these bugs have gone unnoticed for so long is that the only
MSRs passed like this for PV guests are the AMD DBGEXT MSRs, which only exist
in fairly modern hardware, and whose use doesn't appear to be implemented in
any contemporary PV guests.

Signed-off-by: Andrew Cooper 
---
CC: Jan Beulich 
CC: Ian Jackson 
CC: Wei Liu 
CC: Julien Grall 

This wants backporting to all stable trees, so should also be considered for
inclusion into 4.10 at this point.


Release-acked-by: Julien Grall 

Cheers,


---
  tools/libxc/xc_sr_restore_x86_pv.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libxc/xc_sr_restore_x86_pv.c b/tools/libxc/xc_sr_restore_x86_pv.c
index 50e25c1..ed0fd0e 100644
--- a/tools/libxc/xc_sr_restore_x86_pv.c
+++ b/tools/libxc/xc_sr_restore_x86_pv.c
@@ -455,8 +455,8 @@ static int process_vcpu_msrs(struct xc_sr_context *ctx,
     domctl.cmd = XEN_DOMCTL_set_vcpu_msrs;
     domctl.domain = ctx->domid;
     domctl.u.vcpu_msrs.vcpu = vcpuid;
-    domctl.u.vcpu_msrs.msr_count = vcpu->msrsz % sizeof(xen_domctl_vcpu_msr_t);
-    set_xen_guest_handle(domctl.u.vcpuextstate.buffer, buffer);
+    domctl.u.vcpu_msrs.msr_count = vcpu->msrsz / sizeof(xen_domctl_vcpu_msr_t);
+    set_xen_guest_handle(domctl.u.vcpu_msrs.msrs, buffer);
 
     memcpy(buffer, vcpu->msr, vcpu->msrsz);
  





Re: [Xen-devel] [PATCH 02/16] SUPPORT.md: Add core functionality

2017-11-21 Thread George Dunlap
On 11/21/2017 08:03 AM, Jan Beulich wrote:
 On 13.11.17 at 16:41,  wrote:
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -16,6 +16,65 @@ for the definitions of the support status levels etc.
>>  
>>  # Feature Support
>>  
>> +## Memory Management
>> +
>> +### Memory Ballooning
>> +
>> +Status: Supported
> 
> Is this a proper feature in the context we're talking about? To me
> it's meaningful in guest OS context only. I also wouldn't really
> consider it "core", but placement within the series clearly is a minor
> aspect.
> 
> I'd prefer this to be dropped altogether as a feature, but

This doesn't make any sense to me.  Allowing a guest to modify its own
memory requires a *lot* of support, spread throughout the hypervisor;
and there are a huge number of recent security holes that would have
been much more difficult to exploit if guests didn't have the ability to
balloon up or down.

If what you mean is *specifically* the technique of making a "memory
balloon" to trick the guest OS into handing back memory without knowing
it, then it's just a matter of semantics.  We could call this "dynamic
memory control" or something like that if you prefer (although we'd have
to mention ballooning in the description to make sure people can find it).

> Acked-by: Jan Beulich 
> is independent of that.
> 
>> +### Credit2 Scheduler
>> +
>> +Status: Supported
> 
> Sort of unrelated, but with this having been the case since 4.8 as it
> looks, is there a reason it still isn't the default scheduler?
Well, first of all it was missing some features which credit1 had:
namely, soft affinity (required for host NUMA awareness) and caps.
These were checked in during this release cycle; but we also wanted to
switch the default at the beginning of a development cycle, to get the
best chance of shaking out any weird bugs.

So according to those criteria, we could switch to credit2 being the
default scheduler as soon as the 4.10 development window opens.

At some point recently Dario said there was still some unusual behavior
he wanted to dig into; but with him not working for Citrix anymore, it's
doubtful we'll have the resources to take that up; the best option might
be to just pull the lever and see what happens.
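
(For anyone wanting to experiment before the default changes, credit2
can already be selected explicitly. The hypervisor command-line option
is real; the cpupool line is a sketch from memory and may instead
require a cpupool config file:)

    # Xen command line: make credit2 the system-wide default scheduler
    sched=credit2

    # Or run a subset of pCPUs under credit2 via a cpupool
    xl cpupool-create name="credit2-pool" sched="credit2"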

 -George



[Xen-devel] [xen-4.7-testing test] 116377: regressions - trouble: blocked/broken/fail/pass

2017-11-21 Thread osstest service owner
flight 116377 xen-4.7-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116377/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-rumprun-amd64 broken
 test-amd64-amd64-libvirt broken
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsmbroken
 test-amd64-amd64-xl-qcow2broken
 test-amd64-amd64-libvirt  4 host-install(4)broken REGR. vs. 116348
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 4 host-install(4) broken REGR. vs. 116348
 test-amd64-amd64-xl-qcow2 4 host-install(4)broken REGR. vs. 116348
 test-amd64-amd64-rumprun-amd64  4 host-install(4)  broken REGR. vs. 116348
 test-xtf-amd64-amd64-3 49 xtf/test-hvm64-lbr-tsx-vmentry fail REGR. vs. 116348
 build-armhf-xsm   6 xen-buildfail REGR. vs. 116348

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds 12 guest-start  fail  like 116219
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116321
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 116321
 test-xtf-amd64-amd64-2  49 xtf/test-hvm64-lbr-tsx-vmentry fail like 116348
 test-xtf-amd64-amd64-1  49 xtf/test-hvm64-lbr-tsx-vmentry fail like 116348
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116348
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail like 116348
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 116348
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116348
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116348
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 116348
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116348
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 116348
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 xen  bcc9e245aafbdae44c761053c898bedb3582cc4d
baseline version:
 xen  259a5c3000d840f244dbb30f2b47b95f2dc0f80f

Last test of basis   116348  2017-11-19 18:52:56 Z1 days
Testing same since   116377  2017-11-20 15:14:35 Z0 days1 attempts


People who touched revisions under test:
  Jan Beulich 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  fail
 build-i386-xsm   pass
 build-amd64-xtf  pass 

[Xen-devel] [xen-4.5-testing baseline-only test] 72472: regressions - FAIL

2017-11-21 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 72472 xen-4.5-testing real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72472/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt  16 guest-saverestore.2   fail REGR. vs. 72359
 test-armhf-armhf-xl-midway   19 leak-check/check  fail REGR. vs. 72359
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1  fail REGR. vs. 72359
 test-amd64-amd64-xl-qemuu-winxpsp3 16 guest-localmigrate/x10 fail REGR. vs. 72359

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-rumprun-amd64 17 rumprun-demo-xenstorels/xenstorels.repeat fail REGR. vs. 72359

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2 15 guest-saverestore.2fail baseline untested
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1   fail blocked in 72359
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   like 72359
 test-amd64-amd64-xl-rtds  7 xen-boot fail   like 72359
 test-xtf-amd64-amd64-3   60 leak-check/check fail   like 72359
 test-xtf-amd64-amd64-4   60 leak-check/check fail   like 72359
 test-xtf-amd64-amd64-5   60 leak-check/check fail   like 72359
 test-xtf-amd64-amd64-1   60 leak-check/check fail   like 72359
 test-xtf-amd64-amd64-2   60 leak-check/check fail   like 72359
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 72359
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-localmigrate  fail like 72359
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail like 72359
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 72359
 test-amd64-amd64-xl-qemut-winxpsp3 10 windows-install  fail like 72359
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-midway   13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-midway   14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-installfail never pass
 test-xtf-amd64-amd64-3   59 xtf/test-hvm64-xsa-195   fail   never pass
 test-xtf-amd64-amd64-4   59 xtf/test-hvm64-xsa-195   fail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-xtf-amd64-amd64-5   59 xtf/test-hvm64-xsa-195   fail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-xtf-amd64-amd64-1   59 xtf/test-hvm64-xsa-195   fail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-xtf-amd64-amd64-2   59 xtf/test-hvm64-xsa-195   fail   never pass
 test-armhf-armhf-libvirt-raw 11 guest-start  fail   never pass
 test-armhf-armhf-xl-vhd  11 guest-start  fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-amd64-i386-xl-qemut-win10-i386 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 17 guest-stop fail never pass

version targeted for testing:
 xen  41f6dd05d10fd1b4281c1722e2d8f29e378abe9a
baseline version:
 xen  08aa260dd172de625ecc2b64b78b1aa68de1f472

Last test of basis72359  2017-10-26 15:14:21 Z   25 days
Testing same since72472  2017-11-20 16:15:40 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  George Dunlap 
  Jan Beulich 

jobs:
 build-amd64-xtf  pass
 build-amd64  pass
 build-armhf  pass
 buil

Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-21 Thread Juergen Gross
On 21/11/17 09:46, Jan Beulich wrote:
 On 21.11.17 at 09:13,  wrote:
>> On 21/11/17 08:50, Jan Beulich wrote:
>> On 20.11.17 at 19:28,  wrote:
 On 20/11/17 17:14, Jan Beulich wrote:
 On 20.11.17 at 16:24,  wrote:
>> So without my patch the RSDP table is loaded e.g. at about 6.5MB when
>> I'm using grub2 (the loaded grub image is about 5.5MB in size and it
>> is being loaded at 1MB).
>>
>> When I'm using the PVH Linux kernel directly RSDP is just below 1MB
>> due to pure luck (the bzImage loader is still using the PV specific
>> ELF notes and this results in the loader believing RSDP is loadable
>> at this address, which is true, but the tests used to come to this
>> conclusion are just not applicable for PVH).
>>
>> So in your opinion we should revoke the PVH support from Xen 4.10,
>> Linux and maybe BSD because RSDP is loaded in middle of RAM of the
>> guest?
>
> So what's wrong with it being put wherever the next free memory
> location is being determined to be by the loader, just like is being
> done for other information, including modules (if any)?

 The RSDP table is marked as "Reserved" in the memory map. So putting it
 somewhere in the middle of the guest's memory will force the guest to
 use 4kB pages instead of 2MB or even 1GB pages. I'd really like to avoid
 this problem, as we've been hit by the very same in HVM guests before
 causing quite measurable performance drops.
>>>
>>> This is a valid point.
>>>
 So I'd rather put it in the first MB as most kernels have to deal with
 small pages at beginning of RAM today. An alternative would be to put
 it just below 4GB where e.g. the console and Xenstore page are located.
>>>
>>> Putting it in the first Mb implies that mappings there will continue to
>>> be 4k ones. I can't, however, see why for PVH that should be
>>> necessary: There's no BIOS and nothing legacy that needs to live
>>> there, so other than HVM it could benefit from using a 1Gb mapping
>>> even at address zero (even if this might be something that can't
>>> be achieved right away). So yes, if anything, the allocation should
>>> be made top down starting from 4Gb. Otoh, I don't see a strict
>>> need for this area to live below 4Gb in the first place.
>>
>> The physical RSDP address in the PVH start info block is 32 bits
>> only. So it can't be above 4GB.
> 
> struct hvm_start_info {
>     uint32_t magic;             /* Contains the magic value 0x336ec578       */
>                                 /* ("xEn3" with the 0x80 bit of the "E" set).*/
>     uint32_t version;           /* Version of this structure.                */
>     uint32_t flags;             /* SIF_xxx flags.                            */
>     uint32_t nr_modules;        /* Number of modules passed to the kernel.   */
>     uint64_t modlist_paddr;     /* Physical address of an array of           */
>                                 /* hvm_modlist_entry.                        */
>     uint64_t cmdline_paddr;     /* Physical address of the command line.     */
>     uint64_t rsdp_paddr;        /* Physical address of the RSDP ACPI data    */
>                                 /* structure.                                */
> };

Oh, it seems I have been looking at an outdated document. Thanks for
the correction.

> Granted a comment a few lines up in the public header says "NB: Xen
> on x86 will always try to place all the data below the 4GiB boundary."

Okay.


Juergen



[Xen-devel] [linux-linus test] 116372: regressions - trouble: broken/fail/pass

2017-11-21 Thread osstest service owner
flight 116372 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116372/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd broken
 test-amd64-amd64-xl-xsm  broken
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsmbroken
 test-amd64-amd64-xl-multivcpu broken
 test-amd64-i386-qemuu-rhel6hvm-amd broken
 test-amd64-amd64-xl-xsm   4 host-install(4)broken REGR. vs. 115643
 test-amd64-i386-qemuu-rhel6hvm-amd  4 host-install(4)  broken REGR. vs. 115643
 test-amd64-i386-qemut-rhel6hvm-amd  4 host-install(4)  broken REGR. vs. 115643
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 4 host-install(4) broken REGR. vs. 115643
 test-amd64-amd64-xl-multivcpu  4 host-install(4)   broken REGR. vs. 115643
 test-amd64-amd64-xl-qemuu-win10-i386  7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-pygrub   7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-freebsd10-amd64  7 xen-boot  fail REGR. vs. 115643
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 
115643
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-pvhv2-amd  7 xen-bootfail REGR. vs. 115643
 test-amd64-i386-freebsd10-i386  7 xen-boot   fail REGR. vs. 115643
 test-amd64-i386-pair 10 xen-boot/src_hostfail REGR. vs. 115643
 test-amd64-i386-pair 11 xen-boot/dst_hostfail REGR. vs. 115643
 test-amd64-i386-xl-xsm7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 
115643
 test-amd64-i386-libvirt-xsm   7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-qcow2 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-rumprun-i386  7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-raw7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-libvirt-qcow2  7 xen-bootfail REGR. vs. 115643
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-examine   8 reboot   fail REGR. vs. 115643
 test-amd64-amd64-xl-credit2   7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-i386-pvgrub  7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-boot  fail REGR. vs. 115643
 test-amd64-amd64-xl-qemut-win7-amd64  7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-saverestore fail REGR. vs. 115643

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 115643
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 115643
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 115643
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 115643
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115643
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 115643
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 115643
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf

Re: [Xen-devel] [PATCH 16/16] SUPPORT.md: Add limits RFC

2017-11-21 Thread Jan Beulich
>>> On 13.11.17 at 16:41,  wrote:
> +### Virtual CPUs
> +
> +Limit, x86 PV: 8192
> +Limit-security, x86 PV: 32
> +Limit, x86 HVM: 128
> +Limit-security, x86 HVM: 32

Personally I consider the "Limit-security" numbers too low here, but
I have no proof that higher numbers will work _in all cases_.

> +### Virtual RAM
> +
> +Limit-security, x86 PV: 2047GiB

I think this needs splitting for 64-bit and 32-bit (the latter can go
up to 168GiB only on hosts with no memory past the 168GiB boundary,
and only up to 128GiB on larger ones, without this being a processor
architecture limitation).

> +### Event Channel FIFO ABI
> +
> +Limit: 131072

Are we certain this is a security-supportable limit? There is at least
one loop (in get_free_port()) which can potentially have this number
of iterations.
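
For scale, a sketch of such an O(limit) scan (illustrative only;
EVTCHN_FIFO_NR_CHANNELS is the real ABI constant, the rest is hypothetical):

#include <stdbool.h>

#define EVTCHN_FIFO_NR_CHANNELS (1 << 17)   /* 131072 ports */

/* Hypothetical per-domain port table; a linear search for a free
 * slot can take up to 2^17 iterations, which is what makes the
 * limit's security supportability worth questioning. */
static bool port_in_use[EVTCHN_FIFO_NR_CHANNELS];

static int find_free_port(void)
{
    for (int p = 0; p < EVTCHN_FIFO_NR_CHANNELS; p++)
        if (!port_in_use[p])
            return p;
    return -1;   /* no free port */
}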

That's already leaving aside the one in the 'e' key handler. Speaking
of which - I think we should state somewhere that there's no security
support if any key whatsoever was sent to Xen via the console or
the sysctl interface.

And more generally - surely there are items that aren't present in
the series and that no-one can realistically spot right away. What do
we mean to imply for functionality not covered in the doc? One thing
that comes to mind here is certain command line options, an example
being "sync_console": the description states "not suitable for
production environments", but I think this should be tightened to
exclude security support.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86

2017-11-21 Thread Paul Durrant
> -Original Message-
[snip]
> > +### PV keyboard (frontend)
> > +
> > +Status, Linux (xen-kbdfront): Supported
> > +Status, Windows: Supported
> > +
> > +Guest-side driver capable of speaking the Xen PV keyboard protocol
> 
> Are these three active/usable in guests regardless of whether the
> guest is being run as PV, PVH, or HVM? If not, wouldn't this need
> spelling out?
> 

I believe the necessary patches to make the PV vkbd protocol usable
independently of vfb are at least queued for upstream QEMU.

Stefano, am I correct?

Cheers,

  Paul

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 2/2 v2] xen: Fix 16550 UART console for HP Moonshot (Aarch64) platform

2017-11-21 Thread Bhupinder Thakur

Hi,


On Thursday 16 November 2017 03:26 PM, George Dunlap wrote:

On Nov 15, 2017, at 9:20 PM, Konrad Rzeszutek Wilk  
wrote:

On Thu, Nov 09, 2017 at 03:49:24PM +0530, Bhupinder Thakur wrote:

The console was not working on HP Moonshot (HPE ProLiant Aarch64) because
the UART registers were accessed at 8-bit aligned addresses. However, the
registers are 32-bit aligned on HP Moonshot.

Since the ACPI/SPCR table does not specify the register shift to be applied
to the register offset, this patch implements an erratum to correctly set
the register shift for HP Moonshot.

Similar erratum was implemented in linux:

commit 79a648328d2a604524a30523ca763fbeca0f70e3
Author: Loc Ho 
Date:   Mon Jul 3 14:33:09 2017 -0700

ACPI: SPCR: Workaround for APM X-Gene 8250 UART 32-alignment errata

APM X-Gene version 1 and 2 have an 8250 UART with its registers
aligned to 32 bits. In addition, the latest released BIOS
encodes the access field as 8-bit access instead of 32-bit access.
This causes no console with ACPI boot, as the console
will not match the X-Gene UART port due to the lack of the
mmio32 option.

Signed-off-by: Loc Ho 
Acked-by: Greg Kroah-Hartman 
Signed-off-by: Rafael J. Wysocki 
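
For reference, a minimal sketch of what such a register shift amounts to,
assuming 32-bit MMIO accessors (names are illustrative, not the actual Xen
driver's):

#include <stdint.h>

/* 8250/16550 registers are indexed 0..7. On byte-aligned hardware
 * the index is the offset (shift = 0); on 32-bit-aligned hardware
 * such as X-Gene or Moonshot it must be scaled (shift = 2). */
static inline uint32_t uart_read(volatile uint8_t *base,
                                 unsigned int reg, unsigned int shift)
{
    return *(volatile uint32_t *)(base + (reg << shift));
}

static inline void uart_write(volatile uint8_t *base, unsigned int reg,
                              unsigned int shift, uint32_t val)
{
    *(volatile uint32_t *)(base + (reg << shift)) = val;
}

Since SPCR carries no shift field, the erratum's job is simply to force
shift = 2 when the table identifies an affected platform.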

Any particular reason you offset this whole commit description by four spaces?

For some reason I get this effect when I use “git show” to look at a
changeset. Bhupinder, did you perhaps export a changeset as a patch using
“git show” and then re-import it?

In any case, this needs to be fixed.

Yes, I copied the commit message from “git show”. I will align the text.

  -George


Regards,
Bhupinder

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough

2017-11-21 Thread Jan Beulich
>>> On 13.11.17 at 16:41,  wrote:
> +### x86/PCI Device Passthrough
> +
> +Status: Supported, with caveats

I think this wants to be

### PCI Device Passthrough

Status, x86 HVM: Supported, with caveats
Status, x86 PV: Supported, with caveats

to (a) allow extending this later for ARM and (b) exclude PVH (assuming
that its absence means non-existent code).

> +Only systems using IOMMUs will be supported.
> +
> +Not compatible with migration, altp2m, introspection, memory sharing, or 
> memory paging.

And PoD, iirc.

With these adjustments (or substantially similar ones)
Acked-by: Jan Beulich 

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [seabios test] 116373: regressions - trouble: broken/fail/pass

2017-11-21 Thread osstest service owner
flight 116373 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116373/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-amd broken
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 115539

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  4 host-install(4)broken pass in 116346

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115539
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 115539
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 115539
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 seabios  df46d10c8a7b88eb82f3ceb2aa31782dee15593d
baseline version:
 seabios  0ca6d6277dfafc671a5b3718cbeb5c78e2a888ea

Last test of basis   115539  2017-11-03 20:48:58 Z   17 days
Failing since        115733  2017-11-10 17:19:59 Z   10 days   17 attempts
Testing same since   116211  2017-11-16 00:20:45 Z5 days7 attempts


People who touched revisions under test:
  Kevin O'Connor 
  Stefan Berger 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-qemuu-nested-amdfail
 test-amd64-i386-qemuu-rhel6hvm-amd   broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-ws16-amd64 fail
 test-amd64-i386-xl-qemuu-ws16-amd64  fail
 test-amd64-amd64-xl-qemuu-win10-i386 fail
 test-amd64-i386-xl-qemuu-win10-i386  fail
 test-amd64-amd64-qemuu-nested-intel  pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-qemuu-rhel6hvm-amd broken
broken-step test-amd64-i386-qemuu-rhel6hvm-amd host-install(4)

Not pushing.


commit df46d10c8a7b88eb82f3ceb2aa31782dee15593d
Author: Stefan Berger 
Date:   Tue Nov 14 15:03:47 2017 -0500

tpm: Add support for TPM2 ACPI table

Add support for the TPM2 ACPI table. If we find it and it's
of the appropriate size, we can get the log_area_start_address
and log_area_minimum_size from it.

The latest version of the spec can be found here:

https://trustedcomputinggroup.org/tcg-acpi-specification/

Signed-off-by: Stefan Berger 
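
The shape of that check, sketched under the assumption of the TCG ACPI
layout with the optional log-area fields (field names here are illustrative,
not SeaBIOS's actual definitions):

#include <stdint.h>
#include <string.h>

struct tpm2_acpi_table {
    char     signature[4];           /* "TPM2" */
    uint32_t length;                 /* whole-table length in bytes */
    uint8_t  header_rest[28];        /* remainder of the ACPI header */
    uint16_t platform_class;
    uint16_t reserved;
    uint64_t control_area;
    uint32_t start_method;
    uint8_t  start_method_params[12];
    uint32_t log_area_minimum_size;  /* optional trailing fields */
    uint64_t log_area_start_address;
} __attribute__((packed));

/* Use the log fields only if the table is long enough to contain them. */
static int tpm2_get_log(const struct tpm2_acpi_table *t,
                        uint64_t *start, uint32_t *min_size)
{
    if (memcmp(t->signature, "TPM2", 4) != 0 || t->length < sizeof(*t))
        return -1;
    *start = t->log_area_start_address;
    *min_size = t->log_area_minimum_size;
    return 0;
}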

commit 0541f2f0f246e77d7c726926976920e8072d1119
Author: Kevin O'Connor 
Date:   Fri Nov 10 12:20:35 2017 -0500

paravirt: Only enable sercon in NOGRAPHIC mode if no other console specified

Signed-off-by: Kevin O'Connor 

commit 9ce6778f08c632c52b25bc8f754291ef18710d53
Author: Kevin O'Connor 
Date:  

Re: [Xen-devel] [PATCH 13/16] SUPPORT.md: Add secondary memory management features

2017-11-21 Thread Jan Beulich
>>> On 13.11.17 at 16:41,  wrote:
> Signed-off-by: George Dunlap 

Wouldn't PoD belong here too? With that added as supported on x86 HVM:
Acked-by: Jan Beulich 

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 12/16] SUPPORT.md: Add Security-related features

2017-11-21 Thread Jan Beulich
>>> On 13.11.17 at 16:41,  wrote:
> With the exception of driver domains, which depend on PCI passthrough,
> and will be introduced later.
> 
> Signed-off-by: George Dunlap 

Shouldn't we also explicitly exclude tool stack disaggregation here,
with reference to XSA-77?

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 11/16] SUPPORT.md: Add 'easy' HA / FT features

2017-11-21 Thread Jan Beulich
>>> On 13.11.17 at 16:41,  wrote:
> +### x86/vMCE
> +
> +Status: Supported
> +
> +Forward Machine Check Exceptions to Appropriate guests

Acked-by: Jan Beulich 
perhaps with the A converted to lower case.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-mortem

2017-11-21 Thread Jan Beulich
>>> On 13.11.17 at 16:41,  wrote:
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -152,6 +152,35 @@ Output of information in machine-parseable JSON format
>  
>  Status: Supported, Security support external
>  
> +## Debugging, analysis, and crash post-mortem
> +
> +### gdbsx
> +
> +Status, x86: Supported
> +
> +Debugger to debug ELF guests
> +
> +### Soft-reset for PV guests
> +
> +Status: Supported
> +
> +Soft-reset allows a new kernel to start 'from scratch' with a fresh VM 
> state, 
> +but with all the memory from the previous state of the VM intact.
> +This is primarily designed to allow "crash kernels", 
> +which can do core dumps of memory to help with debugging in the event of a 
> crash.
> +
> +### xentrace
> +
> +Status, x86: Supported
> +
> +Tool to capture Xen trace buffer data
> +
> +### gcov
> +
> +Status: Supported, Not security supported

I agree with excluding security support here, but why wouldn't the
same be the case for gdbsx and xentrace?

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-21 Thread Jan Beulich
>>> On 21.11.17 at 09:13,  wrote:
> On 21/11/17 08:50, Jan Beulich wrote:
> On 20.11.17 at 19:28,  wrote:
>>> On 20/11/17 17:14, Jan Beulich wrote:
>>> On 20.11.17 at 16:24,  wrote:
> So without my patch the RSDP table is loaded e.g. at about 6.5MB when
> I'm using grub2 (the loaded grub image is about 5.5MB in size and is
> loaded at 1MB).
>
> When I'm using the PVH Linux kernel directly, the RSDP is just below
> 1MB by pure luck (the bzImage loader is still using the PV-specific
> ELF notes, and this results in the loader believing the RSDP is
> loadable at this address, which is true, but the tests used to come
> to this conclusion are just not applicable to PVH).
>
> So in your opinion we should revoke PVH support from Xen 4.10,
> Linux and maybe BSD because the RSDP is loaded in the middle of the
> guest's RAM?

 So what's wrong with putting it at whatever the loader determines to
 be the next free memory location, just as is done for other
 information, including modules (if any)?
>>>
>>> The RSDP table is marked as "Reserved" in the memory map. So putting it
>>> somewhere in the middle of the guest's memory will force the guest to
>>> use 4kB pages instead of 2MB or even 1GB ones. I'd really like to avoid
>>> this problem, as we've been hit by the very same one in HVM guests
>>> before, causing quite measurable performance drops.
>> 
>> This is a valid point.
>> 
>>> So I'd rather put it in the first MB, as most kernels have to deal with
>>> small pages at the beginning of RAM today. An alternative would be to
>>> put it just below 4GB, where e.g. the console and Xenstore page are
>>> located.
>> 
>> Putting it in the first MB implies that mappings there will continue
>> to be 4k ones. I can't, however, see why that should be necessary
>> for PVH: there's no BIOS and nothing legacy that needs to live
>> there, so unlike HVM it could benefit from using a 1GB mapping
>> even at address zero (even if this might be something that can't
>> be achieved right away). So yes, if anything, the allocation should
>> be made top down starting from 4GB. Otoh, I don't see a strict
>> need for this area to live below 4GB in the first place.
> 
> The physical RSDP address in the PVH start info block is 32 bits
> only. So it can't be above 4GB.

struct hvm_start_info {
    uint32_t magic;         /* Contains the magic value 0x336ec578       */
                            /* ("xEn3" with the 0x80 bit of the "E" set).*/
    uint32_t version;       /* Version of this structure.                */
    uint32_t flags;         /* SIF_xxx flags.                            */
    uint32_t nr_modules;    /* Number of modules passed to the kernel.   */
    uint64_t modlist_paddr; /* Physical address of an array of           */
                            /* hvm_modlist_entry.                        */
    uint64_t cmdline_paddr; /* Physical address of the command line.     */
    uint64_t rsdp_paddr;    /* Physical address of the RSDP ACPI data    */
                            /* structure.                                */
};

Granted, a comment a few lines up in the public header says "NB: Xen
on x86 will always try to place all the data below the 4GiB boundary."
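
As an aside, a minimal sketch of how a PVH guest entry path might pick the
RSDP up from this block (hypothetical guest-side code; only the magic
constant and the field layout come from the public header above):

#include <stdint.h>

#define XEN_HVM_START_MAGIC_VALUE 0x336ec578U

struct hvm_start_info {   /* as laid out above, comments omitted */
    uint32_t magic;
    uint32_t version;
    uint32_t flags;
    uint32_t nr_modules;
    uint64_t modlist_paddr;
    uint64_t cmdline_paddr;
    uint64_t rsdp_paddr;
};

/* At PVH entry, %ebx holds the physical address of hvm_start_info. */
static uint64_t find_rsdp(const struct hvm_start_info *si)
{
    if (si->magic != XEN_HVM_START_MAGIC_VALUE)
        return 0;            /* not entered via the PVH boot path */
    return si->rsdp_paddr;   /* a 64-bit field, even if currently
                                kept below the 4GiB boundary */
}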

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

