[Xen-devel] [linux-linus test] 115153: regressions - FAIL

2017-10-23 Thread osstest service owner
flight 115153 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115153/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop   fail REGR. vs. 114682

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail like 114658
 test-armhf-armhf-libvirt     14 saverestore-support-check  fail like 114682
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop          fail like 114682
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check  fail like 114682
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop         fail like 114682
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop         fail like 114682
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop          fail like 114682
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check  fail like 114682
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop         fail like 114682
 test-amd64-amd64-libvirt     13 migrate-support-check      fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check      fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check      fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check  fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check     fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check      fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl          13 migrate-support-check      fail never pass
 test-armhf-armhf-xl          14 saverestore-support-check  fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-xsm      13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-xsm      14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check  fail never pass
 test-amd64-i386-libvirt      13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check  fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check      fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop          fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop          fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install     fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install    fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install     fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check    fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install    fail never pass

version targeted for testing:
 linux                06987dad0a563e406e7841df0f8759368523714f
baseline version:
 linux                ebe6e90ccc6679cb01d2b280e4b61e6092d4bedb

Last test of basis   114682  2017-10-18 09:54:11 Z    5 days
Failing since        114781  2017-10-20 01:00:47 Z    4 days    7 attempts
Testing same since   115153  2017-10-23 15:53:43 Z    0 days    1 attempts


313 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvops  

[Xen-devel] Is that possible to merge MBA into Xen 4.10?

2017-10-23 Thread Yi Sun
Hi, all,

As you may know, the MBA patch set received enough Reviewed-by/Acked-by tags last week.
It is ready to be merged.

This is a feature for Skylake. Intel has already launched Skylake and KVM already
supports MBA, so including it in Xen 4.10 would quickly close this gap.

MBA missed the 4.10 feature freeze by only a few days, due to a lack of
timely review of earlier versions, which slowed down the patch iteration
notably.
The maintainers seem to be very busy at the moment, so review progress for 4.10
has been slower than before. I am therefore wondering whether it is still possible
to merge it into 4.10.

This patch set mainly touches PSR-related code in
tools/domctl/sysctl/hypervisor.
It does not touch other features, so the risk of merging it is low.

Thank you!

BRs,
Sun Yi

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [qemu-mainline test] 115162: regressions - FAIL

2017-10-23 Thread osstest service owner
flight 115162 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115162/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-xsm                6 xen-build                fail REGR. vs. 114507
 build-i386                    6 xen-build                fail REGR. vs. 114507
 build-amd64-xsm               6 xen-build                fail REGR. vs. 114507
 build-amd64                   6 xen-build                fail REGR. vs. 114507
 build-armhf-xsm               6 xen-build                fail REGR. vs. 114507
 build-armhf                   6 xen-build                fail REGR. vs. 114507

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win10-i386  1 build-check(1)  blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl            1 build-check(1)   blocked  n/a
 build-i386-libvirt            1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 

[Xen-devel] [qemu-mainline bisection] complete build-amd64

2017-10-23 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job build-amd64
testid xen-build

Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  e822e81e350825dd94f41ee2538ff1432b812eb9
  Bug not present: 5bac3c39c82e149515c10643acafd1d292433775
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/115166/


  (Revision log too long, omitted.)


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-amd64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/qemu-mainline/build-amd64.xen-build 
--summary-out=tmp/115166.bisection-summary --basis-template=114507 
--blessings=real,real-bisect qemu-mainline build-amd64 xen-build
Searching for failure / basis pass:
 115141 fail [host=merlot0] / 114507 [host=godello0] 114475 [host=godello0] 
114409 [host=rimava1] 114279 [host=godello0] 114148 [host=baroque0] 114106 
[host=italia0] 114083 [host=nobling1] 114042 [host=baroque1] 113974 
[host=italia0] 113964 [host=huxelrebe0] 113876 [host=huxelrebe0] 113864 
[host=huxelrebe0] 113852 [host=huxelrebe1] 113839 [host=huxelrebe1] 113817 
[host=baroque0] 113784 [host=rimava1] 113780 [host=baroque1] 113769 
[host=godello1] 113743 [host=godello0] 113711 [host=nobling1] 113689 
[host=baroque1] 113659 [host=godello1] 113646 [host=godello0] 113626 
[host=baroque0] 113613 [host=nobling1] 113607 [host=huxelrebe1] 113596 
[host=nocera1] 113586 [host=huxelrebe1] 113580 [host=nobling1] 113560 
[host=baroque0] 113545 [host=baroque0] 113527 [host=nobling1] 113512 
[host=godello1] 113490 [host=baroque0] 113464 [host=godello1] 113432 
[host=godello0] 113391 [host=nocera1] 113345 [host=godello0] 113302 
[host=baroque0] 113179 [host=nobling0] 113160 [host=huxelrebe1] 113148 
[host=chardonnay1] 112275 [host=huxelrebe0] 112263 [host=nobling1] 112217 
[host=godello1] 112155 [host=huxelrebe0] 112100 [host=pinot0] 112072 
[host=huxelrebe1] 112041 [host=fiano0] 112011 [host=huxelrebe1] 111986 
[host=godello1] 111963 [host=godello0] 111926 [host=godello1] 111889 
[host=rimava0] 111848 [host=nobling1] 111817 [host=rimava0] 111790 
[host=godello0] 111765 [host=fiano1] 111732 [host=rimava0] 111703 
[host=elbling1] 111667 [host=godello1] 111648 [host=nobling0] 111624 
[host=godello0] 111601 [host=nobling0] 111548 [host=godello1] 111522 
[host=nobling0] 111475 [host=huxelrebe1] 111379 [host=godello1] 111373 
[host=nobling1] 111359 [host=chardonnay1] 111265 [host=nobling1] 111211 
[host=godello0] 111092 [host=godello0] 111065 [host=godello1] 111000 
[host=godello1] 110968 [host=nobling1] 110925 [host=godello0] 110901 
[host=nobling0] 110478 [host=nobling1] 110458 [host=italia1] 110428 
[host=baroque0] 110401 [host=godello0] 110376 [host=godello1] 110340 
[host=godello1] 110268 [host=godello0] 110210 [host=godello1] 110161 
[host=nobling1] 110114 [host=rimava0] 110084 [host=rimava0] 110054 
[host=godello1] 110032 [host=godello1] 110022 [host=rimava0] 109975 
[host=italia1] 109954 [host=baroque0] 109928 [host=baroque0] 109898 
[host=italia1] 109862 [host=baroque1] 109711 [host=godello0] 109701 
[host=baroque1] 109664 [host=godello0] 109653 [host=godello0] 109613 
[host=godello0] 109583 [host=chardonnay1] 107644 [host=godello1] 107636 
[host=godello0] 107610 [host=godello1] 107598 [host=godello0] 107580 
[host=pinot0] 107572 [host=godello1] 107557 [host=godello1] 107542 
[host=godello1] 107531 [host=godello0] 107501 [host=chardonnay1] 107378 
[host=godello0] 107360 [host=fiano1] 107250 [host=godello1] 107219 
[host=italia1] 107196 [host=pinot1] 107166 [host=godello1] 107152 
[host=italia0] 107055 [host=baroque1] 107025 [host=godello0] 107011 
[host=fiano0] 106999 [host=baroque1] 106977 [host=huxelrebe0] 106965 
[host=huxelrebe0] 106941 [host=godello0] 106905 [host=rimava0] 106889 
[host=godello1] 106866 [host=huxelrebe0] 106828 [host=huxelrebe1] 106809 
[host=fiano1] 106793 [host=huxelrebe0] 106787 [host=godello1] 106767 
[host=fiano1] 106747 [host=chardonnay1] 106732 [host=godello0] 106718 
[host=huxelrebe0] 106702 [host=huxelrebe1] 106682 [host=godello0] 106641 ok.
Failure / basis pass flights: 115141 / 106641
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c8ea0457495342c417c3dc033bba25148b279f60 
e822e81e350825dd94f41ee2538ff1432b812eb9 
24fb44e971a62b345c7b6ca3c03b454a1e150abe
Basis pass 8b4834ee1202852ed83a9fc61268c65fb6961ea7 
5bac3c39c82e149515c10643acafd1d292433775 
9dc1e0cd81ee469d638d1962a92d9b4bd2972bfa
Generating revisions with ./adhoc-revtuple-generator  

[Xen-devel] [linux-4.9 test] 115140: regressions - FAIL

2017-10-23 Thread osstest service owner
flight 115140 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115140/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 114814

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-ws16-amd64 13 guest-saverestore fail in 115110 pass 
in 115140
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 16 guest-localmigrate/x10 fail 
in 115110 pass in 115140
 test-armhf-armhf-xl-arndale   6 xen-install      fail pass in 115110
 test-armhf-armhf-libvirt 16 guest-start/debian.repeat  fail pass in 115110

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail in 115110 never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail in 115110 never 
pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 114814
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop         fail like 114814
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start             fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start               fail never pass
 test-amd64-i386-libvirt      13 migrate-support-check      fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check      fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check      fail never pass
 test-amd64-amd64-libvirt     13 migrate-support-check      fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check     fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check      fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check      fail never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check    fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-xsm      13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-xsm      14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check      fail never pass
 test-armhf-armhf-xl          14 saverestore-support-check  fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop          fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check      fail never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check  fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check      fail never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop          fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install     fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install    fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install     fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install    fail never pass

version targeted for testing:
 linux                4d4a6a3f8a12602ce8dc800123715fe7b5c1c3a1
baseline version:
 linux                5d7a76acad403638f635c918cc63d1d44ffa4065

Last test of basis   114814  2017-10-20 20:51:56 Z    3 days
Testing same since   114845  2017-10-21 16:14:17 Z    2 days    5 attempts


People who touched revisions under test:
  Alex Deucher 
  Alexandre Belloni 
  Andrew Morton 
  Anoob Soman 
  Arnd Bergmann 
  Bart Van Assche 
  Ben Skeggs 
  Bin Liu 
  Borislav Petkov 
  Christoph Lameter 
  Christophe JAILLET 
  Coly 

Re: [Xen-devel] [PATCH V3 28/29] x86/vvtd: Add queued invalidation (QI) support

2017-10-23 Thread Tian, Kevin
> From: Gao, Chao
> Sent: Monday, October 23, 2017 4:52 PM
> 
> On Mon, Oct 23, 2017 at 09:57:16AM +0100, Roger Pau Monné wrote:
> >On Mon, Oct 23, 2017 at 03:50:24PM +0800, Chao Gao wrote:
> >> On Fri, Oct 20, 2017 at 12:20:06PM +0100, Roger Pau Monné wrote:
> >> >On Thu, Sep 21, 2017 at 11:02:09PM -0400, Lan Tianyu wrote:
> >> >> From: Chao Gao 
> >> >> +}
> >> >> +
> >> >> +unmap_guest_page((void*)qinval_page);
> >> >> +return ret;
> >> >> +
> >> >> + error:
> >> >> +unmap_guest_page((void*)qinval_page);
> >> >> +gdprintk(XENLOG_ERR, "Internal error in Queue Invalidation.\n");
> >> >> +domain_crash(vvtd->domain);
> >> >
> >> >Do you really need to crash the domain in such case?
> >>
> >> We reach here when the guest requests some operation that vvtd doesn't claim
> >> to support or emulate. I am afraid it can also be triggered by the guest.
> >> How about ignoring the invalidation request?
> >
> >What would real hardware do in such case?
> 
> After reading the spec again, I think hardware may generate a fault
> event; see VT-d spec 10.4.9, Fault Status Register:
> Hardware detected an error associated with the invalidation queue. This
> could be due to either a hardware error while fetching a descriptor from
> the invalidation queue, or hardware detecting an erroneous or invalid
> descriptor in the invalidation queue. At this time, a fault event may be
> generated based on the programming of the Fault Event Control register
> 

Please do proper emulation according to hardware behavior.
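
As an illustration of that suggestion, here is a rough sketch of what such an
error path could look like. It is not code from this series: struct vvtd and
DMA_FSTS_IQE follow the names I believe the patches and the VT-d driver use,
while vvtd_set_fault_status_bit(), vvtd_fault_event_masked() and
vvtd_inject_fault_event() are placeholder helpers invented for the example.

/* Hypothetical error path for a bad queued-invalidation descriptor.
 * Real VT-d hardware reports this via the IQE bit in the Fault Status
 * Register and an optional fault event; the guest keeps running. */
static int vvtd_handle_bad_qinval(struct vvtd *vvtd)
{
    /* Record the error in the emulated Fault Status Register (FSTS.IQE). */
    vvtd_set_fault_status_bit(vvtd, DMA_FSTS_IQE);

    /* Deliver a fault event only if the guest has not masked it in the
     * Fault Event Control register. */
    if ( !vvtd_fault_event_masked(vvtd) )
        vvtd_inject_fault_event(vvtd);

    /* Drop the offending request instead of calling domain_crash(). */
    return X86EMUL_OKAY;
}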

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v5 11/13] xen/pvcalls: implement poll command

2017-10-23 Thread Stefano Stabellini
On Tue, 17 Oct 2017, Boris Ostrovsky wrote:
> > +static unsigned int pvcalls_front_poll_passive(struct file *file,
> > +  struct pvcalls_bedata *bedata,
> > +  struct sock_mapping *map,
> > +  poll_table *wait)
> > +{
> > +   int notify, req_id, ret;
> > +   struct xen_pvcalls_request *req;
> > +
> > +   if (test_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
> > +(void *)&map->passive.flags)) {
> > +   uint32_t req_id = READ_ONCE(map->passive.inflight_req_id);
> > +
> > +   if (req_id != PVCALLS_INVALID_ID &&
> > +   READ_ONCE(bedata->rsp[req_id].req_id) == req_id)
> > +   return POLLIN | POLLRDNORM;
> 
> 
> Same READ_ONCE() question as for an earlier patch.

Same answer :-)


> > +
> > +   poll_wait(file, &map->passive.inflight_accept_req, wait);
> > +   return 0;
> > +   }
> > +
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v5 08/13] xen/pvcalls: implement accept command

2017-10-23 Thread Stefano Stabellini
On Tue, 17 Oct 2017, Boris Ostrovsky wrote:
> On 10/06/2017 08:30 PM, Stefano Stabellini wrote:
> > Introduce a waitqueue to allow only one outstanding accept command at
> > any given time and to implement polling on the passive socket. Introduce
> > a flags field to keep track of in-flight accept and poll commands.
> > 
> > Send PVCALLS_ACCEPT to the backend. Allocate a new active socket. Make
> > sure that only one accept command is executed at any given time by
> > setting PVCALLS_FLAG_ACCEPT_INFLIGHT and waiting on the
> > inflight_accept_req waitqueue.
> > 
> > Convert the new struct sock_mapping pointer into an uint64_t and use it
> > as id for the new socket to pass to the backend.
> > 
> > Check if the accept call is non-blocking: in that case after sending the
> > ACCEPT command to the backend store the sock_mapping pointer of the new
> > struct and the inflight req_id then return -EAGAIN (which will respond
> > only when there is something to accept). Next time accept is called,
> > we'll check if the ACCEPT command has been answered, if so we'll pick up
> > where we left off, otherwise we return -EAGAIN again.
> > 
> > Note that, differently from the other commands, we can use
> > wait_event_interruptible (instead of wait_event) in the case of accept
> > as we are able to track the req_id of the ACCEPT response that we are
> > waiting.
> > 
> > Signed-off-by: Stefano Stabellini 
> > CC: boris.ostrov...@oracle.com
> > CC: jgr...@suse.com
> > ---
> >  drivers/xen/pvcalls-front.c | 146 
> > 
> >  drivers/xen/pvcalls-front.h |   3 +
> >  2 files changed, 149 insertions(+)
> > 
> > diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
> > index 5433fae..8958e74 100644
> > --- a/drivers/xen/pvcalls-front.c
> > +++ b/drivers/xen/pvcalls-front.c
> > @@ -77,6 +77,16 @@ struct sock_mapping {
> >  #define PVCALLS_STATUS_BIND  1
> >  #define PVCALLS_STATUS_LISTEN2
> > uint8_t status;
> > +   /*
> > +* Internal state-machine flags.
> > +* Only one accept operation can be inflight for a socket.
> > +* Only one poll operation can be inflight for a given socket.
> > +*/
> > +#define PVCALLS_FLAG_ACCEPT_INFLIGHT 0
> > +   uint8_t flags;
> > +   uint32_t inflight_req_id;
> > +   struct sock_mapping *accept_map;
> > +   wait_queue_head_t inflight_accept_req;
> > } passive;
> > };
> >  };
> > @@ -392,6 +402,8 @@ int pvcalls_front_bind(struct socket *sock, struct 
> > sockaddr *addr, int addr_len)
> > memcpy(req->u.bind.addr, addr, sizeof(*addr));
> > req->u.bind.len = addr_len;
> >  
> > +   init_waitqueue_head(&map->passive.inflight_accept_req);
> > +
> > map->active_socket = false;
> >  
> > bedata->ring.req_prod_pvt++;
> > @@ -470,6 +482,140 @@ int pvcalls_front_listen(struct socket *sock, int 
> > backlog)
> > return ret;
> >  }
> >  
> > +int pvcalls_front_accept(struct socket *sock, struct socket *newsock, int 
> > flags)
> > +{
> > +   struct pvcalls_bedata *bedata;
> > +   struct sock_mapping *map;
> > +   struct sock_mapping *map2 = NULL;
> > +   struct xen_pvcalls_request *req;
> > +   int notify, req_id, ret, evtchn, nonblock;
> > +
> > +   pvcalls_enter();
> > +   if (!pvcalls_front_dev) {
> > +   pvcalls_exit();
> > +   return -ENOTCONN;
> > +   }
> > +   bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
> > +
> > +   map = (struct sock_mapping *) sock->sk->sk_send_head;
> > +   if (!map) {
> > +   pvcalls_exit();
> > +   return -ENOTSOCK;
> > +   }
> > +
> > +   if (map->passive.status != PVCALLS_STATUS_LISTEN) {
> > +   pvcalls_exit();
> > +   return -EINVAL;
> > +   }
> > +
> > +   nonblock = flags & SOCK_NONBLOCK;
> > +   /*
> > +* Backend only supports 1 inflight accept request, will return
> > +* errors for the others
> > +*/
> > +   if (test_and_set_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
> > +(void *)&map->passive.flags)) {
> > +   req_id = READ_ONCE(map->passive.inflight_req_id);
> > +   if (req_id != PVCALLS_INVALID_ID &&
> > +   READ_ONCE(bedata->rsp[req_id].req_id) == req_id) {
> 
> 
> READ_ONCE (especially the second one)? I know I may sound fixated on
> this, but I really don't understand how the compiler could do anything wrong if
> straight reads were used.
> 
> For the first case, I guess, theoretically the compiler may decide to
> re-fetch map->passive.inflight_req_id. But even if it did, would that be
> a problem? Both of these READ_ONCE targets are updated below before
> PVCALLS_FLAG_ACCEPT_INFLIGHT is cleared so there should not be any
> change between re-fetching, I think. (The only exception is the nonblock
> case, which does a WRITE_ONCE that I don't understand either.)

READ_ONCE is reasonably cheap: do we really want to have this 
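
For illustration, here is a small self-contained userspace sketch of the hazard
being discussed. This is not code from the patch; a volatile access stands in
for the kernel's READ_ONCE(), and the names are invented for the example.

#include <stdint.h>

/* userspace stand-in for the kernel's READ_ONCE() */
#define READ_ONCE_U32(p) (*(const volatile uint32_t *)(p))

#define INVALID_ID UINT32_MAX

struct resp { uint32_t req_id; };

/* Returns 1 when the response slot matches the in-flight request id.
 * Loading req_id exactly once guarantees that the value which passed the
 * INVALID_ID check is the same value used to index rsp[]; with a plain
 * read the compiler is allowed to reload it in between. */
int response_ready(const uint32_t *inflight_req_id, const struct resp *rsp)
{
    uint32_t req_id = READ_ONCE_U32(inflight_req_id);

    return req_id != INVALID_ID && rsp[req_id].req_id == req_id;
}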

Re: [Xen-devel] [PATCH v5 02/13] xen/pvcalls: implement frontend disconnect

2017-10-23 Thread Stefano Stabellini
On Tue, 17 Oct 2017, Boris Ostrovsky wrote:
> On 10/06/2017 08:30 PM, Stefano Stabellini wrote:
> > Introduce a data structure named pvcalls_bedata. It contains pointers to
> > the command ring, the event channel, a list of active sockets and a list
> > of passive sockets. Lists accesses are protected by a spin_lock.
> >
> > Introduce a waitqueue to allow waiting for a response on commands sent
> > to the backend.
> >
> > Introduce an array of struct xen_pvcalls_response to store commands
> > responses.
> >
> > pvcalls_refcount is used to keep count of the outstanding pvcalls users.
> > Only remove connections once the refcount is zero.
> >
> > Implement pvcalls frontend removal function. Go through the list of
> > active and passive sockets and free them all, one at a time.
> >
> > Signed-off-by: Stefano Stabellini 
> > CC: boris.ostrov...@oracle.com
> > CC: jgr...@suse.com
> > ---
> >  drivers/xen/pvcalls-front.c | 67 
> > +
> >  1 file changed, 67 insertions(+)
> >
> > diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
> > index a8d38c2..d8b7a04 100644
> > --- a/drivers/xen/pvcalls-front.c
> > +++ b/drivers/xen/pvcalls-front.c
> > @@ -20,6 +20,46 @@
> >  #include 
> >  #include 
> >  
> > +#define PVCALLS_INVALID_ID UINT_MAX
> > +#define PVCALLS_RING_ORDER XENBUS_MAX_RING_GRANT_ORDER
> > +#define PVCALLS_NR_REQ_PER_RING __CONST_RING_SIZE(xen_pvcalls, 
> > XEN_PAGE_SIZE)
> > +
> > +struct pvcalls_bedata {
> > +   struct xen_pvcalls_front_ring ring;
> > +   grant_ref_t ref;
> > +   int irq;
> > +
> > +   struct list_head socket_mappings;
> > +   struct list_head socketpass_mappings;
> > +   spinlock_t socket_lock;
> > +
> > +   wait_queue_head_t inflight_req;
> > +   struct xen_pvcalls_response rsp[PVCALLS_NR_REQ_PER_RING];
> 
> Did you mean _REQ_ or _RSP_ in the macro name?

For each request there is one response, so it doesn't make a difference.
But for clarity, I will rename.


> > +};
> > +/* Only one front/back connection supported. */
> > +static struct xenbus_device *pvcalls_front_dev;
> > +static atomic_t pvcalls_refcount;
> > +
> > +/* first increment refcount, then proceed */
> > +#define pvcalls_enter() {   \
> > +   atomic_inc(&pvcalls_refcount);  \
> > +}
> > +
> > +/* first complete other operations, then decrement refcount */
> > +#define pvcalls_exit() {\
> > +   atomic_dec(&pvcalls_refcount);  \
> > +}
> > +
> > +static irqreturn_t pvcalls_front_event_handler(int irq, void *dev_id)
> > +{
> > +   return IRQ_HANDLED;
> > +}
> > +
> > +static void pvcalls_front_free_map(struct pvcalls_bedata *bedata,
> > +  struct sock_mapping *map)
> > +{
> > +}
> > +
> >  static const struct xenbus_device_id pvcalls_front_ids[] = {
> > { "pvcalls" },
> > { "" }
> > @@ -27,6 +67,33 @@
> >  
> >  static int pvcalls_front_remove(struct xenbus_device *dev)
> >  {
> > +   struct pvcalls_bedata *bedata;
> > +   struct sock_mapping *map = NULL, *n;
> > +
> > +   bedata = dev_get_drvdata(&pvcalls_front_dev->dev);
> > +   dev_set_drvdata(&dev->dev, NULL);
> > +   pvcalls_front_dev = NULL;
> > +   if (bedata->irq >= 0)
> > +   unbind_from_irqhandler(bedata->irq, dev);
> > +
> > +   smp_mb();
> > +   while (atomic_read(&pvcalls_refcount) > 0)
> > +   cpu_relax();
> > +   list_for_each_entry_safe(map, n, &bedata->socket_mappings, list) {
> > +   pvcalls_front_free_map(bedata, map);
> > +   kfree(map);
> > +   }
> > +   list_for_each_entry_safe(map, n, &bedata->socketpass_mappings, list) {
> > +   spin_lock(&bedata->socket_lock);
> > +   list_del_init(&map->list);
> > +   spin_unlock(&bedata->socket_lock);
> > +   kfree(map);
> 
> Why do you re-init the entry if you are freeing it?

Fair enough, I'll just list_del.


> And do you really
> need the locks around it? This looks similar to the case we've discussed
> for other patches --- if we are concerned that someone may grab this
> entry then something must be wrong.
> 
> (Sorry, this must have been here in earlier versions but I only now
> noticed it.)

Yes, you are right, it is already protected by the global refcount, I'll
remove.


> > +   }
> > +   if (bedata->ref >= 0)
> > +   gnttab_end_foreign_access(bedata->ref, 0, 0);
> > +   kfree(bedata->ring.sring);
> > +   kfree(bedata);
> > +   xenbus_switch_state(dev, XenbusStateClosed);
> > return 0;
> >  }
> >  
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] linux-arm-xen branch, commit access, etc.

2017-10-23 Thread Stefano Stabellini
On Fri, 20 Oct 2017, Julien Grall wrote:
>   Julien, do you think we need to keep a special linux tree around for Xen
>   on ARM testing in OSSTest or we can start using vanilla kernel releases?
>   I would love to get rid of it, if you know of any reasons why we have to
>   keep it, this is the time to speak :-)
> 
> 
> I think it would be better to keep it around. Some platforms may be available
> before the code is merged.

Sure.


Ian,

let's create a /arm/linux.git tree on xenbits where both Julien and I
can push. The idea is that we'll try to use vanilla kernel releases but
we'll keep it around just in case we'll need special patches for
hardware support in the future. If it turns out that we don't actually
need it, we can get rid of it in a year or two.

We'll initialize /arm/linux.git based on the current linux-arm-xen
branch. /arm/linux.git will replace linux-arm-xen in OSSTest.

Sounds good?

Thanks,

Stefano

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC 0/4] TEE mediator framework + OP-TEE mediator

2017-10-23 Thread Stefano Stabellini
On Mon, 23 Oct 2017, Volodymyr Babchuk wrote:
> > >This is a lot of work. It requires changes in generic parts of XEN.
> > >I fear it will be very hard to upstream such changes, because no one
> > >sees an immediate value in them. What do you think, what are my chances
> > >of upstreaming this?
> > 
> > It is fairly annoying to see you justifying back most of this thread with
> > "no one sees an immediate value in them".
> >
> I am not the only maintainer in Xen, so I effectively can't promise whether
> > it is going to be upstreamed. But I believe the community has been very
> > supportive so far, a lot of discussions happened (see [2]) because of the
> > OP-TEE support. So what more do you expect from us?
> I'm sorry, I didn't mean to offend you or anyone else. You guys can
> be harsh sometimes, but I really appreciate the help provided by the
> community. And I certainly don't ask you for any guarantees or
> anything of that sort.
> 
> I'm just bothered by the amount of required work and by the upstreaming
> process. But this is not a strong argument against mediators in
> stubdoms, I think :)
> 
> Currently I'm developing virtualization support in OP-TEE, so in the
> meantime we'll have plenty of time to discuss mediators and the stubdomain
> approach (if you have time). To test this feature in OP-TEE I'm
> extending this RFC, making optee.c look like a full-scale mediator.
> I need to do this anyway, to test OP-TEE. When I finish, I can
> show you what a mediator can look like. Maybe this will persuade you
> toward one approach or the other.

Hi Volodymyr,

We really appreciate your work and we care about your use-case. We
really want this feature to be successful for you (and everybody else).

Sorry if it doesn't always come out this way, but email conversations
can sound "harsh" sometimes. However, keep in mind that both Julien and
I are completely on your side on this work item. Please keep up the
good work :-)


> > >Approach in this RFC is much simpler. Few hooks in arch code + additional
> > >subsystem, which can be easily turned off.
> > 
> > Stefano do you have any opinion on this discussion?

We need to start somewhere, and I think this series could be a decent
starting point.

I think it is OK to have a small SMC filter in Xen. What Volodymyr is
suggesting looks reasonable for now. As the code grows, we might find
ourselves in the situation where we'll have to introduce stubdoms for
TEE virtualization/emulation, and I think that's OK. Possibly, we'll
have a "fast path" in Xen, only for filtering and small manipulations,
and a "slow path" in the stubdom when more complex actions are
necessary.
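
To make the "fast path" idea concrete, a minimal sketch of such a filter might
look like the code below. The OPTEE_SMC_* constants follow the OP-TEE SMC ABI
headers as I remember them, and should be treated as assumptions;
tee_smc_allowed() and forward_to_firmware() are placeholders, not functions
from this RFC.

/* Hypothetical SMC fast-path filter: only whitelisted function IDs are
 * forwarded to the firmware, everything else is refused in Xen. */
static const uint32_t tee_smc_whitelist[] = {
    OPTEE_SMC_CALLS_UID,
    OPTEE_SMC_CALLS_COUNT,
    OPTEE_SMC_CALL_WITH_ARG,
};

static bool tee_smc_allowed(uint32_t fid)
{
    for ( unsigned int i = 0; i < ARRAY_SIZE(tee_smc_whitelist); i++ )
        if ( tee_smc_whitelist[i] == fid )
            return true;
    return false;
}

static void tee_handle_smc(struct cpu_user_regs *regs)
{
    uint32_t fid = regs->r0;

    if ( !tee_smc_allowed(fid) )
    {
        /* Unknown or unsupported call: tell the guest it is not there. */
        regs->r0 = OPTEE_SMC_RETURN_UNKNOWN_FUNCTION;
        return;
    }

    forward_to_firmware(regs);   /* real SMC, or hand-off to a stubdom */
}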

For this series, I think we need a way to specify which domains can talk
to TEE, so that we can only allow it for a specific subset of DomUs. I
would probably use XSM for that.

For the long term, I think both Volodymyr and us as maintainers need to
be prepared to introduce stubdoms for TEE emulation. It will most
probably happen as the feature-set grows. However, this small TEE
framework in Xen could still be useful, and could be the basis for
forwarding TEE requests to a stubdom for evaluation: maybe not all calls
need to be forwarded to the stubdom, some of them could go directly to
the firmware and this is where this series comes in.

What do you think?

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [examine test] 115152: FAIL

2017-10-23 Thread osstest service owner
flight 115152 examine real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115152/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 examine-arndale-lakeside  2 hosts-allocate broken REGR. vs. 113775
 examine-arndale-westfield 2 hosts-allocate broken REGR. vs. 113775
 examine-merlot1   2 hosts-allocate broken REGR. vs. 113775
 examine-arndale-metrocentre   2 hosts-allocate broken REGR. vs. 113775

baseline version:
 flight   113775

jobs:
 examine-baroque0 pass
 examine-baroque1 pass
 examine-arndale-bluewaterpass
 examine-cubietruck-braquepass
 examine-chardonnay0  pass
 examine-chardonnay1  pass
 examine-elbling0 pass
 examine-elbling1 pass
 examine-fiano0   pass
 examine-fiano1   pass
 examine-cubietruck-gleizes   pass
 examine-godello0 pass
 examine-godello1 pass
 examine-huxelrebe0   pass
 examine-huxelrebe1   pass
 examine-italia0  pass
 examine-italia1  pass
 examine-arndale-lakeside fail
 examine-merlot0  pass
 examine-merlot1  fail
 examine-arndale-metrocentre  fail
 examine-cubietruck-metzinger pass
 examine-nobling0 pass
 examine-nobling1 pass
 examine-nocera0  pass
 examine-nocera1  pass
 examine-cubietruck-picasso   pass
 examine-pinot0   pass
 examine-pinot1   pass
 examine-rimava0  pass
 examine-rimava1  pass
 examine-arndale-westfieldfail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Push not applicable.


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [xen-unstable test] 115132: regressions - FAIL

2017-10-23 Thread osstest service owner
flight 115132 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115132/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop    fail REGR. vs. 114644
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 114644
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop   fail REGR. vs. 114644

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-amd64 16 guest-localmigrate/x10 fail pass 
in 115087

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check  fail like 114644
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop         fail like 114644
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop          fail like 114644
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop         fail like 114644
 test-armhf-armhf-libvirt     14 saverestore-support-check  fail like 114644
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check  fail like 114644
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start             fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start               fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check      fail never pass
 test-amd64-i386-libvirt      13 migrate-support-check      fail never pass
 test-amd64-amd64-libvirt     13 migrate-support-check      fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check      fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check     fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check      fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl          13 migrate-support-check      fail never pass
 test-armhf-armhf-xl          14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check    fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-xsm      13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-xsm      14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check  fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check      fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop          fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check      fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install     fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install     fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install    fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install    fail never pass

version targeted for testing:
 xen  8e77dabc58c4b6c747dfb4b948551147905a7840
baseline version:
 xen  24fb44e971a62b345c7b6ca3c03b454a1e150abe

Last test of basis   114644  2017-10-17 10:49:11 Z    6 days
Failing since        114670  2017-10-18 05:03:38 Z    5 days    8 attempts
Testing same since   114808  2017-10-20 14:56:19 Z    3 days    6 attempts


People who touched revisions under test:
  Andrew Cooper 
  Anthony PERARD 
  David Esler 
  George Dunlap 
  Ian Jackson 
  Jan Beulich 
  Julien Grall 
  Roger Pau Monné 
  Stefano Stabellini 
  Tim Deegan 
  Wei Liu 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm   

[Xen-devel] [qemu-mainline test] 115141: regressions - FAIL

2017-10-23 Thread osstest service owner
flight 115141 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115141/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-xsm                6 xen-build                fail REGR. vs. 114507
 build-i386                    6 xen-build                fail REGR. vs. 114507
 build-amd64-xsm               6 xen-build                fail REGR. vs. 114507
 build-amd64                   6 xen-build                fail REGR. vs. 114507
 build-armhf-xsm               6 xen-build                fail REGR. vs. 114507
 build-armhf                   6 xen-build                fail REGR. vs. 114507

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win10-i386  1 build-check(1)  blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl            1 build-check(1)   blocked  n/a
 build-i386-libvirt            1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 

Re: [Xen-devel] [PATCH RFC v2] Add SUPPORT.md

2017-10-23 Thread Stefano Stabellini
On Mon, 23 Oct 2017, Andrew Cooper wrote:
> >>> +### x86 PV/Event Channels
> >>> +
> >>> +Limit: 131072
> >> Why do we call out event channel limits but not grant table limits? 
> >> Also, why is this x86?  The 2l and fifo ABIs are arch agnostic, as far
> >> as I am aware.
> > Sure, but I'm pretty sure that ARM guests don't (perhaps cannot?) use PV
> > event channels.
> 
> This is mixing the hypervisor API/ABI capabilities with the actual
> abilities of guests (which is also different to what Linux would use in
> the guests).
> 
> ARM guests, as well as x86 HVM with APICV (configured properly) will
> actively want to avoid the guest event channel interface, because its
> slower.
> 
> This solitary evtchn limit serves no useful purpose IMO.

Just a clarification: ARM guests have event channels. They are delivered
to the guest using a single PPI (per processor interrupt). I am pretty
sure that the limit on the number of event channels on ARM is the same as on
x86, because they both depend on the same FIFO ABI.
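
For reference, the 131072 figure quoted in the draft matches the FIFO ABI
definition. A sketch of the arithmetic follows; the constant names are as I
recall them from the public event_channel header, so treat them as an
assumption rather than a quotation.

/* Each event word holds a 17-bit LINK field, so the FIFO ABI can address
 * 2^17 = 131072 event channels, independent of the architecture. */
#define EVTCHN_FIFO_LINK_BITS    17
#define EVTCHN_FIFO_NR_CHANNELS  (1 << EVTCHN_FIFO_LINK_BITS)   /* 131072 */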

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC 0/4] TEE mediator framework + OP-TEE mediator

2017-10-23 Thread Volodymyr Babchuk
On Mon, Oct 23, 2017 at 05:59:44PM +0100, Julien Grall wrote:

> Hi Volodymyr,
Hi Julien,

> Let me begin the e-mail with I am not totally adversed to putting the TEE
> mediator in Xen. At the moment, I am trying to understand the whole picture.
Thanks for the clarification. This is really reassuring :)
For my part, I'm not totally against TEE mediators in stubdoms. I'm only
concerned about the required effort.

> On 20/10/17 18:37, Volodymyr Babchuk wrote:
> >On Fri, Oct 20, 2017 at 02:11:14PM +0100, Julien Grall wrote:
> >>On 17/10/17 16:59, Volodymyr Babchuk wrote:
> >>>On Mon, Oct 16, 2017 at 01:00:21PM +0100, Julien Grall wrote:
> On 11/10/17 20:01, Volodymyr Babchuk wrote:
> >I want to present TEE mediator, that was discussed earlier ([1]).
> >
> >I selected design with built-in mediators. This is easiest way,
> >it removes many questions, it is easy to implement and maintain
> >(at least I hope so).
> 
> Well, it may close the technical questions but still leave the security
> impact unanswered. I would have appreciated a summary of each approach and
> explain the pros/cons.
> >>>This is the most secure way also. In terms of trust between guests and
> >>>Xen at least. I've worked with the OP-TEE guys mostly, so when I hear about
> >>>"security", my first thoughts are "Can TEE OS trust to XEN as a
> >>>mediator? Can TEE client trust to XEN as a mediator?". And with
> >>>current approach answer is "yes, they can, especially if XEN is a part
> >>>of a chain of trust".
> >>>
> >>>But you probably wanted to ask "Can guest compromise whole system by
> >>>using TEE mediator or TEE OS?". This is an interesting question.
> >>>First let's discuss requirements for a TEE mediator. So, mediator
> >>>should be able to:
> >>>
> >>>  * Receive request to handle trapped SMC. This request should include
> >>>user registers + some information about guest (at least domain id).
> >>>  * Pin/unpin domain memory pages.
> >>>  * Map domain memory pages into own address space with RW access.
> >>>  * Issue real SMC to a TEE.
> >>>  * Receive information about guest creation and destruction.
> >>>  * (Probably) inject IRQs into a domain (this can be not a requester 
> >>> domain,
> >>>but some other domain, that also called to TEE).
> >>>
> >>>This is a minimal list of requirements. I think, this should be enough to
> >>>implement mediator for OP-TEE. But I can't say for sure for other TEEs.
> >>>
> >>>Let's consider possible approaches:
> >>>
> >>>1. Mediator right in XEN, works at EL2.
> >>>Pros:
> >>> * Mediator can use all XEN APIs
> >>> * As mediator resides in XEN, it can be checked together with XEN
> >>>   for a validity (trusted boot).
> >>> * Mediator is initialized before Dom0. Dom0 can work with a TEE.
> >>> * No extra context switches, no special ABI between XEN and mediator.
> >>>
> >>>Cons:
> >>> * Because it lives in EL2, it can compromise whole hypervisor,
> >>>   if there is a security bug in mediator code.
> >>> * No support for closed source TEEs.
> >>
> >>Another cons is you assume TEE API is fully stable and will not change.
> >>Imagine a new function is added, or a vendor decided to hence with a new set
> >>of API. How will you know Xen is safe to use it?
> >With whitelisting, as you correctly suggested below. XEN will process
> >only known requests. Anything that looks unfamiliar should be rejected.
> 
> Let's imagine the guest is running on a platform with a newer version of
> TEE. This guest will probe the version of OP-TEE and knows the new function
> is present.
This request will be handled by the mediator. At the moment, the OP-TEE client
does not use versions; instead it uses capability flags. So the mediator should
filter out all unknown caps. This will force the guest to use only the
supported subset of features.
If, in the future, the client relies on versions (e.g. due to a dramatic
protocol change), the mediator can either downgrade the version or refuse to
work at all.
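
As a concrete sketch of that filtering (not code from the RFC:
MEDIATOR_SUPPORTED_CAPS and forward_to_firmware() are invented for the
example, while the OPTEE_SMC_* names follow the OP-TEE SMC ABI as I remember
it, so treat them as assumptions):

/* Hypothetical handling of OPTEE_SMC_EXCHANGE_CAPABILITIES in the
 * mediator: expose only the capabilities the mediator can virtualise. */
#define MEDIATOR_SUPPORTED_CAPS  OPTEE_SMC_SEC_CAP_DYNAMIC_SHM

static void handle_exchange_capabilities(struct cpu_user_regs *regs)
{
    /* Issue the real SMC to OP-TEE first... */
    forward_to_firmware(regs);

    /* ...then hide every capability bit we cannot emulate, so the guest
     * downgrades itself to the supported subset. */
    if ( regs->r0 == OPTEE_SMC_RETURN_OK )
        regs->r1 &= MEDIATOR_SUPPORTED_CAPS;
}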

> If as you said Xen is using a whitelist, this means the hypervisor will
> return unimplemented.
> How do you expect the guest to behave in that case?
As I said above, the guest should downgrade to the supported feature subset.

> Note that I think a whitelist is a good idea, but I think we need to think a
> bit more about the implication.
At least for now, OP-TEE is designed in such a way that it is compatible in both
directions. I'm sure that future OP-TEE development will be done with
virtualization support in mind, so it will not break existing setups.

> >
> >>If it is not safe, this means you have a whitelist solution and therefore
> >>tie Xen to a specific OP-TEE version. So if you need to use a new function
> >>you would need to upgrade Xen making the code of using new version
> >>potentially high.
> >Yes, any ABI change between OP-TEE and its clients will require a mediator
> >upgrade. Luckily, OP-TEE keeps its ABI backward-compatible, so if you
> >install an old Xen and a new OP-TEE, OP-TEE 

Re: [Xen-devel] [PATCH v4 4/5] xentrace: enable per-VCPU extratime flag for RTDS

2017-10-23 Thread Meng Xu
On Tue, Oct 17, 2017 at 4:10 AM, Dario Faggioli  wrote:
> On Wed, 2017-10-11 at 14:02 -0400, Meng Xu wrote:
>> Change repl_budget event output for xentrace formats and xenalyze
>>
>> Signed-off-by: Meng Xu 
>>
> I'd say:
>
> Reviewed-by: Dario Faggioli 

Hi guys,

Just a reminder, we may need this patch for the work-conserving RTDS
scheduler in Xen 4.10.

I saw that Julien sent out rc2 today, which does not include this patch.

Thanks and best regards,

Meng

---
Meng Xu
Ph.D. Candidate in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [xen-unstable-smoke test] 115156: tolerable all pass - PUSHED

2017-10-23 Thread osstest service owner
flight 115156 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115156/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check      fail never pass
 test-armhf-armhf-xl          13 migrate-support-check      fail never pass
 test-armhf-armhf-xl          14 saverestore-support-check  fail never pass

version targeted for testing:
 xen  1f2c7894dfe3d52f33655de202bd474999a1637b
baseline version:
 xen  e1bc61af7621ee07f7ba03afcfa4b6fa54fbfb2a

Last test of basis   115148  2017-10-23 14:17:52 Z    0 days
Testing same since   115156  2017-10-23 17:01:28 Z    0 days    1 attempts


People who touched revisions under test:
  Ian Jackson 
  Wei Liu 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable-smoke
+ revision=1f2c7894dfe3d52f33655de202bd474999a1637b
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
 export PERLLIB=.:.
 PERLLIB=.:.
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push xen-unstable-smoke 
1f2c7894dfe3d52f33655de202bd474999a1637b
+ branch=xen-unstable-smoke
+ revision=1f2c7894dfe3d52f33655de202bd474999a1637b
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
 export PERLLIB=.:.:.
 PERLLIB=.:.:.
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
+++ export PERLLIB=.:.:.:.
+++ PERLLIB=.:.:.:.
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable-smoke
+ qemuubranch=qemu-upstream-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ '[' xqemu-upstream-unstable = x ']'
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable-smoke
+ prevxenbranch=xen-4.9-testing
+ '[' x1f2c7894dfe3d52f33655de202bd474999a1637b = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/xtf.git
++ : osst...@xenbits.xen.org:/home/xen/git/xtf.git
++ : git://xenbits.xen.org/xtf.git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/rumprun.git
++ : git://git.seabios.org/seabios.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/seabios.git
++ : https://github.com/tianocore/edk2.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/ovmf.git
++ : 

Re: [Xen-devel] [PATCH v12 05/11] x86/mm: add HYPERVISOR_memory_op to acquire guest resources

2017-10-23 Thread Julien Grall



On 20/10/17 11:10, Paul Durrant wrote:

-Original Message-
From: Julien Grall [mailto:julien.gr...@linaro.org]
Sent: 20 October 2017 11:00
To: Paul Durrant ; 'Jan Beulich'

Cc: Julien Grall ; Andrew Cooper
; George Dunlap
; Ian Jackson ; Roger
Pau Monne ; Wei Liu ; Stefano
Stabellini ; xen-de...@lists.xenproject.org; Konrad
Rzeszutek Wilk ; Daniel De Graaf
; Tim (Xen.org) 
Subject: Re: [Xen-devel] [PATCH v12 05/11] x86/mm: add
HYPERVISOR_memory_op to acquire guest resources

Hi Paul,

On 20/10/17 09:26, Paul Durrant wrote:

-Original Message-
From: Jan Beulich [mailto:jbeul...@suse.com]
Sent: 20 October 2017 07:25
To: Julien Grall 
Cc: Julien Grall ; Andrew Cooper
; George Dunlap
; Ian Jackson ; Paul
Durrant ; Roger Pau Monne
; Wei Liu ; Stefano Stabellini
; xen-de...@lists.xenproject.org; Konrad

Rzeszutek

Wilk ; Daniel De Graaf

;

Tim (Xen.org) 
Subject: Re: [Xen-devel] [PATCH v12 05/11] x86/mm: add
HYPERVISOR_memory_op to acquire guest resources


On 19.10.17 at 18:21,  wrote:

Looking a bit more at the resource you can acquire from this hypercall.
Some of them are allocated using alloc_xenheap_page() so not assigned

to

a domain.

So I am not sure how you can expect a function set_foreign_p2m_entry

to

take reference in that case.


Hmm, with the domain parameter added, DOMID_XEN there (for
Xen heap pages) could identify no references to be taken, if that
was really the intended behavior in that case. However, even for
Xen heap pages life time tracking ought to be done - it is for a
reason that share_xen_page_with_guest() assigns the target
domain as the owner of such pages, as that allows get_page() to
succeed for them.
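
As a purely illustrative sketch (hypothetical function name, not the actual
set_foreign_p2m_entry() implementation), the reference-counting pattern
described above could look like this, assuming the page already has an owner
as share_xen_page_with_guest() arranges for Xen heap pages:

    /*
     * Hypothetical sketch only, not real Xen code: a reference is taken
     * before the page is entered into the foreign p2m and dropped again
     * when the mapping is removed, so the page cannot be freed while a
     * foreign mapping exists.
     */
    static int sketch_map_guest_resource(struct domain *d, unsigned long gfn,
                                         struct page_info *page)
    {
        /*
         * get_page() only succeeds if the page has an owner; Xen heap pages
         * shared via share_xen_page_with_guest() are owned by the target
         * domain, which is what allows this to work for them as well.
         */
        if ( !get_page(page, page_get_owner(page)) )
            return -EINVAL;

        /* ... insert the gfn -> page mapping into d's p2m here ... */

        /* The teardown path would issue the matching put_page(page). */
        return 0;
    }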





Hi Julien,


So, nothing I'm doing here is making anything worse, right? Grant tables are

assigned to the guest, and IOREQ server pages are allocated with
alloc_domheap_page() so nothing is anonymous.

I don't think grant tables are assigned to the guest today. They are
allocated using xenheap_pages() and I can't find
share_xen_page_with_guest().


The guest would not be able to map them if they were not assigned in some way!


Do you mean for PV? For HVM/PVH, we don't check whether the page is 
assigned (see gnttab_map_frame).



See the code block at 
http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/common/grant_table.c;hb=HEAD#l1716
It calls gnttab_create_shared_page() which is what calls through to 
share_xen_page_with_guest().


Thank you for the link, I will have a look.





Anyway, I discussed this with Stefano. set_foreign_p2m_entry is
going to be left unimplemented on Arm until someone has time to implement
the function correctly.



That makes sense. Do you still have any issues with this patch apart from the 
cosmetic ones you spotted in the header?


No. Although, may I request adding a comment in the ARM helpers about 
the reference counting?


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [linux-next test] 115131: regressions - FAIL

2017-10-23 Thread osstest service owner
flight 115131 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115131/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-examine  7 reboot   fail REGR. vs. 114658
 test-armhf-armhf-libvirt-xsm  5 host-ping-check-native   fail REGR. vs. 114682
 test-armhf-armhf-xl-arndale   7 xen-boot fail REGR. vs. 114682
 build-amd64-pvops 6 kernel-build fail REGR. vs. 114682
 test-armhf-armhf-xl-xsm   7 xen-boot fail REGR. vs. 114682
 test-armhf-armhf-xl-credit2   7 xen-boot fail REGR. vs. 114682
 test-armhf-armhf-xl-vhd   7 xen-boot fail REGR. vs. 114682
 test-armhf-armhf-libvirt-raw  7 xen-boot fail REGR. vs. 114682
 test-armhf-armhf-libvirt  7 xen-boot fail REGR. vs. 114682
 test-armhf-armhf-xl   7 xen-boot fail REGR. vs. 114682
 test-armhf-armhf-xl-cubietruck  7 xen-boot   fail REGR. vs. 114682
 test-armhf-armhf-xl-multivcpu  7 xen-bootfail REGR. vs. 114682

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds  7 xen-boot fail REGR. vs. 114682

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-amd64-rumprun-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked 
n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-examine  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 114682
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 114682
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux  36ef71cae353f88fd6e095e2aaa3e5953af1685d
baseline version:
 linux  ebe6e90ccc6679cb01d2b280e4b61e6092d4bedb

Last test of basis  (not found) 
Failing since   (not found) 
Testing same since   114796  2017-10-20 09:26:55 Z  

Re: [Xen-devel] [PATCH RFC v2] Add SUPPORT.md

2017-10-23 Thread Andrew Cooper
On 23/10/17 17:22, George Dunlap wrote:
> On 09/11/2017 06:53 PM, Andrew Cooper wrote:
>> On 11/09/17 18:01, George Dunlap wrote:
>>> +### x86/RAM
>>> +
>>> +Limit, x86: 16TiB
>>> +Limit, ARM32: 16GiB
>>> +Limit, ARM64: 5TiB
>>> +
>>> +[XXX: Andy to suggest what this should say for x86]
>> The limit for x86 is either 16TiB or 123TiB, depending on
>> CONFIG_BIGMEM.  CONFIG_BIGMEM is exposed via menuconfig without
>> XEN_CONFIG_EXPERT, so falls into at least some kind of support statement.
>>
>> As for practical limits, I don't think it's reasonable to claim anything
>> which we can't test.  What are the specs in the MA colo?
> At the moment the "Limit" tag specifically says that it's theoretical
> and may not work.
>
> We could add another tag, "Limit-tested", or something like that.
>
> Or, we could simply have the Limit-security be equal to the highest
> amount which has been tested (either by osstest or downstreams).
>
> For simplicity's sake I'd go with the second one.

I think it would be very helpful to distinguish the upper limits from
the supported limits.  There will be a large difference between the two.

Limit-Theoretical and Limit-Supported ?

In all cases, we should identify why the limit is where it is, even if
that is only "maximum people have tested to".  Other

>
> Shall I write an e-mail with a more direct query for the maximum amounts
> of various numbers tested by the XenProject (via osstest), Citrix, SuSE,
> and Oracle?

For XenServer,
http://docs.citrix.com/content/dam/docs/en-us/xenserver/current-release/downloads/xenserver-config-limits.pdf

>> [root@fusebot ~]# python
>> Python 2.7.5 (default, Nov 20 2015, 02:00:19)
>> [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2
>> Type "help", "copyright", "credits" or "license" for more information.
> from xen.lowlevel.xc import xc as XC
> xc = XC()
> xc.domain_create()
>> 1
> xc.domain_max_vcpus(1, 8192)
>> 0
> xc.domain_create()
>> 2
> xc.domain_max_vcpus(2, 8193)
>> Traceback (most recent call last):
>>   File "", line 1, in 
>> xen.lowlevel.xc.Error: (22, 'Invalid argument')
>>
>> Trying to shut such a domain down however does tickle a host watchdog
>> timeout as the for_each_vcpu() loops in domain_kill() are very long.
> For now I'll set 'Limit' to 8192, and 'Limit-security' to 512.
> Depending on what I get for the "test limit" survey I may adjust it
> afterwards.

The largest production x86 server I am aware of is a Skylake-S system
with 496 threads.  512 is not a plausibly-tested number.

>
>>> +Limit, x86 HVM: 128
>>> +Limit, ARM32: 8
>>> +Limit, ARM64: 128
>>> +
>>> +[XXX Andrew Cooper: Do want to add "Limit-Security" here for some of 
>>> these?]
>> 32 for each.  64 vcpu HVM guests can exert enough p2m lock pressure to
>> trigger a 5 second host watchdog timeout.
> Is that "32 for x86 PV and x86 HVM", or "32 for x86 HVM and ARM64"?  Or
> something else?

The former.  I'm not qualified to comment on any of the ARM limits.

There are several non-trivial for_each_vcpu() loops in the domain_kill
path which aren't handled by continuations.  ISTR 128 vcpus is enough to
trip a watchdog timeout when freeing pagetables.
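
For reference, the usual Xen idiom for keeping such a loop preemptible is a
periodic hypercall_preempt_check() combined with an -ERESTART continuation;
a rough sketch only, not the actual domain_kill() code:

    /* Sketch: make a long per-vCPU teardown loop preemptible so it
     * cannot hold the CPU long enough to trip the watchdog. */
    for_each_vcpu ( d, v )
    {
        /* ... expensive per-vCPU teardown work ... */

        if ( hypercall_preempt_check() )
            return -ERESTART;   /* the operation is re-issued and resumes later */
    }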

>
>>> +### Virtual RAM
>>> +
>>> +Limit, x86 PV: >1TB
>>> +Limit, x86 HVM: 1TB
>>> +Limit, ARM32: 16GiB
>>> +Limit, ARM64: 1TB
>> There is no specific upper bound on the size of PV or HVM guests that I
>> am aware of.  1.5TB HVM domains definitely work, because that's what we
>> test and support in XenServer.
> Are there limits for 32-bit guests?  There's some complicated limit
> having to do with the m2p, right?

32bit PV guests need to live in MFNs under the 128G boundary, despite
the fact their p2m handling supports 4TB of RAM.

The PVinPVH plan will lift this limitation, at which point it will be
possible to have many 128G 32bit PV(inPVH) VMs on a large system. 
(OTOH, I'm not aware of any 32bit PV guest which itself supports more
than 64G of RAM, other than perhaps SLES 11.)

>
>>> +
>>> +### x86 PV/Event Channels
>>> +
>>> +Limit: 131072
>> Why do we call out event channel limits but not grant table limits? 
>> Also, why is this x86?  The 2l and fifo ABIs are arch agnostic, as far
>> as I am aware.
> Sure, but I'm pretty sure that ARM guests don't (perhaps cannot?) use PV
> event channels.

This is mixing the hypervisor API/ABI capabilities with the actual
abilities of guests (which is also different to what Linux would use in
the guests).

ARM guests, as well as x86 HVM with APICV (configured properly) will
actively want to avoid the guest event channel interface, because its
slower.

This solitary evtchn limit serves no useful purpose IMO.

>
>>> +## High Availability and Fault Tolerance
>>> +
>>> +### Live Migration, Save & Restore
>>> +
>>> +Status, x86: Supported
>> With caveats.  From docs/features/migration.pandoc
> This would extend the meaning of "caveats" from "when it's not security
> supported" to "when it doesn't work"; which is 

[Xen-devel] Xen 4.10 RC2

2017-10-23 Thread Julien Grall

Hi all,

Xen 4.10 RC2 is tagged. You can check that out from xen.git:

  git://xenbits.xen.org/xen.git 4.10.0-rc2

For your convenience there is also a tarball at:
https://downloads.xenproject.org/release/xen/4.10.0-rc2/xen-4.10.0-rc2.tar.gz

And the signature is at:
https://downloads.xenproject.org/release/xen/4.10.0-rc2/xen-4.10.0-rc2.tar.gz.sig

Please send bug reports and test reports to
xen-de...@lists.xenproject.org. When sending bug reports, please CC
relevant maintainers and me (julien.gr...@linaro.org).

Thanks,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] xenalyze: fix compilation

2017-10-23 Thread Julien Grall

Hi,

On 23/10/17 17:32, George Dunlap wrote:

On 10/23/2017 05:28 PM, Roger Pau Monne wrote:

Recent changes in xenalyze introduced INT_MIN without also adding the
required header, fix this by adding the header.

Signed-off-by: Roger Pau Monné 


Acked-by: George Dunlap 


Release-acked-by: Julien Grall 

Cheers,




---
Cc: George Dunlap 
Cc: Ian Jackson 
Cc: Wei Liu 
Cc: Julien Grall 
---
This should be accepted for 4.10 because it's a build bug fix, with no
functional change at all.
---
  tools/xentrace/xenalyze.c | 1 +
  1 file changed, 1 insertion(+)

diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index 79bdba7fed..5768b54f86 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -23,6 +23,7 @@
  #include 
  #include 
  #include 
+#include <limits.h>
  #include 
  #include 
  #include 





--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC 0/4] TEE mediator framework + OP-TEE mediator

2017-10-23 Thread Julien Grall

Hi Volodymyr,

Let me begin the e-mail by saying I am not totally averse to putting the 
TEE mediator in Xen. At the moment, I am trying to understand the whole 
picture.


On 20/10/17 18:37, Volodymyr Babchuk wrote:

On Fri, Oct 20, 2017 at 02:11:14PM +0100, Julien Grall wrote:

On 17/10/17 16:59, Volodymyr Babchuk wrote:

On Mon, Oct 16, 2017 at 01:00:21PM +0100, Julien Grall wrote:

On 11/10/17 20:01, Volodymyr Babchuk wrote:

I want to present TEE mediator, that was discussed earlier ([1]).

I selected the design with built-in mediators. This is the easiest way;
it removes many questions, and it is easy to implement and maintain
(at least I hope so).


Well, it may close the technical questions but still leave the security
impact unanswered. I would have appreciated a summary of each approach
explaining the pros/cons.

This is the most secure way also, in terms of trust between guests and
Xen at least. I've worked with OP-TEE guys mostly, so when I hear about
"security", my first thoughts are "Can the TEE OS trust XEN as a
mediator? Can a TEE client trust XEN as a mediator?". And with the
current approach the answer is "yes, they can, especially if XEN is a part
of a chain of trust".

But you probably wanted to ask "Can a guest compromise the whole system by
using the TEE mediator or the TEE OS?". This is an interesting question.
First let's discuss requirements for a TEE mediator. So, a mediator
should be able to:

  * Receive request to handle trapped SMC. This request should include
user registers + some information about guest (at least domain id).
  * Pin/unpin domain memory pages.
  * Map domain memory pages into its own address space with RW access.
  * Issue real SMC to a TEE.
  * Receive information about guest creation and destruction.
  * (Probably) inject IRQs into a domain (this may not be the requesting domain,
but some other domain that also called into the TEE).

This is a minimal list of requirements. I think this should be enough to
implement a mediator for OP-TEE. But I can't say for sure for other TEEs.
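
As a purely illustrative sketch (the type and member names are invented for
this example, not an existing Xen interface), a built-in mediator could be
plugged in as a set of callbacks matching the requirements above:

    /* Hypothetical interface sketch, not existing Xen code. */
    struct tee_mediator_ops {
        /* Handle a trapped SMC: guest registers plus the calling domain. */
        int  (*handle_smc)(struct domain *d, struct cpu_user_regs *regs);

        /* Lifecycle notifications, so per-domain TEE state can be set up
         * and torn down when guests are created and destroyed. */
        int  (*domain_create)(struct domain *d);
        void (*domain_destroy)(struct domain *d);
    };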

Let's consider possible approaches:

1. Mediator right in XEN, works at EL2.
Pros:
 * Mediator can use all XEN APIs
 * As the mediator resides in XEN, it can be checked together with XEN
   for validity (trusted boot).
 * Mediator is initialized before Dom0. Dom0 can work with a TEE.
 * No extra context switches, no special ABI between XEN and mediator.

Cons:
 * Because it lives in EL2, it can compromise the whole hypervisor
   if there is a security bug in the mediator code.
 * No support for closed source TEEs.


Another con is that you assume the TEE API is fully stable and will not change.
Imagine a new function is added, or a vendor decides to come up with a new set
of APIs. How will you know it is safe for Xen to use it?

With whitelisting, as you correctly suggested below. XEN will process
only known requests. Anything that looks unfamiliar should be rejected.


Let's imagine the guest is running on a platform with a newer version of 
TEE. This guest will probe the version of OP-TEE and knows the new 
function is present.


If as you said Xen is using a whitelist, this means the hypervisor will 
return unimplemented.


How do you expect the guest to behave in that case?

Note that I think a whitelist is a good idea, but I think we need to 
think a bit more about the implication.
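
To make the behaviour under discussion concrete, here is a minimal
self-contained sketch of such a whitelist: known function IDs are forwarded,
anything unfamiliar gets a "not implemented" answer. The IDs and the return
value are made up for illustration and are not actual OP-TEE ABI values.

    #include <stdint.h>
    #include <stdbool.h>

    /* Made-up values, for illustration only. */
    #define SKETCH_SMC_NOT_IMPLEMENTED  ((uint64_t)-1)

    static const uint32_t whitelisted_func_ids[] = {
        0x32000001,  /* e.g. "get version" */
        0x32000004,  /* e.g. "exchange capabilities" */
    };

    static bool smc_is_whitelisted(uint32_t func_id)
    {
        for ( unsigned int i = 0;
              i < sizeof(whitelisted_func_ids) / sizeof(whitelisted_func_ids[0]);
              i++ )
            if ( whitelisted_func_ids[i] == func_id )
                return true;

        return false;
    }

    static uint64_t mediator_handle_smc(uint32_t func_id)
    {
        if ( !smc_is_whitelisted(func_id) )
            return SKETCH_SMC_NOT_IMPLEMENTED;  /* reject unfamiliar requests */

        /* ... forward the validated request to the real TEE ... */
        return 0;
    }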





If it is not safe, this means you have a whitelist solution and therefore
tie Xen to a specific OP-TEE version. So if you need to use a new function
you would need to upgrade Xen, making the cost of using a new version
potentially high.

Yes, any ABI change between OP-TEE and its clients will require a mediator
upgrade. Luckily, OP-TEE keeps its ABI backward-compatible, so if you
install an old XEN and a new OP-TEE, OP-TEE will use only the subset of the
ABI which is known to XEN.


Also, correct me if I am wrong, but OP-TEE is BSD 2-Clause. This means
anyone wanting to modify OP-TEE for their own purposes can make a
closed version of the TEE. But if you need to introspect/whitelist calls, you
require the vendor to expose their API.

Basically yes. Is this bad? The OP-TEE driver in Linux is licensed under GPL v2.
If a vendor modifies the interface between OP-TEE and Linux, they are in any case
obliged to expose the API.


Pardon me for potentially stupid questions, my knowledge of OP-TEE is limited.

My understanding is that OP-TEE will provide a generic way to access 
different Trusted Applications. While the OP-TEE API may be generic, the TA 
API is custom. AFAICT the latter is not part of the Linux driver.


So here are my questions:
	1) Are you planning to allow all the guests to access every Trusted 
Application?

2) Will you ever need to introspect those messages?



2. Mediator in a stubdomain. Works at EL1.
Pros:
 * Mediator is isolated from hypervisor (but it still can do potentially
   dangerous things like mapping domain memory or pinning pages).
 * One can legally create and use mediator for a closed-source TEE.


* Easier 

[Xen-devel] [PATCH v2] scripts: introduce a script for build test

2017-10-23 Thread Wei Liu
Signed-off-by: Ian Jackson 
Signed-off-by: Wei Liu 
---
Cc: Andrew Cooper 
Cc: George Dunlap 
Cc: Ian Jackson 
Cc: Jan Beulich 
Cc: Konrad Rzeszutek Wilk 
Cc: Stefano Stabellini 
Cc: Tim Deegan 
Cc: Wei Liu 
Cc: Julien Grall 
Cc: Anthony PERARD 
---
 scripts/build-test.sh | 53 +++
 1 file changed, 53 insertions(+)
 create mode 100755 scripts/build-test.sh

diff --git a/scripts/build-test.sh b/scripts/build-test.sh
new file mode 100755
index 00..316419d6b7
--- /dev/null
+++ b/scripts/build-test.sh
@@ -0,0 +1,53 @@
+#!/bin/sh
+
+# Run command on every commit within the range specified. If no command is
+# provided, use the default one to clean and build the whole tree.
+#
+# Cross-build is not yet supported.
+
+set -e
+
+if ! test -f xen/common/kernel.c; then
+echo "Please run this script from top-level directory"
+exit 1
+fi
+
+if test $# -lt 2 ; then
+echo "Usage: $0   [CMD]"
+exit 1
+fi
+
+status=`git status -s`
+if test -n "$status"; then
+echo "Tree is dirty, aborted"
+exit 1
+fi
+
+if git branch | grep -q '^\*.\+detached at'; then
+echo "Detached HEAD, aborted"
+exit 1
+fi
+
+BASE=$1; shift
+TIP=$1; shift
+ORIG_BRANCH=`git rev-parse --abbrev-ref HEAD`
+
+if ! git merge-base --is-ancestor $BASE $TIP; then
+echo "$BASE is not an ancestor of $TIP, aborted"
+exit 1
+fi
+
+git rev-list $BASE..$TIP | nl -ba | tac | \
+while read num rev; do
+echo "Testing $num $rev"
+git checkout $rev
+if test $# -eq 0 ; then
+make -j4 distclean && ./configure && make -j4
+else
+"$@"
+fi
+echo
+done
+
+echo "Restoring original HEAD"
+git checkout $ORIG_BRANCH
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] xenalyze: fix compilation

2017-10-23 Thread George Dunlap
On 10/23/2017 05:28 PM, Roger Pau Monne wrote:
> Recent changes in xenalyze introduced INT_MIN without also adding the
> required header, fix this by adding the header.
> 
> Signed-off-by: Roger Pau Monné 

Acked-by: George Dunlap 

> ---
> Cc: George Dunlap 
> Cc: Ian Jackson 
> Cc: Wei Liu 
> Cc: Julien Grall 
> ---
> This should be accepted for 4.10 because it's a build bug fix, with no
> functional change at all.
> ---
>  tools/xentrace/xenalyze.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
> index 79bdba7fed..5768b54f86 100644
> --- a/tools/xentrace/xenalyze.c
> +++ b/tools/xentrace/xenalyze.c
> @@ -23,6 +23,7 @@
>  #include 
>  #include 
>  #include 
> +#include <limits.h>
>  #include 
>  #include 
>  #include 
> 


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH for-4.10] xenalyze: fix compilation

2017-10-23 Thread Roger Pau Monne
Recent changes in xenalyze introduced INT_MIN without also adding the
required header, fix this by adding the header.

Signed-off-by: Roger Pau Monné 
---
Cc: George Dunlap 
Cc: Ian Jackson 
Cc: Wei Liu 
Cc: Julien Grall 
---
This should be accepted for 4.10 because it's a build bug fix, with no
functional change at all.
---
 tools/xentrace/xenalyze.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index 79bdba7fed..5768b54f86 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -23,6 +23,7 @@
 #include 
 #include 
 #include 
+#include <limits.h>
 #include 
 #include 
 #include 
-- 
2.13.5 (Apple Git-94)


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH RFC v2] Add SUPPORT.md

2017-10-23 Thread George Dunlap
On 09/11/2017 06:53 PM, Andrew Cooper wrote:
> On 11/09/17 18:01, George Dunlap wrote:
>> +### x86/PV
>> +
>> +Status: Supported
>> +
>> +Traditional Xen Project PV guest
> 
> What's a "Xen Project" PV guest?  Just Xen here.
> 
> Also, perhaps a statement of "No hardware requirements"?

OK.

> 
>> +### x86/RAM
>> +
>> +Limit, x86: 16TiB
>> +Limit, ARM32: 16GiB
>> +Limit, ARM64: 5TiB
>> +
>> +[XXX: Andy to suggest what this should say for x86]
> 
> The limit for x86 is either 16TiB or 123TiB, depending on
> CONFIG_BIGMEM.  CONFIG_BIGMEM is exposed via menuconfig without
> XEN_CONFIG_EXPERT, so falls into at least some kind of support statement.
> 
> As for practical limits, I don't think it's reasonable to claim anything
> which we can't test.  What are the specs in the MA colo?

At the moment the "Limit" tag specifically says that it's theoretical
and may not work.

We could add another tag, "Limit-tested", or something like that.

Or, we could simply have the Limit-security be equal to the highest
amount which has been tested (either by osstest or downstreams).

For simplicity's sake I'd go with the second one.

Shall I write an e-mail with a more direct query for the maximum amounts
of various numbers tested by the XenProject (via osstest), Citrix, SuSE,
and Oracle?

>> +
>> +## Limits/Guest
>> +
>> +### Virtual CPUs
>> +
>> +Limit, x86 PV: 512
> 
> Where did this number come from?  The actual limit as enforced in Xen is
> 8192, and it has been like that for a very long time (i.e. the 3.x days)

Looks like Lars copied this from
https://wiki.xenproject.org/wiki/Xen_Project_Release_Features.  Not sure
where it came from before that.

> [root@fusebot ~]# python
> Python 2.7.5 (default, Nov 20 2015, 02:00:19)
> [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
 from xen.lowlevel.xc import xc as XC
 xc = XC()
 xc.domain_create()
> 1
 xc.domain_max_vcpus(1, 8192)
> 0
 xc.domain_create()
> 2
 xc.domain_max_vcpus(2, 8193)
> Traceback (most recent call last):
>   File "", line 1, in 
> xen.lowlevel.xc.Error: (22, 'Invalid argument')
> 
> Trying to shut such a domain down however does tickle a host watchdog
> timeout as the for_each_vcpu() loops in domain_kill() are very long.

For now I'll set 'Limit' to 8192, and 'Limit-security' to 512.
Depending on what I get for the "test limit" survey I may adjust it
afterwards.

>> +Limit, x86 HVM: 128
>> +Limit, ARM32: 8
>> +Limit, ARM64: 128
>> +
>> +[XXX Andrew Cooper: Do want to add "Limit-Security" here for some of these?]
> 
> 32 for each.  64 vcpu HVM guests can exert enough p2m lock pressure to
> trigger a 5 second host watchdog timeout.

Is that "32 for x86 PV and x86 HVM", or "32 for x86 HVM and ARM64"?  Or
something else?

>> +### Virtual RAM
>> +
>> +Limit, x86 PV: >1TB
>> +Limit, x86 HVM: 1TB
>> +Limit, ARM32: 16GiB
>> +Limit, ARM64: 1TB
> 
> There is no specific upper bound on the size of PV or HVM guests that I
> am aware of.  1.5TB HVM domains definitely work, because that's what we
> test and support in XenServer.

Are there limits for 32-bit guests?  There's some complicated limit
having to do with the m2p, right?

>> +
>> +### x86 PV/Event Channels
>> +
>> +Limit: 131072
> 
> Why do we call out event channel limits but not grant table limits? 
> Also, why is this x86?  The 2l and fifo ABIs are arch agnostic, as far
> as I am aware.

Sure, but I'm pretty sure that ARM guests don't (perhaps cannot?) use PV
event channels.

> 
>> +## High Availability and Fault Tolerance
>> +
>> +### Live Migration, Save & Restore
>> +
>> +Status, x86: Supported
> 
> With caveats.  From docs/features/migration.pandoc

This would extend the meaning of "caveats" from "when it's not security
supported" to "when it doesn't work"; which is probably the best thing
at the moment.

> * x86 HVM with nested-virt (no relevant information included in the stream)
[snip]
> Also, features such as vNUMA and nested virt (which are two I know for
> certain) have all state discarded on the source side, because they were
> never suitably plumbed in.

OK, I'll list these, as well as PCI pass-through.

(Actually, vNUMA doesn't seem to be on the list!)

And we should probably add a safety-catch to prevent a VM started with
any of these from being live-migrated.

In fact, if possible, that should be a whitelist: Any configuration that
isn't specifically known to work with migration should cause a migration
command to be refused.
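
A self-contained sketch of what such a refusal check could look like on the
toolstack side; the structure and feature flags are invented for this example
and are not libxl API:

    #include <stdbool.h>
    #include <stdio.h>

    /* Invented for illustration, not libxl types. */
    struct guest_cfg {
        bool nested_virt;
        bool vnuma;
        bool pci_passthrough;
    };

    /* Whitelist approach: refuse migration unless every configured
     * feature is specifically known to work with it. */
    static bool migration_allowed(const struct guest_cfg *cfg)
    {
        return !(cfg->nested_virt || cfg->vnuma || cfg->pci_passthrough);
    }

    int main(void)
    {
        struct guest_cfg cfg = { .nested_virt = true };

        if ( !migration_allowed(&cfg) )
            fprintf(stderr, "migration refused: unsupported feature configured\n");

        return 0;
    }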

What about the following features?

 * Guest serial console
 * Crash kernels
 * Transcendent Memory
 * Alternative p2m
 * vMCE
 * vPMU
 * Intel Platform QoS
 * Remus
 * COLO
 * PV protocols: Keyboard, PVUSB, PVSCSI, PVTPM, 9pfs, pvcalls?
 * FlASK?
 * CPU / memory hotplug?

> * x86 HVM guest physmap operations (not reflected in logdirty bitmap)
> * x86 PV P2M structure changes (not noticed, stale mappings used) for
>   

[Xen-devel] [xen-unstable-smoke test] 115148: tolerable all pass - PUSHED

2017-10-23 Thread osstest service owner
flight 115148 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115148/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  e1bc61af7621ee07f7ba03afcfa4b6fa54fbfb2a
baseline version:
 xen  8e77dabc58c4b6c747dfb4b948551147905a7840

Last test of basis   114800  2017-10-20 11:01:40 Z3 days
Testing same since   115148  2017-10-23 14:17:52 Z0 days1 attempts


People who touched revisions under test:
  Ian Jackson 
  Wei Liu 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable-smoke
+ revision=e1bc61af7621ee07f7ba03afcfa4b6fa54fbfb2a
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
 export PERLLIB=.:.
 PERLLIB=.:.
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push xen-unstable-smoke 
e1bc61af7621ee07f7ba03afcfa4b6fa54fbfb2a
+ branch=xen-unstable-smoke
+ revision=e1bc61af7621ee07f7ba03afcfa4b6fa54fbfb2a
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
 export PERLLIB=.:.:.
 PERLLIB=.:.:.
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
+++ export PERLLIB=.:.:.:.
+++ PERLLIB=.:.:.:.
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable-smoke
+ qemuubranch=qemu-upstream-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ '[' xqemu-upstream-unstable = x ']'
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable-smoke
+ prevxenbranch=xen-4.9-testing
+ '[' xe1bc61af7621ee07f7ba03afcfa4b6fa54fbfb2a = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/xtf.git
++ : osst...@xenbits.xen.org:/home/xen/git/xtf.git
++ : git://xenbits.xen.org/xtf.git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/rumprun.git
++ : git://git.seabios.org/seabios.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/seabios.git
++ : https://github.com/tianocore/edk2.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/ovmf.git
++ : 

Re: [Xen-devel] [xen-unstable test] 115037: regressions - FAIL

2017-10-23 Thread Wei Liu
On Mon, Oct 23, 2017 at 03:38:33PM +0100, Andrew Cooper wrote:
> On 23/10/17 15:34, Jan Beulich wrote:
>  On 23.10.17 at 15:58,  wrote:
> >> On 23/10/17 09:40, Jan Beulich wrote:
> >> On 23.10.17 at 01:49,  wrote:
>  flight 115037 xen-unstable real [real]
>  http://logs.test-lab.xenproject.org/osstest/logs/115037/ 
> 
>  Regressions :-(
> 
>  Tests which did not succeed and are blocking,
>  including tests which could not be run:
>    test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stopfail REGR. 
>  vs. 114644
>    test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. 
>  vs. 114644
>    test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop   fail REGR. 
>  vs. 114644
> >>> I'm puzzled by these recurring failures: Until flight 114525 all three
> >>> (plus the fourth sibling, which is in "guest-stop fail never pass" state)
> >>> were fail-never-pass on windows-install (the 64-bit host ones) or
> >>> guest-saverestore (the 32-bit host ones). Then flights 114540 and
> >>> 114644 were successes, and since then guest-stop has been failing.
> >>> The guest console doesn't show any indication that the guest may
> >>> have received a shutdown signal.
> >> Would it be possible of a platform specific bug? The last two flights 
> >> are failing on merlot1.
> > Not very likely here, I would say.
> 
> These tests have reliably never passed before, and there are no changes
> recently (I'm aware of) which would cause them to start passing.

There is, on the osstest side -- we bumped the disk from 10G to 20G.

Previously the tests failed because there was insufficient space to store
the ISO and the guest image.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH RFC] ARM: vPL011: use receive timeout interrupt

2017-10-23 Thread Andre Przywara
Hi,

On 18/10/17 17:32, Bhupinder Thakur wrote:
> Hi Andre,
> 
> I verified this patch on qualcomm platform. It is working fine.
> 
> On 18 October 2017 at 19:11, Andre Przywara  wrote:
>> Instead of asserting the receive interrupt (RXI) on the first character
>> in the FIFO, let's (ab)use the receive timeout interrupt (RTI) for that
>> purpose. That seems to be closer to the spec and what hardware does.
>> Improve the readability of vpl011_data_avail() on the way.
>>
>> Signed-off-by: Andre Przywara 
>> ---
>> Hi,
>>
>> this one is the approach I mentioned in the email earlier today.
>> It goes on top of Bhupinder's v12 27/27, but should eventually be merged
>> into this one once we agree on the subject. I just carved it out here
>> for clarity to make it clearer what has been changed.
>> Would be good if someone could test it.
>>
>> Cheers,
>> Andre.
>>  xen/arch/arm/vpl011.c | 61 
>> ---
>>  1 file changed, 29 insertions(+), 32 deletions(-)
>>
>> diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
>> index adf1711571..ae18bddd81 100644
>> --- a/xen/arch/arm/vpl011.c
>> +++ b/xen/arch/arm/vpl011.c
>> @@ -105,9 +105,13 @@ static uint8_t vpl011_read_data(struct domain *d)
>>  if ( fifo_level == 0 )
>>  {
>>  vpl011->uartfr |= RXFE;
>> -vpl011->uartris &= ~RXI;
>> -vpl011_update_interrupt_status(d);
>> +vpl011->uartris &= ~RTI;
>>  }
>> +
>> +if ( fifo_level < sizeof(intf->in) - SBSA_UART_FIFO_SIZE / 2 )
>> +vpl011->uartris &= ~RXI;
>> +
>> +vpl011_update_interrupt_status(d);
> I think we should check if ( fifo_level < SBSA_UART_FIFO_SIZE / 2 ), which
> should be a valid condition to clear the RX interrupt.

Are you sure? My understanding is that the semantics of the return value
of xencons_queued() differs between intf and outf:
- For intf, Xen fills that buffer with incoming characters. The
watermark is assumed to be (FIFO / 2), which translates into 16
characters. Now for the SBSA vUART RX side that means: "Assert the RX
interrupt if there is only room for 16 (or less) characters in the FIFO
(read: intf buffer in our case). Since we (ab)use the Xen buffer for the
FIFO, this means we warn if the number of queued characters exceeds
(buffersize - 16).
- For outf, the UART emulation fills the buffer. The SBSA vUART TX side
demands that the TX interrupt is asserted if the fill level of the
transmit FIFO is less than or equal to the 16 characters, which means:
number of queued characters is less than 16.

I think the key point is that our trigger level isn't symmetrical here,
since we have to emulate the architected 32-byte FIFO semantics for the
driver, but have a (secretly) much larger "FIFO" internally.

Do you agree with this reasoning and do I have a thinko here? Could well
be I am seriously misguided here.
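
For reference, a small self-contained model of the two RX trigger conditions
being compared in this sub-thread; IN_RING_SIZE is an arbitrary stand-in for
sizeof(intf->in), and only the comparison logic is taken from the messages
above:

    #include <stdbool.h>
    #include <stdio.h>

    #define SBSA_UART_FIFO_SIZE  32u
    #define IN_RING_SIZE         1024u  /* arbitrary stand-in for sizeof(intf->in) */

    /* RX trigger as in the patch: RXI asserted while the queued characters
     * leave room for only FIFO_SIZE/2 or fewer in the ring. */
    static bool rxi_asserted_patch(unsigned int fifo_level)
    {
        return fifo_level >= IN_RING_SIZE - SBSA_UART_FIFO_SIZE / 2;
    }

    /* RX trigger as suggested in the reply: based purely on the architected
     * 32-byte FIFO watermark, independent of the ring size. */
    static bool rxi_asserted_suggested(unsigned int fifo_level)
    {
        return fifo_level >= SBSA_UART_FIFO_SIZE / 2;
    }

    int main(void)
    {
        unsigned int levels[] = { 1, 16, 100, IN_RING_SIZE - 16 };

        for ( unsigned int i = 0; i < sizeof(levels) / sizeof(levels[0]); i++ )
            printf("level %4u: patch=%d suggested=%d\n", levels[i],
                   rxi_asserted_patch(levels[i]),
                   rxi_asserted_suggested(levels[i]));

        return 0;
    }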

Cheers,
Andre

>>  }
>>  else
>>  gprintk(XENLOG_ERR, "vpl011: Unexpected IN ring buffer empty\n");
>> @@ -129,7 +133,7 @@ static void vpl011_update_tx_fifo_status(struct vpl011 
>> *vpl011,
>>   unsigned int fifo_level)
>>  {
>>  struct xencons_interface *intf = vpl011->ring_buf;
>> -unsigned int fifo_threshold;
>> +unsigned int fifo_threshold = sizeof(intf->out) - SBSA_UART_FIFO_SIZE/2;
>>
>>  BUILD_BUG_ON(sizeof (intf->out) < SBSA_UART_FIFO_SIZE);
>>
>> @@ -137,8 +141,6 @@ static void vpl011_update_tx_fifo_status(struct vpl011 
>> *vpl011,
>>   * Set the TXI bit only when there is space for fifo_size/2 bytes which
>>   * is the trigger level for asserting/de-assterting the TX interrupt.
>>   */
>> -fifo_threshold = sizeof(intf->out) - SBSA_UART_FIFO_SIZE/2;
>> -
>>  if ( fifo_level <= fifo_threshold )
>>  vpl011->uartris |= TXI;
>>  else
>> @@ -390,35 +392,30 @@ static void vpl011_data_avail(struct domain *d)
>>  out_cons,
>>  sizeof(intf->out));
>>
>> -/* Update the uart rx state if the buffer is not empty. */
>> -if ( in_fifo_level != 0 )
>> -{
>> +/* Update the UART RX state */
>> +
>> +/* Clear the FIFO_EMPTY bit if the FIFO holds at least one character. */
>> +if ( in_fifo_level > 0 )
>>  vpl011->uartfr &= ~RXFE;
>>
>> -if ( in_fifo_level == sizeof(intf->in) )
>> -vpl011->uartfr |= RXFF;
>> +/* Set the FIFO_FULL bit if the ring buffer is full. */
>> +if ( in_fifo_level == sizeof(intf->in) )
>> +vpl011->uartfr |= RXFF;
>>
>> -/*
>> - * Currently, the RXI bit is getting set even if there is a single
>> - * byte of data in the rx fifo. Ideally, the RXI bit should be set
>> - * only if the rx fifo level reaches the threshold.
>> - *
>> - * However, since currently RX timeout interrupt is not
>> - * 

[Xen-devel] [linux-linus test] 115121: regressions - FAIL

2017-10-23 Thread osstest service owner
flight 115121 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115121/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 15 guest-saverestore.2 fail REGR. 
vs. 114682
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop   fail REGR. vs. 114682

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore.2fail like 114658
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 114682
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 114682
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 114682
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 114682
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 114682
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 114682
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 114682
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass

version targeted for testing:
 linux  02982f8550b3f2d908848f417ba802193dee5f4a
baseline version:
 linux  ebe6e90ccc6679cb01d2b280e4b61e6092d4bedb

Last test of basis   114682  2017-10-18 09:54:11 Z5 days
Failing since114781  2017-10-20 01:00:47 Z3 days6 attempts
Testing same since   115121  2017-10-23 06:59:28 Z0 days1 attempts


People who touched revisions under test:
  Adrian Hunter 
  Al Viro 
  Alex Deucher 
  Alex Elder (Linaro) 
  Alexander Duyck 
  Alexei Starovoitov 
  Anders K Pedersen 
  Andrea Arcangeli 
  Andrew Bowers 
  Andrew Duggan 
  Andrey Smirnov 
  Andy Gross 
  Andy Lutomirski 
  Aneesh Kumar K.V 
  Anna Schumaker 
  Ard Biesheuvel 
  Arend van Spriel 

Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Wei Liu
On Mon, Oct 23, 2017 at 04:09:58PM +0100, Ian Jackson wrote:
> Wei Liu writes ("Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for 
> build testing"):
> > On Mon, Oct 23, 2017 at 03:50:31PM +0100, Anthony PERARD wrote:
> > > FYI, I do like to put scripts and other files in my checkouts; the git
> > > clean will remove them.
> > 
> > I changed that to make distclean this morning.
> 
> Urgh.  This script depends on git, so please continue to use git to
> check if the tree is clean.
> 
> The right answer would be to *check that the tree is clean* before
> starting.

Sure, that's also something I just did.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Ian Jackson
Wei Liu writes ("Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for 
build testing"):
> On Mon, Oct 23, 2017 at 03:50:31PM +0100, Anthony PERARD wrote:
> > FYI, I do like to put scripts and other files in my checkouts; the git
> > clean will remove them.
> 
> I changed that to make distclean this morning.

Urgh.  This script depends on git, so please continue to use git to
check if the tree is clean.

The right answer would be to *check that the tree is clean* before
starting.

Ian.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Wei Liu
On Mon, Oct 23, 2017 at 03:50:31PM +0100, Anthony PERARD wrote:
> On Mon, Oct 23, 2017 at 02:02:53PM +0100, George Dunlap wrote:
> > On 10/20/2017 06:32 PM, Wei Liu wrote:
> > > Signed-off-by: Wei Liu 
> > > ---
> > > Cc: Andrew Cooper 
> > > Cc: George Dunlap 
> > > Cc: Ian Jackson 
> > > Cc: Jan Beulich 
> > > Cc: Konrad Rzeszutek Wilk 
> > > Cc: Stefano Stabellini 
> > > Cc: Tim Deegan 
> > > Cc: Wei Liu 
> > > Cc: Julien Grall 
> > > 
> > > The risk for this is zero, hence the for-4.10 tag.
> > 
> > I'm not necessarily arguing against this, but in my estimation this
> > isn't zero risk.  It's a new feature (even if one only for developers).
> > It's not *intended* to destroy anything, but a bug in it well could
> > destroy data.
> 
> There is a `git clean -dxf` in the script, this is destructive! I'm sure
> it's going to take some people by surprise.
> 
> FYI, I do like to put scripts and other files in my checkouts; the git
> clean will remove them.

I changed that to make distclean this morning.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC] [Draft Design] ACPI/IORT Support in Xen.

2017-10-23 Thread Julien Grall

Hi,

On 23/10/17 14:57, Andre Przywara wrote:

On 12/10/17 22:03, Manish Jaggi wrote:

It is proposed that the idrange of PCIRC and ITS group be constant for
domUs.


"constant" is a bit confusing here. Maybe "arbitrary", "from scratch" or
"independent from the actual h/w"?


I don't think we should tie it to anything here. The IORT for DomU will get 
some input; it could be the same as the host or something generated (not 
necessarily constant). Those are implementation details and might be up to 
the user.





In case of PCI PT, using a domctl the toolstack can communicate
physical RID:virtual RID, deviceID:virtual deviceID to Xen.

It is assumed that domU PCI Config access would be trapped in Xen. The
RID at which assigned device is enumerated would be the one provided by the
domctl, domctl_set_deviceid_mapping

TODO: device assign domctl i/f.
Note: This should suffice the virtual deviceID support pointed by Andre.
[4]


Well, there's more to it. First thing: while I tried to include virtual
ITS deviceIDs to be different from physical ones, at the moment they
are fixed to being mapped 1:1 in the code.

So the first step would be to go over the ITS code and identify where
"devid" refers to a virtual deviceID and where to a physical one
(probably renaming them accordingly). Then we would need a function to
translate between the two. At the moment this would be a dummy function
(just return the input value). Later we would loop in the actual table.
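
A self-contained sketch of the translation helper described here, starting
out as the 1:1 dummy mapping and later backed by an actual table; the table
layout is invented for illustration:

    #include <stdint.h>
    #include <stddef.h>

    /* Invented layout, for illustration only. */
    struct devid_map {
        uint32_t virt;
        uint32_t phys;
    };

    static const struct devid_map *devid_table;   /* none loaded yet */
    static size_t devid_table_entries;

    /* Translate a virtual ITS deviceID into the physical one.  While no
     * table is loaded this degenerates to the dummy 1:1 mapping. */
    static uint32_t devid_virt_to_phys(uint32_t virt_devid)
    {
        for ( size_t i = 0; i < devid_table_entries; i++ )
            if ( devid_table[i].virt == virt_devid )
                return devid_table[i].phys;

        return virt_devid;   /* identity fallback: "just return the input value" */
    }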


We might not need this domctl if assign_device hypercall is extended to
provide this information.


Do we actually need a new interface or even extend the existing one?
If I got Julien correctly, the existing interface is just fine?


In the first place, I am not sure I understand why domctl is mentioned 
in this document. I can understand why you want to describe the 
information used for the DomU IORT. But it does not matter how this 
ties into the rest of the passthrough work.


[...]



6. IORT Generation
---
There would be common code to generate the IORT table from iort_table_struct.


That sounds useful, but we would need to be careful with sharing code
between Xen and the tool stack. Has this actually been done before?


Yes, see libelf for instance. But I think there is a terminology problem 
here.


Skimming the rest of the e-mail I see: "populate a basic IORT in a 
buffer passed by toolstack (using a domctl : domctl_prepare_dom_iort)".
By sharing code, I meant creating a library that would be compiled in 
both the hypervisor and the toolstack.


But as I said before, this is not the purpose now. The purpose is 
finally getting support of IORT in the hypervisor with the generation of 
the IORT for Dom0 fully separated from the parsing.



a. For Dom0
     the structure (iort_table_struct) be modified to remove smmu nodes
     and update id_mappings.
     PCIRC idmap -> output reference to ITS group.
     (RID -> DeviceID).

     TODO: Describe algo in update_id_mapping function to map RID ->
DeviceID used
     in my earlier patch [3]


If the above approach works, this would become a simple list iteration,
creating PCI rc nodes with the appropriate pointer to the ITS nodes.


b. For DomU
     - iort_table_struct would have a minimum of 2 nodes (1 PCIRC and 1 ITS
group)
     - populate a basic IORT in a buffer passed by the toolstack (using a
domctl: domctl_prepare_dom_iort)


I think we should reduce this to iterating the same data structure as
for Dom0. Each pass-through-ed PCI device would possibly create one
struct instance, and later on we do the same iteration as we do for
Dom0. If that proves to be simple enough, we might even live with the
code duplication between Xen and the toolstack.


I think you summarize quite well what I have been saying in the previous 
thread. Thank you :).


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v12 00/33] osstest: FreeBSD host support

2017-10-23 Thread Roger Pau Monné
On Fri, Oct 20, 2017 at 04:32:44PM +0100, Ian Jackson wrote:
> We have decided:
> 
>  We will push the anoint and examine parts of this series to osstest
>  pretest.  (You're going to give me a suitable branch on Monday.)
>  This should work because we have anointed FreeBSD builds already.

Sorry for the delay, had to cherry-pick some commits from the FreeBSD
host install series in order for the examine one to work. I've pushed
this to the following branch:

git://xenbits.xen.org/people/royger/osstest.git examine

Here is the output of a sample examine flight with the contents of the
branch:

http://osstest.xs.citrite.net/~osstest/testlogs/logs/72345/

Note that patch "ts-freebsd-host-install: add arguments to test
memdisk append options" is missing an Ack (you requested changes to
it).

Thanks, Roger.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Anthony PERARD
On Mon, Oct 23, 2017 at 02:02:53PM +0100, George Dunlap wrote:
> On 10/20/2017 06:32 PM, Wei Liu wrote:
> > Signed-off-by: Wei Liu 
> > ---
> > Cc: Andrew Cooper 
> > Cc: George Dunlap 
> > Cc: Ian Jackson 
> > Cc: Jan Beulich 
> > Cc: Konrad Rzeszutek Wilk 
> > Cc: Stefano Stabellini 
> > Cc: Tim Deegan 
> > Cc: Wei Liu 
> > Cc: Julien Grall 
> > 
> > The risk for this is zero, hence the for-4.10 tag.
> 
> I'm not necessarily arguing against this, but in my estimation this
> isn't zero risk.  It's a new feature (even if one only for developers).
> It's not *intended* to destroy anything, but a bug in it well could
> destroy data.

There is a `git clean -dxf` in the script, this is destructive! I'm sure
it's going to take some people by surprise.

FYI, I do like to put scripts and other files in my checkouts; the git
clean will remove them.

-- 
Anthony PERARD

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [xen-unstable test] 115037: regressions - FAIL

2017-10-23 Thread Andrew Cooper
On 23/10/17 15:34, Jan Beulich wrote:
 On 23.10.17 at 15:58,  wrote:
>> On 23/10/17 09:40, Jan Beulich wrote:
>> On 23.10.17 at 01:49,  wrote:
 flight 115037 xen-unstable real [real]
 http://logs.test-lab.xenproject.org/osstest/logs/115037/ 

 Regressions :-(

 Tests which did not succeed and are blocking,
 including tests which could not be run:
   test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stopfail REGR. vs. 
 114644
   test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 
 114644
   test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop   fail REGR. vs. 
 114644
>>> I'm puzzled by these recurring failures: Until flight 114525 all three
>>> (plus the fourth sibling, which is in "guest-stop fail never pass" state)
>>> were fail-never-pass on windows-install (the 64-bit host ones) or
>>> guest-saverestore (the 32-bit host ones). Then flights 114540 and
>>> 114644 were successes, and since then guest-stop has been failing.
>>> The guest console doesn't show any indication that the guest may
>>> have received a shutdown signal.
>> Would it be possible of a platform specific bug? The last two flights 
>> are failing on merlot1.
> Not very likely here, I would say.

These tests have reliably never passed before, and there are no changes
recently (I'm aware of) which would cause them to start passing.

The windows VMs aren't running PV drivers, so have no clue about the
xenstore control key, or what a setting of shutdown is supposed to mean.

The bug is why there are two spurious passes.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [xen-unstable test] 115037: regressions - FAIL

2017-10-23 Thread Jan Beulich
>>> On 23.10.17 at 15:58,  wrote:
> On 23/10/17 09:40, Jan Beulich wrote:
> On 23.10.17 at 01:49,  wrote:
>>> flight 115037 xen-unstable real [real]
>>> http://logs.test-lab.xenproject.org/osstest/logs/115037/ 
>>>
>>> Regressions :-(
>>>
>>> Tests which did not succeed and are blocking,
>>> including tests which could not be run:
>>>   test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stopfail REGR. vs. 
>>> 114644
>>>   test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 
>>> 114644
>>>   test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop   fail REGR. vs. 
>>> 114644
>> 
>> I'm puzzled by these recurring failures: Until flight 114525 all three
>> (plus the fourth sibling, which is in "guest-stop fail never pass" state)
>> were fail-never-pass on windows-install (the 64-bit host ones) or
>> guest-saverestore (the 32-bit host ones). Then flights 114540 and
>> 114644 were successes, and since then guest-stop has been failing.
>> The guest console doesn't show any indication that the guest may
>> have received a shutdown signal.
> 
> Would it be possible of a platform specific bug? The last two flights 
> are failing on merlot1.

Not very likely here, I would say.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [xen-unstable test] 115037: regressions - FAIL

2017-10-23 Thread Ian Jackson
Julien Grall writes ("Re: [Xen-devel] [xen-unstable test] 115037: regressions - 
FAIL"):
> Would it be possible of a platform specific bug? The last two flights 
> are failing on merlot1.

The merlots are highly unusual AMD machines which have NUMA nodes
with no memory and seem to sometimes have performance problems...

Ian.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC] [Draft Design] ACPI/IORT Support in Xen.

2017-10-23 Thread Andre Przywara
Hi Manish,

On 12/10/17 22:03, Manish Jaggi wrote:
> ACPI/IORT Support in Xen.
> --
> 
> I had sent out a patch series [0] to hide the SMMU from the Dom0 IORT. Extending
> the scope
> and including all that is required to support ACPI/IORT in Xen.
> Presenting for review
> the first _draft_ of the design of ACPI/IORT support in Xen. Not complete though.
> 
> Discussed is the parsing and generation of IORT table for Dom0 and DomUs.
> It is proposed that IORT be parsed and the information in saved into xen
> data-structure
> say host_iort_struct and is reused by all xen subsystems like ITS / SMMU
> etc.
> 
> Since this is first draft is open to technical comments, modifications
> and suggestions. Please be open and feel free to add any missing points
> / additions.
> 
> 1. What is IORT. What are its components ?
> 2. Current Support in Xen
> 3. IORT for Dom0
> 4. IORT for DomU
> 5. Parsing of IORT in Xen
> 6. Generation of IORT
> 7. Future Work and TODOs
> 
> 1. What is IORT. What are its components ?
> 
> IORT refers to Input Output remapping table. It is essentially used to find
> information about the IO topology (PCIRC-SMMU-ITS) and relationships
> between
> devices.
> 
> A general structure of IORT has nodes which have information about
> PCI RC,
> SMMU, ITS and Platform devices. Using an IORT table relationship between
> RID -> StreamID -> DeviceId can be obtained. More specifically which
> device is
> behind which SMMU and which interrupt controller, this topology is
> described in
> IORT Table.
> 
> RID is a requester ID in PCI context,
> StreamID is the ID of the device in SMMU context,
> DeviceID is the ID programmed in ITS.
> 
> For a non-PCI device the RID could simply be an ID.
> 
> Each iort_node contains an ID map array to translate from one ID into another:
> IDmap Entry {input_range, output_range, output_node_ref, id_count}
> This array is present in the PCI RC node, SMMU node, named component node etc.
> and can reference an SMMU or ITS node.
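
For illustration, a minimal sketch of such an ID-mapping walk (hypothetical
types and field names, not the actual ACPI layout and not Xen code):

    #include <stdint.h>
    #include <stddef.h>

    struct iort_node;

    /* Simplified ID-mapping entry: one contiguous input range mapped onto
     * a contiguous output range, pointing at the next node in the chain. */
    struct id_mapping {
        uint32_t input_base;    /* first input ID (e.g. RID) covered        */
        uint32_t id_count;      /* number of IDs covered by this entry      */
        uint32_t output_base;   /* first output ID (e.g. StreamID/DeviceID) */
        const struct iort_node *output_ref; /* referenced SMMU or ITS node  */
    };

    struct iort_node {
        const struct id_mapping *map;   /* ID map array of this node */
        unsigned int nr_map;
    };

    /* One step of the RID -> StreamID -> DeviceID translation: look up an
     * input ID in a node's map array and return the referenced node. */
    static const struct iort_node *translate_id(const struct iort_node *node,
                                                uint32_t in, uint32_t *out)
    {
        unsigned int i;

        for ( i = 0; i < node->nr_map; i++ )
        {
            const struct id_mapping *m = &node->map[i];

            if ( in >= m->input_base && in < m->input_base + m->id_count )
            {
                *out = m->output_base + (in - m->input_base);
                return m->output_ref;
            }
        }

        return NULL;    /* no mapping found */
    }

Calling this once on a PCI RC node yields the StreamID and the SMMU node;
calling it again on that SMMU node yields the DeviceID and the ITS group.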
> 
> 2. Current Support of IORT
> ---
> Currently Xen passes host IORT table to dom0 without any modifications.
> For DomU no IORT table is passed.
> 
> 3. IORT for Dom0
> -
> IORT for Dom0 is prepared by Xen and is fairly similar to the host IORT.
> However, a few nodes could be removed or modified. For instance:
> - host SMMU nodes should not be present
> - ITS group nodes are the same as in the host IORT, but no stage2 mapping is
> done for them.

What do you mean with stage2 mapping?

> - platform nodes (named components) may be selectively present, depending on
> whether Xen is using some of them. This could be controlled by the Xen command
> line.

Mmh, I am not so sure platform devices described in the IORT (those
which use MSIs!) are so much different from PCI devices here. My
understanding is those platform devices are network adapters, for
instance, for which Xen has no use.
So I would translate "Named Components" or "platform devices" as devices
just not using the PCIe bus (so no config space and no (S)BDF), but
being otherwise the same from an ITS or SMMU point of view.

> - More items : TODO

I think we agreed upon rewriting the IORT table instead of patching it?
So to some degree your statements are true, but when we rewrite the IORT
table without SMMUs (and possibly without other components like the
PMUs), it would be kind of a stretch to call it "fairly similar to the
host IORT". I think "based on the host IORT" would be more precise.

> 4. IORT for DomU
> -
> IORT for DomU is generated by the toolstack. IORT topology is different
> when DomU supports device passthrough.

Can you elaborate on that? Different compared to what? My understanding
is that without device passthrough there would be no IORT in the first
place?

> At a minimum the domU IORT should include a single PCIRC and ITS group.
> A matching PCIRC can be added in the DSDT.
> An additional node can be added if a platform device is assigned to the domU.
> No extra node should be required for PCI device pass-through.

Again I don't fully understand this last sentence.

> It is proposed that the idrange of PCIRC and ITS group be constant for
> domUs.

"constant" is a bit confusing here. Maybe "arbitrary", "from scratch" or
"independent from the actual h/w"?

> In case of PCI PT, using a domctl the toolstack can communicate the
> physical RID : virtual RID and deviceID : virtual deviceID mappings to Xen.
> 
> It is assumed that domU PCI config accesses would be trapped in Xen. The
> RID at which the assigned device is enumerated would be the one provided by
> the domctl, domctl_set_deviceid_mapping.
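
Purely as an illustration of the shape such an interface could take (the
draft only names domctl_set_deviceid_mapping; every field below is made up):

    #include <stdint.h>

    struct xen_domctl_set_deviceid_mapping {      /* hypothetical */
        uint32_t phys_rid;       /* physical requester ID (BDF)          */
        uint32_t virt_rid;       /* RID the guest will see               */
        uint32_t phys_deviceid;  /* deviceID programmed in the host ITS  */
        uint32_t virt_deviceid;  /* deviceID exposed in the guest IORT   */
    };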
> 
> TODO: device assign domctl i/f.
> Note: This should suffice for the virtual deviceID support pointed out by
> Andre [4].

Well, there's more to it. First thing: while I tried to allow virtual
ITS deviceIDs to be different from physical ones, at the moment they
are fixed to being mapped 1:1 in the code.

So the first step would 

Re: [Xen-devel] [PATCH v2 for-4.10 2/2] xentoolcore_restrict_all: Implement for libxenevtchn

2017-10-23 Thread Julien Grall

Hi Ian,

On 19/10/17 15:57, Ian Jackson wrote:

Ross Lagerwall writes ("[PATCH v2 for-4.10 2/2] xentoolcore_restrict_all: Implement 
for libxenevtchn"):

Signed-off-by: Ross Lagerwall 
---
Changed in v2:
* Keep warning about DoS and resource exhaustion being a possibility.


Acked-by: Ian Jackson 

Julien, I think you intended your release-ack to apply to both these
patches.  Unless you object I will put your release-ack on this patch
too, therefore, and commit both of them.


I wasn't CCed on the first version of this patch, so the release-ack was
directed at patch #1 only.


Anyway,

Release-acked-by: Julien Grall 

Cheers,

--
Julien Grall



Re: [Xen-devel] [PATCH v3 for 4.10] x86/vpt: guarantee the return value of pt_update_irq() set in vIRR or PIR

2017-10-23 Thread Julien Grall

Hi,

On 20/10/17 15:16, Jan Beulich wrote:

On 20.10.17 at 15:23,  wrote:

On 20/10/17 12:42, Jan Beulich wrote:

On 20.10.17 at 02:35,  wrote:

pt_update_irq() is expected to return the vector number of periodic
timer interrupt, which should be set in vIRR of vlapic or in PIR.
Otherwise it would trigger the assertion in vmx_intr_assist(), please
seeing
https://lists.xenproject.org/archives/html/xen-devel/2017-10/msg00915.html.

But it fails to achieve that in the following two cases:
1. hvm_isa_irq_assert() may not set the corresponding bit in vIRR because the
mask field of the IOAPIC RTE is set. Please refer to the call tree
vmx_intr_assist() -> pt_update_irq() -> hvm_isa_irq_assert() ->
assert_irq() -> assert_gsi() -> vioapic_irq_positive_edge(). The patch
checks whether the vector is set or not in vIRR of vlapic or PIR before
returning.

2. someone changes the vector field of IOAPIC RTE between asserting
the irq and getting the vector of the irq, leading to setting the
old vector number but returning a different vector number. This patch
allows hvm_isa_irq_assert() to accept a callback which can get the
interrupt vector with irq_lock held. Thus, no one can change the vector
between the two operations.

BTW, the first argument of pi_test_and_set_pir() should be uint8_t
and I take this chance to fix it.

Signed-off-by: Chao Gao 
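
A minimal standalone sketch of the check described in case 1 (not the actual
Xen code; vIRR and PIR are simply modelled as bit arrays here):

    #include <stdbool.h>
    #include <stdint.h>

    /* vIRR: 256 bits as eight 32-bit words; PIR: 256 bits as four 64-bit words. */
    static bool vector_is_pending(const uint32_t virr[8], const uint64_t pir[4],
                                  uint8_t vector)
    {
        if ( virr[vector / 32] & (1u << (vector % 32)) )
            return true;                             /* latched in the vlapic IRR */

        return (pir[vector / 64] >> (vector % 64)) & 1;   /* or posted in the PIR */
    }

Only if this holds may pt_update_irq() report the vector to vmx_intr_assist().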


Reviewed-by: Jan Beulich 


Do you have any opinion on this patch going to Xen 4.10?


Well, the author having hopes that this addresses the assertion
failure we keep seeing in osstest every once in a while, I think
we certainly want to have it despite me not being fully convinced
that it'll actually help. I'm sufficiently convinced, though, that it won't do
any harm.


I guess it is worth having a try then:

Release-acked-by: Julien Grall 

Cheers,

--
Julien Grall



Re: [Xen-devel] [xen-unstable test] 115037: regressions - FAIL

2017-10-23 Thread Julien Grall

+ Andrew

Hi,

On 23/10/17 09:40, Jan Beulich wrote:

On 23.10.17 at 01:49,  wrote:

flight 115037 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115037/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
  test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop    fail REGR. vs. 114644
  test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 114644
  test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop   fail REGR. vs. 114644


I'm puzzled by these recurring failures: Until flight 114525 all three
(plus the fourth sibling, which is in "guest-stop fail never pass" state)
were fail-never-pass on windows-install (the 64-bit host ones) or
guest-saverestore (the 32-bit host ones). Then flights 114540 and
114644 were successes, and since then guest-stop has been failing.
The guest console doesn't show any indication that the guest may
have received a shutdown signal.


Could this be a platform-specific bug? The last two flights
are failing on merlot1.


Cheers,

--
Julien Grall



Re: [Xen-devel] [PATCH for-4.10] Config.mk: update mini-os changeset

2017-10-23 Thread Julien Grall

Hi,

On 20/10/17 12:10, Wei Liu wrote:

The new changeset contains the new console.h fix in xen.git.

Signed-off-by: Wei Liu 
---
Cc: Julien Grall 

This is rather low risk because stubdom build in xen.git uses xen
headers directly.

I just don't want to ship a version of xen which points to a buggy
mini-os changeset.


Release-acked-by: Julien Grall 

Cheers,


---
  Config.mk | 6 +++---
  1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/Config.mk b/Config.mk
index 78e8a2cc8a..664f97e726 100644
--- a/Config.mk
+++ b/Config.mk
@@ -274,9 +274,9 @@ MINIOS_UPSTREAM_URL ?= git://xenbits.xen.org/mini-os.git
  endif
  OVMF_UPSTREAM_REVISION ?= 947f3737abf65fda63f3ffd97fddfa6986986868
  QEMU_UPSTREAM_REVISION ?= qemu-xen-4.10.0-rc1
-MINIOS_UPSTREAM_REVISION ?= xen-4.10.0-rc1
-# Tue Oct 3 19:45:19 2017 +0100
-# Link against libxentoolcore
+MINIOS_UPSTREAM_REVISION ?= 0b4b7897e08b967a09bed2028a79fabff82342dd
+# Mon Oct 16 16:36:41 2017 +0100
+# Update Xen header files again
  
  SEABIOS_UPSTREAM_REVISION ?= rel-1.10.2

  # Wed Jun 22 14:53:24 2016 +0800



--
Julien Grall



Re: [Xen-devel] [PATCH for-4.10] libxl: annotate s to be nonnull in libxl__enum_from_string

2017-10-23 Thread Wei Liu
On Mon, Oct 23, 2017 at 02:04:58PM +0100, Ian Jackson wrote:
> Wei Liu writes ("Re: [Xen-devel] [PATCH for-4.10] libxl: annotate s to be 
> nonnull in libxl__enum_from_string"):
> > On Mon, Oct 23, 2017 at 01:32:50PM +0100, Julien Grall wrote:
> > > I would be ok with that. Wei do you have any opinion?
> > 
> > Sure this is a simple enough patch. We should preferably turn all NN1 to
> > NN(1), too.
> 
> That would be fine by me but I don't feel a need to hurry with that.
> I can provide the patch to do that right now, or we can save doing
> that for after 4.10.
> 

Either is fine.



Re: [Xen-devel] [PATCH for-4.10] libxl: annotate s to be nonnull in libxl__enum_from_string

2017-10-23 Thread Ian Jackson
Wei Liu writes ("Re: [Xen-devel] [PATCH for-4.10] libxl: annotate s to be 
nonnull in libxl__enum_from_string"):
> On Mon, Oct 23, 2017 at 01:32:50PM +0100, Julien Grall wrote:
> > I would be ok with that. Wei do you have any opinion?
> 
> Sure this is a simple enough patch. We should preferably turn all NN1 to
> NN(1), too.

That would be fine by me but I don't feel a need to hurry with that.
I can provide the patch to do that right now, or we can save doing
that for after 4.10.

Ian.



Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread George Dunlap
On 10/20/2017 06:32 PM, Wei Liu wrote:
> Signed-off-by: Wei Liu 
> ---
> Cc: Andrew Cooper 
> Cc: George Dunlap 
> Cc: Ian Jackson 
> Cc: Jan Beulich 
> Cc: Konrad Rzeszutek Wilk 
> Cc: Stefano Stabellini 
> Cc: Tim Deegan 
> Cc: Wei Liu 
> Cc: Julien Grall 
> 
> The risk for this is zero, hence the for-4.10 tag.

I'm not necessarily arguing against this, but in my estimation this
isn't zero risk.  It's a new feature (even if one only for developers).
It's not *intended* to destroy anything, but a bug in it well could
destroy data.

 -George



Re: [Xen-devel] [PATCH for-4.10] libxl: annotate s to be nonnull in libxl__enum_from_string

2017-10-23 Thread Wei Liu
On Mon, Oct 23, 2017 at 01:32:50PM +0100, Julien Grall wrote:
> Hi,
> 
> On 20/10/17 11:47, Ian Jackson wrote:
> > Julien Grall writes ("Re: [Xen-devel] [PATCH for-4.10] libxl: annotate s to 
> > be nonnull in libxl__enum_from_string"):
> > > Release-acked-by: Julien Grall 
> > 
> > Thanks, I have applied this.  Not sure whether this followup is 4.10
> > material, but IMO it is if we would otherwise want to add another
> > open-coded __attribute__.
> 
> I would be ok with that. Wei do you have any opinion?
> 

Sure this is a simple enough patch. We should preferably turn all NN1 to
NN(1), too.



Re: [Xen-devel] [PATCH for-4.10] libxl: annotate s to be nonnull in libxl__enum_from_string

2017-10-23 Thread Julien Grall

Hi,

On 20/10/17 11:47, Ian Jackson wrote:

Julien Grall writes ("Re: [Xen-devel] [PATCH for-4.10] libxl: annotate s to be 
nonnull in libxl__enum_from_string"):

Release-acked-by: Julien Grall 


Thanks, I have applied this.  Not sure whether this followup is 4.10
material, but IMO it is if we would otherwise want to add another
open-coded __attribute__.


I would be ok with that. Wei do you have any opinion?

Cheers,



Ian.

 From b15e10f24a0d3c35033c26832e91aa14d40fc437 Mon Sep 17 00:00:00 2001
From: Ian Jackson 
Date: Fri, 20 Oct 2017 11:42:42 +0100
Subject: [PATCH] libxl: Replace open-coded __attribute__ with NN() macro

Inspired by
   #define __nonnull(...) __attribute__((__nonnull__(__VA_ARGS__)))
which is used in the hypervisor.

These annotations may well become very common in libxl, so we choose a
short name.

Signed-off-by: Ian Jackson 
CC: Andrew Cooper 
CC: Wei Liu 
CC: Julien Grall 
---
  tools/libxl/libxl_internal.h | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 9fe472e..bfa95d8 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -635,6 +635,7 @@ static inline int libxl__gc_is_real(const libxl__gc *gc)
   */
  /* register ptr in gc for free on exit from outermost libxl callframe. */
  
+#define NN(...) __attribute__((nonnull(__VA_ARGS__)))

  #define NN1 __attribute__((nonnull(1)))
   /* It used to be legal to pass NULL for gc_opt.  Get the compiler to
* warn about this if any slip through. */
@@ -1711,7 +1712,7 @@ _hidden char *libxl__domid_to_name(libxl__gc *gc, 
uint32_t domid);
  _hidden char *libxl__cpupoolid_to_name(libxl__gc *gc, uint32_t poolid);
  
  _hidden int libxl__enum_from_string(const libxl_enum_string_table *t,

-const char *s, int *e) 
__attribute__((nonnull(2)));
+const char *s, int *e) NN(2);
  
  _hidden yajl_gen_status libxl__yajl_gen_asciiz(yajl_gen hand, const char *str);
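
For reference, a small standalone illustration of what the annotation buys
(hypothetical function, not libxl code): with -Wnonnull, which GCC and clang
enable by default, passing a literal NULL for an annotated parameter is
diagnosed at compile time.

    #include <stddef.h>

    #define NN(...) __attribute__((nonnull(__VA_ARGS__)))

    static int from_string(const char *s, int *e) NN(1);
    static int from_string(const char *s, int *e)
    {
        *e = (*s == 'y');
        return 0;
    }

    int main(void)
    {
        int e;

        from_string("yes", &e);    /* fine */
        /* from_string(NULL, &e);     would trigger a -Wnonnull warning */
        return e;
    }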
  



--
Julien Grall



Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Wei Liu
On Mon, Oct 23, 2017 at 01:07:36PM +0100, Ian Jackson wrote:
> Wei Liu writes ("Re: [PATCH for-4.10] scripts: add a script for build 
> testing"):
> > On Mon, Oct 23, 2017 at 01:02:00PM +0100, Ian Jackson wrote:
> > > In particular, if you:
> > >  * check that the tree is not dirty
> > >  * detach HEAD
> > 
> > I think these two checks are good.
> > 
> > >  * reattach HEAD afterwards at least on success
> > 
> > This is already the case for git-rebase on success.
> 
> No.  git-rebase _rewrites_ HEAD.
> 

I see. I will steal bits from your snippet where appropriate.



Re: [Xen-devel] [PATCH v2 2/5] xen: Provide XEN_DMOP_add_to_physmap

2017-10-23 Thread Paul Durrant
> -Original Message-
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: 23 October 2017 13:18
> To: Paul Durrant 
> Cc: Andrew Cooper ; George Dunlap
> ; Ian Jackson ; Ross
> Lagerwall ; Wei Liu ;
> Stefano Stabellini ; xen-devel@lists.xen.org; Konrad
> Rzeszutek Wilk ; Tim (Xen.org) 
> Subject: RE: [Xen-devel] [PATCH v2 2/5] xen: Provide
> XEN_DMOP_add_to_physmap
> 
> >>> On 23.10.17 at 14:03,  wrote:
> >> From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of
> >> Ross Lagerwall
> >> Sent: 23 October 2017 10:05
> >> --- a/xen/include/public/hvm/dm_op.h
> >> +++ b/xen/include/public/hvm/dm_op.h
> >> @@ -368,6 +368,22 @@ struct xen_dm_op_remote_shutdown {
> >> /* (Other reason values are not blocked) */
> >>  };
> >>
> >> +/*
> >> + * XEN_DMOP_add_to_physmap : Sets the GPFNs at which a page range
> >> appears in
> >> + *   the specified guest's pseudophysical address
> >> + *   space. Identical to XENMEM_add_to_physmap 
> >> with
> >> + *   space == XENMAPSPACE_gmfn_range.
> >> + */
> >> +#define XEN_DMOP_add_to_physmap 17
> >> +
> >> +struct xen_dm_op_add_to_physmap {
> >> +uint16_t size; /* Number of GMFNs to process. */
> >> +uint16_t pad0;
> >> +uint32_t pad1;
> >
> > I think you can lose pad1 by putting idx and gpfn above size rather than
> > below (since IIRC we only need pad up to the next 4 byte boundary).
> 
> No, tail padding would then still be wanted, I think.

Ok.  I stand corrected :-)

  Paul

> 
> Jan




Re: [Xen-devel] [PATCH v2 2/5] xen: Provide XEN_DMOP_add_to_physmap

2017-10-23 Thread Ross Lagerwall

On 10/23/2017 01:03 PM, Paul Durrant wrote:
[snip]
>> +/*

+ * XEN_DMOP_add_to_physmap : Sets the GPFNs at which a page range
appears in
+ *   the specified guest's pseudophysical address
+ *   space. Identical to XENMEM_add_to_physmap with
+ *   space == XENMAPSPACE_gmfn_range.
+ */
+#define XEN_DMOP_add_to_physmap 17
+
+struct xen_dm_op_add_to_physmap {
+uint16_t size; /* Number of GMFNs to process. */
+uint16_t pad0;
+uint32_t pad1;


I think you can lose pad1 by putting idx and gpfn above size rather than below 
(since IIRC we only need pad up to the next 4 byte boundary).

Nope, the build fails unless I pad it to an 8 byte boundary. This is 
also why I added padding to struct xen_dm_op_pin_memory_cacheattr...


--
Ross Lagerwall



Re: [Xen-devel] [PATCH v2 2/5] xen: Provide XEN_DMOP_add_to_physmap

2017-10-23 Thread Jan Beulich
>>> On 23.10.17 at 14:03,  wrote:
>> From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of
>> Ross Lagerwall
>> Sent: 23 October 2017 10:05
>> --- a/xen/include/public/hvm/dm_op.h
>> +++ b/xen/include/public/hvm/dm_op.h
>> @@ -368,6 +368,22 @@ struct xen_dm_op_remote_shutdown {
>> /* (Other reason values are not blocked) */
>>  };
>> 
>> +/*
>> + * XEN_DMOP_add_to_physmap : Sets the GPFNs at which a page range
>> appears in
>> + *   the specified guest's pseudophysical address
>> + *   space. Identical to XENMEM_add_to_physmap with
>> + *   space == XENMAPSPACE_gmfn_range.
>> + */
>> +#define XEN_DMOP_add_to_physmap 17
>> +
>> +struct xen_dm_op_add_to_physmap {
>> +uint16_t size; /* Number of GMFNs to process. */
>> +uint16_t pad0;
>> +uint32_t pad1;
> 
> I think you can lose pad1 by putting idx and gpfn above size rather than 
> below (since IIRC we only need pad up to the next 4 byte boundary).

No, tail padding would then still be wanted, I think.
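
A small standalone illustration of the point (hypothetical struct, not the
actual public header): even with the two 64-bit fields placed first, the
compiler still adds tail padding so the size stays a multiple of the 8-byte
alignment, i.e. the explicit padding only moves, it does not go away.

    #include <stdint.h>

    /* stand-in for uint64_aligned_t: 8-byte alignment even on 32-bit x86 */
    typedef uint64_t __attribute__((aligned(8))) u64_aligned;

    struct reordered {          /* hypothetical reordering of the fields */
        u64_aligned idx;
        u64_aligned gpfn;
        uint16_t size;
        /* 6 bytes of tail padding are inserted here by the compiler */
    };

    _Static_assert(sizeof(struct reordered) == 24,
                   "size stays a multiple of the 8-byte alignment");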

Jan




Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Ian Jackson
Wei Liu writes ("Re: [PATCH for-4.10] scripts: add a script for build testing"):
> On Mon, Oct 23, 2017 at 02:24:40AM -0600, Jan Beulich wrote:
> > On 20.10.17 at 19:32,  wrote:
> > > +git rebase $BASE $TIP -x "$CMD"
> > 
> > Is this quoting on $CMD really going to work right no matter what
> > the variable actually expands to? I.e. don't you either want to use
> > "eval" or adjust script arguments such that you can use "$@" with
> > its special quoting rules?

Yes.  Jan is completely right.

> What sort of use cases do you have in mind that involve complex quoting and
> expansion?

There is really no excuse at all, in a script like this, for not using
`shift' to eat the main positional parameters, and then executing
"$@", faithfully reproducing the incoming parameters.

Of course there is a problem with getting this through git-rebase but
as I have just pointed out, git-rev-list and git-checkout are much
more suitable building blocks than git-rebase (which does a lot of
undesirable stuff that has to be suppressed, etc.)

Ian.



Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Jan Beulich
>>> On 23.10.17 at 13:41,  wrote:
> On Mon, Oct 23, 2017 at 02:24:40AM -0600, Jan Beulich wrote:
>> >>> On 20.10.17 at 19:32,  wrote:
>> > --- /dev/null
>> > +++ b/scripts/build-test.sh
>> > @@ -0,0 +1,40 @@
>> > +#!/bin/sh
>> > +
>> > +# WARNING: Always backup the branch by creating another reference to it if
>> > +# you're not familiar with git-rebase(1).
>> > +#
>> > +# Use `git rebase` to run command or script on every commit within the 
>> > range
>> > +# specified. If no command or script is provided, use the default one to 
>> > clean
>> > +# and build the whole tree.
>> > +#
>> > +# If something goes wrong, the script will stop at the commit that fails. 
>> >  Fix
>> > +# the failure and run `git rebase --continue`.
>> > +#
>> > +# If for any reason the tree is screwed, use `git rebase --abort` to 
>> > restore to
>> > +# original state.
>> > +
>> > +if ! test -f xen/Kconfig; then
>> > +echo "Please run this script from top-level directory"
>> 
>> Wouldn't running this in one of the top-level sub-trees also be useful?
>> E.g. why would one want a hypervisor only series not touching the
>> public interface to have the tools tree rebuilt all the time?
>> 
> 
> You can do that by supplying your custom command.

Oh, of course - silly me.

>> > +echo
>> > +
>> > +git rebase $BASE $TIP -x "$CMD"
>> 
>> Is this quoting on $CMD really going to work right no matter what
>> the variable actually expands to? I.e. don't you either want to use
>> "eval" or adjust script arguments such that you can use "$@" with
>> its special quoting rules?
> 
What sort of use cases do you have in mind that involve complex quoting and
> expansion?

A typical cross build command line of mine looks like

make -sC build/xen/$v {XEN_TARGET_ARCH,t}=x86_64 CC=gccx LD=ldx 
OBJCOPY=objcopyx NM=nmx -j32 xen

which you can see leverages the fact that make allows variable
settings on the command line. For other utilities this would require
e.g. "CC=gccx my-script", and I'm not sure whether quoting you
apply would work right (largely depends on what git does with the
argument).

Jan




Re: [Xen-devel] [PATCH v2 4/5] tools: libxendevicemodel: Provide xendevicemodel_add_to_physmap

2017-10-23 Thread Paul Durrant
> -Original Message-
> From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of
> Ross Lagerwall
> Sent: 23 October 2017 10:05
> To: xen-devel@lists.xen.org
> Cc: Ross Lagerwall ; Ian Jackson
> ; Wei Liu 
> Subject: [Xen-devel] [PATCH v2 4/5] tools: libxendevicemodel: Provide
> xendevicemodel_add_to_physmap
> 
> Signed-off-by: Ross Lagerwall 

Reviewed-by: Paul Durrant 

> ---
> 
> Changed in v2:
> * Make it operate on a range.
> 
>  tools/libs/devicemodel/Makefile |  2 +-
>  tools/libs/devicemodel/core.c   | 21 +
>  tools/libs/devicemodel/include/xendevicemodel.h | 15 +++
>  tools/libs/devicemodel/libxendevicemodel.map|  5 +
>  4 files changed, 42 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/libs/devicemodel/Makefile
> b/tools/libs/devicemodel/Makefile
> index 342371a..5b2df7a 100644
> --- a/tools/libs/devicemodel/Makefile
> +++ b/tools/libs/devicemodel/Makefile
> @@ -2,7 +2,7 @@ XEN_ROOT = $(CURDIR)/../../..
>  include $(XEN_ROOT)/tools/Rules.mk
> 
>  MAJOR= 1
> -MINOR= 1
> +MINOR= 2
>  SHLIB_LDFLAGS += -Wl,--version-script=libxendevicemodel.map
> 
>  CFLAGS   += -Werror -Wmissing-prototypes
> diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
> index b66d4f9..07953d3 100644
> --- a/tools/libs/devicemodel/core.c
> +++ b/tools/libs/devicemodel/core.c
> @@ -564,6 +564,27 @@ int xendevicemodel_shutdown(
>  return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
>  }
> 
> +int xendevicemodel_add_to_physmap(
> +xendevicemodel_handle *dmod, domid_t domid, uint16_t size, uint64_t
> idx,
> +uint64_t gpfn)
> +{
> +struct xen_dm_op op;
> +struct xen_dm_op_add_to_physmap *data;
> +
> +memset(&op, 0, sizeof(op));
> +
> +op.op = XEN_DMOP_add_to_physmap;
> +data = &op.u.add_to_physmap;
> +
> +data->size = size;
> +data->pad0 = 0;
> +data->pad1 = 0;
> +data->idx = idx;
> +data->gpfn = gpfn;
> +
> +return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
> +}
> +
>  int xendevicemodel_restrict(xendevicemodel_handle *dmod, domid_t
> domid)
>  {
>  return osdep_xendevicemodel_restrict(dmod, domid);
> diff --git a/tools/libs/devicemodel/include/xendevicemodel.h
> b/tools/libs/devicemodel/include/xendevicemodel.h
> index dda0bc7..6967e58 100644
> --- a/tools/libs/devicemodel/include/xendevicemodel.h
> +++ b/tools/libs/devicemodel/include/xendevicemodel.h
> @@ -326,6 +326,21 @@ int xendevicemodel_shutdown(
>  xendevicemodel_handle *dmod, domid_t domid, unsigned int reason);
> 
>  /**
> + * Sets the GPFNs at which a page range appears in the domain's
> + * pseudophysical address space.
> + *
> + * @parm dmod a handle to an open devicemodel interface.
> + * @parm domid the domain id to be serviced
> + * @parm size Number of GMFNs to process
> + * @parm idx Index into GMFN space
> + * @parm gpfn Starting GPFN where the GMFNs should appear
> + * @return 0 on success, -1 on failure.
> + */
> +int xendevicemodel_add_to_physmap(
> +xendevicemodel_handle *dmod, domid_t domid, uint16_t size, uint64_t
> idx,
> +uint64_t gpfn);
> +
> +/**
>   * This function restricts the use of this handle to the specified
>   * domain.
>   *
> diff --git a/tools/libs/devicemodel/libxendevicemodel.map
> b/tools/libs/devicemodel/libxendevicemodel.map
> index cefd32b..4a19ecb 100644
> --- a/tools/libs/devicemodel/libxendevicemodel.map
> +++ b/tools/libs/devicemodel/libxendevicemodel.map
> @@ -27,3 +27,8 @@ VERS_1.1 {
>   global:
>   xendevicemodel_shutdown;
>  } VERS_1.0;
> +
> +VERS_1.2 {
> + global:
> + xendevicemodel_add_to_physmap;
> +} VERS_1.1;
> --
> 2.9.5
> 
> 
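
As a usage illustration (hypothetical caller, assuming the series goes in as
posted): a device model relocating a 2 MiB VRAM region, i.e. 512 GFNs, when
the guest reprograms a BAR might wrap the call like this.

    #include <xendevicemodel.h>

    /* Remap nr_pages GFNs so the pages currently at old_gpfn appear at
     * new_gpfn in the guest's pseudophysical address space. */
    static int relocate_region(xendevicemodel_handle *dmod, domid_t domid,
                               uint64_t old_gpfn, uint64_t new_gpfn,
                               uint16_t nr_pages)
    {
        return xendevicemodel_add_to_physmap(dmod, domid, nr_pages,
                                             old_gpfn /* idx */,
                                             new_gpfn /* gpfn */);
    }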


Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Ian Jackson
Wei Liu writes ("Re: [PATCH for-4.10] scripts: add a script for build testing"):
> On Mon, Oct 23, 2017 at 01:02:00PM +0100, Ian Jackson wrote:
> > In particular, if you:
> >  * check that the tree is not dirty
> >  * detach HEAD
> 
> I think these two checks are good.
> 
> >  * reattach HEAD afterwards at least on success
> 
> This is already the case for git-rebase on success.

No.  git-rebase _rewrites_ HEAD.

Your script should just check out the intermediate commits.  You
probably don't in fact want git-rebase.  In particular, you don't want
to risk merge conflicts.

I have a script I use for dgit testing that looks like this:

  #!/bin/bash
  #
  # run  git fetch main
  # and then this

  set -e
  set -o pipefail

  revspec=main/${STTM_TESTED-tested}..main/pretest

  echo "testing $revspec ..."

  git-rev-list $revspec | nl -ba | tac | \
  while read num rev; do
  echo >&2 ""
  echo >&2 "testing $num $rev"
  git checkout $rev
  ${0%/*}/sometest-to-tested
  done

FAOD,

Signed-off-by: Ian Jackson 

for inclusion of parts of this in the Xen build system.

Ian.



Re: [Xen-devel] [PATCH v2 3/5] xen: Provide XEN_DMOP_pin_memory_cacheattr

2017-10-23 Thread Paul Durrant
> -Original Message-
> From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of
> Ross Lagerwall
> Sent: 23 October 2017 10:05
> To: xen-devel@lists.xen.org
> Cc: Stefano Stabellini ; Wei Liu
> ; Konrad Rzeszutek Wilk ;
> George Dunlap ; Andrew Cooper
> ; Ian Jackson ; Tim
> (Xen.org) ; Ross Lagerwall ; Jan
> Beulich 
> Subject: [Xen-devel] [PATCH v2 3/5] xen: Provide
> XEN_DMOP_pin_memory_cacheattr
> 
> Provide XEN_DMOP_pin_memory_cacheattr to allow a deprivileged QEMU
> to
> pin the caching type of RAM after moving the VRAM. It is equivalent to
> XEN_DOMCTL_pin_memory_cacheattr.
> 
> Signed-off-by: Ross Lagerwall 

Reviewed-by: Paul Durrant 

> ---
> 
> Changed in v2:
> * Check pad is 0.
> 
>  xen/arch/x86/hvm/dm.c  | 18 ++
>  xen/include/public/hvm/dm_op.h | 14 ++
>  xen/include/xlat.lst   |  1 +
>  3 files changed, 33 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
> index 0027567..42d02cc 100644
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -21,6 +21,7 @@
> 
>  #include 
>  #include 
> +#include 
>  #include 
> 
>  #include 
> @@ -670,6 +671,22 @@ static int dm_op(const struct dmop_args *op_args)
>  break;
>  }
> 
> +case XEN_DMOP_pin_memory_cacheattr:
> +{
> +const struct xen_dm_op_pin_memory_cacheattr *data =
> +&op.u.pin_memory_cacheattr;
> +
> +if ( data->pad )
> +{
> +rc = -EINVAL;
> +break;
> +}
> +
> +rc = hvm_set_mem_pinned_cacheattr(d, data->start, data->end,
> +  data->type);
> +break;
> +}
> +
>  default:
>  rc = -EOPNOTSUPP;
>  break;
> @@ -700,6 +717,7 @@ CHECK_dm_op_inject_event;
>  CHECK_dm_op_inject_msi;
>  CHECK_dm_op_remote_shutdown;
>  CHECK_dm_op_add_to_physmap;
> +CHECK_dm_op_pin_memory_cacheattr;
> 
>  int compat_dm_op(domid_t domid,
>   unsigned int nr_bufs,
> diff --git a/xen/include/public/hvm/dm_op.h
> b/xen/include/public/hvm/dm_op.h
> index f685110..f9c86b8 100644
> --- a/xen/include/public/hvm/dm_op.h
> +++ b/xen/include/public/hvm/dm_op.h
> @@ -384,6 +384,19 @@ struct xen_dm_op_add_to_physmap {
>  uint64_aligned_t gpfn; /* Starting GPFN where the GMFNs should appear.
> */
>  };
> 
> +/*
> + * XEN_DMOP_pin_memory_cacheattr : Pin caching type of RAM space.
> + * Identical to XEN_DOMCTL_pin_mem_cacheattr.
> + */
> +#define XEN_DMOP_pin_memory_cacheattr 18
> +
> +struct xen_dm_op_pin_memory_cacheattr {
> +uint64_aligned_t start; /* Start gfn. */
> +uint64_aligned_t end;   /* End gfn. */
> +uint32_t type;  /* XEN_DOMCTL_MEM_CACHEATTR_* */
> +uint32_t pad;
> +};
> +
>  struct xen_dm_op {
>  uint32_t op;
>  uint32_t pad;
> @@ -406,6 +419,7 @@ struct xen_dm_op {
>  map_mem_type_to_ioreq_server;
>  struct xen_dm_op_remote_shutdown remote_shutdown;
>  struct xen_dm_op_add_to_physmap add_to_physmap;
> +struct xen_dm_op_pin_memory_cacheattr pin_memory_cacheattr;
>  } u;
>  };
> 
> diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
> index d40bac6..fffb308 100644
> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -65,6 +65,7 @@
>  ?dm_op_inject_msihvm/dm_op.h
>  ?dm_op_ioreq_server_rangehvm/dm_op.h
>  ?dm_op_modified_memory   hvm/dm_op.h
> +?dm_op_pin_memory_cacheattr  hvm/dm_op.h
>  ?dm_op_remote_shutdown   hvm/dm_op.h
>  ?dm_op_set_ioreq_server_statehvm/dm_op.h
>  ?dm_op_set_isa_irq_level hvm/dm_op.h
> --
> 2.9.5
> 
> 


Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Wei Liu
On Mon, Oct 23, 2017 at 01:02:00PM +0100, Ian Jackson wrote:
> Wei Liu writes ("Re: [PATCH for-4.10] scripts: add a script for build 
> testing"):
> > On Mon, Oct 23, 2017 at 02:24:40AM -0600, Jan Beulich wrote:
> > > What is this startup delay intended for?
> > 
> > To give user a chance to check the command -- git-rebase can be
> > destructive after all.
> 
> I can't resist this bikeshed.  This kind of thing is quite annoying.
> If your command might be destructive, why not fix it so that it's not
> destructive.
> 
> In particular, if you:
>  * check that the tree is not dirty
>  * detach HEAD

I think these two checks are good.

>  * reattach HEAD afterwards at least on success

This is already the case for git-rebase on success.



Re: [Xen-devel] [PATCH v2 2/5] xen: Provide XEN_DMOP_add_to_physmap

2017-10-23 Thread Paul Durrant
> -Original Message-
> From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of
> Ross Lagerwall
> Sent: 23 October 2017 10:05
> To: xen-devel@lists.xen.org
> Cc: Stefano Stabellini ; Wei Liu
> ; Konrad Rzeszutek Wilk ;
> George Dunlap ; Andrew Cooper
> ; Ian Jackson ; Tim
> (Xen.org) ; Ross Lagerwall ; Jan
> Beulich 
> Subject: [Xen-devel] [PATCH v2 2/5] xen: Provide
> XEN_DMOP_add_to_physmap
> 
> Provide XEN_DMOP_add_to_physmap, a limited version of
> XENMEM_add_to_physmap to allow a deprivileged QEMU to move VRAM
> when a
> guest programs its BAR. It is equivalent to XENMEM_add_to_physmap with
> space == XENMAPSPACE_gmfn_range.
> 
> Signed-off-by: Ross Lagerwall 

Reviewed-by: Paul Durrant 

...with one observation below...

> ---
> 
> Changed in v2:
> * Make it operate on a range.
> 
>  xen/arch/x86/hvm/dm.c  | 31 +++
>  xen/include/public/hvm/dm_op.h | 17 +
>  xen/include/xlat.lst   |  1 +
>  3 files changed, 49 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
> index 32ade95..0027567 100644
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -640,6 +640,36 @@ static int dm_op(const struct dmop_args *op_args)
>  break;
>  }
> 
> +case XEN_DMOP_add_to_physmap:
> +{
> +struct xen_dm_op_add_to_physmap *data =
> +&op.u.add_to_physmap;
> +struct xen_add_to_physmap xatp = {
> +.domid = op_args->domid,
> +.size = data->size,
> +.space = XENMAPSPACE_gmfn_range,
> +.idx = data->idx,
> +.gpfn = data->gpfn,
> +};
> +
> +if ( data->pad0 || data->pad1 )
> +{
> +rc = -EINVAL;
> +break;
> +}
> +
> +rc = xenmem_add_to_physmap(d, &xatp, 0);
> +if ( rc > 0 )
> +{
> +data->size -= rc;
> +data->idx += rc;
> +data->gpfn += rc;
> +const_op = false;
> +rc = -ERESTART;
> +}
> +break;
> +}
> +
>  default:
>  rc = -EOPNOTSUPP;
>  break;
> @@ -669,6 +699,7 @@ CHECK_dm_op_set_mem_type;
>  CHECK_dm_op_inject_event;
>  CHECK_dm_op_inject_msi;
>  CHECK_dm_op_remote_shutdown;
> +CHECK_dm_op_add_to_physmap;
> 
>  int compat_dm_op(domid_t domid,
>   unsigned int nr_bufs,
> diff --git a/xen/include/public/hvm/dm_op.h
> b/xen/include/public/hvm/dm_op.h
> index e173085..f685110 100644
> --- a/xen/include/public/hvm/dm_op.h
> +++ b/xen/include/public/hvm/dm_op.h
> @@ -368,6 +368,22 @@ struct xen_dm_op_remote_shutdown {
> /* (Other reason values are not blocked) */
>  };
> 
> +/*
> + * XEN_DMOP_add_to_physmap : Sets the GPFNs at which a page range
> appears in
> + *   the specified guest's pseudophysical address
> + *   space. Identical to XENMEM_add_to_physmap with
> + *   space == XENMAPSPACE_gmfn_range.
> + */
> +#define XEN_DMOP_add_to_physmap 17
> +
> +struct xen_dm_op_add_to_physmap {
> +uint16_t size; /* Number of GMFNs to process. */
> +uint16_t pad0;
> +uint32_t pad1;

I think you can lose pad1 by putting idx and gpfn above size rather than below 
(since IIRC we only need pad up to the next 4 byte boundary).

  Paul

> +uint64_aligned_t idx;  /* Index into GMFN space. */
> +uint64_aligned_t gpfn; /* Starting GPFN where the GMFNs should
> appear. */
> +};
> +
>  struct xen_dm_op {
>  uint32_t op;
>  uint32_t pad;
> @@ -389,6 +405,7 @@ struct xen_dm_op {
>  struct xen_dm_op_map_mem_type_to_ioreq_server
>  map_mem_type_to_ioreq_server;
>  struct xen_dm_op_remote_shutdown remote_shutdown;
> +struct xen_dm_op_add_to_physmap add_to_physmap;
>  } u;
>  };
> 
> diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
> index 4346cbe..d40bac6 100644
> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -57,6 +57,7 @@
>  ?grant_entry_v2  grant_table.h
>  ?gnttab_swap_grant_ref   grant_table.h
>  !dm_op_buf   hvm/dm_op.h
> +?dm_op_add_to_physmaphvm/dm_op.h
>  ?dm_op_create_ioreq_server   hvm/dm_op.h
>  ?dm_op_destroy_ioreq_server  hvm/dm_op.h
>  ?dm_op_get_ioreq_server_info hvm/dm_op.h
> --
> 2.9.5
> 
> 


Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Ian Jackson
Wei Liu writes ("Re: [PATCH for-4.10] scripts: add a script for build testing"):
> On Mon, Oct 23, 2017 at 02:24:40AM -0600, Jan Beulich wrote:
> > What is this startup delay intended for?
> 
> To give user a chance to check the command -- git-rebase can be
> destructive after all.

I can't resist this bikeshed.  This kind of thing is quite annoying.
If your command might be destructive, why not fix it so that it's not
destructive.

In particular, if you:
 * check that the tree is not dirty
 * detach HEAD
 * reattach HEAD afterwards at least on success
then the risk of lossage is low and you can safely just go ahead.

Ian.



[Xen-devel] [qemu-mainline test] 115129: regressions - FAIL

2017-10-23 Thread osstest service owner
flight 115129 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115129/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-xsm            6 xen-build    fail REGR. vs. 114507
 build-i386                6 xen-build    fail REGR. vs. 114507
 build-amd64-xsm           6 xen-build    fail REGR. vs. 114507
 build-amd64               6 xen-build    fail REGR. vs. 114507
 build-armhf-xsm           6 xen-build    fail REGR. vs. 114507
 build-armhf               6 xen-build    fail REGR. vs. 114507

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win10-i386  1 build-check(1)  blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 

Re: [Xen-devel] [PATCH v2 1/5] xen/mm: Make xenmem_add_to_physmap global

2017-10-23 Thread Paul Durrant
> -Original Message-
> From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of
> Ross Lagerwall
> Sent: 23 October 2017 10:05
> To: xen-devel@lists.xen.org
> Cc: Stefano Stabellini ; Wei Liu
> ; Konrad Rzeszutek Wilk ;
> George Dunlap ; Andrew Cooper
> ; Ian Jackson ; Tim
> (Xen.org) ; Ross Lagerwall ; Jan
> Beulich 
> Subject: [Xen-devel] [PATCH v2 1/5] xen/mm: Make
> xenmem_add_to_physmap global
> 
> Make it global in preparation to be called by a new dmop.
> 
> Signed-off-by: Ross Lagerwall 
> 
> ---

You need to delete the above '---' otherwise this R-b will not get carried 
through into the commit.

  Paul

> Reviewed-by: Paul Durrant 
> ---
>  xen/common/memory.c  | 5 ++---
>  xen/include/xen/mm.h | 3 +++
>  2 files changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index ad987e0..c4f05c7 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -741,9 +741,8 @@ static long
> memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange
> _t) arg)
>  return rc;
>  }
> 
> -static int xenmem_add_to_physmap(struct domain *d,
> - struct xen_add_to_physmap *xatp,
> - unsigned int start)
> +int xenmem_add_to_physmap(struct domain *d, struct
> xen_add_to_physmap *xatp,
> +  unsigned int start)
>  {
>  unsigned int done = 0;
>  long rc = 0;
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index e813c07..0e0e511 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -579,6 +579,9 @@ int xenmem_add_to_physmap_one(struct domain
> *d, unsigned int space,
>union xen_add_to_physmap_batch_extra extra,
>unsigned long idx, gfn_t gfn);
> 
> +int xenmem_add_to_physmap(struct domain *d, struct
> xen_add_to_physmap *xatp,
> +  unsigned int start);
> +
>  /* Return 0 on success, or negative on error. */
>  int __must_check guest_remove_page(struct domain *d, unsigned long
> gmfn);
>  int __must_check steal_page(struct domain *d, struct page_info *page,
> --
> 2.9.5
> 
> 


Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Wei Liu
On Mon, Oct 23, 2017 at 12:30:33PM +0100, Anthony PERARD wrote:
> On Fri, Oct 20, 2017 at 06:32:55PM +0100, Wei Liu wrote:
> > +CMD=${3:-git clean -fdx && ./configure && make -j4}
> > +
> > +echo "Running command \"$CMD\" on every commit from $BASE to $TIP"
> > +echo -n "Starting in "
> > +
> > +for i in `seq 5 -1 1`; do
> > +echo -n "$i ... "
> > +sleep 1
> > +done
> > +
> 
> Instead of the count down, I would do:
> echo -n 'Continue ? (^C to quit) '
> read
> 
> 
> OR something like:
> echo -n 'Continue ? [Yn] '
> read answer
> [[ "$answer" =~ ^(|Y|y|yes)$ ]] || exit
> 

No objection from me. The latter is better.



[Xen-devel] [linux-4.9 test] 115110: regressions - FAIL

2017-10-23 Thread osstest service owner
flight 115110 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115110/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 114814

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 16 guest-localmigrate/x10 fail 
pass in 115052
 test-amd64-i386-xl-qemuu-ws16-amd64 13 guest-saverestore   fail pass in 115052

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stopfail in 115052 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 114814
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 114814
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass

version targeted for testing:
 linux                4d4a6a3f8a12602ce8dc800123715fe7b5c1c3a1
baseline version:
 linux                5d7a76acad403638f635c918cc63d1d44ffa4065

Last test of basis   114814  2017-10-20 20:51:56 Z2 days
Testing same since   114845  2017-10-21 16:14:17 Z1 days4 attempts


People who touched revisions under test:
  Alex Deucher 
  Alexandre Belloni 
  Andrew Morton 
  Anoob Soman 
  Arnd Bergmann 
  Bart Van Assche 
  Ben Skeggs 
  Bin Liu 
  Borislav Petkov 
  Christoph Lameter 
  Christophe JAILLET 
  Coly Li 
  Dan Carpenter 
  David Rientjes 
  David S. Miller 
  Dennis Dalessandro 

Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Wei Liu
On Mon, Oct 23, 2017 at 02:24:40AM -0600, Jan Beulich wrote:
> >>> On 20.10.17 at 19:32,  wrote:
> > --- /dev/null
> > +++ b/scripts/build-test.sh
> > @@ -0,0 +1,40 @@
> > +#!/bin/sh
> > +
> > +# WARNING: Always backup the branch by creating another reference to it if
> > +# you're not familiar with git-rebase(1).
> > +#
> > +# Use `git rebase` to run command or script on every commit within the 
> > range
> > +# specified. If no command or script is provided, use the default one to 
> > clean
> > +# and build the whole tree.
> > +#
> > +# If something goes wrong, the script will stop at the commit that fails.  
> > Fix
> > +# the failure and run `git rebase --continue`.
> > +#
> > +# If for any reason the tree is screwed, use `git rebase --abort` to 
> > restore to
> > +# original state.
> > +
> > +if ! test -f xen/Kconfig; then
> > +echo "Please run this script from top-level directory"
> 
> Wouldn't running this in one of the top-level sub-trees also be useful?
> E.g. why would one want a hypervisor only series not touching the
> public interface to have the tools tree rebuilt all the time?
> 

You can do that by supplying your custom command.

The script really aims to be an easy-to-use thing to point contributors
to, hence the checks, warnings and restrictions, while at the same time
allowing some flexibility.

For example, if you want to build hypervisor only:

$ ./scripts/build-test.sh $BASE $TIP "make -C xen clean && make -C xen"

> > +exit 1
> > +fi
> > +
> > +if test $# -lt 2 ; then
> > +echo "Usage: $0   [CMD|SCRIPT]"
> 
> Perhaps
> 
> echo "Usage: $0   

Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Anthony PERARD
On Mon, Oct 23, 2017 at 12:30:33PM +0100, Anthony PERARD wrote:
> On Fri, Oct 20, 2017 at 06:32:55PM +0100, Wei Liu wrote:
> > +CMD=${3:-git clean -fdx && ./configure && make -j4}
> > +
> > +echo "Running command \"$CMD\" on every commit from $BASE to $TIP"
> > +echo -n "Starting in "
> > +
> > +for i in `seq 5 -1 1`; do
> > +echo -n "$i ... "
> > +sleep 1
> > +done
> > +
> 
> Instead of the count down, I would do:
> echo -n 'Continue ? (^C to quit) '
> read
> 
> 
> OR something like:
> echo -n 'Continue ? [Yn] '
> read answer
> [[ "$answer" =~ ^(|Y|y|yes)$ ]] || exit
> 
> 
> I don't like to wait.

And I don't like the pressure to have to decide if I want to ^C within a
limited time. I would probably kill the script if I did not know what
it was going to do as soon as I see the count down.

-- 
Anthony PERARD



Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Anthony PERARD
On Fri, Oct 20, 2017 at 06:32:55PM +0100, Wei Liu wrote:
> +CMD=${3:-git clean -fdx && ./configure && make -j4}
> +
> +echo "Running command \"$CMD\" on every commit from $BASE to $TIP"
> +echo -n "Starting in "
> +
> +for i in `seq 5 -1 1`; do
> +echo -n "$i ... "
> +sleep 1
> +done
> +

Instead of the count down, I would do:
echo -n 'Continue ? (^C to quit) '
read


OR something like:
echo -n 'Continue ? [Yn] '
read answer
[[ "$answer" =~ ^(|Y|y|yes)$ ]] || exit


I don't like to wait.

-- 
Anthony PERARD



Re: [Xen-devel] [PATCH v2 4/5] tools: libxendevicemodel: Provide xendevicemodel_add_to_physmap

2017-10-23 Thread Ian Jackson
Ross Lagerwall writes ("[PATCH v2 4/5] tools: libxendevicemodel: Provide 
xendevicemodel_add_to_physmap"):
> Signed-off-by: Ross Lagerwall 

Assuming the hypervisor parts go in:

Acked-by: Ian Jackson 



[Xen-devel] [PATCH v5.1 8/8] configure: do_compiler: Dump some extra info under bash [and 1 more messages]

2017-10-23 Thread Ian Jackson
Ian Jackson writes ("[PATCH v5.1 8/8] configure: do_compiler: Dump some extra 
info under bash"):
> This makes it much easier to find a particular thing in config.log.
> 
> The information may be lacking in other shells, resulting in harmless
> empty output.  (This is why we don't use the proper ${FUNCNAME[*]}
> array syntax - other shells will choke on that.)
> 
> The extra output is only printed if configure is run with bash.  The
> something), it is necessary to say   bash ./configure  to get the extra
> debug info in the log.

Kent Spillner points out that this last sentence is garbled.  The
paragraph should read:

  The extra output is only printed if configure is run with bash.  On
  systems where /bin/sh is not bash, it is necessary to say bash
  ./configure to get the extra debug info in the log.

I have updated it in my branch.

Ian.



Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Julien Grall

Hi Wei,

On 20/10/17 18:32, Wei Liu wrote:

Signed-off-by: Wei Liu 
---
Cc: Andrew Cooper 
Cc: George Dunlap 
Cc: Ian Jackson 
Cc: Jan Beulich 
Cc: Konrad Rzeszutek Wilk 
Cc: Stefano Stabellini 
Cc: Tim Deegan 
Cc: Wei Liu 
Cc: Julien Grall 

The risk for this is zero, hence the for-4.10 tag.


Agree.


---
  scripts/build-test.sh | 40 
  1 file changed, 40 insertions(+)
  create mode 100755 scripts/build-test.sh

diff --git a/scripts/build-test.sh b/scripts/build-test.sh
new file mode 100755
index 00..a08468e83b
--- /dev/null
+++ b/scripts/build-test.sh
@@ -0,0 +1,40 @@
+#!/bin/sh
+
+# WARNING: Always backup the branch by creating another reference to it if
+# you're not familiar with git-rebase(1).
+#
+# Use `git rebase` to run command or script on every commit within the range
+# specified. If no command or script is provided, use the default one to clean
+# and build the whole tree.
+#
+# If something goes wrong, the script will stop at the commit that fails.  Fix
+# the failure and run `git rebase --continue`.
+#
+# If for any reason the tree is screwed, use `git rebase --abort` to restore to
+# original state.
+
+if ! test -f xen/Kconfig; then
+echo "Please run this script from top-level directory"
+exit 1
+fi
+
+if test $# -lt 2 ; then
+echo "Usage: $0   [CMD|SCRIPT]"
+exit 1
+fi
+
+BASE=$1
+TIP=$2
+CMD=${3:-git clean -fdx && ./configure && make -j4}


Can you document somewhere that cross-compilation is not supported?


+
+echo "Running command \"$CMD\" on every commit from $BASE to $TIP"
+echo -n "Starting in "
+
+for i in `seq 5 -1 1`; do
+echo -n "$i ... "
+sleep 1
+done
+
+echo
+
+git rebase $BASE $TIP -x "$CMD"



Cheers,

--
Julien Grall



Re: [Xen-devel] [PATCH for-4.10] docs: update coverage.markdown

2017-10-23 Thread Julien Grall



On 20/10/17 18:08, Roger Pau Monné wrote:

On Fri, Oct 20, 2017 at 05:30:41PM +0100, Wei Liu wrote:

The coverage support in hypervisor is redone. Update the document.

Signed-off-by: Wei Liu 


Adding Julien, although I'm not sure if doc changes also need a
release-ack.

Reviewed-by: Roger Pau Monné 


I would forgo it for documentation. Such patches can only make the release
better :).


Cheers,



Thanks!




--
Julien Grall



Re: [Xen-devel] [PATCH v2] libxc: remove stale error check for domain size in xc_sr_save_x86_hvm.c

2017-10-23 Thread Juergen Gross
On 06/10/17 15:30, Julien Grall wrote:
> Hi,
> 
> On 27/09/17 15:36, Wei Liu wrote:
>> On Tue, Sep 26, 2017 at 02:02:56PM +0200, Juergen Gross wrote:
>>> Long ago domains to be saved were limited to 1TB size due to the
>>> migration stream v1 limitations which used a 32 bit value for the
>>> PFN and the frame type (4 bits) leaving only 28 bits for the PFN.
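
(For the arithmetic behind the old limit, a quick standalone check, not code
from the series: 2^28 frames of 4 KiB each is exactly 1 TiB.)

    #include <stdint.h>

    #define V1_PFN_BITS 28            /* 32-bit field minus 4 type bits */
    #define FRAME_SIZE  4096ULL       /* 4 KiB */

    _Static_assert((1ULL << V1_PFN_BITS) * FRAME_SIZE == (1ULL << 40),
                   "2^28 frames of 4 KiB == 1 TiB");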
>>>
>>> Migration stream V2 uses a 64 bit value for this purpose, so there
>>> is no need to refuse saving (or migrating) domains larger than 1 TB.
>>>
>>> For 32 bit toolstacks there is still a size limit, as domains larger
>>> than about 1TB will lead to an exhausted virtual address space of the
>>> saving process. So keep the test for 32 bit, but don't base it on the
>>> page type macros. As a migration could lead to the situation where a
>>> 32 bit toolstack would have to handle such a large domain (in case the
>>> sending side is 64 bit) the same test should be added for restoring a
>>> domain.
>>>
>>> Signed-off-by: Juergen Gross 
>>
>> I will leave this to Andrew.
>>
>> I don't really have an opinion here.
> 
> 
> I will wait Andrew feedback before giving a release ack on this patch.

Andrew?


Juergen



Re: [Xen-devel] [PATCH v3 2/7] xsm: flask: change the dummy xsm policy and flask hook for map_gmfn_foregin

2017-10-23 Thread Zhongze Liu
Hi Jan,

2017-10-23 15:26 GMT+08:00 Jan Beulich :
 On 22.10.17 at 13:21,  wrote:
>> How about changing the policy to (c over d) && ((d over t) || (c over t))?
>> Given that (c over d) is a must, which is always checked somewhere higher
>> in the call stack as Daniel pointed out, permitting (d over t) or (c
>> over t) actually implies
>> permitting the other.
>>
>> - if you permit (d over t) but not (c over t):
>>   Given (c over t),
>>   (c) can first map the src page from (t) into its own memory space and then 
>> map
>>   this page from its own memory space to (d)'s memory space.
>
> Would that work? The page, when in (c)'s space, is still owned by (t),
> so I don't see how mapping into (d)'s space could become possible
> just because it's mapped into (c)'s.

Yes, indeed. This won't work. Sorry for giving a wrong example here.

I think I now agree to add a new subop, too.

Cheers,

Zhongze Liu

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH V3 28/29] x86/vvtd: Add queued invalidation (QI) support

2017-10-23 Thread Chao Gao
On Mon, Oct 23, 2017 at 09:57:16AM +0100, Roger Pau Monné wrote:
>On Mon, Oct 23, 2017 at 03:50:24PM +0800, Chao Gao wrote:
>> On Fri, Oct 20, 2017 at 12:20:06PM +0100, Roger Pau Monné wrote:
>> >On Thu, Sep 21, 2017 at 11:02:09PM -0400, Lan Tianyu wrote:
>> >> From: Chao Gao 
>> >> +}
>> >> +
>> >> +unmap_guest_page((void*)qinval_page);
>> >> +return ret;
>> >> +
>> >> + error:
>> >> +unmap_guest_page((void*)qinval_page);
>> >> +gdprintk(XENLOG_ERR, "Internal error in Queue Invalidation.\n");
>> >> +domain_crash(vvtd->domain);
>> >
>> >Do you really need to crash the domain in such case?
>> 
>> We reach here when the guest requests operations that vvtd doesn't claim
>> to support or emulate. I am afraid it can also be triggered by the guest.
>> How about ignoring the invalidation request?
>
>What would real hardware do in such case?

After reading the spec again, I think hardware may generate a fault
event; see VT-d spec 10.4.9, Fault Status Register:
Hardware detected an error associated with the invalidation queue. This
could be due to either a hardware error while fetching a descriptor from
the invalidation queue, or hardware detecting an erroneous or invalid
descriptor in the invalidation queue. At this time, a fault event may be
generated based on the programming of the Fault Event Control register

Thanks
Chao

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v1] x86/vvmx: don't enable vmcs shadowing for nested guests

2017-10-23 Thread Sergey Dyasli
Running "./xtf_runner vvmx" in L1 Xen under L0 Xen produces the
following result on H/W with VMCS shadowing:

Test: vmxon
Failure in test_vmxon_in_root_cpl0()
  Expected 0x8200000f: VMfailValid(15) VMXON_IN_ROOT
   Got 0x82004400: VMfailValid(17408)
Test result: FAILURE

This happens because the SDM allows VM entries with the "VMCS shadowing"
VM-execution control enabled and a VMCS link pointer value of ~0ull. But the
results of a nested VMREAD are undefined in such cases.

Fix this by not copying the value of VMCS shadowing control from vmcs01
to vmcs02.

Signed-off-by: Sergey Dyasli 
---
 xen/arch/x86/hvm/vmx/vvmx.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index dde02c076b..013d049f8a 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -633,6 +633,7 @@ void nvmx_update_secondary_exec_control(struct vcpu *v,
 SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;
 
 host_cntrl &= ~apicv_bit;
+host_cntrl &= ~SECONDARY_EXEC_ENABLE_VMCS_SHADOWING;
 shadow_cntrl = get_vvmcs(v, SECONDARY_VM_EXEC_CONTROL);
 
 /* No vAPIC-v support, so it shouldn't be set in vmcs12. */
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Xen 4.9 is broken with last version of Win10

2017-10-23 Thread Paul Durrant
De-htmling...
Moving to xen-users (xen-devel to bcc)...

-
From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of Berillions
Sent: 21 October 2017 17:50
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Xen 4.9 is broken with last version of Win10

Hi guys,
I am sending you this message to warn you that the latest official version of
Windows 10 is broken with Xen. This version, called "Fall Creator Update", was
released a few days ago.
I did my tests with this version (1709) and the old version (1703), called
"Creator Update", using SeaBIOS.
Windows 10 version 1709:
SeaBIOS: able to boot from the CD/DVD-ROM, but when you have to choose the disk
on which to install the system, Windows says the drivers are obsolete and
cannot find your disk.
http://hpics.li/0082aa8
My attempt to translate the French message:
Load a driver
Your computer needs a media driver which is missing. It can be a DVD,
USB or hard disk driver. If you have a CD or a USB key with the driver,
insert it now.

Windows 10 version 1703:
SeaBIOS: everything works correctly.
http://hpics.li/0b9aaaf
This problem affects QEMU/KVM too; see here:
http://lists.nongnu.org/archive/html/qemu-discuss/2017-10/msg00044.html

Cheers,
Maxime
-

Hi,

  I just downloaded a copy of 1709 and I don't see any particular problem. What 
does your xl.cfg look like? I guess the problem is your choice of system disk 
emulation, which is why you see the same issue with KVM.

  Paul
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] Ping: [PATCH RFC v2] x86/domctl: Don't pause the whole domain if only getting vcpu state

2017-10-23 Thread Alexandru Stefan ISAILA
Any thoughts appreciated.

On Vi, 2017-10-06 at 13:02 +0300, Alexandru Isaila wrote:
> This patch adds the hvm_save_one_cpu_ctxt() function.
> It optimizes by only pausing the vcpu on all HVMSR_PER_VCPU save
> callbacks where only data for one VCPU is required.
>
> Signed-off-by: Alexandru Isaila 
>
> ---
> Changes since V1:
> - Integrated the vcpu check into all the save callbacks
> ---
>  tools/tests/vhpet/emul.h   |   3 +-
>  tools/tests/vhpet/main.c   |   2 +-
>  xen/arch/x86/cpu/mcheck/vmce.c |  16 ++-
>  xen/arch/x86/domctl.c  |   2 -
>  xen/arch/x86/hvm/hpet.c|   2 +-
>  xen/arch/x86/hvm/hvm.c | 280 ++-
> --
>  xen/arch/x86/hvm/i8254.c   |   2 +-
>  xen/arch/x86/hvm/irq.c |   6 +-
>  xen/arch/x86/hvm/mtrr.c|  32 -
>  xen/arch/x86/hvm/pmtimer.c |   2 +-
>  xen/arch/x86/hvm/rtc.c |   2 +-
>  xen/arch/x86/hvm/save.c|  71 ---
>  xen/arch/x86/hvm/vioapic.c |   2 +-
>  xen/arch/x86/hvm/viridian.c|  17 ++-
>  xen/arch/x86/hvm/vlapic.c  |  23 +++-
>  xen/arch/x86/hvm/vpic.c|   2 +-
>  xen/include/asm-x86/hvm/hvm.h  |   2 +
>  xen/include/asm-x86/hvm/save.h |   5 +-
>  18 files changed, 324 insertions(+), 147 deletions(-)
>
> diff --git a/tools/tests/vhpet/emul.h b/tools/tests/vhpet/emul.h
> index 383acff..99d5bbd 100644
> --- a/tools/tests/vhpet/emul.h
> +++ b/tools/tests/vhpet/emul.h
> @@ -296,7 +296,8 @@ struct hvm_hw_hpet
>  };
>
>  typedef int (*hvm_save_handler)(struct domain *d,
> -hvm_domain_context_t *h);
> +hvm_domain_context_t *h,
> +unsigned int instance);
>  typedef int (*hvm_load_handler)(struct domain *d,
>  hvm_domain_context_t *h);
>
> diff --git a/tools/tests/vhpet/main.c b/tools/tests/vhpet/main.c
> index 6fe65ea..3d8e7f5 100644
> --- a/tools/tests/vhpet/main.c
> +++ b/tools/tests/vhpet/main.c
> @@ -177,7 +177,7 @@ void __init hvm_register_savevm(uint16_t
> typecode,
>
>  int do_save(uint16_t typecode, struct domain *d,
> hvm_domain_context_t *h)
>  {
> -return hvm_sr_handlers[typecode].save(d, h);
> +return hvm_sr_handlers[typecode].save(d, h, d->max_vcpus);
>  }
>
>  int do_load(uint16_t typecode, struct domain *d,
> hvm_domain_context_t *h)
> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c
> b/xen/arch/x86/cpu/mcheck/vmce.c
> index e07cd2f..a1a12a5 100644
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -349,12 +349,24 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
>  return ret;
>  }
>
> -static int vmce_save_vcpu_ctxt(struct domain *d,
> hvm_domain_context_t *h)
> +static int vmce_save_vcpu_ctxt(struct domain *d,
> hvm_domain_context_t *h, unsigned int instance)
>  {
>  struct vcpu *v;
>  int err = 0;
>
> -for_each_vcpu ( d, v )
> +if( instance < d->max_vcpus )
> +{
> +struct hvm_vmce_vcpu ctxt;
> +
> +v = d->vcpu[instance];
> +ctxt.caps = v->arch.vmce.mcg_cap;
> +ctxt.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
> +ctxt.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
> +ctxt.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
> +
> +err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
> +}
> +else for_each_vcpu ( d, v )
>  {
>  struct hvm_vmce_vcpu ctxt = {
>  .caps = v->arch.vmce.mcg_cap,
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index 540ba08..d3c4e14 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -624,12 +624,10 @@ long arch_do_domctl(
>   !is_hvm_domain(d) )
>  break;
>
> -domain_pause(d);
>  ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
> domctl->u.hvmcontext_partial.instance,
> domctl->u.hvmcontext_partial.buffer,
> &domctl->u.hvmcontext_partial.bufsz);
> -domain_unpause(d);
>
>  if ( !ret )
>  copyback = true;
> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
> index 3ea895a..56f4691 100644
> --- a/xen/arch/x86/hvm/hpet.c
> +++ b/xen/arch/x86/hvm/hpet.c
> @@ -509,7 +509,7 @@ static const struct hvm_mmio_ops hpet_mmio_ops =
> {
>  };
>
>
> -static int hpet_save(struct domain *d, hvm_domain_context_t *h)
> +static int hpet_save(struct domain *d, hvm_domain_context_t *h,
> unsigned int instance)
>  {
>  HPETState *hp = domain_vhpet(d);
>  struct vcpu *v = pt_global_vcpu_target(d);
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 205b4cb..140f2c3 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -728,13 +728,19 @@ void hvm_domain_destroy(struct domain *d)
>  }
>  }
>
> -static int hvm_save_tsc_adjust(struct domain *d,
> hvm_domain_context_t *h)
> +static int hvm_save_tsc_adjust(struct 

[Xen-devel] [distros-debian-sid test] 72341: tolerable trouble: blocked/broken/fail/pass

2017-10-23 Thread Platform Team regression test user
flight 72341 distros-debian-sid real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72341/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-armhf-sid-netboot-pygrub  1 build-check(1)blocked n/a
 build-arm64-pvops 2 hosts-allocate   broken like 72240
 build-arm64   2 hosts-allocate   broken like 72240
 build-arm64-pvops 3 capture-logs broken like 72240
 build-arm64   3 capture-logs broken like 72240
 test-amd64-i386-i386-sid-netboot-pvgrub 10 debian-di-install   fail like 72240
 test-armhf-armhf-armhf-sid-netboot-pygrub 10 debian-di-install fail like 72240
 test-amd64-i386-amd64-sid-netboot-pygrub 10 debian-di-install  fail like 72240
 test-amd64-amd64-amd64-sid-netboot-pvgrub 10 debian-di-install fail like 72240
 test-amd64-amd64-i386-sid-netboot-pygrub 10 debian-di-install  fail like 72240

baseline version:
 flight   72240

jobs:
 build-amd64  pass
 build-arm64  broken  
 build-armhf  pass
 build-i386   pass
 build-amd64-pvopspass
 build-arm64-pvopsbroken  
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-amd64-sid-netboot-pvgrubfail
 test-amd64-i386-i386-sid-netboot-pvgrub  fail
 test-amd64-i386-amd64-sid-netboot-pygrub fail
 test-arm64-arm64-armhf-sid-netboot-pygrubblocked 
 test-armhf-armhf-armhf-sid-netboot-pygrubfail
 test-amd64-amd64-i386-sid-netboot-pygrub fail



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [xen-unstable test] 115087: regressions - FAIL

2017-10-23 Thread osstest service owner
flight 115087 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115087/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stopfail REGR. vs. 114644
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 114644
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop   fail REGR. vs. 114644

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 114644
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 114644
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 114644
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 114644
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 114644
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 114644
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass

version targeted for testing:
 xen  8e77dabc58c4b6c747dfb4b948551147905a7840
baseline version:
 xen  24fb44e971a62b345c7b6ca3c03b454a1e150abe

Last test of basis   114644  2017-10-17 10:49:11 Z5 days
Failing since114670  2017-10-18 05:03:38 Z5 days7 attempts
Testing same since   114808  2017-10-20 14:56:19 Z2 days5 attempts


People who touched revisions under test:
  Andrew Cooper 
  Anthony PERARD 
  David Esler 
  George Dunlap 
  Ian Jackson 
  Jan Beulich 
  Julien Grall 
  Roger Pau Monné 
  Stefano Stabellini 
  Tim Deegan 
  Wei Liu 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  

[Xen-devel] [PATCH v2 0/5] Add dmops to allow use of VGA with restricted QEMU

2017-10-23 Thread Ross Lagerwall
The recently added support for restricting QEMU prevents use of the VGA
console. This series addresses that by adding a couple of new dmops.
A corresponding patch for QEMU is needed to make use of the new dmops.

Changes in v2:
* Address Paul's comments - mainly making add_to_physmap operate on a
  range.

Ross Lagerwall (5):
  xen/mm: Make xenmem_add_to_physmap global
  xen: Provide XEN_DMOP_add_to_physmap
  xen: Provide XEN_DMOP_pin_memory_cacheattr
  tools: libxendevicemodel: Provide xendevicemodel_add_to_physmap
  tools: libxendevicemodel: Provide xendevicemodel_pin_memory_cacheattr

 tools/libs/devicemodel/Makefile |  2 +-
 tools/libs/devicemodel/core.c   | 40 
 tools/libs/devicemodel/include/xendevicemodel.h | 29 +++
 tools/libs/devicemodel/libxendevicemodel.map|  6 +++
 xen/arch/x86/hvm/dm.c   | 49 +
 xen/common/memory.c |  5 +--
 xen/include/public/hvm/dm_op.h  | 31 
 xen/include/xen/mm.h|  3 ++
 xen/include/xlat.lst|  2 +
 9 files changed, 163 insertions(+), 4 deletions(-)
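
For illustration, a rough sketch of how a deprivileged device model might use
the two new libxendevicemodel calls together when the guest reprograms its VGA
BAR. The frame numbers, page count and the choice of write-combining below are
made-up placeholders, the XEN_DOMCTL_MEM_CACHEATTR_* constant is assumed to be
available from the Xen public headers, and error handling is minimal:

#include <xendevicemodel.h>

/* Placeholder values; a real device model derives these from the BAR
 * programming and its VRAM allocation. */
#define VRAM_PAGES    256         /* number of GMFNs to move */
#define VRAM_OLD_GFN  0xff000UL   /* where the VRAM currently lives */
#define VRAM_NEW_GFN  0xf0000UL   /* GPFN the guest programmed into the BAR */

static int relocate_vram(xendevicemodel_handle *dmod, domid_t domid)
{
    int rc;

    /* Move the VRAM pages to their new guest-physical location. */
    rc = xendevicemodel_add_to_physmap(dmod, domid, VRAM_PAGES,
                                       VRAM_OLD_GFN, VRAM_NEW_GFN);
    if ( rc < 0 )
        return rc;

    /* Pin the relocated range as write-combining (WC is only an example). */
    return xendevicemodel_pin_memory_cacheattr(dmod, domid, VRAM_NEW_GFN,
                                               VRAM_NEW_GFN + VRAM_PAGES - 1,
                                               XEN_DOMCTL_MEM_CACHEATTR_WC);
}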

-- 
2.9.5


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v2 1/5] xen/mm: Make xenmem_add_to_physmap global

2017-10-23 Thread Ross Lagerwall
Make it global in preparation to be called by a new dmop.

Signed-off-by: Ross Lagerwall 

---
Reviewed-by: Paul Durrant 
---
 xen/common/memory.c  | 5 ++---
 xen/include/xen/mm.h | 3 +++
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index ad987e0..c4f05c7 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -741,9 +741,8 @@ static long 
memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
 return rc;
 }
 
-static int xenmem_add_to_physmap(struct domain *d,
- struct xen_add_to_physmap *xatp,
- unsigned int start)
+int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
+  unsigned int start)
 {
 unsigned int done = 0;
 long rc = 0;
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index e813c07..0e0e511 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -579,6 +579,9 @@ int xenmem_add_to_physmap_one(struct domain *d, unsigned 
int space,
   union xen_add_to_physmap_batch_extra extra,
   unsigned long idx, gfn_t gfn);
 
+int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
+  unsigned int start);
+
 /* Return 0 on success, or negative on error. */
 int __must_check guest_remove_page(struct domain *d, unsigned long gmfn);
 int __must_check steal_page(struct domain *d, struct page_info *page,
-- 
2.9.5


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v2 5/5] tools: libxendevicemodel: Provide xendevicemodel_pin_memory_cacheattr

2017-10-23 Thread Ross Lagerwall
Signed-off-by: Ross Lagerwall 

---
Acked-by: Ian Jackson 
Reviewed-by: Paul Durrant 
---
 tools/libs/devicemodel/core.c   | 19 +++
 tools/libs/devicemodel/include/xendevicemodel.h | 14 ++
 tools/libs/devicemodel/libxendevicemodel.map|  1 +
 3 files changed, 34 insertions(+)

diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
index 07953d3..e496fc9 100644
--- a/tools/libs/devicemodel/core.c
+++ b/tools/libs/devicemodel/core.c
@@ -585,6 +585,25 @@ int xendevicemodel_add_to_physmap(
 return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
 }
 
+int xendevicemodel_pin_memory_cacheattr(
+xendevicemodel_handle *dmod, domid_t domid, uint64_t start, uint64_t end,
+uint32_t type)
+{
+struct xen_dm_op op;
+struct xen_dm_op_pin_memory_cacheattr *data;
+
+memset(&op, 0, sizeof(op));
+
+op.op = XEN_DMOP_pin_memory_cacheattr;
+data = &op.u.pin_memory_cacheattr;
+
+data->start = start;
+data->end = end;
+data->type = type;
+
+return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
+}
+
 int xendevicemodel_restrict(xendevicemodel_handle *dmod, domid_t domid)
 {
 return osdep_xendevicemodel_restrict(dmod, domid);
diff --git a/tools/libs/devicemodel/include/xendevicemodel.h 
b/tools/libs/devicemodel/include/xendevicemodel.h
index 6967e58..d82535b 100644
--- a/tools/libs/devicemodel/include/xendevicemodel.h
+++ b/tools/libs/devicemodel/include/xendevicemodel.h
@@ -341,6 +341,20 @@ int xendevicemodel_add_to_physmap(
 uint64_t gpfn);
 
 /**
+ * Pins caching type of RAM space.
+ *
+ * @parm dmod a handle to an open devicemodel interface.
+ * @parm domid the domain id to be serviced
+ * @parm start Start gfn
+ * @parm end End gfn
+ * @parm type XEN_DOMCTL_MEM_CACHEATTR_*
+ * @return 0 on success, -1 on failure.
+ */
+int xendevicemodel_pin_memory_cacheattr(
+xendevicemodel_handle *dmod, domid_t domid, uint64_t start, uint64_t end,
+uint32_t type);
+
+/**
  * This function restricts the use of this handle to the specified
  * domain.
  *
diff --git a/tools/libs/devicemodel/libxendevicemodel.map 
b/tools/libs/devicemodel/libxendevicemodel.map
index 4a19ecb..e820b77 100644
--- a/tools/libs/devicemodel/libxendevicemodel.map
+++ b/tools/libs/devicemodel/libxendevicemodel.map
@@ -31,4 +31,5 @@ VERS_1.1 {
 VERS_1.2 {
global:
xendevicemodel_add_to_physmap;
+   xendevicemodel_pin_memory_cacheattr;
 } VERS_1.1;
-- 
2.9.5


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v2 4/5] tools: libxendevicemodel: Provide xendevicemodel_add_to_physmap

2017-10-23 Thread Ross Lagerwall
Signed-off-by: Ross Lagerwall 
---

Changed in v2:
* Make it operate on a range.

 tools/libs/devicemodel/Makefile |  2 +-
 tools/libs/devicemodel/core.c   | 21 +
 tools/libs/devicemodel/include/xendevicemodel.h | 15 +++
 tools/libs/devicemodel/libxendevicemodel.map|  5 +
 4 files changed, 42 insertions(+), 1 deletion(-)

diff --git a/tools/libs/devicemodel/Makefile b/tools/libs/devicemodel/Makefile
index 342371a..5b2df7a 100644
--- a/tools/libs/devicemodel/Makefile
+++ b/tools/libs/devicemodel/Makefile
@@ -2,7 +2,7 @@ XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR= 1
-MINOR= 1
+MINOR= 2
 SHLIB_LDFLAGS += -Wl,--version-script=libxendevicemodel.map
 
 CFLAGS   += -Werror -Wmissing-prototypes
diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
index b66d4f9..07953d3 100644
--- a/tools/libs/devicemodel/core.c
+++ b/tools/libs/devicemodel/core.c
@@ -564,6 +564,27 @@ int xendevicemodel_shutdown(
 return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
 }
 
+int xendevicemodel_add_to_physmap(
+xendevicemodel_handle *dmod, domid_t domid, uint16_t size, uint64_t idx,
+uint64_t gpfn)
+{
+struct xen_dm_op op;
+struct xen_dm_op_add_to_physmap *data;
+
+memset(&op, 0, sizeof(op));
+
+op.op = XEN_DMOP_add_to_physmap;
+data = &op.u.add_to_physmap;
+
+data->size = size;
+data->pad0 = 0;
+data->pad1 = 0;
+data->idx = idx;
+data->gpfn = gpfn;
+
+return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
+}
+
 int xendevicemodel_restrict(xendevicemodel_handle *dmod, domid_t domid)
 {
 return osdep_xendevicemodel_restrict(dmod, domid);
diff --git a/tools/libs/devicemodel/include/xendevicemodel.h 
b/tools/libs/devicemodel/include/xendevicemodel.h
index dda0bc7..6967e58 100644
--- a/tools/libs/devicemodel/include/xendevicemodel.h
+++ b/tools/libs/devicemodel/include/xendevicemodel.h
@@ -326,6 +326,21 @@ int xendevicemodel_shutdown(
 xendevicemodel_handle *dmod, domid_t domid, unsigned int reason);
 
 /**
+ * Sets the GPFNs at which a page range appears in the domain's
+ * pseudophysical address space.
+ *
+ * @parm dmod a handle to an open devicemodel interface.
+ * @parm domid the domain id to be serviced
+ * @parm size Number of GMFNs to process
+ * @parm idx Index into GMFN space
+ * @parm gpfn Starting GPFN where the GMFNs should appear
+ * @return 0 on success, -1 on failure.
+ */
+int xendevicemodel_add_to_physmap(
+xendevicemodel_handle *dmod, domid_t domid, uint16_t size, uint64_t idx,
+uint64_t gpfn);
+
+/**
  * This function restricts the use of this handle to the specified
  * domain.
  *
diff --git a/tools/libs/devicemodel/libxendevicemodel.map 
b/tools/libs/devicemodel/libxendevicemodel.map
index cefd32b..4a19ecb 100644
--- a/tools/libs/devicemodel/libxendevicemodel.map
+++ b/tools/libs/devicemodel/libxendevicemodel.map
@@ -27,3 +27,8 @@ VERS_1.1 {
global:
xendevicemodel_shutdown;
 } VERS_1.0;
+
+VERS_1.2 {
+   global:
+   xendevicemodel_add_to_physmap;
+} VERS_1.1;
-- 
2.9.5


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v2 3/5] xen: Provide XEN_DMOP_pin_memory_cacheattr

2017-10-23 Thread Ross Lagerwall
Provide XEN_DMOP_pin_memory_cacheattr to allow a deprivileged QEMU to
pin the caching type of RAM after moving the VRAM. It is equivalent to
XEN_DOMCTL_pin_memory_cacheattr.

Signed-off-by: Ross Lagerwall 
---

Changed in v2:
* Check pad is 0.

 xen/arch/x86/hvm/dm.c  | 18 ++
 xen/include/public/hvm/dm_op.h | 14 ++
 xen/include/xlat.lst   |  1 +
 3 files changed, 33 insertions(+)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 0027567..42d02cc 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -21,6 +21,7 @@
 
 #include 
 #include 
+#include 
 #include 
 
 #include 
@@ -670,6 +671,22 @@ static int dm_op(const struct dmop_args *op_args)
 break;
 }
 
+case XEN_DMOP_pin_memory_cacheattr:
+{
+const struct xen_dm_op_pin_memory_cacheattr *data =
+&op.u.pin_memory_cacheattr;
+
+if ( data->pad )
+{
+rc = -EINVAL;
+break;
+}
+
+rc = hvm_set_mem_pinned_cacheattr(d, data->start, data->end,
+  data->type);
+break;
+}
+
 default:
 rc = -EOPNOTSUPP;
 break;
@@ -700,6 +717,7 @@ CHECK_dm_op_inject_event;
 CHECK_dm_op_inject_msi;
 CHECK_dm_op_remote_shutdown;
 CHECK_dm_op_add_to_physmap;
+CHECK_dm_op_pin_memory_cacheattr;
 
 int compat_dm_op(domid_t domid,
  unsigned int nr_bufs,
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index f685110..f9c86b8 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -384,6 +384,19 @@ struct xen_dm_op_add_to_physmap {
 uint64_aligned_t gpfn; /* Starting GPFN where the GMFNs should appear. */
 };
 
+/*
+ * XEN_DMOP_pin_memory_cacheattr : Pin caching type of RAM space.
+ * Identical to XEN_DOMCTL_pin_mem_cacheattr.
+ */
+#define XEN_DMOP_pin_memory_cacheattr 18
+
+struct xen_dm_op_pin_memory_cacheattr {
+uint64_aligned_t start; /* Start gfn. */
+uint64_aligned_t end;   /* End gfn. */
+uint32_t type;  /* XEN_DOMCTL_MEM_CACHEATTR_* */
+uint32_t pad;
+};
+
 struct xen_dm_op {
 uint32_t op;
 uint32_t pad;
@@ -406,6 +419,7 @@ struct xen_dm_op {
 map_mem_type_to_ioreq_server;
 struct xen_dm_op_remote_shutdown remote_shutdown;
 struct xen_dm_op_add_to_physmap add_to_physmap;
+struct xen_dm_op_pin_memory_cacheattr pin_memory_cacheattr;
 } u;
 };
 
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index d40bac6..fffb308 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -65,6 +65,7 @@
 ?  dm_op_inject_msihvm/dm_op.h
 ?  dm_op_ioreq_server_rangehvm/dm_op.h
 ?  dm_op_modified_memory   hvm/dm_op.h
+?  dm_op_pin_memory_cacheattr  hvm/dm_op.h
 ?  dm_op_remote_shutdown   hvm/dm_op.h
 ?  dm_op_set_ioreq_server_statehvm/dm_op.h
 ?  dm_op_set_isa_irq_level hvm/dm_op.h
-- 
2.9.5


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v2 2/5] xen: Provide XEN_DMOP_add_to_physmap

2017-10-23 Thread Ross Lagerwall
Provide XEN_DMOP_add_to_physmap, a limited version of
XENMEM_add_to_physmap to allow a deprivileged QEMU to move VRAM when a
guest programs its BAR. It is equivalent to XENMEM_add_to_physmap with
space == XENMAPSPACE_gmfn_range.

Signed-off-by: Ross Lagerwall 
---

Changed in v2:
* Make it operate on a range.

 xen/arch/x86/hvm/dm.c  | 31 +++
 xen/include/public/hvm/dm_op.h | 17 +
 xen/include/xlat.lst   |  1 +
 3 files changed, 49 insertions(+)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 32ade95..0027567 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -640,6 +640,36 @@ static int dm_op(const struct dmop_args *op_args)
 break;
 }
 
+case XEN_DMOP_add_to_physmap:
+{
+struct xen_dm_op_add_to_physmap *data =
+&op.u.add_to_physmap;
+struct xen_add_to_physmap xatp = {
+.domid = op_args->domid,
+.size = data->size,
+.space = XENMAPSPACE_gmfn_range,
+.idx = data->idx,
+.gpfn = data->gpfn,
+};
+
+if ( data->pad0 || data->pad1 )
+{
+rc = -EINVAL;
+break;
+}
+
+rc = xenmem_add_to_physmap(d, &xatp, 0);
+if ( rc > 0 )
+{
+data->size -= rc;
+data->idx += rc;
+data->gpfn += rc;
+const_op = false;
+rc = -ERESTART;
+}
+break;
+}
+
 default:
 rc = -EOPNOTSUPP;
 break;
@@ -669,6 +699,7 @@ CHECK_dm_op_set_mem_type;
 CHECK_dm_op_inject_event;
 CHECK_dm_op_inject_msi;
 CHECK_dm_op_remote_shutdown;
+CHECK_dm_op_add_to_physmap;
 
 int compat_dm_op(domid_t domid,
  unsigned int nr_bufs,
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index e173085..f685110 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -368,6 +368,22 @@ struct xen_dm_op_remote_shutdown {
/* (Other reason values are not blocked) */
 };
 
+/*
+ * XEN_DMOP_add_to_physmap : Sets the GPFNs at which a page range appears in
+ *   the specified guest's pseudophysical address
+ *   space. Identical to XENMEM_add_to_physmap with
+ *   space == XENMAPSPACE_gmfn_range.
+ */
+#define XEN_DMOP_add_to_physmap 17
+
+struct xen_dm_op_add_to_physmap {
+uint16_t size; /* Number of GMFNs to process. */
+uint16_t pad0;
+uint32_t pad1;
+uint64_aligned_t idx;  /* Index into GMFN space. */
+uint64_aligned_t gpfn; /* Starting GPFN where the GMFNs should appear. */
+};
+
 struct xen_dm_op {
 uint32_t op;
 uint32_t pad;
@@ -389,6 +405,7 @@ struct xen_dm_op {
 struct xen_dm_op_map_mem_type_to_ioreq_server
 map_mem_type_to_ioreq_server;
 struct xen_dm_op_remote_shutdown remote_shutdown;
+struct xen_dm_op_add_to_physmap add_to_physmap;
 } u;
 };
 
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index 4346cbe..d40bac6 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -57,6 +57,7 @@
 ?  grant_entry_v2  grant_table.h
 ?  gnttab_swap_grant_ref   grant_table.h
 !  dm_op_buf   hvm/dm_op.h
+?  dm_op_add_to_physmaphvm/dm_op.h
 ?  dm_op_create_ioreq_server   hvm/dm_op.h
 ?  dm_op_destroy_ioreq_server  hvm/dm_op.h
 ?  dm_op_get_ioreq_server_info hvm/dm_op.h
-- 
2.9.5


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH V3 28/29] x86/vvtd: Add queued invalidation (QI) support

2017-10-23 Thread Roger Pau Monné
On Mon, Oct 23, 2017 at 03:50:24PM +0800, Chao Gao wrote:
> On Fri, Oct 20, 2017 at 12:20:06PM +0100, Roger Pau Monné wrote:
> >On Thu, Sep 21, 2017 at 11:02:09PM -0400, Lan Tianyu wrote:
> >> From: Chao Gao 
> >> +}
> >> +
> >> +unmap_guest_page((void*)qinval_page);
> >> +return ret;
> >> +
> >> + error:
> >> +unmap_guest_page((void*)qinval_page);
> >> +gdprintk(XENLOG_ERR, "Internal error in Queue Invalidation.\n");
> >> +domain_crash(vvtd->domain);
> >
> >Do you really need to crash the domain in such case?
> 
> We reach here when the guest requests operations that vvtd doesn't claim
> to support or emulate. I am afraid it can also be triggered by the guest.
> How about ignoring the invalidation request?

What would real hardware do in such case?

Roger.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH V3 28/29] x86/vvtd: Add queued invalidation (QI) support

2017-10-23 Thread Chao Gao
On Fri, Oct 20, 2017 at 12:20:06PM +0100, Roger Pau Monné wrote:
>On Thu, Sep 21, 2017 at 11:02:09PM -0400, Lan Tianyu wrote:
>> From: Chao Gao 
>> 
>> Queued Invalidation Interface is an expanded invalidation interface with
>> extended capabilities. Hardware implementations report support for queued
>> invalidation interface through the Extended Capability Register. The queued
>> invalidation interface uses an Invalidation Queue (IQ), which is a circular
>> buffer in system memory. Software submits commands by writing Invalidation
>> Descriptors to the IQ.
>> 
>> In this patch, a new function viommu_process_iq() is used for emulating how
>> hardware handles invalidation requests through QI.
>> 
>> Signed-off-by: Chao Gao 
>> Signed-off-by: Lan Tianyu 
>> ---
>> +static int process_iqe(struct vvtd *vvtd, int i)
>
>unsigned int.
>
>> +{
>> +uint64_t iqa;
>> +struct qinval_entry *qinval_page;
>> +int ret = 0;
>> +
>> +iqa = vvtd_get_reg_quad(vvtd, DMAR_IQA_REG);
>> +qinval_page = map_guest_page(vvtd->domain, 
>> DMA_IQA_ADDR(iqa)>>PAGE_SHIFT);
>
>PFN_DOWN instead of open coding the shift. Both can be initialized
>at declaration. Also AFAICT iqa is only used once, so the local
>variable is not needed.
>
>> +if ( IS_ERR(qinval_page) )
>> +{
>> +gdprintk(XENLOG_ERR, "Can't map guest IRT (rc %ld)",
>> + PTR_ERR(qinval_page));
>> +return PTR_ERR(qinval_page);
>> +}
>> +
>> +switch ( qinval_page[i].q.inv_wait_dsc.lo.type )
>> +{
>> +case TYPE_INVAL_WAIT:
>> +if ( qinval_page[i].q.inv_wait_dsc.lo.sw )
>> +{
>> +uint32_t data = qinval_page[i].q.inv_wait_dsc.lo.sdata;
>> +uint64_t addr = (qinval_page[i].q.inv_wait_dsc.hi.saddr << 2);
>
>Unneeded parentheses.
>
>> +
>> +ret = hvm_copy_to_guest_phys(addr, &data, sizeof(data), 
>> current);
>> +if ( ret )
>> +vvtd_info("Failed to write status address");
>
>Don't you need to return or do something here? (like raise some kind
>of error?)

The 'addr' is programmed by the guest. Here vvtd cannot finish this write
for some reason (e.g. the 'addr' may not be in the guest physical memory space).
According to VT-d spec 6.5.2.8, Invalidation Wait Descriptor: "Hardware
behavior is undefined if the Status Address specified is not an address
route-able to memory (such as peer address, interrupt address range of
0xFEEX_XXXX, etc.)". I think Xen can just ignore it. I should use
vvtd_debug(), since this is guest-triggerable.

>> +if ( !vvtd_test_bit(vvtd, DMAR_IECTL_REG, 
>> DMA_IECTL_IM_SHIFT) )
>> +{
>> +ie_data = vvtd_get_reg(vvtd, DMAR_IEDATA_REG);
>> +ie_addr = vvtd_get_reg(vvtd, DMAR_IEADDR_REG);
>> +vvtd_generate_interrupt(vvtd, ie_addr, ie_data);
>
>...you don't seem two need the two local variables. They are used only
>once.
>
>> +vvtd_clear_bit(vvtd, DMAR_IECTL_REG, 
>> DMA_IECTL_IP_SHIFT);
>> +}
>> +}
>> +}
>> +break;
>> +
>> +case TYPE_INVAL_IEC:
>> +/*
>> + * Currently, no cache is preserved in hypervisor. Only need to 
>> update
>> + * pIRTEs which are modified in binding process.
>> + */
>> +break;
>> +
>> +default:
>> +goto error;
>
>There's no reason to use a label that's only used for the default
>case. Simply place the code in the error label here.
>
>> +}
>> +
>> +unmap_guest_page((void*)qinval_page);
>> +return ret;
>> +
>> + error:
>> +unmap_guest_page((void*)qinval_page);
>> +gdprintk(XENLOG_ERR, "Internal error in Queue Invalidation.\n");
>> +domain_crash(vvtd->domain);
>
>Do you really need to crash the domain in such case?

We reach here when the guest requests operations that vvtd doesn't claim
to support or emulate. I am afraid it can also be triggered by the guest.
How about ignoring the invalidation request?

I will change the error message, since it isn't an internal error.
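
For illustration only, a rough sketch of that alternative -- recording an
Invalidation Queue Error in the Fault Status Register and raising a fault
event instead of crashing the domain. vvtd_set_bit() and the
DMA_FSTS_IQE_SHIFT / DMA_FECTL_IM_SHIFT names are assumptions here, modelled
on the vvtd_test_bit()/vvtd_clear_bit() and IECTL handling visible in the
quoted patch:

 error:
    unmap_guest_page((void *)qinval_page);
    gdprintk(XENLOG_WARNING, "vvtd: unsupported or malformed QI descriptor\n");
    /* Report an Invalidation Queue Error (VT-d spec 10.4.9)... */
    vvtd_set_bit(vvtd, DMAR_FSTS_REG, DMA_FSTS_IQE_SHIFT);
    /* ...and raise a fault event if the guest has it unmasked. */
    if ( !vvtd_test_bit(vvtd, DMAR_FECTL_REG, DMA_FECTL_IM_SHIFT) )
        vvtd_generate_interrupt(vvtd,
                                vvtd_get_reg(vvtd, DMAR_FEADDR_REG),
                                vvtd_get_reg(vvtd, DMAR_FEDATA_REG));
    return ret;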

Thanks
Chao

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-next 3/3] x86/pv: Misc improvements to pv_destroy_gdt()

2017-10-23 Thread Jan Beulich
>>> On 19.10.17 at 17:47,  wrote:
> Hoist the l1e_from_pfn(zero_pfn, __PAGE_HYPERVISOR_RO) calculation out of the
> loop, and switch the code over to using mfn_t.
> 
> Signed-off-by: Andrew Cooper 

Reviewed-by: Jan Beulich 



___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v7] x86/altp2m: Added xc_altp2m_set_mem_access_multi()

2017-10-23 Thread Razvan Cojocaru
On 23.10.2017 11:41, Jan Beulich wrote:
 On 23.10.17 at 10:34,  wrote:
> 
>>
>> On 23.10.2017 11:10, Jan Beulich wrote:
>> On 20.10.17 at 18:32,  wrote:
 On 10/20/2017 07:15 PM, Wei Liu wrote:
> On Mon, Oct 16, 2017 at 08:07:41PM +0300, Petre Pircalabu wrote:
>> From: Razvan Cojocaru 
>>
>> For the default EPT view we have xc_set_mem_access_multi(), which
>> is able to set an array of pages to an array of access rights with
>> a single hypercall. However, this functionality was lacking for the
>> altp2m subsystem, which could only set page restrictions for one
>> page at a time. This patch addresses the gap.
>>
>> HVMOP_altp2m_set_mem_access_multi has been added as a HVMOP (as opposed 
>> to a
>> DOMCTL) for consistency with its HVMOP_altp2m_set_mem_access counterpart 
>> (and
>> hence with the original altp2m design, where domains are allowed - with 
>> the
>> proper altp2m access rights - to alter these settings), in the absence 
>> of an
>> official position on the issue from the original altp2m designers.
>>
>> Signed-off-by: Razvan Cojocaru 
>> Signed-off-by: Petre Pircalabu 
>>
>
> The title is a bit misleading -- this patch actually contains changes to
> hypervisor as well.

 Sorry, I have assumed that the hypervisor changes are implied. We're
 happy to change it. Would "x86/altp2m: Added
 xc_altp2m_set_mem_access_multi() and hypervisor support" be better?
>>>
>>> But please not again "Added" - we've had this discussion before.
>>> The title is supposed to tell what a patch does, not what the state
>>> of the code is after it was applied.
>>
>> Will do, how does "{xen,libxc}/altp2m: support for setting restrictions
>> for an array of pages" sound?
> 
> The text is fine, but I'm not sure the {xen,libxc} part of the prefix
> is really very useful.

I was hoping to address Wei's comment with it - 'xen' would stand for
the hypervisor part, and 'libxc' for the toolstack part. However, you're
right: for one, the 'x86' part was useful, and then the problem before
was not so much that it didn't explicitly specify 'xen', but that it
implied that the changes have more to do with libxc (because it
mentioned xc_altp2m_set_mem_access_multi()).

"x86/altp2m: support for setting restrictions for an array of pages" it
is then. :) Sorry for causing confusion!


Thanks,
Razvan

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-next 2/3] x86/pv: Use DIV_ROUND_UP() when converting between GDT entries and frames

2017-10-23 Thread Jan Beulich
>>> On 19.10.17 at 17:47,  wrote:
> Also consistently use use nr_frames, rather than mixing nr_pages with a
> frames[] array.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper 

Reviewed-by: Jan Beulich 



___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-next 1/3] x86/pv: Move compat_set_gdt() to be beside do_set_gdt()

2017-10-23 Thread Jan Beulich
>>> On 19.10.17 at 17:47,  wrote:
> This also makes the do_update_descriptor() pair of functions adjacent.
> 
> Purely code motion; no functional change.
> 
> Signed-off-by: Andrew Cooper 

Acked-by: Jan Beulich 



___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [xen-unstable test] 115037: regressions - FAIL

2017-10-23 Thread Jan Beulich
>>> On 23.10.17 at 01:49,  wrote:
> flight 115037 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/115037/ 
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stopfail REGR. vs. 
> 114644
>  test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 
> 114644
>  test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop   fail REGR. vs. 
> 114644

I'm puzzled by these recurring failures: Until flight 114525 all three
(plus the fourth sibling, which is in "guest-stop fail never pass" state)
were fail-never-pass on windows-install (the 64-bit host ones) or
guest-saverestore (the 32-bit host ones). Then flights 114540 and
114644 were successes, and since then guest-stop has been failing.
The guest console doesn't show any indication that the guest may
have received a shutdown signal.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v7] x86/altp2m: Added xc_altp2m_set_mem_access_multi()

2017-10-23 Thread Jan Beulich
>>> On 23.10.17 at 10:34,  wrote:

> 
> On 23.10.2017 11:10, Jan Beulich wrote:
> On 20.10.17 at 18:32,  wrote:
>>> On 10/20/2017 07:15 PM, Wei Liu wrote:
 On Mon, Oct 16, 2017 at 08:07:41PM +0300, Petre Pircalabu wrote:
> From: Razvan Cojocaru 
>
> For the default EPT view we have xc_set_mem_access_multi(), which
> is able to set an array of pages to an array of access rights with
> a single hypercall. However, this functionality was lacking for the
> altp2m subsystem, which could only set page restrictions for one
> page at a time. This patch addresses the gap.
>
> HVMOP_altp2m_set_mem_access_multi has been added as a HVMOP (as opposed 
> to a
> DOMCTL) for consistency with its HVMOP_altp2m_set_mem_access counterpart 
> (and
> hence with the original altp2m design, where domains are allowed - with 
> the
> proper altp2m access rights - to alter these settings), in the absence of 
> an
> official position on the issue from the original altp2m designers.
>
> Signed-off-by: Razvan Cojocaru 
> Signed-off-by: Petre Pircalabu 
>

 The title is a bit misleading -- this patch actually contains changes to
 hypervisor as well.
>>>
>>> Sorry, I have assumed that the hypervisor changes are implied. We're
>>> happy to change it. Would "x86/altp2m: Added
>>> xc_altp2m_set_mem_access_multi() and hypervisor support" be better?
>> 
>> But please not again "Added" - we've had this discussion before.
>> The title is supposed to tell what a patch does, not what the state
>> of the code is after it was applied.
> 
> Will do, how does "{xen,libxc}/altp2m: support for setting restrictions
> for an array of pages" sound?

The text is fine, but I'm not sure the {xen,libxc} part of the prefix
is really very useful.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v7] x86/altp2m: Added xc_altp2m_set_mem_access_multi()

2017-10-23 Thread Razvan Cojocaru


On 23.10.2017 11:10, Jan Beulich wrote:
 On 20.10.17 at 18:32,  wrote:
>> On 10/20/2017 07:15 PM, Wei Liu wrote:
>>> On Mon, Oct 16, 2017 at 08:07:41PM +0300, Petre Pircalabu wrote:
 From: Razvan Cojocaru 

 For the default EPT view we have xc_set_mem_access_multi(), which
 is able to set an array of pages to an array of access rights with
 a single hypercall. However, this functionality was lacking for the
 altp2m subsystem, which could only set page restrictions for one
 page at a time. This patch addresses the gap.

 HVMOP_altp2m_set_mem_access_multi has been added as a HVMOP (as opposed to 
 a
 DOMCTL) for consistency with its HVMOP_altp2m_set_mem_access counterpart 
 (and
 hence with the original altp2m design, where domains are allowed - with the
 proper altp2m access rights - to alter these settings), in the absence of 
 an
 official position on the issue from the original altp2m designers.

 Signed-off-by: Razvan Cojocaru 
 Signed-off-by: Petre Pircalabu 

>>>
>>> The title is a bit misleading -- this patch actually contains changes to
>>> hypervisor as well.
>>
>> Sorry, I have assumed that the hypervisor changes are implied. We're
>> happy to change it. Would "x86/altp2m: Added
>> xc_altp2m_set_mem_access_multi() and hypervisor support" be better?
> 
> But please not again "Added" - we've had this discussion before.
> The title is supposed to tell what a patch does, not what the state
> of the code is after it was applied.

Will do, how does "{xen,libxc}/altp2m: support for setting restrictions
for an array of pages" sound?

We'll change the title as soon as we have comments to address for a new
version.


Thanks,
Razvan

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] scripts: add a script for build testing

2017-10-23 Thread Jan Beulich
>>> On 20.10.17 at 19:32,  wrote:
> --- /dev/null
> +++ b/scripts/build-test.sh
> @@ -0,0 +1,40 @@
> +#!/bin/sh
> +
> +# WARNING: Always backup the branch by creating another reference to it if
> +# you're not familiar with git-rebase(1).
> +#
> +# Use `git rebase` to run command or script on every commit within the range
> +# specified. If no command or script is provided, use the default one to 
> clean
> +# and build the whole tree.
> +#
> +# If something goes wrong, the script will stop at the commit that fails.  
> Fix
> +# the failure and run `git rebase --continue`.
> +#
> +# If for any reason the tree is screwed, use `git rebase --abort` to restore 
> to
> +# original state.
> +
> +if ! test -f xen/Kconfig; then
> +echo "Please run this script from top-level directory"

Wouldn't running this in one of the top-level sub-trees also be useful?
E.g. why would one want a hypervisor-only series not touching the
public interface to have the tools tree rebuilt all the time?

> +exit 1
> +fi
> +
> +if test $# -lt 2 ; then
> +echo "Usage: $0   [CMD|SCRIPT]"

Perhaps

echo "Usage: $0   

Re: [Xen-devel] [PATCH v7] x86/altp2m: Added xc_altp2m_set_mem_access_multi()

2017-10-23 Thread Jan Beulich
>>> On 20.10.17 at 18:32,  wrote:
> On 10/20/2017 07:15 PM, Wei Liu wrote:
>> On Mon, Oct 16, 2017 at 08:07:41PM +0300, Petre Pircalabu wrote:
>>> From: Razvan Cojocaru 
>>>
>>> For the default EPT view we have xc_set_mem_access_multi(), which
>>> is able to set an array of pages to an array of access rights with
>>> a single hypercall. However, this functionality was lacking for the
>>> altp2m subsystem, which could only set page restrictions for one
>>> page at a time. This patch addresses the gap.
>>>
>>> HVMOP_altp2m_set_mem_access_multi has been added as a HVMOP (as opposed to a
>>> DOMCTL) for consistency with its HVMOP_altp2m_set_mem_access counterpart 
>>> (and
>>> hence with the original altp2m design, where domains are allowed - with the
>>> proper altp2m access rights - to alter these settings), in the absence of an
>>> official position on the issue from the original altp2m designers.
>>>
>>> Signed-off-by: Razvan Cojocaru 
>>> Signed-off-by: Petre Pircalabu 
>>>
>> 
>> The title is a bit misleading -- this patch actually contains changes to
>> hypervisor as well.
> 
> Sorry, I have assumed that the hypervisor changes are implied. We're
> happy to change it. Would "x86/altp2m: Added
> xc_altp2m_set_mem_access_multi() and hypervisor support" be better?

But please not again "Added" - we've had this discussion before.
The title is supposed to tell what a patch does, not what the state
of the code is after it was applied.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v3 for 4.10] x86/vpt: guarantee the return value of pt_update_irq() set in vIRR or PIR

2017-10-23 Thread Tian, Kevin
> From: Gao, Chao
> Sent: Friday, October 20, 2017 8:35 AM
> 
> pt_update_irq() is expected to return the vector number of the periodic
> timer interrupt, which should be set in the vIRR of the vlapic or in the PIR.
> Otherwise it would trigger the assertion in vmx_intr_assist(); please
> see https://lists.xenproject.org/archives/html/xen-devel/2017-
> 10/msg00915.html.
> 
> But it fails to achieve that in the following two cases:
> 1. hvm_isa_irq_assert() may not set the corresponding bit in vIRR because
> the mask field of the IOAPIC RTE is set. Please refer to the call tree
> vmx_intr_assist() -> pt_update_irq() -> hvm_isa_irq_assert() ->
> assert_irq() -> assert_gsi() -> vioapic_irq_positive_edge(). The patch
> checks whether the vector is set or not in vIRR of vlapic or PIR before
> returning.
> 
> 2. someone changes the vector field of IOAPIC RTE between asserting
> the irq and getting the vector of the irq, leading to setting the
> old vector number but returning a different vector number. This patch
> allows hvm_isa_irq_assert() to accept a callback which can get the
> interrupt vector with irq_lock held. Thus, no one can change the vector
> between the two operations.
> 
> BTW, the first argument of pi_test_and_set_pir() should be uint8_t
> and I take this chance to fix it.
> 
> Signed-off-by: Chao Gao 

Reviewed-by: Kevin Tian 
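
For context, a rough sketch of the interface change described above -- the
exact names below are assumptions, not quotes from the patch:

/*
 * hvm_isa_irq_assert() gains a callback that is invoked while the irq lock
 * is still held, so the vector returned matches the RTE actually used to
 * assert the irq, and pt_update_irq() returns that vector.
 */
int hvm_isa_irq_assert(struct domain *d, unsigned int isa_irq,
                       int (*get_vector)(const struct domain *d,
                                         unsigned int gsi));

/* e.g. from pt_update_irq(), with a hypothetical lookup helper: */
vector = hvm_isa_irq_assert(v->domain, irq, pt_lookup_irq_vector);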

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


  1   2   >