[Xen-devel] [qemu-mainline test] 115474: regressions - FAIL
flight 115474 qemu-mainline real [real] http://logs.test-lab.xenproject.org/osstest/logs/115474/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: build-i3866 xen-buildfail REGR. vs. 114507 build-amd64-xsm 6 xen-buildfail REGR. vs. 114507 build-i386-xsm6 xen-buildfail REGR. vs. 114507 build-amd64 6 xen-buildfail REGR. vs. 114507 build-armhf 6 xen-buildfail REGR. vs. 114507 build-armhf-xsm 6 xen-buildfail REGR. vs. 114507 Tests which did not succeed, but are not blocking: test-amd64-i386-libvirt-qcow2 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1)blocked n/a test-amd64-i386-freebsd10-i386 1 build-check(1) blocked n/a test-amd64-amd64-qemuu-nested-intel 1 build-check(1) blocked n/a test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-armhf-armhf-libvirt 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-win10-i386 1 build-check(1) blocked n/a test-armhf-armhf-libvirt-raw 1 build-check(1) blocked n/a test-amd64-i386-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-multivcpu 1 build-check(1) blocked n/a test-amd64-amd64-libvirt 1 build-check(1) blocked n/a test-amd64-i386-freebsd10-amd64 1 build-check(1) blocked n/a test-amd64-amd64-pair 1 build-check(1) blocked n/a test-armhf-armhf-xl-credit2 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-win10-i386 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a test-amd64-amd64-pygrub 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a test-amd64-amd64-xl-qcow2 1 build-check(1) blocked n/a test-amd64-amd64-amd64-pvgrub 1 build-check(1) blocked n/a test-armhf-armhf-xl-arndale 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-debianhvm-amd64 1 
build-check(1) blocked n/a test-armhf-armhf-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-i386-xl1 build-check(1) blocked n/a build-i386-libvirt1 build-check(1) blocked n/a test-amd64-i386-libvirt-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-vhd 1 build-check(1) blocked n/a test-amd64-amd64-xl-credit2 1 build-check(1) blocked n/a test-armhf-armhf-xl-multivcpu 1 build-check(1) blocked n/a test-amd64-i386-xl-xsm1 build-check(1) blocked n/a build-amd64-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1)blocked n/a test-amd64-i386-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a test-amd64-i386-xl-raw1 build-check(1) blocked n/a test-amd64-i386-qemuu-rhel6hvm-amd 1 build-check(1) blocked n/a test-amd64-amd64-i386-pvgrub 1 build-check(1) blocked n/a build-armhf-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a test-amd64-i386-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-pvhv2-intel 1 build-check(1) blocked n/a test-armhf-armhf-xl 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-xsm 1 build-check(1) blocked n/a test-amd64-i386-qemuu-rhel6hvm-intel 1 build-check(1) blocked n/a test-armhf-armhf-xl-vhd 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a test-amd64-amd64-xl 1 build-check(1) blocked n/a test-amd64-i386-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-pvhv2-amd 1 build-check(1) blocked n/a test-amd64-amd64-qemuu-nested-amd 1 build-check(1) blocked n/a test-armhf-armhf-xl-cubietruck 1 build-check(1) blocked n/a test-armhf-armhf-xl-rtds 1
Re: [Xen-devel] [PATCH] scripts: warn about invalid MAINTAINERS patterns
> On Nov 1, 2017, at 13:11, Tom Saegerwrote: > > On Wed, Nov 01, 2017 at 09:50:05AM -0700, Joe Perches wrote: >> (add mercurial-devel and xen-devel to cc's) >> >> On Tue, 2017-10-31 at 16:37 -0500, Tom Saeger wrote: >>> Add "--pattern-checks" option to get_maintainer.pl to warn about invalid >>> "F" and "X" patterns found in MAINTAINERS file(s). >> >> Hey again Tom. >> >> About mercurial/hg. >> >> While as far as I know there hasn't been a mercurial tree >> for the linux kernel sources in many years, I believe the >> mercurial command to list files should be different. >> >>> my %VCS_cmds_hg = ( >>> @@ -167,6 +169,7 @@ my %VCS_cmds_hg = ( >>> "subject_pattern" => "^HgSubject: (.*)", >>> "stat_pattern" => "^(\\d+)\t(\\d+)\t\$file\$", >>> "file_exists_cmd" => "hg files \$file", >>> +"list_files_cmd" => "hg files \$file", >> >> I think this should be >> >> "list_files_cmd" => "hg manifest -R \$file", > > Ok - I'll add to v2. Actually, I'd recommend `hg files` over `hg manifest` by a wide margin. > > ___ > Mercurial-devel mailing list > mercurial-de...@mercurial-scm.org > https://www.mercurial-scm.org/mailman/listinfo/mercurial-devel signature.asc Description: Message signed with OpenPGP ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH v1] x86/vvmx: don't enable vmcs shadowing for nested guests
> From: Sergey Dyasli [mailto:sergey.dya...@citrix.com] > Sent: Monday, October 23, 2017 5:33 PM > > Running "./xtf_runner vvmx" in L1 Xen under L0 Xen produces the > following result on H/W with VMCS shadowing: > > Test: vmxon > Failure in test_vmxon_in_root_cpl0() > Expected 0x820f: VMfailValid(15) VMXON_IN_ROOT >Got 0x82004400: VMfailValid(17408) > Test result: FAILURE > > This happens because SDM allows vmentries with enabled VMCS > shadowing > VM-execution control and VMCS link pointer value of ~0ull. But results > of a nested VMREAD are undefined in such cases. > > Fix this by not copying the value of VMCS shadowing control from vmcs01 > to vmcs02. > > Signed-off-by: Sergey DyasliAcked-by: Kevin Tian ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
[Xen-devel] [linux-linus test] 115469: regressions - FAIL
flight 115469 linux-linus real [real] http://logs.test-lab.xenproject.org/osstest/logs/115469/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail REGR. vs. 114682 Tests which did not succeed, but are not blocking: test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail like 114682 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 114682 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 114682 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail like 114682 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 114682 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 114682 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 114682 test-armhf-armhf-libvirt 14 saverestore-support-checkfail like 114682 test-amd64-amd64-libvirt 13 migrate-support-checkfail never pass test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail never pass test-amd64-i386-libvirt-xsm 13 migrate-support-checkfail never pass test-amd64-i386-libvirt 13 migrate-support-checkfail never pass test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail never pass test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail never pass test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail never pass test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass test-armhf-armhf-xl-credit2 13 migrate-support-checkfail never pass test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass test-armhf-armhf-xl-credit2 14 saverestore-support-checkfail never pass test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail never pass test-armhf-armhf-xl 13 
migrate-support-checkfail never pass test-armhf-armhf-xl 14 saverestore-support-checkfail never pass test-armhf-armhf-xl-vhd 12 migrate-support-checkfail never pass test-armhf-armhf-xl-vhd 13 saverestore-support-checkfail never pass test-armhf-armhf-xl-arndale 13 migrate-support-checkfail never pass test-armhf-armhf-xl-arndale 14 saverestore-support-checkfail never pass test-armhf-armhf-xl-xsm 13 migrate-support-checkfail never pass test-armhf-armhf-xl-xsm 14 saverestore-support-checkfail never pass test-armhf-armhf-libvirt 13 migrate-support-checkfail never pass test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass test-armhf-armhf-xl-rtds 13 migrate-support-checkfail never pass test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail never pass test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail never pass test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail never pass version targeted for testing: linux287683d027a3ff83feb6c7044430c79881664ecf baseline version: linuxebe6e90ccc6679cb01d2b280e4b61e6092d4bedb Last test of basis 114682 2017-10-18 09:54:11 Z 14 days Failing since114781 2017-10-20 01:00:47 Z 13 days 22 attempts Testing same since 115459 2017-11-01 05:28:20 Z0 days2 attempts 423 people touched revisions under test, not listing them all jobs: build-amd64-xsm pass build-armhf-xsm pass build-i386-xsm pass build-amd64 pass build-armhf pass build-i386 pass build-amd64-libvirt pass build-armhf-libvirt pass build-i386-libvirt pass build-amd64-pvopspass build-armhf-pvops
Re: [Xen-devel] [PATCH v3 for-next 4/4] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
> From: Julien Grall [mailto:julien.gr...@linaro.org] > Sent: Wednesday, November 1, 2017 10:03 PM > > Most of the users of page_to_mfn and mfn_to_page are either overriding > the macros to make them work with mfn_t or use mfn_x/_mfn because the > rest of the function use mfn_t. > > So make __page_to_mfn and __mfn_to_page return mfn_t by default. > > Only reasonable clean-ups are done in this patch because it is > already quite big. So some of the files now override page_to_mfn and > mfn_to_page to avoid using mfn_t. > > Lastly, domain_page_to_mfn is also converted to use mfn_t given that > most of the callers are now switched to _mfn(domain_page_to_mfn(...)). > > Signed-off-by: Julien Grall> Reviewed-by: Kevin Tian ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
[Xen-devel] [qemu-mainline test] 115472: regressions - FAIL
flight 115472 qemu-mainline real [real] http://logs.test-lab.xenproject.org/osstest/logs/115472/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: build-i3866 xen-buildfail REGR. vs. 114507 build-amd64-xsm 6 xen-buildfail REGR. vs. 114507 build-i386-xsm6 xen-buildfail REGR. vs. 114507 build-amd64 6 xen-buildfail REGR. vs. 114507 build-armhf 6 xen-buildfail REGR. vs. 114507 build-armhf-xsm 6 xen-buildfail REGR. vs. 114507 Tests which did not succeed, but are not blocking: test-amd64-i386-libvirt-qcow2 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1)blocked n/a test-amd64-i386-freebsd10-i386 1 build-check(1) blocked n/a test-amd64-amd64-qemuu-nested-intel 1 build-check(1) blocked n/a test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-armhf-armhf-libvirt 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-win10-i386 1 build-check(1) blocked n/a test-armhf-armhf-libvirt-raw 1 build-check(1) blocked n/a test-amd64-i386-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-multivcpu 1 build-check(1) blocked n/a test-amd64-amd64-libvirt 1 build-check(1) blocked n/a test-amd64-i386-freebsd10-amd64 1 build-check(1) blocked n/a test-amd64-amd64-pair 1 build-check(1) blocked n/a test-armhf-armhf-xl-credit2 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-win10-i386 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a test-amd64-amd64-pygrub 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a test-amd64-amd64-xl-qcow2 1 build-check(1) blocked n/a test-amd64-amd64-amd64-pvgrub 1 build-check(1) blocked n/a test-armhf-armhf-xl-arndale 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-debianhvm-amd64 1 
build-check(1) blocked n/a test-armhf-armhf-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-i386-xl1 build-check(1) blocked n/a build-i386-libvirt1 build-check(1) blocked n/a test-amd64-i386-libvirt-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-vhd 1 build-check(1) blocked n/a test-amd64-amd64-xl-credit2 1 build-check(1) blocked n/a test-armhf-armhf-xl-multivcpu 1 build-check(1) blocked n/a test-amd64-i386-xl-xsm1 build-check(1) blocked n/a build-amd64-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1)blocked n/a test-amd64-i386-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a test-amd64-i386-xl-raw1 build-check(1) blocked n/a test-amd64-i386-qemuu-rhel6hvm-amd 1 build-check(1) blocked n/a test-amd64-amd64-i386-pvgrub 1 build-check(1) blocked n/a build-armhf-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a test-amd64-i386-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-pvhv2-intel 1 build-check(1) blocked n/a test-armhf-armhf-xl 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-xsm 1 build-check(1) blocked n/a test-amd64-i386-qemuu-rhel6hvm-intel 1 build-check(1) blocked n/a test-armhf-armhf-xl-vhd 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a test-amd64-amd64-xl 1 build-check(1) blocked n/a test-amd64-i386-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-pvhv2-amd 1 build-check(1) blocked n/a test-amd64-amd64-qemuu-nested-amd 1 build-check(1) blocked n/a test-armhf-armhf-xl-cubietruck 1 build-check(1) blocked n/a test-armhf-armhf-xl-rtds 1
[Xen-devel] [linux-4.9 test] 115466: regressions - FAIL
flight 115466 linux-4.9 real [real] http://logs.test-lab.xenproject.org/osstest/logs/115466/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail REGR. vs. 114814 Tests which are failing intermittently (not blocking): test-amd64-i386-freebsd10-i386 10 freebsd-install fail in 115457 pass in 115466 test-armhf-armhf-xl-cubietruck 6 xen-installfail in 115457 pass in 115466 test-amd64-amd64-xl-qemuu-ovmf-amd64 16 guest-localmigrate/x10 fail in 115457 pass in 115466 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail pass in 115457 Tests which did not succeed, but are not blocking: test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail in 115457 like 114814 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 114814 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass test-amd64-amd64-xl-pvhv2-amd 12 guest-start fail never pass test-amd64-i386-libvirt 13 migrate-support-checkfail never pass test-amd64-i386-libvirt-xsm 13 migrate-support-checkfail never pass test-amd64-amd64-libvirt 13 migrate-support-checkfail never pass test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail never pass test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail never pass test-armhf-armhf-libvirt 13 migrate-support-checkfail never pass test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail never pass test-armhf-armhf-libvirt 14 saverestore-support-checkfail never pass test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail never pass test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass 
test-armhf-armhf-xl-xsm 13 migrate-support-checkfail never pass test-armhf-armhf-xl-xsm 14 saverestore-support-checkfail never pass test-armhf-armhf-xl 13 migrate-support-checkfail never pass test-armhf-armhf-xl 14 saverestore-support-checkfail never pass test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail never pass test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail never pass test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail never pass test-armhf-armhf-xl-arndale 13 migrate-support-checkfail never pass test-armhf-armhf-xl-arndale 14 saverestore-support-checkfail never pass test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail never pass test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail never pass test-armhf-armhf-xl-credit2 13 migrate-support-checkfail never pass test-armhf-armhf-xl-credit2 14 saverestore-support-checkfail never pass test-armhf-armhf-xl-rtds 13 migrate-support-checkfail never pass test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail never pass test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass test-armhf-armhf-xl-vhd 12 migrate-support-checkfail never pass test-armhf-armhf-xl-vhd 13 saverestore-support-checkfail never pass test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass version targeted for testing: linuxd785062ef20f9b2cd8cedcafea55ca8264f25f3e baseline version: linux5d7a76acad403638f635c918cc63d1d44ffa4065 Last test of basis 114814 2017-10-20 20:51:56 Z 12 days Failing since114845 2017-10-21 16:14:17 Z 11 days 20 attempts Testing same since 115296 2017-10-27 11:07:37 Z5 days 10 attempts People who touched revisions under 
test: Alan SternAlex Deucher Alexandre Belloni Andrew Morton Andrey Konovalov Anoob Soman Arend van Spriel Arnd Bergmann Bart Van Assche
Re: [Xen-devel] [PATCH v6 1/1] xen/time: do not decrease steal time after live migration on xen
Hi Boris, I have received from l...@intel.com that the prior version of patch hit issue during compilation with aarch64-linux-gnu-gcc. I think this patch reviewed by you would hit the same compiling issue on arm64 (there is no issue with x86_64). - 1st issue: Without including header into driver/xen/time.c, compilation on x86_64 works well (without any warning or error) but arm64 would hit the following error: drivers/xen/time.c: In function ‘xen_manage_runstate_time’: drivers/xen/time.c:94:20: error: implicit declaration of function ‘kmalloc_array’ [-Werror=implicit-function-declaration] runstate_delta = kmalloc_array(num_possible_cpus(), ^ drivers/xen/time.c:131:3: error: implicit declaration of function ‘kfree’ [-Werror=implicit-function-declaration] kfree(runstate_delta); ^ cc1: some warnings being treated as errors About the 1st issue, should I submit a new patch including or just a incremental based on previous patch merged into your own branch /tree? - 2nd issue: aarch64-linux-gnu-gcc expects a cast for kmalloc_array(). Is this really necessary as I did find people casting the return type of kmalloc/kcalloc/kmalloc_array in linux source code (e.g., drivers/block/virtio_blk.c). Can we just ignore this warning? drivers/xen/time.c:94:18: warning: assignment makes pointer from integer without a cast [-Wint-conversion] runstate_delta = kmalloc_array(num_possible_cpus(), ^ - Thank you very much! Dongli Zhang On 11/02/2017 03:19 AM, Boris Ostrovsky wrote: > On 10/31/2017 09:46 PM, Dongli Zhang wrote: >> After guest live migration on xen, steal time in /proc/stat >> (cpustat[CPUTIME_STEAL]) might decrease because steal returned by >> xen_steal_lock() might be less than this_rq()->prev_steal_time which is >> derived from previous return value of xen_steal_clock(). >> >> For instance, steal time of each vcpu is 335 before live migration. 
>> >> cpu 198 0 368 200064 1962 0 0 1340 0 0 >> cpu0 38 0 81 50063 492 0 0 335 0 0 >> cpu1 65 0 97 49763 634 0 0 335 0 0 >> cpu2 38 0 81 50098 462 0 0 335 0 0 >> cpu3 56 0 107 50138 374 0 0 335 0 0 >> >> After live migration, steal time is reduced to 312. >> >> cpu 200 0 370 200330 1971 0 0 1248 0 0 >> cpu0 38 0 82 50123 500 0 0 312 0 0 >> cpu1 65 0 97 49832 634 0 0 312 0 0 >> cpu2 39 0 82 50167 462 0 0 312 0 0 >> cpu3 56 0 107 50207 374 0 0 312 0 0 >> >> Since runstate times are cumulative and cleared during xen live migration >> by xen hypervisor, the idea of this patch is to accumulate runstate times >> to global percpu variables before live migration suspend. Once guest VM is >> resumed, xen_get_runstate_snapshot_cpu() would always return the sum of new >> runstate times and previously accumulated times stored in global percpu >> variables. Comments before the call of HYPERVISOR_suspend() has been >> removed as it is inaccurate. The call can return an error code (e.g., >> possibly -EPERM in the future). > > I'd like split comment removal bit into a separate paragraph. I can do > this when committing if you don't mind. > >> >> Similar and more severe issue would impact prior linux 4.8-4.10 as >> discussed by Michael Las at >> https://0xstubs.org/debugging-a-flaky-cpu-steal-time-counter-on-a-paravirtualized-xen-guest, >> which would overflow steal time and lead to 100% st usage in top command >> for linux 4.8-4.10. A backport of this patch would fix that issue. >> >> References: >> https://0xstubs.org/debugging-a-flaky-cpu-steal-time-counter-on-a-paravirtualized-xen-guest >> Signed-off-by: Dongli Zhang>> >> --- > > Reviewed-by: Boris Ostrovsky > > ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH] x86/cpuid: Enable new SSE/AVX/AVX512 cpu features
On Wed, Nov 01, 2017 at 03:29:16PM -0400, Konrad Rzeszutek Wilk wrote: > On Fri, Oct 27, 2017 at 10:18:04PM +0800, Yang Zhong wrote: > > Intel IceLake cpu has added new cpu features: AVX512VBMI2/GFNI/ > > VAES/AVX512VNNI/AVX512BITALG/VPCLMULQDQ. Those new cpu features > > need expose to guest.wq > > s/wq// Hello Konrad, Thanks for reviewing my patch, i will remove this .wq in next version,thanks! Regards, Yang > > > > The bit definition: > > CPUID.(EAX=7,ECX=0):ECX[bit 06] AVX512VBMI2 > > CPUID.(EAX=7,ECX=0):ECX[bit 08] GFNI > > CPUID.(EAX=7,ECX=0):ECX[bit 09] VAES > > CPUID.(EAX=7,ECX=0):ECX[bit 10] VPCLMULQDQ > > CPUID.(EAX=7,ECX=0):ECX[bit 11] AVX512VNNI > > CPUID.(EAX=7,ECX=0):ECX[bit 12] AVX512_BITALG > > > > The release document ref below link: > > https://software.intel.com/sites/default/files/managed/c5/15/\ > > architecture-instruction-set-extensions-programming-reference.pdf > > Ah! Thank you! > > > > Signed-off-by: Yang Zhong> > --- > > docs/man/xl.cfg.pod.5.in| 3 ++- > > tools/libxl/libxl_cpuid.c | 6 ++ > > tools/misc/xen-cpuid.c | 13 +++-- > > xen/include/public/arch-x86/cpufeatureset.h | 6 ++ > > xen/tools/gen-cpuid.py | 3 ++- > > 5 files changed, 23 insertions(+), 8 deletions(-) > > > > diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in > > index b7b91d8..d056768 100644 > > --- a/docs/man/xl.cfg.pod.5.in > > +++ b/docs/man/xl.cfg.pod.5.in > > @@ -1731,7 +1731,8 @@ perfctr_core perfctr_nb pge pku popcnt pse pse36 psn > > rdrand rdseed rdtscp rtm > > sha skinit smap smep smx ss sse sse2 sse3 sse4.1 sse4.2 sse4_1 sse4_2 sse4a > > ssse3 svm svm_decode svm_lbrv svm_npt svm_nrips svm_pausefilt svm_tscrate > > svm_vmcbclean syscall sysenter tbm tm tm2 topoext tsc tsc-deadline > > tsc_adjust > > -umip vme vmx wdt x2apic xop xsave xtpr > > +umip vme vmx wdt x2apic xop xsave xtpr avx512_vbmi2 gfni vaes vpclmulqdq > > +avx512_vnni avx512_bitalg > > > > > > The xend syntax is a list of values in the form of > > diff --git 
a/tools/libxl/libxl_cpuid.c b/tools/libxl/libxl_cpuid.c > > index e692b61..614991f 100644 > > --- a/tools/libxl/libxl_cpuid.c > > +++ b/tools/libxl/libxl_cpuid.c > > @@ -199,6 +199,12 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list > > *cpuid, const char* str) > > {"umip", 0x0007, 0, CPUID_REG_ECX, 2, 1}, > > {"pku", 0x0007, 0, CPUID_REG_ECX, 3, 1}, > > {"ospke",0x0007, 0, CPUID_REG_ECX, 4, 1}, > > +{"avx512_vbmi2", 0x0007, 0, CPUID_REG_ECX, 6, 1}, > > +{"gfni", 0x0007, 0, CPUID_REG_ECX, 8, 1}, > > +{"vaes", 0x0007, 0, CPUID_REG_ECX, 9, 1}, > > +{"vpclmulqdq", 0x0007, 0, CPUID_REG_ECX, 10, 1}, > > +{"avx512_vnni", 0x0007, 0, CPUID_REG_ECX, 11, 1}, > > +{"avx512_bitalg",0x0007, 0, CPUID_REG_ECX, 12, 1}, > > > > {"avx512-4vnniw",0x0007, 0, CPUID_REG_EDX, 2, 1}, > > {"avx512-4fmaps",0x0007, 0, CPUID_REG_EDX, 3, 1}, > > diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c > > index 106be0f..985deea 100644 > > --- a/tools/misc/xen-cpuid.c > > +++ b/tools/misc/xen-cpuid.c > > @@ -120,12 +120,13 @@ static const char *str_Da1[32] = > > > > static const char *str_7c0[32] = > > { > > -[ 0] = "prechwt1", [ 1] = "avx512vbmi", > > -[ 2] = "REZ", [ 3] = "pku", > > -[ 4] = "ospke", > > - > > -[5 ... 13] = "REZ", > > - > > +[ 0] = "prechwt1", [ 1] = "avx512vbmi", > > +[ 2] = "REZ", [ 3] = "pku", > > +[ 4] = "ospke",[ 5] = "REZ", > > +[ 6] = "avx512_vbmi2", [ 7] = "REZ", > > +[ 8] = "gfni", [ 9] = "vaes", > > +[10] = "vpclmulqdq", [11] = "avx512_vnni", > > +[12] = "avx512_bitalg",[13] = "REZ", > > [14] = "avx512_vpopcntdq", > > > > [15 ... 
31] = "REZ", > > diff --git a/xen/include/public/arch-x86/cpufeatureset.h > > b/xen/include/public/arch-x86/cpufeatureset.h > > index 0ee3ea3..bb24b79 100644 > > --- a/xen/include/public/arch-x86/cpufeatureset.h > > +++ b/xen/include/public/arch-x86/cpufeatureset.h > > @@ -228,6 +228,12 @@ XEN_CPUFEATURE(AVX512VBMI,6*32+ 1) /*A AVX-512 > > Vector Byte Manipulation Ins > > XEN_CPUFEATURE(UMIP, 6*32+ 2) /*S User Mode Instruction > > Prevention */ > > XEN_CPUFEATURE(PKU, 6*32+ 3) /*H Protection Keys for Userspace > > */ > > XEN_CPUFEATURE(OSPKE, 6*32+ 4) /*! OS Protection Keys Enable */ > > +XEN_CPUFEATURE(AVX512_VBMI2, 6*32+ 6) /*A addition AVX-512 VBMI > > Instructions */ > > +XEN_CPUFEATURE(GFNI, 6*32+ 8) /*A Galois Field New Instructions > > */ > > +XEN_CPUFEATURE(VAES, 6*32+ 9) /*A Vector AES instructions */ > > +XEN_CPUFEATURE(VPCLMULQDQ,6*32+ 10) /*A vector PCLMULQDQ instructions > > */ > >
[Xen-devel] [qemu-mainline test] 115470: regressions - FAIL
flight 115470 qemu-mainline real [real] http://logs.test-lab.xenproject.org/osstest/logs/115470/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: build-i3866 xen-buildfail REGR. vs. 114507 build-amd64-xsm 6 xen-buildfail REGR. vs. 114507 build-i386-xsm6 xen-buildfail REGR. vs. 114507 build-amd64 6 xen-buildfail REGR. vs. 114507 build-armhf 6 xen-buildfail REGR. vs. 114507 build-armhf-xsm 6 xen-buildfail REGR. vs. 114507 Tests which did not succeed, but are not blocking: test-amd64-i386-libvirt-qcow2 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1)blocked n/a test-amd64-i386-freebsd10-i386 1 build-check(1) blocked n/a test-amd64-amd64-qemuu-nested-intel 1 build-check(1) blocked n/a test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-armhf-armhf-libvirt 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-win10-i386 1 build-check(1) blocked n/a test-armhf-armhf-libvirt-raw 1 build-check(1) blocked n/a test-amd64-i386-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-multivcpu 1 build-check(1) blocked n/a test-amd64-amd64-libvirt 1 build-check(1) blocked n/a test-amd64-i386-freebsd10-amd64 1 build-check(1) blocked n/a test-amd64-amd64-pair 1 build-check(1) blocked n/a test-armhf-armhf-xl-credit2 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-win10-i386 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a test-amd64-amd64-pygrub 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a test-amd64-amd64-xl-qcow2 1 build-check(1) blocked n/a test-amd64-amd64-amd64-pvgrub 1 build-check(1) blocked n/a test-armhf-armhf-xl-arndale 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-debianhvm-amd64 1 
build-check(1) blocked n/a test-armhf-armhf-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-i386-xl1 build-check(1) blocked n/a build-i386-libvirt1 build-check(1) blocked n/a test-amd64-i386-libvirt-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-vhd 1 build-check(1) blocked n/a test-amd64-amd64-xl-credit2 1 build-check(1) blocked n/a test-armhf-armhf-xl-multivcpu 1 build-check(1) blocked n/a test-amd64-i386-xl-xsm1 build-check(1) blocked n/a build-amd64-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1)blocked n/a test-amd64-i386-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a test-amd64-i386-xl-raw1 build-check(1) blocked n/a test-amd64-i386-qemuu-rhel6hvm-amd 1 build-check(1) blocked n/a test-amd64-amd64-i386-pvgrub 1 build-check(1) blocked n/a build-armhf-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a test-amd64-i386-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-pvhv2-intel 1 build-check(1) blocked n/a test-armhf-armhf-xl 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-xsm 1 build-check(1) blocked n/a test-amd64-i386-qemuu-rhel6hvm-intel 1 build-check(1) blocked n/a test-armhf-armhf-xl-vhd 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a test-amd64-amd64-xl 1 build-check(1) blocked n/a test-amd64-i386-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-pvhv2-amd 1 build-check(1) blocked n/a test-amd64-amd64-qemuu-nested-amd 1 build-check(1) blocked n/a test-armhf-armhf-xl-cubietruck 1 build-check(1) blocked n/a test-armhf-armhf-xl-rtds 1
Re: [Xen-devel] [PATCH v4 1/1] xen/time: do not decrease steal time after live migration on xen
Hi Dongli, Thank you for the patch! Yet something to improve: [auto build test ERROR on xen-tip/linux-next] [also build test ERROR on v4.14-rc7 next-20171018] [if your patch is applied to the wrong git tree, please drop us a note to help improve the system] url: https://github.com/0day-ci/linux/commits/Dongli-Zhang/xen-time-do-not-decrease-steal-time-after-live-migration-on-xen/20171102-025719 base: https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git linux-next config: arm64-allmodconfig (attached as .config) compiler: aarch64-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705 reproduce: wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # save the attached .config to linux build tree make.cross ARCH=arm64 All errors (new ones prefixed by >>): drivers/xen/time.c: In function 'xen_accumulate_runstate_time': drivers/xen/time.c:92:25: error: implicit declaration of function 'kcalloc' [-Werror=implicit-function-declaration] runstate_time_delta = kcalloc(num_possible_cpus(), ^~~ drivers/xen/time.c:92:23: warning: assignment makes pointer from integer without a cast [-Wint-conversion] runstate_time_delta = kcalloc(num_possible_cpus(), ^ >> drivers/xen/time.c:102:31: error: implicit declaration of function >> 'kmalloc_array' [-Werror=implicit-function-declaration] runstate_time_delta[cpu] = kmalloc_array(RUNSTATE_max, ^ drivers/xen/time.c:102:29: warning: assignment makes pointer from integer without a cast [-Wint-conversion] runstate_time_delta[cpu] = kmalloc_array(RUNSTATE_max, ^ drivers/xen/time.c:141:5: error: implicit declaration of function 'kfree' [-Werror=implicit-function-declaration] kfree(runstate_time_delta[cpu]); ^ cc1: some warnings being treated as errors vim +/kmalloc_array +102 drivers/xen/time.c 82 83 void xen_accumulate_runstate_time(int action) 84 { 85 struct vcpu_runstate_info state; 86 int cpu, i; 87 88 switch (action) { 89 case -1: /* backup runstate time before suspend */ 
    90			WARN_ON_ONCE(unlikely(runstate_time_delta));
    91
  > 92			runstate_time_delta = kcalloc(num_possible_cpus(),
    93						      sizeof(*runstate_time_delta),
    94						      GFP_KERNEL);
    95			if (unlikely(!runstate_time_delta)) {
    96				pr_alert("%s: failed to allocate runstate_time_delta\n",
    97					 __func__);
    98				return;
    99			}
   100
   101			for_each_possible_cpu(cpu) {
 > 102				runstate_time_delta[cpu] = kmalloc_array(RUNSTATE_max,
   103							sizeof(**runstate_time_delta),
   104							GFP_KERNEL);
   105				if (unlikely(!runstate_time_delta[cpu])) {
   106					pr_alert("%s: failed to allocate runstate_time_delta[%d]\n",
   107						 __func__, cpu);
   108					action = 0;
   109					goto reclaim_mem;
   110				}
   111
   112				xen_get_runstate_snapshot_cpu_delta(&state, cpu);
   113				memcpy(runstate_time_delta[cpu],
   114				       state.time,
   115				       RUNSTATE_max * sizeof(**runstate_time_delta));
   116			}
   117			break;
   118
   119		case 0: /* backup runstate time after resume */
   120			if (unlikely(!runstate_time_delta)) {
   121				pr_alert("%s: cannot accumulate runstate time as runstate_time_delta is NULL\n",
   122					 __func__);
   123				return;
   124			}
   125
   126			for_each_possible_cpu(cpu) {
   127				for (i = 0; i < RUNSTATE_max; i++)
   128					per_cpu(old_runstate_time, cpu)[i] +=
   129						runstate_time_delta[cpu][i];
   130			}
   131			break;
   132
   133		default: /* do not accumulate runstate time for checkpointing */
   134			break;
   135		}
   136
   137 reclaim_mem:
   138		if (action != -1 && runstate_time_delta) {
   139			for_each_possible_cpu(cpu) {
   140
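All three implicit-declaration errors in the report concern the slab allocator helpers (kcalloc(), kmalloc_array(), kfree()), which suggests the arm64 build simply isn't pulling their header in transitively. A plausible fix (an untested sketch, not necessarily the change the author made in the next revision) is to include the slab header explicitly in drivers/xen/time.c:

```c
/* drivers/xen/time.c: declare kcalloc(), kmalloc_array() and kfree()
 * explicitly instead of relying on another header to pull them in. */
#include <linux/slab.h>
```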
Re: [Xen-devel] [PATCH-tip v2 2/2] x86/xen: Deprecate xen_nopvspin
On 11/01/2017 04:58 PM, Waiman Long wrote:
> +/* TODO: To be removed in a future kernel version */
>  static __init int xen_parse_nopvspin(char *arg)
>  {
> -	xen_pvspin = false;
> +	pr_warn("xen_nopvspin is deprecated, replace it with \"pvlock_type=queued\"!\n");
> +	if (!pv_spinlock_type)
> +		pv_spinlock_type = locktype_queued;

Since we currently end up using unfair locks, and because you are deprecating xen_nopvspin, I wonder whether it would be better to set this to locktype_unfair so that the current behavior doesn't change. (Sorry, I hadn't responded to your earlier message before you posted this.) Juergen?

I am also not sure I agree with making pv_spinlock_type an enum *and* a bitmask at the same time. I understand that it makes checks easier, but I think not assuming a value or a pattern would be better, especially since none of the uses is on a critical path. (For example, !pv_spinlock_type is the same as locktype_auto, which is defined but never used.)

-boris

> 	return 0;
>  }
>  early_param("xen_nopvspin", xen_parse_nopvspin);
> -

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
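Boris's concern — one type used both as an enumeration of states and as a bitmask — can be illustrated with a small standalone sketch. The enum values match the patch; the helper functions are hypothetical, written only to show the two styles of check side by side:

```c
#include <assert.h>

/* Mirrors the patch's enum: the values double as bitmask bits, which
 * is exactly the dual use being questioned. */
enum pv_spinlock_type {
	locktype_auto     = 0,
	locktype_queued   = 1,
	locktype_paravirt = 2,
	locktype_unfair   = 4,
};

/* The bitmask-style check used throughout the patch: either of these
 * two lock types disables the Xen PV spinlock path. */
static int disables_pv_spinlocks(enum pv_spinlock_type t)
{
	return (t & (locktype_queued | locktype_unfair)) != 0;
}

/* Boris's aside: with locktype_auto == 0, "!t" and
 * "t == locktype_auto" are the same test. */
static int is_auto(enum pv_spinlock_type t)
{
	return t == locktype_auto; /* equivalent to !t */
}
```

The alternative Boris hints at would be to compare against explicit enumerators everywhere rather than relying on the values forming a bit pattern.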
Re: [Xen-devel] [RFC] ARM: New (Xen) VGIC design document
On Wed, 1 Nov 2017, Andre Przywara wrote:
> Hi Stefano,
>
> On 01/11/17 01:58, Stefano Stabellini wrote:
> > On Wed, 11 Oct 2017, Andre Przywara wrote:
>
> many thanks for going through all of this!

No problem, and thanks for your work and for caring about doing the best thing for the project.

> >> (CC:ing some KVM/ARM folks involved in the VGIC)
> >>
> >> Starting with the addition of the ITS support we were seeing more and more issues with the current implementation of our ARM Generic Interrupt Controller (GIC) emulation, the VGIC. Among other approaches to fix those issues it was proposed to copy the VGIC emulation used in KVM. That one was suffering from very similar issues, and a clean design from scratch led to a very robust and capable re-implementation. Interestingly, this implementation is fairly self-contained, so it seems feasible to copy it. Hopefully we only need minor adjustments; possibly we can even copy it verbatim with some additional glue-layer code.
> >>
> >> Stefano asked for a design overview, to assess the feasibility of copying the KVM code without reviewing tons of code in the first place. So, to follow Xen rules for new features, the design document below is an attempt to describe the current KVM VGIC design - in a hypervisor-agnostic fashion. It is a bit of a retro-fit design description, as it is not strictly forward-looking only, but actually describes the existing implementation [1].
> >>
> >> Please have a look and let me know:
> >> 1) if this document has the right scope
> >> 2) if this document has the right level of detail
> >> 3) if there are points missing from the document
> >> 4) if the design in general is a fit
>
> > Please read the following statements as genuine questions and concerns. Most ideas in this document are good. Some of them I have even suggested myself in the context of GIC improvements for Xen. I asked for a
I asked for a > > couple of clarifications. > > > > But I don't see why we cannot implement these ideas on top of the > > existing code, rather than with a separate codebase, ending up with two > > drivers. I would prefer a natual evolution. Specifically, the following > > improvements would be simple and would give us most of the benefits on > > top of the current codebase: > > - adding the irq lock, and the refcount > > - taking both vcpu locks when necessary (on migration code for example > > it would help a lot), the lower vcpu_id first > > - level irq emulation > > I think some of those points you mentioned are not easily implemented in > the current Xen. For instance I ran into locking order issues with those > *two* inflight and lr_queue lists, when trying to implement the lock and > the refcount. > Also this "put vIRQs into LRs early, but possibly rip them out again" is > really complicating things a lot. > > I believe only level IRQs could be added in a relatively straight > forward manner. > > So the problem with the evolutionary approach is that it generates a lot > of patches, some of them quite invasive, others creating hard-to-read > diffs, which are both hard to review. > And chances are that the actual result would be pretty close to the KVM > code. To be clear: I hacked the Xen VGIC into the KVM direction in a few > days some months ago, but it took me *weeks* to make sane patches of > only the first part of it. > And this would not cover all those general, tedious corner cases that > the VGIC comes with. Those would need to be fixed in a painful process, > which we could avoid by "lifting" the KVM code. I hear you, but the principal cost here is the review time, not the development time. Julien told me that it would be pretty much the same for him in terms of time it takes to review the changes, it doesn't matter if it's a new driver or changes to the existing driver. 
For me, it wouldn't be the same: I think it would take me far less time to review them if they were against the existing codebase. However, as I wrote, this is not my foremost concern. I would be up to committing myself to review this even if we decide to go for a new driver. > > If we do end up with a second separate driver for technical or process > > reasons, I would expect the regular Xen submission/review process to be > > followed. The code style will be different, the hooks into the rest of > > the hypervisors will be different and things will be generally changed. > > The new V/GIC might be derived from KVM, but it should end up looking > > and feeling like a 100% genuine Xen component. After all, we'll > > maintain it going forward. I don't want a copy of a Linux driver with > > glue code. The Xen community cannot be expected not to review the > > submission, but if we review it, then we'll ask for changes. Once we > > change the code, there will be no point in keeping the Linux code > > separate with glue code. We should fully adapt it to Xen. > >
[Xen-devel] [xen-unstable test] 115464: regressions - FAIL
flight 115464 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115464/

Regressions :-(

Tests which did not succeed and are blocking, including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail REGR. vs. 114644
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail REGR. vs. 114644
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail REGR. vs. 114644

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds 6 xen-install fail in 115378 pass in 115464
 test-amd64-i386-xl-raw 19 guest-start/debian.repeat fail in 115378 pass in 115464
 test-armhf-armhf-xl-credit2 16 guest-start/debian.repeat fail in 115378 pass in 115464
 test-armhf-armhf-xl 6 xen-install fail in 115401 pass in 115464
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore.2 fail in 115401 pass in 115464
 test-amd64-amd64-xl-qcow2 19 guest-start/debian.repeat fail pass in 115378
 test-amd64-i386-libvirt-qcow2 17 guest-start/debian.repeat fail pass in 115401
 test-amd64-amd64-rumprun-amd64 17 rumprun-demo-xenstorels/xenstorels.repeat fail pass in 115450
 test-amd64-amd64-i386-pvgrub 19 guest-start/debian.repeat fail pass in 115450
 test-amd64-amd64-libvirt-vhd 17 guest-start/debian.repeat fail pass in 115450
 test-armhf-armhf-xl-vhd 15 guest-start/debian.repeat fail pass in 115450

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail like 114644
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 114644
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 114644
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 114644
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 114644
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 114644
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 114644
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start fail never
pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 xen bb2c1a1cc98a22e2d4c14b18421aa7be6c2adf0d
baseline version:
 xen
[Xen-devel] [PATCH-tip v2 0/2] x86/paravirt: Enable users to choose PV lock type
v1->v2:
 - Make pv_spinlock_type a bit mask for easier checking.
 - Add patch 2 to deprecate xen_nopvspin

v1 - https://lkml.org/lkml/2017/11/1/381

Patch 1 adds a new pvlock_type parameter for administrators to specify the type of lock to be used in a para-virtualized kernel.

Patch 2 deprecates Xen's xen_nopvspin parameter as it is no longer needed.

Waiman Long (2):
  x86/paravirt: Add kernel parameter to choose paravirt lock type
  x86/xen: Deprecate xen_nopvspin

 Documentation/admin-guide/kernel-parameters.txt | 11 ---
 arch/x86/include/asm/paravirt.h | 9 ++
 arch/x86/kernel/kvm.c | 3 ++
 arch/x86/kernel/paravirt.c | 40 -
 arch/x86/xen/spinlock.c | 17 +--
 5 files changed, 65 insertions(+), 15 deletions(-)

--
1.8.3.1

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
[Xen-devel] [PATCH-tip v2 1/2] x86/paravirt: Add kernel parameter to choose paravirt lock type
Currently, there are 3 different lock types that can be chosen for the x86 architecture:
 - qspinlock
 - pvqspinlock
 - unfair lock

One of the above lock types will be chosen at boot time depending on a number of different factors. Ideally, the hypervisors should be able to pick the best-performing lock type for the current VM configuration. That is not currently the case, as the performance of each lock type is affected by many different factors, like the number of vCPUs in the VM, the amount of vCPU over-commitment, the CPU type and so on.

Generally speaking, unfair lock performs well for VMs with a small number of vCPUs. Native qspinlock may perform better than pvqspinlock if there is vCPU pinning and no vCPU over-commitment.

This patch adds a new kernel parameter to allow administrators to choose the paravirt spinlock type to be used. VM administrators can experiment with the different lock types and choose the one that best suits their needs, if they want to. Hypervisor developers can also use it to experiment with different lock types so that they can come up with a better algorithm to pick the best lock type.

The hypervisor paravirt spinlock code will override this new parameter in determining if pvqspinlock should be used. The parameter, however, will override Xen's xen_nopvspin in terms of disabling unfair lock.

Signed-off-by: Waiman Long
---
 Documentation/admin-guide/kernel-parameters.txt | 7 +
 arch/x86/include/asm/paravirt.h | 9 ++
 arch/x86/kernel/kvm.c | 3 ++
 arch/x86/kernel/paravirt.c | 40 -
 arch/x86/xen/spinlock.c | 12
 5 files changed, 65 insertions(+), 6 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f7df49d..c98d9c7 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3275,6 +3275,13 @@
 	[KNL] Number of legacy pty's. Overwrites compiled-in default number.
+ pvlock_type=[X86,PV_OPS] + Specify the paravirt spinlock type to be used. + Options are: + queued - native queued spinlock + pv - paravirt queued spinlock + unfair - simple TATAS unfair lock + quiet [KNL] Disable most log messages r128= [HW,DRM] diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h index 12deec7..c8f9ad9 100644 --- a/arch/x86/include/asm/paravirt.h +++ b/arch/x86/include/asm/paravirt.h @@ -690,6 +690,15 @@ static __always_inline bool pv_vcpu_is_preempted(long cpu) #endif /* SMP && PARAVIRT_SPINLOCKS */ +enum pv_spinlock_type { + locktype_auto = 0, + locktype_queued = 1, + locktype_paravirt = 2, + locktype_unfair = 4, +}; + +extern enum pv_spinlock_type pv_spinlock_type; + #ifdef CONFIG_X86_32 #define PV_SAVE_REGS "pushl %ecx; pushl %edx;" #define PV_RESTORE_REGS "popl %edx; popl %ecx;" diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index 8bb9594..3faee63 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -646,6 +646,9 @@ void __init kvm_spinlock_init(void) if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) return; + if (pv_spinlock_type & (locktype_queued|locktype_unfair)) + return; + __pv_init_lock_hash(); pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath; pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock); diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c index 041096b..aee6756 100644 --- a/arch/x86/kernel/paravirt.c +++ b/arch/x86/kernel/paravirt.c @@ -115,11 +115,48 @@ unsigned paravirt_patch_jmp(void *insnbuf, const void *target, return 5; } +/* + * The kernel argument "pvlock_type=" can be used to explicitly specify + * which type of spinlocks to be used. 
Currently, there are 3 options: + * 1) queued - the native queued spinlock + * 2) pv - the paravirt queued spinlock (if CONFIG_PARAVIRT_SPINLOCKS) + * 3) unfair - the simple TATAS unfair lock + * + * If this argument is not specified, the kernel will automatically choose + * an appropriate one depending on X86_FEATURE_HYPERVISOR and hypervisor + * specific settings. + */ +enum pv_spinlock_type __read_mostly pv_spinlock_type = locktype_auto; + +static int __init pvlock_setup(char *s) +{ + if (!s) + return -EINVAL; + + if (!strcmp(s, "queued")) + pv_spinlock_type = locktype_queued; + else if (!strcmp(s, "pv")) + pv_spinlock_type = locktype_paravirt; + else if (!strcmp(s, "unfair")) + pv_spinlock_type = locktype_unfair; + else
[Xen-devel] [PATCH-tip v2 2/2] x86/xen: Deprecate xen_nopvspin
With the new pvlock_type kernel parameter, xen_nopvspin is no longer needed. This patch deprecates the xen_nopvspin parameter by removing its documentation and treating it as an alias of "pvlock_type=queued".

Signed-off-by: Waiman Long
---
 Documentation/admin-guide/kernel-parameters.txt | 4
 arch/x86/xen/spinlock.c | 19 +++
 2 files changed, 7 insertions(+), 16 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index c98d9c7..683a817 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4596,10 +4596,6 @@
 			the unplug protocol
 			never -- do not unplug even if version check succeeds

-	xen_nopvspin	[X86,XEN]
-			Disables the ticketlock slowpath using Xen PV
-			optimizations.
-
 	xen_nopv	[X86]
 			Disables the PV optimizations forcing the HVM guest to
 			run as generic HVM guest with no PV drivers.

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index d5f79ac..19e2e75 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -20,7 +20,6 @@
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(char *, irq_name);
-static bool xen_pvspin = true;

 #include

@@ -81,12 +80,8 @@ void xen_init_lock_cpu(int cpu)
 	int irq;
 	char *name;

-	if (!xen_pvspin ||
-	    (pv_spinlock_type & (locktype_queued|locktype_unfair))) {
-		if ((cpu == 0) && !pv_spinlock_type)
-			static_branch_disable(&virt_spin_lock_key);
+	if (pv_spinlock_type & (locktype_queued|locktype_unfair))
 		return;
-	}

 	WARN(per_cpu(lock_kicker_irq, cpu) >= 0,
 	     "spinlock on CPU%d exists on IRQ%d!\n",
 	     cpu, per_cpu(lock_kicker_irq, cpu));

@@ -110,8 +105,7 @@ void xen_uninit_lock_cpu(int cpu)
 {
-	if (!xen_pvspin ||
-	    (pv_spinlock_type & (locktype_queued|locktype_unfair)))
+	if (pv_spinlock_type & (locktype_queued|locktype_unfair))
 		return;

 	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);

@@ -132,8 +126,7 @@ void xen_uninit_lock_cpu(int cpu)
  */
 void __init xen_init_spinlocks(void)
 {
-	if (!xen_pvspin ||
-	    (pv_spinlock_type & (locktype_queued|locktype_unfair))) {
+	if (pv_spinlock_type & (locktype_queued|locktype_unfair)) {
 		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
 		return;
 	}

@@ -147,10 +140,12 @@ void __init xen_init_spinlocks(void)
 	pv_lock_ops.vcpu_is_preempted = PV_CALLEE_SAVE(xen_vcpu_stolen);
 }

+/* TODO: To be removed in a future kernel version */
 static __init int xen_parse_nopvspin(char *arg)
 {
-	xen_pvspin = false;
+	pr_warn("xen_nopvspin is deprecated, replace it with \"pvlock_type=queued\"!\n");
+	if (!pv_spinlock_type)
+		pv_spinlock_type = locktype_queued;
 	return 0;
 }
 early_param("xen_nopvspin", xen_parse_nopvspin);
-
--
1.8.3.1

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH] scripts: warn about invalid MAINTAINERS patterns
On Wed, 2017-11-01 at 16:05 -0400, Augie Fackler wrote:
> > On Nov 1, 2017, at 13:11, Tom Saeger wrote:
> >
> > On Wed, Nov 01, 2017 at 09:50:05AM -0700, Joe Perches wrote:
> > > (add mercurial-devel and xen-devel to cc's)
> > >
> > > On Tue, 2017-10-31 at 16:37 -0500, Tom Saeger wrote:
> > > > Add "--pattern-checks" option to get_maintainer.pl to warn about invalid "F" and "X" patterns found in MAINTAINERS file(s).
> > >
> > > Hey again Tom.
> > >
> > > About mercurial/hg.
> > >
> > > While as far as I know there hasn't been a mercurial tree for the linux kernel sources in many years, I believe the mercurial command to list files should be different.
> > >
> > > > my %VCS_cmds_hg = (
> > > > @@ -167,6 +169,7 @@ my %VCS_cmds_hg = (
> > > >     "subject_pattern" => "^HgSubject: (.*)",
> > > >     "stat_pattern" => "^(\\d+)\t(\\d+)\t\$file\$",
> > > >     "file_exists_cmd" => "hg files \$file",
> > > > +   "list_files_cmd" => "hg files \$file",
> > >
> > > I think this should be
> > >
> > >     "list_files_cmd" => "hg manifest -R \$file",
> >
> > Ok - I'll add to v2.
>
> Actually, I'd recommend `hg files` over `hg manifest` by a wide margin.

Why?

hg files -R prefixes all the output; hg manifest -R output is unprefixed.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH v5 1/1] xen/time: do not decrease steal time after live migration on xen
Hi Dongli,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on xen-tip/linux-next]
[also build test ERROR on v4.14-rc7 next-20171018]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Dongli-Zhang/xen-time-do-not-decrease-steal-time-after-live-migration-on-xen/20171102-011408
base:   https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git linux-next
config: arm64-defconfig (attached as .config)
compiler: aarch64-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=arm64

All error/warnings (new ones prefixed by >>):

   drivers//xen/time.c: In function 'xen_accumulate_runstate_time':
>> drivers//xen/time.c:92:20: error: implicit declaration of function 'kcalloc' [-Werror=implicit-function-declaration]
      runstate_delta = kcalloc(num_possible_cpus(),
>> drivers//xen/time.c:92:18: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
      runstate_delta = kcalloc(num_possible_cpus(),
>> drivers//xen/time.c:128:3: error: implicit declaration of function 'kfree' [-Werror=implicit-function-declaration]
      kfree(runstate_delta);
   cc1: some warnings being treated as errors

vim +/kcalloc +92 drivers//xen/time.c

    82
    83	void xen_accumulate_runstate_time(int action)
    84	{
    85		struct vcpu_runstate_info state;
    86		int cpu, i;
    87
    88		switch (action) {
    89		case -1: /* backup runstate time before suspend */
    90			WARN_ON_ONCE(unlikely(runstate_delta));
    91
  > 92			runstate_delta = kcalloc(num_possible_cpus(),
    93						 sizeof(*runstate_delta),
    94						 GFP_KERNEL);
    95			if (unlikely(!runstate_delta)) {
    96				pr_alert("%s: failed to allocate runstate_delta\n",
    97					 __func__);
    98				return;
    99			}
   100
   101			for_each_possible_cpu(cpu) {
   102				xen_get_runstate_snapshot_cpu_delta(&state, cpu);
   103				memcpy(runstate_delta[cpu].time, state.time,
   104				       RUNSTATE_max * sizeof(*runstate_delta[cpu].time));
   105			}
   106
   107			break;
   108
   109		case 0: /* backup runstate time after resume */
   110			if (unlikely(!runstate_delta)) {
   111				pr_alert("%s: cannot accumulate runstate time as runstate_delta is NULL\n",
   112					 __func__);
   113				return;
   114			}
   115
   116			for_each_possible_cpu(cpu) {
   117				for (i = 0; i < RUNSTATE_max; i++)
   118					per_cpu(old_runstate_time, cpu)[i] +=
   119						runstate_delta[cpu].time[i];
   120			}
   121			break;
   122
   123		default: /* do not accumulate runstate time for checkpointing */
   124			break;
   125		}
   126
   127		if (action != -1 && runstate_delta) {
 > 128			kfree(runstate_delta);
   129			runstate_delta = NULL;
   130		}
   131	}
   132

---
0-DAY kernel test infrastructure              Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH] x86/paravirt: Add kernel parameter to choose paravirt lock type
On 11/01/2017 03:01 PM, Boris Ostrovsky wrote:
> On 11/01/2017 12:28 PM, Waiman Long wrote:
>> On 11/01/2017 11:51 AM, Juergen Gross wrote:
>>> On 01/11/17 16:32, Waiman Long wrote:
>>>> Currently, there are 3 different lock types that can be chosen for the x86 architecture:
>>>> - qspinlock
>>>> - pvqspinlock
>>>> - unfair lock
>>>>
>>>> One of the above lock types will be chosen at boot time depending on a number of different factors. Ideally, the hypervisors should be able to pick the best performing lock type for the current VM configuration. That is not currently the case as the performance of each lock type is affected by many different factors like the number of vCPUs in the VM, the amount of vCPU overcommitment, the CPU type and so on.
>>>>
>>>> Generally speaking, unfair lock performs well for VMs with a small number of vCPUs. Native qspinlock may perform better than pvqspinlock if there is vCPU pinning and there is no vCPU over-commitment.
>>>>
>>>> This patch adds a new kernel parameter to allow administrators to choose the paravirt spinlock type to be used. VM administrators can experiment with the different lock types and choose one that can best suit their need, if they want to. Hypervisor developers can also use that to experiment with different lock types so that they can come up with a better algorithm to pick the best lock type.
>>>>
>>>> The hypervisor paravirt spinlock code will override this new parameter in determining if pvqspinlock should be used. The parameter, however, will override Xen's xen_nopvspin in terms of disabling unfair lock.
>>>
>>> Hmm, I'm not sure we need pvlock_type _and_ xen_nopvspin. What do others think?
>>
>> I don't think we need xen_nopvspin, but I don't want to remove that without agreement from the community.
>
> I also don't think xen_nopvspin will be needed after pvlock_type is introduced.
>
> -boris

Another reason that I didn't try to remove xen_nopvspin is the backward-compatibility concern. One way to handle it is to deprecate it and treat it as an alias of pvlock_type=queued.
Cheers, Longman ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH 0/3] x86/svm: virtual VMLOAD/VMSAVE
On 01/11/17 17:00, Boris Ostrovsky wrote:
> On 10/31/2017 06:03 PM, brian.wo...@amd.com wrote:
>> From: Brian Woods
>>
>> x86/svm: virtual VMLOAD/VMSAVE
>>
>> On AMD family 17h server processors, there is a feature called virtual VMLOAD/VMSAVE. This allows a nested hypervisor to perform a VMLOAD or VMSAVE without needing to be intercepted by the host hypervisor. Virtual VMLOAD/VMSAVE requires the host hypervisor to be in long mode and nested page tables to be enabled. For more information about it please see:
>>
>> AMD64 Architecture Programmer’s Manual Volume 2: System Programming
>> http://support.amd.com/TechDocs/24593.pdf
>> Section: VMSAVE and VMLOAD Virtualization (Section 15.33.1)
>>
>> This patch series adds support to check for and enable the virtual VMLOAD/VMSAVE features if available.
>>
>> Signed-off-by: Brian Woods
>>
>> Brian Woods (3):
>>   x86/svm: rename lbr control field in vmcb
>>   x86/svm: add virtual VMLOAD/VMSAVE feature definition
>>   x86/svm: add virtual VMLOAD/VMSAVE support
>>
>>  xen/arch/x86/hvm/svm/nestedsvm.c        | 10 +-
>>  xen/arch/x86/hvm/svm/svm.c              |  3 ++-
>>  xen/arch/x86/hvm/svm/svmdebug.c         |  2 ++
>>  xen/arch/x86/hvm/svm/vmcb.c             |  7 +++
>>  xen/include/asm-x86/hvm/svm/nestedsvm.h |  4 ++--
>>  xen/include/asm-x86/hvm/svm/svm.h       |  2 ++
>>  xen/include/asm-x86/hvm/svm/vmcb.h      |  7 ---
>>  7 files changed, 24 insertions(+), 11 deletions(-)
>
> Reviewed-by: Boris Ostrovsky

I've given these a spin on my Zen box, although nothing nested-virt specific. I've pulled the series into x86-next.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
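The enablement condition the cover letter describes — feature bit present, host in long mode, nested paging on — can be sketched as a tiny bit-decoding helper. The feature bits live in CPUID Fn8000_000A_EDX; bit 0 (NPT) is from the AMD APM, and bit 15 for virtual VMLOAD/VMSAVE matches Linux's X86_FEATURE_V_VMSAVE_VMLOAD definition, but both positions should be verified against the APM revision for the target CPU (the helper itself is illustrative, not Xen code):

```c
#include <assert.h>
#include <stdint.h>

/* Feature bits in CPUID Fn8000_000A_EDX (positions as noted above). */
#define SVM_FEAT_NPT             (1u << 0)
#define SVM_FEAT_V_VMSAVE_VMLOAD (1u << 15)

/* Per the series description, virtual VMLOAD/VMSAVE may only be used
 * when the host hypervisor runs in long mode and nested page tables
 * are enabled, in addition to the CPUID bit being set. */
static int can_enable_vvmload_vmsave(uint32_t edx, int host_long_mode)
{
	return host_long_mode &&
	       (edx & SVM_FEAT_NPT) != 0 &&
	       (edx & SVM_FEAT_V_VMSAVE_VMLOAD) != 0;
}
```

On real hardware the EDX value would come from executing CPUID with EAX=0x8000000A; here it is passed in so the gating logic can be exercised with sample values.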
Re: [Xen-devel] [PATCH] x86/cpuid: Enable new SSE/AVX/AVX512 cpu features
On Fri, Oct 27, 2017 at 10:18:04PM +0800, Yang Zhong wrote: > Intel IceLake cpu has added new cpu features: AVX512VBMI2/GFNI/ > VAES/AVX512VNNI/AVX512BITALG/VPCLMULQDQ. Those new cpu features > need expose to guest.wq s/wq// > > The bit definition: > CPUID.(EAX=7,ECX=0):ECX[bit 06] AVX512VBMI2 > CPUID.(EAX=7,ECX=0):ECX[bit 08] GFNI > CPUID.(EAX=7,ECX=0):ECX[bit 09] VAES > CPUID.(EAX=7,ECX=0):ECX[bit 10] VPCLMULQDQ > CPUID.(EAX=7,ECX=0):ECX[bit 11] AVX512VNNI > CPUID.(EAX=7,ECX=0):ECX[bit 12] AVX512_BITALG > > The release document ref below link: > https://software.intel.com/sites/default/files/managed/c5/15/\ > architecture-instruction-set-extensions-programming-reference.pdf Ah! Thank you! > > Signed-off-by: Yang Zhong> --- > docs/man/xl.cfg.pod.5.in| 3 ++- > tools/libxl/libxl_cpuid.c | 6 ++ > tools/misc/xen-cpuid.c | 13 +++-- > xen/include/public/arch-x86/cpufeatureset.h | 6 ++ > xen/tools/gen-cpuid.py | 3 ++- > 5 files changed, 23 insertions(+), 8 deletions(-) > > diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in > index b7b91d8..d056768 100644 > --- a/docs/man/xl.cfg.pod.5.in > +++ b/docs/man/xl.cfg.pod.5.in > @@ -1731,7 +1731,8 @@ perfctr_core perfctr_nb pge pku popcnt pse pse36 psn > rdrand rdseed rdtscp rtm > sha skinit smap smep smx ss sse sse2 sse3 sse4.1 sse4.2 sse4_1 sse4_2 sse4a > ssse3 svm svm_decode svm_lbrv svm_npt svm_nrips svm_pausefilt svm_tscrate > svm_vmcbclean syscall sysenter tbm tm tm2 topoext tsc tsc-deadline tsc_adjust > -umip vme vmx wdt x2apic xop xsave xtpr > +umip vme vmx wdt x2apic xop xsave xtpr avx512_vbmi2 gfni vaes vpclmulqdq > +avx512_vnni avx512_bitalg > > > The xend syntax is a list of values in the form of > diff --git a/tools/libxl/libxl_cpuid.c b/tools/libxl/libxl_cpuid.c > index e692b61..614991f 100644 > --- a/tools/libxl/libxl_cpuid.c > +++ b/tools/libxl/libxl_cpuid.c > @@ -199,6 +199,12 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list > *cpuid, const char* str) > {"umip", 0x0007, 0, CPUID_REG_ECX, 2, 
1}, > {"pku", 0x0007, 0, CPUID_REG_ECX, 3, 1}, > {"ospke",0x0007, 0, CPUID_REG_ECX, 4, 1}, > +{"avx512_vbmi2", 0x0007, 0, CPUID_REG_ECX, 6, 1}, > +{"gfni", 0x0007, 0, CPUID_REG_ECX, 8, 1}, > +{"vaes", 0x0007, 0, CPUID_REG_ECX, 9, 1}, > +{"vpclmulqdq", 0x0007, 0, CPUID_REG_ECX, 10, 1}, > +{"avx512_vnni", 0x0007, 0, CPUID_REG_ECX, 11, 1}, > +{"avx512_bitalg",0x0007, 0, CPUID_REG_ECX, 12, 1}, > > {"avx512-4vnniw",0x0007, 0, CPUID_REG_EDX, 2, 1}, > {"avx512-4fmaps",0x0007, 0, CPUID_REG_EDX, 3, 1}, > diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c > index 106be0f..985deea 100644 > --- a/tools/misc/xen-cpuid.c > +++ b/tools/misc/xen-cpuid.c > @@ -120,12 +120,13 @@ static const char *str_Da1[32] = > > static const char *str_7c0[32] = > { > -[ 0] = "prechwt1", [ 1] = "avx512vbmi", > -[ 2] = "REZ", [ 3] = "pku", > -[ 4] = "ospke", > - > -[5 ... 13] = "REZ", > - > +[ 0] = "prechwt1", [ 1] = "avx512vbmi", > +[ 2] = "REZ", [ 3] = "pku", > +[ 4] = "ospke",[ 5] = "REZ", > +[ 6] = "avx512_vbmi2", [ 7] = "REZ", > +[ 8] = "gfni", [ 9] = "vaes", > +[10] = "vpclmulqdq", [11] = "avx512_vnni", > +[12] = "avx512_bitalg",[13] = "REZ", > [14] = "avx512_vpopcntdq", > > [15 ... 31] = "REZ", > diff --git a/xen/include/public/arch-x86/cpufeatureset.h > b/xen/include/public/arch-x86/cpufeatureset.h > index 0ee3ea3..bb24b79 100644 > --- a/xen/include/public/arch-x86/cpufeatureset.h > +++ b/xen/include/public/arch-x86/cpufeatureset.h > @@ -228,6 +228,12 @@ XEN_CPUFEATURE(AVX512VBMI,6*32+ 1) /*A AVX-512 > Vector Byte Manipulation Ins > XEN_CPUFEATURE(UMIP, 6*32+ 2) /*S User Mode Instruction Prevention > */ > XEN_CPUFEATURE(PKU, 6*32+ 3) /*H Protection Keys for Userspace */ > XEN_CPUFEATURE(OSPKE, 6*32+ 4) /*! 
OS Protection Keys Enable */ > +XEN_CPUFEATURE(AVX512_VBMI2, 6*32+ 6) /*A addition AVX-512 VBMI > Instructions */ > +XEN_CPUFEATURE(GFNI, 6*32+ 8) /*A Galois Field New Instructions */ > +XEN_CPUFEATURE(VAES, 6*32+ 9) /*A Vector AES instructions */ > +XEN_CPUFEATURE(VPCLMULQDQ,6*32+ 10) /*A vector PCLMULQDQ instructions */ > +XEN_CPUFEATURE(AVX512_VNNI, 6*32+ 11) /*A Vector Neural Network > Instructions */ > +XEN_CPUFEATURE(AVX512_BITALG, 6*32+ 12) /*A support for VPOPCNT[B,W] and > VPSHUFBITQMB*/ > XEN_CPUFEATURE(AVX512_VPOPCNTDQ, 6*32+14) /*A POPCNT for vectors of DW/QW */ > XEN_CPUFEATURE(RDPID, 6*32+22) /*A RDPID instruction */ > > diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py > index 9ec4486..be8df48
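The new leaf-7 ECX feature bits listed in the patch above can be probed from a raw CPUID.(EAX=7,ECX=0):ECX value with plain bit tests. The sketch below is illustrative only — the enum names are made up for this example; Xen's real definitions are the XEN_CPUFEATURE entries shown in the diff:

```c
#include <assert.h>
#include <stdint.h>

/* CPUID.(EAX=7,ECX=0):ECX bit positions from the patch above.
 * Names here are illustrative, not Xen's actual identifiers. */
enum {
    X86_FEAT_AVX512_VBMI2  = 6,
    X86_FEAT_GFNI          = 8,
    X86_FEAT_VAES          = 9,
    X86_FEAT_VPCLMULQDQ    = 10,
    X86_FEAT_AVX512_VNNI   = 11,
    X86_FEAT_AVX512_BITALG = 12,
};

/* Returns nonzero when the given feature bit is set in a leaf-7 ECX value. */
static int has_feature(uint32_t ecx, int bit)
{
    return (ecx >> bit) & 1u;
}
```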
Re: [Xen-devel] [PATCH v6 1/1] xen/time: do not decrease steal time after live migration on xen
On 10/31/2017 09:46 PM, Dongli Zhang wrote: > After guest live migration on xen, steal time in /proc/stat > (cpustat[CPUTIME_STEAL]) might decrease because steal returned by > xen_steal_lock() might be less than this_rq()->prev_steal_time which is > derived from previous return value of xen_steal_clock(). > > For instance, steal time of each vcpu is 335 before live migration. > > cpu 198 0 368 200064 1962 0 0 1340 0 0 > cpu0 38 0 81 50063 492 0 0 335 0 0 > cpu1 65 0 97 49763 634 0 0 335 0 0 > cpu2 38 0 81 50098 462 0 0 335 0 0 > cpu3 56 0 107 50138 374 0 0 335 0 0 > > After live migration, steal time is reduced to 312. > > cpu 200 0 370 200330 1971 0 0 1248 0 0 > cpu0 38 0 82 50123 500 0 0 312 0 0 > cpu1 65 0 97 49832 634 0 0 312 0 0 > cpu2 39 0 82 50167 462 0 0 312 0 0 > cpu3 56 0 107 50207 374 0 0 312 0 0 > > Since runstate times are cumulative and cleared during xen live migration > by xen hypervisor, the idea of this patch is to accumulate runstate times > to global percpu variables before live migration suspend. Once guest VM is > resumed, xen_get_runstate_snapshot_cpu() would always return the sum of new > runstate times and previously accumulated times stored in global percpu > variables. Comments before the call of HYPERVISOR_suspend() has been > removed as it is inaccurate. The call can return an error code (e.g., > possibly -EPERM in the future). I'd like split comment removal bit into a separate paragraph. I can do this when committing if you don't mind. > > Similar and more severe issue would impact prior linux 4.8-4.10 as > discussed by Michael Las at > https://0xstubs.org/debugging-a-flaky-cpu-steal-time-counter-on-a-paravirtualized-xen-guest, > which would overflow steal time and lead to 100% st usage in top command > for linux 4.8-4.10. A backport of this patch would fix that issue. 
> > References: > https://0xstubs.org/debugging-a-flaky-cpu-steal-time-counter-on-a-paravirtualized-xen-guest > Signed-off-by: Dongli Zhang> > --- Reviewed-by: Boris Ostrovsky ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
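The idea of the fix can be modelled in a few lines: accumulate the runstate-reported steal time into a saved total just before suspend, and add that total back to every reading after resume, so the clock never runs backwards even though the hypervisor clears the runstate counters. This is a toy single-counter model under assumed names, not the actual kernel patch (which keeps per-vCPU percpu variables):

```c
#include <assert.h>
#include <stdint.h>

/* Steal time accumulated across previous live migrations
 * (per-vCPU in the real patch). */
static uint64_t saved_steal;

/* What /proc/stat-style accounting would read: the raw runstate value
 * reported by the hypervisor plus everything saved before migrations. */
static uint64_t steal_clock(uint64_t raw_runstate_stolen)
{
    return raw_runstate_stolen + saved_steal;
}

/* Called just before HYPERVISOR_suspend(): bank the current raw value,
 * because the hypervisor will reset it to zero across the migration. */
static void before_suspend(uint64_t raw_runstate_stolen)
{
    saved_steal += raw_runstate_stolen;
}
```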
Re: [Xen-devel] [PATCH] x86/paravirt: Add kernel parameter to choose paravirt lock type
On 11/01/2017 12:28 PM, Waiman Long wrote: > On 11/01/2017 11:51 AM, Juergen Gross wrote: >> On 01/11/17 16:32, Waiman Long wrote: >>> Currently, there are 3 different lock types that can be chosen for >>> the x86 architecture: >>> >>> - qspinlock >>> - pvqspinlock >>> - unfair lock >>> >>> One of the above lock types will be chosen at boot time depending on >>> a number of different factors. >>> >>> Ideally, the hypervisors should be able to pick the best performing >>> lock type for the current VM configuration. That is not currently >>> the case as the performance of each lock type are affected by many >>> different factors like the number of vCPUs in the VM, the amount vCPU >>> overcommitment, the CPU type and so on. >>> >>> Generally speaking, unfair lock performs well for VMs with a small >>> number of vCPUs. Native qspinlock may perform better than pvqspinlock >>> if there is vCPU pinning and there is no vCPU over-commitment. >>> >>> This patch adds a new kernel parameter to allow administrator to >>> choose the paravirt spinlock type to be used. VM administrators can >>> experiment with the different lock types and choose one that can best >>> suit their need, if they want to. Hypervisor developers can also use >>> that to experiment with different lock types so that they can come >>> up with a better algorithm to pick the best lock type. >>> >>> The hypervisor paravirt spinlock code will override this new parameter >>> in determining if pvqspinlock should be used. The parameter, however, >>> will override Xen's xen_nopvspin in term of disabling unfair lock. >> Hmm, I'm not sure we need pvlock_type _and_ xen_nopvspin. What do others >> think? > I don't think we need xen_nopvspin, but I don't want to remove that > without agreement from the community. I also don't think xen_nopvspin will be needed after pvlock_type is introduced. -boris ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
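As a rough illustration of what such a boot parameter could look like, here is a hypothetical string-to-enum mapping for the three lock types discussed above; the actual parameter spelling and accepted values in Waiman's patch may differ:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical pvlock_type= values; names are invented for this sketch. */
enum pv_lock_type { LOCK_DEFAULT, LOCK_QUEUED, LOCK_PV_QUEUED, LOCK_UNFAIR };

/* Map the parameter string to a lock type; unknown or missing values
 * fall back to the existing boot-time auto-selection. */
static enum pv_lock_type parse_pvlock_type(const char *arg)
{
    if (arg == NULL)
        return LOCK_DEFAULT;
    if (strcmp(arg, "queued") == 0)
        return LOCK_QUEUED;
    if (strcmp(arg, "pvqueued") == 0)
        return LOCK_PV_QUEUED;
    if (strcmp(arg, "unfair") == 0)
        return LOCK_UNFAIR;
    return LOCK_DEFAULT;
}
```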
Re: [Xen-devel] [PATCH v3 for-next 3/4] xen/tmem: Convert the file common/tmem_xen.c to use typesafe MFN
On Wed, Nov 01, 2017 at 02:03:15PM +, Julien Grall wrote: > The file common/tmem_xen.c is now converted to use typesafe. This is > requiring to override the macro page_to_mfn to make it work with mfn_t. > > Note that all variables converted to mfn_t havem there initial value, > when set, switch from 0 to INVALID_MFN. This is fine because the initial > values was always overriden before used. > > Also add a couple of missing newlines suggested by Andrew in the code. > > Signed-off-by: Julien Grall> Reviewed-by: Andrew Cooper > > --- > > Cc: Konrad Rzeszutek Wilk Acked-by: Konrad Rzeszutek Wilk But could you confirm that you did compile this on x86 and with CONFIG_TMEM=y in the .config? Thanks! > > Changes in v2: > - Add missing newlines > - Add Andrew's reviewed-by > --- > xen/common/tmem_xen.c | 30 ++ > 1 file changed, 18 insertions(+), 12 deletions(-) > > diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c > index 20f74b268f..bd52e44faf 100644 > --- a/xen/common/tmem_xen.c > +++ b/xen/common/tmem_xen.c > @@ -14,6 +14,10 @@ > #include > #include > > +/* Override macros from asm/page.h to make them work with mfn_t */ > +#undef page_to_mfn > +#define page_to_mfn(pg) _mfn(__page_to_mfn(pg)) > + > bool __read_mostly opt_tmem; > boolean_param("tmem", opt_tmem); > > @@ -31,7 +35,7 @@ static DEFINE_PER_CPU_READ_MOSTLY(unsigned char *, dstmem); > static DEFINE_PER_CPU_READ_MOSTLY(void *, scratch_page); > > #if defined(CONFIG_ARM) > -static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn, > +static inline void *cli_get_page(xen_pfn_t cmfn, mfn_t *pcli_mfn, > struct page_info **pcli_pfp, bool cli_write) > { > ASSERT_UNREACHABLE(); > @@ -39,14 +43,14 @@ static inline void *cli_get_page(xen_pfn_t cmfn, unsigned > long *pcli_mfn, > } > > static inline void cli_put_page(void *cli_va, struct page_info *cli_pfp, > -unsigned long cli_mfn, bool mark_dirty) > +mfn_t cli_mfn, bool mark_dirty) > { > ASSERT_UNREACHABLE(); > } > #else > #include > > -static 
inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn, > +static inline void *cli_get_page(xen_pfn_t cmfn, mfn_t *pcli_mfn, > struct page_info **pcli_pfp, bool cli_write) > { > p2m_type_t t; > @@ -68,16 +72,17 @@ static inline void *cli_get_page(xen_pfn_t cmfn, unsigned > long *pcli_mfn, > > *pcli_mfn = page_to_mfn(page); > *pcli_pfp = page; > -return map_domain_page(_mfn(*pcli_mfn)); > + > +return map_domain_page(*pcli_mfn); > } > > static inline void cli_put_page(void *cli_va, struct page_info *cli_pfp, > -unsigned long cli_mfn, bool mark_dirty) > +mfn_t cli_mfn, bool mark_dirty) > { > if ( mark_dirty ) > { > put_page_and_type(cli_pfp); > -paging_mark_dirty(current->domain, _mfn(cli_mfn)); > +paging_mark_dirty(current->domain, cli_mfn); > } > else > put_page(cli_pfp); > @@ -88,14 +93,14 @@ static inline void cli_put_page(void *cli_va, struct > page_info *cli_pfp, > int tmem_copy_from_client(struct page_info *pfp, > xen_pfn_t cmfn, tmem_cli_va_param_t clibuf) > { > -unsigned long tmem_mfn, cli_mfn = 0; > +mfn_t tmem_mfn, cli_mfn = INVALID_MFN; > char *tmem_va, *cli_va = NULL; > struct page_info *cli_pfp = NULL; > int rc = 1; > > ASSERT(pfp != NULL); > tmem_mfn = page_to_mfn(pfp); > -tmem_va = map_domain_page(_mfn(tmem_mfn)); > +tmem_va = map_domain_page(tmem_mfn); > if ( guest_handle_is_null(clibuf) ) > { > cli_va = cli_get_page(cmfn, _mfn, _pfp, 0); > @@ -125,7 +130,7 @@ int tmem_compress_from_client(xen_pfn_t cmfn, > unsigned char *wmem = this_cpu(workmem); > char *scratch = this_cpu(scratch_page); > struct page_info *cli_pfp = NULL; > -unsigned long cli_mfn = 0; > +mfn_t cli_mfn = INVALID_MFN; > void *cli_va = NULL; > > if ( dmem == NULL || wmem == NULL ) > @@ -152,7 +157,7 @@ int tmem_compress_from_client(xen_pfn_t cmfn, > int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp, > tmem_cli_va_param_t clibuf) > { > -unsigned long tmem_mfn, cli_mfn = 0; > +mfn_t tmem_mfn, cli_mfn = INVALID_MFN; > char *tmem_va, *cli_va = NULL; > struct 
page_info *cli_pfp = NULL; > int rc = 1; > @@ -165,7 +170,8 @@ int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info > *pfp, > return -EFAULT; > } > tmem_mfn = page_to_mfn(pfp); > -tmem_va = map_domain_page(_mfn(tmem_mfn)); > +tmem_va = map_domain_page(tmem_mfn); > + > if ( cli_va ) > { > memcpy(cli_va, tmem_va, PAGE_SIZE); > @@
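The point of the typesafe conversion in this series is easiest to see in reduced form: mfn_t wraps the raw frame number in a struct, so passing a bare unsigned long where an mfn_t is expected becomes a compile error instead of a silent bug. A minimal model (Xen's real definitions are generated by a TYPE_SAFE macro and differ in detail):

```c
#include <assert.h>

/* Minimal model of Xen's typesafe machine frame number. Because mfn_t
 * is a struct, an unsigned long cannot be passed where an mfn_t is
 * expected without going through _mfn(), and vice versa via mfn_x(). */
typedef struct { unsigned long mfn; } mfn_t;

static mfn_t _mfn(unsigned long m) { mfn_t x = { m }; return x; }
static unsigned long mfn_x(mfn_t m) { return m.mfn; }

/* Sentinel used when a variable has no meaningful frame yet. */
#define INVALID_MFN _mfn(~0UL)
```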
Re: [Xen-devel] [PATCH v3 for-next 4/4] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
On 11/01/2017 10:03 AM, Julien Grall wrote: > Most of the users of page_to_mfn and mfn_to_page are either overriding > the macros to make them work with mfn_t or use mfn_x/_mfn because the > rest of the functions use mfn_t. > > So make __page_to_mfn and __mfn_to_page return mfn_t by default. > > Only reasonable clean-ups are done in this patch because it is > already quite big. So some of the files now override page_to_mfn and > mfn_to_page to avoid using mfn_t. > > Lastly, domain_page_to_mfn is also converted to use mfn_t given that > most of the callers are now switched to _mfn(domain_page_to_mfn(...)). > > Signed-off-by: Julien Grall> SVM: Reviewed-by: Boris Ostrovsky
[Xen-devel] [qemu-mainline test] 115468: regressions - FAIL
flight 115468 qemu-mainline real [real] http://logs.test-lab.xenproject.org/osstest/logs/115468/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: build-i3866 xen-buildfail REGR. vs. 114507 build-amd64-xsm 6 xen-buildfail REGR. vs. 114507 build-i386-xsm6 xen-buildfail REGR. vs. 114507 build-amd64 6 xen-buildfail REGR. vs. 114507 build-armhf 6 xen-buildfail REGR. vs. 114507 build-armhf-xsm 6 xen-buildfail REGR. vs. 114507 Tests which did not succeed, but are not blocking: test-amd64-i386-libvirt-qcow2 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1)blocked n/a test-amd64-i386-freebsd10-i386 1 build-check(1) blocked n/a test-amd64-amd64-qemuu-nested-intel 1 build-check(1) blocked n/a test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-armhf-armhf-libvirt 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-win10-i386 1 build-check(1) blocked n/a test-armhf-armhf-libvirt-raw 1 build-check(1) blocked n/a test-amd64-i386-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-multivcpu 1 build-check(1) blocked n/a test-amd64-amd64-libvirt 1 build-check(1) blocked n/a test-amd64-i386-freebsd10-amd64 1 build-check(1) blocked n/a test-amd64-amd64-pair 1 build-check(1) blocked n/a test-armhf-armhf-xl-credit2 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-win10-i386 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a test-amd64-amd64-pygrub 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a test-amd64-amd64-xl-qcow2 1 build-check(1) blocked n/a test-amd64-amd64-amd64-pvgrub 1 build-check(1) blocked n/a test-armhf-armhf-xl-arndale 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-debianhvm-amd64 1 
build-check(1) blocked n/a test-armhf-armhf-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-i386-xl1 build-check(1) blocked n/a build-i386-libvirt1 build-check(1) blocked n/a test-amd64-i386-libvirt-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-vhd 1 build-check(1) blocked n/a test-amd64-amd64-xl-credit2 1 build-check(1) blocked n/a test-armhf-armhf-xl-multivcpu 1 build-check(1) blocked n/a test-amd64-i386-xl-xsm1 build-check(1) blocked n/a build-amd64-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1)blocked n/a test-amd64-i386-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a test-amd64-i386-xl-raw1 build-check(1) blocked n/a test-amd64-i386-qemuu-rhel6hvm-amd 1 build-check(1) blocked n/a test-amd64-amd64-i386-pvgrub 1 build-check(1) blocked n/a build-armhf-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a test-amd64-i386-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-pvhv2-intel 1 build-check(1) blocked n/a test-armhf-armhf-xl 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-xsm 1 build-check(1) blocked n/a test-amd64-i386-qemuu-rhel6hvm-intel 1 build-check(1) blocked n/a test-armhf-armhf-xl-vhd 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a test-amd64-amd64-xl 1 build-check(1) blocked n/a test-amd64-i386-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-pvhv2-amd 1 build-check(1) blocked n/a test-amd64-amd64-qemuu-nested-amd 1 build-check(1) blocked n/a test-armhf-armhf-xl-cubietruck 1 build-check(1) blocked n/a test-armhf-armhf-xl-rtds 1
Re: [Xen-devel] [PATCH v2 1/1] scripts: warn about invalid MAINTAINERS patterns
On Wed, 2017-11-01 at 13:13 -0500, Tom Saeger wrote: > Add "--self-test" option to get_maintainer.pl to show potential > issues in MAINTAINERS file(s) content. This patch subject should be: get_maintainer: Add --self-test for internal consistency tests Andrew, can you please change the subject if/when you add it to quilt? > Pattern check warnings are shown for "F" and "X" patterns found in > MAINTAINERS file(s) which do not match any files known by git. > > Signed-off-by: Tom Saeger> Cc: Joe Perches Otherwise, Acked-by: Joe Perches > --- > > v2: > > Incorporated suggestions from Joe Perches: > - changed "--pattern-checks" to "--self-test" to allow for future work. > - fixed vcs command "list_files_cmd" for mercurial. > - "--self-test" option is all or nothing. > - output to STDOUT > - output format in emacs-style "filename:line: message" > - changed self-test help to: > > --self-test => show potential issues with MAINTAINERS file content > > (Joe, I slightly reworded in hopes this rendition is clear and future proof). > > - Moved execution of $self_test to just after $help and $version. > This prompted encapsulating main content code to read MAINTAINERS files into > a function (read_all_maintainer_files) callable from $self_test. This > has the side benefit of not having to special case for "$self_test" in other > parts > of main program flow. This makes sense to me and is better program flow, thanks. 
cheers, Joe [v2 patch quoted below] > scripts/get_maintainer.pl | 94 > ++- > 1 file changed, 77 insertions(+), 17 deletions(-) > > diff --git a/scripts/get_maintainer.pl b/scripts/get_maintainer.pl > index bc443201d3ef..c68a5d1ba709 100755 > --- a/scripts/get_maintainer.pl > +++ b/scripts/get_maintainer.pl > @@ -57,6 +57,7 @@ my $sections = 0; > my $file_emails = 0; > my $from_filename = 0; > my $pattern_depth = 0; > +my $self_test = 0; > my $version = 0; > my $help = 0; > my $find_maintainer_files = 0; > @@ -138,6 +139,7 @@ my %VCS_cmds_git = ( > "subject_pattern" => "^GitSubject: (.*)", > "stat_pattern" => "^(\\d+)\\t(\\d+)\\t\$file\$", > "file_exists_cmd" => "git ls-files \$file", > +"list_files_cmd" => "git ls-files \$file", > ); > > my %VCS_cmds_hg = ( > @@ -167,6 +169,7 @@ my %VCS_cmds_hg = ( > "subject_pattern" => "^HgSubject: (.*)", > "stat_pattern" => "^(\\d+)\t(\\d+)\t\$file\$", > "file_exists_cmd" => "hg files \$file", > +"list_files_cmd" => "hg manifest -R \$file", > ); > > my $conf = which_conf(".get_maintainer.conf"); > @@ -216,6 +219,14 @@ if (-f $ignore_file) { > close($ignore); > } > > +if ($#ARGV > 0) { > +foreach (@ARGV) { > +if ($_ eq "-self-test" || $_ eq "--self-test") { > +die "$P: using --self-test does not allow any other option or > argument\n"; > +} > +} > +} > + > if (!GetOptions( > 'email!' => \$email, > 'git!' => \$email_git, > @@ -252,6 +263,7 @@ if (!GetOptions( > 'fe|file-emails!' => \$file_emails, > 'f|file' => \$from_filename, > 'find-maintainer-files' => \$find_maintainer_files, > + 'self-test' => \$self_test, > 'v|version' => \$version, > 'h|help|usage' => \$help, > )) { > @@ -268,6 +280,12 @@ if ($version != 0) { > exit 0; > } > > +if ($self_test) { > +read_all_maintainer_files(); > +check_maintainers_patterns(); > +exit 0; > +} > + > if (-t STDIN && !@ARGV) { > # We're talking to a terminal, but have no command line arguments. 
> die "$P: missing patchfile or -f file - use --help if necessary\n"; > @@ -311,12 +329,14 @@ if (!top_of_kernel_tree($lk_path)) { > my @typevalue = (); > my %keyword_hash; > my @mfiles = (); > +my @self_test_pattern_info = (); > > sub read_maintainer_file { > my ($file) = @_; > > open (my $maint, '<', "$file") > or die "$P: Can't open MAINTAINERS file '$file': $!\n"; > +my $i = 1; > while (<$maint>) { > my $line = $_; > > @@ -333,6 +353,9 @@ sub read_maintainer_file { > if ((-d $value)) { > $value =~ s@([^/])$@$1/@; > } > + if ($self_test) { > + push(@self_test_pattern_info, {file=>$file, > line=>$line, linenr=>$i, pat=>$value}); > + } > } elsif ($type eq "K") { > $keyword_hash{@typevalue} = $value; > } > @@ -341,6 +364,7 @@ sub read_maintainer_file { > $line =~ s/\n$//g; > push(@typevalue, $line); > } > + $i++; > } > close($maint); > } > @@ -357,26 +381,30 @@ sub find_ignore_git { > return grep { $_ !~ /^\.git$/; } @_; > } > > -if (-d "${lk_path}MAINTAINERS") { > -opendir(DIR, "${lk_path}MAINTAINERS") or die $!; > -my @files = readdir(DIR); > -
Re: [Xen-devel] [PATCH v2] xen: support priv-mapping in an HVM tools domain
On 11/01/2017 11:37 AM, Juergen Gross wrote: > > TBH I like V1 better, too. > > Boris, do you feel strongly about the #ifdef part? Having looked at what this turned into I now like V1 better too ;-) Sorry, Paul. -boris
[Xen-devel] [PATCH v2 1/1] scripts: warn about invalid MAINTAINERS patterns
Add "--self-test" option to get_maintainer.pl to show potential issues in MAINTAINERS file(s) content. Pattern check warnings are shown for "F" and "X" patterns found in MAINTAINERS file(s) which do not match any files known by git. Signed-off-by: Tom SaegerCc: Joe Perches --- v2: Incorporated suggestions from Joe Perches: - changed "--pattern-checks" to "--self-test" to allow for future work. - fixed vcs command "list_files_cmd" for mercurial. - "--self-test" option is all or nothing. - output to STDOUT - output format in emacs-style "filename:line: message" - changed self-test help to: --self-test => show potential issues with MAINTAINERS file content (Joe, I slightly reworded in hopes this rendition is clear and future proof). - Moved execution of $self_test to just after $help and $version. This prompted encapsulating main content code to read MAINTAINERS files into a function (read_all_maintainer_files) callable from $self_test. This has the side benefit of not having to special case for "$self_test" in other parts of main program flow. 
scripts/get_maintainer.pl | 94 ++- 1 file changed, 77 insertions(+), 17 deletions(-) diff --git a/scripts/get_maintainer.pl b/scripts/get_maintainer.pl index bc443201d3ef..c68a5d1ba709 100755 --- a/scripts/get_maintainer.pl +++ b/scripts/get_maintainer.pl @@ -57,6 +57,7 @@ my $sections = 0; my $file_emails = 0; my $from_filename = 0; my $pattern_depth = 0; +my $self_test = 0; my $version = 0; my $help = 0; my $find_maintainer_files = 0; @@ -138,6 +139,7 @@ my %VCS_cmds_git = ( "subject_pattern" => "^GitSubject: (.*)", "stat_pattern" => "^(\\d+)\\t(\\d+)\\t\$file\$", "file_exists_cmd" => "git ls-files \$file", +"list_files_cmd" => "git ls-files \$file", ); my %VCS_cmds_hg = ( @@ -167,6 +169,7 @@ my %VCS_cmds_hg = ( "subject_pattern" => "^HgSubject: (.*)", "stat_pattern" => "^(\\d+)\t(\\d+)\t\$file\$", "file_exists_cmd" => "hg files \$file", +"list_files_cmd" => "hg manifest -R \$file", ); my $conf = which_conf(".get_maintainer.conf"); @@ -216,6 +219,14 @@ if (-f $ignore_file) { close($ignore); } +if ($#ARGV > 0) { +foreach (@ARGV) { +if ($_ eq "-self-test" || $_ eq "--self-test") { +die "$P: using --self-test does not allow any other option or argument\n"; +} +} +} + if (!GetOptions( 'email!' => \$email, 'git!' => \$email_git, @@ -252,6 +263,7 @@ if (!GetOptions( 'fe|file-emails!' => \$file_emails, 'f|file' => \$from_filename, 'find-maintainer-files' => \$find_maintainer_files, + 'self-test' => \$self_test, 'v|version' => \$version, 'h|help|usage' => \$help, )) { @@ -268,6 +280,12 @@ if ($version != 0) { exit 0; } +if ($self_test) { +read_all_maintainer_files(); +check_maintainers_patterns(); +exit 0; +} + if (-t STDIN && !@ARGV) { # We're talking to a terminal, but have no command line arguments. 
die "$P: missing patchfile or -f file - use --help if necessary\n"; @@ -311,12 +329,14 @@ if (!top_of_kernel_tree($lk_path)) { my @typevalue = (); my %keyword_hash; my @mfiles = (); +my @self_test_pattern_info = (); sub read_maintainer_file { my ($file) = @_; open (my $maint, '<', "$file") or die "$P: Can't open MAINTAINERS file '$file': $!\n"; +my $i = 1; while (<$maint>) { my $line = $_; @@ -333,6 +353,9 @@ sub read_maintainer_file { if ((-d $value)) { $value =~ s@([^/])$@$1/@; } + if ($self_test) { + push(@self_test_pattern_info, {file=>$file, line=>$line, linenr=>$i, pat=>$value}); + } } elsif ($type eq "K") { $keyword_hash{@typevalue} = $value; } @@ -341,6 +364,7 @@ sub read_maintainer_file { $line =~ s/\n$//g; push(@typevalue, $line); } + $i++; } close($maint); } @@ -357,26 +381,30 @@ sub find_ignore_git { return grep { $_ !~ /^\.git$/; } @_; } -if (-d "${lk_path}MAINTAINERS") { -opendir(DIR, "${lk_path}MAINTAINERS") or die $!; -my @files = readdir(DIR); -closedir(DIR); -foreach my $file (@files) { - push(@mfiles, "${lk_path}MAINTAINERS/$file") if ($file !~ /^\./); +read_all_maintainer_files(); + +sub read_all_maintainer_files { +if (-d "${lk_path}MAINTAINERS") { +opendir(DIR, "${lk_path}MAINTAINERS") or die $!; +my @files = readdir(DIR); +closedir(DIR); +foreach my $file (@files) { +push(@mfiles, "${lk_path}MAINTAINERS/$file") if ($file !~ /^\./); +} } -} -if ($find_maintainer_files) { -find( { wanted => \_is_maintainer_file, - preprocess => \_ignore_git, -
Re: [Xen-devel] [RFC] ARM: New (Xen) VGIC design document
Hi Christoffer, On 12/10/17 13:05, Christoffer Dall wrote: > Hi Andre, > > On Wed, Oct 11, 2017 at 03:33:03PM +0100, Andre Przywara wrote: >> Hi, >> >> (CC:ing some KVM/ARM folks involved in the VGIC) > > Very nice writeup! > > I added a bunch of comments, mostly for the writing and clarity, I hope > it helps. Thank you very much for the response and the comments! I really appreciate your precise (academic) language here. I held back the response since Stefano was the actual addressee of this write-up, so: sorry for the delay. >> starting with the addition of the ITS support we were seeing more and >> more issues with the current implementation of our ARM Generic Interrupt >> Controller (GIC) emulation, the VGIC. >> Among other approaches to fix those issues it was proposed to copy the >> VGIC emulation used in KVM. This one was suffering from very similar >> issues, and a clean design from scratch lead to a very robust and >> capable re-implementation. Interestingly this implementation is fairly >> self-contained, so it seems feasible to copy it. Hopefully we only need >> minor adjustments, possibly we can even copy it verbatim with some >> additional glue layer code. >> Stefano asked for getting a design overview, to assess the feasibility >> of copying the KVM code without reviewing tons of code in the first >> place. >> So to follow Xen rules for new features, this design document below is >> an attempt to describe the current KVM VGIC design - in a hypervisor >> agnostic session. It is a bit of a retro-fit design description, as it >> is not strictly forward-looking only, but actually describing the >> existing implemenation [1]. >> >> Please have a look and let me know: >> 1) if this document has the right scope >> 2) if this document has the right level of detail >> 3) if there are points missing from the document >> 3) if the design in general is a fit >> >> Appreciate any feedback! >> >> Cheers, >> Andre. 
>> >> --- >> >> VGIC design >> === >> >> This document describes the design of an ARM Generic Interrupt Controller >> (GIC) >> emulation. It is meant to emulate a GIC for a guest in an virtual machine, >> the common name for that is VGIC (from "virtual GIC"). >> >> This design was the result of a one-week-long design session with some >> engineers in a room, triggered by ever-increasing difficulties in maintaining >> the existing GIC emulation in the KVM hypervisor. The design eventually >> materialised as an alternative VGIC implementation in the Linux kernel >> (merged into Linux v4.7). As of Linux v4.8 the previous VGIC implementation >> was removed, so it is now the current code used by Linux. >> Although being used in KVM, the actual design of this VGIC is rather >> hypervisor >> agnostic and can be used by other hypervisors as well, in particular for Xen. >> >> GIC hardware virtualization support >> --- >> >> The ARM Generic Interrupt Controller (since v2) supports the virtualization >> extensions, which allows some parts of the interrupt life cycle to be handled >> purely inside the guest without exiting into the hypervisor. >> In the GICv2 and GICv3 architecture this covers mostly the "interrupt >> acknowledgement", "priority drop" and "interrupt deactivate" actions. >> So a guest can handle most of the interrupt processing code without >> leaving EL1 and trapping into the hypervisor. To accomplish >> this, the GIC holds so called "list registers" (LRs), which shadow the >> interrupt state for any virtual interrupt. Injecting an interrupt to a guest >> involves setting up one LR with the interrupt number, its priority and >> initial >> state (mostly "pending"), then entering the guest. Any EOI related action >> from within the guest just acts on those LRs, the hypervisor can later update >> the virtual interrupt state when the guest exists the next time (for whatever >> reason). 
>> But despite the GIC hardware helping out here, the whole interrupt >> configuration management is not virtualized at all and needs to be emulated >> by the hypervisor - or another related software component, for instance a >> userland emulator. This so called "distributor" part of the GIC consists of >> memory mapped registers, which can be trapped by the hypervisor, so any guest >> access can be emulated in the usual way. >> >> VGIC design motivation >> -- >> >> A GIC emulation thus needs to take care of those bits: >> >> - trap GIC distributor MMIO accesses and shadow the configuration setup >> (enabled/disabled, level/edge, priority, affinity) for virtual interrupts >> - handle incoming hardware and virtual interrupt requests and inject the >> associated virtual interrupt by manipulating one of the list registers >> - track the state of a virtual interrupt by inspecting the LRs after the >> guest has exited, possibly adjusting the shadowed virtual interrupt state >> >> Despite the distributor MMIO
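To make the list-register mechanism described above concrete, here is a much-simplified model of filling an LR to inject a virtual interrupt before guest entry; the field layout is invented for illustration and is not the architectural GICv2/GICv3 encoding:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Much-simplified list register; a real LR packs these fields into a
 * single 32/64-bit register with an architecture-defined layout. */
struct list_reg {
    uint32_t virq;      /* virtual interrupt number the guest sees */
    uint8_t  priority;
    bool     pending;
    bool     active;
};

/* Prepare an LR so the guest observes virq as pending on next entry. */
static struct list_reg lr_make(uint32_t virq, uint8_t priority)
{
    struct list_reg lr = { .virq = virq, .priority = priority,
                           .pending = true, .active = false };
    return lr;
}

/* State the hypervisor would expect to find if the guest exited
 * before acknowledging the interrupt. */
static bool lr_is_pending(struct list_reg lr)
{
    return lr.pending && !lr.active;
}
```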
Re: [Xen-devel] [PATCH] scripts: warn about invalid MAINTAINERS patterns
On Wed, Nov 01, 2017 at 09:50:05AM -0700, Joe Perches wrote: > (add mercurial-devel and xen-devel to cc's) > > On Tue, 2017-10-31 at 16:37 -0500, Tom Saeger wrote: > > Add "--pattern-checks" option to get_maintainer.pl to warn about invalid > > "F" and "X" patterns found in MAINTAINERS file(s). > > Hey again Tom. > > About mercurial/hg. > > While as far as I know there hasn't been a mercurial tree > for the linux kernel sources in many years, I believe the > mercurial command to list files should be different. > > > my %VCS_cmds_hg = ( > > @@ -167,6 +169,7 @@ my %VCS_cmds_hg = ( > > "subject_pattern" => "^HgSubject: (.*)", > > "stat_pattern" => "^(\\d+)\t(\\d+)\t\$file\$", > > "file_exists_cmd" => "hg files \$file", > > +"list_files_cmd" => "hg files \$file", > > I think this should be > > "list_files_cmd" => "hg manifest -R \$file", Ok - I'll add to v2.
Re: [Xen-devel] [PATCH RFC v2] Add SUPPORT.md
On Tue, Oct 24, 2017 at 04:22:38PM +0100, George Dunlap wrote: > On Fri, Sep 15, 2017 at 3:51 PM, Konrad Rzeszutek Wilk >wrote: > >> +### Soft-reset for PV guests > > > > s/PV/HVM/ > > Is it? I thought this was for RHEL 5 PV guests to be able to do crash > kernels. > > >> +### Transcendent Memory > >> + > >> +Status: Experimental > >> + > >> +[XXX Add description] > > > > Guests with tmem drivers autoballoon memory out allowing a fluid > > and dynamic memory allocation - in effect memory overcommit without > > the need to swap. Only works with Linux guests (as it requires > > OS drivers). > > But autoballooning doesn't require any support in Xen, right? I > thought the TMEM support in Xen was more about the trancendent memory > backends. frontends you mean? That is Linux guests when compiled with XEN_TMEM will balloon down (using the self-shrinker) to using the normal balloon code (XENMEM_decrease_reservation, XENMEM_populate_physmap) to make the guest smaller. Then the Linux code starts hitting the case where it starts swapping memory out - and that is where the tmem comes in and the pages are swapped out to the hypervisor. There is also the secondary cache (cleancache) which just puts pages in the hypervisor temporary cache, kind of like an L3. For that you don't need ballooning. > > > ..snip.. > >> +### Live Patching > >> + > >> +Status, x86: Supported > >> +Status, ARM: Experimental > >> + > >> +Compile time disabled > > > > for ARM. > > > > As the patch will do: > > > > config LIVEPATCH > > - bool "Live patching support (TECH PREVIEW)" > > - default n > > + bool "Live patching support" > > + default X86 > > depends on HAS_BUILD_ID = "y" > > ---help--- > > Allows a running Xen hypervisor to be dynamically patched using > > Ack > > -George > > ___ > Xen-devel mailing list > Xen-devel@lists.xen.org > https://lists.xen.org/xen-devel ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH RFC v2] Add SUPPORT.md
On 09/12/2017 08:52 PM, Stefano Stabellini wrote: >>> +### Xen Framebuffer >>> + >>> +Status, Linux: Supported >> >> Frontend? > > Yes, please. If you write "Xen Framebuffer" I only take it to mean the > protocol as should be documented somewhere under docs/. Then I read > Linux, and I don't understand what you mean. Then I read QEMU and I have > to guess you are talking about the backend? Well this was in the "backend" section, so it was just completely wrong. I've removed it. :-) >>> +### ARM: 16K and 64K pages in guests >>> + >>> +Status: Supported, with caveats >>> + >>> +No support for QEMU backends in a 16K or 64K domain. >> >> Needs to be merged with the "1GB/2MB super page support"? > > Super-pages are different from page granularity. 1GB and 2MB pages are > based on the same 4K page granularity, while 512MB pages are based on > 64K granularity. Does it make sense? It does -- wondering what the best way to describe this concisely is. Would it make sense to say "L2 and L3 superpages", and then explain in the comment that for 4k page granularity that's 2MiB and 1GiB, and for 64k granularity it's 512MiB? > Maybe we want to say "ARM: 16K and 64K page granularity in guest" to > clarify. Clarifying that this is "page granularity" would be helpful. If we had a document describing this in more detail that we could point to, that might also be useful. -George
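The 512MiB figure discussed above follows from simple arithmetic: a translation table occupies one granule and holds 8-byte descriptors, so a block entry at the level above the leaf maps granule * (granule / 8) bytes. A quick check, assuming the standard ARM 8-byte descriptor size:

```c
#include <assert.h>
#include <stdint.h>

/* Size mapped by one block entry at the level above the leaf: a table
 * is one granule big and holds granule/8 eight-byte descriptors, each
 * covering one granule of memory. */
static uint64_t l2_block_size(uint64_t granule)
{
    return granule * (granule / 8);
}
```

This reproduces the numbers in the thread: 2MiB blocks for 4K granularity, 32MiB for 16K, and 512MiB for 64K.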
Re: [Xen-devel] Commit moratorium to staging
Hi Ian,

On 11/01/2017 04:54 PM, Ian Jackson wrote:
> Julien Grall writes ("Re: Commit moratorium to staging"):
>> Hi Ian,
>>
>> Thank you for the detailed e-mail.
>>
>> On 11/01/2017 02:07 PM, Ian Jackson wrote:
>>> Furthermore, the test is not intermittent, so a force push will be
>>> effective in the following sense: we would only get a "spurious" pass,
>>> resulting in the relevant osstest branch becoming stuck again, if a
>>> future test was unlucky and got an unaffected host. That will happen
>>> infrequently enough. ...
>>
>> I am not entirely sure to understand this paragraph. Are you saying
>> that osstest will not get stuck if we get a "spurious" pass on some
>> hardware in the future? Or will we need another force push?
>
> osstest *would* get stuck *if* we got such a spurious push. However,
> because osstest likes to retest failing tests on the same host as they
> failed on previously, such spurious passes are fairly unlikely.
>
> I say "likes to". The allocation system uses a set of heuristics to
> calculate a score for each possible host. The score takes into account
> both when the host will be available to this job, and information like
> "did the most recent run of this test, on this host, pass or fail".
>
> So I can't make guarantees, but the amount of manual work to force
> push stuck branches will be tolerable.

Thank you for the explanation. I agree with the force push to unblock
master (and the other tree I mentioned). However, it would still be nice
to find the root cause of this bug and fix it.

Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH 0/3] x86/svm: virtual VMLOAD/VMSAVE
On 10/31/2017 06:03 PM, brian.wo...@amd.com wrote:
> From: Brian Woods
>
> x86/svm: virtual VMLOAD/VMSAVE
>
> On AMD family 17h server processors, there is a feature called virtual
> VMLOAD/VMSAVE. This allows a nested hypervisor to perform a VMLOAD or
> VMSAVE without needing to be intercepted by the host hypervisor.
> Virtual VMLOAD/VMSAVE requires the host hypervisor to be in long mode
> and nested page tables to be enabled. For more information about it
> please see:
>
> AMD64 Architecture Programmer's Manual Volume 2: System Programming
> http://support.amd.com/TechDocs/24593.pdf
> Section: VMSAVE and VMLOAD Virtualization (Section 15.33.1)
>
> This patch series adds support to check for and enable the virtual
> VMLOAD/VMSAVE features if available.
>
> Signed-off-by: Brian Woods
>
> Brian Woods (3):
>   x86/svm: rename lbr control field in vmcb
>   x86/svm: add virtual VMLOAD/VMSAVE feature definition
>   x86/svm: add virtual VMLOAD/VMSAVE support
>
>  xen/arch/x86/hvm/svm/nestedsvm.c        | 10 +-
>  xen/arch/x86/hvm/svm/svm.c              |  3 ++-
>  xen/arch/x86/hvm/svm/svmdebug.c         |  2 ++
>  xen/arch/x86/hvm/svm/vmcb.c             |  7 +++
>  xen/include/asm-x86/hvm/svm/nestedsvm.h |  4 ++--
>  xen/include/asm-x86/hvm/svm/svm.h       |  2 ++
>  xen/include/asm-x86/hvm/svm/vmcb.h      |  7 ---
>  7 files changed, 24 insertions(+), 11 deletions(-)
>

Reviewed-by: Boris Ostrovsky

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH RFC v2] Add SUPPORT.md
On 09/12/2017 11:39 AM, Roger Pau Monné wrote:
> On Mon, Sep 11, 2017 at 06:01:59PM +0100, George Dunlap wrote:
>> +## Toolstack
>> +
>> +### xl
>> +
>> +Status: Supported
>> +
>> +### Direct-boot kernel image format
>> +
>> +Supported, x86: bzImage
>
> ELF
>
>> +Supported, ARM32: zImage
>> +Supported, ARM64: Image
>> +
>> +Format which the toolstack accept for direct-boot kernels
>
> IMHO it would be good to provide references to the specs, for ELF that
> should be:
>
> http://refspecs.linuxbase.org/elf/elf.pdf

I'm having trouble evaluating these two recommendations because I don't
really know what the point of this section is. Who wants this
information and why?

I think most end-users will want to build a Linux / whatever binary.
From that perspective, "bzImage" is probably the thing people want to
know about. If you're doing unikernels or rolling your own custom
system somehow, knowing that it's ELF is probably more useful.

>> +### Qemu based disk backend (qdisk) for xl
>> +
>> +Status: Supported
>> +
>> +### Open vSwitch integration for xl
>> +
>> +Status: Supported
>
> Status, Linux: Supported
>
> I haven't played with vswitch on FreeBSD at all.

Ack

>> +### systemd support for xl
>> +
>> +Status: Supported
>> +
>> +### JSON output support for xl
>> +
>> +Status: Experimental
>> +
>> +Output of information in machine-parseable JSON format
>> +
>> +### AHCI support for xl
>> +
>> +Status, x86: Supported
>> +
>> +### ACPI guest
>> +
>> +Status, x86 HVM: Supported
>> +Status, ARM: Tech Preview
>
> status, x86 PVH: Tech preview

Is the interface and functionality mostly stable? Or are the interfaces
likely to change / people using it likely to have crashes?

>> +### PVUSB support for xl
>> +
>> +Status: Supported
>> +
>> +### HVM USB passthrough for xl
>> +
>> +Status, x86: Supported
>> +
>> +### QEMU backend hotplugging for xl
>> +
>> +Status: Supported
>
> What's this exactly? Is it referring to hot-adding PV disk and nics?
> If so it shouldn't specifically reference xl, the same can be done
> with blkback or netback for example.

I think it means that xl knows how to hotplug QEMU backends. There was
a time when I think this wasn't true.

>> +## Scalability
>> +
>> +### 1GB/2MB super page support
>> +
>> +Status: Supported
>
> This needs something like:
>
> Status, x86 HVM/PVH: Supported

Sounds good -- I'll have a line for ARM as well.

> IIRC on ARM page sizes are different (64K?)
>
>> +
>> +### x86/PV-on-HVM
>> +
>> +Status: Supported
>> +
>> +This is a useful label for a set of hypervisor features
>> +which add paravirtualized functionality to HVM guests
>> +for improved performance and scalability.
>> +This includes exposing event channels to HVM guests.
>> +
>> +### x86/Deliver events to PVHVM guests using Xen event channels
>> +
>> +Status: Supported
>
> I think this should be labeled as "x86/HVM deliver guest events using
> event channels", and the x86/PV-on-HVM section removed.

Actually, I think 'PVHVM' should be the feature and this one should be
removed.
>> +### Blkfront
>> +
>> +Status, Linux: Supported
>> +Status, FreeBSD: Supported, Security support external
>> +Status, Windows: Supported
>
> Status, NetBSD: Supported, Security support external

Ack

>> +### Xen Console
>> +
>> +Status, Linux (hvc_xen): Supported
>> +Status, Windows: Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV console protocol
>
> Status, FreeBSD: Supported, Security support external
> Status, NetBSD: Supported, Security support external

Ack

>> +
>> +### Xen PV keyboard
>> +
>> +Status, Linux (xen-kbdfront): Supported
>> +Status, Windows: Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV keyboard protocol
>> +
>> +[XXX 'Supported' here depends on the version we ship in 4.10 having
>> some fixes]
>> +
>> +### Xen PVUSB protocol
>> +
>> +Status, Linux: Supported
>> +
>> +### Xen PV SCSI protocol
>> +
>> +Status, Linux: Supported, with caveats
>
> Should both of the above items be labeled with frontend/backend?

Done.

> And do we really need the 'Xen' prefix in all the items? Seems quite
> redundant.

Let me think about that.

>> +
>> +NB that while the pvSCSI frontend is in Linux and tested regularly,
>> +there is currently no xl support.
>> +
>> +### Xen TPMfront
>
> PV TPM frontend

Ack

>> +### PVCalls frontend
>> +
>> +Status, Linux: Tech Preview
>> +
>> +Guest-side driver capable of making pv system calls
>
> Didn't we merge the backend, but not the frontend?

No idea

>> +
>> +## Virtual device support, host side
>> +
>> +### Blkback
>> +
>> +Status, Linux (blkback): Supported
>> +Status, FreeBSD (blkback): Supported
>                              ^, security support external

Ack

> Status, NetBSD (xbdback): Supported, security support external
>> +Status, QEMU
Re: [Xen-devel] Commit moratorium to staging
Julien Grall writes ("Re: Commit moratorium to staging"):
> Hi Ian,
>
> Thank you for the detailed e-mail.
>
> On 11/01/2017 02:07 PM, Ian Jackson wrote:
> > Furthermore, the test is not intermittent, so a force push will be
> > effective in the following sense: we would only get a "spurious" pass,
> > resulting in the relevant osstest branch becoming stuck again, if a
> > future test was unlucky and got an unaffected host. That will happen
> > infrequently enough. ...
>
> I am not entirely sure to understand this paragraph. Are you saying
> that osstest will not get stuck if we get a "spurious" pass on some
> hardware in the future? Or will we need another force push?

osstest *would* get stuck *if* we got such a spurious push. However,
because osstest likes to retest failing tests on the same host as they
failed on previously, such spurious passes are fairly unlikely.

I say "likes to". The allocation system uses a set of heuristics to
calculate a score for each possible host. The score takes into account
both when the host will be available to this job, and information like
"did the most recent run of this test, on this host, pass or fail".

So I can't make guarantees, but the amount of manual work to force push
stuck branches will be tolerable.

Ian.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
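The allocation heuristic Ian describes -- score each candidate host by when it becomes free and by whether the most recent run of this test on that host failed, preferring to retest failures where they happened -- could be sketched as below. This is purely illustrative Python, not osstest's actual (Perl) scoring code; all field names and weights are invented:

```python
def host_score(host, job, now):
    """Illustrative host-preference score (lower is better).

    Two ingredients only, per the description above: the expected wait
    until the host is free, and a strong bonus when the most recent
    run of this job on this host failed, so failures get retested on
    the same hardware rather than "passing" on an unaffected machine.
    """
    score = max(host["free_at"] - now, 0)        # expected wait (minutes)
    if host["last_result"].get(job) == "fail":
        score -= 1000                            # prefer the failing host
    return score

hosts = [
    {"name": "italia0", "free_at": 10, "last_result": {"ws16-job": "fail"}},
    {"name": "fiano1",  "free_at": 0,  "last_result": {"ws16-job": "pass"}},
]
best = min(hosts, key=lambda h: host_score(h, "ws16-job", now=0))
```

With these invented weights the host that last failed the job wins even though another host is free sooner, which is exactly the property that makes spurious passes after a force push unlikely.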
Re: [Xen-devel] VGA passthrough with USB passthrough
Hi, I have removed (00:02.0) from the grub argument line and been using:

xen-pciback.hide=(00:14.0) modprobe.blacklist=i915,xhci_pci,xhci_hcd

with DomU.cfg containing:

gfx_passthru=1
pci=['00:02.0','00:14.0']

Now I am seeing the VGA passthrough working, but the Windows DomU is
stuck at 100% CPU utilization in the kernel. Any ideas why? I have
PVDrivers installed, so I have added the pv list to this mail also.

Neil

On Mon, Oct 30, 2017 at 7:59 PM, Neil Sikka wrote:
> Hello, I am trying to passthrough 2 physical devices to a DomU: my
> integrated GPU that's integrated into my CPU and my USB controller.
> These 2 devices are shown in lspci as follows:
>
> 00:02.0 VGA compatible controller: Intel Corporation Haswell Integrated
> Graphics Controller (rev 06)
> ...
> 00:14.0 USB controller: Intel Corporation Lynx Point USB xHCI Host
> Controller (rev 04)
>
> My Setup:
>
> xen_pciback compiled and loading as a module
>
> Dom0 has the necessary backend drivers, so I'm assuming it's pvops,
> which the docs say should be using "xen-pciback" rather than "pciback"
> in classic kernels.
>
> Appended to my grub line:
> xen-pciback.hide=(00:02.0)(00:14.0) modprobe.blacklist=i915,xhci_pci,xhci_hcd
>
> i915.ko is renamed to _i915.ko because it gets loaded despite the
> modprobe.blacklist argument, and I think i915 is competing with
> xen-pciback.ko for (00:02.0)
>
> DomU.cfg has:
> gfx_passthru=1
> pci=['00:02.0','00:14.0']
>
> Before starting DomU, I run:
> xl pci-assignable-add 00:02.0 && xl pci-assignable-add 00:14.0
>
> I have gotten each passthrough to independently work correctly on my
> computer in the past, so I know the hardware supports it. When
> combining the USB and GPU passthrough, I am seeing different things
> online and am confused about the correct way to configure my
> Dom0/DomUs. Here:
>
> https://wiki.xenproject.org/wiki/Xen_VGA_Passthrough
>
> it says nothing about passing the BDF of the iGPU to the
> xen-pciback.hide= argument in grub.
> However, that page links to a document, here:
>
> https://wiki.xenproject.org/wiki/File:Xen_VGA_Passthrough_to_Windows_8_Consumer_Preview_64-bit_English_HVM_domU_and_Windows_XP_Home_Edition_SP3_HVM_domU_with_Xen_4.2-unstable_Changeset_25070_and_Linux_Kernel_3.3.0_in_Ubuntu_11.10_oneiric_ocelot_amd64_Final_Release_Dom0.pdf
>
> which says to pass the BDF of the discrete GPU 01:00.0 to the grub
> xen-pciback.hide parameter. When I use
> xen-pciback.hide=(00:02.0)(00:14.0), I see 23 "vgaarb: this pci device
> is not a vga device" errors when I boot Dom0 (which might be related
> to the fact that lspci reports 23 devices?). When I remove (00:02.0),
> I don't see the vgaarb errors, but in both cases, when I create the
> DomU, the VGA passthrough works.
>
> When I use xen-pciback.hide=(00:02.0)(00:14.0) and try to pass through
> both together, I also get SATA write errors.
>
> How can I correctly pass through both the iGPU and the USB controller
> to avoid SATA errors leading to disk corruption and vgaarb errors?
>
> Thank You.
> Neil

--
My Blog: http://www.neilscomputerblog.blogspot.com/
Twitter: @neilsikka

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH] scripts: warn about invalid MAINTAINERS patterns
(add mercurial-devel and xen-devel to cc's)

On Tue, 2017-10-31 at 16:37 -0500, Tom Saeger wrote:
> Add "--pattern-checks" option to get_maintainer.pl to warn about
> invalid "F" and "X" patterns found in MAINTAINERS file(s).

Hey again Tom.

About mercurial/hg. While as far as I know there hasn't been a
mercurial tree for the linux kernel sources in many years, I believe
the mercurial command to list files should be different.

> my %VCS_cmds_hg = (
> @@ -167,6 +169,7 @@ my %VCS_cmds_hg = (
>     "subject_pattern" => "^HgSubject: (.*)",
>     "stat_pattern" => "^(\\d+)\t(\\d+)\t\$file\$",
>     "file_exists_cmd" => "hg files \$file",
> +    "list_files_cmd" => "hg files \$file",

I think this should be

    "list_files_cmd" => "hg manifest -R \$file",

It seems to work on a XEN test branch, but does anyone really care
about hg support in get_maintainers?

btw: to the XEN maintainers

The XEN mercurial branch for MAINTAINERS has a few odd entries and a
few missing file patterns. I think the XEN MAINTAINERS file should be
updated to:

---
diff -r c60f04b73240 MAINTAINERS
--- a/MAINTAINERS  Mon Oct 16 15:24:44 2017 +0100
+++ b/MAINTAINERS  Wed Nov 01 09:39:34 2017 -0700
@@ -246,7 +246,8 @@ KCONFIG
 M: Doug Goldstein
 S: Supported
-F: docs/misc/kconfig{,-language}.txt
+F: docs/misc/kconfig.txt
+F: docs/misc/kconfig-language.txt
 F: xen/tools/kconfig/

 KDD DEBUGGER
@@ -257,8 +258,8 @@ KEXEC
 M: Andrew Cooper
 S: Supported
-F: xen/common/{kexec,kimage}.c
-F: xen/include/{kexec,kimage}.h
+F: xen/common/kexec.[ch]
+F: xen/common/kimage.[ch]
 F: xen/arch/x86/machine_kexec.c
 F: xen/arch/x86/x86_64/kexec_reloc.S
---

After the patch above is applied, --self-test shows:

$ ~/linux/next/scripts/get_maintainer.pl --self-test
./MAINTAINERS:403: warning: no matches    F: drivers/xen/usb*/
./MAINTAINERS:415: warning: no matches    F: xen/arch/x88/hvm/vm_event.c
./MAINTAINERS:429: warning: no matches    F: extras/mini-os/tpm*
./MAINTAINERS:430: warning: no matches    F: extras/mini-os/include/tpm*

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
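The check Tom's "--pattern-checks" option performs can be approximated in a few lines: list the tracked files, then flag any F:/X: pattern that matches nothing. Illustrative Python only -- get_maintainer.pl is Perl and asks the VCS for the file list (e.g. `git ls-files` or, per the discussion above, `hg manifest`), and its real glob handling is more involved:

```python
import fnmatch

def unmatched_patterns(patterns, tracked_files):
    """Return MAINTAINERS F:/X: patterns that match no tracked file."""
    bad = []
    for pat in patterns:
        if pat.endswith('/'):
            # Directory pattern: any file underneath it counts as a hit.
            hit = any(f.startswith(pat) for f in tracked_files)
        else:
            # Shell-style glob, including [seq] classes like kexec.[ch]
            hit = any(fnmatch.fnmatch(f, pat) for f in tracked_files)
        if not hit:
            bad.append(pat)
    return bad

files = ["xen/common/kexec.c", "xen/common/kimage.c", "docs/misc/kconfig.txt"]
pats = ["xen/common/kexec.[ch]", "docs/misc/kconfig.txt", "extras/mini-os/tpm*"]
```

On this sample tree, `unmatched_patterns(pats, files)` flags only `extras/mini-os/tpm*` -- the same kind of stale entry the --self-test output above reports.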
Re: [Xen-devel] [PATCH] x86/paravirt: Add kernel parameter to choose paravirt lock type
On 11/01/2017 11:51 AM, Juergen Gross wrote: > On 01/11/17 16:32, Waiman Long wrote: >> Currently, there are 3 different lock types that can be chosen for >> the x86 architecture: >> >> - qspinlock >> - pvqspinlock >> - unfair lock >> >> One of the above lock types will be chosen at boot time depending on >> a number of different factors. >> >> Ideally, the hypervisors should be able to pick the best performing >> lock type for the current VM configuration. That is not currently >> the case as the performance of each lock type are affected by many >> different factors like the number of vCPUs in the VM, the amount vCPU >> overcommitment, the CPU type and so on. >> >> Generally speaking, unfair lock performs well for VMs with a small >> number of vCPUs. Native qspinlock may perform better than pvqspinlock >> if there is vCPU pinning and there is no vCPU over-commitment. >> >> This patch adds a new kernel parameter to allow administrator to >> choose the paravirt spinlock type to be used. VM administrators can >> experiment with the different lock types and choose one that can best >> suit their need, if they want to. Hypervisor developers can also use >> that to experiment with different lock types so that they can come >> up with a better algorithm to pick the best lock type. >> >> The hypervisor paravirt spinlock code will override this new parameter >> in determining if pvqspinlock should be used. The parameter, however, >> will override Xen's xen_nopvspin in term of disabling unfair lock. > Hmm, I'm not sure we need pvlock_type _and_ xen_nopvspin. What do others > think? I don't think we need xen_nopvspin, but I don't want to remove that without agreement from the community. 
>> DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key); >> >> void __init native_pv_lock_init(void) >> { >> -if (!static_cpu_has(X86_FEATURE_HYPERVISOR)) >> +if (pv_spinlock_type == locktype_unfair) >> +return; >> + >> +if (!static_cpu_has(X86_FEATURE_HYPERVISOR) || >> + (pv_spinlock_type != locktype_auto)) >> static_branch_disable(_spin_lock_key); > Really? I don't think locktype_paravirt should disable the static key. With paravirt spinlock, it doesn't matter if the static key is disabled or not. Without CONFIG_PARAVIRT_SPINLOCKS, however, it does degenerate into the native qspinlock. So you are right, I should check for paravirt type as well. Cheers, Longman ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
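As I read the exchange above, the corrected native_pv_lock_init() behaviour being converged on is roughly: keep the virt_spin_lock_key (the test-and-set fallback path) enabled for an explicit "unfair" choice and, per Juergen's point, for "paravirt" too; disable it for an explicit "queued" choice; and in "auto" mode let it depend on X86_FEATURE_HYPERVISOR. A truth-table model of that reading (illustrative Python, not kernel code, and only my interpretation of the thread):

```python
def virt_spin_lock_key_enabled(on_hypervisor, lock_type):
    """Model of the discussed lock-type selection (NOT kernel code).

    The static key selects the test-and-set fallback, so it must stay
    enabled for "unfair"; per the review comment, an explicit
    "paravirt" choice should not disable it either (without
    CONFIG_PARAVIRT_SPINLOCKS that choice degenerates to the native
    path, where the key may still matter).
    """
    if lock_type in ("unfair", "paravirt"):
        return True
    if lock_type == "queued":
        return False
    return on_hypervisor  # "auto": enabled only when virtualized
```

The four cases the patch discussion cares about fall out directly: bare metal in "auto" mode disables the key, a guest in "auto" mode keeps it, and the two explicit choices override the detection.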
Re: [Xen-devel] Commit moratorium to staging
On Wed, Nov 01, 2017 at 02:07:48PM +, Ian Jackson wrote:
> So, investigations (mostly by Roger, and also a bit of archaeology in
> the osstest db by me) have determined:
>
> * This bug is 100% reproducible on affected hosts. The repro is
>   to boot the Windows guest, save/restore it, then migrate it,
>   then shut down. (This is from an IRL conversation with Roger and
>   may not be 100% accurate. Roger, please correct me.)

Yes, that's correct AFAICT. The affected hosts work fine if windows is
booted and then shut down (without save/restore or migrations involved).

> * Affected hosts differ from unaffected hosts according to cpuid.
>   Roger has repro'd the bug on an unaffected host by masking out
>   certain cpuid bits. There are 6 implicated bits and he is working
>   to narrow that down.

I'm currently trying to narrow this down and make sure the above is
accurate.

> * It seems likely that this is therefore a real bug. Maybe in Xen and
>   perhaps indeed one that should indeed be a release blocker.
>
> * But this is not a regression between master and staging. It affects
>   many osstest branches apparently equally.
>
> * This test is, effectively, new: before the osstest change
>   "HostDiskRoot: bump to 20G", these jobs would always fail earlier
>   and the affected step would not be run.
>
> * The passes we got on various osstest branches before were just
>   because those branches hadn't tested on an affected host yet. As
>   branches test different hosts, they will stick on affected hosts.
>
> ISTM that this situation would therefore justify a force push. We
> have established that this bug is very unlikely to be anything to do
> with the commits currently blocked by the failing pushes.

I agree, this is a bug that's always been present (at least in the
tested branches). It's triggered now because the windows tests have
made further progress.
> Furthermore, the test is not intermittent, so a force push will be
> effective in the following sense: we would only get a "spurious" pass,
> resulting in the relevant osstest branch becoming stuck again, if a
> future test was unlucky and got an unaffected host. That will happen
> infrequently enough.
>
> So unless anyone objects (and for xen.git#master, with Julien's
> permission), I intend to force push all affected osstest branches when
> the test report shows the only blockage is ws16 and/or win10 tests
> failing the "guest-stop" step.
>
> Opinions ?

I agree that a force push is justified. This bug is going to be quite
annoying if osstest decides to test on non-affected hosts, because then
we will get sporadic successful flights.

Thanks, Roger.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
[Xen-devel] [linux-next test] 115462: regressions - FAIL
flight 115462 linux-next real [real] http://logs.test-lab.xenproject.org/osstest/logs/115462/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-armhf-armhf-examine 8 reboot fail REGR. vs. 114658 test-armhf-armhf-xl-xsm 7 xen-boot fail REGR. vs. 114682 test-armhf-armhf-xl-arndale 7 xen-boot fail REGR. vs. 114682 test-armhf-armhf-xl-credit2 7 xen-boot fail REGR. vs. 114682 test-armhf-armhf-libvirt-xsm 7 xen-boot fail REGR. vs. 114682 test-armhf-armhf-xl-vhd 7 xen-boot fail REGR. vs. 114682 build-amd64-pvops 6 kernel-build fail REGR. vs. 114682 test-armhf-armhf-libvirt-raw 7 xen-boot fail REGR. vs. 114682 test-armhf-armhf-xl 7 xen-boot fail REGR. vs. 114682 test-armhf-armhf-xl-multivcpu 7 xen-bootfail REGR. vs. 114682 test-armhf-armhf-libvirt 7 xen-boot fail REGR. vs. 114682 test-armhf-armhf-xl-cubietruck 7 xen-boot fail REGR. vs. 114682 Regressions which are regarded as allowable (not blocking): test-armhf-armhf-xl-rtds 7 xen-boot fail REGR. vs. 
114682 Tests which did not succeed, but are not blocking: test-amd64-amd64-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-vhd 1 build-check(1) blocked n/a test-amd64-amd64-xl-credit2 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1)blocked n/a test-amd64-amd64-qemuu-nested-intel 1 build-check(1) blocked n/a test-amd64-amd64-rumprun-amd64 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1)blocked n/a test-amd64-amd64-i386-pvgrub 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemut-ws16-amd64 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-pvhv2-intel 1 build-check(1) blocked n/a test-amd64-amd64-xl-multivcpu 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-amd64-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-xl-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemut-win10-i386 1 build-check(1) blocked n/a test-amd64-amd64-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-pvhv2-amd 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-win10-i386 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-amd64-amd64-qemuu-nested-amd 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a test-amd64-amd64-pygrub 1 build-check(1) blocked n/a test-amd64-amd64-xl-qcow2 1 build-check(1) blocked n/a test-amd64-amd64-amd64-pvgrub 1 build-check(1) blocked n/a test-amd64-amd64-examine 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemut-win7-amd64 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemut-debianhvm-amd64 1 build-check(1)blocked n/a 
test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 1 build-check(1)blocked n/a test-amd64-amd64-xl-rtds 1 build-check(1) blocked n/a test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 114682 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 114682 test-amd64-i386-libvirt-xsm 13 migrate-support-checkfail never pass test-amd64-i386-libvirt 13 migrate-support-checkfail never pass test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail never pass test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass version targeted for testing: linux36ef71cae353f88fd6e095e2aaa3e5953af1685d baseline version: linuxebe6e90ccc6679cb01d2b280e4b61e6092d4bedb Last test of basis (not found) Failing since (not found) Testing same since 114796 2017-10-20 09:26:55 Z
Re: [Xen-devel] [PATCH v3 4/7] libxl: support mapping static shared memory areas during domain creation
On Thu, Oct 19, 2017 at 10:36:32AM +0800, Zhongze Liu wrote: > Add libxl__sshm_add to map shared pages from one DomU to another, The mapping > process involves the follwing steps: > > * Set defaults and check for further errors in the static_shm configs: > overlapping areas, invalid ranges, duplicated master domain, > no master domain etc. > * Write infomation of static shared memory areas into the appropriate > xenstore paths. > * Use xc_domain_add_to_physmap_batch to do the page sharing. > * Set the refcount of the shared region accordingly > > Temporarily mark this as unsupported on x86 because calling p2m_add_foregin on > two domU's is currently not allowd on x86 (see the comments in > x86/mm/p2m.c:p2m_add_foregin for more details). > > This is for the proposal "Allow setting up shared memory areas between VMs > from xl config file" (see [1]). > > [1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html > > Signed-off-by: Zhongze Liu> > Cc: Wei Liu > Cc: Ian Jackson > Cc: Stefano Stabellini > Cc: Julien Grall > Cc: xen-devel@lists.xen.org > --- > V3: > * unmap the successfully mapped pages whenever rc != 0 A general note: please properly capitalise the comments. > --- > tools/libxl/Makefile | 2 +- > tools/libxl/libxl_arch.h | 6 + > tools/libxl/libxl_arm.c | 15 ++ > tools/libxl/libxl_create.c | 27 +++ > tools/libxl/libxl_internal.h | 13 ++ > tools/libxl/libxl_sshm.c | 395 > +++ > tools/libxl/libxl_x86.c | 18 ++ > 7 files changed, 475 insertions(+), 1 deletion(-) > create mode 100644 tools/libxl/libxl_sshm.c > > diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile > index 5a861f72cb..91bc70cda2 100644 > --- a/tools/libxl/Makefile > +++ b/tools/libxl/Makefile [...] 
> + > +/* check if the sshm slave configs in @sshm overlap */ > +int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid, > + libxl_static_shm *sshms, int len) > +{ > + > +const libxl_static_shm **slave_sshms = NULL; > +int num_slaves; > +int i; > + > +if (!len) return 0; > + > +slave_sshms = libxl__calloc(gc, len, sizeof(slave_sshms[0])); > +num_slaves = 0; > +for (i = 0; i < len; ++i) { > +if (sshms[i].role == LIBXL_SSHM_ROLE_SLAVE) > +slave_sshms[num_slaves++] = sshms + i; > +} > +qsort(slave_sshms, num_slaves, sizeof(slave_sshms[0]), sshm_range_cmp); > + > +for (i = 0; i < num_slaves - 1; ++i) { > +if (slave_sshms[i+1]->begin < slave_sshms[i]->end) { > +SSHM_ERROR(domid, slave_sshms[i+1]->id, "slave ranges overlap."); > +return ERROR_INVAL; > +} > +} > + > +return 0; > +} > + > +/* libxl__sshm_do_map -- map pages into slave's physmap > + * > + * This functions maps > + * mater gfn: [@msshm->begin + @sshm->offset, @msshm->end + > @sshm->offset) "master" This is confusing. What if offset is not page aligned? > + * into > + * slave gfn: [@sshm->begin, @sshm->end) > + * > + * The gfns of the pages that are successfully mapped will be stored > + * in @mapped, and the number of the gfns will be stored in @nmapped. > + * > + * The caller have to guarentee that sshm->begin < sshm->end */ > +static int libxl__sshm_do_map(libxl__gc *gc, uint32_t mid, uint32_t sid, > + libxl_static_shm *sshm, libxl_static_shm > *msshm, > + xen_pfn_t *mapped, unsigned int *nmapped) > +{ > +int rc; > +int i; > +unsigned int num_mpages, num_spages, num_success, offset; > +int *errs; > +xen_ulong_t *idxs; > +xen_pfn_t *gpfns; > + > +num_mpages = (msshm->end - msshm->begin) >> XC_PAGE_SHIFT; > +num_spages = (sshm->end - sshm->begin) >> XC_PAGE_SHIFT; > +offset = sshm->offset >> XC_PAGE_SHIFT; > + > +/* Check range. 
Test offset < mpages first to avoid overflow */ > +if ((offset >= num_mpages) || (num_mpages - offset < num_spages)) { > +SSHM_ERROR(sid, sshm->id, "exceeds master's address space."); > +rc = ERROR_INVAL; > +goto out; > +} > + > +/* fill out the pfn's and do the mapping */ > +errs = libxl__calloc(gc, num_spages, sizeof(int)); > +idxs = libxl__calloc(gc, num_spages, sizeof(xen_ulong_t)); > +gpfns = libxl__calloc(gc, num_spages, sizeof(xen_pfn_t)); > +for (i = 0; i < num_spages; i++) { > +idxs[i] = (msshm->begin >> XC_PAGE_SHIFT) + offset + i; > +gpfns[i]= (sshm->begin >> XC_PAGE_SHIFT) + i; > +} > +rc = xc_domain_add_to_physmap_batch(CTX->xch, > +sid, mid, > +XENMAPSPACE_gmfn_foreign, > +num_spages, > +idxs,
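The range check quoted above has one subtlety worth spelling out: with C unsigned arithmetic, `num_mpages - offset` would wrap around if `offset > num_mpages`, which is why the code tests `offset >= num_mpages` first. A minimal rendering of the same check (Python ints do not wrap, so the ordering here only mirrors the C logic; the names are paraphrased from the patch):

```python
XC_PAGE_SHIFT = 12

def slave_range_fits(m_begin, m_end, s_begin, s_end, offset):
    """Does the slave range [s_begin, s_end), placed at `offset` into
    the master range [m_begin, m_end), fit inside it?  All arguments
    are byte addresses/offsets on page boundaries."""
    num_mpages = (m_end - m_begin) >> XC_PAGE_SHIFT
    num_spages = (s_end - s_begin) >> XC_PAGE_SHIFT
    off_pages = offset >> XC_PAGE_SHIFT

    # Test off_pages < num_mpages first: in C this guards the unsigned
    # subtraction on the next clause against wrapping to a huge value.
    return off_pages < num_mpages and num_mpages - off_pages >= num_spages
```

For example, an 8-page slave region at an 8-page offset fits a 16-page master region exactly, while a 9-page region at the same offset exceeds the master's address space and must be rejected.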
Re: [Xen-devel] [PATCH v3 5/7] libxl: support unmapping static shared memory areas during domain destruction
On Thu, Oct 19, 2017 at 10:36:33AM +0800, Zhongze Liu wrote:
> Add libxl__sshm_del to unmap static shared memory areas mapped by
> libxl__sshm_add during domain creation. The unmapping process is:
>
> * For a master: decrease the refcount of the sshm region; if the
>   refcount reaches 0, clean up the whole sshm path.
> * For a slave: unmap the shared pages and clean up related xs entries;
>   decrease the refcount of the sshm region; if the refcount reaches 0,
>   clean up the whole sshm path.

This appears to be in line with what we discussed.

I would like to see some explanation for: if one or more of the things
the code does fail half way, the system is still going to be in a
consistent state. Most notably, there isn't going to be any page leaked
with uncleaned refs.

> +
> +    rc = libxl__xs_transaction_commit(gc, );
> +    if (!rc) break;
> +    if (rc < 0) goto out;
> +    isretry = true;

Indentation.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH] x86/paravirt: Add kernel parameter to choose paravirt lock type
On 01/11/17 16:32, Waiman Long wrote: > Currently, there are 3 different lock types that can be chosen for > the x86 architecture: > > - qspinlock > - pvqspinlock > - unfair lock > > One of the above lock types will be chosen at boot time depending on > a number of different factors. > > Ideally, the hypervisors should be able to pick the best performing > lock type for the current VM configuration. That is not currently > the case as the performance of each lock type are affected by many > different factors like the number of vCPUs in the VM, the amount vCPU > overcommitment, the CPU type and so on. > > Generally speaking, unfair lock performs well for VMs with a small > number of vCPUs. Native qspinlock may perform better than pvqspinlock > if there is vCPU pinning and there is no vCPU over-commitment. > > This patch adds a new kernel parameter to allow administrator to > choose the paravirt spinlock type to be used. VM administrators can > experiment with the different lock types and choose one that can best > suit their need, if they want to. Hypervisor developers can also use > that to experiment with different lock types so that they can come > up with a better algorithm to pick the best lock type. > > The hypervisor paravirt spinlock code will override this new parameter > in determining if pvqspinlock should be used. The parameter, however, > will override Xen's xen_nopvspin in term of disabling unfair lock. Hmm, I'm not sure we need pvlock_type _and_ xen_nopvspin. What do others think? 
> > Signed-off-by: Waiman Long> --- > Documentation/admin-guide/kernel-parameters.txt | 7 + > arch/x86/include/asm/paravirt.h | 9 ++ > arch/x86/kernel/kvm.c | 4 +++ > arch/x86/kernel/paravirt.c | 40 > - > arch/x86/xen/spinlock.c | 6 ++-- > 5 files changed, 62 insertions(+), 4 deletions(-) > > diff --git a/Documentation/admin-guide/kernel-parameters.txt > b/Documentation/admin-guide/kernel-parameters.txt > index f7df49d..c98d9c7 100644 > --- a/Documentation/admin-guide/kernel-parameters.txt > +++ b/Documentation/admin-guide/kernel-parameters.txt > @@ -3275,6 +3275,13 @@ > [KNL] Number of legacy pty's. Overwrites compiled-in > default number. > > + pvlock_type=[X86,PV_OPS] > + Specify the paravirt spinlock type to be used. > + Options are: > + queued - native queued spinlock > + pv - paravirt queued spinlock > + unfair - simple TATAS unfair lock > + > quiet [KNL] Disable most log messages > > r128= [HW,DRM] > diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h > index 12deec7..941a046 100644 > --- a/arch/x86/include/asm/paravirt.h > +++ b/arch/x86/include/asm/paravirt.h > @@ -690,6 +690,15 @@ static __always_inline bool pv_vcpu_is_preempted(long > cpu) > > #endif /* SMP && PARAVIRT_SPINLOCKS */ > > +enum pv_spinlock_type { > + locktype_auto, > + locktype_queued, > + locktype_paravirt, > + locktype_unfair, > +}; > + > +extern enum pv_spinlock_type pv_spinlock_type; > + > #ifdef CONFIG_X86_32 > #define PV_SAVE_REGS "pushl %ecx; pushl %edx;" > #define PV_RESTORE_REGS "popl %edx; popl %ecx;" > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c > index 8bb9594..3a5d3ec4 100644 > --- a/arch/x86/kernel/kvm.c > +++ b/arch/x86/kernel/kvm.c > @@ -646,6 +646,10 @@ void __init kvm_spinlock_init(void) > if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) > return; > > + if ((pv_spinlock_type == locktype_queued) || > + (pv_spinlock_type == locktype_unfair)) > + return; > + > __pv_init_lock_hash(); > pv_lock_ops.queued_spin_lock_slowpath = 
__pv_queued_spin_lock_slowpath; > pv_lock_ops.queued_spin_unlock = > PV_CALLEE_SAVE(__pv_queued_spin_unlock); > diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c > index 041096b..ca35cd3 100644 > --- a/arch/x86/kernel/paravirt.c > +++ b/arch/x86/kernel/paravirt.c > @@ -115,11 +115,48 @@ unsigned paravirt_patch_jmp(void *insnbuf, const void > *target, > return 5; > } > > +/* > + * The kernel argument "pvlock_type=" can be used to explicitly specify > + * which type of spinlocks to be used. Currently, there are 3 options: > + * 1) queued - the native queued spinlock > + * 2) pv - the paravirt queued spinlock (if CONFIG_PARAVIRT_SPINLOCKS) > + * 3) unfair - the simple TATAS unfair lock > + * > + * If this argument is not specified, the kernel will automatically choose > + * an appropriate one depending on X86_FEATURE_HYPERVISOR and hypervisor > + * specific settings. > + */ > +enum pv_spinlock_type __read_mostly pv_spinlock_type = locktype_auto; > + > +static int __init pvlock_setup(char *s) >
Re: [Xen-devel] [PATCH v3 for-next 4/4] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
On 11/01/2017 04:03 PM, Julien Grall wrote: > Most of the users of page_to_mfn and mfn_to_page are either overriding > the macros to make them work with mfn_t or use mfn_x/_mfn because the > rest of the function use mfn_t. > > So make __page_to_mfn and __mfn_to_page return mfn_t by default. > > Only reasonable clean-ups are done in this patch because it is > already quite big. So some of the files now override page_to_mfn and > mfn_to_page to avoid using mfn_t. > > Lastly, domain_page_to_mfn is also converted to use mfn_t given that > most of the callers are now switched to _mfn(domain_page_to_mfn(...)). > > Signed-off-by: Julien GrallAcked-by: Razvan Cojocaru Thanks, Razvan ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
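[Editorial note: for readers skimming the thread, the typesafe-MFN idea behind this series can be sketched as a standalone C fragment. The `mfn_t`, `_mfn()` and `mfn_x()` names follow Xen's conventions; `mfn_add()` here is a simplified illustration rather than a verbatim copy of Xen's helper.]

```c
/* Wrapping the raw machine frame number in a one-member struct makes it
 * a distinct type, so the compiler rejects accidental mixing of MFNs
 * with plain integers (or with GFNs wrapped the same way). */
typedef struct { unsigned long mfn; } mfn_t;

/* Explicit conversion from a raw frame number to the typesafe form. */
static inline mfn_t _mfn(unsigned long m)
{
    mfn_t x = { m };
    return x;
}

/* Explicit conversion back to the raw frame number. */
static inline unsigned long mfn_x(mfn_t m)
{
    return m.mfn;
}

/* Illustrative helper in the style the conversion enables: operate on
 * mfn_t end to end, unwrapping only at the arithmetic. */
static inline mfn_t mfn_add(mfn_t m, unsigned long n)
{
    return _mfn(mfn_x(m) + n);
}
```

The point of the series is that once `__page_to_mfn()` returns `mfn_t` by default, callers no longer need ad-hoc `mfn_x/_mfn` wrappers or macro overrides.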
Re: [Xen-devel] [PATCH v2] xen: support priv-mapping in an HVM tools domain
On 01/11/17 14:45, Paul Durrant wrote: >> -Original Message- >> From: Juergen Gross [mailto:jgr...@suse.com] >> Sent: 01 November 2017 13:40 >> To: Paul Durrant; x...@kernel.org; xen- >> de...@lists.xenproject.org; linux-ker...@vger.kernel.org >> Cc: Boris Ostrovsky ; Thomas Gleixner >> ; Ingo Molnar ; H. Peter Anvin >> >> Subject: Re: [PATCH v2] xen: support priv-mapping in an HVM tools domain >> >> On 01/11/17 12:31, Paul Durrant wrote: >>> If the domain has XENFEAT_auto_translated_physmap then use of the PV- >>> specific HYPERVISOR_mmu_update hypercall is clearly incorrect. >>> >>> This patch adds checks in xen_remap_domain_gfn_array() and >>> xen_unmap_domain_gfn_array() which call through to the approprate >>> xlate_mmu function if the feature is present. >>> >>> This patch also moves xen_remap_domain_gfn_range() into the PV-only >> MMU >>> code and #ifdefs the (only) calling code in privcmd accordingly. >>> >>> Signed-off-by: Paul Durrant >>> --- >>> Cc: Boris Ostrovsky >>> Cc: Juergen Gross >>> Cc: Thomas Gleixner >>> Cc: Ingo Molnar >>> Cc: "H. 
Peter Anvin" >>> --- >>> arch/x86/xen/mmu.c| 36 +--- >>> arch/x86/xen/mmu_pv.c | 11 +++ >>> drivers/xen/privcmd.c | 17 + >>> include/xen/xen-ops.h | 7 +++ >>> 4 files changed, 48 insertions(+), 23 deletions(-) >>> >>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c >>> index 3e15345abfe7..01837c36e293 100644 >>> --- a/arch/x86/xen/mmu.c >>> +++ b/arch/x86/xen/mmu.c >>> @@ -91,12 +91,12 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, >> pgtable_t token, >>> return 0; >>> } >>> >>> -static int do_remap_gfn(struct vm_area_struct *vma, >>> - unsigned long addr, >>> - xen_pfn_t *gfn, int nr, >>> - int *err_ptr, pgprot_t prot, >>> - unsigned domid, >>> - struct page **pages) >>> +int xen_remap_gfn(struct vm_area_struct *vma, >>> + unsigned long addr, >>> + xen_pfn_t *gfn, int nr, >>> + int *err_ptr, pgprot_t prot, >>> + unsigned int domid, >>> + struct page **pages) >>> { >>> int err = 0; >>> struct remap_data rmd; >>> @@ -166,36 +166,34 @@ static int do_remap_gfn(struct vm_area_struct >> *vma, >>> return err < 0 ? err : mapped; >>> } >>> >>> -int xen_remap_domain_gfn_range(struct vm_area_struct *vma, >>> - unsigned long addr, >>> - xen_pfn_t gfn, int nr, >>> - pgprot_t prot, unsigned domid, >>> - struct page **pages) >>> -{ >>> - return do_remap_gfn(vma, addr, , nr, NULL, prot, domid, >> pages); >>> -} >>> -EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_range); >>> - >>> int xen_remap_domain_gfn_array(struct vm_area_struct *vma, >>>unsigned long addr, >>>xen_pfn_t *gfn, int nr, >>>int *err_ptr, pgprot_t prot, >>>unsigned domid, struct page **pages) >>> { >>> + if (xen_feature(XENFEAT_auto_translated_physmap)) >>> + return xen_xlate_remap_gfn_array(vma, addr, gfn, nr, >> err_ptr, >>> +prot, domid, pages); >>> + >>> /* We BUG_ON because it's a programmer error to pass a NULL >> err_ptr, >>> * and the consequences later is quite hard to detect what the actual >>> * cause of "wrong memory was mapped in". 
>>> */ >>> BUG_ON(err_ptr == NULL); >>> - return do_remap_gfn(vma, addr, gfn, nr, err_ptr, prot, domid, >> pages); >>> + return xen_remap_gfn(vma, addr, gfn, nr, err_ptr, prot, domid, >>> +pages); >>> } >>> EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_array); >>> >>> /* Returns: 0 success */ >>> int xen_unmap_domain_gfn_range(struct vm_area_struct *vma, >>> - int numpgs, struct page **pages) >>> + int nr, struct page **pages) >>> { >>> - if (!pages || !xen_feature(XENFEAT_auto_translated_physmap)) >>> + if (xen_feature(XENFEAT_auto_translated_physmap)) >>> + return xen_xlate_unmap_gfn_range(vma, nr, pages); >>> + >>> + if (!pages) >>> return 0; >>> >>> return -EINVAL; >>> diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c >>> index 71495f1a86d7..4974d8a6c2b4 100644 >>> --- a/arch/x86/xen/mmu_pv.c >>> +++ b/arch/x86/xen/mmu_pv.c >>> @@ -2670,3 +2670,14 @@ phys_addr_t paddr_vmcoreinfo_note(void) >>> return __pa(vmcoreinfo_note); >>> } >>> #endif /* CONFIG_KEXEC_CORE */ >>> + >>> +int xen_remap_domain_gfn_range(struct vm_area_struct *vma, >>> +
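[Editorial note: the control flow the patch introduces in `xen_unmap_domain_gfn_range()` can be sketched in isolation. This is a simplified, userspace-testable model: `xlate_unmap()` is a stub standing in for `xen_xlate_unmap_gfn_range()`, and the real code tests `xen_feature(XENFEAT_auto_translated_physmap)` rather than taking a flag argument.]

```c
#include <stdbool.h>

/* Stub for the HVM (auto-translated) unmap path. */
static int xlate_unmap(int nr, void **pages)
{
    (void)nr;
    (void)pages;
    return 0; /* pretend the xlate unmap succeeded */
}

/* Shape of the patched function: HVM domains are routed to the xlate
 * helper; for PV a NULL pages array is a no-op, and passing a page
 * array is a caller error. */
static int unmap_gfn_range(bool auto_translated, int nr, void **pages)
{
    if (auto_translated)
        return xlate_unmap(nr, pages);

    if (!pages)
        return 0;   /* PV, nothing mapped via ballooned pages */

    return -22;     /* -EINVAL: PV callers must not pass pages */
}
```

This mirrors the review point above: using the PV-specific `HYPERVISOR_mmu_update` path is incorrect when `XENFEAT_auto_translated_physmap` is set, so the feature check must gate the dispatch.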
Re: [Xen-devel] [PATCH v3 for-next 4/4] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
> -Original Message- > From: Julien Grall [mailto:julien.gr...@linaro.org] > Sent: 01 November 2017 14:03 > To: xen-devel@lists.xen.org > Cc: Julien Grall; Stefano Stabellini > ; Julien Grall ; Andrew > Cooper ; George Dunlap > ; Ian Jackson ; Jan > Beulich ; Konrad Rzeszutek Wilk > ; Tim (Xen.org) ; Wei Liu > ; Razvan Cojocaru ; > Tamas K Lengyel ; Paul Durrant > ; Boris Ostrovsky ; > Suravee Suthikulpanit ; Jun Nakajima > ; Kevin Tian ; George > Dunlap ; Gang Wei ; > Shane Wang > Subject: [PATCH v3 for-next 4/4] xen: Convert __page_to_mfn and > __mfn_to_page to use typesafe MFN > > Most of the users of page_to_mfn and mfn_to_page are either overriding > the macros to make them work with mfn_t or use mfn_x/_mfn because the > rest of the function use mfn_t. > > So make __page_to_mfn and __mfn_to_page return mfn_t by default. > > Only reasonable clean-ups are done in this patch because it is > already quite big. So some of the files now override page_to_mfn and > mfn_to_page to avoid using mfn_t. > > Lastly, domain_page_to_mfn is also converted to use mfn_t given that > most of the callers are now switched to _mfn(domain_page_to_mfn(...)). > > Signed-off-by: Julien Grall > emulate bits... Reviewed-by: Paul Durrant > --- > > Andrew suggested to drop IS_VALID_PAGE in xen/tmem_xen.h. His > comment > was: > > "/sigh This is tautological. The definition of a "valid mfn" in this > case is one for which we have frametable entry, and by having a struct > page_info in our hands, this is by definition true (unless you have a > wild pointer, at which point your bug is elsewhere). > > IS_VALID_PAGE() is only ever used in assertions and never usefully, so > instead I would remove it entirely rather than trying to fix it up." > > I can remove the function in a separate patch at the begining of the > series if Konrad (TMEM maintainer) is happy with that. 
> > Cc: Stefano Stabellini > Cc: Julien Grall > Cc: Andrew Cooper > Cc: George Dunlap > Cc: Ian Jackson > Cc: Jan Beulich > Cc: Konrad Rzeszutek Wilk > Cc: Tim Deegan > Cc: Wei Liu > Cc: Razvan Cojocaru > Cc: Tamas K Lengyel > Cc: Paul Durrant > Cc: Boris Ostrovsky > Cc: Suravee Suthikulpanit > Cc: Jun Nakajima > Cc: Kevin Tian > Cc: George Dunlap > Cc: Gang Wei > Cc: Shane Wang > > Changes in v3: > - Rebase on the latest staging and fix some conflicts. Tags > haven't be retained. > - Switch the printf format to PRI_mfn > > Changes in v2: > - Some part have been moved in separate patch > - Remove one spurious comment > - Convert domain_page_to_mfn to use mfn_t > --- > xen/arch/arm/domain_build.c | 2 -- > xen/arch/arm/kernel.c | 2 +- > xen/arch/arm/mem_access.c | 2 +- > xen/arch/arm/mm.c | 8 > xen/arch/arm/p2m.c | 10 ++ > xen/arch/x86/cpu/vpmu.c | 4 ++-- > xen/arch/x86/domain.c | 21 +++-- > xen/arch/x86/domain_page.c | 6 +++--- > xen/arch/x86/domctl.c | 2 +- > xen/arch/x86/hvm/dm.c | 2 +- > xen/arch/x86/hvm/dom0_build.c | 6 +++--- > xen/arch/x86/hvm/emulate.c | 6 +++--- > xen/arch/x86/hvm/hvm.c | 16 > xen/arch/x86/hvm/ioreq.c| 6 +++--- > xen/arch/x86/hvm/stdvga.c | 2 +- > xen/arch/x86/hvm/svm/svm.c | 4 ++-- > xen/arch/x86/hvm/viridian.c | 6 +++--- > xen/arch/x86/hvm/vmx/vmcs.c | 2 +- > xen/arch/x86/hvm/vmx/vmx.c | 10 +- > xen/arch/x86/hvm/vmx/vvmx.c | 6 +++--- > xen/arch/x86/mm.c | 6 -- > xen/arch/x86/mm/guest_walk.c| 6 +++--- > xen/arch/x86/mm/hap/guest_walk.c| 2 +- > xen/arch/x86/mm/hap/hap.c | 6 -- > xen/arch/x86/mm/hap/nested_ept.c| 2 +-
[Xen-devel] [PATCH] x86/paravirt: Add kernel parameter to choose paravirt lock type
Currently, there are 3 different lock types that can be chosen for the x86 architecture:

 - qspinlock
 - pvqspinlock
 - unfair lock

One of the above lock types will be chosen at boot time depending on a number of different factors.

Ideally, the hypervisors should be able to pick the best-performing lock type for the current VM configuration. That is not currently the case, as the performance of each lock type is affected by many different factors, such as the number of vCPUs in the VM, the amount of vCPU over-commitment, the CPU type and so on.

Generally speaking, an unfair lock performs well for VMs with a small number of vCPUs. Native qspinlock may perform better than pvqspinlock if there is vCPU pinning and no vCPU over-commitment.

This patch adds a new kernel parameter to allow administrators to choose the paravirt spinlock type to be used. VM administrators can experiment with the different lock types and choose one that best suits their needs, if they want to. Hypervisor developers can also use it to experiment with different lock types so that they can come up with a better algorithm to pick the best lock type.

The hypervisor paravirt spinlock code will override this new parameter in determining if pvqspinlock should be used. The parameter, however, will override Xen's xen_nopvspin in terms of disabling the unfair lock.

Signed-off-by: Waiman Long --- Documentation/admin-guide/kernel-parameters.txt | 7 + arch/x86/include/asm/paravirt.h | 9 ++ arch/x86/kernel/kvm.c | 4 +++ arch/x86/kernel/paravirt.c | 40 - arch/x86/xen/spinlock.c | 6 ++-- 5 files changed, 62 insertions(+), 4 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index f7df49d..c98d9c7 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -3275,6 +3275,13 @@ [KNL] Number of legacy pty's. Overwrites compiled-in default number. 
+ pvlock_type=[X86,PV_OPS] + Specify the paravirt spinlock type to be used. + Options are: + queued - native queued spinlock + pv - paravirt queued spinlock + unfair - simple TATAS unfair lock + quiet [KNL] Disable most log messages r128= [HW,DRM] diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h index 12deec7..941a046 100644 --- a/arch/x86/include/asm/paravirt.h +++ b/arch/x86/include/asm/paravirt.h @@ -690,6 +690,15 @@ static __always_inline bool pv_vcpu_is_preempted(long cpu) #endif /* SMP && PARAVIRT_SPINLOCKS */ +enum pv_spinlock_type { + locktype_auto, + locktype_queued, + locktype_paravirt, + locktype_unfair, +}; + +extern enum pv_spinlock_type pv_spinlock_type; + #ifdef CONFIG_X86_32 #define PV_SAVE_REGS "pushl %ecx; pushl %edx;" #define PV_RESTORE_REGS "popl %edx; popl %ecx;" diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index 8bb9594..3a5d3ec4 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -646,6 +646,10 @@ void __init kvm_spinlock_init(void) if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) return; + if ((pv_spinlock_type == locktype_queued) || + (pv_spinlock_type == locktype_unfair)) + return; + __pv_init_lock_hash(); pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath; pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock); diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c index 041096b..ca35cd3 100644 --- a/arch/x86/kernel/paravirt.c +++ b/arch/x86/kernel/paravirt.c @@ -115,11 +115,48 @@ unsigned paravirt_patch_jmp(void *insnbuf, const void *target, return 5; } +/* + * The kernel argument "pvlock_type=" can be used to explicitly specify + * which type of spinlocks to be used. 
Currently, there are 3 options: + * 1) queued - the native queued spinlock + * 2) pv - the paravirt queued spinlock (if CONFIG_PARAVIRT_SPINLOCKS) + * 3) unfair - the simple TATAS unfair lock + * + * If this argument is not specified, the kernel will automatically choose + * an appropriate one depending on X86_FEATURE_HYPERVISOR and hypervisor + * specific settings. + */ +enum pv_spinlock_type __read_mostly pv_spinlock_type = locktype_auto; + +static int __init pvlock_setup(char *s) +{ + if (!s) + return -EINVAL; + + if (!strcmp(s, "queued")) + pv_spinlock_type = locktype_queued; + else if (!strcmp(s, "pv")) + pv_spinlock_type = locktype_paravirt; + else if (!strcmp(s, "unfair")) + pv_spinlock_type = locktype_unfair;
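[Editorial note: the posted `pvlock_setup()` is cut off above. The following is a hedged, userspace-testable sketch of its strcmp dispatch, using the enum names from the patch; in the kernel this would be registered via `early_param("pvlock_type", ...)`, which is not shown here.]

```c
#include <string.h>

/* Lock-type enum as declared in the patch's paravirt.h hunk. */
enum pv_spinlock_type {
    locktype_auto,
    locktype_queued,
    locktype_paravirt,
    locktype_unfair,
};

/* Map the "pvlock_type=" argument string to a lock type; anything
 * unrecognised (or a missing value) keeps the automatic selection. */
static enum pv_spinlock_type parse_pvlock_type(const char *s)
{
    if (!s)
        return locktype_auto;
    if (!strcmp(s, "queued"))
        return locktype_queued;
    if (!strcmp(s, "pv"))
        return locktype_paravirt;
    if (!strcmp(s, "unfair"))
        return locktype_unfair;
    return locktype_auto;
}
```

The kvm.c hunk above then consults the resulting `pv_spinlock_type`: if the administrator forced `queued` or `unfair`, `kvm_spinlock_init()` returns early and never installs the paravirt slowpath hooks.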
[Xen-devel] [PATCH v1 5/6] xl: add vkb config parser and CLI
From: Oleksandr GrytsovSigned-off-by: Oleksandr Grytsov --- tools/xl/Makefile | 2 +- tools/xl/xl.h | 3 ++ tools/xl/xl_cmdtable.c | 15 ++ tools/xl/xl_parse.c| 75 +- tools/xl/xl_parse.h| 2 +- tools/xl/xl_vkb.c | 142 + 6 files changed, 236 insertions(+), 3 deletions(-) create mode 100644 tools/xl/xl_vkb.c diff --git a/tools/xl/Makefile b/tools/xl/Makefile index 66bdbde..2769295 100644 --- a/tools/xl/Makefile +++ b/tools/xl/Makefile @@ -22,7 +22,7 @@ XL_OBJS += xl_vtpm.o xl_block.o xl_nic.o xl_usb.o XL_OBJS += xl_sched.o xl_pci.o xl_vcpu.o xl_cdrom.o xl_mem.o XL_OBJS += xl_info.o xl_console.o xl_misc.o XL_OBJS += xl_vmcontrol.o xl_saverestore.o xl_migrate.o -XL_OBJS += xl_vdispl.o xl_vsnd.o +XL_OBJS += xl_vdispl.o xl_vsnd.o xl_vkb.o $(XL_OBJS): CFLAGS += $(CFLAGS_libxentoollog) $(XL_OBJS): CFLAGS += $(CFLAGS_XL) diff --git a/tools/xl/xl.h b/tools/xl/xl.h index 703caa6..826e9c1 100644 --- a/tools/xl/xl.h +++ b/tools/xl/xl.h @@ -170,6 +170,9 @@ int main_vtpmdetach(int argc, char **argv); int main_vdisplattach(int argc, char **argv); int main_vdispllist(int argc, char **argv); int main_vdispldetach(int argc, char **argv); +int main_vkbattach(int argc, char **argv); +int main_vkblist(int argc, char **argv); +int main_vkbdetach(int argc, char **argv); int main_vsndattach(int argc, char **argv); int main_vsndlist(int argc, char **argv); int main_vsnddetach(int argc, char **argv); diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c index 8e162ce..8f076d0 100644 --- a/tools/xl/xl_cmdtable.c +++ b/tools/xl/xl_cmdtable.c @@ -378,6 +378,21 @@ struct cmd_spec cmd_table[] = { "Destroy a domain's virtual TPM device", " ", }, +{ "vkb-attach", + _vkbattach, 1, 1, + "Create a new virtual keyboard device", + " [id=] [backend-type=] [backend=]", +}, +{ "vkb-list", + _vkblist, 0, 0, + "List virtual keyboard devices for a domain", + " ", +}, +{ "vkb-detach", + _vkbdetach, 0, 1, + "Destroy a domain's virtual keyboard device", + " ", +}, { "vdispl-attach", _vdisplattach, 1, 1, 
"Create a new virtual display device", diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c index d4c2efb..e018337 100644 --- a/tools/xl/xl_parse.c +++ b/tools/xl/xl_parse.c @@ -1099,6 +1099,77 @@ static void parse_vsnd_config(const XLU_Config *config, } } +int parse_vkb_config(libxl_device_vkb *vkb, char *token) +{ +char *oparg; + +if (MATCH_OPTION("backend", token, oparg)) { +vkb->backend_domname = strdup(oparg); +} else if (MATCH_OPTION("backend-type", token, oparg)) { +libxl_vkb_backend backend_type; +if (libxl_vkb_backend_from_string(oparg, _type)) { +fprintf(stderr, "Unknown backend_type \"%s\" in vkb spec\n", +oparg); +return -1; +} +vkb->backend_type = backend_type; +} else if (MATCH_OPTION("id", token, oparg)) { +vkb->id = strdup(oparg); +} else { +fprintf(stderr, "Unknown string \"%s\" in vkb spec\n", token); +return -1; +} + +return 0; +} + +static void parse_vkb_list(const XLU_Config *config, + libxl_domain_config *d_config) +{ +XLU_ConfigList *vkbs; +const char *item; +char *buf = NULL; +int rc; + +if (!xlu_cfg_get_list (config, "vkb", , 0, 0)) { +int entry = 0; +while ((item = xlu_cfg_get_listitem(vkbs, entry)) != NULL) { +libxl_device_vkb *vkb; +char *p; + +vkb = ARRAY_EXTEND_INIT(d_config->vkbs, +d_config->num_vkbs, +libxl_device_vkb_init); + +buf = strdup(item); + +p = strtok (buf, ","); +while (p != NULL) +{ +while (*p == ' ') p++; + +rc = parse_vkb_config(vkb, p); +if (rc) goto out; + +p = strtok (NULL, ","); +} + +if (vkb->backend_type == LIBXL_VKB_BACKEND_UNKNOWN) { +fprintf(stderr, "backend-type should be set in vkb spec\n"); +rc = -1; goto out; +} + +entry++; +} +} + +rc = 0; + +out: +free(buf); +if (rc) exit(EXIT_FAILURE); +} + void parse_config_data(const char *config_source, const char *config_data, int config_len, @@ -2419,7 +2490,9 @@ skip_usbdev: "Unknown gic_version \"%s\" specified\n", buf); exit(-ERROR_FAIL); } - } +} + +parse_vkb_list(config, d_config); xlu_cfg_destroy(config); } diff --git a/tools/xl/xl_parse.h 
b/tools/xl/xl_parse.h index 9a948ea..19f453a
[Xen-devel] [PATCH v1 2/6] libxl: fix vkb XS entry and type
From: Oleksandr Grytsovvkb has vkbd name in XS. Signed-off-by: Oleksandr Grytsov Acked-by: Wei Liu --- tools/libxl/libxl_vkb.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tools/libxl/libxl_vkb.c b/tools/libxl/libxl_vkb.c index 0d01262..ea6fca8 100644 --- a/tools/libxl/libxl_vkb.c +++ b/tools/libxl/libxl_vkb.c @@ -51,7 +51,7 @@ out: return AO_INPROGRESS; } -static LIBXL_DEFINE_UPDATE_DEVID(vkb, "vkb") +static LIBXL_DEFINE_UPDATE_DEVID(vkb, "vkbd") #define libxl__add_vkbs NULL #define libxl_device_vkb_list NULL @@ -59,7 +59,7 @@ static LIBXL_DEFINE_UPDATE_DEVID(vkb, "vkb") LIBXL_DEFINE_DEVICE_REMOVE(vkb) -DEFINE_DEVICE_TYPE_STRUCT(vkb, +DEFINE_DEVICE_TYPE_STRUCT_X(vkb, vkb, vkbd .skip_attach = 1 ); -- 2.7.4 ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
[Xen-devel] [PATCH v1 4/6] libxl: vkb add list and info functions
From: Oleksandr GrytsovSigned-off-by: Oleksandr Grytsov --- tools/libxl/libxl.h | 10 tools/libxl/libxl_types.idl | 11 tools/libxl/libxl_utils.h | 3 ++ tools/libxl/libxl_vkb.c | 129 ++-- 4 files changed, 150 insertions(+), 3 deletions(-) diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h index acb73ce..f2f8442 100644 --- a/tools/libxl/libxl.h +++ b/tools/libxl/libxl.h @@ -1950,6 +1950,16 @@ int libxl_device_vkb_destroy(libxl_ctx *ctx, uint32_t domid, const libxl_asyncop_how *ao_how) LIBXL_EXTERNAL_CALLERS_ONLY; +libxl_device_vkb *libxl_device_vkb_list(libxl_ctx *ctx, +uint32_t domid, int *num) +LIBXL_EXTERNAL_CALLERS_ONLY; +void libxl_device_vkb_list_free(libxl_device_vkb* list, int num) +LIBXL_EXTERNAL_CALLERS_ONLY; +int libxl_device_vkb_getinfo(libxl_ctx *ctx, uint32_t domid, + libxl_device_vkb *vkb, + libxl_vkbinfo *vkbinfo) + LIBXL_EXTERNAL_CALLERS_ONLY; + /* Framebuffer */ int libxl_device_vfb_add(libxl_ctx *ctx, uint32_t domid, libxl_device_vfb *vfb, const libxl_asyncop_how *ao_how) diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl index c3876a2..d19af46 100644 --- a/tools/libxl/libxl_types.idl +++ b/tools/libxl/libxl_types.idl @@ -1015,6 +1015,17 @@ libxl_vsndinfo = Struct("vsndinfo", [ ("pcms", Array(libxl_pcminfo, "num_vsnd_pcms")) ]) +libxl_vkbinfo = Struct("vkbinfo", [ +("backend", string), +("backend_id", uint32), +("frontend", string), +("frontend_id", uint32), +("devid", libxl_devid), +("state", integer), +("evtch", integer), +("rref", integer) +], dir=DIR_OUT) + # NUMA node characteristics: size and free are how much memory it has, and how # much of it is free, respectively. dists is an array of distances from this # node to each other node. 
diff --git a/tools/libxl/libxl_utils.h b/tools/libxl/libxl_utils.h index 5455752..44409af 100644 --- a/tools/libxl/libxl_utils.h +++ b/tools/libxl/libxl_utils.h @@ -79,6 +79,9 @@ int libxl_devid_to_device_vtpm(libxl_ctx *ctx, uint32_t domid, int libxl_devid_to_device_usbctrl(libxl_ctx *ctx, uint32_t domid, int devid, libxl_device_usbctrl *usbctrl); +int libxl_devid_to_device_vkb(libxl_ctx *ctx, uint32_t domid, + int devid, libxl_device_vkb *vkb); + int libxl_devid_to_device_vdispl(libxl_ctx *ctx, uint32_t domid, int devid, libxl_device_vdispl *vdispl); diff --git a/tools/libxl/libxl_vkb.c b/tools/libxl/libxl_vkb.c index 88ab186..72ae53d 100644 --- a/tools/libxl/libxl_vkb.c +++ b/tools/libxl/libxl_vkb.c @@ -13,6 +13,7 @@ */ #include "libxl_internal.h" +#include static int libxl__device_vkb_setdefault(libxl__gc *gc, uint32_t domid, libxl_device_vkb *vkb, bool hotplug) @@ -62,6 +63,45 @@ static int libxl__set_xenstore_vkb(libxl__gc *gc, uint32_t domid, return 0; } +static int libxl__vkb_from_xenstore(libxl__gc *gc, const char *libxl_path, +libxl_devid devid, +libxl_device_vkb *vkb) +{ +const char *be_path, *be_type, *fe_path; +int rc; + +vkb->devid = devid; + +rc = libxl__xs_read_mandatory(gc, XBT_NULL, + GCSPRINTF("%s/backend", libxl_path), + _path); +if (rc) goto out; + +rc = libxl__xs_read_mandatory(gc, XBT_NULL, + GCSPRINTF("%s/frontend", libxl_path), + _path); +if (rc) goto out; + +rc = libxl__xs_read_mandatory(gc, XBT_NULL, + GCSPRINTF("%s/backend-type", libxl_path), + _type); +if (rc) goto out; + +rc = libxl_vkb_backend_from_string(be_type, >backend_type); +if (rc) goto out; + +vkb->id = xs_read(CTX->xsh, XBT_NULL, GCSPRINTF("%s/id", fe_path), NULL); + +rc = libxl__backendpath_parse_domid(gc, be_path, >backend_domid); +if (rc) goto out; + +rc = 0; + +out: + +return rc; +} + int libxl_device_vkb_add(libxl_ctx *ctx, uint32_t domid, libxl_device_vkb *vkb, const libxl_asyncop_how *ao_how) { @@ -79,19 +119,102 @@ out: return AO_INPROGRESS; } +int 
libxl_devid_to_device_vkb(libxl_ctx *ctx, uint32_t domid, + int devid, libxl_device_vkb *vkb) +{ +GC_INIT(ctx); + +libxl_device_vkb *vkbs = NULL; +int n, i; +int rc; + +libxl_device_vkb_init(vkb); + +vkbs = libxl__device_list(gc, __vkb_devtype, domid,
[Xen-devel] [PATCH v1 0/6] libxl: create standalone vkb device
From: Oleksandr GrytsovChanges since initial: * add setting backend-type to xenstore * add id field to indentify the vkb device on backend side Oleksandr Grytsov (6): libxl: move vkb device to libxl_vkb.c libxl: fix vkb XS entry and type libxl: add backend type and id to vkb libxl: vkb add list and info functions xl: add vkb config parser and CLI docs: add vkb device to xl.cfg and xl docs/man/xl.cfg.pod.5.in| 28 ++ docs/man/xl.pod.1.in| 22 + tools/libxl/Makefile| 1 + tools/libxl/libxl.h | 10 ++ tools/libxl/libxl_console.c | 53 --- tools/libxl/libxl_create.c | 3 + tools/libxl/libxl_dm.c | 1 + tools/libxl/libxl_types.idl | 19 tools/libxl/libxl_utils.h | 3 + tools/libxl/libxl_vkb.c | 226 tools/xl/Makefile | 2 +- tools/xl/xl.h | 3 + tools/xl/xl_cmdtable.c | 15 +++ tools/xl/xl_parse.c | 75 ++- tools/xl/xl_parse.h | 2 +- tools/xl/xl_vkb.c | 142 16 files changed, 549 insertions(+), 56 deletions(-) create mode 100644 tools/libxl/libxl_vkb.c create mode 100644 tools/xl/xl_vkb.c -- 2.7.4 ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
[Xen-devel] [PATCH v1 3/6] libxl: add backend type and id to vkb
From: Oleksandr GrytsovNew field backend_type is added to vkb device in order to have QEMU and user space backend simultaneously. Each vkb backend shall read appropriate XS entry and service only own frontends. Id is a string field which used by the backend to indentify the frontend. Signed-off-by: Oleksandr Grytsov --- tools/libxl/libxl_create.c | 3 +++ tools/libxl/libxl_dm.c | 1 + tools/libxl/libxl_types.idl | 8 tools/libxl/libxl_vkb.c | 33 - 4 files changed, 44 insertions(+), 1 deletion(-) diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c index f813114..60d8686 100644 --- a/tools/libxl/libxl_create.c +++ b/tools/libxl/libxl_create.c @@ -1376,6 +1376,9 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev, for (i = 0; i < d_config->num_vfbs; i++) { libxl__device_add(gc, domid, __vfb_devtype, _config->vfbs[i]); +} + +for (i = 0; i < d_config->num_vkbs; i++) { libxl__device_add(gc, domid, __vkb_devtype, _config->vkbs[i]); } diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c index 98f89a9..f07de35 100644 --- a/tools/libxl/libxl_dm.c +++ b/tools/libxl/libxl_dm.c @@ -1728,6 +1728,7 @@ static int libxl__vfb_and_vkb_from_hvm_guest_config(libxl__gc *gc, vkb->backend_domid = 0; vkb->devid = 0; + return 0; } diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl index cd0c06f..c3876a2 100644 --- a/tools/libxl/libxl_types.idl +++ b/tools/libxl/libxl_types.idl @@ -240,6 +240,12 @@ libxl_checkpointed_stream = Enumeration("checkpointed_stream", [ (2, "COLO"), ]) +libxl_vkb_backend = Enumeration("vkb_backend", [ +(0, "UNKNOWN"), +(1, "QEMU"), +(2, "LINUX") +]) + # # Complex libxl types # @@ -603,6 +609,8 @@ libxl_device_vkb = Struct("device_vkb", [ ("backend_domid", libxl_domid), ("backend_domname", string), ("devid", libxl_devid), +("backend_type", libxl_vkb_backend), +("id", string) ]) libxl_device_disk = Struct("device_disk", [ diff --git a/tools/libxl/libxl_vkb.c b/tools/libxl/libxl_vkb.c index 
ea6fca8..88ab186 100644 --- a/tools/libxl/libxl_vkb.c +++ b/tools/libxl/libxl_vkb.c @@ -17,6 +17,10 @@ static int libxl__device_vkb_setdefault(libxl__gc *gc, uint32_t domid, libxl_device_vkb *vkb, bool hotplug) { +if (vkb->backend_type == LIBXL_VKB_BACKEND_UNKNOWN) { +vkb->backend_type = LIBXL_VKB_BACKEND_QEMU; +} + return libxl__resolve_domid(gc, vkb->backend_domname, >backend_domid); } @@ -34,6 +38,30 @@ static int libxl__device_from_vkb(libxl__gc *gc, uint32_t domid, return 0; } +static int libxl__device_vkb_dm_needed(libxl_device_vkb *vkb, uint32_t domid) +{ + if (vkb->backend_type == LIBXL_VKB_BACKEND_QEMU) { +return 1; + } + +return 0; +} + +static int libxl__set_xenstore_vkb(libxl__gc *gc, uint32_t domid, + libxl_device_vkb *vkb, + flexarray_t *back, flexarray_t *front, + flexarray_t *ro_front) +{ +if (vkb->id) { +flexarray_append_pair(front, "id", vkb->id); +} + +flexarray_append_pair(back, "backend-type", + (char *)libxl_vkb_backend_to_string(vkb->backend_type)); + +return 0; +} + int libxl_device_vkb_add(libxl_ctx *ctx, uint32_t domid, libxl_device_vkb *vkb, const libxl_asyncop_how *ao_how) { @@ -60,7 +88,10 @@ static LIBXL_DEFINE_UPDATE_DEVID(vkb, "vkbd") LIBXL_DEFINE_DEVICE_REMOVE(vkb) DEFINE_DEVICE_TYPE_STRUCT_X(vkb, vkb, vkbd -.skip_attach = 1 +.skip_attach = 1, +.dm_needed = (device_dm_needed_fn_t)libxl__device_vkb_dm_needed, +.set_xenstore_config = (device_set_xenstore_config_fn_t) + libxl__set_xenstore_vkb ); /* -- 2.7.4 ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
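[Editorial note: the defaulting rule `libxl__device_vkb_setdefault()` gains in this patch is small enough to restate as a standalone sketch. Enum values mirror the `libxl_vkb_backend` enumeration added to the IDL; the helper name is illustrative.]

```c
/* Backend types as declared in the libxl_types.idl hunk above. */
enum vkb_backend {
    VKB_BACKEND_UNKNOWN,
    VKB_BACKEND_QEMU,
    VKB_BACKEND_LINUX,
};

/* A vkb device whose backend type was left unset falls back to the
 * QEMU backend, preserving the pre-patch behaviour; an explicit
 * LINUX (user-space) backend is honoured as given. */
static enum vkb_backend vkb_effective_backend(enum vkb_backend b)
{
    return b == VKB_BACKEND_UNKNOWN ? VKB_BACKEND_QEMU : b;
}
```

This default also drives `libxl__device_vkb_dm_needed()` above: only QEMU-backed vkb devices require a device model to be spawned.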
[Xen-devel] [PATCH v1 1/6] libxl: move vkb device to libxl_vkb.c
From: Oleksandr GrytsovLogically it is better to move vkb to separate file as vkb device used not only by vfb and console. Signed-off-by: Oleksandr Grytsov Acked-by: Wei Liu --- tools/libxl/Makefile| 1 + tools/libxl/libxl_console.c | 53 - tools/libxl/libxl_vkb.c | 72 + 3 files changed, 73 insertions(+), 53 deletions(-) create mode 100644 tools/libxl/libxl_vkb.c diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile index 2d52435..df1b710 100644 --- a/tools/libxl/Makefile +++ b/tools/libxl/Makefile @@ -139,6 +139,7 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \ libxl_vtpm.o libxl_nic.o libxl_disk.o libxl_console.o \ libxl_cpupool.o libxl_mem.o libxl_sched.o libxl_tmem.o \ libxl_9pfs.o libxl_domain.o libxl_vdispl.o libxl_vsnd.o \ + libxl_vkb.o \ $(LIBXL_OBJS-y) LIBXL_OBJS += libxl_genid.o LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o diff --git a/tools/libxl/libxl_console.c b/tools/libxl/libxl_console.c index 624bd01..09facaf 100644 --- a/tools/libxl/libxl_console.c +++ b/tools/libxl/libxl_console.c @@ -583,45 +583,6 @@ int libxl_device_channel_getinfo(libxl_ctx *ctx, uint32_t domid, return rc; } -static int libxl__device_vkb_setdefault(libxl__gc *gc, uint32_t domid, -libxl_device_vkb *vkb, bool hotplug) -{ -return libxl__resolve_domid(gc, vkb->backend_domname, >backend_domid); -} - -static int libxl__device_from_vkb(libxl__gc *gc, uint32_t domid, - libxl_device_vkb *vkb, - libxl__device *device) -{ -device->backend_devid = vkb->devid; -device->backend_domid = vkb->backend_domid; -device->backend_kind = LIBXL__DEVICE_KIND_VKBD; -device->devid = vkb->devid; -device->domid = domid; -device->kind = LIBXL__DEVICE_KIND_VKBD; - -return 0; -} - -int libxl_device_vkb_add(libxl_ctx *ctx, uint32_t domid, libxl_device_vkb *vkb, - const libxl_asyncop_how *ao_how) -{ -AO_CREATE(ctx, domid, ao_how); -int rc; - -rc = libxl__device_add(gc, domid, __vkb_devtype, vkb); -if (rc) { -LOGD(ERROR, domid, "Unable to add vkb device"); -goto 
out; -} - -out: -libxl__ao_complete(egc, ao, rc); -return AO_INPROGRESS; -} - -static LIBXL_DEFINE_UPDATE_DEVID(vkb, "vkb") - static int libxl__device_vfb_setdefault(libxl__gc *gc, uint32_t domid, libxl_device_vfb *vfb, bool hotplug) { @@ -706,8 +667,6 @@ static int libxl__set_xenstore_vfb(libxl__gc *gc, uint32_t domid, } /* The following functions are defined: - * libxl_device_vkb_remove - * libxl_device_vkb_destroy * libxl_device_vfb_remove * libxl_device_vfb_destroy */ @@ -716,18 +675,6 @@ static int libxl__set_xenstore_vfb(libxl__gc *gc, uint32_t domid, * 1. add support for secondary consoles to xenconsoled * 2. dynamically add/remove qemu chardevs via qmp messages. */ -/* vkb */ - -#define libxl__add_vkbs NULL -#define libxl_device_vkb_list NULL -#define libxl_device_vkb_compare NULL - -LIBXL_DEFINE_DEVICE_REMOVE(vkb) - -DEFINE_DEVICE_TYPE_STRUCT(vkb, -.skip_attach = 1 -); - #define libxl__add_vfbs NULL #define libxl_device_vfb_list NULL #define libxl_device_vfb_compare NULL diff --git a/tools/libxl/libxl_vkb.c b/tools/libxl/libxl_vkb.c new file mode 100644 index 000..0d01262 --- /dev/null +++ b/tools/libxl/libxl_vkb.c @@ -0,0 +1,72 @@ +/* + * Copyright (C) 2016 EPAM Systems Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU Lesser General Public License as published + * by the Free Software Foundation; version 2.1 only. with the special + * exception on linking described in file LICENSE. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU Lesser General Public License for more details. 
+ */ + +#include "libxl_internal.h" + +static int libxl__device_vkb_setdefault(libxl__gc *gc, uint32_t domid, +libxl_device_vkb *vkb, bool hotplug) +{ +return libxl__resolve_domid(gc, vkb->backend_domname, >backend_domid); +} + +static int libxl__device_from_vkb(libxl__gc *gc, uint32_t domid, + libxl_device_vkb *vkb, + libxl__device *device) +{ +device->backend_devid = vkb->devid; +device->backend_domid = vkb->backend_domid; +device->backend_kind = LIBXL__DEVICE_KIND_VKBD; +device->devid = vkb->devid; +device->domid = domid; +
[Xen-devel] [PATCH v1 6/6] docs: add vkb device to xl.cfg and xl
From: Oleksandr Grytsov

Signed-off-by: Oleksandr Grytsov --- docs/man/xl.cfg.pod.5.in | 28 docs/man/xl.pod.1.in | 22 ++ 2 files changed, 50 insertions(+) diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in index 4948dd7..1859572 100644 --- a/docs/man/xl.cfg.pod.5.in +++ b/docs/man/xl.cfg.pod.5.in @@ -1317,6 +1317,34 @@ I =back +=over 4 + +=item
[Xen-devel] [PATCH v1 0/5] libxl: add PV sound device
From: Oleksandr Grytsov

This patch set adds PV sound device support to xl.cfg and xl. See sndif.h for protocol implementation details. Changes since initial: * fix code style * change unique-id from int to string (to make id more user readable) Oleksandr Grytsov (5): libxl: add PV sound device libxl: add vsnd list and info xl: add PV sound config parser xl: add vsnd CLI commands docs: add PV sound device config docs/man/xl.cfg.pod.5.in | 150 docs/man/xl.pod.1.in | 30 ++ tools/libxl/Makefile | 2 +- tools/libxl/libxl.h | 24 ++ tools/libxl/libxl_create.c | 1 + tools/libxl/libxl_internal.h | 1 + tools/libxl/libxl_types.idl | 83 + tools/libxl/libxl_types_internal.idl | 1 + tools/libxl/libxl_utils.h| 3 + tools/libxl/libxl_vsnd.c | 699 +++ tools/xl/Makefile| 2 +- tools/xl/xl.h| 3 + tools/xl/xl_cmdtable.c | 15 + tools/xl/xl_parse.c | 246 tools/xl/xl_parse.h | 1 + tools/xl/xl_vsnd.c | 203 ++ 16 files changed, 1462 insertions(+), 2 deletions(-) create mode 100644 tools/libxl/libxl_vsnd.c create mode 100644 tools/xl/xl_vsnd.c -- 2.7.4 ___ Xen-devel mailing list Xen-devel@lists.xen.org https://lists.xen.org/xen-devel
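For readers skimming the series: based purely on the option names handled by the parser in patch 3/5 (card/pcm/stream sections; backend, short-name, long-name, name, id, type, sample-rates, sample-formats, channels-min, channels-max, buffer-size), an xl.cfg entry might look roughly like the sketch below. This is an illustrative reconstruction, not taken from the series' documentation patch; the exact key spellings and value formats (e.g. lowercase "p" for a playback stream, lowercase PCM format names) are assumptions.

```
# Illustrative only -- consult the docs patch (5/5) for the real grammar.
vsnd = [ 'card, backend=0, short-name=snd0, long-name=Virtual sound card,
          pcm, name=main,
          stream, id=stream0, type=p,
          sample-rates=44100;48000, sample-formats=s16_le;s24_le,
          channels-min=1, channels-max=2, buffer-size=65536' ]
```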
[Xen-devel] [PATCH v1 1/5] libxl: add PV sound device
From: Oleksandr GrytsovAdd PV sound device described in sndif.h Signed-off-by: Oleksandr Grytsov --- tools/libxl/Makefile | 2 +- tools/libxl/libxl.h | 14 ++ tools/libxl/libxl_create.c | 1 + tools/libxl/libxl_internal.h | 1 + tools/libxl/libxl_types.idl | 64 +++ tools/libxl/libxl_types_internal.idl | 1 + tools/libxl/libxl_vsnd.c | 330 +++ 7 files changed, 412 insertions(+), 1 deletion(-) create mode 100644 tools/libxl/libxl_vsnd.c diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile index 49b2c63..2d52435 100644 --- a/tools/libxl/Makefile +++ b/tools/libxl/Makefile @@ -138,7 +138,7 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \ libxl_dom_suspend.o libxl_dom_save.o libxl_usb.o \ libxl_vtpm.o libxl_nic.o libxl_disk.o libxl_console.o \ libxl_cpupool.o libxl_mem.o libxl_sched.o libxl_tmem.o \ - libxl_9pfs.o libxl_domain.o libxl_vdispl.o \ + libxl_9pfs.o libxl_domain.o libxl_vdispl.o libxl_vsnd.o \ $(LIBXL_OBJS-y) LIBXL_OBJS += libxl_genid.o LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h index 7d853ca..7200d49 100644 --- a/tools/libxl/libxl.h +++ b/tools/libxl/libxl.h @@ -1913,6 +1913,20 @@ int libxl_device_vdispl_getinfo(libxl_ctx *ctx, uint32_t domid, libxl_vdisplinfo *vdisplinfo) LIBXL_EXTERNAL_CALLERS_ONLY; +/* Virtual sounds */ +int libxl_device_vsnd_add(libxl_ctx *ctx, uint32_t domid, + libxl_device_vsnd *vsnd, + const libxl_asyncop_how *ao_how) + LIBXL_EXTERNAL_CALLERS_ONLY; +int libxl_device_vsnd_remove(libxl_ctx *ctx, uint32_t domid, + libxl_device_vsnd *vsnd, + const libxl_asyncop_how *ao_how) + LIBXL_EXTERNAL_CALLERS_ONLY; +int libxl_device_vsnd_destroy(libxl_ctx *ctx, uint32_t domid, + libxl_device_vsnd *vsnd, + const libxl_asyncop_how *ao_how) + LIBXL_EXTERNAL_CALLERS_ONLY; + /* Keyboard */ int libxl_device_vkb_add(libxl_ctx *ctx, uint32_t domid, libxl_device_vkb *vkb, const libxl_asyncop_how *ao_how) diff --git a/tools/libxl/libxl_create.c 
b/tools/libxl/libxl_create.c index 0ef54d2..f813114 100644 --- a/tools/libxl/libxl_create.c +++ b/tools/libxl/libxl_create.c @@ -1449,6 +1449,7 @@ const struct libxl_device_type *device_type_tbl[] = { __pcidev_devtype, __dtdev_devtype, __vdispl_devtype, +__vsnd_devtype, NULL }; diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h index 8b71517..6b403dc 100644 --- a/tools/libxl/libxl_internal.h +++ b/tools/libxl/libxl_internal.h @@ -3575,6 +3575,7 @@ extern const struct libxl_device_type libxl__usbdev_devtype; extern const struct libxl_device_type libxl__pcidev_devtype; extern const struct libxl_device_type libxl__vdispl_devtype; extern const struct libxl_device_type libxl__p9_devtype; +extern const struct libxl_device_type libxl__vsnd_devtype; extern const struct libxl_device_type *device_type_tbl[]; diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl index 756e120..aa30196 100644 --- a/tools/libxl/libxl_types.idl +++ b/tools/libxl/libxl_types.idl @@ -793,6 +793,69 @@ libxl_device_vdispl = Struct("device_vdispl", [ ("connectors", Array(libxl_connector_param, "num_connectors")) ]) +libxl_vsnd_pcm_format = Enumeration("vsnd_pcm_format", [ +(1, "S8"), +(2, "U8"), +(3, "S16_LE"), +(4, "S16_BE"), +(5, "U16_LE"), +(6, "U16_BE"), +(7, "S24_LE"), +(8, "S24_BE"), +(9, "U24_LE"), +(10, "U24_BE"), +(11, "S32_LE"), +(12, "S32_BE"), +(13, "U32_LE"), +(14, "U32_BE"), +(15, "F32_LE"), +(16, "F32_BE"), +(17, "F64_LE"), +(18, "F64_BE"), +(19, "IEC958_SUBFRAME_LE"), +(20, "IEC958_SUBFRAME_BE"), +(21, "MU_LAW"), +(22, "A_LAW"), +(23, "IMA_ADPCM"), +(24, "MPEG"), +(25, "GSM") +]) + +libxl_vsnd_params = Struct("vsnd_params", [ +("sample_rates", Array(uint32, "num_sample_rates")), +("sample_formats", Array(libxl_vsnd_pcm_format, "num_sample_formats")), +("channels_min", uint32), +("channels_max", uint32), +("buffer_size", uint32) +]) + +libxl_vsnd_stream_type = Enumeration("vsnd_stream_type", [ +(1, "P"), +(2, "C") +]) + +libxl_vsnd_stream = 
Struct("vsnd_stream", [ +("id", string), +("type", libxl_vsnd_stream_type), +("params", libxl_vsnd_params) +]) +
[Xen-devel] [PATCH v1 3/5] xl: add PV sound config parser
From: Oleksandr Grytsov

Add config parser for virtual sound devices Signed-off-by: Oleksandr Grytsov --- tools/xl/xl_parse.c | 246 tools/xl/xl_parse.h | 1 + 2 files changed, 247 insertions(+) diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c index 0678fbc..e25d096 100644 --- a/tools/xl/xl_parse.c +++ b/tools/xl/xl_parse.c @@ -851,6 +851,250 @@ out: return rc; } +static int parse_vsnd_params(libxl_vsnd_params *params, char *token) +{ +char *oparg; +int i; + +if (MATCH_OPTION("sample-rates", token, oparg)) { +libxl_string_list rates = NULL; + +split_string_into_string_list(oparg, ";", &rates); + +params->num_sample_rates = libxl_string_list_length(&rates); +params->sample_rates = calloc(params->num_sample_rates, + sizeof(*params->sample_rates)); + +for (i = 0; i < params->num_sample_rates; i++) { +params->sample_rates[i] = strtoul(rates[i], NULL, 0); +} + +libxl_string_list_dispose(&rates); +} else if (MATCH_OPTION("sample-formats", token, oparg)) { +libxl_string_list formats = NULL; + +split_string_into_string_list(oparg, ";", &formats); + +params->num_sample_formats = libxl_string_list_length(&formats); +params->sample_formats = calloc(params->num_sample_formats, +sizeof(*params->sample_formats)); + +for (i = 0; i < params->num_sample_formats; i++) { +libxl_vsnd_pcm_format format; + +if (libxl_vsnd_pcm_format_from_string(formats[i], &format)) { +fprintf(stderr, "Invalid pcm format: %s\n", formats[i]); +exit(EXIT_FAILURE); +} + +params->sample_formats[i] = format; +} + +libxl_string_list_dispose(&formats); +} else if (MATCH_OPTION("channels-min", token, oparg)) { +params->channels_min = strtoul(oparg, NULL, 0); +} else if (MATCH_OPTION("channels-max", token, oparg)) { +params->channels_max = strtoul(oparg, NULL, 0); +} else if (MATCH_OPTION("buffer-size", token, oparg)) { +params->buffer_size = strtoul(oparg, NULL, 0); +} else { +return 1; +} + +return 0; +} + +static int parse_vsnd_pcm_stream(libxl_device_vsnd *vsnd, char *param) +{ +if (vsnd->num_vsnd_pcms == 0) { +fprintf(stderr, "No vsnd pcm device\n"); 
+return -1; +} + +libxl_vsnd_pcm *pcm = &vsnd->pcms[vsnd->num_vsnd_pcms - 1]; + +if (pcm->num_vsnd_streams == 0) { +fprintf(stderr, "No vsnd stream\n"); +return -1; +} + +libxl_vsnd_stream *stream = &pcm->streams[pcm->num_vsnd_streams - 1]; + +if (parse_vsnd_params(&stream->params, param)) { +char *oparg; + +if (MATCH_OPTION("id", param, oparg)) { +stream->id = strdup(oparg); +} else if (MATCH_OPTION("type", param, oparg)) { + +if (libxl_vsnd_stream_type_from_string(oparg, &stream->type)) { +fprintf(stderr, "Invalid stream type: %s\n", oparg); +return -1; +} +} else { +fprintf(stderr, "Invalid parameter: %s\n", param); +return -1; +} +} + +return 0; +} + +static int parse_vsnd_pcm_param(libxl_device_vsnd *vsnd, char *param) +{ +if (vsnd->num_vsnd_pcms == 0) { +fprintf(stderr, "No pcm device\n"); +return -1; +} + +libxl_vsnd_pcm *pcm = &vsnd->pcms[vsnd->num_vsnd_pcms - 1]; + +if (parse_vsnd_params(&pcm->params, param)) { +char *oparg; + +if (MATCH_OPTION("name", param, oparg)) { +pcm->name = strdup(oparg); +} else { +fprintf(stderr, "Invalid parameter: %s\n", param); +return -1; +} +} + +return 0; +} + +static int parse_vsnd_card_param(libxl_device_vsnd *vsnd, char *param) +{ +if (parse_vsnd_params(&vsnd->params, param)) { +char *oparg; + +if (MATCH_OPTION("backend", param, oparg)) { +vsnd->backend_domname = strdup(oparg); +} else if (MATCH_OPTION("short-name", param, oparg)) { +vsnd->short_name = strdup(oparg); +} else if (MATCH_OPTION("long-name", param, oparg)) { +vsnd->long_name = strdup(oparg); +} else { +fprintf(stderr, "Invalid parameter: %s\n", param); +return -1; +} +} + +return 0; +} + +static int parse_vsnd_create_item(libxl_device_vsnd *vsnd, const char *key) +{ +if (strcasecmp(key, "card") == 0) { + +} else if (strcasecmp(key, "pcm") == 0) { +ARRAY_EXTEND_INIT_NODEVID(vsnd->pcms, vsnd->num_vsnd_pcms, + libxl_vsnd_pcm_init); +} else if (strcasecmp(key, "stream") == 0) { +if (vsnd->num_vsnd_pcms == 0) { +ARRAY_EXTEND_INIT_NODEVID(vsnd->pcms, vsnd->num_vsnd_pcms, +
[Xen-devel] [PATCH v1 5/5] docs: add PV sound device config
From: Oleksandr Grytsov

Update documentation with virtual sound device Signed-off-by: Oleksandr Grytsov --- docs/man/xl.cfg.pod.5.in | 150 +++ docs/man/xl.pod.1.in | 30 ++ 2 files changed, 180 insertions(+) diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in index 247ae99..da78be6 100644 --- a/docs/man/xl.cfg.pod.5.in +++ b/docs/man/xl.cfg.pod.5.in @@ -1165,6 +1165,156 @@ connectors=id0:1920x1080;id1:800x600;id2:640x480 =back +=item
[Xen-devel] [PATCH v1 4/5] xl: add vsnd CLI commands
From: Oleksandr Grytsov

Add CLI commands to attach, detach and list virtual sound devices Signed-off-by: Oleksandr Grytsov Acked-by: Wei Liu --- tools/xl/Makefile | 2 +- tools/xl/xl.h | 3 + tools/xl/xl_cmdtable.c | 15 tools/xl/xl_vsnd.c | 203 + 4 files changed, 222 insertions(+), 1 deletion(-) create mode 100644 tools/xl/xl_vsnd.c diff --git a/tools/xl/Makefile b/tools/xl/Makefile index a5117ab..66bdbde 100644 --- a/tools/xl/Makefile +++ b/tools/xl/Makefile @@ -22,7 +22,7 @@ XL_OBJS += xl_vtpm.o xl_block.o xl_nic.o xl_usb.o XL_OBJS += xl_sched.o xl_pci.o xl_vcpu.o xl_cdrom.o xl_mem.o XL_OBJS += xl_info.o xl_console.o xl_misc.o XL_OBJS += xl_vmcontrol.o xl_saverestore.o xl_migrate.o -XL_OBJS += xl_vdispl.o +XL_OBJS += xl_vdispl.o xl_vsnd.o $(XL_OBJS): CFLAGS += $(CFLAGS_libxentoollog) $(XL_OBJS): CFLAGS += $(CFLAGS_XL) diff --git a/tools/xl/xl.h b/tools/xl/xl.h index 31d660b..703caa6 100644 --- a/tools/xl/xl.h +++ b/tools/xl/xl.h @@ -170,6 +170,9 @@ int main_vtpmdetach(int argc, char **argv); int main_vdisplattach(int argc, char **argv); int main_vdispllist(int argc, char **argv); int main_vdispldetach(int argc, char **argv); +int main_vsndattach(int argc, char **argv); +int main_vsndlist(int argc, char **argv); +int main_vsnddetach(int argc, char **argv); int main_usbctrl_attach(int argc, char **argv); int main_usbctrl_detach(int argc, char **argv); int main_usbdev_attach(int argc, char **argv); diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c index c304a85..8e162ce 100644 --- a/tools/xl/xl_cmdtable.c +++ b/tools/xl/xl_cmdtable.c @@ -397,6 +397,21 @@ struct cmd_spec cmd_table[] = { "Destroy a domain's virtual display device", " ", }, +{ "vsnd-attach", + &main_vsndattach, 1, 1, + "Create a new virtual sound device", + " ...", +}, +{ "vsnd-list", + &main_vsndlist, 0, 0, + "List virtual sound devices for a domain", + " ", +}, +{ "vsnd-detach", + &main_vsnddetach, 0, 1, + "Destroy a domain's virtual sound device", + " ", +}, { "uptime", &main_uptime, 0, 0, "Print uptime for 
all/some domains", diff --git a/tools/xl/xl_vsnd.c b/tools/xl/xl_vsnd.c new file mode 100644 index 000..41ee0ba --- /dev/null +++ b/tools/xl/xl_vsnd.c @@ -0,0 +1,203 @@ +/* + * Copyright (C) 2016 EPAM Systems Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU Lesser General Public License as published + * by the Free Software Foundation; version 2.1 only. with the special + * exception on linking described in file LICENSE. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU Lesser General Public License for more details. + */ + +#include <stdlib.h> + +#include <libxl.h> +#include <libxl_utils.h> +#include <libxlutil.h> + +#include "xl.h" +#include "xl_utils.h" +#include "xl_parse.h" + +int main_vsndattach(int argc, char **argv) +{ +int opt; +int rc; +uint32_t domid; +libxl_device_vsnd vsnd; + +SWITCH_FOREACH_OPT(opt, "", NULL, "vsnd-attach", 2) { +/* No options */ +} + +libxl_device_vsnd_init(&vsnd); +domid = find_domain(argv[optind++]); + +for (argv += optind, argc -= optind; argc > 0; ++argv, --argc) { +rc = parse_vsnd_item(&vsnd, *argv); +if (rc) goto out; +} + +if (dryrun_only) { +char *json = libxl_device_vsnd_to_json(ctx, &vsnd); +printf("vsnd: %s\n", json); +free(json); +goto out; +} + +if (libxl_device_vsnd_add(ctx, domid, &vsnd, 0)) { +fprintf(stderr, "libxl_device_vsnd_add failed.\n"); +rc = ERROR_FAIL; goto out; +} + +rc = 0; + +out: +libxl_device_vsnd_dispose(&vsnd); +return rc; +} + +static void print_params(libxl_vsnd_params *params) +{ +int i; + +if (params->channels_min) { +printf(", channels-min: %u", params->channels_min); +} + +if (params->channels_max) { +printf(", channels-max: %u", params->channels_max); +} + +if (params->buffer_size) { +printf(", buffer-size: %u", params->buffer_size); +} + +if (params->num_sample_rates) { +printf(", sample-rates: "); +for (i = 0; i < params->num_sample_rates - 
1; i++) { +printf("%u;", params->sample_rates[i]); +} +printf("%u", params->sample_rates[i]); +} + +if (params->num_sample_formats) { +printf(", sample-formats: "); +for (i = 0; i < params->num_sample_formats - 1; i++) { +printf("%s;", libxl_vsnd_pcm_format_to_string(params->sample_formats[i])); +} +printf("%s", libxl_vsnd_pcm_format_to_string(params->sample_formats[i])); +
[Xen-devel] [PATCH v1 2/5] libxl: add vsnd list and info
From: Oleksandr Grytsov

Add getting vsnd list and info API Signed-off-by: Oleksandr Grytsov --- tools/libxl/libxl.h | 10 ++ tools/libxl/libxl_types.idl | 19 +++ tools/libxl/libxl_utils.h | 3 + tools/libxl/libxl_vsnd.c| 375 +++- 4 files changed, 404 insertions(+), 3 deletions(-) diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h index 7200d49..acb73ce 100644 --- a/tools/libxl/libxl.h +++ b/tools/libxl/libxl.h @@ -1927,6 +1927,16 @@ int libxl_device_vsnd_destroy(libxl_ctx *ctx, uint32_t domid, const libxl_asyncop_how *ao_how) LIBXL_EXTERNAL_CALLERS_ONLY; +libxl_device_vsnd *libxl_device_vsnd_list(libxl_ctx *ctx, + uint32_t domid, int *num) + LIBXL_EXTERNAL_CALLERS_ONLY; +void libxl_device_vsnd_list_free(libxl_device_vsnd* list, int num) + LIBXL_EXTERNAL_CALLERS_ONLY; +int libxl_device_vsnd_getinfo(libxl_ctx *ctx, uint32_t domid, + libxl_device_vsnd *vsnd, + libxl_vsndinfo *vsndlinfo) + LIBXL_EXTERNAL_CALLERS_ONLY; + /* Keyboard */ int libxl_device_vkb_add(libxl_ctx *ctx, uint32_t domid, libxl_device_vkb *vkb, const libxl_asyncop_how *ao_how) diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl index aa30196..553e724 100644 --- a/tools/libxl/libxl_types.idl +++ b/tools/libxl/libxl_types.idl @@ -988,6 +988,25 @@ libxl_vdisplinfo = Struct("vdisplinfo", [ ("connectors", Array(libxl_connectorinfo, "num_connectors")) ], dir=DIR_OUT) +libxl_streaminfo = Struct("streaminfo", [ +("req_evtch", integer), +("req_rref", integer) +]) + +libxl_pcminfo = Struct("pcminfo", [ +("streams", Array(libxl_streaminfo, "num_vsnd_streams")) +]) + +libxl_vsndinfo = Struct("vsndinfo", [ +("backend", string), +("backend_id", uint32), +("frontend", string), +("frontend_id", uint32), +("devid", libxl_devid), +("state", integer), +("pcms", Array(libxl_pcminfo, "num_vsnd_pcms")) +]) + # NUMA node characteristics: size and free are how much memory it has, and how # much of it is free, respectively. dists is an array of distances from this # node to each other node. 
diff --git a/tools/libxl/libxl_utils.h b/tools/libxl/libxl_utils.h index 9e743dc..5455752 100644 --- a/tools/libxl/libxl_utils.h +++ b/tools/libxl/libxl_utils.h @@ -82,6 +82,9 @@ int libxl_devid_to_device_usbctrl(libxl_ctx *ctx, uint32_t domid, int libxl_devid_to_device_vdispl(libxl_ctx *ctx, uint32_t domid, int devid, libxl_device_vdispl *vdispl); +int libxl_devid_to_device_vsnd(libxl_ctx *ctx, uint32_t domid, + int devid, libxl_device_vsnd *vsnd); + int libxl_ctrlport_to_device_usbdev(libxl_ctx *ctx, uint32_t domid, int ctrl, int port, libxl_device_usbdev *usbdev); diff --git a/tools/libxl/libxl_vsnd.c b/tools/libxl/libxl_vsnd.c index 99e4be3..35f1aed 100644 --- a/tools/libxl/libxl_vsnd.c +++ b/tools/libxl/libxl_vsnd.c @@ -37,22 +37,247 @@ static int libxl__device_from_vsnd(libxl__gc *gc, uint32_t domid, return 0; } +static int libxl__sample_rates_from_string(libxl__gc *gc, const char *str, + libxl_vsnd_params *params) +{ +char *tmp = libxl__strdup(gc, str); + +params->num_sample_rates = 0; +params->sample_rates = NULL; + +char *p = strtok(tmp, " ,"); + +while (p != NULL) { +params->sample_rates = realloc(params->sample_rates, + sizeof(*params->sample_rates) * + (params->num_sample_rates + 1)); +params->sample_rates[params->num_sample_rates++] = strtoul(p, NULL, 0); +p = strtok(NULL, " ,"); +} + +return 0; +} + +static int libxl__sample_formats_from_string(libxl__gc *gc, const char *str, + libxl_vsnd_params *params) +{ +int rc; +char *tmp = libxl__strdup(gc, str); + +params->num_sample_formats = 0; +params->sample_formats = NULL; + +char *p = strtok(tmp, " ,"); + +while (p != NULL) { +params->sample_formats = realloc(params->sample_formats, + sizeof(*params->sample_formats) * + (params->num_sample_formats + 1)); + +libxl_vsnd_pcm_format format; + +rc = libxl_vsnd_pcm_format_from_string(p, ); +if (rc) goto out; + +params->sample_formats[params->num_sample_formats++] = format; +p = strtok(NULL, " ,"); +} + +rc = 0; + +out: + +return rc; +} + +static int
Re: [Xen-devel] Commit moratorium to staging
Hi Ian, Thank you for the detailed e-mail. On 11/01/2017 02:07 PM, Ian Jackson wrote: So, investigations (mostly by Roger, and also a bit of archaeology in the osstest db by me) have determined: * This bug is 100% reproducible on affected hosts. The repro is to boot the Windows guest, save/restore it, then migrate it, then shut down. (This is from an IRL conversation with Roger and may not be 100% accurate. Roger, please correct me.) * Affected hosts differ from unaffected hosts according to cpuid. Roger has repro'd the bug on an unaffected host by masking out certain cpuid bits. There are 6 implicated bits and he is working to narrow that down. * It seems likely that this is therefore a real bug. Maybe in Xen and perhaps indeed one that should indeed be a release blocker. * But this is not a regresson between master and staging. It affects many osstest branches apparently equally. * This test is, effectively, new: before the osstest change "HostDiskRoot: bump to 20G", these jobs would always fail earlier and the affected step would not be run. * The passes we got on various osstest branches before were just because those branches hadn't tested on an affected host yet. As branches test different hosts, they will stick on affected hosts. ISTM that this situation would therefore justify a force push. We have established that this bug is very unlikely to be anything to do with the commits currently blocked by the failing pushes. Furthermore, the test is not intermittent, so a force push will be effective in the following sense: we would only get a "spurious" pass, resulting in the relevant osstest branch becoming stuck again, if a future test was unlucky and got an unaffected host. That will happen infrequently enough. I am not entirely sure to understand this paragraph. Are you saying that osstest will not get stuck if we get a "spurious" pass on some hardware in the future? Or will we need another force push? 
So unless anyone objects (and for xen.git#master, with Julien's permission), I intend to force push all affected osstest branches when the test report shows the only blockage is ws16 and/or win10 tests failing the "guest-stop" step. This is not only blocking xen.git#master but also blocking other trees: - linux-linus - linux-4.9 Cheers, -- Julien Grall
[Xen-devel] [PATCH for-4.10] xen/x86: p2m-pod: Prevent infinite loop when shattering 1GB pages
The PoD subsystem only has pools of 4KB and 2MB pages. When it comes across a 1GB mapping, the mapping will be split into 2MB ones using p2m_set_entry, and the caller is asked to retry (see ept_get_entry for instance). p2m_set_entry may fail to shatter if it is not possible to allocate memory for the new page table. However, the error is not propagated, resulting in the callers retrying the PoD infinitely. Prevent the infinite loop by returning false when it is not possible to shatter the 1GB mapping.

Signed-off-by: Julien Grall --- This is a potential candidate for backport and for Xen 4.10. Without it, there is a potential infinite loop if memory is exhausted. --- xen/arch/x86/mm/p2m-pod.c | 8 +--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c index 0a811ccf28..69269a0bd1 100644 --- a/xen/arch/x86/mm/p2m-pod.c +++ b/xen/arch/x86/mm/p2m-pod.c @@ -1103,6 +1103,8 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, gfn_t gfn, */ if ( order == PAGE_ORDER_1G ) { +int rc; + pod_unlock(p2m); /* * Note that we are supposed to call p2m_set_entry() 512 times to @@ -1113,9 +1115,9 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, gfn_t gfn, * NOTE: In a fine-grained p2m locking scenario this operation * may need to promote its locking from gfn->1g superpage */ -p2m_set_entry(p2m, gfn_aligned, INVALID_MFN, PAGE_ORDER_2M, - p2m_populate_on_demand, p2m->default_access); -return true; +rc = p2m_set_entry(p2m, gfn_aligned, INVALID_MFN, PAGE_ORDER_2M, + p2m_populate_on_demand, p2m->default_access); +return !rc; } /* Only reclaim if we're in actual need of more cache. */ -- 2.11.0
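The failure mode described in the commit message can be sketched in isolation. The snippet below is a hypothetical model, not the actual Xen p2m code: a "populate" callback that unconditionally reports "retry" spins forever once the underlying allocation keeps failing, while one that propagates the failure (the `return !rc` pattern of this patch) lets the caller bail out.

```c
#include <stdbool.h>

/* Hypothetical model of the PoD retry loop; all names are illustrative. */

bool out_of_memory = true;   /* simulate an exhausted page-table allocator */

/* Stand-in for p2m_set_entry(): 0 on success, -1 on allocation failure. */
int set_entry_shatter(void)
{
    return out_of_memory ? -1 : 0;
}

/* Buggy pattern: ignore the return code and always ask the caller to retry. */
bool demand_populate_buggy(void)
{
    (void)set_entry_shatter();
    return true;
}

/* Fixed pattern (as in the patch): propagate the failure as "do not retry". */
bool demand_populate_fixed(void)
{
    int rc = set_entry_shatter();
    return !rc;
}

/* Caller retry loop, bounded so the buggy variant cannot hang this demo.
 * Returns 0 on success, -1 when populate reports failure, and -2 when the
 * iteration budget runs out, i.e. the loop would spin forever. */
int retry_loop(bool (*populate)(void), int max_iters)
{
    while (max_iters--) {
        if (!populate())
            return -1;            /* failure propagated: stop retrying */
        if (!out_of_memory)
            return 0;             /* the shatter would eventually succeed */
    }
    return -2;                    /* still spinning: the infinite loop */
}
```

With memory permanently exhausted, the buggy variant burns its whole iteration budget while the fixed one fails fast on the first attempt.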
Re: [Xen-devel] [RFC] ARM: New (Xen) VGIC design document
Hi Stefano, On 01/11/17 01:58, Stefano Stabellini wrote: > On Wed, 11 Oct 2017, Andre Przywara wrote: many thanks for going through all of this! >> (CC:ing some KVM/ARM folks involved in the VGIC) >> >> starting with the addition of the ITS support we were seeing more and >> more issues with the current implementation of our ARM Generic Interrupt >> Controller (GIC) emulation, the VGIC. >> Among other approaches to fix those issues it was proposed to copy the >> VGIC emulation used in KVM. This one was suffering from very similar >> issues, and a clean design from scratch lead to a very robust and >> capable re-implementation. Interestingly this implementation is fairly >> self-contained, so it seems feasible to copy it. Hopefully we only need >> minor adjustments, possibly we can even copy it verbatim with some >> additional glue layer code. >> >> Stefano asked for getting a design overview, to assess the feasibility >> of copying the KVM code without reviewing tons of code in the first >> place. >> So to follow Xen rules for new features, this design document below is >> an attempt to describe the current KVM VGIC design - in a hypervisor >> agnostic session. It is a bit of a retro-fit design description, as it >> is not strictly forward-looking only, but actually describing the >> existing implemenation [1]. >> >> Please have a look and let me know: >> 1) if this document has the right scope >> 2) if this document has the right level of detail >> 3) if there are points missing from the document >> 3) if the design in general is a fit > > Please read the following statements as genuine questions and concerns. > Most ideas on this document are good. Some of them I have even suggested > them myself in the context of GIC improvements for Xen. I asked for a > couple of clarifications. > > But I don't see why we cannot implement these ideas on top of the > existing code, rather than with a separate codebase, ending up with two > drivers. 
I would prefer a natual evolution. Specifically, the following > improvements would be simple and would give us most of the benefits on > top of the current codebase: > - adding the irq lock, and the refcount > - taking both vcpu locks when necessary (on migration code for example > it would help a lot), the lower vcpu_id first > - level irq emulation I think some of those points you mentioned are not easily implemented in the current Xen. For instance I ran into locking order issues with those *two* inflight and lr_queue lists, when trying to implement the lock and the refcount. Also this "put vIRQs into LRs early, but possibly rip them out again" is really complicating things a lot. I believe only level IRQs could be added in a relatively straight forward manner. So the problem with the evolutionary approach is that it generates a lot of patches, some of them quite invasive, others creating hard-to-read diffs, which are both hard to review. And chances are that the actual result would be pretty close to the KVM code. To be clear: I hacked the Xen VGIC into the KVM direction in a few days some months ago, but it took me *weeks* to make sane patches of only the first part of it. And this would not cover all those general, tedious corner cases that the VGIC comes with. Those would need to be fixed in a painful process, which we could avoid by "lifting" the KVM code. > If we do end up with a second separate driver for technical or process > reasons, I would expect the regular Xen submission/review process to be > followed. The code style will be different, the hooks into the rest of > the hypervisors will be different and things will be generally changed. > The new V/GIC might be derived from KVM, but it should end up looking > and feeling like a 100% genuine Xen component. After all, we'll > maintain it going forward. I don't want a copy of a Linux driver with > glue code. 
The Xen community cannot be expected not to review the > submission, but if we review it, then we'll ask for changes. Once we > change the code, there will be no point in keeping the Linux code > separate with glue code. We should fully adapt it to Xen. I see your point, and this actually simplifies *my* work, but I am a bit worried about the effects of having two separate implementations which then diverge over time. In the moment we have two separate implementations as well, but they are quite different, which has the advantage of doing things differently enough to help in finding bugs in the other one (something we should actually exploit in testing, I believe). So how is your feeling towards some shared "libvgic"? I understand that people are not too happy about that extra maintenance cost of having a separate repository, but I am curious what your, Marc's and Christoffer's take is on this idea. > That is what was done in the past when KVM took code from Xen (for > example async shadow pagetables). I am eager to avoid a situation like > the current SMMU driver in Xen, which comes from
[Xen-devel] [linux-linus test] 115459: regressions - FAIL
flight 115459 linux-linus real [real] http://logs.test-lab.xenproject.org/osstest/logs/115459/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail REGR. vs. 114682 Tests which did not succeed, but are not blocking: test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail like 114682 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 114682 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 114682 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail like 114682 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 114682 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 114682 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 114682 test-armhf-armhf-libvirt 14 saverestore-support-checkfail like 114682 test-amd64-amd64-libvirt 13 migrate-support-checkfail never pass test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail never pass test-amd64-i386-libvirt-xsm 13 migrate-support-checkfail never pass test-amd64-i386-libvirt 13 migrate-support-checkfail never pass test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail never pass test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail never pass test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail never pass test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass test-armhf-armhf-xl-credit2 13 migrate-support-checkfail never pass test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass test-armhf-armhf-xl-credit2 14 saverestore-support-checkfail never pass test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail never pass test-armhf-armhf-xl 13 
migrate-support-checkfail never pass test-armhf-armhf-xl 14 saverestore-support-checkfail never pass test-armhf-armhf-xl-vhd 12 migrate-support-checkfail never pass test-armhf-armhf-xl-vhd 13 saverestore-support-checkfail never pass test-armhf-armhf-xl-arndale 13 migrate-support-checkfail never pass test-armhf-armhf-xl-arndale 14 saverestore-support-checkfail never pass test-armhf-armhf-xl-xsm 13 migrate-support-checkfail never pass test-armhf-armhf-xl-xsm 14 saverestore-support-checkfail never pass test-armhf-armhf-libvirt 13 migrate-support-checkfail never pass test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass test-armhf-armhf-xl-rtds 13 migrate-support-checkfail never pass test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail never pass test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail never pass test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail never pass version targeted for testing: linux287683d027a3ff83feb6c7044430c79881664ecf baseline version: linuxebe6e90ccc6679cb01d2b280e4b61e6092d4bedb Last test of basis 114682 2017-10-18 09:54:11 Z 14 days Failing since114781 2017-10-20 01:00:47 Z 12 days 21 attempts Testing same since 115459 2017-11-01 05:28:20 Z0 days1 attempts 423 people touched revisions under test, not listing them all jobs: build-amd64-xsm pass build-armhf-xsm pass build-i386-xsm pass build-amd64 pass build-armhf pass build-i386 pass build-amd64-libvirt pass build-armhf-libvirt pass build-i386-libvirt pass build-amd64-pvopspass build-armhf-pvops
Re: [Xen-devel] Commit moratorium to staging
So, investigations (mostly by Roger, and also a bit of archaeology in the osstest db by me) have determined:

* This bug is 100% reproducible on affected hosts. The repro is to boot the Windows guest, save/restore it, then migrate it, then shut down. (This is from an IRL conversation with Roger and may not be 100% accurate. Roger, please correct me.)

* Affected hosts differ from unaffected hosts according to cpuid. Roger has repro'd the bug on an unaffected host by masking out certain cpuid bits. There are 6 implicated bits and he is working to narrow that down.

* It seems likely that this is therefore a real bug, maybe in Xen, and perhaps one that should indeed be a release blocker.

* But this is not a regression between master and staging. It affects many osstest branches apparently equally.

* This test is, effectively, new: before the osstest change "HostDiskRoot: bump to 20G", these jobs would always fail earlier and the affected step would not be run.

* The passes we got on various osstest branches before were just because those branches hadn't tested on an affected host yet. As branches test different hosts, they will stick on affected hosts.

ISTM that this situation would therefore justify a force push. We have established that this bug is very unlikely to be anything to do with the commits currently blocked by the failing pushes. Furthermore, the test is not intermittent, so a force push will be effective in the following sense: we would only get a "spurious" pass, resulting in the relevant osstest branch becoming stuck again, if a future test was unlucky and got an unaffected host. That will happen infrequently enough.

So unless anyone objects (and for xen.git#master, with Julien's permission), I intend to force push all affected osstest branches when the test report shows the only blockage is ws16 and/or win10 tests failing the "guest-stop" step.

Opinions?

Ian.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
[Xen-devel] [PATCH v3 for-next 2/4] xen/arm32: mm: Rework is_xen_heap_page to avoid nameclash
The arm32 version of the function is_xen_heap_page currently defines a variable _mfn. This will lead to a compiler error when using typesafe MFN in a follow-up patch:

called object '_mfn' is not a function or function pointer

Fix it by renaming the local variable _mfn to mfn_.

Signed-off-by: Julien Grall
---
Cc: Stefano Stabellini

Changes in v3:
- Fix typo in the commit message
---
 xen/include/asm-arm/mm.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index cd6dfb54b9..737a429409 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -140,9 +140,9 @@ extern vaddr_t xenheap_virt_start;
 #ifdef CONFIG_ARM_32
 #define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page))
 #define is_xen_heap_mfn(mfn) ({ \
-unsigned long _mfn = (mfn); \
-(_mfn >= mfn_x(xenheap_mfn_start) &&\
- _mfn < mfn_x(xenheap_mfn_end));\
+unsigned long mfn_ = (mfn); \
+(mfn_ >= mfn_x(xenheap_mfn_start) &&\
+ mfn_ < mfn_x(xenheap_mfn_end));\
 })
 #else
 #define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
-- 
2.11.0

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
[Xen-devel] [PATCH v3 for-next 0/4] xen: Convert __page_to_mfn and _mfn_to_page to use typesafe MFN
Hi all,

Most of the users of page_to_mfn and mfn_to_page are either overriding the macros to make them work with mfn_t or use mfn_x/_mfn because the rest of the function uses mfn_t. So I think it is time to make __page_to_mfn and __mfn_to_page use typesafe MFN.

The first 3 patches will convert most of the code to use typesafe MFN, easing the tree-wide conversion in patch 4.

Note that this was only build tested on x86.

Cheers,

Cc: Andrew Cooper
Cc: Boris Ostrovsky
Cc: Gang Wei
Cc: George Dunlap
Cc: George Dunlap
Cc: Ian Jackson
Cc: Jan Beulich
Cc: Julien Grall
Cc: Jun Nakajima
Cc: Kevin Tian
Cc: Konrad Rzeszutek Wilk
Cc: Paul Durrant
Cc: Razvan Cojocaru
Cc: Shane Wang
Cc: Stefano Stabellini
Cc: Suravee Suthikulpanit
Cc: Tamas K Lengyel
Cc: Tim Deegan
Cc: Wei Liu

Julien Grall (4):
  xen/arm: domain_build: Clean-up insert_11_bank
  xen/arm32: mm: Rework is_xen_heap_page to avoid nameclash
  xen/tmem: Convert the file common/tmem_xen.c to use typesafe MFN
  xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN

 xen/arch/arm/domain_build.c | 15 ---
 xen/arch/arm/kernel.c | 2 +-
 xen/arch/arm/mem_access.c | 2 +-
 xen/arch/arm/mm.c | 8
 xen/arch/arm/p2m.c | 10 ++
 xen/arch/x86/cpu/vpmu.c | 4 ++--
 xen/arch/x86/domain.c | 21 +++--
 xen/arch/x86/domain_page.c | 6 +++---
 xen/arch/x86/domctl.c | 2 +-
 xen/arch/x86/hvm/dm.c | 2 +-
 xen/arch/x86/hvm/dom0_build.c | 6 +++---
 xen/arch/x86/hvm/emulate.c | 6 +++---
 xen/arch/x86/hvm/hvm.c | 16
 xen/arch/x86/hvm/ioreq.c | 6 +++---
 xen/arch/x86/hvm/stdvga.c | 2 +-
 xen/arch/x86/hvm/svm/svm.c | 4 ++--
 xen/arch/x86/hvm/viridian.c | 6 +++---
 xen/arch/x86/hvm/vmx/vmcs.c | 2 +-
 xen/arch/x86/hvm/vmx/vmx.c | 10 +-
 xen/arch/x86/hvm/vmx/vvmx.c | 6 +++---
 xen/arch/x86/mm.c | 6 --
 xen/arch/x86/mm/guest_walk.c | 6 +++---
 xen/arch/x86/mm/hap/guest_walk.c | 2 +-
 xen/arch/x86/mm/hap/hap.c | 6 --
 xen/arch/x86/mm/hap/nested_ept.c | 2 +-
 xen/arch/x86/mm/mem_sharing.c | 5 -
 xen/arch/x86/mm/p2m-ept.c | 4
 xen/arch/x86/mm/p2m-pod.c | 6 --
 xen/arch/x86/mm/p2m.c | 6 --
 xen/arch/x86/mm/paging.c | 6 --
 xen/arch/x86/mm/shadow/private.h | 16 ++--
 xen/arch/x86/numa.c | 2 +-
 xen/arch/x86/physdev.c | 2 +-
 xen/arch/x86/pv/callback.c | 6 --
 xen/arch/x86/pv/descriptor-tables.c | 10 --
 xen/arch/x86/pv/dom0_build.c | 6 ++
 xen/arch/x86/pv/domain.c | 6 --
 xen/arch/x86/pv/emul-gate-op.c | 6 --
 xen/arch/x86/pv/emul-priv-op.c | 10 --
 xen/arch/x86/pv/grant_table.c | 6 --
 xen/arch/x86/pv/ro-page-fault.c | 6 --
 xen/arch/x86/smpboot.c | 6 --
 xen/arch/x86/tboot.c | 4 ++--
 xen/arch/x86/traps.c | 4 ++--
 xen/arch/x86/x86_64/mm.c | 6 ++
 xen/common/domain.c | 4 ++--
 xen/common/grant_table.c | 6 ++
 xen/common/kimage.c | 6 --
 xen/common/memory.c | 6 ++
 xen/common/page_alloc.c | 6 ++
 xen/common/tmem.c | 2 +-
 xen/common/tmem_xen.c | 26 ++
 xen/common/trace.c | 6 ++
 xen/common/vmap.c | 9 +
 xen/common/xenoprof.c | 2 --
 xen/drivers/passthrough/amd/iommu_map.c | 6 ++
 xen/drivers/passthrough/iommu.c | 2 +-
 xen/drivers/passthrough/x86/iommu.c | 2 +-
 xen/include/asm-arm/mm.h | 22 --
 xen/include/asm-arm/p2m.h | 4 ++--
 xen/include/asm-x86/mm.h | 12 ++--
 xen/include/asm-x86/p2m.h | 2 +-
 xen/include/asm-x86/page.h | 32
 xen/include/xen/domain_page.h | 8
 xen/include/xen/tmem_xen.h
[Xen-devel] [PATCH v3 for-next 4/4] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN
Most of the users of page_to_mfn and mfn_to_page are either overriding the macros to make them work with mfn_t or use mfn_x/_mfn because the rest of the function uses mfn_t. So make __page_to_mfn and __mfn_to_page return mfn_t by default.

Only reasonable clean-ups are done in this patch because it is already quite big. So some of the files now override page_to_mfn and mfn_to_page to avoid using mfn_t.

Lastly, domain_page_to_mfn is also converted to use mfn_t given that most of the callers are now switched to _mfn(domain_page_to_mfn(...)).

Signed-off-by: Julien Grall
---
Andrew suggested to drop IS_VALID_PAGE in xen/tmem_xen.h. His comment was:

"/sigh This is tautological. The definition of a "valid mfn" in this case is one for which we have frametable entry, and by having a struct page_info in our hands, this is by definition true (unless you have a wild pointer, at which point your bug is elsewhere). IS_VALID_PAGE() is only ever used in assertions and never usefully, so instead I would remove it entirely rather than trying to fix it up."

I can remove the function in a separate patch at the beginning of the series if Konrad (TMEM maintainer) is happy with that.

Cc: Stefano Stabellini
Cc: Julien Grall
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Ian Jackson
Cc: Jan Beulich
Cc: Konrad Rzeszutek Wilk
Cc: Tim Deegan
Cc: Wei Liu
Cc: Razvan Cojocaru
Cc: Tamas K Lengyel
Cc: Paul Durrant
Cc: Boris Ostrovsky
Cc: Suravee Suthikulpanit
Cc: Jun Nakajima
Cc: Kevin Tian
Cc: George Dunlap
Cc: Gang Wei
Cc: Shane Wang

Changes in v3:
- Rebase on the latest staging and fix some conflicts. Tags haven't been retained.
- Switch the printf format to PRI_mfn

Changes in v2:
- Some parts have been moved to separate patches
- Remove one spurious comment
- Convert domain_page_to_mfn to use mfn_t
---
 xen/arch/arm/domain_build.c | 2 --
 xen/arch/arm/kernel.c | 2 +-
 xen/arch/arm/mem_access.c | 2 +-
 xen/arch/arm/mm.c | 8
 xen/arch/arm/p2m.c | 10 ++
 xen/arch/x86/cpu/vpmu.c | 4 ++--
 xen/arch/x86/domain.c | 21 +++--
 xen/arch/x86/domain_page.c | 6 +++---
 xen/arch/x86/domctl.c | 2 +-
 xen/arch/x86/hvm/dm.c | 2 +-
 xen/arch/x86/hvm/dom0_build.c | 6 +++---
 xen/arch/x86/hvm/emulate.c | 6 +++---
 xen/arch/x86/hvm/hvm.c | 16
 xen/arch/x86/hvm/ioreq.c | 6 +++---
 xen/arch/x86/hvm/stdvga.c | 2 +-
 xen/arch/x86/hvm/svm/svm.c | 4 ++--
 xen/arch/x86/hvm/viridian.c | 6 +++---
 xen/arch/x86/hvm/vmx/vmcs.c | 2 +-
 xen/arch/x86/hvm/vmx/vmx.c | 10 +-
 xen/arch/x86/hvm/vmx/vvmx.c | 6 +++---
 xen/arch/x86/mm.c | 6 --
 xen/arch/x86/mm/guest_walk.c | 6 +++---
 xen/arch/x86/mm/hap/guest_walk.c | 2 +-
 xen/arch/x86/mm/hap/hap.c | 6 --
 xen/arch/x86/mm/hap/nested_ept.c | 2 +-
 xen/arch/x86/mm/mem_sharing.c | 5 -
 xen/arch/x86/mm/p2m-ept.c | 4
 xen/arch/x86/mm/p2m-pod.c | 6 --
 xen/arch/x86/mm/p2m.c | 6 --
 xen/arch/x86/mm/paging.c | 6 --
 xen/arch/x86/mm/shadow/private.h | 16 ++--
 xen/arch/x86/numa.c | 2 +-
 xen/arch/x86/physdev.c | 2 +-
 xen/arch/x86/pv/callback.c | 6 --
 xen/arch/x86/pv/descriptor-tables.c | 10 --
 xen/arch/x86/pv/dom0_build.c | 6 ++
 xen/arch/x86/pv/domain.c | 6 --
 xen/arch/x86/pv/emul-gate-op.c | 6 --
 xen/arch/x86/pv/emul-priv-op.c | 10 --
 xen/arch/x86/pv/grant_table.c | 6 --
 xen/arch/x86/pv/ro-page-fault.c | 6 --
 xen/arch/x86/smpboot.c | 6 --
 xen/arch/x86/tboot.c | 4 ++--
 xen/arch/x86/traps.c | 4 ++--
 xen/arch/x86/x86_64/mm.c | 6 ++
 xen/common/domain.c | 4 ++--
 xen/common/grant_table.c | 6 ++
 xen/common/kimage.c | 6 --
 xen/common/memory.c | 6 ++
 xen/common/page_alloc.c |
[Xen-devel] [PATCH v3 for-next 3/4] xen/tmem: Convert the file common/tmem_xen.c to use typesafe MFN
The file common/tmem_xen.c is now converted to use typesafe MFN. This requires overriding the macro page_to_mfn to make it work with mfn_t.

Note that all variables converted to mfn_t have their initial value, when set, switched from 0 to INVALID_MFN. This is fine because the initial values were always overridden before use.

Also add a couple of missing newlines suggested by Andrew in the code.

Signed-off-by: Julien Grall
Reviewed-by: Andrew Cooper
---
Cc: Konrad Rzeszutek Wilk

Changes in v2:
- Add missing newlines
- Add Andrew's reviewed-by
---
 xen/common/tmem_xen.c | 30 ++
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index 20f74b268f..bd52e44faf 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -14,6 +14,10 @@
 #include
 #include
 
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef page_to_mfn
+#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
+
 bool __read_mostly opt_tmem;
 boolean_param("tmem", opt_tmem);
 
@@ -31,7 +35,7 @@ static DEFINE_PER_CPU_READ_MOSTLY(unsigned char *, dstmem);
 static DEFINE_PER_CPU_READ_MOSTLY(void *, scratch_page);
 
 #if defined(CONFIG_ARM)
-static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn,
+static inline void *cli_get_page(xen_pfn_t cmfn, mfn_t *pcli_mfn,
 struct page_info **pcli_pfp, bool cli_write)
 {
 ASSERT_UNREACHABLE();
@@ -39,14 +43,14 @@ static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn,
 }
 
 static inline void cli_put_page(void *cli_va, struct page_info *cli_pfp,
-unsigned long cli_mfn, bool mark_dirty)
+mfn_t cli_mfn, bool mark_dirty)
 {
 ASSERT_UNREACHABLE();
 }
 #else
 #include
 
-static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn,
+static inline void *cli_get_page(xen_pfn_t cmfn, mfn_t *pcli_mfn,
 struct page_info **pcli_pfp, bool cli_write)
 {
 p2m_type_t t;
@@ -68,16 +72,17 @@ static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn,
 *pcli_mfn = page_to_mfn(page);
 *pcli_pfp = page;
-return map_domain_page(_mfn(*pcli_mfn));
+
+return map_domain_page(*pcli_mfn);
 }
 
 static inline void cli_put_page(void *cli_va, struct page_info *cli_pfp,
-unsigned long cli_mfn, bool mark_dirty)
+mfn_t cli_mfn, bool mark_dirty)
 {
 if ( mark_dirty )
 {
 put_page_and_type(cli_pfp);
-paging_mark_dirty(current->domain, _mfn(cli_mfn));
+paging_mark_dirty(current->domain, cli_mfn);
 }
 else
 put_page(cli_pfp);
@@ -88,14 +93,14 @@ static inline void cli_put_page(void *cli_va, struct page_info *cli_pfp,
 int tmem_copy_from_client(struct page_info *pfp, xen_pfn_t cmfn,
 tmem_cli_va_param_t clibuf)
 {
-unsigned long tmem_mfn, cli_mfn = 0;
+mfn_t tmem_mfn, cli_mfn = INVALID_MFN;
 char *tmem_va, *cli_va = NULL;
 struct page_info *cli_pfp = NULL;
 int rc = 1;
 
 ASSERT(pfp != NULL);
 tmem_mfn = page_to_mfn(pfp);
-tmem_va = map_domain_page(_mfn(tmem_mfn));
+tmem_va = map_domain_page(tmem_mfn);
 if ( guest_handle_is_null(clibuf) )
 {
 cli_va = cli_get_page(cmfn, _mfn, _pfp, 0);
@@ -125,7 +130,7 @@ int tmem_compress_from_client(xen_pfn_t cmfn,
 unsigned char *wmem = this_cpu(workmem);
 char *scratch = this_cpu(scratch_page);
 struct page_info *cli_pfp = NULL;
-unsigned long cli_mfn = 0;
+mfn_t cli_mfn = INVALID_MFN;
 void *cli_va = NULL;
 
 if ( dmem == NULL || wmem == NULL )
@@ -152,7 +157,7 @@ int tmem_compress_from_client(xen_pfn_t cmfn,
 int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
 tmem_cli_va_param_t clibuf)
 {
-unsigned long tmem_mfn, cli_mfn = 0;
+mfn_t tmem_mfn, cli_mfn = INVALID_MFN;
 char *tmem_va, *cli_va = NULL;
 struct page_info *cli_pfp = NULL;
 int rc = 1;
@@ -165,7 +170,8 @@ int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
 return -EFAULT;
 }
 tmem_mfn = page_to_mfn(pfp);
-tmem_va = map_domain_page(_mfn(tmem_mfn));
+tmem_va = map_domain_page(tmem_mfn);
+
 if ( cli_va )
 {
 memcpy(cli_va, tmem_va, PAGE_SIZE);
@@ -181,7 +187,7 @@ int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
 int tmem_decompress_to_client(xen_pfn_t cmfn, void *tmem_va,
 size_t size, tmem_cli_va_param_t clibuf)
 {
-unsigned long cli_mfn = 0;
+mfn_t cli_mfn = INVALID_MFN;
 struct page_info *cli_pfp = NULL;
 void *cli_va = NULL;
 char *scratch = this_cpu(scratch_page);
-- 
2.11.0

___
Xen-devel mailing list
[Xen-devel] [PATCH v3 for-next 1/4] xen/arm: domain_build: Clean-up insert_11_bank
- Remove spurious ()
- Add missing spaces
- Turn 1 << to 1UL <<
- Rename spfn to smfn and switch to mfn_t

Signed-off-by: Julien Grall
---
Cc: Stefano Stabellini

Changes in v2:
- Remove double space
- s/spfn/smfn/ and switch to mfn_t
---
 xen/arch/arm/domain_build.c | 17 ++---
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index bf29299707..5532068ab1 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -50,6 +50,8 @@ struct map_range_data
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef virt_to_mfn
 #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+#undef page_to_mfn
+#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
 
 //#define DEBUG_11_ALLOCATION
 #ifdef DEBUG_11_ALLOCATION
@@ -104,16 +106,16 @@ static bool insert_11_bank(struct domain *d,
unsigned int order)
 {
 int res, i;
-paddr_t spfn;
+mfn_t smfn;
 paddr_t start, size;
 
-spfn = page_to_mfn(pg);
-start = pfn_to_paddr(spfn);
-size = pfn_to_paddr((1 << order));
+smfn = page_to_mfn(pg);
+start = mfn_to_maddr(smfn);
+size = pfn_to_paddr(1UL << order);
 
 D11PRINT("Allocated %#"PRIpaddr"-%#"PRIpaddr" (%ldMB/%ldMB, order %d)\n",
  start, start + size,
- 1UL << (order+PAGE_SHIFT-20),
+ 1UL << (order + PAGE_SHIFT - 20),
  /* Don't want format this as PRIpaddr (16 digit hex) */
  (unsigned long)(kinfo->unassigned_mem >> 20), order);
@@ -126,7 +128,7 @@ static bool insert_11_bank(struct domain *d,
 goto fail;
 }
 
-res = guest_physmap_add_page(d, _gfn(spfn), _mfn(spfn), order);
+res = guest_physmap_add_page(d, _gfn(mfn_x(smfn)), smfn, order);
 if ( res )
 panic("Failed map pages to DOM0: %d", res);
@@ -167,7 +169,8 @@ static bool insert_11_bank(struct domain *d,
  */
 if ( start + size < bank->start && kinfo->mem.nr_banks < NR_MEM_BANKS )
 {
-memmove(bank + 1, bank, sizeof(*bank)*(kinfo->mem.nr_banks - i));
+memmove(bank + 1, bank,
+sizeof(*bank) * (kinfo->mem.nr_banks - i));
 kinfo->mem.nr_banks++;
 bank->start = start;
 bank->size = size;
-- 
2.11.0

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH v2] xen: support priv-mapping in an HVM tools domain
> -Original Message- > From: Juergen Gross [mailto:jgr...@suse.com] > Sent: 01 November 2017 13:40 > To: Paul Durrant; x...@kernel.org; xen- > de...@lists.xenproject.org; linux-ker...@vger.kernel.org > Cc: Boris Ostrovsky ; Thomas Gleixner > ; Ingo Molnar ; H. Peter Anvin > > Subject: Re: [PATCH v2] xen: support priv-mapping in an HVM tools domain > > On 01/11/17 12:31, Paul Durrant wrote: > > If the domain has XENFEAT_auto_translated_physmap then use of the PV- > > specific HYPERVISOR_mmu_update hypercall is clearly incorrect. > > > > This patch adds checks in xen_remap_domain_gfn_array() and > > xen_unmap_domain_gfn_array() which call through to the approprate > > xlate_mmu function if the feature is present. > > > > This patch also moves xen_remap_domain_gfn_range() into the PV-only > MMU > > code and #ifdefs the (only) calling code in privcmd accordingly. > > > > Signed-off-by: Paul Durrant > > --- > > Cc: Boris Ostrovsky > > Cc: Juergen Gross > > Cc: Thomas Gleixner > > Cc: Ingo Molnar > > Cc: "H. 
Peter Anvin" > > --- > > arch/x86/xen/mmu.c| 36 +--- > > arch/x86/xen/mmu_pv.c | 11 +++ > > drivers/xen/privcmd.c | 17 + > > include/xen/xen-ops.h | 7 +++ > > 4 files changed, 48 insertions(+), 23 deletions(-) > > > > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c > > index 3e15345abfe7..01837c36e293 100644 > > --- a/arch/x86/xen/mmu.c > > +++ b/arch/x86/xen/mmu.c > > @@ -91,12 +91,12 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, > pgtable_t token, > > return 0; > > } > > > > -static int do_remap_gfn(struct vm_area_struct *vma, > > - unsigned long addr, > > - xen_pfn_t *gfn, int nr, > > - int *err_ptr, pgprot_t prot, > > - unsigned domid, > > - struct page **pages) > > +int xen_remap_gfn(struct vm_area_struct *vma, > > + unsigned long addr, > > + xen_pfn_t *gfn, int nr, > > + int *err_ptr, pgprot_t prot, > > + unsigned int domid, > > + struct page **pages) > > { > > int err = 0; > > struct remap_data rmd; > > @@ -166,36 +166,34 @@ static int do_remap_gfn(struct vm_area_struct > *vma, > > return err < 0 ? err : mapped; > > } > > > > -int xen_remap_domain_gfn_range(struct vm_area_struct *vma, > > - unsigned long addr, > > - xen_pfn_t gfn, int nr, > > - pgprot_t prot, unsigned domid, > > - struct page **pages) > > -{ > > - return do_remap_gfn(vma, addr, , nr, NULL, prot, domid, > pages); > > -} > > -EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_range); > > - > > int xen_remap_domain_gfn_array(struct vm_area_struct *vma, > >unsigned long addr, > >xen_pfn_t *gfn, int nr, > >int *err_ptr, pgprot_t prot, > >unsigned domid, struct page **pages) > > { > > + if (xen_feature(XENFEAT_auto_translated_physmap)) > > + return xen_xlate_remap_gfn_array(vma, addr, gfn, nr, > err_ptr, > > +prot, domid, pages); > > + > > /* We BUG_ON because it's a programmer error to pass a NULL > err_ptr, > > * and the consequences later is quite hard to detect what the actual > > * cause of "wrong memory was mapped in". 
> > */ > > BUG_ON(err_ptr == NULL); > > - return do_remap_gfn(vma, addr, gfn, nr, err_ptr, prot, domid, > pages); > > + return xen_remap_gfn(vma, addr, gfn, nr, err_ptr, prot, domid, > > +pages); > > } > > EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_array); > > > > /* Returns: 0 success */ > > int xen_unmap_domain_gfn_range(struct vm_area_struct *vma, > > - int numpgs, struct page **pages) > > + int nr, struct page **pages) > > { > > - if (!pages || !xen_feature(XENFEAT_auto_translated_physmap)) > > + if (xen_feature(XENFEAT_auto_translated_physmap)) > > + return xen_xlate_unmap_gfn_range(vma, nr, pages); > > + > > + if (!pages) > > return 0; > > > > return -EINVAL; > > diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c > > index 71495f1a86d7..4974d8a6c2b4 100644 > > --- a/arch/x86/xen/mmu_pv.c > > +++ b/arch/x86/xen/mmu_pv.c > > @@ -2670,3 +2670,14 @@ phys_addr_t paddr_vmcoreinfo_note(void) > > return __pa(vmcoreinfo_note); > > } > > #endif /* CONFIG_KEXEC_CORE */ > > + > > +int xen_remap_domain_gfn_range(struct vm_area_struct *vma, > > + unsigned long addr, > > + xen_pfn_t
Re: [Xen-devel] [PATCH v2] xen: support priv-mapping in an HVM tools domain
On 01/11/17 12:31, Paul Durrant wrote: > If the domain has XENFEAT_auto_translated_physmap then use of the PV- > specific HYPERVISOR_mmu_update hypercall is clearly incorrect. > > This patch adds checks in xen_remap_domain_gfn_array() and > xen_unmap_domain_gfn_array() which call through to the approprate > xlate_mmu function if the feature is present. > > This patch also moves xen_remap_domain_gfn_range() into the PV-only MMU > code and #ifdefs the (only) calling code in privcmd accordingly. > > Signed-off-by: Paul Durrant> --- > Cc: Boris Ostrovsky > Cc: Juergen Gross > Cc: Thomas Gleixner > Cc: Ingo Molnar > Cc: "H. Peter Anvin" > --- > arch/x86/xen/mmu.c| 36 +--- > arch/x86/xen/mmu_pv.c | 11 +++ > drivers/xen/privcmd.c | 17 + > include/xen/xen-ops.h | 7 +++ > 4 files changed, 48 insertions(+), 23 deletions(-) > > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c > index 3e15345abfe7..01837c36e293 100644 > --- a/arch/x86/xen/mmu.c > +++ b/arch/x86/xen/mmu.c > @@ -91,12 +91,12 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t > token, > return 0; > } > > -static int do_remap_gfn(struct vm_area_struct *vma, > - unsigned long addr, > - xen_pfn_t *gfn, int nr, > - int *err_ptr, pgprot_t prot, > - unsigned domid, > - struct page **pages) > +int xen_remap_gfn(struct vm_area_struct *vma, > + unsigned long addr, > + xen_pfn_t *gfn, int nr, > + int *err_ptr, pgprot_t prot, > + unsigned int domid, > + struct page **pages) > { > int err = 0; > struct remap_data rmd; > @@ -166,36 +166,34 @@ static int do_remap_gfn(struct vm_area_struct *vma, > return err < 0 ? 
err : mapped; > } > > -int xen_remap_domain_gfn_range(struct vm_area_struct *vma, > -unsigned long addr, > -xen_pfn_t gfn, int nr, > -pgprot_t prot, unsigned domid, > -struct page **pages) > -{ > - return do_remap_gfn(vma, addr, , nr, NULL, prot, domid, pages); > -} > -EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_range); > - > int xen_remap_domain_gfn_array(struct vm_area_struct *vma, > unsigned long addr, > xen_pfn_t *gfn, int nr, > int *err_ptr, pgprot_t prot, > unsigned domid, struct page **pages) > { > + if (xen_feature(XENFEAT_auto_translated_physmap)) > + return xen_xlate_remap_gfn_array(vma, addr, gfn, nr, err_ptr, > + prot, domid, pages); > + > /* We BUG_ON because it's a programmer error to pass a NULL err_ptr, >* and the consequences later is quite hard to detect what the actual >* cause of "wrong memory was mapped in". >*/ > BUG_ON(err_ptr == NULL); > - return do_remap_gfn(vma, addr, gfn, nr, err_ptr, prot, domid, pages); > + return xen_remap_gfn(vma, addr, gfn, nr, err_ptr, prot, domid, > + pages); > } > EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_array); > > /* Returns: 0 success */ > int xen_unmap_domain_gfn_range(struct vm_area_struct *vma, > -int numpgs, struct page **pages) > +int nr, struct page **pages) > { > - if (!pages || !xen_feature(XENFEAT_auto_translated_physmap)) > + if (xen_feature(XENFEAT_auto_translated_physmap)) > + return xen_xlate_unmap_gfn_range(vma, nr, pages); > + > + if (!pages) > return 0; > > return -EINVAL; > diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c > index 71495f1a86d7..4974d8a6c2b4 100644 > --- a/arch/x86/xen/mmu_pv.c > +++ b/arch/x86/xen/mmu_pv.c > @@ -2670,3 +2670,14 @@ phys_addr_t paddr_vmcoreinfo_note(void) > return __pa(vmcoreinfo_note); > } > #endif /* CONFIG_KEXEC_CORE */ > + > +int xen_remap_domain_gfn_range(struct vm_area_struct *vma, > +unsigned long addr, > +xen_pfn_t gfn, int nr, > +pgprot_t prot, unsigned int domid, > +struct page **pages) > +{ > + return xen_remap_gfn(vma, addr, , nr, NULL, prot, domid, 
> + pages); > +} > +EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_range); > diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c > index feca75b07fdd..b58a1719b606 100644 > --- a/drivers/xen/privcmd.c > +++ b/drivers/xen/privcmd.c > @@ -215,6 +215,8 @@ static int traverse_pages_block(unsigned nelem, size_t > size, > return ret; > } > > +#ifdef CONFIG_XEN_PV >
[Xen-devel] [qemu-mainline test] 115463: regressions - FAIL
flight 115463 qemu-mainline real [real] http://logs.test-lab.xenproject.org/osstest/logs/115463/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: build-i3866 xen-buildfail REGR. vs. 114507 build-amd64-xsm 6 xen-buildfail REGR. vs. 114507 build-i386-xsm6 xen-buildfail REGR. vs. 114507 build-amd64 6 xen-buildfail REGR. vs. 114507 build-armhf 6 xen-buildfail REGR. vs. 114507 build-armhf-xsm 6 xen-buildfail REGR. vs. 114507 Tests which did not succeed, but are not blocking: test-amd64-i386-libvirt-qcow2 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1)blocked n/a test-amd64-i386-freebsd10-i386 1 build-check(1) blocked n/a test-amd64-amd64-qemuu-nested-intel 1 build-check(1) blocked n/a test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-armhf-armhf-libvirt 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-win10-i386 1 build-check(1) blocked n/a test-armhf-armhf-libvirt-raw 1 build-check(1) blocked n/a test-amd64-i386-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-multivcpu 1 build-check(1) blocked n/a test-amd64-amd64-libvirt 1 build-check(1) blocked n/a test-amd64-i386-freebsd10-amd64 1 build-check(1) blocked n/a test-amd64-amd64-pair 1 build-check(1) blocked n/a test-armhf-armhf-xl-credit2 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-win10-i386 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a test-amd64-amd64-pygrub 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a test-amd64-amd64-xl-qcow2 1 build-check(1) blocked n/a test-amd64-amd64-amd64-pvgrub 1 build-check(1) blocked n/a test-armhf-armhf-xl-arndale 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-debianhvm-amd64 1 
build-check(1) blocked n/a test-armhf-armhf-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-i386-xl1 build-check(1) blocked n/a build-i386-libvirt1 build-check(1) blocked n/a test-amd64-i386-libvirt-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-vhd 1 build-check(1) blocked n/a test-amd64-amd64-xl-credit2 1 build-check(1) blocked n/a test-armhf-armhf-xl-multivcpu 1 build-check(1) blocked n/a test-amd64-i386-xl-xsm1 build-check(1) blocked n/a build-amd64-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1)blocked n/a test-amd64-i386-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a test-amd64-i386-xl-raw1 build-check(1) blocked n/a test-amd64-i386-qemuu-rhel6hvm-amd 1 build-check(1) blocked n/a test-amd64-amd64-i386-pvgrub 1 build-check(1) blocked n/a build-armhf-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a test-amd64-i386-libvirt 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-pvhv2-intel 1 build-check(1) blocked n/a test-armhf-armhf-xl 1 build-check(1) blocked n/a test-amd64-amd64-libvirt-xsm 1 build-check(1) blocked n/a test-amd64-amd64-xl-xsm 1 build-check(1) blocked n/a test-amd64-i386-qemuu-rhel6hvm-intel 1 build-check(1) blocked n/a test-armhf-armhf-xl-vhd 1 build-check(1) blocked n/a test-amd64-i386-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a test-amd64-amd64-xl 1 build-check(1) blocked n/a test-amd64-i386-pair 1 build-check(1) blocked n/a test-amd64-amd64-xl-pvhv2-amd 1 build-check(1) blocked n/a test-amd64-amd64-qemuu-nested-amd 1 build-check(1) blocked n/a test-armhf-armhf-xl-cubietruck 1 build-check(1) blocked n/a test-armhf-armhf-xl-rtds 1
[Xen-devel] [linux-4.9 test] 115457: regressions - FAIL
flight 115457 linux-4.9 real [real] http://logs.test-lab.xenproject.org/osstest/logs/115457/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail REGR. vs. 114814 Tests which are failing intermittently (not blocking): test-amd64-i386-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail in 115432 pass in 115457 test-amd64-i386-freebsd10-i386 10 freebsd-install fail pass in 115432 test-armhf-armhf-xl-cubietruck 6 xen-install fail pass in 115432 test-amd64-amd64-xl-qemuu-ovmf-amd64 16 guest-localmigrate/x10 fail pass in 115432 Tests which did not succeed, but are not blocking: test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail in 115432 never pass test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail in 115432 never pass test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 114814 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 114814 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass test-amd64-amd64-xl-pvhv2-amd 12 guest-start fail never pass test-amd64-i386-libvirt 13 migrate-support-checkfail never pass test-amd64-i386-libvirt-xsm 13 migrate-support-checkfail never pass test-amd64-amd64-libvirt 13 migrate-support-checkfail never pass test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass test-armhf-armhf-xl-arndale 13 migrate-support-checkfail never pass test-armhf-armhf-xl-arndale 14 saverestore-support-checkfail never pass test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail never pass test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail never pass test-armhf-armhf-libvirt 13 migrate-support-checkfail never pass test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail never pass 
test-armhf-armhf-libvirt 14 saverestore-support-checkfail never pass test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail never pass test-armhf-armhf-xl-xsm 13 migrate-support-checkfail never pass test-armhf-armhf-xl-xsm 14 saverestore-support-checkfail never pass test-armhf-armhf-xl 13 migrate-support-checkfail never pass test-armhf-armhf-xl 14 saverestore-support-checkfail never pass test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail never pass test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail never pass test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail never pass test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail never pass test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail never pass test-armhf-armhf-xl-credit2 13 migrate-support-checkfail never pass test-armhf-armhf-xl-credit2 14 saverestore-support-checkfail never pass test-armhf-armhf-xl-rtds 13 migrate-support-checkfail never pass test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail never pass test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass test-armhf-armhf-xl-vhd 12 migrate-support-checkfail never pass test-armhf-armhf-xl-vhd 13 saverestore-support-checkfail never pass test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass version targeted for testing: linuxd785062ef20f9b2cd8cedcafea55ca8264f25f3e baseline version: linux5d7a76acad403638f635c918cc63d1d44ffa4065 Last test of basis 114814 2017-10-20 20:51:56 Z 11 days Failing since114845 2017-10-21 16:14:17 Z 10 days 19 attempts Testing same since 115296 2017-10-27 11:07:37 Z5 days9 attempts People who touched revisions under 
test:
  Alan Stern
  Alex Deucher
  Alexandre Belloni
  Andrew Morton
  Andrey Konovalov
  Anoob Soman
  Arend van Spriel
  Arnd Bergmann
  Bart Van Assche
Re: [Xen-devel] [PATCH v4 for-4.10] scripts: introduce a script for build test
On Mon, Oct 30, 2017 at 04:01:54PM +0000, Wei Liu wrote:
> +git rev-list $BASE..$TIP | nl -ba | tac | \
> +while read num rev; do
> +    echo "Testing $num $rev"
> +
> +    git checkout $rev
> +    ret=$?
> +    if test $ret -ne 0; then
> +        echo "Failed to checkout $num $rev with $ret"

I don't think printing the return value of git-checkout is useful; it is
just too much information, and the git-checkout man page says nothing
about its meaning.

Besides that, and the fact that I don't like this style of while loop
where `exit' doesn't exit the script, but only the loop ...

Reviewed-by: Anthony PERARD

FYI: One would write the loop like this:

    while read num rev; do
        :
    done < <(git rev-list $BASE..$TIP | nl -ba | tac)

And then you could `ret=$?; break' inside the loop, and have the correct
$ret value after the loop.

-- 
Anthony PERARD

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
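The scoping pitfall Anthony describes can be demonstrated in a few lines. This is a minimal bash sketch (not the osstest script itself): with a pipe, the while loop runs in a subshell, so a variable set inside it (or an `exit`) does not affect the main script; feeding the loop from process substitution keeps it in the current shell.

```shell
#!/bin/bash
# Requires bash: process substitution `< <(...)` is not POSIX sh.

ret=0
printf '1\n2\n3\n' | while read -r n; do ret=$n; done
echo "after pipe: ret=$ret"                 # still 0: the loop ran in a subshell

ret=0
while read -r n; do ret=$n; done < <(printf '1\n2\n3\n')
echo "after process substitution: ret=$ret" # 3: the loop ran in this shell
```

With the second form, a `ret=$?; break` inside the loop leaves the failing checkout's status visible after the loop, which is what the review suggests.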
[Xen-devel] [PATCH v2] xen: support priv-mapping in an HVM tools domain
If the domain has XENFEAT_auto_translated_physmap then use of the PV-specific
HYPERVISOR_mmu_update hypercall is clearly incorrect. This patch adds checks
in xen_remap_domain_gfn_array() and xen_unmap_domain_gfn_array() which call
through to the appropriate xlate_mmu function if the feature is present.
This patch also moves xen_remap_domain_gfn_range() into the PV-only MMU code
and #ifdefs the (only) calling code in privcmd accordingly.

Signed-off-by: Paul Durrant
---
Cc: Boris Ostrovsky
Cc: Juergen Gross
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
---
 arch/x86/xen/mmu.c    | 36 +---
 arch/x86/xen/mmu_pv.c | 11 +++
 drivers/xen/privcmd.c | 17 +
 include/xen/xen-ops.h |  7 +++
 4 files changed, 48 insertions(+), 23 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 3e15345abfe7..01837c36e293 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -91,12 +91,12 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 	return 0;
 }
 
-static int do_remap_gfn(struct vm_area_struct *vma,
-			unsigned long addr,
-			xen_pfn_t *gfn, int nr,
-			int *err_ptr, pgprot_t prot,
-			unsigned domid,
-			struct page **pages)
+int xen_remap_gfn(struct vm_area_struct *vma,
+		  unsigned long addr,
+		  xen_pfn_t *gfn, int nr,
+		  int *err_ptr, pgprot_t prot,
+		  unsigned int domid,
+		  struct page **pages)
 {
 	int err = 0;
 	struct remap_data rmd;
@@ -166,36 +166,34 @@ static int do_remap_gfn(struct vm_area_struct *vma,
 	return err < 0 ? err : mapped;
 }
 
-int xen_remap_domain_gfn_range(struct vm_area_struct *vma,
-			       unsigned long addr,
-			       xen_pfn_t gfn, int nr,
-			       pgprot_t prot, unsigned domid,
-			       struct page **pages)
-{
-	return do_remap_gfn(vma, addr, &gfn, nr, NULL, prot, domid, pages);
-}
-EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_range);
-
 int xen_remap_domain_gfn_array(struct vm_area_struct *vma,
 			       unsigned long addr,
 			       xen_pfn_t *gfn, int nr,
 			       int *err_ptr, pgprot_t prot,
 			       unsigned domid, struct page **pages)
 {
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return xen_xlate_remap_gfn_array(vma, addr, gfn, nr, err_ptr,
+						 prot, domid, pages);
+
 	/* We BUG_ON because it's a programmer error to pass a NULL err_ptr,
 	 * and the consequences later is quite hard to detect what the actual
 	 * cause of "wrong memory was mapped in". */
 	BUG_ON(err_ptr == NULL);
-	return do_remap_gfn(vma, addr, gfn, nr, err_ptr, prot, domid, pages);
+	return xen_remap_gfn(vma, addr, gfn, nr, err_ptr, prot, domid,
+			     pages);
 }
 EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_array);
 
 /* Returns: 0 success */
 int xen_unmap_domain_gfn_range(struct vm_area_struct *vma,
-			       int numpgs, struct page **pages)
+			       int nr, struct page **pages)
 {
-	if (!pages || !xen_feature(XENFEAT_auto_translated_physmap))
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return xen_xlate_unmap_gfn_range(vma, nr, pages);
+
+	if (!pages)
 		return 0;
 
 	return -EINVAL;
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index 71495f1a86d7..4974d8a6c2b4 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2670,3 +2670,14 @@ phys_addr_t paddr_vmcoreinfo_note(void)
 	return __pa(vmcoreinfo_note);
 }
 #endif /* CONFIG_KEXEC_CORE */
+
+int xen_remap_domain_gfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       xen_pfn_t gfn, int nr,
+			       pgprot_t prot, unsigned int domid,
+			       struct page **pages)
+{
+	return xen_remap_gfn(vma, addr, &gfn, nr, NULL, prot, domid,
+			     pages);
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_gfn_range);
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index feca75b07fdd..b58a1719b606 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -215,6 +215,8 @@ static int traverse_pages_block(unsigned nelem, size_t size,
 	return ret;
 }
 
+#ifdef CONFIG_XEN_PV
+
 struct mmap_gfn_state {
 	unsigned long va;
 	struct vm_area_struct *vma;
@@ -261,10 +263,6 @@ static long privcmd_ioctl_mmap(struct file *file, void __user *udata)
Re: [Xen-devel] Commit moratorium to staging
> -----Original Message-----
> From: Wei Liu [mailto:wei.l...@citrix.com]
> Sent: 01 November 2017 10:48
> To: Roger Pau Monne
> Cc: Julien Grall; committ...@xenproject.org; xen-devel; Lars Kurth;
> Paul Durrant; Wei Liu
> Subject: Re: Commit moratorium to staging
>
> On Tue, Oct 31, 2017 at 04:52:37PM +0000, Roger Pau Monné wrote:
> >
> > I have to admit I have no idea why Windows clears the STS power bit
> > and then completely ignores it on certain occasions.
> >
> > I'm also afraid I have no idea how to debug Windows in order to know
> > why this event is acknowledged but ignored.
> >
> > I've also tried to reproduce the same with a Debian guest, by doing
> > the same amount of save/restores and migrations, and finally issuing a
> > "xl trigger power", but Debian has always worked fine and shut down.
> >
> > Any comments are welcome.
>
> After googling around, some articles suggest Windows can ignore ACPI
> events under certain circumstances. Is it worth checking in the Windows
> event log to see if an event is received but ignored for reason X?

Dumping the event logs would definitely be a useful thing to do.

> For Windows Server 2012:
> https://serverfault.com/questions/534042/windows-2012-how-to-make-power-button-work-in-every-cases
>
> Can't find anything for Windows Server 2016.

No, I couldn't either. I did find

https://ethertubes.com/unattended-acpi-shutdown-of-windows-server/

too, which seems to have some potentially useful suggestions.

  Paul
Re: [Xen-devel] Commit moratorium to staging
On Tue, Oct 31, 2017 at 04:52:37PM +0000, Roger Pau Monné wrote:
>
> I have to admit I have no idea why Windows clears the STS power bit
> and then completely ignores it on certain occasions.
>
> I'm also afraid I have no idea how to debug Windows in order to know
> why this event is acknowledged but ignored.
>
> I've also tried to reproduce the same with a Debian guest, by doing
> the same amount of save/restores and migrations, and finally issuing a
> "xl trigger power", but Debian has always worked fine and shut down.
>
> Any comments are welcome.

After googling around, some articles suggest Windows can ignore ACPI
events under certain circumstances. Is it worth checking in the Windows
event log to see if an event is received but ignored for reason X?

For Windows Server 2012:
https://serverfault.com/questions/534042/windows-2012-how-to-make-power-button-work-in-every-cases

Can't find anything for Windows Server 2016.
[Xen-devel] [xen-unstable test] 115450: regressions - FAIL
flight 115450 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115450/

Regressions :-(

Tests which did not succeed and are blocking, including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  17 guest-stop  fail REGR. vs. 114644
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop  fail REGR. vs. 114644
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop  fail REGR. vs. 114644

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds              6 xen-install                fail in 115378 pass in 115450
 test-amd64-amd64-libvirt-vhd         17 guest-start/debian.repeat  fail in 115378 pass in 115450
 test-amd64-i386-xl-raw               19 guest-start/debian.repeat  fail in 115378 pass in 115450
 test-armhf-armhf-xl-credit2          16 guest-start/debian.repeat  fail in 115378 pass in 115450
 test-armhf-armhf-xl                   6 xen-install                fail in 115401 pass in 115450
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore.2        fail in 115401 pass in 115450
 test-armhf-armhf-xl-vhd              15 guest-start/debian.repeat  fail in 115401 pass in 115450
 test-amd64-amd64-xl-qcow2            19 guest-start/debian.repeat  fail pass in 115378
 test-amd64-i386-libvirt-qcow2        17 guest-start/debian.repeat  fail pass in 115401

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm         14 saverestore-support-check  fail like 114644
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop                 fail like 114644
 test-amd64-i386-xl-qemuu-win7-amd64  17 guest-stop                 fail like 114644
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop                 fail like 114644
 test-amd64-i386-xl-qemut-win7-amd64  17 guest-stop                 fail like 114644
 test-armhf-armhf-libvirt-raw         13 saverestore-support-check  fail like 114644
 test-armhf-armhf-libvirt             14 saverestore-support-check  fail like 114644
 test-amd64-amd64-xl-pvhv2-intel      12 guest-start                fail never pass
 test-amd64-amd64-xl-pvhv2-amd        12 guest-start                fail never pass
 test-amd64-amd64-libvirt-xsm         13 migrate-support-check      fail never pass
 test-amd64-amd64-libvirt             13 migrate-support-check      fail never pass
 test-amd64-i386-libvirt-xsm          13 migrate-support-check      fail never pass
 test-amd64-i386-libvirt              13 migrate-support-check      fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check  fail never pass
 test-amd64-i386-libvirt-qcow2        12 migrate-support-check      fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm  11 migrate-support-check  fail never pass
 test-amd64-amd64-libvirt-vhd         12 migrate-support-check      fail never pass
 test-amd64-amd64-qemuu-nested-amd    17 debian-hvm-install/l1/l2   fail never pass
 test-armhf-armhf-xl-rtds             13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-rtds             14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-cubietruck       13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-cubietruck       14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl                  13 migrate-support-check      fail never pass
 test-armhf-armhf-xl                  14 saverestore-support-check  fail never pass
 test-armhf-armhf-libvirt-xsm         13 migrate-support-check      fail never pass
 test-armhf-armhf-libvirt-raw         12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-vhd              12 migrate-support-check      fail never pass
 test-armhf-armhf-xl-vhd              13 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-arndale          13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-arndale          14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-xsm              13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-xsm              14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-multivcpu        13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-multivcpu        14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl-credit2          13 migrate-support-check      fail never pass
 test-armhf-armhf-xl-credit2          14 saverestore-support-check  fail never pass
 test-armhf-armhf-libvirt             13 migrate-support-check      fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64  17 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-win10-i386  10 windows-install            fail never pass
 test-amd64-i386-xl-qemuu-win10-i386  10 windows-install            fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install            fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install            fail never pass

version targeted for testing:
 xen                  bb2c1a1cc98a22e2d4c14b18421aa7be6c2adf0d
baseline version:
 xen                  24fb44e971a62b345c7b6ca3c03b454a1e150abe

Last test of basis   114644  2017-10-17 10:49:11 Z  14 days
Failing since        114670  2017-10-18 05:03:38 Z  14 days  22
Re: [Xen-devel] [PATCH for-4.10] gdbsx: prefer privcmd character device
Cc Julien and change tag. I think this is safe to be applied to 4.10.

On Tue, Oct 31, 2017 at 01:58:07PM -0700, Elena Ufimtseva wrote:
> On Tue, Oct 31, 2017 at 03:25:39PM +0000, Wei Liu wrote:
> > On Tue, Oct 31, 2017 at 10:20:11AM -0500, Doug Goldstein wrote:
> > > Prefer using the character device over the proc file if the character
> > > device exists.
> > >
> > > CC: Elena Ufimtseva
> > > CC: Ian Jackson
> > > CC: Stefano Stabellini
> > > CC: Wei Liu
> > > Signed-off-by: Doug Goldstein
> > > ---
> > > So this was originally submitted with 9c89dc95201 and 7d418eab3b6 and
> > > was rejected since the goal was to convert gdbsx to use libxc but that
> > > hasn't happened. /dev/xen/privcmd should be preferred and this change
> > > makes that happen. It would be nice if we landed this with the plan
> > > to convert gdbsx happening when it happens.
> >
> > Oh well... I think this is fine.
> >
> > Elena has the final verdict.
>
> I think this is fine.
> I will look into the conversion and relevant discussions if I find them and
> see what can be done.
>
> Thanks!
>
> Meanwhile,
> Reviewed-by: Elena Ufimtseva
>
> Elena
Re: [Xen-devel] [OSSTEST PATCH v2 00/19] Upgrade to Stretch
On Tue, Oct 31, 2017 at 06:42:28PM +0000, Wei Liu wrote:
> On Tue, Oct 31, 2017 at 01:51:44PM +0000, Wei Liu wrote:
> > First version of this series can be found at [0].
> >
> > This version contains a workaround for Arndale boards. They are now
> > functional.
> >
> > A bunch of test cases failed:
> >
> > 1. Rumpkernel tests -- I've sent an email to Antti for advice.
> > 2. Windows tests -- They don't look different from normal flights.
> > 3. memdisk-try-append -- Osstest couldn't find some file. I don't think
> >    it is related to the code I modified.
> > 4. guest-localmigrate/x10 for xl-qcow2 test -- Guest kernel bug.
> > 5. nested hvm amd, pvhv2 -- Expected failure.
> >
> > Example flight:
> > http://logs.test-lab.xenproject.org/osstest/logs/115404/
> >
> > The armhf d-i failure is fixed with an additional patch ("Skip bootloader
> > installation for arm32 on Stretch") on top of the code for 115404, in:
> >
> > http://logs.test-lab.xenproject.org/osstest/logs/115404/
>
> This should be 115433.

And a complete run of the series:

http://logs.test-lab.xenproject.org/osstest/logs/115436/
[Xen-devel] [qemu-mainline test] 115460: regressions - FAIL
flight 115460 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115460/

Regressions :-(

Tests which did not succeed and are blocking, including tests which could not be run:
 build-i386       6 xen-build  fail REGR. vs. 114507
 build-amd64-xsm  6 xen-build  fail REGR. vs. 114507
 build-i386-xsm   6 xen-build  fail REGR. vs. 114507
 build-amd64      6 xen-build  fail REGR. vs. 114507
 build-armhf      6 xen-build  fail REGR. vs. 114507
 build-armhf-xsm  6 xen-build  fail REGR. vs. 114507

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-i386 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-intel 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win10-i386 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-raw 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-multivcpu 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt 1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-pair 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-credit2 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win10-i386 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1) blocked n/a
 test-amd64-amd64-amd64-pvgrub 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-arndale 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl 1 build-check(1) blocked n/a
 build-i386-libvirt 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-credit2 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-multivcpu 1 build-check(1) blocked n/a
 test-amd64-i386-xl-xsm 1 build-check(1) blocked n/a
 build-amd64-libvirt 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-xl-raw 1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd 1 build-check(1) blocked n/a
 test-amd64-amd64-i386-pvgrub 1 build-check(1) blocked n/a
 build-armhf-libvirt 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-intel 1 build-check(1) blocked n/a
 test-armhf-armhf-xl 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-vhd 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-xl 1 build-check(1) blocked n/a
 test-amd64-i386-pair 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-amd 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-cubietruck 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-rtds 1
Re: [Xen-devel] [RFC] ARM: New (Xen) VGIC design document
Hi,

On 01/11/17 04:31, Christoffer Dall wrote:
> On Wed, Nov 1, 2017 at 9:58 AM, Stefano Stabellini wrote:
> >
[...]

Christoffer, many thanks for answering this! I think we have a lot of
assumptions about the whole VGIC life cycle floating around, but it would
indeed be good to get some numbers behind it. I would be all too happy to
trace some workloads on Xen again and get some metrics, though this sounds
time consuming if done properly. Do you have any numbers on VGIC
performance available somewhere?

>>> ### List register management
>>>
>>> A list register (LR) holds the state of a virtual interrupt, which will
>>> be used by the GIC hardware to simulate an IRQ life cycle for a guest.
>>> Each GIC hardware implementation can choose to implement a number of
>>> LRs; having four of them seems to be a common value. This design does
>>> not try to manage the LRs very cleverly; instead, on every guest exit
>>> every LR in use will be synced to the emulated state, then cleared.
>>> Upon guest entry the top-priority virtual IRQs will be inserted into
>>> the LRs. If there are more pending or active IRQs than list registers,
>>> the GIC maintenance IRQ will be configured to notify the hypervisor of
>>> a free LR (once the guest has EOIed one IRQ). This will trigger a
>>> normal exit, which will go through the normal cleanup/repopulate
>>> scheme, possibly now queuing the leftover interrupt(s).
>>>
>>> To facilitate quick guest exit and entry times, the VGIC maintains the
>>> list of pending or active interrupts (ap_list) sorted by their
>>> priority. Active interrupts always go first on the list, since a guest
>>> and the hardware GIC expect those to stay until they have been
>>> explicitly deactivated. Failure to keep active IRQs around will result
>>> in error conditions in the GIC. The second sort criterion for the
>>> ap_list is their priority, so higher priority pending interrupts
>>> always go first into the LRs.
>>
>> The suggestion of using this model in Xen was made in the past already.
>> I always objected for the reason that we don't actually know how many
>> LRs the hardware provides, potentially very many, and it is expensive
>> and needless to read/write them all every time on entry/exit.
>>
>> I would prefer to avoid that, but I'll be honest: I can be convinced
>> that that model of handling LRs is so much simpler that it is worth it.
>> I am more concerned about the future maintenance of a separate new
>> driver developed elsewhere.
>
> [Having just spent a fair amount of time optimizing KVM/ARM and
> measuring GIC interaction, I'll comment on this and leave it up to
> Andre to drive the rest of the discussion].
>
> In KVM we currently only ever touch an LR when we absolutely have to.
> For example, if there are no interrupts, we do not touch an LR.

Yes, I think this is a key point. We only touch LRs that we need to
touch: on guest entry we iterate our per-VCPU list of pending IRQs
(ap_list, which could be empty!) and store that number in a variable. On
exit we just sync back the first LRs. I think the code in KVM explains it
quite well:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/virt/kvm/arm/vgic/vgic.c#n677

> When you do have an interrupt in flight, and have programmed one or
> more LRs, you have to either read back that LR, or read one of the
> status registers to figure out if the interrupt has become inactive
> (and should potentially be injected again). I measured both on KVM
> for various workloads and it was faster to never read the status
> registers, but simply read back the LRs that were in use when entering
> the guest.
>
> You can potentially micro-optimize slightly by remembering the exit
> value of an LR (and not clearing it on guest exit), but you have to
> pay the cost in terms of additional logic during VCPU migration and
> when you enter a VM again, maintaining a mapping of the LR and the
> virtual state, to avoid rewriting the same value to the LR again. We
> tried that in KVM and could not measure any benefit using either a
> pinned or oversubscribed workload; I speculate that the number of
> times you exit with unprocessed interrupts in the LRs is extremely
> rare.
>
> In terms of the number of LRs, I still haven't seen an implementation
> with anything else than 4 LRs.

Yes, that is what I know of as well. The fast model has 16, but I guess
this doesn't count - though it's good to test some code. I can try to
learn the figure in newer hardware. In the past I traced some workloads
and found only a small number of LRs to be actually used, with 4 or more
being extremely rare.

Cheers,
Andre.
[Xen-devel] [distros-debian-squeeze test] 72403: tolerable FAIL
flight 72403 distros-debian-squeeze real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72403/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-amd64-squeeze-netboot-pygrub 10 debian-di-install  fail like 72350
 test-amd64-i386-amd64-squeeze-netboot-pygrub  10 debian-di-install  fail like 72350
 test-amd64-i386-i386-squeeze-netboot-pygrub   10 debian-di-install  fail like 72350
 test-amd64-amd64-i386-squeeze-netboot-pygrub  10 debian-di-install  fail like 72350

baseline version:
 flight   72350

jobs:
 build-amd64                                    pass
 build-armhf                                    pass
 build-i386                                     pass
 build-amd64-pvops                              pass
 build-armhf-pvops                              pass
 build-i386-pvops                               pass
 test-amd64-amd64-amd64-squeeze-netboot-pygrub  fail
 test-amd64-i386-amd64-squeeze-netboot-pygrub   fail
 test-amd64-amd64-i386-squeeze-netboot-pygrub   fail
 test-amd64-i386-i386-squeeze-netboot-pygrub    fail

sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
 http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
 http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary

Push not applicable.
[Xen-devel] [qemu-mainline test] 115456: regressions - FAIL
flight 115456 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/115456/

Regressions :-(

Tests which did not succeed and are blocking, including tests which could not be run:
 build-i386       6 xen-build  fail REGR. vs. 114507
 build-amd64-xsm  6 xen-build  fail REGR. vs. 114507
 build-i386-xsm   6 xen-build  fail REGR. vs. 114507
 build-amd64      6 xen-build  fail REGR. vs. 114507
 build-armhf      6 xen-build  fail REGR. vs. 114507
 build-armhf-xsm  6 xen-build  fail REGR. vs. 114507

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-i386 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-intel 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win10-i386 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-raw 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-multivcpu 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt 1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-pair 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-credit2 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win10-i386 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1) blocked n/a
 test-amd64-amd64-amd64-pvgrub 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-arndale 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl 1 build-check(1) blocked n/a
 build-i386-libvirt 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-credit2 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-multivcpu 1 build-check(1) blocked n/a
 test-amd64-i386-xl-xsm 1 build-check(1) blocked n/a
 build-amd64-libvirt 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-xl-raw 1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd 1 build-check(1) blocked n/a
 test-amd64-amd64-i386-pvgrub 1 build-check(1) blocked n/a
 build-armhf-libvirt 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-intel 1 build-check(1) blocked n/a
 test-armhf-armhf-xl 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-vhd 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64 1 build-check(1) blocked n/a
 test-amd64-amd64-xl 1 build-check(1) blocked n/a
 test-amd64-i386-pair 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-amd 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-cubietruck 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-rtds 1