[Xen-devel] [libvirt test] 146410: regressions - FAIL

2020-01-22 Thread osstest service owner
flight 146410 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146410/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt   6 libvirt-buildfail REGR. vs. 146182
 build-i386-libvirt6 libvirt-buildfail REGR. vs. 146182
 build-arm64-libvirt   6 libvirt-buildfail REGR. vs. 146182
 build-armhf-libvirt   6 libvirt-buildfail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a

version targeted for testing:
 libvirt  153fd683681be13f380378acfc531cc3df206fd1
baseline version:
 libvirt  a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z6 days
Failing since146211  2020-01-18 04:18:52 Z5 days6 attempts
Testing same since   146410  2020-01-23 04:18:59 Z0 days1 attempts


People who touched revisions under test:
  Christian Ehrhardt 
  Daniel P. Berrangé 
  Julio Faracco 
  Ján Tomko 
  Marek Marczykowski-Górecki 
  Pavel Hrdina 
  Peter Krempa 
  Richard W.M. Jones 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  fail
 build-arm64-libvirt  fail
 build-armhf-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmblocked 
 test-amd64-amd64-libvirt-xsm blocked 
 test-arm64-arm64-libvirt-xsm blocked 
 test-amd64-i386-libvirt-xsm  blocked 
 test-amd64-amd64-libvirt blocked 
 test-arm64-arm64-libvirt blocked 
 test-armhf-armhf-libvirt blocked 
 test-amd64-i386-libvirt  blocked 
 test-amd64-amd64-libvirt-pairblocked 
 test-amd64-i386-libvirt-pair blocked 
 test-arm64-arm64-libvirt-qcow2   blocked 
 test-armhf-armhf-libvirt-raw blocked 
 test-amd64-amd64-libvirt-vhd blocked 



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 560 lines long.)


[Xen-devel] [qemu-mainline test] 146409: regressions - FAIL

2020-01-22 Thread osstest service owner
flight 146409 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146409/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm   6 xen-buildfail REGR. vs. 144861
 build-arm64   6 xen-buildfail REGR. vs. 144861
 build-armhf   6 xen-buildfail REGR. vs. 144861
 build-i386-xsm6 xen-buildfail REGR. vs. 144861
 build-amd64-xsm   6 xen-buildfail REGR. vs. 144861
 build-i3866 xen-buildfail REGR. vs. 144861
 build-amd64   6 xen-buildfail REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvshim1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-shadow 1 

[Xen-devel] [linux-5.4 test] 146398: regressions - FAIL

2020-01-22 Thread osstest service owner
flight 146398 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146398/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 146121
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 
146121

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd 12 guest-start/redhat.repeat fail in 146354 
pass in 146398
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail in 146354 pass in 
146398
 test-amd64-amd64-xl-rtds 18 guest-localmigrate/x10 fail pass in 146354

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass

version targeted for testing:
 linuxba19874032074ca5a3817ae82ebae27bd3343551
baseline version:
 linux

Re: [Xen-devel] [RFC XEN PATCH 00/23] xen: beginning support for RISC-V

2020-01-22 Thread Bobby Eshleman
On Wed, Jan 22, 2020 at 04:27:39PM +, Lars Kurth wrote:
> 
> You should also leverage the developer summit: see 
> https://events.linuxfoundation.org/xen-summit/program/cfp/ 
> 
> CfP closes March 6th. Design sessions can be submitted afterwards
> 
> Community calls may also be a good option to deal with specific issues / 
> questions, e.g. around compile support in the CI, etc.
> 
> Lars
>

That's a really good idea.  I'll submit as I do think I can get there if 
accepted.  Thanks for the tip on
community calls, I did not realize Xen did those!

-Bobby

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [RFC XEN PATCH 00/23] xen: beginning support for RISC-V

2020-01-22 Thread Bobby Eshleman
On Wed, Jan 22, 2020 at 02:57:47PM +, Andrew Cooper wrote:
> How much time do you have to put towards the port?  Is this something in
> your free time, or something you are doing as part of work?  Ultimately,
> we are going to need to increase the level of RISC-V knowledge in the
> community to maintain things in the future.
> 

This is something in my free time, and I have about 20 hours a week to
put into it.

> Other than that, very RFC series are entirely fine.  A good first step
> would be simply to get the build working, and get some kind of
> cross-compile build in CI, to make sure that we don't clobber the RISC-V
> build with common or other-arch changes.
> 

That's something I can look at, if the idea of QEMU in CI is
not too horrific.


> I hope this helps.
> 
> ~Andrew


Definitely helps, thanks!

-Bobby

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [ovmf test] 146405: regressions - FAIL

2020-01-22 Thread osstest service owner
flight 146405 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146405/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 145767
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 
145767

version targeted for testing:
 ovmf 9a1f14ad721bbcd833ec5108944c44a502392f03
baseline version:
 ovmf 70911f1f4aee0366b6122f2b90d367ec0f066beb

Last test of basis   145767  2020-01-08 00:39:09 Z   15 days
Failing since145774  2020-01-08 02:50:20 Z   15 days   55 attempts
Testing same since   146346  2020-01-21 04:31:27 Z2 days7 attempts


People who touched revisions under test:
  Aaron Li 
  Albecki, Mateusz 
  Ard Biesheuvel 
  Ashish Singhal 
  Bob Feng 
  Brian R Haug 
  Eric Dong 
  Fan, ZhijuX 
  Hao A Wu 
  Jason Voelz 
  Jian J Wang 
  Krzysztof Koch 
  Laszlo Ersek 
  Leif Lindholm 
  Li, Aaron 
  Liming Gao 
  Mateusz Albecki 
  Michael D Kinney 
  Michael Kubacki 
  Pavana.K 
  Philippe Mathieu-Daud? 
  Philippe Mathieu-Daude 
  Siyuan Fu 
  Siyuan, Fu 
  Sudipto Paul 
  Vitaly Cheptsov 
  Vitaly Cheptsov via Groups.Io 
  Wei6 Xu 
  Xu, Wei6 
  Zhiguang Liu 
  Zhiju.Fan 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1164 lines long.)

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [RFC XEN PATCH 00/23] xen: beginning support for RISC-V

2020-01-22 Thread Bobby Eshleman
On Wed, Jan 22, 2020 at 01:05:11PM -0800, Stefano Stabellini wrote:
> On Wed, 22 Jan 2020, Andrew Cooper wrote:
> > > My big questions are:
> > >   Does the Xen project have interest in RISC-V?
> > 
> > There is very large downstream interest in RISC-V.  So a definite yes.
> 
> Definite Yes from me too
> 

Both great to hear!

> 
> > >   What can be done to make the RISC-V port as upstreamable as
> > >   possible?
> > >   Any major pitfalls?
> > >
> > > It would be great to hear all of your feedback.
> > 
> > Both RISC-V and Power9 are frequently requested things, and both suffer
> > from the fact that, while we as a community would like them, the
> > upstream intersection of "people who know Xen" and "people who know
> > enough arch $X to do an initial port" is 0.
> > 
> > This series clearly demonstrates a change in the status quo, and I think
> > a lot of people will be happy.
> > 
> > To get RISC-V to being fully supported, we will ultimately need to get
> > hardware into the CI system, and an easy way for developers to test
> > changes.  Do you have any thoughts on production RISC-V hardware
> > (ideally server form factor) for the CI system, and/or dev boards which
> > might be available fairly cheaply?
> 
> My understanding is that virtualization development for RISC-V is done
> on QEMU right now (which could still be hooked into the CI system if
> somebody wanted to do the work I think.)

That is correct.  I think the RTL and hardware folks are waiting for the
spec to be finalized before committing to the effort, so everyone is
just developing against QEMU for now.

I can certainly look at hooking in QEMU to the CI at some point soon.  That
is the OSSTest repo, correct?

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [libvirt bisection] complete build-armhf-libvirt

2020-01-22 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job build-armhf-libvirt
testid libvirt-build

Tree: libvirt git://libvirt.org/libvirt.git
Tree: libvirt_gnulib https://git.savannah.gnu.org/git/gnulib.git/
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  libvirt git://libvirt.org/libvirt.git
  Bug introduced:  4d5f50d86b760864240c695adc341379fb47a796
  Bug not present: a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/146411/


  commit 4d5f50d86b760864240c695adc341379fb47a796
  Author: Pavel Hrdina 
  Date:   Wed Jan 8 22:54:31 2020 +0100
  
  bootstrap.conf: stop creating AUTHORS file
  
  The existence of AUTHORS file is required for GNU projects but since
  commit <8bfb36db40f38e92823b657b5a342652064b5adc> we do not require
  these files to exist.
  
  Signed-off-by: Pavel Hrdina 
  Reviewed-by: Daniel P. Berrangé 


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/libvirt/build-armhf-libvirt.libvirt-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/libvirt/build-armhf-libvirt.libvirt-build 
--summary-out=tmp/146411.bisection-summary --basis-template=146182 
--blessings=real,real-bisect libvirt build-armhf-libvirt libvirt-build
Searching for failure / basis pass:
 146374 fail [host=cubietruck-braque] / 146182 [host=cubietruck-picasso] 146156 
[host=cubietruck-metzinger] 146103 [host=cubietruck-picasso] 146061 
[host=cubietruck-picasso] 145969 ok.
Failure / basis pass flights: 146374 / 145969
Tree: libvirt git://libvirt.org/libvirt.git
Tree: libvirt_gnulib https://git.savannah.gnu.org/git/gnulib.git/
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 6c1dddaf97b4ef70e27961c9f79b15c79a863ac5 
611869be9f1083e53305446d90a2909fc89914ef 
317d3eeb963a515e15a63fa356d8ebcda7041a51 
70911f1f4aee0366b6122f2b90d367ec0f066beb 
933ebad2470a169504799a1d95b8e410bd9847ef 
76551856b28d227cb0386a1ab0e774329b941f7d 
03bfe526ecadc86f31eda433b91dc90be0563919
Basis pass 4a09c143f6c467230ab60c20fea560e710ddeee0 
7d069378921bfa0d7c7198ea177aac0a2440016f 
317d3eeb963a515e15a63fa356d8ebcda7041a51 
70911f1f4aee0366b6122f2b90d367ec0f066beb 
933ebad2470a169504799a1d95b8e410bd9847ef 
f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 
fae249d23413b2bf7d98a97d8f649cf7d102c1ae
Generating revisions with ./adhoc-revtuple-generator  
git://libvirt.org/libvirt.git#4a09c143f6c467230ab60c20fea560e710ddeee0-6c1dddaf97b4ef70e27961c9f79b15c79a863ac5
 
https://git.savannah.gnu.org/git/gnulib.git/#7d069378921bfa0d7c7198ea177aac0a2440016f-611869be9f1083e53305446d90a2909fc89914ef
 
https://gitlab.com/keycodemap/keycodemapdb.git#317d3eeb963a515e15a63fa356d8ebcda7041a51-317d3eeb963a515e15a63fa356d8ebcda7041a51
 git://xenbits.xen.org/osstest/ovmf.git#70911f1f4aee0366b6122f2b90d367ec0f066be\
 b-70911f1f4aee0366b6122f2b90d367ec0f066beb 
git://xenbits.xen.org/qemu-xen.git#933ebad2470a169504799a1d95b8e410bd9847ef-933ebad2470a169504799a1d95b8e410bd9847ef
 
git://xenbits.xen.org/osstest/seabios.git#f21b5a4aeb020f2a5e2c6503f906a9349dd2f069-76551856b28d227cb0386a1ab0e774329b941f7d
 
git://xenbits.xen.org/xen.git#fae249d23413b2bf7d98a97d8f649cf7d102c1ae-03bfe526ecadc86f31eda433b91dc90be0563919
Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to 
remove them.

Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to 
remove them.

Use of uninitialized value $parents in array dereference at 
./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at 
./adhoc-revtuple-generator line 465.
Loaded 17537 nodes in revision graph
Searching for test results:
 145969 pass 4a09c143f6c467230ab60c20fea560e710ddeee0 
7d069378921bfa0d7c7198ea177aac0a2440016f 

[Xen-devel] [qemu-mainline test] 146403: regressions - FAIL

2020-01-22 Thread osstest service owner
flight 146403 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146403/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm   6 xen-buildfail REGR. vs. 144861
 build-arm64   6 xen-buildfail REGR. vs. 144861
 build-armhf   6 xen-buildfail REGR. vs. 144861
 build-i386-xsm6 xen-buildfail REGR. vs. 144861
 build-amd64-xsm   6 xen-buildfail REGR. vs. 144861
 build-i3866 xen-buildfail REGR. vs. 144861
 build-amd64   6 xen-buildfail REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-pvshim 1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)  blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-pvshim1 build-check(1)   blocked  n/a
 build-i386-libvirt1 

[Xen-devel] [xen-unstable test] 146393: regressions - FAIL

2020-01-22 Thread osstest service owner
flight 146393 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146393/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install 
fail REGR. vs. 146058
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install 
fail REGR. vs. 146058

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd 10 redhat-install fail in 146379 pass in 
146393
 test-amd64-amd64-xl-rtds 15 guest-saverestore  fail pass in 146379

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds 18 guest-localmigrate/x10 fail in 146379 blocked in 
146058
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeatfail  like 146050
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 146058
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 146058
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 146058
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 146058
 test-amd64-amd64-qemuu-nested-intel 17 debian-hvm-install/l1/l2 fail like 
146058
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 146058
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 146058
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 146058
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 146058
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 146058
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop 

Re: [Xen-devel] [PATCH v4 2/7] x86/hyperv: setup hypercall page

2020-01-22 Thread Michael Kelley
From: Wei Liu  On Behalf Of Wei Liu  Sent: Wednesday, 
January 22, 2020 12:24 PM
> 
> Use the top-most addressable page for that purpose. Adjust e820 code
> accordingly.
> 
> We also need to register Xen's guest OS ID to Hyper-V. Use 0x300 as the
> OS type.
> 
> Signed-off-by: Wei Liu 
> ---
> XXX the decision on Xen's vendor ID is pending.
> 
> v4:
> 1. Use fixmap
> 2. Follow routines listed in TLFS
> ---
>  xen/arch/x86/e820.c | 41 +++
>  xen/arch/x86/guest/hyperv/hyperv.c  | 53 +++--
>  xen/include/asm-x86/guest/hyperv-tlfs.h |  5 ++-
>  3 files changed, 86 insertions(+), 13 deletions(-)
> 
> diff --git a/xen/arch/x86/e820.c b/xen/arch/x86/e820.c
> index 082f9928a1..5a4ef27a0b 100644
> --- a/xen/arch/x86/e820.c
> +++ b/xen/arch/x86/e820.c
> @@ -36,6 +36,22 @@ boolean_param("e820-verbose", e820_verbose);
>  struct e820map e820;
>  struct e820map __initdata e820_raw;
> 
> +static unsigned int find_phys_addr_bits(void)
> +{
> +uint32_t eax;
> +unsigned int phys_bits = 36;
> +
> +eax = cpuid_eax(0x8000);
> +if ( (eax >> 16) == 0x8000 && eax >= 0x8008 )
> +{
> +phys_bits = (uint8_t)cpuid_eax(0x8008);
> +if ( phys_bits > PADDR_BITS )
> +phys_bits = PADDR_BITS;
> +}
> +
> +return phys_bits;
> +}
> +
>  /*
>   * This function checks if the entire range  is mapped with type.
>   *
> @@ -357,6 +373,21 @@ static unsigned long __init find_max_pfn(void)
>  max_pfn = end;
>  }
> 
> +#ifdef CONFIG_HYPERV_GUEST
> +{
> + /*
> +  * We reserve the top-most page for hypercall page. Adjust
> +  * max_pfn if necessary.
> +  */
> +unsigned int phys_bits = find_phys_addr_bits();
> +unsigned long hcall_pfn =
> +  ((1ull << phys_bits) - 1) >> PAGE_SHIFT;
> +
> +if ( max_pfn >= hcall_pfn )
> +  max_pfn = hcall_pfn - 1;
> +}
> +#endif
> +
>  return max_pfn;
>  }
> 
> @@ -420,7 +451,7 @@ static uint64_t __init mtrr_top_of_ram(void)
>  {
>  uint32_t eax, ebx, ecx, edx;
>  uint64_t mtrr_cap, mtrr_def, addr_mask, base, mask, top;
> -unsigned int i, phys_bits = 36;
> +unsigned int i, phys_bits;
> 
>  /* By default we check only Intel systems. */
>  if ( e820_mtrr_clip == -1 )
> @@ -446,13 +477,7 @@ static uint64_t __init mtrr_top_of_ram(void)
>   return 0;
> 
>  /* Find the physical address size for this CPU. */
> -eax = cpuid_eax(0x8000);
> -if ( (eax >> 16) == 0x8000 && eax >= 0x8008 )
> -{
> -phys_bits = (uint8_t)cpuid_eax(0x8008);
> -if ( phys_bits > PADDR_BITS )
> -phys_bits = PADDR_BITS;
> -}
> +phys_bits = find_phys_addr_bits();
>  addr_mask = ((1ull << phys_bits) - 1) & ~((1ull << 12) - 1);
> 
>  rdmsrl(MSR_MTRRcap, mtrr_cap);
> diff --git a/xen/arch/x86/guest/hyperv/hyperv.c 
> b/xen/arch/x86/guest/hyperv/hyperv.c
> index 8d38313d7a..f986c1a805 100644
> --- a/xen/arch/x86/guest/hyperv/hyperv.c
> +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> @@ -18,17 +18,27 @@
>   *
>   * Copyright (c) 2019 Microsoft.
>   */
> +#include 
>  #include 
> 
> +#include 
>  #include 
>  #include 
> +#include 
> 
>  struct ms_hyperv_info __read_mostly ms_hyperv;
> 
> -static const struct hypervisor_ops ops = {
> -.name = "Hyper-V",
> -};
> +static uint64_t generate_guest_id(void)
> +{
> +uint64_t id = 0;
> +
> +id = (uint64_t)HV_XEN_VENDOR_ID << 48;
> +id |= (xen_major_version() << 16) | xen_minor_version();
> +
> +return id;
> +}
> 
> +static const struct hypervisor_ops ops;
>  const struct hypervisor_ops *__init hyperv_probe(void)
>  {
>  uint32_t eax, ebx, ecx, edx;
> @@ -72,6 +82,43 @@ const struct hypervisor_ops *__init hyperv_probe(void)
>  return 
>  }
> 
> +static void __init setup_hypercall_page(void)
> +{
> +union hv_x64_msr_hypercall_contents hypercall_msr;
> +union hv_guest_os_id guest_id;
> +unsigned long mfn;
> +
> +rdmsrl(HV_X64_MSR_GUEST_OS_ID, guest_id.raw);
> +if ( !guest_id.raw )
> +{
> +guest_id.raw = generate_guest_id();
> +wrmsrl(HV_X64_MSR_GUEST_OS_ID, guest_id.raw);
> +}
> +
> +rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
> +if ( !hypercall_msr.enable )
> +{
> +mfn = ((1ull << paddr_bits) - 1) >> HV_HYP_PAGE_SHIFT;
> +hypercall_msr.enable = 1;
> +hypercall_msr.guest_physical_address = mfn;
> +wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
> +} else {
> +mfn = hypercall_msr.guest_physical_address;
> +}
> +
> +set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
> +}
> +
> +static void __init setup(void)
> +{
> +setup_hypercall_page();
> +}
> +
> +static const struct hypervisor_ops ops = {
> +.name = "Hyper-V",
> +.setup = setup,
> +};
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/asm-x86/guest/hyperv-tlfs.h b/xen/include/asm-
> 

[Xen-devel] [xen-unstable-smoke test] 146401: tolerable all pass - PUSHED

2020-01-22 Thread osstest service owner
flight 146401 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146401/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  021cc01ecac111be3301ad33ff5cda4543ca8b92
baseline version:
 xen  c081788f80f828a021bb192411da05133bd13957

Last test of basis   146396  2020-01-22 19:02:47 Z0 days
Testing same since   146401  2020-01-22 23:00:35 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c081788f80..021cc01eca  021cc01ecac111be3301ad33ff5cda4543ca8b92 -> smoke

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [ovmf test] 146395: regressions - FAIL

2020-01-22 Thread osstest service owner
flight 146395 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146395/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 145767
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 
145767

version targeted for testing:
 ovmf 9a1f14ad721bbcd833ec5108944c44a502392f03
baseline version:
 ovmf 70911f1f4aee0366b6122f2b90d367ec0f066beb

Last test of basis   145767  2020-01-08 00:39:09 Z   15 days
Failing since145774  2020-01-08 02:50:20 Z   14 days   54 attempts
Testing same since   146346  2020-01-21 04:31:27 Z1 days6 attempts


People who touched revisions under test:
  Aaron Li 
  Albecki, Mateusz 
  Ard Biesheuvel 
  Ashish Singhal 
  Bob Feng 
  Brian R Haug 
  Eric Dong 
  Fan, ZhijuX 
  Hao A Wu 
  Jason Voelz 
  Jian J Wang 
  Krzysztof Koch 
  Laszlo Ersek 
  Leif Lindholm 
  Li, Aaron 
  Liming Gao 
  Mateusz Albecki 
  Michael D Kinney 
  Michael Kubacki 
  Pavana.K 
  Philippe Mathieu-Daud? 
  Philippe Mathieu-Daude 
  Siyuan Fu 
  Siyuan, Fu 
  Sudipto Paul 
  Vitaly Cheptsov 
  Vitaly Cheptsov via Groups.Io 
  Wei6 Xu 
  Xu, Wei6 
  Zhiguang Liu 
  Zhiju.Fan 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1164 lines long.)

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] libvirt support for scheduler credit2

2020-01-22 Thread Kevin Stange
On 1/22/20 12:56 PM, Jim Fehlig wrote:
> On 1/21/20 10:05 AM, Jürgen Groß wrote:
>> On 21.01.20 17:56, Kevin Stange wrote:
>>> Hi,
>>>
>>> I looked around a bit and wasn't able to find a good answer to this, so
>>> George suggested I ask here.
>>
>> Cc-ing Jim.
>>
>>>
>>> Since Xen 4.12, credit2 is the default scheduler, but at least as of
>>> libvirt 5.1.0 virsh doesn't appear to understand credit2 and produces
>>> this sort of output:
> 
> You would see the same with libvirt.git master, sorry. ATM the libvirt libxl 
> driver is unaware of the credit2 scheduler. Hmm, as I recall Dario was going 
> to 
> provide a patch for libvirt :-). But he is quite busy so it will have to be 
> added to my very long todo list.

Sorry to hear that's the case.  Due to my orchestration system I'll have
to hang on to credit a while longer.  Thanks for clarifying!

-- 
Kevin Stange
Chief Technology Officer
Steadfast | Managed Infrastructure, Datacenter and Cloud Services
800 S Wells, Suite 190 | Chicago, IL 60607
312.602.2689 X203 | Fax: 312.602.2688
ke...@steadfast.net | www.steadfast.net

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [qemu-mainline test] 146400: regressions - FAIL

2020-01-22 Thread osstest service owner
flight 146400 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146400/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm   6 xen-buildfail REGR. vs. 144861
 build-arm64   6 xen-buildfail REGR. vs. 144861
 build-armhf   6 xen-buildfail REGR. vs. 144861
 build-i386-xsm6 xen-buildfail REGR. vs. 144861
 build-amd64-xsm   6 xen-buildfail REGR. vs. 144861
 build-i3866 xen-buildfail REGR. vs. 144861
 build-amd64   6 xen-buildfail REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvshim1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1) 

Re: [Xen-devel] [PATCH v2 1/4] x86/microcode: Improve documentation and parsing for ucode=

2020-01-22 Thread Eslam Elnikety

On 21.01.20 21:51, Eslam Elnikety wrote:

On 21.01.20 10:27, Jan Beulich wrote:

On 21.01.2020 00:50, Eslam Elnikety wrote:

On 20.01.20 09:42, Jan Beulich wrote:

On 17.01.2020 20:06, Eslam Elnikety wrote:

On 20.12.19 10:53, Jan Beulich wrote:

On 19.12.2019 22:08, Eslam Elnikety wrote:

On 18.12.19 12:49, Jan Beulich wrote:

On 18.12.2019 02:32, Eslam Elnikety wrote:
Decouple the microcode referencing mechanism when using GRUB to 
that
when using EFI. This allows us to avoid the "unspecified 
effect" of

using ` | scan` along xen.efi.


I guess "unspecified effect" in the doc was pretty pointless - such
options have been ignored before; in fact ...


With that, Xen can explicitly
ignore those named options when using EFI.


... I don't see things becoming any more explicit (not even parsing
the options was quite explicit to me).



I agree that those options have been ignored so far in the case 
of EFI.

The documentation contradicts that however. The documentation:
* says  has unspecified effect.
* does not mention anything about scan being ignored.

With this patch, it is explicit in code and in documentation that 
both

options are ignored in case of EFI.


But isn't it rather that ucode=scan could (and hence perhaps should)
also have its value on EFI?



I do not see "ucode=scan" applicable in anyway in the case of EFI. In
EFI, there are not "modules" to scan through, but rather the efi 
config

points exactly to the microcode blob.


What would be wrong with the EFI code to also inspect whatever has been
specified with ramdisk= if there was no ucode= ?


I see, interesting. This sounds like a legitimate use case indeed. I
wonder, would I be breaking anything if I simply allow the existing code
that iterates over modules to kick in when ucode=scan irrespective of
EFI or legacy boot?


I don't think so, no, but it would need double checking (and
mentioning in the description and/or documentation).


Also, it seems to me that the ucode= specified by
efi.cfg would take precedence over the ucode=scan. Do not you think?


I guess we need to settle on what we want to take precedence and
then make sure code also behaves this way. But yes, I think ucode=
from the .cfg should supersede ucode=scan on the command line. A
possibly useful adjustment to this might be to distinguish whether
the ucode=scan was in a specific .cfg section while the ucode= was
in [global] (i.e. sort of a default), in which case maybe the
ucode=scan should win. Thoughts?

Jan



I think any ucode= in the EFI .cfg ought to supersede the ucode=scan. 
The semantics are simpler in this case, rather than having to worry 
about where exactly the ucode= was specified in the EFI .cfg. With that, 
an administrator would default to using ucode=scan on the commandline to 
load the ramdisk microcode, and a ucode= in .cfg would be an explicit 
signal to use different microcode.


Eslam



So that happens to be the existing behaviour already :)

I was under the impression that ucode=scan was simply ignored under EFI. 
That's not the case. It is only ignored if ucode= is specified 
in the EFI config. In other words, what we had just discussed above is 
already the case. This clearly needs spelling out in the documentation, 
which is the first patch in the "x86/microcode: Improve documentation 
and code" series I have sent just now.


Cheers,
Eslam






___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v1 4/4] x86/microcode: use const qualifier for microcode buffer

2020-01-22 Thread Eslam Elnikety
The buffer holding the microcode bits should be marked as const.

Signed-off-by: Eslam Elnikety 
Acked-by: Jan Beulich 
---
 xen/arch/x86/microcode.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/microcode.c b/xen/arch/x86/microcode.c
index a662a7f438..0639551173 100644
--- a/xen/arch/x86/microcode.c
+++ b/xen/arch/x86/microcode.c
@@ -88,7 +88,7 @@ static enum {
  * memory.
  */
 struct ucode_mod_blob {
-void *data;
+const void *data;
 size_t size;
 };
 
@@ -753,7 +753,7 @@ int microcode_update_one(bool start_update)
 int __init early_microcode_update_cpu(void)
 {
 int rc = 0;
-void *data = NULL;
+const void *data = NULL;
 size_t len;
 struct microcode_patch *patch;
 
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v1 1/4] x86/microcode: Improve documentation for ucode=

2020-01-22 Thread Eslam Elnikety
Specify applicability and the default value. Also state that, in case of
EFI, the microcode update blob specified in the EFI cfg takes precedence
over `ucode=scan`, if the latter is specified on Xen commend line.

No functional changes.

Signed-off-by: Eslam Elnikety 
---
 docs/misc/xen-command-line.pandoc | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/docs/misc/xen-command-line.pandoc 
b/docs/misc/xen-command-line.pandoc
index 981a5e2381..ebec6d387e 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2134,7 +2134,12 @@ logic applies:
 ### ucode (x86)
 > `= List of [  | scan=, nmi= ]`
 
-Specify how and where to find CPU microcode update blob.
+Applicability: x86
+Default: `nmi`
+
+Controls for CPU microcode loading. For early loading, this parameter can
+specify how and where to find the microcode update blob. For late loading,
+this parameter specifies if the update happens within a NMI handler.
 
 'integer' specifies the CPU microcode update blob module index. When positive,
 this specifies the n-th module (in the GrUB entry, zero based) to be used
@@ -2152,6 +2157,7 @@ image that contains microcode. Depending on the platform 
the blob with the
 microcode in the cpio name space must be:
   - on Intel: kernel/x86/microcode/GenuineIntel.bin
   - on AMD  : kernel/x86/microcode/AuthenticAMD.bin
+If EFI boot, the `ucode=` config takes precendence over `scan`.
 
 'nmi' determines late loading is performed in NMI handler or just in
 stop_machine context. In NMI handler, even NMIs are blocked, which is
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v1 0/4] x86/microcode: Improve documentation and code

2020-01-22 Thread Eslam Elnikety
This patch series introduces improvements to the existing documentation
and code of x86/microcode. Patches 1 and 2 improve the documentation and
parsing for `ucode=`. Patches 3 and 4 introduce nits/improvements to the
microcode early loading code.

Some (variant of the) patches have been sent earlier under "Support builtin CPU
microcode" as those patches were motivated by discussions following the initial
submission of the builtin microcode. On a second thought, such improvements
should have gone independently. So here it goes. (Those improvements will be
dropped from the builtin microcode series as I submit its v3).

Changes since submitted under [v2] x86/microcode: Support builtin CPU microcode
- Patch 1: New / explicitly document the current behaviour of ucode=scan with 
EFI
- Patch 2: Fix index data type, drop unwelcomed function rename
- Patch 3 and 4: Added Acked-by, otherwise as before

Eslam Elnikety (4):
  x86/microcode: Improve documentation for ucode=
  x86/microcode: Improve parsing for ucode=
  x86/microcode: avoid unnecessary xmalloc/memcpy of ucode data
  x86/microcode: use const qualifier for microcode buffer

 docs/misc/xen-command-line.pandoc | 14 --
 xen/arch/x86/microcode.c  | 74 +++
 2 files changed, 37 insertions(+), 51 deletions(-)

-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v1 3/4] x86/microcode: avoid unnecessary xmalloc/memcpy of ucode data

2020-01-22 Thread Eslam Elnikety
When using `ucode=scan` and if a matching module is found, the microcode
payload is maintained in an xmalloc()'d region. This is unnecessary since
the bootmap would just do. Remove the xmalloc and xfree on the microcode
module scan path.

This commit also does away with the restriction on the microcode module
size limit. The concern that a large microcode module would consume too
much memory preventing guests launch is misplaced since this is all the
init path. While having such safeguards is valuable, this should apply
across the board for all early/late microcode loading. Having it just on
the `scan` path is confusing.

Looking forward, we are a bit closer (i.e., one xmalloc down) to pulling
the early microcode loading of the BSP a bit earlier in the early boot
process. This commit is the low hanging fruit. There is still a sizable
amount of work to get there as there are still a handful of xmalloc in
microcode_{amd,intel}.c.

First, there are xmallocs on the path of finding a matching microcode
update. Similar to the commit at hand, searching through the microcode
blob can be done on the already present buffer with no need to xmalloc
any further. Even better, do the filtering in microcode.c before
requesting the microcode update on all CPUs. The latter requires careful
restructuring and exposing the arch-specific logic for iterating over
patches and declaring a match.

Second, there are xmallocs for the microcode cache. Here, we would need
to ensure that the cache corresponding to the BSP gets xmalloc()'d and
populated after the fact.

Signed-off-by: Eslam Elnikety 
Acked-by: Jan Beulich 
---
 xen/arch/x86/microcode.c | 32 
 1 file changed, 4 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/microcode.c b/xen/arch/x86/microcode.c
index e1d98fa55e..a662a7f438 100644
--- a/xen/arch/x86/microcode.c
+++ b/xen/arch/x86/microcode.c
@@ -141,11 +141,6 @@ static int __init parse_ucode(const char *s)
 }
 custom_param("ucode", parse_ucode);
 
-/*
- * 8MB ought to be enough.
- */
-#define MAX_EARLY_CPIO_MICROCODE (8 << 20)
-
 void __init microcode_scan_module(
 unsigned long *module_map,
 const multiboot_info_t *mbi)
@@ -190,31 +185,12 @@ void __init microcode_scan_module(
 cd = find_cpio_data(p, _blob_start, _blob_size,  /* ignore */);
 if ( cd.data )
 {
-/*
- * This is an arbitrary check - it would be sad if the blob
- * consumed most of the memory and did not allow guests
- * to launch.
- */
-if ( cd.size > MAX_EARLY_CPIO_MICROCODE )
-{
-printk("Multiboot %d microcode payload too big! (%ld, we 
can do %d)\n",
-   i, cd.size, MAX_EARLY_CPIO_MICROCODE);
-goto err;
-}
-ucode_blob.size = cd.size;
-ucode_blob.data = xmalloc_bytes(cd.size);
-if ( !ucode_blob.data )
-cd.data = NULL;
-else
-memcpy(ucode_blob.data, cd.data, cd.size);
+ucode_blob.size = cd.size;
+ucode_blob.data = cd.data;
+break;
 }
 bootstrap_map(NULL);
-if ( cd.data )
-break;
 }
-return;
-err:
-bootstrap_map(NULL);
 }
 void __init microcode_grab_module(
 unsigned long *module_map,
@@ -734,7 +710,7 @@ static int __init microcode_init(void)
  */
 if ( ucode_blob.size )
 {
-xfree(ucode_blob.data);
+bootstrap_map(NULL);
 ucode_blob.size = 0;
 ucode_blob.data = NULL;
 }
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v1 2/4] x86/microcode: Improve parsing for ucode=

2020-01-22 Thread Eslam Elnikety
Decouple the microcode indexing mechanism when using GRUB to that
when using EFI. This allows us to avoid the "unspecified effect" of
using `` when booting via EFI. With that, Xen can explicitly
ignore that option when using EFI. This is the only functinal change
this commit introduces. Update the command line documentation for
consistency. As an added benefit, the 'parse_ucode' logic becomes
independent of GRUB vs. EFI.

While at it, drop the leading comment for parse_ucode. No practical
use for it given this commit.

Signed-off-by: Eslam Elnikety 
---
 docs/misc/xen-command-line.pandoc |  6 ++---
 xen/arch/x86/microcode.c  | 38 +--
 2 files changed, 24 insertions(+), 20 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc 
b/docs/misc/xen-command-line.pandoc
index ebec6d387e..821b9281a1 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2147,9 +2147,9 @@ for updating CPU micrcode. When negative, counting starts 
at the end of
 the modules in the GrUB entry (so with the blob commonly being last,
 one could specify `ucode=-1`). Note that the value of zero is not valid
 here (entry zero, i.e. the first module, is always the Dom0 kernel
-image). Note further that use of this option has an unspecified effect
-when used with xen.efi (there the concept of modules doesn't exist, and
-the blob gets specified via the `ucode=` config file/section
+image). This option should be used only with legacy boot, as it is explicitly
+ignored in EFI boot. When booting via EFI, the microcode update blob for
+early loading can be specified via the `ucode=` config file/section
 entry; see [EFI configuration file description](efi.html)).
 
 'scan' instructs the hypervisor to scan the multiboot images for an cpio
diff --git a/xen/arch/x86/microcode.c b/xen/arch/x86/microcode.c
index 6ced293d88..e1d98fa55e 100644
--- a/xen/arch/x86/microcode.c
+++ b/xen/arch/x86/microcode.c
@@ -35,6 +35,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -61,6 +62,7 @@
 static module_t __initdata ucode_mod;
 static signed int __initdata ucode_mod_idx;
 static bool_t __initdata ucode_mod_forced;
+static unsigned int __initdata ucode_mod_efi_idx;
 static unsigned int nr_cores;
 
 /*
@@ -105,15 +107,10 @@ static struct microcode_patch *microcode_cache;
 
 void __init microcode_set_module(unsigned int idx)
 {
-ucode_mod_idx = idx;
+ucode_mod_efi_idx = idx;
 ucode_mod_forced = 1;
 }
 
-/*
- * The format is '[|scan=, nmi=]'. Both options are
- * optional. If the EFI has forced which of the multiboot payloads is to be
- * used, only nmi= is parsed.
- */
 static int __init parse_ucode(const char *s)
 {
 const char *ss;
@@ -126,18 +123,15 @@ static int __init parse_ucode(const char *s)
 
 if ( (val = parse_boolean("nmi", s, ss)) >= 0 )
 ucode_in_nmi = val;
-else if ( !ucode_mod_forced ) /* Not forced by EFI */
+else if ( (val = parse_boolean("scan", s, ss)) >= 0 )
+ucode_scan = val;
+else
 {
-if ( (val = parse_boolean("scan", s, ss)) >= 0 )
-ucode_scan = val;
-else
-{
-const char *q;
-
-ucode_mod_idx = simple_strtol(s, , 0);
-if ( q != ss )
-rc = -EINVAL;
-}
+const char *q;
+
+ucode_mod_idx = simple_strtol(s, , 0);
+if ( q != ss )
+rc = -EINVAL;
 }
 
 s = ss + 1;
@@ -228,6 +222,16 @@ void __init microcode_grab_module(
 {
 module_t *mod = (module_t *)__va(mbi->mods_addr);
 
+if ( efi_enabled(EFI_BOOT) )
+{
+if ( ucode_mod_forced ) /* Microcode specified by EFI */
+{
+ucode_mod = mod[ucode_mod_efi_idx];
+return;
+}
+goto scan;
+}
+
 if ( ucode_mod_idx < 0 )
 ucode_mod_idx += mbi->mods_count;
 if ( ucode_mod_idx <= 0 || ucode_mod_idx >= mbi->mods_count ||
-- 
2.17.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [xen-unstable-smoke test] 146396: tolerable all pass - PUSHED

2020-01-22 Thread osstest service owner
flight 146396 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146396/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  c081788f80f828a021bb192411da05133bd13957
baseline version:
 xen  a4d457fd59f4ebfb524aec82cb6a3030087914ca

Last test of basis   146390  2020-01-22 16:00:25 Z0 days
Testing same since   146396  2020-01-22 19:02:47 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Juergen Gross 
  Julien Grall 
  Meng Xu 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a4d457fd59..c081788f80  c081788f80f828a021bb192411da05133bd13957 -> smoke

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 7/7] x86/hyperv: setup VP assist page

2020-01-22 Thread Andrew Cooper
On 22/01/2020 20:23, Wei Liu wrote:
> diff --git a/xen/arch/x86/guest/hyperv/hyperv.c 
> b/xen/arch/x86/guest/hyperv/hyperv.c
> index 085e646dc6..89a8f316b2 100644
> --- a/xen/arch/x86/guest/hyperv/hyperv.c
> +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> @@ -32,6 +32,7 @@
>  struct ms_hyperv_info __read_mostly ms_hyperv;
>  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_pcpu_input_arg);
>  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> +DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);

You'll get fewer holes in the percpu data area by moving this
declaration up by one.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 3/7] x86/hyperv: provide Hyper-V hypercall functions

2020-01-22 Thread Andrew Cooper
On 22/01/2020 20:23, Wei Liu wrote:
> These functions will be used later to make hypercalls to Hyper-V.
>
> Signed-off-by: Wei Liu 

After some experimentation,

diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index cbc5701214..3708a60b5c 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -329,6 +329,8 @@ SECTIONS
   efi = .;
 #endif
 
+  hv_hcall_page = ABSOLUTE(0x82d0bfffe000);
+
   /* Sections to be discarded */
   /DISCARD/ : {
    *(.exit.text)

in the linker script lets direct calls work correctly:

82d080637935:   b9 01 00 00 40  mov    $0x4001,%ecx
82d08063793a:   0f 30   wrmsr 
82d08063793c:   ba 21 03 00 00  mov    $0x321,%edx
82d080637941:   bf 01 00 00 00  mov    $0x1,%edi
82d080637946:   e8 ac 4f c7 ff  callq  82d0802ac8f7
<__set_fixmap_x>
82d08063794b:   41 b8 00 00 00 00   mov    $0x0,%r8d
82d080637951:   b9 ff ff 00 00  mov    $0x,%ecx
82d080637956:   ba 00 00 00 00  mov    $0x0,%edx
82d08063795b:   e8 a0 66 9c 3f  callq  82d0bfffe000

82d080637960:   66 83 f8 02 cmp    $0x2,%ax

but it does throw:

Difference at .init:00032edf is 0xc000 (expected 0x4000)
Difference at .init:00032edf is 0xc000 (expected 0x4000)

as a diagnostic presumably from the final link  (both with a standard
Debian 2.28 binutils, and upstream 2.33 build).  I'm not sure what its
trying to complain about, as both xen.gz and xen.efi have correctly
generated code.

Depending on whether they are benign or not, a linker-friendly
fix_to_virt() should be all we need to keep these strictly as direct calls.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [qemu-mainline test] 146397: regressions - FAIL

2020-01-22 Thread osstest service owner
flight 146397 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146397/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm   6 xen-buildfail REGR. vs. 144861
 build-arm64   6 xen-buildfail REGR. vs. 144861
 build-armhf   6 xen-buildfail REGR. vs. 144861
 build-i386-xsm6 xen-buildfail REGR. vs. 144861
 build-amd64-xsm   6 xen-buildfail REGR. vs. 144861
 build-i3866 xen-buildfail REGR. vs. 144861
 build-amd64   6 xen-buildfail REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-pvshim 1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)  blocked n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   

Re: [Xen-devel] [PATCH v4 2/7] x86/hyperv: setup hypercall page

2020-01-22 Thread Andrew Cooper
On 22/01/2020 20:23, Wei Liu wrote:
> Use the top-most addressable page for that purpose. Adjust e820 code
> accordingly.
>
> We also need to register Xen's guest OS ID to Hyper-V. Use 0x300 as the
> OS type.
>
> Signed-off-by: Wei Liu 
> ---
> XXX the decision on Xen's vendor ID is pending.

Presumably this is pending a published update to the TLFS?  (And I
presume using 0x8088 is out of the question?  That is an X in the bottom
byte, not a reference to an 8 bit microprocessor.)

> diff --git a/xen/arch/x86/e820.c b/xen/arch/x86/e820.c
> index 082f9928a1..5a4ef27a0b 100644
> --- a/xen/arch/x86/e820.c
> +++ b/xen/arch/x86/e820.c
> @@ -36,6 +36,22 @@ boolean_param("e820-verbose", e820_verbose);
> @@ -357,6 +373,21 @@ static unsigned long __init find_max_pfn(void)
>  max_pfn = end;
>  }
>  
> +#ifdef CONFIG_HYPERV_GUEST
> +{
> + /*
> +  * We reserve the top-most page for hypercall page. Adjust
> +  * max_pfn if necessary.

It might be worth leaving a "TODO: Better algorithm/guess?" here.

> +  */
> +unsigned int phys_bits = find_phys_addr_bits();
> +unsigned long hcall_pfn =
> +  ((1ull << phys_bits) - 1) >> PAGE_SHIFT;

(1ull << (phys_bits - PAGE_SHIFT)) - 1 is equivalent, and doesn't
require a right shift.  I don't know if the compiler is smart enough to
make this optimisation automatically.

> +
> +if ( max_pfn >= hcall_pfn )
> +  max_pfn = hcall_pfn - 1;

Indentation looks weird.

> @@ -446,13 +477,7 @@ static uint64_t __init mtrr_top_of_ram(void)
>   return 0;
>  
>  /* Find the physical address size for this CPU. */
> -eax = cpuid_eax(0x8000);
> -if ( (eax >> 16) == 0x8000 && eax >= 0x8008 )
> -{
> -phys_bits = (uint8_t)cpuid_eax(0x8008);
> -if ( phys_bits > PADDR_BITS )
> -phys_bits = PADDR_BITS;
> -}
> +phys_bits = find_phys_addr_bits();
>  addr_mask = ((1ull << phys_bits) - 1) & ~((1ull << 12) - 1);

Note for whomever is next doing cleanup in this area.  This wants to be
& PAGE_MASK.

> diff --git a/xen/arch/x86/guest/hyperv/hyperv.c 
> b/xen/arch/x86/guest/hyperv/hyperv.c
> index 8d38313d7a..f986c1a805 100644
> --- a/xen/arch/x86/guest/hyperv/hyperv.c
> +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> @@ -72,6 +82,43 @@ const struct hypervisor_ops *__init hyperv_probe(void)
>  return 
>  }
>  
> +static void __init setup_hypercall_page(void)
> +{
> +union hv_x64_msr_hypercall_contents hypercall_msr;
> +union hv_guest_os_id guest_id;
> +unsigned long mfn;
> +
> +rdmsrl(HV_X64_MSR_GUEST_OS_ID, guest_id.raw);
> +if ( !guest_id.raw )
> +{
> +guest_id.raw = generate_guest_id();
> +wrmsrl(HV_X64_MSR_GUEST_OS_ID, guest_id.raw);
> +}
> +
> +rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
> +if ( !hypercall_msr.enable )
> +{
> +mfn = ((1ull << paddr_bits) - 1) >> HV_HYP_PAGE_SHIFT;
> +hypercall_msr.enable = 1;
> +hypercall_msr.guest_physical_address = mfn;
> +wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);

Is it worth reading back, and BUG() if it is different?  It will be a
more obvious failure than hypercalls disappearing mysteriously.

> +} else {
> +mfn = hypercall_msr.guest_physical_address;
> +}

Style.

Otherwise, LGTM.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [Vote] For Xen Project Code of Conduct (deadline March 31st)

2020-01-22 Thread Stefano Stabellini
On Fri, 17 Jan 2020, Lars Kurth wrote:
> Hi all,
> 
> for some time now we have been discussing the Xen Project Code of
> Conduct. The most recent set of feedback has been primarily around
> minor language issues (US vs UL English, etc.), which indicates to me 
> that the proposal is ready to be voted on
> 
> The final version which addresses all the latest minor feedback can be
> found at 
> http://xenbits.xenproject.org/gitweb/?p=people/larsk/code-of-conduct.git;a=tree;h=refs/heads/CoC-v5
>  
> 
> It should be read in the following order
> * 
> http://xenbits.xenproject.org/gitweb/?p=people/larsk/code-of-conduct.git;a=blob;f=code-of-conduct.md
>  
> * 
> http://xenbits.xenproject.org/gitweb/?p=people/larsk/code-of-conduct.git;a=blob;f=communication-guide.md
> * 
> http://xenbits.xenproject.org/gitweb/?p=people/larsk/code-of-conduct.git;a=blob;f=code-review-guide.md
> * 
> http://xenbits.xenproject.org/gitweb/?p=people/larsk/code-of-conduct.git;a=blob;f=communication-practice.md
>  
> * 
> http://xenbits.xenproject.org/gitweb/?p=people/larsk/code-of-conduct.git;a=blob;f=resolving-disagreement.md
>  
> 
> In accordance with https://xenproject.org/developers/governance/, I need the
> leadership teams of the three mature projects: the Hypervisor, the XAPI
> project and the Windows PV Driver project to vote on this proposal.
> 
> The specific voting rules in this case are outlined in section
> https://www.xenproject.org/governance.html#project-decisions 
> 
> People allowed to vote on behalf of the Hypervisor project are:
> Julien Grall, Andy Cooper, George Dunlap, Ian Jackson, Jan Beulich, Konrad R
> Wilk, Stefano Stabellini, Wei Liu and Paul Durrant (as Release Manager).
> 
> People allowed to vote on behalf of the XAPI project are:
> Chandrika Srinivasan, Christian Lindig, Konstantina Chremmou,
> Rob Hoes, Zhang Li
> 
> People allowed to vote on behalf of the Windows PV Driver Project are:
> Paul Durrant, Ben Chalmers, Owen Smith
> 
> I propose to tally the votes after March 31st. You can reply via
> +1: for proposal
> -1: against proposal
> in public or private.

+1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [RFC XEN PATCH 00/23] xen: beginning support for RISC-V

2020-01-22 Thread Stefano Stabellini
On Wed, 22 Jan 2020, Andrew Cooper wrote:
> > My big questions are:
> > Does the Xen project have interest in RISC-V?
> 
> There is very large downstream interest in RISC-V.  So a definite yes.

Definite Yes from me too


> > What can be done to make the RISC-V port as upstreamable as
> > possible?
> > Any major pitfalls?
> >
> > It would be great to hear all of your feedback.
> 
> Both RISC-V and Power9 are frequently requested things, and both suffer
> from the fact that, while we as a community would like them, the
> upstream intersection of "people who know Xen" and "people who know
> enough arch $X to do an initial port" is 0.
> 
> This series clearly demonstrates a change in the status quo, and I think
> a lot of people will be happy.
> 
> To get RISC-V to being fully supported, we will ultimately need to get
> hardware into the CI system, and an easy way for developers to test
> changes.  Do you have any thoughts on production RISC-V hardware
> (ideally server form factor) for the CI system, and/or dev boards which
> might be available fairly cheaply?

My understanding is that virtualization development for RISC-V is done
on QEMU right now (which could still be hooked into the CI system if
somebody wanted to do the work I think.)___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 1/7] x86: provide executable fixmap facility

2020-01-22 Thread Andrew Cooper
On 22/01/2020 20:23, Wei Liu wrote:
> diff --git a/xen/arch/x86/boot/x86_64.S b/xen/arch/x86/boot/x86_64.S
> index 1cbf5acdfb..605d01f1dd 100644
> --- a/xen/arch/x86/boot/x86_64.S
> +++ b/xen/arch/x86/boot/x86_64.S
> @@ -85,7 +85,15 @@ GLOBAL(l2_directmap)
>   * 4k page.
>   */

Adjust this comment as well?

> diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
> index d0cfbb70a8..4fa56ea0a9 100644
> --- a/xen/include/asm-x86/config.h
> +++ b/xen/include/asm-x86/config.h
> @@ -218,7 +218,7 @@ extern unsigned char boot_edid_info[128];
>  /* Slot 261: high read-only compat machine-to-phys conversion table (1GB). */
>  #define HIRO_COMPAT_MPT_VIRT_START RDWR_COMPAT_MPT_VIRT_END
>  #define HIRO_COMPAT_MPT_VIRT_END (HIRO_COMPAT_MPT_VIRT_START + GB(1))
> -/* Slot 261: xen text, static data and bss (1GB). */
> +/* Slot 261: xen text, static data, bss and executable fixmap (1GB). */

And per-cpu stubs.  Might as well fix the comment while editing.

>  #define XEN_VIRT_START  (HIRO_COMPAT_MPT_VIRT_END)
>  #define XEN_VIRT_END(XEN_VIRT_START + GB(1))
>  
> diff --git a/xen/include/asm-x86/fixmap.h b/xen/include/asm-x86/fixmap.h
> index 9fb2f47946..c2a9d2b50a 100644
> --- a/xen/include/asm-x86/fixmap.h
> +++ b/xen/include/asm-x86/fixmap.h
> @@ -15,6 +15,9 @@
>  #include 
>  
>  #define FIXADDR_TOP (VMAP_VIRT_END - PAGE_SIZE)
> +#define FIXADDR_X_TOP (XEN_VIRT_END - PAGE_SIZE)
> +/* This constant is derived from enum fixed_addresses_x below */
> +#define MAX_FIXADDR_X_SIZE (2 << PAGE_SHIFT)

Answering slightly out of order, for clarity:

FIXADDR_X_SIZE should be 0 or 1 by the end of this patch.

As for MAX_FIXADDR_X_SIZE, how about simply
IS_ENABLED(CONFIG_HYPERV_GUEST) ?  That should work, even in a linker
script.

Somewhere, there should be a BUILD_BUG_ON() cross-checking
MAX_FIXADDR_X_SIZE and __end_of_fixed_addresses_x.  We don't yet have a
build_assertions() in x86/mm.c, so I guess now is the time to gain one.

>  
>  #ifndef __ASSEMBLY__
>  
> @@ -89,6 +92,31 @@ static inline unsigned long virt_to_fix(const unsigned 
> long vaddr)
>  return __virt_to_fix(vaddr);
>  }
>  
> +enum fixed_addresses_x {
> +/* Index 0 is reserved since fix_x_to_virt(0) == FIXADDR_X_TOP. */
> +FIX_X_RESERVED,
> +#ifdef CONFIG_HYPERV_GUEST
> +FIX_X_HYPERV_HCALL,
> +#endif
> +__end_of_fixed_addresses_x
> +};
> +
> +#define FIXADDR_X_SIZE  (__end_of_fixed_addresses_x << PAGE_SHIFT)

-1, seeing as 0 is reserved.

> +#define FIXADDR_X_START (FIXADDR_X_TOP - FIXADDR_X_SIZE)
> +
> +extern void __set_fixmap_x(
> +enum fixed_addresses_x idx, unsigned long mfn, unsigned long flags);
> +
> +#define set_fixmap_x(idx, phys) \
> +__set_fixmap_x(idx, (phys)>>PAGE_SHIFT, PAGE_HYPERVISOR_RX | 
> MAP_SMALL_PAGES)
> +
> +#define clear_fixmap_x(idx) __set_fixmap_x(idx, 0, 0)
> +
> +#define __fix_x_to_virt(x) (FIXADDR_X_TOP - ((x) << PAGE_SHIFT))
> +#define __virt_to_fix_x(x) ((FIXADDR_X_TOP - ((x)_MASK)) >> PAGE_SHIFT)

The _MASK is redundant, given the following shift, but can't be
optimised out because of its effect on the high 12 bits of the address
as well.  These helpers aren't safe to wild inputs, even with the
_MASK, so I'd just drop it.

Otherwise, LGTM.  There is some cleanup we ought to do to the fixmap
infrastructure, but that isn't appropriate for this series.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 5/7] x86/hyperv: provide percpu hypercall input page

2020-01-22 Thread Wei Liu
Hyper-V's input / output argument must be 8 bytes aligned an not cross
page boundary. One way to satisfy those requirements is to use percpu
page.

For the foreseeable future we only need to provide input for TLB
and APIC hypercalls, so skip setting up an output page.

We will also need to provide an ap_setup hook for secondary cpus to
setup its own input page.

Signed-off-by: Wei Liu 
---
v4:
1. Change wording in commit message
2. Prevent leak
3. Introduce a private header

v3:
1. Use xenheap page instead
2. Drop page tracking structure
3. Drop Paul's review tag
---
 xen/arch/x86/guest/hyperv/hyperv.c  | 25 +
 xen/arch/x86/guest/hyperv/private.h | 29 +
 2 files changed, 54 insertions(+)
 create mode 100644 xen/arch/x86/guest/hyperv/private.h

diff --git a/xen/arch/x86/guest/hyperv/hyperv.c 
b/xen/arch/x86/guest/hyperv/hyperv.c
index 536ce0d0dd..c5195af948 100644
--- a/xen/arch/x86/guest/hyperv/hyperv.c
+++ b/xen/arch/x86/guest/hyperv/hyperv.c
@@ -27,7 +27,10 @@
 #include 
 #include 
 
+#include "private.h"
+
 struct ms_hyperv_info __read_mostly ms_hyperv;
+DEFINE_PER_CPU_READ_MOSTLY(void *, hv_pcpu_input_arg);
 
 static uint64_t generate_guest_id(void)
 {
@@ -119,14 +122,36 @@ static void __init setup_hypercall_page(void)
 }
 }
 
+static void setup_hypercall_pcpu_arg(void)
+{
+void *mapping;
+
+if ( this_cpu(hv_pcpu_input_arg) )
+return;
+
+mapping = alloc_xenheap_page();
+if ( !mapping )
+panic("Failed to allocate hypercall input page for CPU%u\n",
+  smp_processor_id());
+
+this_cpu(hv_pcpu_input_arg) = mapping;
+}
+
 static void __init setup(void)
 {
 setup_hypercall_page();
+setup_hypercall_pcpu_arg();
+}
+
+static void ap_setup(void)
+{
+setup_hypercall_pcpu_arg();
 }
 
 static const struct hypervisor_ops ops = {
 .name = "Hyper-V",
 .setup = setup,
+.ap_setup = ap_setup,
 };
 
 /*
diff --git a/xen/arch/x86/guest/hyperv/private.h 
b/xen/arch/x86/guest/hyperv/private.h
new file mode 100644
index 00..b6902b5639
--- /dev/null
+++ b/xen/arch/x86/guest/hyperv/private.h
@@ -0,0 +1,29 @@
+/**
+ * arch/x86/guest/hyperv/private.h
+ *
+ * Definitions / declarations only useful to Hyper-V code.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see .
+ *
+ * Copyright (c) 2020 Microsoft.
+ */
+
+#ifndef __XEN_HYPERV_PRIVIATE_H__
+#define __XEN_HYPERV_PRIVIATE_H__
+
+#include 
+
+DECLARE_PER_CPU(void *, hv_pcpu_input_arg);
+
+#endif /* __XEN_HYPERV_PRIVIATE_H__  */
-- 
2.20.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 7/7] x86/hyperv: setup VP assist page

2020-01-22 Thread Wei Liu
VP assist page is rather important as we need to toggle some bits in it
for efficient nested virtualisation.

Signed-off-by: Wei Liu 
---
v4:
1. Use private.h
2. Prevent leak

v3:
1. Use xenheap page
2. Drop set_vp_assist

v2:
1. Use HV_HYP_PAGE_SHIFT instead
---
 xen/arch/x86/guest/hyperv/hyperv.c  | 26 ++
 xen/arch/x86/guest/hyperv/private.h |  1 +
 2 files changed, 27 insertions(+)

diff --git a/xen/arch/x86/guest/hyperv/hyperv.c 
b/xen/arch/x86/guest/hyperv/hyperv.c
index 085e646dc6..89a8f316b2 100644
--- a/xen/arch/x86/guest/hyperv/hyperv.c
+++ b/xen/arch/x86/guest/hyperv/hyperv.c
@@ -32,6 +32,7 @@
 struct ms_hyperv_info __read_mostly ms_hyperv;
 DEFINE_PER_CPU_READ_MOSTLY(void *, hv_pcpu_input_arg);
 DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
+DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
 
 static uint64_t generate_guest_id(void)
 {
@@ -142,15 +143,40 @@ static void setup_hypercall_pcpu_arg(void)
 this_cpu(hv_vp_index) = vp_index_msr;
 }
 
+static void setup_vp_assist(void)
+{
+void *mapping;
+uint64_t val;
+
+mapping = this_cpu(hv_vp_assist);
+
+if ( !mapping )
+{
+mapping = alloc_xenheap_page();
+if ( !mapping )
+panic("Failed to allocate vp_assist page for CPU%u\n",
+  smp_processor_id());
+
+clear_page(mapping);
+this_cpu(hv_vp_assist) = mapping;
+}
+
+val = (virt_to_mfn(mapping) << HV_HYP_PAGE_SHIFT)
+| HV_X64_MSR_VP_ASSIST_PAGE_ENABLE;
+wrmsrl(HV_X64_MSR_VP_ASSIST_PAGE, val);
+}
+
 static void __init setup(void)
 {
 setup_hypercall_page();
 setup_hypercall_pcpu_arg();
+setup_vp_assist();
 }
 
 static void ap_setup(void)
 {
 setup_hypercall_pcpu_arg();
+setup_vp_assist();
 }
 
 static const struct hypervisor_ops ops = {
diff --git a/xen/arch/x86/guest/hyperv/private.h 
b/xen/arch/x86/guest/hyperv/private.h
index da70990401..af419e9d4b 100644
--- a/xen/arch/x86/guest/hyperv/private.h
+++ b/xen/arch/x86/guest/hyperv/private.h
@@ -26,5 +26,6 @@
 
 DECLARE_PER_CPU(void *, hv_pcpu_input_arg);
 DECLARE_PER_CPU(unsigned int, hv_vp_index);
+DECLARE_PER_CPU(void *, hv_vp_assist);
 
 #endif /* __XEN_HYPERV_PRIVIATE_H__  */
-- 
2.20.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 1/7] x86: provide executable fixmap facility

2020-01-22 Thread Wei Liu
This allows us to set aside some address space for executable mapping.
This fixed map range starts from XEN_VIRT_END so that it is within reach
of the .text section.

Shift the percpu stub range and livepatch range accordingly.

Signed-off-by: Wei Liu 
---
 xen/arch/x86/boot/x86_64.S   | 10 +-
 xen/arch/x86/livepatch.c |  3 ++-
 xen/arch/x86/mm.c|  9 +
 xen/arch/x86/smpboot.c   |  2 +-
 xen/arch/x86/xen.lds.S   |  3 +++
 xen/include/asm-x86/config.h |  2 +-
 xen/include/asm-x86/fixmap.h | 28 
 7 files changed, 53 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/boot/x86_64.S b/xen/arch/x86/boot/x86_64.S
index 1cbf5acdfb..605d01f1dd 100644
--- a/xen/arch/x86/boot/x86_64.S
+++ b/xen/arch/x86/boot/x86_64.S
@@ -85,7 +85,15 @@ GLOBAL(l2_directmap)
  * 4k page.
  */
 GLOBAL(l2_xenmap)
-.fill L2_PAGETABLE_ENTRIES, 8, 0
+idx = 0
+.rept L2_PAGETABLE_ENTRIES
+.if idx == l2_table_offset(FIXADDR_X_TOP - 1)
+.quad sym_offs(l1_fixmap_x) + __PAGE_HYPERVISOR
+.else
+.quad 0
+.endif
+idx = idx + 1
+.endr
 .size l2_xenmap, . - l2_xenmap
 
 /* L2 mapping the fixmap.  Uses 1x 4k page. */
diff --git a/xen/arch/x86/livepatch.c b/xen/arch/x86/livepatch.c
index 2749cbc5cf..513b0f3841 100644
--- a/xen/arch/x86/livepatch.c
+++ b/xen/arch/x86/livepatch.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 
+#include 
 #include 
 #include 
 
@@ -311,7 +312,7 @@ void __init arch_livepatch_init(void)
 void *start, *end;
 
 start = (void *)xen_virt_end;
-end = (void *)(XEN_VIRT_END - NR_CPUS * PAGE_SIZE);
+end = (void *)(XEN_VIRT_END - FIXADDR_X_SIZE - NR_CPUS * PAGE_SIZE);
 
 BUG_ON(end <= start);
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 654190e9e9..aabe1a4c64 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -157,6 +157,8 @@
 /* Mapping of the fixmap space needed early. */
 l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
 l1_fixmap[L1_PAGETABLE_ENTRIES];
+l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
+l1_fixmap_x[L1_PAGETABLE_ENTRIES];
 
 paddr_t __read_mostly mem_hotplug;
 
@@ -5763,6 +5765,13 @@ void __set_fixmap(
 map_pages_to_xen(__fix_to_virt(idx), _mfn(mfn), 1, flags);
 }
 
+void __set_fixmap_x(
+enum fixed_addresses_x idx, unsigned long mfn, unsigned long flags)
+{
+BUG_ON(idx >= __end_of_fixed_addresses_x);
+map_pages_to_xen(__fix_x_to_virt(idx), _mfn(mfn), 1, flags);
+}
+
 void *__init arch_vmap_virt_end(void)
 {
 return fix_to_virt(__end_of_fixed_addresses);
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index c9d1ab4423..2da42fb691 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -640,7 +640,7 @@ unsigned long alloc_stub_page(unsigned int cpu, unsigned 
long *mfn)
 unmap_domain_page(memset(__map_domain_page(pg), 0xcc, PAGE_SIZE));
 }
 
-stub_va = XEN_VIRT_END - (cpu + 1) * PAGE_SIZE;
+stub_va = XEN_VIRT_END - FIXADDR_X_SIZE - (cpu + 1) * PAGE_SIZE;
 if ( map_pages_to_xen(stub_va, page_to_mfn(pg), 1,
   PAGE_HYPERVISOR_RX | MAP_SMALL_PAGES) )
 {
diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index 07c6448dbb..cbc5701214 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -3,6 +3,8 @@
 
 #include 
 #include 
+
+#include 
 #include 
 #undef ENTRY
 #undef ALIGN
@@ -353,6 +355,7 @@ SECTIONS
 }
 
 ASSERT(__2M_rwdata_end <= XEN_VIRT_END - XEN_VIRT_START + __XEN_VIRT_START -
+  MAX_FIXADDR_X_SIZE -
   DIV_ROUND_UP(NR_CPUS, STUBS_PER_PAGE) * PAGE_SIZE,
"Xen image overlaps stubs area")
 
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index d0cfbb70a8..4fa56ea0a9 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -218,7 +218,7 @@ extern unsigned char boot_edid_info[128];
 /* Slot 261: high read-only compat machine-to-phys conversion table (1GB). */
 #define HIRO_COMPAT_MPT_VIRT_START RDWR_COMPAT_MPT_VIRT_END
 #define HIRO_COMPAT_MPT_VIRT_END (HIRO_COMPAT_MPT_VIRT_START + GB(1))
-/* Slot 261: xen text, static data and bss (1GB). */
+/* Slot 261: xen text, static data, bss and executable fixmap (1GB). */
 #define XEN_VIRT_START  (HIRO_COMPAT_MPT_VIRT_END)
 #define XEN_VIRT_END(XEN_VIRT_START + GB(1))
 
diff --git a/xen/include/asm-x86/fixmap.h b/xen/include/asm-x86/fixmap.h
index 9fb2f47946..c2a9d2b50a 100644
--- a/xen/include/asm-x86/fixmap.h
+++ b/xen/include/asm-x86/fixmap.h
@@ -15,6 +15,9 @@
 #include 
 
 #define FIXADDR_TOP (VMAP_VIRT_END - PAGE_SIZE)
+#define FIXADDR_X_TOP (XEN_VIRT_END - PAGE_SIZE)
+/* This constant is derived from enum fixed_addresses_x below */
+#define MAX_FIXADDR_X_SIZE (2 << PAGE_SHIFT)
 
 #ifndef __ASSEMBLY__
 
@@ -89,6 +92,31 @@ static inline unsigned long virt_to_fix(const unsigned long 
vaddr)
 

[Xen-devel] [PATCH v4 0/7] More Hyper-V infrastructure

2020-01-22 Thread Wei Liu
This patch sereis implements several important functionalities to run
Xen on top of Hyper-V.

See individual patches for more details. The first patch adds an
executable fixmap facility, which is x86 generic. The rest of the series
is Hyper-V specific.

I've checked the assembly code as well as putting in a test patch to
make sure the hypercall interface is implemented correctly.

Wei.

Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: Wei Liu 
Cc: Roger Pau Monné 
Cc: Michael Kelley 
Cc: Paul Durrant 

Wei Liu (7):
  x86: provide executable fixmap facility
  x86/hyperv: setup hypercall page
  x86/hyperv: provide Hyper-V hypercall functions
  DO NOT APPLY: x86/hyperv: issue an hypercall
  x86/hyperv: provide percpu hypercall input page
  x86/hyperv: retrieve vp_index from Hyper-V
  x86/hyperv: setup VP assist page

 xen/arch/x86/boot/x86_64.S   |  10 +-
 xen/arch/x86/e820.c  |  41 ++--
 xen/arch/x86/guest/hyperv/hyperv.c   | 119 ++-
 xen/arch/x86/guest/hyperv/private.h  |  31 ++
 xen/arch/x86/livepatch.c |   3 +-
 xen/arch/x86/mm.c|   9 ++
 xen/arch/x86/smpboot.c   |   2 +-
 xen/arch/x86/xen.lds.S   |   3 +
 xen/include/asm-x86/config.h |   2 +-
 xen/include/asm-x86/fixmap.h |  28 ++
 xen/include/asm-x86/guest/hyperv-hcall.h |  98 +++
 xen/include/asm-x86/guest/hyperv-tlfs.h  |   5 +-
 12 files changed, 334 insertions(+), 17 deletions(-)
 create mode 100644 xen/arch/x86/guest/hyperv/private.h
 create mode 100644 xen/include/asm-x86/guest/hyperv-hcall.h

-- 
2.20.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 3/7] x86/hyperv: provide Hyper-V hypercall functions

2020-01-22 Thread Wei Liu
These functions will be used later to make hypercalls to Hyper-V.

Signed-off-by: Wei Liu 
---
v4:
1. Adjust code due to previous patch has changed
2. Address comments
---
 xen/include/asm-x86/guest/hyperv-hcall.h | 98 
 1 file changed, 98 insertions(+)
 create mode 100644 xen/include/asm-x86/guest/hyperv-hcall.h

diff --git a/xen/include/asm-x86/guest/hyperv-hcall.h 
b/xen/include/asm-x86/guest/hyperv-hcall.h
new file mode 100644
index 00..509e57f481
--- /dev/null
+++ b/xen/include/asm-x86/guest/hyperv-hcall.h
@@ -0,0 +1,98 @@
+/**
+ * asm-x86/guest/hyperv-hcall.h
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms and conditions of the GNU General Public
+ * License, version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; If not, see .
+ *
+ * Copyright (c) 2019 Microsoft.
+ */
+
+#ifndef __X86_HYPERV_HCALL_H__
+#define __X86_HYPERV_HCALL_H__
+
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+static inline uint64_t hv_do_hypercall(uint64_t control, paddr_t input_addr,
+   paddr_t output_addr)
+{
+uint64_t status;
+register unsigned long r8 asm("r8") = output_addr;
+
+asm volatile ("INDIRECT_CALL %P[hcall_page]"
+  : "=a" (status), "+c" (control),
+"+d" (input_addr) ASM_CALL_CONSTRAINT
+  : "r" (r8),
+[hcall_page] "p" (fix_x_to_virt(FIX_X_HYPERV_HCALL))
+  : "memory");
+
+return status;
+}
+
+static inline uint64_t hv_do_fast_hypercall(uint16_t code,
+uint64_t input1, uint64_t input2)
+{
+uint64_t status;
+uint64_t control = code | HV_HYPERCALL_FAST_BIT;
+register unsigned long r8 asm("r8") = input2;
+
+asm volatile ("INDIRECT_CALL %P[hcall_page]"
+  : "=a" (status), "+c" (control),
+"+d" (input1) ASM_CALL_CONSTRAINT
+  : "r" (r8),
+[hcall_page] "p" (fix_x_to_virt(FIX_X_HYPERV_HCALL))
+  :);
+
+return status;
+}
+
+static inline uint64_t hv_do_rep_hypercall(uint16_t code, uint16_t rep_count,
+   uint16_t varhead_size,
+   paddr_t input, paddr_t output)
+{
+uint64_t control = code;
+uint64_t status;
+uint16_t rep_comp;
+
+control |= (uint64_t)varhead_size << HV_HYPERCALL_VARHEAD_OFFSET;
+control |= (uint64_t)rep_count << HV_HYPERCALL_REP_COMP_OFFSET;
+
+do {
+status = hv_do_hypercall(control, input, output);
+if ( (status & HV_HYPERCALL_RESULT_MASK) != HV_STATUS_SUCCESS )
+break;
+
+rep_comp = MASK_EXTR(status, HV_HYPERCALL_REP_COMP_MASK);
+
+control &= ~HV_HYPERCALL_REP_START_MASK;
+control |= MASK_INSR(rep_comp, HV_HYPERCALL_REP_COMP_MASK);
+} while ( rep_comp < rep_count );
+
+return status;
+}
+
+#endif /* __X86_HYPERV_HCALL_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.20.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 4/7] DO NOT APPLY: x86/hyperv: issue an hypercall

2020-01-22 Thread Wei Liu
Test if the infrastructure works.

Signed-off-by: Wei Liu 
---
 xen/arch/x86/guest/hyperv/hyperv.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/xen/arch/x86/guest/hyperv/hyperv.c 
b/xen/arch/x86/guest/hyperv/hyperv.c
index f986c1a805..536ce0d0dd 100644
--- a/xen/arch/x86/guest/hyperv/hyperv.c
+++ b/xen/arch/x86/guest/hyperv/hyperv.c
@@ -23,6 +23,7 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -107,6 +108,15 @@ static void __init setup_hypercall_page(void)
 }
 
 set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
+
+/* XXX Wei: Issue an hypercall here to make sure things are set up
+ * correctly.  When there is actual use of the hypercall facility,
+ * this can be removed.
+ */
+{
+uint16_t r = hv_do_hypercall(0x, 0, 0);
+BUG_ON(r != HV_STATUS_INVALID_HYPERCALL_CODE);
+}
 }
 
 static void __init setup(void)
-- 
2.20.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 2/7] x86/hyperv: setup hypercall page

2020-01-22 Thread Wei Liu
Use the top-most addressable page for that purpose. Adjust e820 code
accordingly.

We also need to register Xen's guest OS ID to Hyper-V. Use 0x300 as the
OS type.

Signed-off-by: Wei Liu 
---
XXX the decision on Xen's vendor ID is pending.

v4:
1. Use fixmap
2. Follow routines listed in TLFS
---
 xen/arch/x86/e820.c | 41 +++
 xen/arch/x86/guest/hyperv/hyperv.c  | 53 +++--
 xen/include/asm-x86/guest/hyperv-tlfs.h |  5 ++-
 3 files changed, 86 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/e820.c b/xen/arch/x86/e820.c
index 082f9928a1..5a4ef27a0b 100644
--- a/xen/arch/x86/e820.c
+++ b/xen/arch/x86/e820.c
@@ -36,6 +36,22 @@ boolean_param("e820-verbose", e820_verbose);
 struct e820map e820;
 struct e820map __initdata e820_raw;
 
+static unsigned int find_phys_addr_bits(void)
+{
+uint32_t eax;
+unsigned int phys_bits = 36;
+
+eax = cpuid_eax(0x8000);
+if ( (eax >> 16) == 0x8000 && eax >= 0x8008 )
+{
+phys_bits = (uint8_t)cpuid_eax(0x8008);
+if ( phys_bits > PADDR_BITS )
+phys_bits = PADDR_BITS;
+}
+
+return phys_bits;
+}
+
 /*
  * This function checks if the entire range  is mapped with type.
  *
@@ -357,6 +373,21 @@ static unsigned long __init find_max_pfn(void)
 max_pfn = end;
 }
 
+#ifdef CONFIG_HYPERV_GUEST
+{
+   /*
+* We reserve the top-most page for hypercall page. Adjust
+* max_pfn if necessary.
+*/
+unsigned int phys_bits = find_phys_addr_bits();
+unsigned long hcall_pfn =
+  ((1ull << phys_bits) - 1) >> PAGE_SHIFT;
+
+if ( max_pfn >= hcall_pfn )
+  max_pfn = hcall_pfn - 1;
+}
+#endif
+
 return max_pfn;
 }
 
@@ -420,7 +451,7 @@ static uint64_t __init mtrr_top_of_ram(void)
 {
 uint32_t eax, ebx, ecx, edx;
 uint64_t mtrr_cap, mtrr_def, addr_mask, base, mask, top;
-unsigned int i, phys_bits = 36;
+unsigned int i, phys_bits;
 
 /* By default we check only Intel systems. */
 if ( e820_mtrr_clip == -1 )
@@ -446,13 +477,7 @@ static uint64_t __init mtrr_top_of_ram(void)
  return 0;
 
 /* Find the physical address size for this CPU. */
-eax = cpuid_eax(0x8000);
-if ( (eax >> 16) == 0x8000 && eax >= 0x8008 )
-{
-phys_bits = (uint8_t)cpuid_eax(0x8008);
-if ( phys_bits > PADDR_BITS )
-phys_bits = PADDR_BITS;
-}
+phys_bits = find_phys_addr_bits();
 addr_mask = ((1ull << phys_bits) - 1) & ~((1ull << 12) - 1);
 
 rdmsrl(MSR_MTRRcap, mtrr_cap);
diff --git a/xen/arch/x86/guest/hyperv/hyperv.c 
b/xen/arch/x86/guest/hyperv/hyperv.c
index 8d38313d7a..f986c1a805 100644
--- a/xen/arch/x86/guest/hyperv/hyperv.c
+++ b/xen/arch/x86/guest/hyperv/hyperv.c
@@ -18,17 +18,27 @@
  *
  * Copyright (c) 2019 Microsoft.
  */
+#include 
 #include 
 
+#include 
 #include 
 #include 
+#include 
 
 struct ms_hyperv_info __read_mostly ms_hyperv;
 
-static const struct hypervisor_ops ops = {
-.name = "Hyper-V",
-};
+static uint64_t generate_guest_id(void)
+{
+uint64_t id = 0;
+
+id = (uint64_t)HV_XEN_VENDOR_ID << 48;
+id |= (xen_major_version() << 16) | xen_minor_version();
+
+return id;
+}
 
+static const struct hypervisor_ops ops;
 const struct hypervisor_ops *__init hyperv_probe(void)
 {
 uint32_t eax, ebx, ecx, edx;
@@ -72,6 +82,43 @@ const struct hypervisor_ops *__init hyperv_probe(void)
 return 
 }
 
+static void __init setup_hypercall_page(void)
+{
+union hv_x64_msr_hypercall_contents hypercall_msr;
+union hv_guest_os_id guest_id;
+unsigned long mfn;
+
+rdmsrl(HV_X64_MSR_GUEST_OS_ID, guest_id.raw);
+if ( !guest_id.raw )
+{
+guest_id.raw = generate_guest_id();
+wrmsrl(HV_X64_MSR_GUEST_OS_ID, guest_id.raw);
+}
+
+rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
+if ( !hypercall_msr.enable )
+{
+mfn = ((1ull << paddr_bits) - 1) >> HV_HYP_PAGE_SHIFT;
+hypercall_msr.enable = 1;
+hypercall_msr.guest_physical_address = mfn;
+wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
+} else {
+mfn = hypercall_msr.guest_physical_address;
+}
+
+set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
+}
+
+static void __init setup(void)
+{
+setup_hypercall_page();
+}
+
+static const struct hypervisor_ops ops = {
+.name = "Hyper-V",
+.setup = setup,
+};
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/guest/hyperv-tlfs.h 
b/xen/include/asm-x86/guest/hyperv-tlfs.h
index 05c4044976..5d37efd2d2 100644
--- a/xen/include/asm-x86/guest/hyperv-tlfs.h
+++ b/xen/include/asm-x86/guest/hyperv-tlfs.h
@@ -318,15 +318,16 @@ struct ms_hyperv_tsc_page {
  *
  * Bit(s)
  * 63 - Indicates if the OS is Open Source or not; 1 is Open Source
- * 62:56 - Os Type; Linux is 0x100
+ * 62:56 - Os Type; Linux 0x100, FreeBSD 0x200, Xen 0x300
  * 55:48 - Distro specific 

[Xen-devel] [PATCH v4 6/7] x86/hyperv: retrieve vp_index from Hyper-V

2020-01-22 Thread Wei Liu
This will be useful when invoking hypercall that targets specific
vcpu(s).

Signed-off-by: Wei Liu 
Reviewed-by: Paul Durrant 
---
v4:
1. Use private.h
2. Add Paul's review tag

v2:
1. Fold into setup_pcpu_arg function
---
 xen/arch/x86/guest/hyperv/hyperv.c  | 5 +
 xen/arch/x86/guest/hyperv/private.h | 1 +
 2 files changed, 6 insertions(+)

diff --git a/xen/arch/x86/guest/hyperv/hyperv.c 
b/xen/arch/x86/guest/hyperv/hyperv.c
index c5195af948..085e646dc6 100644
--- a/xen/arch/x86/guest/hyperv/hyperv.c
+++ b/xen/arch/x86/guest/hyperv/hyperv.c
@@ -31,6 +31,7 @@
 
 struct ms_hyperv_info __read_mostly ms_hyperv;
 DEFINE_PER_CPU_READ_MOSTLY(void *, hv_pcpu_input_arg);
+DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
 
 static uint64_t generate_guest_id(void)
 {
@@ -125,6 +126,7 @@ static void __init setup_hypercall_page(void)
 static void setup_hypercall_pcpu_arg(void)
 {
 void *mapping;
+uint64_t vp_index_msr;
 
 if ( this_cpu(hv_pcpu_input_arg) )
 return;
@@ -135,6 +137,9 @@ static void setup_hypercall_pcpu_arg(void)
   smp_processor_id());
 
 this_cpu(hv_pcpu_input_arg) = mapping;
+
+rdmsrl(HV_X64_MSR_VP_INDEX, vp_index_msr);
+this_cpu(hv_vp_index) = vp_index_msr;
 }
 
 static void __init setup(void)
diff --git a/xen/arch/x86/guest/hyperv/private.h 
b/xen/arch/x86/guest/hyperv/private.h
index b6902b5639..da70990401 100644
--- a/xen/arch/x86/guest/hyperv/private.h
+++ b/xen/arch/x86/guest/hyperv/private.h
@@ -25,5 +25,6 @@
 #include 
 
 DECLARE_PER_CPU(void *, hv_pcpu_input_arg);
+DECLARE_PER_CPU(unsigned int, hv_vp_index);
 
 #endif /* __XEN_HYPERV_PRIVIATE_H__  */
-- 
2.20.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [RFC PATCH V2 11/11] x86: tsc: avoid system instability in hibernation

2020-01-22 Thread Anchal Agarwal
On Tue, Jan 14, 2020 at 07:29:52PM +, Anchal Agarwal wrote:
> On Tue, Jan 14, 2020 at 12:30:02AM +0100, Rafael J. Wysocki wrote:
> > On Mon, Jan 13, 2020 at 10:50 PM Rafael J. Wysocki  
> > wrote:
> > >
> > > On Mon, Jan 13, 2020 at 1:43 PM Peter Zijlstra  
> > > wrote:
> > > >
> > > > On Mon, Jan 13, 2020 at 11:43:18AM +, Singh, Balbir wrote:
> > > > > For your original comment, just wanted to clarify the following:
> > > > >
> > > > > 1. After hibernation, the machine can be resumed on a different but 
> > > > > compatible
> > > > > host (these are VM images hibernated)
> > > > > 2. This means the clock between host1 and host2 can/will be different
> > > > >
> > > > > In your comments are you making the assumption that the host(s) 
> > > > > is/are the
> > > > > same? Just checking the assumptions being made and being on the same 
> > > > > page with
> > > > > them.
> > > >
> > > > I would expect this to be the same problem we have as regular suspend,
> > > > after power off the TSC will have been reset, so resume will have to
> > > > somehow bridge that gap. I've no idea if/how it does that.
> > >
> > > In general, this is done by timekeeping_resume() and the only special
> > > thing done for the TSC appears to be the tsc_verify_tsc_adjust(true)
> > > call in tsc_resume().
> > 
> > And I forgot about tsc_restore_sched_clock_state() that gets called
> > via restore_processor_state() on x86, before calling
> > timekeeping_resume().
> >
> In this case tsc_verify_tsc_adjust(true) this does nothing as
> feature bit X86_FEATURE_TSC_ADJUST is not available to guest. 
> I am no expert in this area, but could this be messing things up?
> 
> Thanks,
> Anchal
Gentle nudge on this. I will add more data here in case that helps.

1. Before this patch, tsc is stable but hibernation does not work
100% of the time. I agree if tsc is stable it should not be marked
unstable however, in this case if I run a cpu intensive workload
in the background and trigger reboot-hibernation loop I see a 
workqueue lockup. 

2. The lockup does not hose the system completely,
the reboot-hibernation carries out and system recovers. 
However, as mentioned in the commit message system does 
become unreachable for couple of seconds.

3. Xen suspend/resume seems to save/restore time_memory area in its
xen_arch_pre_suspend and xen_arch_post_suspend. The xen clock value
is saved. xen_sched_clock_offset is set at resume time to ensure a
monotonic clock value

4. Also, the instances do not have InvariantTSC exposed. Feature bit
X86_FEATURE_TSC_ADJUST is not available to guest and xen clocksource
is used by guests.

I am not sure if something needs to be fixed on hibernate path itself
or its very much ties to time handling on xen guest hibernation

Here is a part of log from last hibernation exit to next hibernation
entry. The loop was running for a while so boot to lockup log will be
huge. I am specifically including the timestamps.

...
01h 57m 15.627s(  16ms): [5.822701] OOM killer enabled.
01h 57m 15.627s(   0ms): [5.824981] Restarting tasks ... done.
01h 57m 15.627s(   0ms): [5.836397] PM: hibernation exit
01h 57m 17.636s(2009ms): [7.844471] PM: hibernation entry
01h 57m 52.725s(35089ms): [   42.934542] BUG: workqueue lockup - pool cpus=0
node=0 flags=0x0 nice=0 stuck for 37s!
01h 57m 52.730s(   5ms): [   42.941468] Showing busy workqueues and worker
pools:
01h 57m 52.734s(   4ms): [   42.945088] workqueue events: flags=0x0
01h 57m 52.737s(   3ms): [   42.948385]   pwq 0: cpus=0 node=0 flags=0x0 nice=0
active=2/256
01h 57m 52.742s(   5ms): [   42.952838] pending: vmstat_shepherd,
check_corruption
01h 57m 52.746s(   4ms): [   42.956927] workqueue events_power_efficient:
flags=0x80
01h 57m 52.749s(   3ms): [   42.960731]   pwq 0: cpus=0 node=0 flags=0x0 nice=0
active=4/256
01h 57m 52.754s(   5ms): [   42.964835] pending: neigh_periodic_work,
do_cache_clean [sunrpc], neigh_periodic_work, check_lifetime
01h 57m 52.781s(  27ms): [   42.971419] workqueue mm_percpu_wq: flags=0x8
01h 57m 52.781s(   0ms): [   42.974628]   pwq 0: cpus=0 node=0 flags=0x0 nice=0
active=1/256
01h 57m 52.781s(   0ms): [   42.978901] pending: vmstat_update
01h 57m 52.781s(   0ms): [   42.981822] workqueue ipv6_addrconf: flags=0x40008
01h 57m 52.781s(   0ms): [   42.985524]   pwq 0: cpus=0 node=0 flags=0x0 nice=0
active=1/1
01h 57m 52.781s(   0ms): [   42.989670] pending: addrconf_verify_work [ipv6]
01h 57m 52.782s(   1ms): [   42.993282] workqueue xfs-conv/xvda1: flags=0xc
01h 57m 52.786s(   4ms): [   42.996708]   pwq 0: cpus=0 node=0 flags=0x0 nice=0
active=3/256
01h 57m 52.790s(   4ms): [   43.000954] pending: xfs_end_io [xfs],
xfs_end_io [xfs], xfs_end_io [xfs]
01h 57m 52.795s(   5ms): [   43.005610] workqueue xfs-reclaim/xvda1: flags=0xc
01h 57m 52.798s(   3ms): [   43.008945]   pwq 0: cpus=0 node=0 flags=0x0 nice=0
active=1/256
01h 57m 52.802s(   4ms): [   43.012675] pending: xfs_reclaim_worker [xfs]
01h 57m 52.805s(   

Re: [Xen-devel] HVM Driver Domain

2020-01-22 Thread Marek Marczykowski-Górecki
On Wed, Jan 22, 2020 at 07:50:13PM +, tosher 1 wrote:
> 
> > I don't see what is wrong here. Are you sure the backend domain is running?
> If you mean the HVM network driver domain then, Yes, I am running the backend 
> domain.
> 
> > Probably irrelevant at this stage, but do you have "xendriverdomain" 
> > service running in the backend?
> I do not have this service running. However, my PV network driver domain 
> works fine, though this service is not running.
> 
> What version of Xen are you using that have xendriverdomain service?

I know it works with Xen 4.8, 4.12, and 4.13, but I wouldn't expect
any issues in a version between them.

I'd try increasing verbosity (xl -vvv create ...) and looking if
something earlier didn't gone wrong.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?


signature.asc
Description: PGP signature
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] HVM Driver Domain

2020-01-22 Thread tosher 1

> I don't see what is wrong here. Are you sure the backend domain is running?
If you mean the HVM network driver domain then, Yes, I am running the backend 
domain.

> Probably irrelevant at this stage, but do you have "xendriverdomain" service 
> running in the backend?
I do not have this service running. However, my PV network driver domain works 
fine, though this service is not running.

What version of Xen are you using that have xendriverdomain service?

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [linux-5.4 test] 146384: regressions - FAIL

2020-01-22 Thread osstest service owner
flight 146384 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146384/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 146121
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 
146121

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd 12 guest-start/redhat.repeat fail in 146354 
pass in 146384
 test-amd64-amd64-xl-rtds 18 guest-localmigrate/x10 fail pass in 146354

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds16 guest-start/debian.repeat fail REGR. vs. 146121

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass

version targeted for testing:
 linuxba19874032074ca5a3817ae82ebae27bd3343551
baseline 

Re: [Xen-devel] libvirt support for scheduler credit2

2020-01-22 Thread Jim Fehlig
On 1/21/20 10:05 AM, Jürgen Groß wrote:
> On 21.01.20 17:56, Kevin Stange wrote:
>> Hi,
>>
>> I looked around a bit and wasn't able to find a good answer to this, so
>> George suggested I ask here.
> 
> Cc-ing Jim.
> 
>>
>> Since Xen 4.12, credit2 is the default scheduler, but at least as of
>> libvirt 5.1.0 virsh doesn't appear to understand credit2 and produces
>> this sort of output:

You would see the same with libvirt.git master, sorry. ATM the libvirt libxl 
driver is unaware of the credit2 scheduler. Hmm, as I recall Dario was going to 
provide a patch for libvirt :-). But he is quite busy so it will have to be 
added to my very long todo list.

Regards,
Jim

>>
>> # xl sched-credit2 -d yw6hk7mo6zy3k8
>> Name    ID Weight  Cap
>> yw6hk7mo6zy3k8   4 10    0
>> # virsh schedinfo yw6hk7mo6zy3k8
>> Scheduler  : credit2
>>
>> Compared to a host running credit:
>>
>> # xl sched-credit -d gvz2b16sq38dv9
>> Name    ID Weight  Cap
>> gvz2b16sq38dv9  14    800    0
>> # virsh schedinfo gvz2b16sq38dv9
>> Scheduler  : credit
>> weight : 800
>> cap    : 0
>>
>> Trying to change the weight does nothing, not even producing an error
>> message:
>>
>> # virsh schedinfo syuxplsmdihcwc --weight 300
>> Scheduler  : credit2
>>
>> # xl sched-credit2 -d syuxplsmdihcwc
>> Name    ID Weight  Cap
>> syuxplsmdihcwc  23    400    0
>>
>> Is there a version of libvirt where I can expect this to work, or is it
>> not supported yet?  As a workaround for now I've added sched=credit to
>> my command line, but it would be nice to gain the benefits of improved
>> scheduling at some point.
>>
> 
> 
> ___
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] HVM Driver Domain

2020-01-22 Thread Marek Marczykowski-Górecki
On Wed, Jan 22, 2020 at 04:56:15PM +, tosher 1 wrote:
> Hi Marek,
> 
> Thanks for your response. The server machine I am using for this setup is an 
> x86_64 Intel Xeon. For the Dom0, I am using Ubuntu 18.04 running on kernel 
> version 5.0.0-37-generic. My Xen version is 4.9.2. 
> 
> For the HVM driver domain, I am using Ubuntu 18.04 running on kernel version 
> 5.0.0-23-generic. I am doing a NIC PCI passthrough to this domain. The Xen 
> config file for this domain looks like the following.
> 
> builder = "hvm"
> name = "ubuntu-doment-hvm"
> memory = "2048"
> pci = [ '01:00.0,permissive=1' ]
> vcpus = 1
> disk = ['phy:/dev/vg/ubuntu-hvm,hda,w']
> vnc = 1
> boot="c"
> 
> I have installed xen-tools of version 4.7 in this driver domain so that the 
> vif-scirpts work. The network configuration here looks like the following 
> where ens5f0 is the interface name for the NIC I did passthrough.
> 
> auto lo
> iface lo inet loopback
> 
> iface ens5f0 inet manual
> 
> auto xenbr1
> iface xenbr1 inet static
>     bridge_ports ens5f0
>     address 192.168.1.3
>     netmask 255.255.255.0
>     gateway 192.168.1.1

Probably irrelevant at this stage, but do you have "xendriverdomain"
service running in the backend?

> The Xen config file content for the DomU is as the following.
> 
> name = "ubuntu_on_ubuntu"
> bootloader = "/usr/lib/xen-4.9/bin/pygrub"
> memory = 1024
> vcpus = 1
> vif = [ 'backend=ubuntu-domnet-hvm,bridge=xenbr1' ]
> disk = [ '/dev/vg/lv_vm_ubuntu_guest,raw,xvda,rw' ]
> 
> When I try to launch this DomU, I get the following error.
> 
> libxl: error: libxl_nic.c:652:libxl__device_nic_set_devids: Domain 31:Unable 
> to set nic defaults for nic 0.

I don't see what is wrong here. Are you sure the backend domain is
running?

> Are these configurations basically very different for what you do for Qubes? 
> Please let me know your thoughts.

Looks similar, although we do that through libvirt.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?


signature.asc
Description: PGP signature
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [qemu-mainline test] 146388: regressions - FAIL

2020-01-22 Thread osstest service owner
flight 146388 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146388/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm   6 xen-buildfail REGR. vs. 144861
 build-arm64   6 xen-buildfail REGR. vs. 144861
 build-armhf   6 xen-buildfail REGR. vs. 144861
 build-i386-xsm6 xen-buildfail REGR. vs. 144861
 build-amd64-xsm   6 xen-buildfail REGR. vs. 144861
 build-i3866 xen-buildfail REGR. vs. 144861
 build-amd64   6 xen-buildfail REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim 1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 

[Xen-devel] [xen-unstable-smoke test] 146390: tolerable all pass - PUSHED

2020-01-22 Thread osstest service owner
flight 146390 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146390/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  a4d457fd59f4ebfb524aec82cb6a3030087914ca
baseline version:
 xen  f44a192d22a37dcb9171b95978b43637bc09718d

Last test of basis   146367  2020-01-21 22:01:10 Z0 days
Testing same since   146390  2020-01-22 16:00:25 Z0 days1 attempts


People who touched revisions under test:
  Jan Beulich 
  Roger Pau Monné 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f44a192d22..a4d457fd59  a4d457fd59f4ebfb524aec82cb6a3030087914ca -> smoke

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [ovmf test] 146385: regressions - FAIL

2020-01-22 Thread osstest service owner
flight 146385 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146385/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 145767
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 
145767

version targeted for testing:
 ovmf 9a1f14ad721bbcd833ec5108944c44a502392f03
baseline version:
 ovmf 70911f1f4aee0366b6122f2b90d367ec0f066beb

Last test of basis   145767  2020-01-08 00:39:09 Z   14 days
Failing since145774  2020-01-08 02:50:20 Z   14 days   53 attempts
Testing same since   146346  2020-01-21 04:31:27 Z1 days5 attempts


People who touched revisions under test:
  Aaron Li 
  Albecki, Mateusz 
  Ard Biesheuvel 
  Ashish Singhal 
  Bob Feng 
  Brian R Haug 
  Eric Dong 
  Fan, ZhijuX 
  Hao A Wu 
  Jason Voelz 
  Jian J Wang 
  Krzysztof Koch 
  Laszlo Ersek 
  Leif Lindholm 
  Li, Aaron 
  Liming Gao 
  Mateusz Albecki 
  Michael D Kinney 
  Michael Kubacki 
  Pavana.K 
  Philippe Mathieu-Daud? 
  Philippe Mathieu-Daude 
  Siyuan Fu 
  Siyuan, Fu 
  Sudipto Paul 
  Vitaly Cheptsov 
  Vitaly Cheptsov via Groups.Io 
  Wei6 Xu 
  Xu, Wei6 
  Zhiguang Liu 
  Zhiju.Fan 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1164 lines long.)

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v5 14/18] x86/mem_sharing: use default_access in add_to_physmap

2020-01-22 Thread Tamas K Lengyel
On Wed, Jan 22, 2020 at 8:35 AM Jan Beulich  wrote:
>
> On 21.01.2020 18:49, Tamas K Lengyel wrote:
> > When plugging a hole in the target physmap don't use the access permission
> > returned by __get_gfn_type_access as it can be non-sensical,
>
> "can be" is too vague for my taste - it suggests there may also be cases
> where a sensible value is returned, and hence it should be used. Could
> you clarify this please? (The code change itself of course is simple and
> mechanical enough to look okay.)

Well, I can only speak of what I observed. The case seems to be that
most of the time the function actually returns p2m_access_rwx (which
is sensible), but occasionally something else. I didn't investigate
where that value actually comes from, but when populating a physmap
like this only the default_access is sane.

Tamas

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v5 11/18] x86/mem_sharing: Replace MEM_SHARING_DEBUG with gdprintk

2020-01-22 Thread Tamas K Lengyel
On Wed, Jan 22, 2020 at 8:30 AM Jan Beulich  wrote:
>
> On 21.01.2020 18:49, Tamas K Lengyel wrote:
> > @@ -538,24 +535,26 @@ static int audit(void)
> >  d = get_domain_by_id(g->domain);
> >  if ( d == NULL )
> >  {
> > -MEM_SHARING_DEBUG("Unknown dom: %hu, for PFN=%lx, 
> > MFN=%lx\n",
> > -  g->domain, g->gfn, mfn_x(mfn));
> > +gdprintk(XENLOG_ERR,
> > + "Unknown dom: %pd, for PFN=%lx, MFN=%lx\n",
> > + d, g->gfn, mfn_x(mfn));
>
> With "if ( d == NULL )" around this you hardly mean to pass d to
> the function here. This is a case where you really need to stick
> to logging a raw number.

Indeed..

>
> >  errors++;
> >  continue;
> >  }
> >  o_mfn = get_gfn_query_unlocked(d, g->gfn, );
> >  if ( !mfn_eq(o_mfn, mfn) )
> >  {
> > -MEM_SHARING_DEBUG("Incorrect P2M for d=%hu, PFN=%lx."
> > -  "Expecting MFN=%lx, got %lx\n",
> > -  g->domain, g->gfn, mfn_x(mfn), 
> > mfn_x(o_mfn));
> > +gdprintk(XENLOG_ERR, "Incorrect P2M for d=%pd, PFN=%lx."
>
> Here and elsewhere may I recommend dropping d= (or dom= further
> down)?

SGTM

>
> > @@ -757,10 +756,10 @@ static int debug_mfn(mfn_t mfn)
> >  return -EINVAL;
> >  }
> >
> > -MEM_SHARING_DEBUG(
> > -"Debug page: MFN=%lx is ci=%lx, ti=%lx, owner=%pd\n",
> > -mfn_x(page_to_mfn(page)), page->count_info,
> > -page->u.inuse.type_info, page_get_owner(page));
> > +gdprintk(XENLOG_ERR,
> > + "Debug page: MFN=%lx is ci=%lx, ti=%lx, owner_id=%d\n",
> > + mfn_x(page_to_mfn(page)), page->count_info,
> > + page->u.inuse.type_info, page_get_owner(page)->domain_id);
>
> As indicated before (I think), please prefer %pd and a struct domain
> pointer over passing ->domain_id (at least one more instance further
> down).

I thought I fixed them all but evidently some remained.

Thanks,
Tamas

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v5 03/18] x86/p2m: Allow p2m_get_page_from_gfn to return shared entries

2020-01-22 Thread Jan Beulich
On 22.01.2020 17:51, Tamas K Lengyel wrote:
> On Wed, Jan 22, 2020 at 8:23 AM Jan Beulich  wrote:
>>
>> On 21.01.2020 18:49, Tamas K Lengyel wrote:
>>> The owner domain of shared pages is dom_cow, use that for get_page
>>> otherwise the function fails to return the correct page.
>>
>> I think this description needs improvement: The function does the
>> special shared page dance in one place (on the "fast path")
>> already. This wants mentioning, either because it was a mistake
>> to have it just there, or because a new need has appeared to also
>> have it on the "slow path".
> 
> It was a pre-existing error not to get the page from dom_cow for a
> shared entry in the slow path. I only ran into it now because the
> erroneous type_count check move in the previous version of the series
> was resulting in all pages being fully deduplicated instead of mostly
> being shared. Now that the pages are properly shared running LibVMI on
> the fork resulted in failures do to this bug.
> 
>>> --- a/xen/arch/x86/mm/p2m.c
>>> +++ b/xen/arch/x86/mm/p2m.c
>>> @@ -594,7 +594,10 @@ struct page_info *p2m_get_page_from_gfn(
>>>  if ( p2m_is_ram(*t) && mfn_valid(mfn) )
>>>  {
>>>  page = mfn_to_page(mfn);
>>> -if ( !get_page(page, p2m->domain) )
>>> +if ( !get_page(page, p2m->domain) &&
>>> + /* Page could be shared */
>>> + (!dom_cow || !p2m_is_shared(*t) ||
>>> +  !get_page(page, dom_cow)) )
>>
>> While there may be a reason why on the fast path two get_page()
>> invocations are be necessary, couldn't you get away with just
>> one
>>
>> if ( !get_page(page, !dom_cow || !p2m_is_shared(*t) ? p2m->domain
>> : dom_cow) )
>>
>> at least here? It's also not really clear to me why here and
>> there we need "!dom_cow || !p2m_is_shared(*t)" - wouldn't
>> p2m_is_shared() return consistently "false" when !dom_cow ?
> 
> I simply copied the existing code from the slow_path as-is. IMHO it
> would suffice to do a single get_page(page, !p2m_is_shared(*t) ?
> p2m->domain : dom_cow);  since we can't have any entries that are
> shared when dom_cow is NULL so this is safe, no need for the extra
> !dom_cow check. If you prefer I can make the change for both
> locations.

If the change is correct to make also in the other place, I'd
definitely prefer you doing so.

Jan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] HVM Driver Domain

2020-01-22 Thread tosher 1
Hi Marek,

Thanks for your response. The server machine I am using for this setup is an 
x86_64 Intel Xeon. For the Dom0, I am using Ubuntu 18.04 running on kernel 
version 5.0.0-37-generic. My Xen version is 4.9.2. 

For the HVM driver domain, I am using Ubuntu 18.04 running on kernel version 
5.0.0-23-generic. I am doing a NIC PCI passthrough to this domain. The Xen 
config file for this domain looks like the following.

builder = "hvm"
name = "ubuntu-doment-hvm"
memory = "2048"
pci = [ '01:00.0,permissive=1' ]
vcpus = 1
disk = ['phy:/dev/vg/ubuntu-hvm,hda,w']
vnc = 1
boot="c"

I have installed xen-tools of version 4.7 in this driver domain so that the 
vif-scirpts work. The network configuration here looks like the following where 
ens5f0 is the interface name for the NIC I did passthrough.

auto lo
iface lo inet loopback

iface ens5f0 inet manual

auto xenbr1
iface xenbr1 inet static
    bridge_ports ens5f0
    address 192.168.1.3
    netmask 255.255.255.0
    gateway 192.168.1.1

The Xen config file content for the DomU is as the following.

name = "ubuntu_on_ubuntu"
bootloader = "/usr/lib/xen-4.9/bin/pygrub"
memory = 1024
vcpus = 1
vif = [ 'backend=ubuntu-domnet-hvm,bridge=xenbr1' ]
disk = [ '/dev/vg/lv_vm_ubuntu_guest,raw,xvda,rw' ]

When I try to launch this DomU, I get the following error.

libxl: error: libxl_nic.c:652:libxl__device_nic_set_devids: Domain 31:Unable to 
set nic defaults for nic 0.

Are these configurations basically very different for what you do for Qubes? 
Please let me know your thoughts.

Thanks,
Mehrab

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v5 03/18] x86/p2m: Allow p2m_get_page_from_gfn to return shared entries

2020-01-22 Thread Tamas K Lengyel
On Wed, Jan 22, 2020 at 8:23 AM Jan Beulich  wrote:
>
> On 21.01.2020 18:49, Tamas K Lengyel wrote:
> > The owner domain of shared pages is dom_cow, use that for get_page
> > otherwise the function fails to return the correct page.
>
> I think this description needs improvement: The function does the
> special shared page dance in one place (on the "fast path")
> already. This wants mentioning, either because it was a mistake
> to have it just there, or because a new need has appeared to also
> have it on the "slow path".

It was a pre-existing error not to get the page from dom_cow for a
shared entry in the slow path. I only ran into it now because the
erroneous type_count check move in the previous version of the series
was resulting in all pages being fully deduplicated instead of mostly
being shared. Now that the pages are properly shared running LibVMI on
the fork resulted in failures do to this bug.

> > --- a/xen/arch/x86/mm/p2m.c
> > +++ b/xen/arch/x86/mm/p2m.c
> > @@ -594,7 +594,10 @@ struct page_info *p2m_get_page_from_gfn(
> >  if ( p2m_is_ram(*t) && mfn_valid(mfn) )
> >  {
> >  page = mfn_to_page(mfn);
> > -if ( !get_page(page, p2m->domain) )
> > +if ( !get_page(page, p2m->domain) &&
> > + /* Page could be shared */
> > + (!dom_cow || !p2m_is_shared(*t) ||
> > +  !get_page(page, dom_cow)) )
>
> While there may be a reason why on the fast path two get_page()
> invocations are be necessary, couldn't you get away with just
> one
>
> if ( !get_page(page, !dom_cow || !p2m_is_shared(*t) ? p2m->domain
> : dom_cow) )
>
> at least here? It's also not really clear to me why here and
> there we need "!dom_cow || !p2m_is_shared(*t)" - wouldn't
> p2m_is_shared() return consistently "false" when !dom_cow ?

I simply copied the existing code from the slow_path as-is. IMHO it
would suffice to do a single get_page(page, !p2m_is_shared(*t) ?
p2m->domain : dom_cow);  since we can't have any entries that are
shared when dom_cow is NULL so this is safe, no need for the extra
!dom_cow check. If you prefer I can make the change for both
locations.

Tamas

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 00/16] Add support for qemu-xen runnning in a Linux-based stubdomain.

2020-01-22 Thread Jason Andryuk
On Tue, Jan 14, 2020 at 9:42 PM Marek Marczykowski-Górecki
 wrote:



> Later patches add QMP over libvchan connection support. The actual connection
> is made in a separate process. As discussed on Xen Summit 2019, this allows to
> apply some basic checks and/or filtering (not part of this series), to limit
> libxl exposure for potentially malicious stubdomain.

Thanks for working on this!  I think the separate process is nicer.

> The actual stubdomain implementation is here:
>
> https://github.com/marmarek/qubes-vmm-xen-stubdom-linux
> (branch for-upstream, tag for-upstream-v3)
>
> See readme there for build instructions.
> Beware: building on Debian is dangerous, as it require installing "dracut",
> which will remove initramfs-tools. You may end up with broken initrd on
> your host.

Just as an FYI, Marek's use of dracut is mainly for dracut-install to
copy a binary & dependent libraries when generating the initramfs
(https://github.com/marmarek/qubes-vmm-xen-stubdom-linux/blob/master/rootfs/gen).
The initramfs isn't running dracut scripts.  Using initramfs-tools
hook-functions:copy_exec() for similar functionality is a possibility.

> 1. There are extra patches for qemu that are necessary to run it in 
> stubdomain.
> While it is desirable to upstream them, I think it can be done after merging
> libxl part. Stubdomain's qemu build will in most cases be separate anyway, to
> limit qemu's dependencies (so the stubdomain size).

A mostly unpatched QEMU works for networking & disk.  The exception is
PCI passthrough, which needs some patches.  I tested this by removing
patches from Marek's repo, except for the seccomp ones and
disable-nic-option-rom.patch.  Without disable-nic-option-rom.patch,
QEMU fails to start with 'failed to find romfile "efi-rtl8139.rom"'

One issue I've noticed is QEMU ~4.1 calls getrandom() during startup.
In a stubdom there is insufficient entropy, so QEMU blocks and stubdom
startup times out.  You can avoid getrandom() blocking with
CONFIG_RANDOM_TRUST_CPU or
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=50ee7529ec4500c88f8664560770a7a1b65db72b
or some other way of adding entropy.

Regards,
Jason

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v3 2/9] xen: split parameter related definitions in own header file

2020-01-22 Thread Jan Beulich
On 21.01.2020 09:43, Juergen Gross wrote:
> Move the parameter related definitions from init.h into a new header
> file param.h. This will avoid include hell when new dependencies are
> added to parameter definitions.
> 
> Signed-off-by: Juergen Gross 

x86:
Acked-by: Jan Beulich 

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v5 01/18] x86/hvm: introduce hvm_copy_context_and_params

2020-01-22 Thread Tamas K Lengyel
On Wed, Jan 22, 2020 at 8:01 AM Jan Beulich  wrote:
>
> On 21.01.2020 18:49, Tamas K Lengyel wrote:
> > Currently the hvm parameters are only accessible via the HVMOP hypercalls. 
> > In
> > this patch we introduce a new function that can copy both the hvm context 
> > and
> > parameters directly into a target domain. No functional changes in existing
> > code.
> >
> > Signed-off-by: Tamas K Lengyel 
>
> In reply to my v4 comments you said "I don't have any objections to the
> things you pointed out." Yet only one aspect was actually changed here.
> It also doesn't help that there's no brief summary of the changes done
> for v5. I guess I'm confused.

Indeed it seems I missed some of your previous requests. I was halfway
through making the modifications but simply forgot to do the rest
after I stepped away. I still don't have any objections to them
though, so will catch up on it in v6.

Thanks,
Tamas

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [xen-unstable test] 146379: regressions - FAIL

2020-01-22 Thread osstest service owner
flight 146379 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146379/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd 10 redhat-install fail REGR. vs. 146058
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install 
fail REGR. vs. 146058
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install 
fail REGR. vs. 146058

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds 18 guest-localmigrate/x10  fail blocked in 146058
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeatfail  like 146050
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 146058
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 146058
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 146058
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 146058
 test-amd64-amd64-qemuu-nested-intel 17 debian-hvm-install/l1/l2 fail like 
146058
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 146058
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 146058
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 146058
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 146058
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 146058
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass

version targeted for testing:
 xen  f44a192d22a37dcb9171b95978b43637bc09718d
baseline version:
 xen  

Re: [Xen-devel] [RFC XEN PATCH 00/23] xen: beginning support for RISC-V

2020-01-22 Thread Lars Kurth


> On 22 Jan 2020, at 14:57, Andrew Cooper  wrote:
> 
> On 22/01/2020 01:58, Bobby Eshleman wrote:
>> Hey everybody,
>> 
>> This is an RFC patchset for the very beginnings of adding RISC-V support
>> to Xen.  This RFC is really just to start a dialogue about supporting
>> RISC-V and align with the Xen project and community before going
>> further.  For that reason, it is very rough and very incomplete. 
>> 
>> My name is Bobby Eshleman, I'm a software engineer at
>> Star Lab / Wind River on the ARM team, mostly having worked in the Linux
>> kernel.  I've also been involved a good amount with Xen on ARM here,
>> mostly dealing with tooling, deployment, and testing.  A lot of this
>> patchset is heavily inspired by the Xen/ARM source code (particularly
>> the early setup up code).
>> 
>> Currently, this patchset really only sets up virtual memory for Xen and
>> initializes UART to enable print output.  None of RISC-V's
>> virtualization support has been implemented yet, although that is the
>> next road to start going down.  Many functions only contain dummy
>> implementations.  Many shortcuts have been taken and TODO's have been
>> left accordingly.  It is very, very rough.  Be forewarned: you are quite
>> likely to see some ungainly code here (despite my efforts to clean it up
>> before sending this patchset out).  My intent with this RFC is to align
>> early and gauge interest, as opposed to presenting a totally complete
>> patchset.
>> 
>> Because the ARM and RISC-V use cases will likely bear resemblance, the
>> RISC-V port should probably respect the design considerations that have
>> been laid out and respected by Xen on ARM for dom0less, safety
>> certification, etc...  My inclination has been to initially target or
>> prioritize dom0less (without excluding dom0full) and use the ARM
>> dom0less implementation as a model to follow.  I'd love feedback on this
>> point and on how the Xen project might envision a RISC-V implementation.
>> 
>> This patchset has _some_ code for future support for 32-bit, but
>> currently my focus is on 64-bit.
>> 
>> Again, this is a very, very rough and totally incomplete patchset.  My
>> goal here is just to gauge community interest, begin discussing what Xen
>> on RISC-V may look like, receive feedback, and see if I'm heading in the
>> right direction.
>> 
>> My big questions are:
>>  Does the Xen project have interest in RISC-V?
> 
> There is very large downstream interest in RISC-V.  So a definite yes.
> 
>>  What can be done to make the RISC-V port as upstreamable as
>>  possible?
>>  Any major pitfalls?
>> 
>> It would be great to hear all of your feedback.
> 
> Both RISC-V and Power9 are frequently requested things, and both suffer
> from the fact that, while we as a community would like them, the
> upstream intersection of "people who know Xen" and "people who know
> enough arch $X to do an initial port" is 0.
> 
> This series clearly demonstrates a change in the status quo, and I think
> a lot of people will be happy.
> 
> To get RISC-V to being fully supported, we will ultimately need to get
> hardware into the CI system, and an easy way for developers to test
> changes.  Do you have any thoughts on production RISC-V hardware
> (ideally server form factor) for the CI system, and/or dev boards which
> might be available fairly cheaply?
> 
> How much time do you have to put towards the port?  Is this something in
> your free time, or something you are doing as part of work?  Ultimately,
> we are going to need to increase the level of RISC-V knowledge in the
> community to maintain things in the future.
> 
> Other than that, very RFC series are entirely fine.  A good first step
> would be simply to get the build working, and get some kind of
> cross-compile build in CI, to make sure that we don't clobber the RISC-V
> build with common or other-arch changes.
> 
> I hope this helps.

I totally agree with what Andy says. 

You should also leverage the developer summit: see 
https://events.linuxfoundation.org/xen-summit/program/cfp/ 

CfP closes March 6th. Design sessions can be submitted afterwards

Community calls may also be a good option to deal with specific issues / 
questions, e.g. around compile support in the CI, etc.

Lars




___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 3/3] x86 / vmx: use a 'normal' domheap page for APIC_DEFAULT_PHYS_BASE

2020-01-22 Thread Durrant, Paul
> -Original Message-
> From: Jan Beulich 
> Sent: 22 January 2020 16:17
> To: Durrant, Paul 
> Cc: xen-devel@lists.xenproject.org; Jun Nakajima ;
> Kevin Tian ; Andrew Cooper
> ; Wei Liu ; Roger Pau Monné
> ; George Dunlap ; Ian
> Jackson ; Julien Grall ; Konrad
> Rzeszutek Wilk ; Stefano Stabellini
> 
> Subject: Re: [PATCH 3/3] x86 / vmx: use a 'normal' domheap page for
> APIC_DEFAULT_PHYS_BASE
> 
> On 21.01.2020 13:00, Paul Durrant wrote:
> > vmx_alloc_vlapic_mapping() currently contains some very odd looking code
> > that allocates a MEMF_no_owner domheap page and then shares with the
> guest
> > as if it were a xenheap page. This then requires
> vmx_free_vlapic_mapping()
> > to call a special function in the mm code: free_shared_domheap_page().
> >
> > By using a 'normal' domheap page (i.e. by not passing MEMF_no_owner to
> > alloc_domheap_page()), the odd looking code in
> vmx_alloc_vlapic_mapping()
> > can simply use get_page_and_type() to set up a writable mapping before
> > insertion in the P2M and vmx_free_vlapic_mapping() can simply release
> the
> > page using put_page_alloc_ref() followed by put_page_and_type(). This
> > then allows free_shared_domheap_page() to be purged.
> >
> > There is, however, some fall-out from this simplification:
> >
> > - alloc_domheap_page() will now call assign_pages() and run into the
> fact
> >   that 'max_pages' is not set until some time after domain_create(). To
> >   avoid an allocation failure, assign_pages() is modified to ignore the
> >   max_pages limit if 'creation_finished' is false. That value is not set
> >   to true until domain_unpause_by_systemcontroller() is called, and thus
> >   the guest cannot run (and hence cause memory allocation) until
> >   creation_finished is set to true.
> 
> But this check is also to guard against the tool stack (or possibly
> the controlling stubdom) to cause excess allocation. I don't think
> the checking should be undermined like this (and see also below).
>

Ok.
 
> Since certainly you've looked into this while creating the patch,
> could you remind me why it is that this page needs to be owned (as
> in its owner field set accordingly) by the guest? It's a helper
> page only, after all.
> 

Not sure why it was done that way. It's inserted into the guest P2M so having 
it owned by the guest seems like the right thing to do. A malicious guest could 
decrease-reservation it and I guess it avoids special-casing there.

> > @@ -3034,12 +3034,22 @@ static int vmx_alloc_vlapic_mapping(struct
> domain *d)
> >  if ( !cpu_has_vmx_virtualize_apic_accesses )
> >  return 0;
> >
> > -pg = alloc_domheap_page(d, MEMF_no_owner);
> > +pg = alloc_domheap_page(d, 0);
> 
> Did you consider passing MEMF_no_refcount here, to avoid the
> fiddling with assign_pages()? That'll in particular also
> avoid ...
> 

You remember what happened last time we did that (with the ioreq server page), 
right? That's why assign_pages() vetoes non-refcounted pages.

> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -2269,7 +2269,8 @@ int assign_pages(
> >
> >  if ( !(memflags & MEMF_no_refcount) )
> >  {
> > -if ( unlikely((d->tot_pages + (1 << order)) > d->max_pages) )
> > +if ( unlikely((d->tot_pages + (1 << order)) > d->max_pages) &&
> > + d->creation_finished )
> >  {
> >  gprintk(XENLOG_INFO, "Over-allocation for domain %u: "
> >  "%u > %u\n", d->domain_id,
> 
> ... invoking domain_adjust_tot_pages() right below here, which
> is wrong for helper pages like this one (as it reduces the
> amount the domain is actually permitted to allocate).
> 

True, but there is 'slop' to deal with things like the ioreq pages and I think 
this page is logically similar.

  Paul

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 3/3] x86 / vmx: use a 'normal' domheap page for APIC_DEFAULT_PHYS_BASE

2020-01-22 Thread Jan Beulich
On 21.01.2020 13:00, Paul Durrant wrote:
> vmx_alloc_vlapic_mapping() currently contains some very odd looking code
> that allocates a MEMF_no_owner domheap page and then shares with the guest
> as if it were a xenheap page. This then requires vmx_free_vlapic_mapping()
> to call a special function in the mm code: free_shared_domheap_page().
> 
> By using a 'normal' domheap page (i.e. by not passing MEMF_no_owner to
> alloc_domheap_page()), the odd looking code in vmx_alloc_vlapic_mapping()
> can simply use get_page_and_type() to set up a writable mapping before
> insertion in the P2M and vmx_free_vlapic_mapping() can simply release the
> page using put_page_alloc_ref() followed by put_page_and_type(). This
> then allows free_shared_domheap_page() to be purged.
> 
> There is, however, some fall-out from this simplification:
> 
> - alloc_domheap_page() will now call assign_pages() and run into the fact
>   that 'max_pages' is not set until some time after domain_create(). To
>   avoid an allocation failure, assign_pages() is modified to ignore the
>   max_pages limit if 'creation_finished' is false. That value is not set
>   to true until domain_unpause_by_systemcontroller() is called, and thus
>   the guest cannot run (and hence cause memory allocation) until
>   creation_finished is set to true.

But this check is also to guard against the tool stack (or possibly
the controlling stubdom) to cause excess allocation. I don't think
the checking should be undermined like this (and see also below).

Since certainly you've looked into this while creating the patch,
could you remind me why it is that this page needs to be owned (as
in its owner field set accordingly) by the guest? It's a helper
page only, after all.

> @@ -3034,12 +3034,22 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
>  if ( !cpu_has_vmx_virtualize_apic_accesses )
>  return 0;
>  
> -pg = alloc_domheap_page(d, MEMF_no_owner);
> +pg = alloc_domheap_page(d, 0);

Did you consider passing MEMF_no_refcount here, to avoid the
fiddling with assign_pages()? That'll in particular also
avoid ...

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2269,7 +2269,8 @@ int assign_pages(
>  
>  if ( !(memflags & MEMF_no_refcount) )
>  {
> -if ( unlikely((d->tot_pages + (1 << order)) > d->max_pages) )
> +if ( unlikely((d->tot_pages + (1 << order)) > d->max_pages) &&
> + d->creation_finished )
>  {
>  gprintk(XENLOG_INFO, "Over-allocation for domain %u: "
>  "%u > %u\n", d->domain_id,

... invoking domain_adjust_tot_pages() right below here, which
is wrong for helper pages like this one (as it reduces the
amount the domain is actually permitted to allocate).

Jan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 2/3] x86 / hvm: add domain_relinquish_resources() method

2020-01-22 Thread Durrant, Paul
> -Original Message-
> From: Jan Beulich 
> Sent: 22 January 2020 16:01
> To: Durrant, Paul 
> Cc: xen-devel@lists.xenproject.org; Andrew Cooper
> ; Wei Liu ; Roger Pau Monné
> ; Jun Nakajima ; Kevin Tian
> 
> Subject: Re: [PATCH 2/3] x86 / hvm: add domain_relinquish_resources()
> method
> 
> On 22.01.2020 16:56, Durrant, Paul wrote:
> >> -Original Message-
> >> From: Jan Beulich 
> >> Sent: 22 January 2020 15:51
> >> To: Durrant, Paul 
> >> Cc: xen-devel@lists.xenproject.org; Andrew Cooper
> >> ; Wei Liu ; Roger Pau Monné
> >> ; Jun Nakajima ; Kevin
> Tian
> >> 
> >> Subject: Re: [PATCH 2/3] x86 / hvm: add domain_relinquish_resources()
> >> method
> >>
> >> On 21.01.2020 13:00, Paul Durrant wrote:
> >>> There are two functions in hvm.c to deal with tear-down and a domain:
> >>> hvm_domain_relinquish_resources() and hvm_domain_destroy(). However,
> >> only
> >>> the latter has an associated method in 'hvm_funcs'. This patch adds
> >>> a method for the former and stub definitions for SVM and VMX.
> >>
> >> Why the stubs? Simply ...
> >>
> >>> --- a/xen/arch/x86/hvm/hvm.c
> >>> +++ b/xen/arch/x86/hvm/hvm.c
> >>> @@ -715,6 +715,8 @@ int hvm_domain_initialise(struct domain *d)
> >>>
> >>>  void hvm_domain_relinquish_resources(struct domain *d)
> >>>  {
> >>> +hvm_funcs.domain_relinquish_resources(d);
> >>
> >> ... stick a NULL check around this one. I also wonder whether, it
> >> being entirely new, this wouldn't better use alternative call
> >> patching right from the beginning. It's not the hottest path, but
> >> avoiding indirect calls seems quite desirable, especially when
> >> doing so is relatively cheap.
> >>
> >
> > I'd like it to align with the rest of the hvm_funcs so I'll add the
> > NULL check, but alternatives patch for all hvm_funcs seems like a
> > good thing I the longer term.
> 
> The frequently used ones have been converted already. Hence my
> suggestion to make new ones use that model from the beginning.
> 

Oh, ok. I'll go look for some examples.

  Paul
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 2/3] x86 / hvm: add domain_relinquish_resources() method

2020-01-22 Thread Jan Beulich
On 22.01.2020 16:56, Durrant, Paul wrote:
>> -Original Message-
>> From: Jan Beulich 
>> Sent: 22 January 2020 15:51
>> To: Durrant, Paul 
>> Cc: xen-devel@lists.xenproject.org; Andrew Cooper
>> ; Wei Liu ; Roger Pau Monné
>> ; Jun Nakajima ; Kevin Tian
>> 
>> Subject: Re: [PATCH 2/3] x86 / hvm: add domain_relinquish_resources()
>> method
>>
>> On 21.01.2020 13:00, Paul Durrant wrote:
>>> There are two functions in hvm.c to deal with tear-down and a domain:
>>> hvm_domain_relinquish_resources() and hvm_domain_destroy(). However,
>> only
>>> the latter has an associated method in 'hvm_funcs'. This patch adds
>>> a method for the former and stub definitions for SVM and VMX.
>>
>> Why the stubs? Simply ...
>>
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -715,6 +715,8 @@ int hvm_domain_initialise(struct domain *d)
>>>
>>>  void hvm_domain_relinquish_resources(struct domain *d)
>>>  {
>>> +hvm_funcs.domain_relinquish_resources(d);
>>
>> ... stick a NULL check around this one. I also wonder whether, it
>> being entirely new, this wouldn't better use alternative call
>> patching right from the beginning. It's not the hottest path, but
>> avoiding indirect calls seems quite desirable, especially when
>> doing so is relatively cheap.
>>
> 
> I'd like it to align with the rest of the hvm_funcs so I'll add the
> NULL check, but alternatives patch for all hvm_funcs seems like a
> good thing I the longer term.

The frequently used ones have been converted already. Hence my
suggestion to make new ones use that model from the beginning.

Jan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 2/3] x86 / hvm: add domain_relinquish_resources() method

2020-01-22 Thread Durrant, Paul
> -Original Message-
> From: Jan Beulich 
> Sent: 22 January 2020 15:51
> To: Durrant, Paul 
> Cc: xen-devel@lists.xenproject.org; Andrew Cooper
> ; Wei Liu ; Roger Pau Monné
> ; Jun Nakajima ; Kevin Tian
> 
> Subject: Re: [PATCH 2/3] x86 / hvm: add domain_relinquish_resources()
> method
> 
> On 21.01.2020 13:00, Paul Durrant wrote:
> > There are two functions in hvm.c to deal with tear-down and a domain:
> > hvm_domain_relinquish_resources() and hvm_domain_destroy(). However,
> only
> > the latter has an associated method in 'hvm_funcs'. This patch adds
> > a method for the former and stub definitions for SVM and VMX.
> 
> Why the stubs? Simply ...
> 
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -715,6 +715,8 @@ int hvm_domain_initialise(struct domain *d)
> >
> >  void hvm_domain_relinquish_resources(struct domain *d)
> >  {
> > +hvm_funcs.domain_relinquish_resources(d);
> 
> ... stick a NULL check around this one. I also wonder whether, it
> being entirely new, this wouldn't better use alternative call
> patching right from the beginning. It's not the hottest path, but
> avoiding indirect calls seems quite desirable, especially when
> doing so is relatively cheap.
> 

I'd like it to align with the rest of the hvm_funcs so I'll add the NULL check, 
but alternatives patch for all hvm_funcs seems like a good thing I the longer 
term.

  Paul
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 2/3] x86 / hvm: add domain_relinquish_resources() method

2020-01-22 Thread Jan Beulich
On 21.01.2020 13:00, Paul Durrant wrote:
> There are two functions in hvm.c to deal with tear-down and a domain:
> hvm_domain_relinquish_resources() and hvm_domain_destroy(). However, only
> the latter has an associated method in 'hvm_funcs'. This patch adds
> a method for the former and stub definitions for SVM and VMX.

Why the stubs? Simply ...

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -715,6 +715,8 @@ int hvm_domain_initialise(struct domain *d)
>  
>  void hvm_domain_relinquish_resources(struct domain *d)
>  {
> +hvm_funcs.domain_relinquish_resources(d);

... stick a NULL check around this one. I also wonder whether, it
being entirely new, this wouldn't better use alternative call
patching right from the beginning. It's not the hottest path, but
avoiding indirect calls seems quite desirable, especially when
doing so is relatively cheap.

Jan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 1/3] x86 / vmx: make apic_access_mfn type-safe

2020-01-22 Thread Jan Beulich
On 22.01.2020 15:05, Andrew Cooper wrote:
> On 21/01/2020 12:00, Paul Durrant wrote:
>> Use mfn_t rather than unsigned long and change previous tests against 0 to
>> tests against INVALID_MFN (also introducing initialization to that value).
>>
>> Signed-off-by: Paul Durrant 
> 
> I'm afraid this breaks the idempotency of vmx_free_vlapic_mapping(),
> which gets in the way of domain/vcpu create/destroy cleanup.
> 
> Its fine to use 0 as the sentinel.

And with this adjustment
Reviewed-by: Jan Beulich 

Jan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2 4/5] x86/boot: Simplify pagetable manipulation loops

2020-01-22 Thread Andrew Cooper
On 20/01/2020 10:46, Jan Beulich wrote:
> On 17.01.2020 21:42, Andrew Cooper wrote:
>> For __page_tables_{start,end} and L3 bootmap initialisation, the logic is
>> unnecesserily complicated owing to its attempt to use the LOOP instruction,
>> which results in an off-by-8 memory address owing to LOOP's termination
>> condition.
>>
>> Rewrite both loops for improved clarity and speed.
>>
>> Misc notes:
>>  * TEST $IMM, MEM can't macrofuse.  The loop has 0x1200 iterations, so pull
>>the $_PAGE_PRESENT constant out into a spare register to turn the TEST 
>> into
>>its %REG, MEM form, which can macrofuse.
>>  * Avoid the use of %fs-relative references.  %esi-relative is the more 
>> common
>>form in the code, and doesn't suffer an address generation overhead.
>>  * Avoid LOOP.  CMP/JB isn't microcoded and faster to execute in all cases.
>>  * For a 4 interation trivial loop, even compilers unroll these.  The
>>generated code size is a fraction larger, but this is init and the asm is
>>far easier to follow.
>>  * Reposition the l2=>l1 bootmap construction so the asm reads in pagetable
>>level order.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper 
> Reviewed-by: Jan Beulich 
> with two remarks/questions, but leaving it up to you whether
> you want to adjust the code:
>
>> --- a/xen/arch/x86/boot/head.S
>> +++ b/xen/arch/x86/boot/head.S
>> @@ -662,11 +662,17 @@ trampoline_setup:
>>  mov %edx,sym_fs(boot_tsc_stamp)+4
>>  
>>  /* Relocate pagetables to point at Xen's current location in 
>> memory. */
>> -mov $((__page_tables_end-__page_tables_start)/8),%ecx
>> -1:  testl   $_PAGE_PRESENT,sym_fs(__page_tables_start)-8(,%ecx,8)
>> +mov $_PAGE_PRESENT, %edx
>> +lea sym_esi(__page_tables_start), %eax
>> +lea sym_esi(__page_tables_end), %edi
>> +
>> +1:  testb   %dl, (%eax)  /* if page present */
> When it's an immediate, using TESTB is generally helpful because
> there's no (sign- or whatever-)extended immediate form of it.
> When using a register, I think it would generally be better to
> use native size, even if for register reads the partial register
> access penalty may (today) be zero.

I don't think it is plausible that partial access penalties will be
introduced.  Partial merge penalties occur as a consequence of making
register reads consistent under renaming, and implicit zeroing behaviour
exists to remove merge penalties.

Any 32bit or larger register write results in allocating a fresh
physical register entry, filling it with the data provided, and updating
the register allocation table.

For 16bit or 8bit writes, either the physical register file needs to
support RMW updates to an architectural register, or an extra set of
uops are needed to perform the merge in the pipeline itself, before
making a 32bit writeback.

What matters in this case is the size of the memory access, and whether
8bit vs 32bit within the same cache line will ever be different.

However, we should switch to a 32bit access here, so we don't intermix
an 8bit read with a 32bit RMW.  Memory disambiguation speculation will
have an easier time of it on some parts, which will make an overall
difference.

>
>> @@ -701,22 +707,27 @@ trampoline_setup:
>>  cmp %edx, %ecx
>>  jbe 1b
>>  
>> -/* Initialize L3 boot-map page directory entries. */
>> -lea 
>> __PAGE_HYPERVISOR+(L2_PAGETABLE_ENTRIES*8)*3+sym_esi(l2_bootmap),%eax
>> -mov $4,%ecx
>> -1:  mov %eax,sym_fs(l3_bootmap)-8(,%ecx,8)
>> -sub $(L2_PAGETABLE_ENTRIES*8),%eax
>> -loop1b
>> -
>> -/* Map the permanent trampoline page into l{1,2}_bootmap[]. */
>> +/* Map 4x l2_bootmap[] into l3_bootmap[0...3] */
>> +lea __PAGE_HYPERVISOR + sym_esi(l2_bootmap), %eax
>> +mov $PAGE_SIZE, %edx
>> +mov %eax, 0  + sym_esi(l3_bootmap)
>> +add %edx, %eax
>> +mov %eax, 8  + sym_esi(l3_bootmap)
>> +add %edx, %eax
>> +mov %eax, 16 + sym_esi(l3_bootmap)
>> +add %edx, %eax
>> +mov %eax, 24 + sym_esi(l3_bootmap)
> It took me a moment to realize the code is correct despite there
> not being any mention of PAGE_SIZE between each of the MOVs. As
> you don't view code size as a (primary) concern, perhaps worth
> using
>
> add $PAGE_SIZE, %eax
>
> everywhere, the more that this has a special, ModR/M-less
> encoding?

I had it that way first time around.  Sadly, $PAGE_SIZE can't be
expressed as imm8, which is why I switched to using %edx.

I'm not overly fussed either way, so given the confusion, I'll switch
back to this form.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v5 14/18] x86/mem_sharing: use default_access in add_to_physmap

2020-01-22 Thread Jan Beulich
On 21.01.2020 18:49, Tamas K Lengyel wrote:
> When plugging a hole in the target physmap don't use the access permission
> returned by __get_gfn_type_access as it can be non-sensical,

"can be" is too vague for my taste - it suggests there may also be cases
where a sensible value is returned, and hence it should be used. Could
you clarify this please? (The code change itself of course is simple and
mechanical enough to look okay.)

Jan

> leading to
> spurious vm_events being sent out for access violations at unexpected
> locations. Make use of p2m->default_access instead.
> 
> Signed-off-by: Tamas K Lengyel 
> ---
>  xen/arch/x86/mm/mem_sharing.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index eac8047c07..e3ddb63b4f 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1071,11 +1071,10 @@ int add_to_physmap(struct domain *sd, unsigned long 
> sgfn, shr_handle_t sh,
>  p2m_type_t smfn_type, cmfn_type;
>  struct gfn_info *gfn_info;
>  struct p2m_domain *p2m = p2m_get_hostp2m(cd);
> -p2m_access_t a;
>  struct two_gfns tg;
>  
>  get_two_gfns(sd, _gfn(sgfn), _type, NULL, ,
> - cd, _gfn(cgfn), _type, , , 0, , lock);
> + cd, _gfn(cgfn), _type, NULL, , 0, , lock);
>  
>  /* Get the source shared page, check and lock */
>  ret = XENMEM_SHARING_OP_S_HANDLE_INVALID;
> @@ -1110,7 +1109,7 @@ int add_to_physmap(struct domain *sd, unsigned long 
> sgfn, shr_handle_t sh,
>  }
>  
>  ret = p2m_set_entry(p2m, _gfn(cgfn), smfn, PAGE_ORDER_4K,
> -p2m_ram_shared, a);
> +p2m_ram_shared, p2m->default_access);
>  
>  /* Tempted to turn this into an assert */
>  if ( ret )
> 


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v5 12/18] x86/mem_sharing: Enable mem_sharing on first memop

2020-01-22 Thread Jan Beulich
On 21.01.2020 18:49, Tamas K Lengyel wrote:
> It is wasteful to require separate hypercalls to enable sharing on both the
> parent and the client domain during VM forking. To speed things up we enable
> sharing on the first memop in case it wasn't already enabled.
> 
> Signed-off-by: Tamas K Lengyel 

Reviewed-by: Jan Beulich 

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v5 11/18] x86/mem_sharing: Replace MEM_SHARING_DEBUG with gdprintk

2020-01-22 Thread Jan Beulich
On 21.01.2020 18:49, Tamas K Lengyel wrote:
> @@ -538,24 +535,26 @@ static int audit(void)
>  d = get_domain_by_id(g->domain);
>  if ( d == NULL )
>  {
> -MEM_SHARING_DEBUG("Unknown dom: %hu, for PFN=%lx, MFN=%lx\n",
> -  g->domain, g->gfn, mfn_x(mfn));
> +gdprintk(XENLOG_ERR,
> + "Unknown dom: %pd, for PFN=%lx, MFN=%lx\n",
> + d, g->gfn, mfn_x(mfn));

With "if ( d == NULL )" around this you hardly mean to pass d to
the function here. This is a case where you really need to stick
to logging a raw number.

>  errors++;
>  continue;
>  }
>  o_mfn = get_gfn_query_unlocked(d, g->gfn, );
>  if ( !mfn_eq(o_mfn, mfn) )
>  {
> -MEM_SHARING_DEBUG("Incorrect P2M for d=%hu, PFN=%lx."
> -  "Expecting MFN=%lx, got %lx\n",
> -  g->domain, g->gfn, mfn_x(mfn), 
> mfn_x(o_mfn));
> +gdprintk(XENLOG_ERR, "Incorrect P2M for d=%pd, PFN=%lx."

Here and elsewhere may I recommend dropping d= (or dom= further
down)?

> @@ -757,10 +756,10 @@ static int debug_mfn(mfn_t mfn)
>  return -EINVAL;
>  }
>  
> -MEM_SHARING_DEBUG(
> -"Debug page: MFN=%lx is ci=%lx, ti=%lx, owner=%pd\n",
> -mfn_x(page_to_mfn(page)), page->count_info,
> -page->u.inuse.type_info, page_get_owner(page));
> +gdprintk(XENLOG_ERR,
> + "Debug page: MFN=%lx is ci=%lx, ti=%lx, owner_id=%d\n",
> + mfn_x(page_to_mfn(page)), page->count_info,
> + page->u.inuse.type_info, page_get_owner(page)->domain_id);

As indicated before (I think), please prefer %pd and a struct domain
pointer over passing ->domain_id (at least one more instance further
down).

Jan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v5 03/18] x86/p2m: Allow p2m_get_page_from_gfn to return shared entries

2020-01-22 Thread Jan Beulich
On 21.01.2020 18:49, Tamas K Lengyel wrote:
> The owner domain of shared pages is dom_cow, use that for get_page
> otherwise the function fails to return the correct page.

I think this description needs improvement: The function does the
special shared page dance in one place (on the "fast path")
already. This wants mentioning, either because it was a mistake
to have it just there, or because a new need has appeared to also
have it on the "slow path".

> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -594,7 +594,10 @@ struct page_info *p2m_get_page_from_gfn(
>  if ( p2m_is_ram(*t) && mfn_valid(mfn) )
>  {
>  page = mfn_to_page(mfn);
> -if ( !get_page(page, p2m->domain) )
> +if ( !get_page(page, p2m->domain) &&
> + /* Page could be shared */
> + (!dom_cow || !p2m_is_shared(*t) ||
> +  !get_page(page, dom_cow)) )

While there may be a reason why on the fast path two get_page()
invocations are be necessary, couldn't you get away with just
one

if ( !get_page(page, !dom_cow || !p2m_is_shared(*t) ? p2m->domain
: dom_cow) )

at least here? It's also not really clear to me why here and
there we need "!dom_cow || !p2m_is_shared(*t)" - wouldn't
p2m_is_shared() return consistently "false" when !dom_cow ?

Jan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v5 01/18] x86/hvm: introduce hvm_copy_context_and_params

2020-01-22 Thread Jan Beulich
On 21.01.2020 18:49, Tamas K Lengyel wrote:
> Currently the hvm parameters are only accessible via the HVMOP hypercalls. In
> this patch we introduce a new function that can copy both the hvm context and
> parameters directly into a target domain. No functional changes in existing
> code.
> 
> Signed-off-by: Tamas K Lengyel 

In reply to my v4 comments you said "I don't have any objections to the
things you pointed out." Yet only one aspect was actually changed here.
It also doesn't help that there's no brief summary of the changes done
for v5. I guess I'm confused.

Jan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [RFC XEN PATCH 00/23] xen: beginning support for RISC-V

2020-01-22 Thread Andrew Cooper
On 22/01/2020 01:58, Bobby Eshleman wrote:
> Hey everybody,
>
> This is an RFC patchset for the very beginnings of adding RISC-V support
> to Xen.  This RFC is really just to start a dialogue about supporting
> RISC-V and align with the Xen project and community before going
> further.  For that reason, it is very rough and very incomplete. 
>
> My name is Bobby Eshleman, I'm a software engineer at
> Star Lab / Wind River on the ARM team, mostly having worked in the Linux
> kernel.  I've also been involved a good amount with Xen on ARM here,
> mostly dealing with tooling, deployment, and testing.  A lot of this
> patchset is heavily inspired by the Xen/ARM source code (particularly
> the early setup up code).
>
> Currently, this patchset really only sets up virtual memory for Xen and
> initializes UART to enable print output.  None of RISC-V's
> virtualization support has been implemented yet, although that is the
> next road to start going down.  Many functions only contain dummy
> implementations.  Many shortcuts have been taken and TODO's have been
> left accordingly.  It is very, very rough.  Be forewarned: you are quite
> likely to see some ungainly code here (despite my efforts to clean it up
> before sending this patchset out).  My intent with this RFC is to align
> early and gauge interest, as opposed to presenting a totally complete
> patchset.
>
> Because the ARM and RISC-V use cases will likely bear resemblance, the
> RISC-V port should probably respect the design considerations that have
> been laid out and respected by Xen on ARM for dom0less, safety
> certification, etc...  My inclination has been to initially target or
> prioritize dom0less (without excluding dom0full) and use the ARM
> dom0less implementation as a model to follow.  I'd love feedback on this
> point and on how the Xen project might envision a RISC-V implementation.
>
> This patchset has _some_ code for future support for 32-bit, but
> currently my focus is on 64-bit.
>
> Again, this is a very, very rough and totally incomplete patchset.  My
> goal here is just to gauge community interest, begin discussing what Xen
> on RISC-V may look like, receive feedback, and see if I'm heading in the
> right direction.
>
> My big questions are:
>   Does the Xen project have interest in RISC-V?

There is very large downstream interest in RISC-V.  So a definite yes.

>   What can be done to make the RISC-V port as upstreamable as
>   possible?
>   Any major pitfalls?
>
> It would be great to hear all of your feedback.

Both RISC-V and Power9 are frequently requested things, and both suffer
from the fact that, while we as a community would like them, the
upstream intersection of "people who know Xen" and "people who know
enough arch $X to do an initial port" is 0.

This series clearly demonstrates a change in the status quo, and I think
a lot of people will be happy.

To get RISC-V to being fully supported, we will ultimately need to get
hardware into the CI system, and an easy way for developers to test
changes.  Do you have any thoughts on production RISC-V hardware
(ideally server form factor) for the CI system, and/or dev boards which
might be available fairly cheaply?

How much time do you have to put towards the port?  Is this something in
your free time, or something you are doing as part of work?  Ultimately,
we are going to need to increase the level of RISC-V knowledge in the
community to maintain things in the future.

Other than that, very RFC series are entirely fine.  A good first step
would be simply to get the build working, and get some kind of
cross-compile build in CI, to make sure that we don't clobber the RISC-V
build with common or other-arch changes.

I hope this helps.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 1/7] libxl: add definition of INVALID_DOMID to the API

2020-01-22 Thread Roger Pau Monné
On Wed, Jan 22, 2020 at 02:44:40PM +, Paul Durrant wrote:
> Currently both xl and libxl have internal definitions of INVALID_DOMID
> which happen to be identical. However, for the purposes of describing the
> behaviour of libxl_domain_create_new/restore() it is useful to have a
> specified invalid value for a domain id.
> 
> This patch therefore moves the libxl definition from libxl_internal.h to
> libxl.h and removes the internal definition from xl_utils.h. The hardcoded
> '-1' passed back via domcreate_complete() is then updated to INVALID_DOMID
> and comment above libxl_domain_create_new/restore() is accordingly
> modified.

Urg, it's kind of ugly to add another definition of invalid domid when
there's already DOMID_INVALID in the public headers. I guess there's a
reason I'm missing for not using DOMID_INVALID instead of introducing
a new value?

If so could this be mentioned in the commit message?

Thanks, Roger.

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 7/7] xl: allow domid to be preserved on save/restore or migrate

2020-01-22 Thread Paul Durrant
This patch adds a '-D' command line option to save and migrate to allow
the domain id to be incorporated into the saved domain configuration and
hence be preserved.

Signed-off-by: Paul Durrant 
---
Cc: Ian Jackson 
Cc: Wei Liu 
Cc: Anthony PERARD 

v2:
 - Heavily re-worked based on new libxl_domain_create_info
---
 docs/man/xl.1.pod.in  | 14 ++
 tools/xl/xl.h |  1 +
 tools/xl/xl_cmdtable.c|  6 --
 tools/xl/xl_migrate.c | 15 ++-
 tools/xl/xl_saverestore.c | 19 ++-
 tools/xl/xl_vmcontrol.c   |  3 ++-
 6 files changed, 45 insertions(+), 13 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index d4b5e8e362..937eda690f 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -490,6 +490,13 @@ Display huge (!) amount of debug information during the 
migration process.
 
 Leave the domain on the receive side paused after migration.
 
+=item B<-D>
+
+Preserve the B in the domain coniguration that is transferred
+such that it will be identical on the destination host, unless that
+configuration is overridden using the B<-C> option. Note that it is not
+possible to use this option for a 'localhost' migration.
+
 =back
 
 =item B [I] I I
@@ -692,6 +699,13 @@ Leave the domain running after creating the snapshot.
 
 Leave the domain paused after creating the snapshot.
 
+=item B<-D>
+
+Preserve the B in the domain coniguration that is embedded in
+the state file such that it will be identical when the domain is restored,
+unless that configuration is overridden. (See the B operation
+above).
+
 =back
 
 =item B [I]
diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 2b4709efb2..06569c6c4a 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -99,6 +99,7 @@ struct save_file_header {
 #define SAVEFILE_BYTEORDER_VALUE ((uint32_t)0x01020304UL)
 
 void save_domain_core_begin(uint32_t domid,
+int preserve_domid,
 const char *override_config_file,
 uint8_t **config_data_r,
 int *config_len_r);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 3b302b2f20..08335394e5 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -153,7 +153,8 @@ struct cmd_spec cmd_table[] = {
   "[options]   []",
   "-h  Print this help.\n"
   "-c  Leave domain running after creating the snapshot.\n"
-  "-p  Leave domain paused after creating the snapshot."
+  "-p  Leave domain paused after creating the snapshot.\n"
+  "-D  Store the domain id in the configration."
 },
 { "migrate",
   _migrate, 0, 1,
@@ -167,7 +168,8 @@ struct cmd_spec cmd_table[] = {
   "-e  Do not wait in the background (on ) for the 
death\n"
   "of the domain.\n"
   "--debug Print huge (!) amount of debug during the migration 
process.\n"
-  "-p  Do not unpause domain after migrating it."
+  "-p  Do not unpause domain after migrating it.\n"
+  "-D  Preserve the domain id"
 },
 { "restore",
   _restore, 0, 1,
diff --git a/tools/xl/xl_migrate.c b/tools/xl/xl_migrate.c
index 22f0429b84..0813beb801 100644
--- a/tools/xl/xl_migrate.c
+++ b/tools/xl/xl_migrate.c
@@ -176,7 +176,8 @@ static void migrate_do_preamble(int send_fd, int recv_fd, 
pid_t child,
 
 }
 
-static void migrate_domain(uint32_t domid, const char *rune, int debug,
+static void migrate_domain(uint32_t domid, int preserve_domid,
+   const char *rune, int debug,
const char *override_config_file)
 {
 pid_t child = -1;
@@ -187,7 +188,7 @@ static void migrate_domain(uint32_t domid, const char 
*rune, int debug,
 uint8_t *config_data;
 int config_len, flags = LIBXL_SUSPEND_LIVE;
 
-save_domain_core_begin(domid, override_config_file,
+save_domain_core_begin(domid, preserve_domid, override_config_file,
_data, _len);
 
 if (!config_len) {
@@ -537,13 +538,14 @@ int main_migrate(int argc, char **argv)
 char *rune = NULL;
 char *host;
 int opt, daemonize = 1, monitor = 1, debug = 0, pause_after_migration = 0;
+int preserve_domid = 0;
 static struct option opts[] = {
 {"debug", 0, 0, 0x100},
 {"live", 0, 0, 0x200},
 COMMON_LONG_OPTS
 };
 
-SWITCH_FOREACH_OPT(opt, "FC:s:ep", opts, "migrate", 2) {
+SWITCH_FOREACH_OPT(opt, "FC:s:epD", opts, "migrate", 2) {
 case 'C':
 config_filename = optarg;
 break;
@@ -560,6 +562,9 @@ int main_migrate(int argc, char **argv)
 case 'p':
 pause_after_migration = 1;
 break;
+case 'D':
+preserve_domid = 1;
+break;
 case 0x100: /* --debug */
 debug = 1;
 break;
@@ -596,7 +601,7 @@ int main_migrate(int argc, char **argv)
   pause_after_migration ? " -p" : "");
 }
 
-

[Xen-devel] [PATCH v4 6/7] xl.conf: introduce 'domid_policy'

2020-01-22 Thread Paul Durrant
This patch adds a new global 'domid_policy' configuration option to decide
how domain id values are allocated for new domains. It may be set to one of
two values:

"xen", the default value, will cause an invalid domid value to be passed
to do_domain_create() preserving the existing behaviour of having Xen
choose the domid value during domain_create().

"random" will cause the special RANDOM_DOMID value to be passed to
do_domain_create() such that libxl__domain_make() will select a random
domid value.

Signed-off-by: Paul Durrant 
Acked-by: Ian Jackson 
---
Cc: Wei Liu 
Cc: Anthony PERARD 

v2:
 - New in v2
---
 docs/man/xl.conf.5.pod  | 10 ++
 tools/examples/xl.conf  |  4 
 tools/xl/xl.c   | 10 ++
 tools/xl/xl.h   |  1 +
 tools/xl/xl_vmcontrol.c |  2 ++
 5 files changed, 27 insertions(+)

diff --git a/docs/man/xl.conf.5.pod b/docs/man/xl.conf.5.pod
index 207ab3e77a..41ee428744 100644
--- a/docs/man/xl.conf.5.pod
+++ b/docs/man/xl.conf.5.pod
@@ -45,6 +45,16 @@ The semantics of each C defines which form of C 
is required.
 
 =over 4
 
+=item B
+
+Determines how domain-id is set when creating a new domain.
+
+If set to "xen" then the hypervisor will allocate new domain-id values on a 
sequential basis.
+
+If set to "random" then a random domain-id value will be chosen.
+
+Default: "xen"
+
 =item B
 
 If set to "on" then C will automatically reduce the amount of
diff --git a/tools/examples/xl.conf b/tools/examples/xl.conf
index 0446deb304..95f2f442d3 100644
--- a/tools/examples/xl.conf
+++ b/tools/examples/xl.conf
@@ -1,5 +1,9 @@
 ## Global XL config file ##
 
+# Set domain-id policy. "xen" means that the hypervisor will choose the
+# id of a new domain. "random" means that a random value will be chosen.
+#domid_policy="xen"
+
 # Control whether dom0 is ballooned down when xen doesn't have enough
 # free memory to create a domain.  "auto" means only balloon if dom0
 # starts with all the host's memory.
diff --git a/tools/xl/xl.c b/tools/xl/xl.c
index 3d4390a46d..2a5ddd4390 100644
--- a/tools/xl/xl.c
+++ b/tools/xl/xl.c
@@ -54,6 +54,7 @@ int claim_mode = 1;
 bool progress_use_cr = 0;
 int max_grant_frames = -1;
 int max_maptrack_frames = -1;
+libxl_domid domid_policy = INVALID_DOMID;
 
 xentoollog_level minmsglevel = minmsglevel_default;
 
@@ -228,6 +229,15 @@ static void parse_global_config(const char *configfile,
 else
 libxl_bitmap_set_any(_pv_affinity_mask);
 
+if (!xlu_cfg_get_string (config, "domid_policy", , 0)) {
+if (!strcmp(buf, "xen"))
+domid_policy = INVALID_DOMID;
+else if (!strcmp(buf, "random"))
+domid_policy = RANDOM_DOMID;
+else
+fprintf(stderr, "invalid domid_policy option");
+}
+
 xlu_cfg_destroy(config);
 }
 
diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 60bdad8ffb..2b4709efb2 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -283,6 +283,7 @@ extern int max_maptrack_frames;
 extern libxl_bitmap global_vm_affinity_mask;
 extern libxl_bitmap global_hvm_affinity_mask;
 extern libxl_bitmap global_pv_affinity_mask;
+extern libxl_domid domid_policy;
 
 enum output_format {
 OUTPUT_FORMAT_JSON,
diff --git a/tools/xl/xl_vmcontrol.c b/tools/xl/xl_vmcontrol.c
index e520b1da79..39292acfe6 100644
--- a/tools/xl/xl_vmcontrol.c
+++ b/tools/xl/xl_vmcontrol.c
@@ -899,6 +899,8 @@ start:
 autoconnect_console_how = 0;
 }
 
+d_config.c_info.domid = domid_policy;
+
 if ( restoring ) {
 libxl_domain_restore_params params;
 
-- 
2.20.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 5/7] libxl: allow creation of domains with a specified or random domid

2020-01-22 Thread Paul Durrant
This patch adds a 'domid' field to libxl_domain_create_info and then
modifies libxl__domain_make() to have Xen use that value if it is valid.
If the domid value is invalid then Xen will choose the domid, as before,
unless the value is the new special RANDOM_DOMID value added to the API.
This value instructs libxl__domain_make() to choose a random domid value
for Xen to use.

If Xen determines that a domid specified to or chosen by
libxl__domain_make() co-incides with an existing domain then the create
operation will fail. In this case, if RANDOM_DOMID was specified to
libxl__domain_make() then a new random value will be chosen and the create
operation will be re-tried, otherwise libxl__domain_make() will fail.

After Xen has successfully created a new domain, libxl__domain_make() will
check whether its domid matches any recently used domid values. If it does
then the domain will be destroyed. If the domid used in creation was
specified to libxl__domain_make() then it will fail at this point,
otherwise the create operation will be re-tried with either a new random
or Xen-selected domid value.

NOTE: libxl__logv() is also modified to only log valid domid values in
  messages rather than any domid, valid or otherwise, that is not
  INVALID_DOMID.

Signed-off-by: Paul Durrant 
---
Cc: Ian Jackson 
Cc: Wei Liu 
Cc: Anthony PERARD 
Cc: Andrew Cooper 
Cc: George Dunlap 
Cc: Jan Beulich 
Cc: Julien Grall 
Cc: Konrad Rzeszutek Wilk 
Cc: Stefano Stabellini 
Cc: Jason Andryuk 

v4:
 - Not added Jason's R-b because of substantial change
 - Check for recent domid *after* creation
 - Re-worked commit comment

v3:
 - Added DOMID_MASK definition used to mask randomized values
 - Use stack variable to avoid assuming endianness

v2:
 - Re-worked to use a value from libxl_domain_create_info
---
 tools/libxl/libxl.h  |  9 
 tools/libxl/libxl_create.c   | 43 +++-
 tools/libxl/libxl_internal.c |  2 +-
 tools/libxl/libxl_types.idl  |  1 +
 xen/include/public/xen.h |  3 +++
 5 files changed, 56 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 1d235ecb1c..31c6f4b11a 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1268,6 +1268,14 @@ void libxl_mac_copy(libxl_ctx *ctx, libxl_mac *dst, 
const libxl_mac *src);
  */
 #define LIBXL_HAVE_DOMAIN_NEED_MEMORY_CONFIG
 
+/*
+ * LIBXL_HAVE_CREATEINFO_DOMID
+ *
+ * libxl_domain_create_new() and libxl_domain_create_restore() will use
+ * a domid specified in libxl_domain_create_info().
+ */
+#define LIBXL_HAVE_CREATEINFO_DOMID
+
 typedef char **libxl_string_list;
 void libxl_string_list_dispose(libxl_string_list *sl);
 int libxl_string_list_length(const libxl_string_list *sl);
@@ -1528,6 +1536,7 @@ int libxl_ctx_free(libxl_ctx *ctx /* 0 is OK */);
 /* domain related functions */
 
 #define INVALID_DOMID ~0
+#define RANDOM_DOMID (INVALID_DOMID - 1)
 
 /* If the result is ERROR_ABORTED, the domain may or may not exist
  * (in a half-created state).  *domid will be valid and will be the
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index e4aab4fd1c..593bf9d225 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -600,9 +600,50 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config 
*d_config,
 goto out;
 }
 
-ret = xc_domain_create(ctx->xch, domid, );
+if (libxl_domid_valid_guest(info->domid))
+*domid = info->domid;
+
+again:
+for (;;) {
+if (info->domid == RANDOM_DOMID) {
+uint16_t v;
+
+ret = libxl__random_bytes(gc, (void *), sizeof(v));
+if (ret < 0)
+break;
+
+v &= DOMID_MASK;
+if (!libxl_domid_valid_guest(v))
+continue;
+
+*domid = v;
+}
+
+ret = xc_domain_create(ctx->xch, domid, );
+if (ret == 0 || errno != EEXIST || info->domid != RANDOM_DOMID)
+break;
+}
+
 if (ret < 0) {
 LOGED(ERROR, *domid, "domain creation fail");
+*domid = INVALID_DOMID;
+rc = ERROR_FAIL;
+goto out;
+}
+
+if (libxl__is_domid_recent(gc, *domid)) {
+if (*domid == info->domid) /* domid was specified */
+LOGED(ERROR, *domid, "domain id recently used");
+
+ret = xc_domain_destroy(ctx->xch, *domid);
+if (!ret) {
+*domid = INVALID_DOMID;
+
+/* If the domid was not specified then have another go */
+if (!libxl_domid_valid_guest(info->domid))
+goto again;
+}
+
 rc = ERROR_FAIL;
 goto out;
 }
diff --git a/tools/libxl/libxl_internal.c b/tools/libxl/libxl_internal.c
index bbd4c6cba9..d93a75533f 100644
--- a/tools/libxl/libxl_internal.c
+++ 

[Xen-devel] [PATCH v4 2/7] libxl_create: make 'soft reset' explicit

2020-01-22 Thread Paul Durrant
The 'soft reset' code path in libxl__domain_make() is currently taken if a
valid domid is passed into the function. A subsequent patch will enable
higher levels of the toolstack to determine the domid of newly created or
restored domains and therefore this criteria for choosing 'soft reset'
will no longer be usable.

This patch adds an extra boolean option to libxl__domain_make() to specify
whether it is being invoked in soft reset context and appropriately
modifies callers to choose the right value. To facilitate this, a new
'soft_reset' boolean field is added to struct libxl__domain_create_state
and the 'domid_soft_reset' field is renamed to 'domid' in anticipation of
its wider remit. For the moment do_domain_create() will always set
domid to INVALID_DOMID and hence we can add an assertion into
libxl__domain_create() that, if it is not called in soft reset context,
the passed in domid is exactly that value.

Whilst in the neighbourhood, some checks of 'restore_fd > -1' have been
replaced by 'restore_fd >= 0' to be more conventional and consistent with
checks of 'restore_fd < 0'.

Signed-off-by: Paul Durrant 
Acked-by: Ian Jackson 
---
Cc: Wei Liu 
Cc: Anthony PERARD 
---
 tools/libxl/libxl_create.c   | 56 ++--
 tools/libxl/libxl_dm.c   |  2 +-
 tools/libxl/libxl_internal.h |  5 ++--
 3 files changed, 38 insertions(+), 25 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 8a1bff6cd3..73a2883357 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -538,7 +538,7 @@ out:
 
 int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
libxl__domain_build_state *state,
-   uint32_t *domid)
+   uint32_t *domid, bool soft_reset)
 {
 libxl_ctx *ctx = libxl__gc_owner(gc);
 int ret, rc, nb_vm;
@@ -555,14 +555,15 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config 
*d_config,
 libxl_domain_create_info *info = _config->c_info;
 libxl_domain_build_info *b_info = _config->b_info;
 
+assert(soft_reset || *domid == INVALID_DOMID);
+
 uuid_string = libxl__uuid2string(gc, info->uuid);
 if (!uuid_string) {
 rc = ERROR_NOMEM;
 goto out;
 }
 
-/* Valid domid here means we're soft resetting. */
-if (!libxl_domid_valid_guest(*domid)) {
+if (!soft_reset) {
 struct xen_domctl_createdomain create = {
 .ssidref = info->ssidref,
 .max_vcpus = b_info->max_vcpus,
@@ -611,6 +612,14 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config 
*d_config,
 goto out;
 }
 
+/*
+ * If soft_reset is set the the domid will have been valid on entry.
+ * If it was not set then xc_domain_create() should have assigned a
+ * valid value. Either way, if we reach this point, domid should be
+ * valid.
+ */
+assert(libxl_domid_valid_guest(*domid));
+
 ret = xc_cpupool_movedomain(ctx->xch, info->poolid, *domid);
 if (ret < 0) {
 LOGED(ERROR, *domid, "domain move fail");
@@ -1091,13 +1100,14 @@ static void initiate_domain_create(libxl__egc *egc,
 libxl_domain_config *const d_config = dcs->guest_config;
 const int restore_fd = dcs->restore_fd;
 
-domid = dcs->domid_soft_reset;
+domid = dcs->domid;
 libxl__domain_build_state_init(>build_state);
 
 ret = libxl__domain_config_setdefault(gc,d_config,domid);
 if (ret) goto error_out;
 
-ret = libxl__domain_make(gc, d_config, >build_state, );
+ret = libxl__domain_make(gc, d_config, >build_state, ,
+ dcs->soft_reset);
 if (ret) {
 LOGD(ERROR, domid, "cannot make domain: %d", ret);
 dcs->guest_domid = domid;
@@ -1141,7 +1151,7 @@ static void initiate_domain_create(libxl__egc *egc,
 if (ret)
 goto error_out;
 
-if (restore_fd >= 0 || dcs->domid_soft_reset != INVALID_DOMID) {
+if (restore_fd >= 0 || dcs->soft_reset) {
 LOGD(DEBUG, domid, "restoring, not running bootloader");
 domcreate_bootloader_done(egc, >bl, 0);
 } else  {
@@ -1217,7 +1227,7 @@ static void domcreate_bootloader_done(libxl__egc *egc,
 dcs->sdss.dm.callback = domcreate_devmodel_started;
 dcs->sdss.callback = domcreate_devmodel_started;
 
-if (restore_fd < 0 && dcs->domid_soft_reset == INVALID_DOMID) {
+if (restore_fd < 0 && !dcs->soft_reset) {
 rc = libxl__domain_build(gc, domid, dcs);
 domcreate_rebuild_done(egc, dcs, rc);
 return;
@@ -1827,7 +1837,7 @@ static int do_domain_create(libxl_ctx *ctx, 
libxl_domain_config *d_config,
 libxl_domain_config_copy(ctx, >dcs.guest_config_saved, d_config);
 cdcs->dcs.restore_fd = cdcs->dcs.libxc_fd = restore_fd;
 cdcs->dcs.send_back_fd = send_back_fd;
-if (restore_fd > -1) {
+if (restore_fd >= 0) {
 cdcs->dcs.restore_params = *params;
 rc = libxl__fd_flags_modify_save(gc, 

[Xen-devel] [PATCH v4 0/7] xl/libxl: domid allocation/preservation changes

2020-01-22 Thread Paul Durrant
This series was previously named "xl/libxl: allow creation of domains with
a specified domid".

Paul Durrant (7):
  libxl: add definition of INVALID_DOMID to the API
  libxl_create: make 'soft reset' explicit
  libxl: generalise libxl__domain_userdata_lock()
  libxl: add infrastructure to track and query 'recent' domids
  libxl: allow creation of domains with a specified or random domid
  xl.conf: introduce 'domid_policy'
  xl: allow domid to be preserved on save/restore or migrate

 docs/man/xl.1.pod.in  |  14 
 docs/man/xl.conf.5.pod|  10 +++
 tools/examples/xl.conf|   4 +
 tools/helpers/xen-init-dom0.c |  30 +++
 tools/libxl/libxl.h   |  15 +++-
 tools/libxl/libxl_create.c| 105 ++--
 tools/libxl/libxl_device.c|   4 +-
 tools/libxl/libxl_disk.c  |  12 +--
 tools/libxl/libxl_dm.c|   2 +-
 tools/libxl/libxl_dom.c   |  12 +--
 tools/libxl/libxl_domain.c| 149 --
 tools/libxl/libxl_internal.c  |  67 +--
 tools/libxl/libxl_internal.h  |  30 +--
 tools/libxl/libxl_mem.c   |   8 +-
 tools/libxl/libxl_pci.c   |   4 +-
 tools/libxl/libxl_types.idl   |   1 +
 tools/libxl/libxl_usb.c   |   8 +-
 tools/xl/xl.c |  10 +++
 tools/xl/xl.h |   2 +
 tools/xl/xl_cmdtable.c|   6 +-
 tools/xl/xl_migrate.c |  15 ++--
 tools/xl/xl_saverestore.c |  19 +++--
 tools/xl/xl_utils.h   |   2 -
 tools/xl/xl_vmcontrol.c   |   3 +
 xen/include/public/xen.h  |   3 +
 25 files changed, 432 insertions(+), 103 deletions(-)

-- 
2.20.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 4/7] libxl: add infrastructure to track and query 'recent' domids

2020-01-22 Thread Paul Durrant
A domid is considered recent if the domain it represents was destroyed
less than a specified number of seconds ago. The number can be set using
the environment variable LIBXL_DOMID_REUSE_TIMEOUT. If the variable does
not exist then a default value of 60s is used.

Whenever a domain is destroyed, a time-stamped record will be written into
a history file (/var/run/xen/domid-history). To avoid the history file
growing too large, any records with time-stamps that indicate that the
age of a domid has exceeded the re-use timeout will also be purged.

A new utility function, libxl__is_recent_domid(), has been added. This
function reads the same history file checking whether a specified domid
has a record that does not exceed the re-use timeout. Since this utility
function does not write to the file, no records are actually purged by it.

NOTE: The history file is purged on boot to it is safe to use
  CLOCK_MONOTONIC as a time source.

Signed-off-by: Paul Durrant 
---
Cc: Ian Jackson 
Cc: Wei Liu 
Cc: Anthony PERARD 

v4:
 - Use new generalised libxl__flock
 - Don't read and write the same file
 - Use 'recent' rather than 'retired'
 - Add code into xen-init-dom0 to delete an old history file at boot

v2:
 - New in v2
---
 tools/helpers/xen-init-dom0.c |  30 
 tools/libxl/libxl.h   |   2 +
 tools/libxl/libxl_domain.c| 135 ++
 tools/libxl/libxl_internal.c  |  10 +++
 tools/libxl/libxl_internal.h  |  14 
 5 files changed, 191 insertions(+)

diff --git a/tools/helpers/xen-init-dom0.c b/tools/helpers/xen-init-dom0.c
index a1e5729458..56f69ab66f 100644
--- a/tools/helpers/xen-init-dom0.c
+++ b/tools/helpers/xen-init-dom0.c
@@ -12,6 +12,32 @@
 #define DOMNAME_PATH   "/local/domain/0/name"
 #define DOMID_PATH "/local/domain/0/domid"
 
+int clear_domid_history(void)
+{
+int rc = 1;
+xentoollog_logger_stdiostream *logger;
+libxl_ctx *ctx;
+
+logger = xtl_createlogger_stdiostream(stderr, XTL_ERROR, 0);
+if (!logger)
+return 1;
+
+if (libxl_ctx_alloc(, LIBXL_VERSION, 0,
+(xentoollog_logger *)logger)) {
+fprintf(stderr, "cannot init libxl context\n");
+goto outlog;
+}
+
+if (!libxl_clear_domid_history(ctx))
+rc = 0;
+
+libxl_ctx_free(ctx);
+
+outlog:
+xtl_logger_destroy((xentoollog_logger *)logger);
+return rc;
+}
+
 int main(int argc, char **argv)
 {
 int rc;
@@ -70,6 +96,10 @@ int main(int argc, char **argv)
 if (rc)
 goto out;
 
+rc = clear_domid_history();
+if (rc)
+goto out;
+
 /* Write xenstore entries. */
 if (!xs_write(xsh, XBT_NULL, DOMID_PATH, "0", strlen("0"))) {
 fprintf(stderr, "cannot set domid for Dom0\n");
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 18c1a2d6bf..1d235ecb1c 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -2657,6 +2657,8 @@ static inline int 
libxl_qemu_monitor_command_0x041200(libxl_ctx *ctx,
 
 #include 
 
+int libxl_clear_domid_history(libxl_ctx *ctx);
+
 #endif /* LIBXL_H */
 
 /*
diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
index 1bdb1615d8..d424a8542f 100644
--- a/tools/libxl/libxl_domain.c
+++ b/tools/libxl/libxl_domain.c
@@ -1268,6 +1268,140 @@ static void dm_destroy_cb(libxl__egc *egc,
 libxl__devices_destroy(egc, >drs);
 }
 
+static unsigned int libxl__get_domid_reuse_timeout(void)
+{
+const char *env_timeout = getenv("LIBXL_DOMID_REUSE_TIMEOUT");
+
+return env_timeout ? strtol(env_timeout, NULL, 0) :
+LIBXL_DOMID_REUSE_TIMEOUT;
+}
+
+char *libxl__domid_history_path(libxl__gc *gc, const char *suffix)
+{
+return GCSPRINTF("%s/domid-history%s", libxl__run_dir_path(),
+ suffix ?: "");
+}
+
+int libxl_clear_domid_history(libxl_ctx *ctx)
+{
+GC_INIT(ctx);
+char *path;
+int rc = ERROR_FAIL;
+
+path = libxl__domid_history_path(gc, NULL);
+if (!path)
+goto out;
+
+if (unlink(path) < 0 && errno != ENOENT) {
+LOGE(ERROR, "failed to remove '%s'\n", path);
+goto out;
+}
+
+rc = 0;
+
+out:
+GC_FREE;
+return rc;
+}
+
+static void libxl__mark_domid_recent(libxl__gc *gc, uint32_t domid)
+{
+long timeout = libxl__get_domid_reuse_timeout();
+libxl__flock *lock;
+char *old, *new;
+FILE *of = NULL, *nf = NULL;
+struct timespec ts;
+char line[64];
+
+lock = libxl__lock_domid_history(gc);
+if (!lock) {
+LOGED(ERROR, domid, "failed to acquire lock");
+goto out;
+}
+
+old = libxl__domid_history_path(gc, NULL);
+of = fopen(old, "r");
+if (!of && errno != ENOENT)
+LOGED(WARN, domid, "failed to open '%s'", old);
+
+new = libxl__domid_history_path(gc, ".new");
+nf = fopen(new, "a");
+if (!nf) {
+LOGED(ERROR, domid, "failed to open '%s'", new);
+goto out;
+}
+
+clock_gettime(CLOCK_MONOTONIC, );
+
+while (of && fgets(line, sizeof(line), 

[Xen-devel] [PATCH v4 1/7] libxl: add definition of INVALID_DOMID to the API

2020-01-22 Thread Paul Durrant
Currently both xl and libxl have internal definitions of INVALID_DOMID
which happen to be identical. However, for the purposes of describing the
behaviour of libxl_domain_create_new/restore() it is useful to have a
specified invalid value for a domain id.

This patch therefore moves the libxl definition from libxl_internal.h to
libxl.h and removes the internal definition from xl_utils.h. The hardcoded
'-1' passed back via domcreate_complete() is then updated to INVALID_DOMID
and comment above libxl_domain_create_new/restore() is accordingly
modified.

Signed-off-by: Paul Durrant 
Acked-by: Ian Jackson 
---
Cc: Wei Liu 
Cc: Anthony PERARD 
---
 tools/libxl/libxl.h  | 4 +++-
 tools/libxl/libxl_create.c   | 2 +-
 tools/libxl/libxl_internal.h | 1 -
 tools/xl/xl_utils.h  | 2 --
 4 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 54abb9db1f..18c1a2d6bf 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1527,9 +1527,11 @@ int libxl_ctx_free(libxl_ctx *ctx /* 0 is OK */);
 
 /* domain related functions */
 
+#define INVALID_DOMID ~0
+
 /* If the result is ERROR_ABORTED, the domain may or may not exist
  * (in a half-created state).  *domid will be valid and will be the
- * domain id, or -1, as appropriate */
+ * domain id, or INVALID_DOMID, as appropriate */
 
 int libxl_domain_create_new(libxl_ctx *ctx, libxl_domain_config *d_config,
 uint32_t *domid,
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 69fceff061..8a1bff6cd3 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -1773,7 +1773,7 @@ static void domcreate_complete(libxl__egc *egc,
 libxl__domain_destroy(egc, >dds);
 return;
 }
-dcs->guest_domid = -1;
+dcs->guest_domid = INVALID_DOMID;
 }
 dcs->callback(egc, dcs, rc, dcs->guest_domid);
 }
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index d919f91882..f2f753c72b 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -121,7 +121,6 @@
 #define STUBDOM_SPECIAL_CONSOLES 3
 #define TAP_DEVICE_SUFFIX "-emu"
 #define DOMID_XS_PATH "domid"
-#define INVALID_DOMID ~0
 #define PVSHIM_BASENAME "xen-shim"
 #define PVSHIM_CMDLINE "pv-shim console=xen,pv"
 
diff --git a/tools/xl/xl_utils.h b/tools/xl/xl_utils.h
index 7b9ccca30a..d98b419f10 100644
--- a/tools/xl/xl_utils.h
+++ b/tools/xl/xl_utils.h
@@ -52,8 +52,6 @@
 #define STR_SKIP_PREFIX( a, b ) \
 ( STR_HAS_PREFIX(a, b) ? ((a) += strlen(b), 1) : 0 )
 
-#define INVALID_DOMID ~0
-
 #define LOG(_f, _a...)   dolog(__FILE__, __LINE__, __func__, _f "\n", ##_a)
 
 /*
-- 
2.20.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 3/7] libxl: generalise libxl__domain_userdata_lock()

2020-01-22 Thread Paul Durrant
This function implements a file-based lock with a file name generated
from a domid.

This patch splits it into two, generalising the core of the locking code
into a new libxl__lock_file() function which operates on a specified file,
leaving just the file name generation in libxl__domain_userdata_lock().

This patch also generalises libxl__unlock_domain_userdata() to
libxl__unlock_file() and modifies all call-sites.

Suggested-by: Ian Jackson 
Signed-off-by: Paul Durrant 
---
Cc: Ian Jackson 
Cc: Wei Liu 
Cc: Anthony PERARD 

v4:
 - New in v4.
---
 tools/libxl/libxl_create.c   |  4 +--
 tools/libxl/libxl_device.c   |  4 +--
 tools/libxl/libxl_disk.c | 12 
 tools/libxl/libxl_dom.c  | 12 
 tools/libxl/libxl_domain.c   | 14 -
 tools/libxl/libxl_internal.c | 55 +---
 tools/libxl/libxl_internal.h | 10 ---
 tools/libxl/libxl_mem.c  |  8 +++---
 tools/libxl/libxl_pci.c  |  4 +--
 tools/libxl/libxl_usb.c  |  8 +++---
 10 files changed, 72 insertions(+), 59 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 73a2883357..e4aab4fd1c 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -1755,7 +1755,7 @@ static void domcreate_complete(libxl__egc *egc,
 bool retain_domain = !rc || rc == ERROR_ABORTED;
 
 if (retain_domain) {
-libxl__domain_userdata_lock *lock;
+libxl__flock *lock;
 
 /* Note that we hold CTX lock at this point so only need to
  * take data store lock
@@ -1769,7 +1769,7 @@ static void domcreate_complete(libxl__egc *egc,
 (gc, dcs->guest_domid, d_config_saved);
 if (!rc)
 rc = cfg_rc;
-libxl__unlock_domain_userdata(lock);
+libxl__unlock_file(lock);
 }
 }
 
diff --git a/tools/libxl/libxl_device.c b/tools/libxl/libxl_device.c
index 9d05d2fd13..0381c5d509 100644
--- a/tools/libxl/libxl_device.c
+++ b/tools/libxl/libxl_device.c
@@ -1850,7 +1850,7 @@ void libxl__device_add_async(libxl__egc *egc, uint32_t 
domid,
 xs_transaction_t t = XBT_NULL;
 libxl_domain_config d_config;
 void *type_saved;
-libxl__domain_userdata_lock *lock = NULL;
+libxl__flock *lock = NULL;
 int rc;
 
 libxl_domain_config_init(_config);
@@ -1946,7 +1946,7 @@ void libxl__device_add_async(libxl__egc *egc, uint32_t 
domid,
 
 out:
 libxl__xs_transaction_abort(gc, );
-if (lock) libxl__unlock_domain_userdata(lock);
+if (lock) libxl__unlock_file(lock);
 dt->dispose(type_saved);
 libxl_domain_config_dispose(_config);
 aodev->rc = rc;
diff --git a/tools/libxl/libxl_disk.c b/tools/libxl/libxl_disk.c
index 64a6691424..e0de1c5781 100644
--- a/tools/libxl/libxl_disk.c
+++ b/tools/libxl/libxl_disk.c
@@ -245,7 +245,7 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
 xs_transaction_t t = XBT_NULL;
 libxl_domain_config d_config;
 libxl_device_disk disk_saved;
-libxl__domain_userdata_lock *lock = NULL;
+libxl__flock *lock = NULL;
 
 libxl_domain_config_init(_config);
 libxl_device_disk_init(_saved);
@@ -436,7 +436,7 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
 
 out:
 libxl__xs_transaction_abort(gc, );
-if (lock) libxl__unlock_domain_userdata(lock);
+if (lock) libxl__unlock_file(lock);
 libxl_device_disk_dispose(_saved);
 libxl_domain_config_dispose(_config);
 aodev->rc = rc;
@@ -794,7 +794,7 @@ static void cdrom_insert_ejected(libxl__egc *egc,
 {
 EGC_GC;
 libxl__cdrom_insert_state *cis = CONTAINER_OF(qmp, *cis, qmp);
-libxl__domain_userdata_lock *data_lock = NULL;
+libxl__flock *data_lock = NULL;
 libxl__device device;
 const char *be_path, *libxl_path;
 flexarray_t *empty = NULL;
@@ -896,7 +896,7 @@ static void cdrom_insert_ejected(libxl__egc *egc,
 out:
 libxl__xs_transaction_abort(gc, );
 libxl_domain_config_dispose(_config);
-if (data_lock) libxl__unlock_domain_userdata(data_lock);
+if (data_lock) libxl__unlock_file(data_lock);
 if (rc) {
 cdrom_insert_done(egc, cis, rc); /* must be last */
 } else if (!has_callback) {
@@ -951,7 +951,7 @@ static void cdrom_insert_inserted(libxl__egc *egc,
 {
 EGC_GC;
 libxl__cdrom_insert_state *cis = CONTAINER_OF(qmp, *cis, qmp);
-libxl__domain_userdata_lock *data_lock = NULL;
+libxl__flock *data_lock = NULL;
 libxl_domain_config d_config;
 flexarray_t *insert = NULL;
 xs_transaction_t t = XBT_NULL;
@@ -1029,7 +1029,7 @@ static void cdrom_insert_inserted(libxl__egc *egc,
 out:
 libxl__xs_transaction_abort(gc, );
 libxl_domain_config_dispose(_config);
-if (data_lock) libxl__unlock_domain_userdata(data_lock);
+if (data_lock) libxl__unlock_file(data_lock);
 cdrom_insert_done(egc, cis, rc); /* must be last */
 }
 
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index e0b6d4a8d3..021cbb4e1c 100644
--- 

Re: [Xen-devel] [PATCH v4 08/16] tools/libvchan: notify server when client is connected

2020-01-22 Thread Jason Andryuk
On Tue, Jan 21, 2020 at 4:28 PM Marek Marczykowski-Górecki
 wrote:
>
> On Mon, Jan 20, 2020 at 02:44:58PM -0500, Jason Andryuk wrote:
> > On Tue, Jan 14, 2020 at 9:42 PM Marek Marczykowski-Górecki
> >  wrote:
> > >
> > > Let the server know when the client is connected. Otherwise server will
> > > notice only when client send some data.
> > > This change does not break existing clients, as libvchan user should
> > > handle spurious notifications anyway (for example acknowledge of remote
> > > side reading the data).
> > >
> > > Signed-off-by: Marek Marczykowski-Górecki 
> > > 
> > > ---
> > > I had this patch in Qubes for a long time and totally forgot it wasn't
> > > upstream thing...
> > > ---
> > >  tools/libvchan/init.c | 3 +++
> > >  1 file changed, 3 insertions(+)
> > >
> > > diff --git a/tools/libvchan/init.c b/tools/libvchan/init.c
> > > index 180833d..50a64c1 100644
> > > --- a/tools/libvchan/init.c
> > > +++ b/tools/libvchan/init.c
> > > @@ -447,6 +447,9 @@ struct libxenvchan *libxenvchan_client_init(struct 
> > > xentoollog_logger *logger,
> > > ctrl->ring->cli_live = 1;
> > > ctrl->ring->srv_notify = VCHAN_NOTIFY_WRITE;
> > >
> > > +/* wake up the server */
> > > +xenevtchn_notify(ctrl->event, ctrl->event_port);
> >
> > Looks like you used 4 spaces, but the upstream file uses hard tabs.
>
> Indeed. CODING_STYLE says spaces, but it also says some tools/* are not
> directly covered by this file. Should I use this occasion to convert
> tools/libvchan/* to spaces (in a separate patch), or keep tabs (and
> adjust my patch)?

Maybe adjust your patch for tabs in case someone wants to backport it.
And then convert to spaces in a separate patch.

Regards,
Jason

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v2] x86/EPT: drop redundant ept_p2m_type_to_flags() parameters

2020-01-22 Thread Jan Beulich
All callers set the respective fields in the entry being updated before
the call.

Take the opportunity and also constify the first parameter as well as
make a few style adjustments.

Signed-off-by: Jan Beulich 
---
v2: Drop redundant function parameters instead.

--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -61,8 +61,8 @@ static int atomic_write_ept_entry(struct
 return 0;
 }
 
-static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
-  p2m_type_t type, p2m_access_t access)
+static void ept_p2m_type_to_flags(const struct p2m_domain *p2m,
+  ept_entry_t *entry)
 {
 /*
  * First apply type permissions.
@@ -75,7 +75,7 @@ static void ept_p2m_type_to_flags(struct
  * D bit is set for all writable types in EPT leaf entry, except for
  * log-dirty type with PML.
  */
-switch(type)
+switch ( entry->sa_p2mt )
 {
 case p2m_invalid:
 case p2m_mmio_dm:
@@ -143,9 +143,8 @@ static void ept_p2m_type_to_flags(struct
 break;
 }
 
-
 /* Then restrict with access permissions */
-switch (access) 
+switch ( entry->access )
 {
 case p2m_access_n:
 case p2m_access_n2rwx:
@@ -269,7 +268,7 @@ static bool_t ept_split_super_page(struc
 epte->snp = is_iommu_enabled(p2m->domain) && iommu_snoop;
 epte->suppress_ve = 1;
 
-ept_p2m_type_to_flags(p2m, epte, epte->sa_p2mt, epte->access);
+ept_p2m_type_to_flags(p2m, epte);
 
 if ( (level - 1) == target )
 continue;
@@ -521,7 +520,7 @@ static int resolve_misconfig(struct p2m_
 if ( nt != e.sa_p2mt )
 {
 e.sa_p2mt = nt;
-ept_p2m_type_to_flags(p2m, , e.sa_p2mt, e.access);
+ept_p2m_type_to_flags(p2m, );
 }
 e.recalc = 0;
 wrc = atomic_write_ept_entry(p2m, [i], e, level);
@@ -574,7 +573,7 @@ static int resolve_misconfig(struct p2m_
 e.ipat = ipat;
 e.recalc = 0;
 if ( recalc && p2m_is_changeable(e.sa_p2mt) )
-ept_p2m_type_to_flags(p2m, , e.sa_p2mt, e.access);
+ept_p2m_type_to_flags(p2m, );
 wrc = atomic_write_ept_entry(p2m, [i], e, level);
 ASSERT(wrc == 0);
 }
@@ -789,7 +788,7 @@ ept_set_entry(struct p2m_domain *p2m, gf
  iommu_flags )
 need_modify_vtd_table = 0;
 
-ept_p2m_type_to_flags(p2m, _entry, p2mt, p2ma);
+ept_p2m_type_to_flags(p2m, _entry);
 }
 
 if ( sve != -1 )

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 07/16] xl: add stubdomain related options to xl config parser

2020-01-22 Thread Jason Andryuk
On Tue, Jan 21, 2020 at 4:22 PM Marek Marczykowski-Górecki
 wrote:
>
> On Mon, Jan 20, 2020 at 02:41:07PM -0500, Jason Andryuk wrote:
> > On Tue, Jan 14, 2020 at 9:40 PM Marek Marczykowski-Górecki
> >  wrote:
> > >
> > > Signed-off-by: Marek Marczykowski-Górecki 
> > > 
> > > Reviewed-by: Jason Andryuk 
> > > ---
> > >  docs/man/xl.cfg.5.pod.in | 23 +++
> > >  tools/xl/xl_parse.c  |  7 +++
> > >  2 files changed, 26 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> > > index 245d3f9..6ae0bd0 100644
> > > --- a/docs/man/xl.cfg.5.pod.in
> > > +++ b/docs/man/xl.cfg.5.pod.in
> > > @@ -2720,10 +2720,25 @@ model which they were installed with.
> > >
> > Also:
> >
> > +=item B
> > +
> > +Start the stubdomain with MBYTES megabytes of RAM.
>
> Added, together with default value.

Thanks.  Good idea to add the default.

Regards,
Jason

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 06/16] libxl: write qemu arguments into separate xenstore keys

2020-01-22 Thread Jason Andryuk
On Tue, Jan 21, 2020 at 4:19 PM Marek Marczykowski-Górecki
 wrote:
>
> On Mon, Jan 20, 2020 at 02:36:08PM -0500, Jason Andryuk wrote:
> > On Tue, Jan 14, 2020 at 9:41 PM Marek Marczykowski-Górecki
> >  wrote:
> > >
> > > This allows using arguments with spaces, like -append, without
> > > nominating any special "separator" character.
> > >
> > > Signed-off-by: Marek Marczykowski-Górecki 
> > > 
> > > ---
> > > Changes in v3:
> > >  - previous version of this patch "libxl: use \x1b to separate qemu
> > >arguments for linux stubdomain" used specific non-printable
> > >separator, but it was rejected as xenstore doesn't cope well with
> > >non-printable chars
> > > ---
> >
> > The code looks good.
> >
> > Reviewed-by: Jason Andryuk 
> >
> > One thought I have is dmargs is a string for mini-os and a directory
> > for linux stubdom.  It's toolstack managed, so it's not a problem.
> > But would a different xenstore node be less surprising to humans?
>
> dmargs_list?

dmargs_list works.  dmargv to mimic argv?  That might be too subtle.

I'm not asking for the change.  I just wanted to bring it up for discussion.

Regards,
Jason

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 05/16] libxl: Handle Linux stubdomain specific QEMU options.

2020-01-22 Thread Jason Andryuk
On Tue, Jan 21, 2020 at 4:18 PM Marek Marczykowski-Górecki
 wrote:
>
> On Mon, Jan 20, 2020 at 02:24:18PM -0500, Jason Andryuk wrote:
> > On Tue, Jan 14, 2020 at 9:42 PM Marek Marczykowski-Górecki
> >  wrote:
> > >
> > > From: Eric Shelton 
> > >
> > > This patch creates an appropriate command line for the QEMU instance
> > > running in a Linux-based stubdomain.
> > >
> > > NOTE: a number of items are not currently implemented for Linux-based
> > > stubdomains, such as:
> > > - save/restore
> > > - QMP socket
> > > - graphics output (e.g., VNC)
> > >
> > > Signed-off-by: Eric Shelton 
> > >
> > > Simon:
> > >  * fix disk path
> > >  * fix cdrom path and "format"
> > >  * pass downscript for network interfaces
> >
> > Since this is here...
> >
> > > Signed-off-by: Simon Gaiser 
> > > [drop Qubes-specific parts]
> >
> > ...maybe mention dropping downscript here?  Otherwise the commit
> > message and contents don't match.
>
> Ah, indeed.
>
> >
> > > Signed-off-by: Marek Marczykowski-Górecki 
> > > 
> > > ---
> >
> > 
> >
> > > diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> > > index 142b960..a6d40b7 100644
> > > --- a/tools/libxl/libxl_create.c
> > > +++ b/tools/libxl/libxl_create.c
> > > @@ -169,6 +169,31 @@ int libxl__domain_build_info_setdefault(libxl__gc 
> > > *gc,
> > >  }
> > >  }
> > >
> > > +if (b_info->type == LIBXL_DOMAIN_TYPE_HVM &&
> > > +libxl_defbool_val(b_info->device_model_stubdomain)) {
> > > +if (!b_info->stubdomain_kernel) {
> > > +switch (b_info->device_model_version) {
> > > +case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
> > > +b_info->stubdomain_kernel =
> > > +libxl__abs_path(NOGC, "ioemu-stubdom.gz", 
> > > libxl__xenfirmwaredir_path());
> > > +b_info->stubdomain_ramdisk = NULL;
> > > +break;
> > > +case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
> > > +b_info->stubdomain_kernel =
> > > +libxl__abs_path(NOGC,
> > > +"stubdom-linux-kernel",
> >
> > Not to bikeshed, but this came up in a conversation a little while
> > ago.  Stubdom is a generic name, and this code is for a device model.
> > So some combination of qemu{,-dm}{,-linux}-kernel seems more
> > descriptive.
>
> Minios-based use ioemu-stubdom, so maybe
> ioemu-stubdom-linux-{kernel,rootfs}?

I think ioemu is the name of the legacy fork of qemu.  Linux stubdoms
are running close to upstream qemu, so that's why I suggested that
name.  But ioemu does match Mini-os, and convey the purpose of the
stubdom, so it works from the perspective.  I leave the name up to
you.

> > Having said that, I'm fine with it as is since I don't imagine more
> > stubdoms showing up.
> >
> > > +libxl__xenfirmwaredir_path());
> > > +b_info->stubdomain_ramdisk =
> > > +libxl__abs_path(NOGC,
> > > +"stubdom-linux-rootfs",
> > > +libxl__xenfirmwaredir_path());

I set stubdomain_ramdisk, but not stubdomain_kernel, and the ramdisk
option wasn't honored.  This assignment needs to be under 'if
(!b_info->stubdomain_ramdisk) {'

> > > +break;
> > > +default:
> > > +abort();
> >
> > Can we return an error instead?
>
> For invalid enum value?

Okay, that use makes sense.  It was a reflexive response to seeing
abort in a library.

Regards,
Jason

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH] xen/x86: domain: Free all the pages associated to struct domain

2020-01-22 Thread Julien Grall

Hi David,

On 22/01/2020 13:50, Woodhouse, David wrote:

On Wed, 2020-01-22 at 13:15 +, Andrew Cooper wrote:

I'd much rather see the original patch reverted.  The current size of
struct domain with lockprofile enabled is 3200 bytes.


Let me have a look first to see when/why struct domain is less than 4K
with lockprofile.


In the intervening time, Juergen has totally rewritten lock profiling,
and the virtual timer infrastructure has been taken out-of-line.

Its probably the latter which is the dominating factor.


OK, so if we revert 8916fcf4577 is it reasonable for me to then assume
that 'struct domain' will always fit within a page, and declare that
live update shall not work to a Xen where that isn't true?
While it is nice to have struct domain small, I would rather not bake to 
size assumption in live update.


The more on Arm we have been quite often close to the limit. So I don't 
want to limit my choices.




I have a nasty trick in mind...

During live update, we need to do a pass over the live update data in
early boot in order to work out which pages to reserve. That part has
to be done early, while the boot allocator is active. It works by
setting the PGC_allocated bit in the page_info of the reserved pages,
so that init_heap_pages() knows not to include them. I've posted that
part already.

Also during live update, we need to consume the actual domain state
that was passed over from the previous Xen, and fill in the owner (and
refcount etc.) in the same page_info structures, before those pages are
in a truly consistent state.

Right now, we need the latter to happen *after* the boot allocator is
done and we're able to allocate from the heap... because we need to be
able to allocate the domain structures, and we don't want to ensure
that there's enough space in the LU reserved bootmem for that many
domain structures.


As you pointed out above, the struct domain itself is a page (i.e 4KB on 
x86). Let say you have 256 domains, this would only use 1MB. Which is 
not too bad to take into account.


But struct domain has a lot of out-of-line allocation. If you are not 
planning to make struct domain part of the ABI, then you would need 
quite a few allocations even in your cases.


Of course, you could just half initialize struct domain. However, I 
would be cautious with this solution as it would be more difficult to 
chase down bug around struct page. Imagine the domain pointer is 
accessed earlier than expected.



Hence the nasty trick: What if we allocate the new struct domain on
*top* of the old one, since we happen to know that page *wasn't* used
by the previous Xen for anything that needs to be preserved. The
structure itself isn't an ABI and never can be, and it will have to be
repopulated from the live migration data, of course — but if we can at
least make the assumption that it'll *fit*, then perhaps we can manage
to do both of the above with only one pass over all the domain pages.

This will have a direct effect on the customer-observed pause time for
a live update. So it's kind of icky, but also *very* tempting...


May I recommend to get some numbers with the multiple pass and "nicer" 
code first? We can decide on that what sort of hackery we need to lower 
the pause time.


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [qemu-mainline test] 146387: regressions - FAIL

2020-01-22 Thread osstest service owner
flight 146387 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/146387/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm   6 xen-buildfail REGR. vs. 144861
 build-arm64   6 xen-buildfail REGR. vs. 144861
 build-armhf   6 xen-buildfail REGR. vs. 144861
 build-i386-xsm6 xen-buildfail REGR. vs. 144861
 build-amd64-xsm   6 xen-buildfail REGR. vs. 144861
 build-i3866 xen-buildfail REGR. vs. 144861
 build-amd64   6 xen-buildfail REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 

Re: [Xen-devel] [PATCH v2] x86/hvmloader: round up memory BAR size to 4K

2020-01-22 Thread Jan Beulich
On 22.01.2020 15:04, Jason Andryuk wrote:
> On Wed, Jan 22, 2020 at 5:52 AM Roger Pau Monné  wrote:
>> On Wed, Jan 22, 2020 at 11:27:24AM +0100, Jan Beulich wrote:
>>> On 21.01.2020 17:57, Roger Pau Monné wrote:
 Ie: Xen should refuse to pass through any memory BAR that's not page
 aligned. How the alignment is accomplished is out of the scope to Xen,
 as long as memory BARs are aligned.
>>>
>>> That's an acceptable model, as long as it wouldn't typically break
>>> existing configurations, and as long as for those who we would
>>> break there are easy to follow steps to unbreak their setups.
>>
>> Jason, do you think you could take a stab at adding a check in order
>> to make sure memory BAR addresses are 4K aligned when assigning a
>> device to a guest?
> 
> I can take a look.  You want the hypervisor to make the enforcement
> and not the toolstack?

Well, if ...

> Waving my hands a little bit, but it might be possible to have `xl
> pci-assignable-add` trigger the linux pci resource_alignment at
> runtime.

... this was possible, then it would be a change to both. Anyway I
think for the purpose of better diagnostics the tool stack should
do the check, but the hypervisor should do so too (as the ultimate
entity wanting this enforced).

Jan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH] tools/save: Drop unused parameters from xc_domain_save()

2020-01-22 Thread Andrew Cooper
On 22/01/2020 14:08, Wei Liu wrote:
> On Tue, Jan 07, 2020 at 12:19:36PM +, Wei Liu wrote:
>> On Mon, Jan 06, 2020 at 05:03:52PM +, Andrew Cooper wrote:
>>> XCFLAGS_CHECKPOINT_COMPRESS has been unused since c/s b15bc4345 (2015),
>>> XCFLAGS_HVM since c/s 9e8672f1c (2013), and XCFLAGS_STDVGA since c/s
>>> 087d43326 (2007).  Drop the constants, and code which sets them.
>>>
>>> The separate hvm parameter (appeared in c/s d11bec8a1, 2007 and ultimately
>>> redundant with XCFLAGS_HVM), is used for sanity checking and debug printing,
>>> then discarded and replaced with Xen's idea of whether the domain is PV or
>>> HVM.
>>>
>>> Rearrange the logic in xc_domain_save() to ask Xen sightly earlier, and use 
>>> a
>>> consistent idea of 'hvm' throughout.  Removing this parameter removes the
>>> final user of libxl's dss->hvm, so drop that field as well.
>>>
>>> Update the doxygen comment to be accurate.
>>>
>>> Signed-off-by: Andrew Cooper 
>> Acked-by: Ian Jackson 
> This is a mistake. I obviously shouldn't use Ian's name and address to
> ack a patch.
>
> Acked-by: Wei Liu 

Erm, oops...  I already committed it with "Ian's" ack.

I was talking to him about it on IRC at around this time (in relation to
its companion patch on the restore side, which he really did ack), so I
didn't think twice.

Overall, no harm done, but I will try to be more vigilant in the future.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2] x86/hvmloader: round up memory BAR size to 4K

2020-01-22 Thread Jan Beulich
On 22.01.2020 11:51, Roger Pau Monné wrote:
> On Wed, Jan 22, 2020 at 11:27:24AM +0100, Jan Beulich wrote:
>> On 21.01.2020 17:57, Roger Pau Monné wrote:
>>> The PCI spec actually recommends memory BARs to be at least of page
>>> size, but that's not a strict requirement. I would hope there aren't
>>> that many devices with memory BARs smaller than a page.
>>
>> I've simply gone and grep-ed all the stored lspci output I have
>> for some of the test systems I have here:
>>
>> 0/12
>> 3/31 (all 4k-aligned)
>> 6/13 (all 4k-aligned)
>> 3/12
>> 6/19 (all 4k-aligned)
>> 3/7 (all 4k-aligned)
> 
> What does X/Y at the beginning of the line stand for?

 / 

Jan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH] tools/save: Drop unused parameters from xc_domain_save()

2020-01-22 Thread Wei Liu
On Tue, Jan 07, 2020 at 12:19:36PM +, Wei Liu wrote:
> On Mon, Jan 06, 2020 at 05:03:52PM +, Andrew Cooper wrote:
> > XCFLAGS_CHECKPOINT_COMPRESS has been unused since c/s b15bc4345 (2015),
> > XCFLAGS_HVM since c/s 9e8672f1c (2013), and XCFLAGS_STDVGA since c/s
> > 087d43326 (2007).  Drop the constants, and code which sets them.
> > 
> > The separate hvm parameter (appeared in c/s d11bec8a1, 2007 and ultimately
> > redundant with XCFLAGS_HVM), is used for sanity checking and debug printing,
> > then discarded and replaced with Xen's idea of whether the domain is PV or
> > HVM.
> > 
> > Rearrange the logic in xc_domain_save() to ask Xen sightly earlier, and use 
> > a
> > consistent idea of 'hvm' throughout.  Removing this parameter removes the
> > final user of libxl's dss->hvm, so drop that field as well.
> > 
> > Update the doxygen comment to be accurate.
> > 
> > Signed-off-by: Andrew Cooper 
> 
> Acked-by: Ian Jackson 

This is a mistake. I obviously shouldn't use Ian's name and address to
ack a patch.

Acked-by: Wei Liu 

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 1/3] x86 / vmx: make apic_access_mfn type-safe

2020-01-22 Thread Andrew Cooper
On 21/01/2020 12:00, Paul Durrant wrote:
> Use mfn_t rather than unsigned long and change previous tests against 0 to
> tests against INVALID_MFN (also introducing initialization to that value).
>
> Signed-off-by: Paul Durrant 

I'm afraid this breaks the idempotency of vmx_free_vlapic_mapping(),
which gets in the way of domain/vcpu create/destroy cleanup.

Its fine to use 0 as the sentinel.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2 4/4] xen/netback: fix grant copy across page boundary

2020-01-22 Thread Wei Liu
On Wed, Jan 22, 2020 at 10:07:35AM +, Sergey Dyasli wrote:
> On 20/01/2020 08:58, Paul Durrant wrote:
> > On Fri, 17 Jan 2020 at 12:59, Sergey Dyasli  
> > wrote:
> >>
> >> From: Ross Lagerwall 
> >>
> >> When KASAN (or SLUB_DEBUG) is turned on, there is a higher chance that
> >> non-power-of-two allocations are not aligned to the next power of 2 of
> >> the size. Therefore, handle grant copies that cross page boundaries.
> >>
> >> Signed-off-by: Ross Lagerwall 
> >> Signed-off-by: Sergey Dyasli 
> >> ---
> >> v1 --> v2:
> >> - Use sizeof_field(struct sk_buff, cb)) instead of magic number 48
> >> - Slightly update commit message
> >>
> >> RFC --> v1:
> >> - Added BUILD_BUG_ON to the netback patch
> >> - xenvif_idx_release() now located outside the loop
> >>
> >> CC: Wei Liu 
> >> CC: Paul Durrant 
> >
> > Acked-by: Paul Durrant 
> 
> Thanks! I believe this patch can go in independently from the other
> patches in the series. What else is required for this?

This patch didn't Cc the network development list so David Miller
wouldn't be able to pick it up.

Wei.

> 
> --
> Sergey

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v4 02/16] Document ioemu Linux stubdomain protocol

2020-01-22 Thread Jason Andryuk
On Tue, Jan 21, 2020 at 4:08 PM Marek Marczykowski-Górecki
 wrote:
>
> On Mon, Jan 20, 2020 at 01:54:04PM -0500, Jason Andryuk wrote:
> > On Tue, Jan 14, 2020 at 9:41 PM Marek Marczykowski-Górecki
> >  wrote:
> >
> > 
> >
> > > +
> > > +Limitations:
> > > + - PCI passthrough require permissive mode
> > > + - only one nic is supported
> >
> > Why is only 1 nic supported?  Multiple were supported previously, but
> > peeking ahead in the series,
>
> This is mostly limitation of stubdomain side, not toolstack side.
> Startup script setup eth0 only.

I peeked the script, and it looks like the nic ifname= sed expression
only handles one nic.  Since dmargs is now an array, it should to
handle multiple.

Anyway, there doesn't seem to be an hard limitation.

> > script=/etc/qemu-ifup is no longer
> > specified.
>
> Yes, that's to allow -sandbox ...,spawn=deny inside stubdomain.
> The equivalent actions are handled by listening for qmp events.

Ah, okay.  Yeah, that's a good idea.

Thanks,
Jason

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2] x86/hvmloader: round up memory BAR size to 4K

2020-01-22 Thread Jason Andryuk
On Wed, Jan 22, 2020 at 5:52 AM Roger Pau Monné  wrote:
>
> On Wed, Jan 22, 2020 at 11:27:24AM +0100, Jan Beulich wrote:
> > On 21.01.2020 17:57, Roger Pau Monné wrote:
> > > On Tue, Jan 21, 2020 at 05:15:20PM +0100, Jan Beulich wrote:
> > >> On 21.01.2020 16:57, Roger Pau Monné wrote:
> > >>> On Tue, Jan 21, 2020 at 11:43:58AM +0100, Jan Beulich wrote:
> >  On 21.01.2020 11:29, Roger Pau Monné wrote:
> > > So I'm not sure how to progress with this patch, are we fine with
> > > those limitations?
> > 
> >  I'm afraid this depends on ...
> > 
> > > As I said, Xen hasn't got enough knowledge to correctly isolate the
> > > BARs, and hence we have to rely on dom0 DTRT. We could add checks in
> > > Xen to make sure no BARs share a page, but it's a non-trivial amount
> > > of scanning and sizing each possible BAR on the system.
> > 
> >  ... whether Dom0 actually "DTRT", which in turn is complicated by there
> >  not being a specific Dom0 kernel incarnation to check against. Perhaps
> >  rather than having Xen check _all_ BARs, Xen or the tool stack could
> >  check BARs of devices about to be handed to a guest? Perhaps we need to
> >  pass auxiliary information to hvmloader to be able to judge whether a
> >  BAR shares a page with another one? Perhaps there also needs to be a
> >  way for hvmloader to know what offset into a page has to be maintained
> >  for any particular BAR, as follows from Jason's recent reply?
> > >>>
> > >>> Linux has an option to force resource alignment (as reported by
> > >>> Jason), maybe we could force all BARs to be aligned to page size in
> > >>> order to be passed through?
> > >>>
> > >>> That would make it easier to check (as Xen/Qemu would only need to
> > >>> assert that the BAR address is aligned), and won't require much extra
> > >>> work in Xen apart from the check itself.
> > >>>
> > >>> Do you think this would be an acceptable solution?
> > >>
> > >> In principle yes, but there are loose ends:
> > >> - What do you mean by "we could force"? We have no control over the
> > >>   Dom0 kernel.
> > >
> > > I should rephrase:
> > >
> > > ... maybe we should require dom0 to align all memory BARs to page size
> > > in order to be passed through?
> > >
> > > Ie: Xen should refuse to pass through any memory BAR that's not page
> > > aligned. How the alignment is accomplished is out of the scope to Xen,
> > > as long as memory BARs are aligned.
> >
> > That's an acceptable model, as long as it wouldn't typically break
> > existing configurations, and as long as for those who we would
> > break there are easy to follow steps to unbreak their setups.
>
> Jason, do you think you could take a stab at adding a check in order
> to make sure memory BAR addresses are 4K aligned when assigning a
> device to a guest?

I can take a look.  You want the hypervisor to make the enforcement
and not the toolstack?

Waving my hands a little bit, but it might be possible to have `xl
pci-assignable-add` trigger the linux pci resource_alignment at
runtime.

It may also be possible for the Linux kernel, when running as the
initial domain, to set the minimum alignment for the PCI bus.

> > >> - What about non-Linux Dom0?
> > >
> > > Other OSes would have to provide similar functionality in order to
> > > align the memory BARs. Right now Linux is the only dom0 that supports
> > > PCI passthrough AFAIK.
> > >
> > >> Also, apart from extra resource (address space) consumption,
> > >
> > > The PCI spec actually recommends memory BARs to be at least of page
> > > size, but that's not a strict requirement. I would hope there aren't
> > > that many devices with memory BARs smaller than a page.
> >
> > I've simply gone and grep-ed all the stored lspci output I have
> > for some of the test systems I have here:
> >
> > 0/12
> > 3/31 (all 4k-aligned)
> > 6/13 (all 4k-aligned)
> > 3/12
> > 6/19 (all 4k-aligned)
> > 3/7 (all 4k-aligned)
>
> What does X/Y at the beginning of the line stand for?

I think it's BARs smaller than 4k out of total BARs.

Ivy Bridge HP laptop: 7/15 (all 4k aligned)
Sandy Bridge Dell desktop: 5/13 (all 4k-aligned)
Kaby Lake Dell laptop: 2/18 (all 4k-aligned)

> > This is without regard to what specific devices these are, and
> > hence whether there would be any point in wanting to pass it to
> > a guest in the first place. I'd like to note though that there
> > are a fair amount of USB controllers among the ones with BARs
> > smaller than a page's worth.

The Intel EHCI USB controllers on my Sandy Bridge and Ivy Bridge
systems have 1K BARs.  USB controllers for their USB vm  may have
motivated Qubes's original work with handling small BARs.

Regards,
Jason

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH] xen/x86: domain: Free all the pages associated to struct domain

2020-01-22 Thread Woodhouse, David
On Wed, 2020-01-22 at 13:15 +, Andrew Cooper wrote:
> > > I'd much rather see the original patch reverted.  The current size of
> > > struct domain with lockprofile enabled is 3200 bytes.
> > 
> > Let me have a look first to see when/why struct domain is less than 4K
> > with lockprofile.
> 
> In the intervening time, Juergen has totally rewritten lock profiling,
> and the virtual timer infrastructure has been taken out-of-line.
> 
> Its probably the latter which is the dominating factor.

OK, so if we revert 8916fcf4577 is it reasonable for me to then assume
that 'struct domain' will always fit within a page, and declare that
live update shall not work to a Xen where that isn't true?

I have a nasty trick in mind...

During live update, we need to do a pass over the live update data in
early boot in order to work out which pages to reserve. That part has
to be done early, while the boot allocator is active. It works by
setting the PGC_allocated bit in the page_info of the reserved pages,
so that init_heap_pages() knows not to include them. I've posted that
part already.

Also during live update, we need to consume the actual domain state
that was passed over from the previous Xen, and fill in the owner (and
refcount etc.) in the same page_info structures, before those pages are
in a truly consistent state.

Right now, we need the latter to happen *after* the boot allocator is
done and we're able to allocate from the heap... because we need to be
able to allocate the domain structures, and we don't want to ensure
that there's enough space in the LU reserved bootmem for that many
domain structures.

Hence the nasty trick: What if we allocate the new struct domain on
*top* of the old one, since we happen to know that page *wasn't* used
by the previous Xen for anything that needs to be preserved. The
structure itself isn't an ABI and never can be, and it will have to be
repopulated from the live migration data, of course — but if we can at
least make the assumption that it'll *fit*, then perhaps we can manage
to do both of the above with only one pass over all the domain pages.

This will have a direct effect on the customer-observed pause time for
a live update. So it's kind of icky, but also *very* tempting...



Amazon Development Centre (London) Ltd. Registered in England and Wales with 
registration number 04543232 with its registered office at 1 Principal Place, 
Worship Street, London EC2A 2FA, United Kingdom.


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH] xen/x86: domain: Free all the pages associated to struct domain

2020-01-22 Thread Andrew Cooper
On 22/01/2020 13:13, Julien Grall wrote:
> Hi Andrew,
>
> On 22/01/2020 12:52, Andrew Cooper wrote:
>> On 20/01/2020 14:31, Julien Grall wrote:
>>> From: Julien Grall 
>>>
>>> The structure domain may be bigger than a page size when lock profiling
>>> is enabled. However, the function free_domheap_struct will only free
>>> the
>>> first page.
>>>
>>> This is not a security issue because struct domain can only be bigger
>>> than a page size for lock profiling. The feature can only be selected
>>> in DEBUG and EXPERT mode.
>>>
>>> Fixes: 8916fcf4577 ("x86/domain: compile with lock_profile=y enabled")
>>> Reported-by: David Woodhouse 
>>> Signed-off-by: Julien Grall 
>>> ---
>>>   xen/arch/x86/domain.c | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>>> index 28fefa1f81..a5380b9bab 100644
>>> --- a/xen/arch/x86/domain.c
>>> +++ b/xen/arch/x86/domain.c
>>> @@ -344,7 +344,7 @@ struct domain *alloc_domain_struct(void)
>>>     void free_domain_struct(struct domain *d)
>>>   {
>>> -    free_xenheap_page(d);
>>> +    free_xenheap_pages(d, get_order_from_bytes(sizeof(*d)));
>>
>> :(
>>
>> I'm entirely certain I raised this during the review of the original
>> patch.
>>
>> I'd much rather see the original patch reverted.  The current size of
>> struct domain with lockprofile enabled is 3200 bytes.
>
> Let me have a look first to see when/why struct domain is less than 4K
> with lockprofile.

In the intervening time, Juergen has totally rewritten lock profiling,
and the virtual timer infrastructure has been taken out-of-line.

Its probably the latter which is the dominating factor.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH] xen/x86: domain: Free all the pages associated to struct domain

2020-01-22 Thread Julien Grall

Hi Andrew,

On 22/01/2020 12:52, Andrew Cooper wrote:

On 20/01/2020 14:31, Julien Grall wrote:

From: Julien Grall 

The structure domain may be bigger than a page size when lock profiling
is enabled. However, the function free_domheap_struct will only free the
first page.

This is not a security issue because struct domain can only be bigger
than a page size for lock profiling. The feature can only be selected
in DEBUG and EXPERT mode.

Fixes: 8916fcf4577 ("x86/domain: compile with lock_profile=y enabled")
Reported-by: David Woodhouse 
Signed-off-by: Julien Grall 
---
  xen/arch/x86/domain.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 28fefa1f81..a5380b9bab 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -344,7 +344,7 @@ struct domain *alloc_domain_struct(void)
  
  void free_domain_struct(struct domain *d)

  {
-free_xenheap_page(d);
+free_xenheap_pages(d, get_order_from_bytes(sizeof(*d)));


:(

I'm entirely certain I raised this during the review of the original patch.

I'd much rather see the original patch reverted.  The current size of
struct domain with lockprofile enabled is 3200 bytes.


Let me have a look first to see when/why struct domain is less than 4K 
with lockprofile.


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2 0/9] xen: scheduler cleanups

2020-01-22 Thread Dario Faggioli
On Wed, 2020-01-08 at 16:23 +0100, Juergen Gross wrote:
> Move all scheduler related hypervisor code to xen/common/sched/ and
> do a lot of cleanups.
> 
> Juergen Gross (9):
>   xen/sched: move schedulers and cpupool coding to dedicated
> directory
>   xen/sched: make sched-if.h really scheduler private
>   xen/sched: cleanup sched.h
>   xen/sched: remove special cases for free cpus in schedulers
>   xen/sched: use scratch cpumask instead of allocating it on the
> stack
>   xen/sched: replace null scheduler percpu-variable with pdata hook
>   xen/sched: switch scheduling to bool where appropriate
>   xen/sched: eliminate sched_tick_suspend() and sched_tick_resume()
>   xen/sched: add const qualifier where appropriate
>
Ok, unless I'm missing something, I think that "scheduling-wise" this
series if fully Rev/Acked-by.

Thanks Juergen for the cleanups. The code looks a lot better with this
patches applied! :-)

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)



signature.asc
Description: This is a digitally signed message part
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2 4/9] xen/sched: remove special cases for free cpus in schedulers

2020-01-22 Thread Dario Faggioli
On Wed, 2020-01-08 at 16:23 +0100, Juergen Gross wrote:
> With the idle scheduler now taking care of all cpus not in any
> cpupool
> the special cases in the other schedulers for no cpupool associated
> can be removed.
> 
> Signed-off-by: Juergen Gross 
>
Reviewed-by: Dario Faggioli 

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)



signature.asc
Description: This is a digitally signed message part
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2 6/9] xen/sched: replace null scheduler percpu-variable with pdata hook

2020-01-22 Thread Dario Faggioli
On Wed, 2020-01-08 at 16:23 +0100, Juergen Gross wrote:
> Instead of having an own percpu-variable for private data per cpu the
> generic scheduler interface for that purpose should be used.
> 
> Signed-off-by: Juergen Gross 
>
Reviewed-by: Dario Faggioli 

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)



signature.asc
Description: This is a digitally signed message part
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

  1   2   >