[Xen-devel] [qemu-mainline test] 145845: regressions - FAIL

2020-01-08 Thread osstest service owner
flight 145845 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145845/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm           6 xen-build    fail REGR. vs. 144861
 build-arm64               6 xen-build    fail REGR. vs. 144861
 build-amd64               6 xen-build    fail REGR. vs. 144861
 build-amd64-xsm           6 xen-build    fail REGR. vs. 144861
 build-i386-xsm            6 xen-build    fail REGR. vs. 144861
 build-i386                6 xen-build    fail REGR. vs. 144861
 build-armhf               6 xen-build    fail REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)   blocked  n/a
 build-i386-libvirt            1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)  blocked n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl            1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)  blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-pvshim 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)   blocked  n/a

[Xen-devel] [xen-unstable test] 145826: tolerable FAIL - PUSHED

2020-01-08 Thread osstest service owner
flight 145826 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145826/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds 17 guest-saverestore.2  fail REGR. vs. 145796

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop   fail like 145796
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail like 145796
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop   fail like 145796
 test-amd64-i386-xl-qemut-win7-amd64  17 guest-stop   fail like 145796
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail like 145796
 test-amd64-i386-xl-qemuu-win7-amd64  17 guest-stop   fail like 145796
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail like 145796
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop   fail like 145796
 test-amd64-i386-xl-qemuu-ws16-amd64  17 guest-stop   fail like 145796
 test-amd64-i386-xl-pvshim    12 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl          13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check   fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check   fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check  fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop    fail never pass

version targeted for testing:
 xen  00691c6c90b2fd28d7b7037baeb288f6801e6182
baseline version:
 xen  4dde27b6e0a0b0dcb8fdfc7580fbd9c976aa103f

Last test of basis   145796  2020-01-08 11:36:42 Z    0 days
Testing same since   145826  2020-01-08 22:06:27 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Anthony PERARD 
  George Dunlap 
  Jan Beulich 
  Juergen Gross 
  Marek Marczykowski-Górecki 
  Wei Liu 
  Wei Liu 

jobs:
 

Re: [Xen-devel] PV DRM doesn't work without auto_translated_physmap feature in Dom0

2020-01-08 Thread Oleksandr Andrushchenko

On 1/8/20 5:38 PM, Santucco wrote:
> Thank you very much for all your answers.
>
> Wednesday, January 8, 2020, 10:54 +03:00 from Oleksandr Andrushchenko
>  >:
> On 1/6/20 10:38 AM, Jürgen Groß wrote:
> > On 06.01.20 08:56, Santucco wrote:
> >> Hello,
> >>
> >> I’m trying to use vdispl interface from PV OS, it doesn’t work.
> >> Configuration details:
> >>  Xen 4.12.1
> >>  Dom0: Linux 4.20.17-gentoo #13 SMP Sat Dec 28 11:12:24 MSK
> 2019
> >> x86_64 Intel(R) Celeron(R) CPU N3050 @ 1.60GHz GenuineIntel
> GNU/Linux
> >>  DomU: x86 Plan9, PV
> >>  displ_be as a backend for vdispl and vkb
> >>
> >> when VM starts, displ_be reports about an error:
> >> gnttab: error: ioctl DMABUF_EXP_FROM_REFS failed: Invalid argument
> >> (displ_be.log:221)
> >>
> >> related Dom0 output is:
> >> [  191.579278] Cannot provide dma-buf: use_ptemode 1
> >> (dmesg.create.log:123)
> >
> > This seems to be a limitation of the xen dma-buf driver. It was
> > initially written for use on ARM, where PV is not available.
> This is true, and we never tried/targeted PV domains with this
> implementation, so if there is a need for that, someone has to take
> a look at the proper implementation for PV…
>
> Have I got you right that there is no proper implementation :-)?
There is no
>
> >
> > CC-ing Oleksandr Andrushchenko who is the author of that driver. He
> > should be able to tell us what would be needed to enable PV dom0.
> >
> > Depending on your use case it might be possible to use PVH dom0, but
> > support for this mode is "experimental" only and some features
> are not
> > yet working.
> >
> Well, one of the possible workarounds is to drop the zero-copy use-case
> (this is why the display backend tries to create dma-bufs from grants
> passed by the guest domain and fails with "Cannot provide dma-buf:
> use_ptemode 1").
> In this case the display backend will do memory copying for the
> incoming frames and won't touch the DMABUF_EXP_FROM_REFS ioctl.
> To do so, just disable zero-copy while building the backend [1]
>
> Thanks, I have just tried the workaround. The backend has failed
> in another place, unrelated to dma_buf.
> Anyway, it is enough to continue debugging my frontend implementation.
> Do you know how big the performance penalty is in comparison with
> the zero-copy variant?
Well, it solely depends on your setup, so I cannot tell what
the numbers would be in your case. Comparing to what I have doesn't
make any sense to me: one should compare apples to apples.
> Does it make sense if I make a dedicated HVM domain with Linux only
> for the purpose of the vdispl and vkbd backends? Is there hope this
> approach will work?
You can try if this approach fits your design and requirements
>
> >
> > Juergen
> >
> [1]
> https://github.com/xen-troops/displ_be/blob/master/CMakeLists.txt#L12
> 
> 
>
> Best regards,
>   Alexander Sychev
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2 7/9] xen/sched: switch scheduling to bool where appropriate

2020-01-08 Thread Meng Xu
On Wed, Jan 8, 2020 at 7:24 AM Juergen Gross  wrote:
>
> Scheduling code has several places using int or bool_t instead of bool.
> Switch those.
>
> Signed-off-by: Juergen Gross 
> ---
> V2:
> - rename bool "pos" to "first" (Dario Faggioli)
> ---
>  xen/common/sched/arinc653.c |  8 
>  xen/common/sched/core.c | 14 +++---
>  xen/common/sched/cpupool.c  | 10 +-
>  xen/common/sched/credit.c   | 12 ++--
>  xen/common/sched/private.h  |  2 +-
>  xen/common/sched/rt.c   | 18 +-
>  xen/include/xen/sched.h |  6 +++---
>  7 files changed, 35 insertions(+), 35 deletions(-)
>

As to  xen/common/sched/rt.c,

Reviewed-by: Meng Xu 

Cheers,

Meng


Re: [Xen-devel] [PATCH v2 5/9] xen/sched: use scratch cpumask instead of allocating it on the stack

2020-01-08 Thread Meng Xu
On Wed, Jan 8, 2020 at 7:24 AM Juergen Gross  wrote:
>
> In rt scheduler there are three instances of cpumasks allocated on the
> stack. Replace them by using cpumask_scratch.
>
> Signed-off-by: Juergen Gross 
> ---
>  xen/common/sched/rt.c | 56 
> ++-
>  1 file changed, 37 insertions(+), 19 deletions(-)
>

Reviewed-by: Meng Xu 

Meng


Re: [Xen-devel] [PATCH v2 9/9] xen/sched: add const qualifier where appropriate

2020-01-08 Thread Meng Xu
On Wed, Jan 8, 2020 at 7:23 AM Juergen Gross  wrote:
>
> Make use of the const qualifier more often in scheduling code.
>
> Signed-off-by: Juergen Gross 
> Reviewed-by: Dario Faggioli 
> ---
>  xen/common/sched/arinc653.c |  4 ++--
>  xen/common/sched/core.c | 25 +++---
>  xen/common/sched/cpupool.c  |  2 +-
>  xen/common/sched/credit.c   | 44 --
>  xen/common/sched/credit2.c  | 52 
> +++--
>  xen/common/sched/null.c | 17 ---
>  xen/common/sched/rt.c   | 32 ++--
>  xen/include/xen/sched.h |  9 
>  8 files changed, 96 insertions(+), 89 deletions(-)
>

As to xen/common/sched/rt.c,

Acked-by: Meng Xu 

Meng


[Xen-devel] [PATCH] xen: make CONFIG_DEBUG_LOCKS usable without CONFIG_DEBUG

2020-01-08 Thread Juergen Gross
In expert mode it is possible to enable CONFIG_DEBUG_LOCKS without
having enabled CONFIG_DEBUG. The code depends on CONFIG_DEBUG, however,
as it uses ASSERT().

Fix that by introducing assert() doing the same as ASSERT(), but being
available in non-debug builds, too, and use that in spinlock debug
code.

Signed-off-by: Juergen Gross 
---
 xen/common/spinlock.c | 2 +-
 xen/include/xen/lib.h | 6 --
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 286f916bca..8f54580d24 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -86,7 +86,7 @@ static void got_lock(union lock_debug *debug)
 static void rel_lock(union lock_debug *debug)
 {
     if ( atomic_read(&spin_debug) > 0 )
-ASSERT(debug->cpu == smp_processor_id());
+assert(debug->cpu == smp_processor_id());
 debug->cpu = SPINLOCK_NO_CPU;
 }
 
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 8fbe84032d..000ea677d0 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -32,9 +32,11 @@
 #define gcov_string ""
 #endif
 
-#ifndef NDEBUG
-#define ASSERT(p) \
+#define assert(p) \
 do { if ( unlikely(!(p)) ) assert_failed(#p); } while (0)
+
+#ifndef NDEBUG
+#define ASSERT(p) assert(p)
 #define ASSERT_UNREACHABLE() assert_failed("unreachable")
 #define debug_build() 1
 #else
-- 
2.16.4



[Xen-devel] [ovmf test] 145831: regressions - FAIL

2020-01-08 Thread osstest service owner
flight 145831 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145831/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 145767

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail pass in 145825

version targeted for testing:
 ovmf 972d88726410e21b1fff1a528854202c67e97ef1
baseline version:
 ovmf 70911f1f4aee0366b6122f2b90d367ec0f066beb

Last test of basis   145767  2020-01-08 00:39:09 Z    1 days
Failing since        145774  2020-01-08 02:50:20 Z    1 days    6 attempts
Testing same since   145790  2020-01-08 09:10:30 Z    0 days    5 attempts


People who touched revisions under test:
  Ashish Singhal 
  Pavana.K 
  Siyuan Fu 
  Siyuan, Fu 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 fail
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 972d88726410e21b1fff1a528854202c67e97ef1
Author: Ashish Singhal 
Date:   Tue Dec 24 10:57:47 2019 +0800

MdeModulePkg: Add EDK2 Platform Boot Manager Protocol

Add edk2 platform boot manager protocol which would have platform
specific refreshes to the auto enumerated as well as NV boot options
for the platform.

Signed-off-by: Ashish Singhal 
Reviewed-by: Ray Ni 

commit c9d72628432126cbce58a48b440e4944baa4beab
Author: Pavana.K 
Date:   Thu Jan 2 20:30:27 2020 +

CryptoPkg: Support for SHA384 & SHA512 RSA signing schemes

BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2389

Currently RSA signing scheme support is available for MD5, SHA-1 or
SHA-256 algorithms. The fix is to extend this support to SHA384 and
SHA512.

Cc: Liming Gao 
Cc: Jian J Wang 
Cc: Bob Feng 

Signed-off-by: Pavana.K 
Reviewed-by: Jian J Wang 

commit 396e791059f37062cbee85696e2b4186ec72a9e3
Author: Siyuan, Fu 
Date:   Fri Jan 3 14:59:27 2020 +0800

UefiCpuPkg: Always load microcode patch on AP processor.

This patch updates the microcode loader to always perform a microcode
detect and load on both BSP and AP processor. This is to fix a potential
microcode revision mismatch issue in below situation:
1. Assume there are two microcode co-exists in flash: one production
   version and one debug version microcode.
2. FIT loads production microcode to BSP and all AP.
3. UefiCpuPkg loader loads debug microcode to BSP, and skip the loading
   on AP.
As a result, different microcode patches are loaded to BSP and AP, and
trigger microcode mismatch error during OS boot.

BZ link: https://bugzilla.tianocore.org/show_bug.cgi?id=2431

Cc: Eric Dong 
Cc: Ray Ni 
Signed-off-by: Siyuan Fu 
Reviewed-by: Eric Dong 

commit 08a475df10b75f84cdeb9b11e38f8eee9b5c048d
Author: Siyuan Fu 
Date:   Fri Jan 3 15:11:51 2020 +0800

UefiCpuPkg: Remove alignment check when calculate microcode size.

This patch removes the unnecessary alignment check on microcode patch
TotalSize introduced by commit d786a172. The TotalSize has already been
checked with 1K alignment and MAX_ADDRESS in previous code as below:

if ( (UINTN)MicrocodeEntryPoint > (MAX_ADDRESS - TotalSize) ||
 ((UINTN)MicrocodeEntryPoint + TotalSize) > MicrocodeEnd ||
 (DataSize & 0x3) != 0 ||
 (TotalSize & (SIZE_1KB - 1)) != 0 ||
 TotalSize < DataSize
   ) {

Cc: Eric Dong 
Cc: Ray Ni 
Cc: Hao A Wu 
Signed-off-by: Siyuan Fu 
Reviewed-by: Ray Ni 

[Xen-devel] [qemu-mainline test] 145834: regressions - FAIL

2020-01-08 Thread osstest service owner
flight 145834 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145834/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm           6 xen-build    fail REGR. vs. 144861
 build-arm64               6 xen-build    fail REGR. vs. 144861
 build-amd64               6 xen-build    fail REGR. vs. 144861
 build-amd64-xsm           6 xen-build    fail REGR. vs. 144861
 build-i386-xsm            6 xen-build    fail REGR. vs. 144861
 build-i386                6 xen-build    fail REGR. vs. 144861
 build-armhf               6 xen-build    fail REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl            1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)  blocked n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-pvshim 1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 

[Xen-devel] [qemu-mainline test] 145829: regressions - FAIL

2020-01-08 Thread osstest service owner
flight 145829 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145829/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm           6 xen-build    fail REGR. vs. 144861
 build-arm64               6 xen-build    fail REGR. vs. 144861
 build-amd64               6 xen-build    fail REGR. vs. 144861
 build-amd64-xsm           6 xen-build    fail REGR. vs. 144861
 build-i386-xsm            6 xen-build    fail REGR. vs. 144861
 build-i386                6 xen-build    fail REGR. vs. 144861
 build-armhf               6 xen-build    fail REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-pvshim 1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl            1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a

Re: [Xen-devel] [PATCH v2] xen/x86: clear per cpu stub page information in cpu_smpboot_free()

2020-01-08 Thread Tao Xu

Thank you, Juergen. This patch fixes the issue reported in

XEN crash and double fault when doing cpu online/offline
https://lists.xenproject.org/archives/html/xen-devel/2020-01/msg00424.html

Tested-by: Tao Xu 

On 1/8/2020 10:34 PM, Juergen Gross wrote:

cpu_smpboot_free() removes the stubs for the cpu going offline, but it
isn't clearing the related percpu variables. This will result in
crashes when a stub page is released because all related cpus have gone
offline and one of those cpus goes online again later.

Fix that by clearing stubs.addr and stubs.mfn in order to allocate a
new stub page when needed.

Fixes: 2e6c8f182c9c50 ("x86: distinguish CPU offlining from CPU removal")
Signed-off-by: Juergen Gross 
Reviewed-by: Wei Liu 
---
  xen/arch/x86/smpboot.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 7e29704080..46c0729214 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -945,6 +945,8 @@ static void cpu_smpboot_free(unsigned int cpu, bool remove)
   (per_cpu(stubs.addr, cpu) | ~PAGE_MASK) + 1);
  if ( i == STUBS_PER_PAGE )
  free_domheap_page(mfn_to_page(mfn));
+per_cpu(stubs.addr, cpu) = 0;
+per_cpu(stubs.mfn, cpu) = 0;
  }

  FREE_XENHEAP_PAGE(per_cpu(compat_gdt, cpu));
--
2.16.4



Re: [Xen-devel] [BUG] XEN crash and double fault when doing cpu online/offline

2020-01-08 Thread Tao Xu

On 1/8/2020 6:45 PM, Jürgen Groß wrote:

On 08.01.20 09:32, Tao Xu wrote:


On 1/8/20 3:50 PM, Jürgen Groß wrote:

On 08.01.20 06:50, Tao Xu wrote:

Hi,

When I use xen-hptool cpu-offline/cpu-online to take the CPUs in a
socket offline and online with the following script:


for((j=48;j<=95;j++));
do
   xen-hptool cpu-offline $j
done

for((j=48;j<=95;j++));
do
   xen-hptool cpu-online $j
done

Xen crashes when the cpus come back online. I am using upstream Xen
(0dd92688) and have tried for many days; it still crashes. But if I only
do cpu online/offline for CPUs 48~59, Xen does not crash. The bug can be
reproduced when we do cpu online/offline for most of the CPUs in a
socket. The interesting thing is that when we use the following script:


for((j=48;j<=95;j++));
do
   xen-hptool cpu-offline $j
   xen-hptool cpu-online $j
done

Xen does not crash either. Is there a bug in sched_credit2?

The crash message as follows:

(XEN) Adding cpu 77 to runqueue 1
(XEN) Adding cpu 78 to runqueue 1
(XEN) Adding cpu 79 to runqueue 1
(XEN) Adding cpu 80 to runqueue 1
(X(ENXE) N) *** DOUBLE FAULT ***
(XEN) Assertion 'debug->cpu == smp_processor_id()' failed at 
spinlock.c:88

(XEN) [ Xen-4.14-unstable  x86_64  debug=y   Not tainted ]
(XEN) Debugging connection not set up.
(XEN) CPU:    48
(XEN) [ Xen-4.14-unstable  x86_64  debug=y   Not tainted ]
(XEN) CPU:    0
(XEN) RIP:    e008:[] _spin_unlock+0x40/0x42


So the original problem causes a double fault, but spinlock debugging
causes a subsequent panic.

Can you please retry the tests with the attached patch? It should
result in diagnostic data related to the real problem.


Juergen


Hi Juergen,

After applying your patch, spin_lock still asserts, and the address
82d0bffce880 is not in the xen-syms.


Yes, I had a bug in my modified ASSERT(), but this time the data is
better.



(XEN) Adding cpu 78 to runqueue 1
(XEN) *** DOUBLE FAULT ***
(XEN) [ Xen-4.14-unstable  x86_64  debug=y   Not tainted ]
(XEN) CPU:    49
(XEN) RIP:    e008:[] 82d0bffce880


This seems to be a crash in the stub page of cpu 48.

I don't think this is related to the scheduler, but to stub page
handling.

Can you please try the attached patch?


Juergen


Thank you Juergen, this patch works.

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [ovmf test] 145825: regressions - FAIL

2020-01-08 Thread osstest service owner
flight 145825 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145825/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 145767

version targeted for testing:
 ovmf 972d88726410e21b1fff1a528854202c67e97ef1
baseline version:
 ovmf 70911f1f4aee0366b6122f2b90d367ec0f066beb

Last test of basis   145767  2020-01-08 00:39:09 Z0 days
Failing since145774  2020-01-08 02:50:20 Z0 days5 attempts
Testing same since   145790  2020-01-08 09:10:30 Z0 days4 attempts


People who touched revisions under test:
  Ashish Singhal 
  Pavana.K 
  Siyuan Fu 
  Siyuan, Fu 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 972d88726410e21b1fff1a528854202c67e97ef1
Author: Ashish Singhal 
Date:   Tue Dec 24 10:57:47 2019 +0800

MdeModulePkg: Add EDK2 Platform Boot Manager Protocol

Add edk2 platform boot manager protocol which would have platform
specific refreshes to the auto enumerated as well as NV boot options
for the platform.

Signed-off-by: Ashish Singhal 
Reviewed-by: Ray Ni 

commit c9d72628432126cbce58a48b440e4944baa4beab
Author: Pavana.K 
Date:   Thu Jan 2 20:30:27 2020 +

CryptoPkg: Support for SHA384 & SHA512 RSA signing schemes

BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2389

Currently RSA signing scheme support is available for MD5, SHA-1 or
SHA-256 algorithms. The fix is to extend this support for SHA384 and
SHA512.

Cc: Liming Gao 
Cc: Jian J Wang 
Cc: Bob Feng 

Signed-off-by: Pavana.K 
Reviewed-by: Jian J Wang 

commit 396e791059f37062cbee85696e2b4186ec72a9e3
Author: Siyuan, Fu 
Date:   Fri Jan 3 14:59:27 2020 +0800

UefiCpuPkg: Always load microcode patch on AP processor.

This patch updates the microcode loader to always perform a microcode
detect and load on both BSP and AP processor. This is to fix a potential
microcode revision mismatch issue in below situation:
1. Assume there are two microcode co-exists in flash: one production
   version and one debug version microcode.
2. FIT loads production microcode to BSP and all AP.
3. UefiCpuPkg loader loads debug microcode to BSP, and skip the loading
   on AP.
As a result, different microcode patches are loaded to BSP and AP, and
trigger microcode mismatch error during OS boot.

BZ link: https://bugzilla.tianocore.org/show_bug.cgi?id=2431

Cc: Eric Dong 
Cc: Ray Ni 
Signed-off-by: Siyuan Fu 
Reviewed-by: Eric Dong 

commit 08a475df10b75f84cdeb9b11e38f8eee9b5c048d
Author: Siyuan Fu 
Date:   Fri Jan 3 15:11:51 2020 +0800

UefiCpuPkg: Remove alignment check when calculate microcode size.

This patch removes the unnecessary alignment check on microcode patch
TotalSize introduced by commit d786a172. The TotalSize has already been
checked with 1K alignment and MAX_ADDRESS in previous code as below:

if ( (UINTN)MicrocodeEntryPoint > (MAX_ADDRESS - TotalSize) ||
 ((UINTN)MicrocodeEntryPoint + TotalSize) > MicrocodeEnd ||
 (DataSize & 0x3) != 0 ||
 (TotalSize & (SIZE_1KB - 1)) != 0 ||
 TotalSize < DataSize
   ) {

Cc: Eric Dong 
Cc: Ray Ni 
Cc: Hao A Wu 
Signed-off-by: Siyuan Fu 
Reviewed-by: Ray Ni 
Reviewed-by: Eric Dong 


Re: [Xen-devel] [xen-unstable test] 145796: tolerable FAIL - PUSHED

2020-01-08 Thread Julien Grall
On Wed, 8 Jan 2020 at 21:40, osstest service owner
 wrote:
>
> flight 145796 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/145796/
>
> Failures :-/ but no regressions.
>
> Tests which are failing intermittently (not blocking):
>  test-amd64-amd64-xl-rtds15 guest-saverestore fail in 145773 pass in 
> 145796
>  test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 16 
> guest-start/debianhvm.repeat fail in 145773 pass in 145796
>  test-armhf-armhf-xl-rtds 12 guest-start  fail in 145773 pass in 
> 145796

It looks like this test has been failing for a while (although not reliably).
I looked at a few flights; the cause seems to be the same:

Jan  8 15:02:14.700784 (XEN) Assertion '!unit_on_replq(svc)' failed at
sched_rt.c:586
Jan  8 15:02:26.715030 (XEN) [ Xen-4.14-unstable  arm32  debug=y
Not tainted ]
Jan  8 15:02:26.720756 (XEN) CPU:1
Jan  8 15:02:26.722158 (XEN) PC: 0023a750
common/sched_rt.c#replq_insert+0x7c/0xcc
Jan  8 15:02:26.727851 (XEN) CPSR:   200300da MODE:Hypervisor
Jan  8 15:02:26.731334 (XEN)  R0: 002a51a4 R1: 400614a0 R2:
3d64b900 R3: 40061338
Jan  8 15:02:26.736830 (XEN)  R4: 400614a0 R5: 002a51a4 R6:
3cf1cbf0 R7: 01cb
Jan  8 15:02:26.742600 (XEN)  R8: 4003d1b0 R9: 400614a8
R10:4003d1b0 R11:400ffe54 R12:400ffde4
Jan  8 15:02:26.749119 (XEN) HYP: SP: 400ffe2c LR: 0023b6e8
Jan  8 15:02:26.752296 (XEN)
Jan  8 15:02:26.753036 (XEN)   VTCR_EL2: 80003558
Jan  8 15:02:26.755479 (XEN)  VTTBR_EL2: 0002bbff4000
Jan  8 15:02:26.758757 (XEN)
Jan  8 15:02:26.759366 (XEN)  SCTLR_EL2: 30cd187f
Jan  8 15:02:26.761755 (XEN)HCR_EL2: 0078663f
Jan  8 15:02:26.764250 (XEN)  TTBR0_EL2: bc029000
Jan  8 15:02:26.767364 (XEN)
Jan  8 15:02:26.767980 (XEN)ESR_EL2: 
Jan  8 15:02:26.770485 (XEN)  HPFAR_EL2: 00030010
Jan  8 15:02:26.772795 (XEN)  HDFAR: e0800f00
Jan  8 15:02:26.775272 (XEN)  HIFAR: c0605744
Jan  8 15:02:26.48 (XEN)
Jan  8 15:02:26.778505 (XEN) Xen stack trace from sp=400ffe2c:
Jan  8 15:02:26.781910 (XEN) 3cf1cbf0 400614a0 002a51a4
3cf1cbf0 01cb 4003d1b0 6003005a
Jan  8 15:02:26.788991 (XEN)400613f8 400ffe7c 0023b6e8 002f9300
4004c000 400613f8 3cf1cbf0 01cb
Jan  8 15:02:26.796093 (XEN)4003d1b0 6003005a 400613f8 400ffeac
00242988 4004c000 002425ac 40058000
Jan  8 15:02:26.803237 (XEN)4004c000 4004f000 10f45000 10f45008
4004b080 40058000 60030013 400ffebc
Jan  8 15:02:26.810360 (XEN)00209984 0002 4004f000 400ffedc
0020eddc 0020caf8 db097cd4 0020
Jan  8 15:02:26.817504 (XEN)c13afbec  db15fd68 400ffee4
0020c9dc 400fff34 0020d5e8 4004e000
Jan  8 15:02:26.824615 (XEN) 400fff44 400fff44 0002
 4004e8fa 4004e8f4 400fff1c
Jan  8 15:02:26.831737 (XEN)400fff1c 6003005a 0020caf8 400fff58
0020 c13afbec  db15fd68
Jan  8 15:02:26.838798 (XEN)60030013 400fff54 0026c150 c1204d08
c13afbec   
Jan  8 15:02:26.845877 (XEN)0002 400fff58 002753b0 0009
db097cd4 db173008 0002 c1204d08
Jan  8 15:02:26.852986 (XEN) 0002 c13afbec 
db15fd68 60030013 db15fd3c 0020
Jan  8 15:02:26.860044 (XEN) b6cdccb3 c0107ed0 a0030093
4a000ea1 be951568 c136edc0 c010d3a0
Jan  8 15:02:26.867171 (XEN)db097cd0 c056c7f8 c136edcc c010d720
c136edd8 c010d7e0  
Jan  8 15:02:26.874526 (XEN)   c136ede4
c136ede4 00030030 60070193 80030093
Jan  8 15:02:26.881450 (XEN)60030193    0001
Jan  8 15:02:26.886519 (XEN) Xen call trace:
Jan  8 15:02:26.888168 (XEN)[<0023a750>]
common/sched_rt.c#replq_insert+0x7c/0xcc (PC)
Jan  8 15:02:26.894240 (XEN)[<0023b6e8>]
common/sched_rt.c#rt_unit_wake+0xf4/0x274 (LR)
Jan  8 15:02:26.900246 (XEN)[<0023b6e8>]
common/sched_rt.c#rt_unit_wake+0xf4/0x274
Jan  8 15:02:26.905775 (XEN)[<00242988>] vcpu_wake+0x1e4/0x688
Jan  8 15:02:26.909743 (XEN)[<00209984>] domain_unpause+0x64/0x84
Jan  8 15:02:26.913956 (XEN)[<0020eddc>]
common/event_fifo.c#evtchn_fifo_unmask+0xd8/0xf0
Jan  8 15:02:26.920167 (XEN)[<0020c9dc>] evtchn_unmask+0x7c/0xc0
Jan  8 15:02:26.924173 (XEN)[<0020d5e8>] do_event_channel_op+0xaf0/0xdac
Jan  8 15:02:26.928922 (XEN)[<0026c150>] do_trap_guest_sync+0x350/0x4d0
Jan  8 15:02:26.933647 (XEN)[<002753b0>] entry.o#return_from_trap+0/0x4
Jan  8 15:02:26.938299 (XEN)
Jan  8 15:02:26.939039 (XEN)
Jan  8 15:02:26.939668 (XEN) 
Jan  8 15:02:26.943794 (XEN) Panic on CPU 1:
Jan  8 15:02:26.945872 (XEN) Assertion '!unit_on_replq(svc)' failed at
sched_rt.c:586
Jan  8 15:02:26.951492 (XEN) 

I believe the domain_unpause() is coming from guest_clear_bit(). This
would mean the atomics didn't succeed without pausing the domain. This
makes sense as, per the log:

 CPU1: Guest atomics will try 1 times before pausing the domain

I am under the impression that 

[Xen-devel] [xen-unstable-smoke test] 145822: tolerable all pass - PUSHED

2020-01-08 Thread osstest service owner
flight 145822 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145822/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  c6c63b6dbffcdf32a59efa1fd6e578437fba06ff
baseline version:
 xen  00691c6c90b2fd28d7b7037baeb288f6801e6182

Last test of basis   145814  2020-01-08 18:00:23 Z0 days
Testing same since   145822  2020-01-08 21:03:04 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Jan Beulich 
  Juergen Gross 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   00691c6c90..c6c63b6dbf  c6c63b6dbffcdf32a59efa1fd6e578437fba06ff -> smoke


[Xen-devel] [qemu-mainline test] 145823: regressions - FAIL

2020-01-08 Thread osstest service owner
flight 145823 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145823/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm   6 xen-buildfail REGR. vs. 144861
 build-arm64   6 xen-buildfail REGR. vs. 144861
 build-amd64   6 xen-buildfail REGR. vs. 144861
 build-amd64-xsm   6 xen-buildfail REGR. vs. 144861
 build-i386-xsm6 xen-buildfail REGR. vs. 144861
 build-i3866 xen-buildfail REGR. vs. 144861
 build-armhf   6 xen-buildfail REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 build-amd64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-pvshim 1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-amd64-amd64-xl-pvshim1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 

[Xen-devel] [xen-unstable test] 145796: tolerable FAIL - PUSHED

2020-01-08 Thread osstest service owner
flight 145796 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145796/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds15 guest-saverestore fail in 145773 pass in 145796
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 16 
guest-start/debianhvm.repeat fail in 145773 pass in 145796
 test-armhf-armhf-xl-rtds 12 guest-start  fail in 145773 pass in 145796
 test-armhf-armhf-xl   7 xen-boot   fail pass in 145773

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl 13 migrate-support-check fail in 145773 never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail in 145773 never pass
 test-amd64-amd64-xl-rtds 18 guest-localmigrate/x10   fail  like 145725
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 145725
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 145725
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 145725
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 145725
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeatfail  like 145725
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 145725
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 145725
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 145725
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 145725
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 145725
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass

version targeted for testing:
 xen  4dde27b6e0a0b0dcb8fdfc7580fbd9c976aa103f
baseline version:
 xen  

[Xen-devel] [ovmf test] 145817: regressions - FAIL

2020-01-08 Thread osstest service owner
flight 145817 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145817/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 145767

version targeted for testing:
 ovmf 972d88726410e21b1fff1a528854202c67e97ef1
baseline version:
 ovmf 70911f1f4aee0366b6122f2b90d367ec0f066beb

Last test of basis   145767  2020-01-08 00:39:09 Z0 days
Failing since145774  2020-01-08 02:50:20 Z0 days4 attempts
Testing same since   145790  2020-01-08 09:10:30 Z0 days3 attempts


People who touched revisions under test:
  Ashish Singhal 
  Pavana.K 
  Siyuan Fu 
  Siyuan, Fu 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 972d88726410e21b1fff1a528854202c67e97ef1
Author: Ashish Singhal 
Date:   Tue Dec 24 10:57:47 2019 +0800

MdeModulePkg: Add EDK2 Platform Boot Manager Protocol

Add edk2 platform boot manager protocol which would have platform
specific refreshes to the auto enumerated as well as NV boot options
for the platform.

Signed-off-by: Ashish Singhal 
Reviewed-by: Ray Ni 

commit c9d72628432126cbce58a48b440e4944baa4beab
Author: Pavana.K 
Date:   Thu Jan 2 20:30:27 2020 +

CryptoPkg: Support for SHA384 & SHA512 RSA signing schemes

BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2389

Currently RSA signing scheme support is available for MD5, SHA-1 or
SHA-256 algorithms. The fix is to extend this support for SHA384 and
SHA512.

Cc: Liming Gao 
Cc: Jian J Wang 
Cc: Bob Feng 

Signed-off-by: Pavana.K 
Reviewed-by: Jian J Wang 

commit 396e791059f37062cbee85696e2b4186ec72a9e3
Author: Siyuan, Fu 
Date:   Fri Jan 3 14:59:27 2020 +0800

UefiCpuPkg: Always load microcode patch on AP processor.

This patch updates the microcode loader to always perform a microcode
detect and load on both BSP and AP processor. This is to fix a potential
microcode revision mismatch issue in below situation:
1. Assume there are two microcode co-exists in flash: one production
   version and one debug version microcode.
2. FIT loads production microcode to BSP and all AP.
3. UefiCpuPkg loader loads debug microcode to BSP, and skip the loading
   on AP.
As a result, different microcode patches are loaded to BSP and AP, and
trigger microcode mismatch error during OS boot.

BZ link: https://bugzilla.tianocore.org/show_bug.cgi?id=2431

Cc: Eric Dong 
Cc: Ray Ni 
Signed-off-by: Siyuan Fu 
Reviewed-by: Eric Dong 

commit 08a475df10b75f84cdeb9b11e38f8eee9b5c048d
Author: Siyuan Fu 
Date:   Fri Jan 3 15:11:51 2020 +0800

UefiCpuPkg: Remove alignment check when calculate microcode size.

This patch removes the unnecessary alignment check on microcode patch
TotalSize introduced by commit d786a172. The TotalSize has already been
checked with 1K alignment and MAX_ADDRESS in previous code as below:

if ( (UINTN)MicrocodeEntryPoint > (MAX_ADDRESS - TotalSize) ||
 ((UINTN)MicrocodeEntryPoint + TotalSize) > MicrocodeEnd ||
 (DataSize & 0x3) != 0 ||
 (TotalSize & (SIZE_1KB - 1)) != 0 ||
 TotalSize < DataSize
   ) {

Cc: Eric Dong 
Cc: Ray Ni 
Cc: Hao A Wu 
Signed-off-by: Siyuan Fu 
Reviewed-by: Ray Ni 
Reviewed-by: Eric Dong 

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org

Re: [Xen-devel] [RFC PATCH V2 09/11] xen: Clear IRQD_IRQ_STARTED flag during shutdown PIRQs

2020-01-08 Thread Anchal Agarwal
On Wed, Jan 08, 2020 at 04:23:25PM +0100, Thomas Gleixner wrote:
> Anchal Agarwal  writes:
> 
> > shutdown_pirq is invoked during hibernation path and hence
> > PIRQs should be restarted during resume.
> > Before this commit'020db9d3c1dc0a' xen/events: Fix interrupt lost
> > during irq_disable and irq_enable startup_pirq was automatically
> > called during irq_enable however, after this commit pirq's did not
> > get explicitly started once resumed from hibernation.
> >
> > chip->irq_startup is called only if IRQD_IRQ_STARTED is unset during
> > irq_startup on resume. This flag gets cleared by free_irq->irq_shutdown
> > during suspend. free_irq() never gets explicitly called for ioapic-edge
> > and ioapic-level interrupts as respective drivers do nothing during
> > suspend/resume. So we shut them down explicitly in the first place in
> > syscore_suspend path to clear IRQ<>event channel mapping. shutdown_pirq
> > being called explicitly during suspend does not clear this flags, hence
> > .irq_enable is called in irq_startup during resume instead and pirq's
> > never start up.
> 
> What? 
> 
> > +void irq_state_clr_started(struct irq_desc *desc)
> >  {
> > irqd_clear(&desc->irq_data, IRQD_IRQ_STARTED);
> >  }
> > +EXPORT_SYMBOL_GPL(irq_state_clr_started);
> 
> This is core internal state and not supposed to be fiddled with by
> drivers.
> 
> irq_chip has irq_suspend/resume/pm_shutdown callbacks for a reason.
>
I agree. As mentioned in the previous patch ([RFC PATCH V2 08/11]), this is
one way of explicitly shutting down legacy devices without introducing too
much code for each of them. For example, in the case of the floppy there
is no suspend/freeze handler which would have done the needful.
Either we implement those handlers for all the legacy devices that are
missing them, or we explicitly shut down the PIRQs. I have chosen the
latter for simplicity. I understand that ideally we should enable/disable
device interrupts in the drivers' suspend/resume callbacks, but that
requires adding code to a few drivers [and I may not know all of them
either].

Now I discovered, in the hibernation_platform_enter flow under resume
devices, that for such devices irq_startup is called, which checks the
IRQD_IRQ_STARTED flag and based on that calls irq_enable or irq_startup.
They are only fully restarted if the flag is not set, and the flag is
normally cleared during shutdown. shutdown_pirq does not clear it.
Masking/unmasking the evtchn alone does not work, as the pirq needs to be
restarted: xen-pirq.enable_irq is called rather than startup_pirq. If these
pirqs (in this case the ACPI SCI interrupt) are not restarted on resume, I
do not see any interrupts under cat /proc/interrupts even though the host
keeps generating S4 ACPI events.
Does that make sense?

Thanks,
Anchal
> Thanks,
> 
>tglx


[Xen-devel] [qemu-mainline test] 145816: regressions - FAIL

2020-01-08 Thread osstest service owner
flight 145816 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145816/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm   6 xen-buildfail REGR. vs. 144861
 build-arm64   6 xen-buildfail REGR. vs. 144861
 build-amd64   6 xen-buildfail REGR. vs. 144861
 build-amd64-xsm   6 xen-buildfail REGR. vs. 144861
 build-i386-xsm6 xen-buildfail REGR. vs. 144861
 build-i3866 xen-buildfail REGR. vs. 144861
 build-armhf   6 xen-buildfail REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 build-i386-libvirt1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked 
n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)  blocked n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-i386-xl-pvshim 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 

[Xen-devel] [xen-unstable-smoke test] 145814: tolerable all pass - PUSHED

2020-01-08 Thread osstest service owner
flight 145814 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145814/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  00691c6c90b2fd28d7b7037baeb288f6801e6182
baseline version:
 xen  4dde27b6e0a0b0dcb8fdfc7580fbd9c976aa103f

Last test of basis   145752  2020-01-07 18:00:34 Z1 days
Failing since145806  2020-01-08 15:00:58 Z0 days2 attempts
Testing same since   145814  2020-01-08 18:00:23 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Anthony PERARD 
  George Dunlap 
  Jan Beulich 
  Juergen Gross 
  Marek Marczykowski-Górecki 
  Wei Liu 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   4dde27b6e0..00691c6c90  00691c6c90b2fd28d7b7037baeb288f6801e6182 -> smoke

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [...], USB-passthru only works with qemu-traditional

2020-01-08 Thread Jason Andryuk
On Tue, Jan 7, 2020 at 2:36 PM Steffen Einsle  wrote:
>
> Hello,
>
> you're probably right about the malformed commandline for USB-passthru:
> With upstream qemu I get
>
> qemu-system-x86_64: -usbdevice tablet: '-usbdevice' is deprecated,
> please use '-device usb-...' instead
> qemu-system-x86_64: -usbdevice host:0d46:3003: '-usbdevice' is
> deprecated, please use '-device usb-...' instead
> qemu-system-x86_64: -usbdevice host:0d46:3003: could not add USB device
> 'host:0d46:3003'

QEMU (as of 2.12?) no longer parses 'host:0d46:3003'.  You need to
supply arguments like this:
-device usb-host,vendorid=0x0d46,productid=0x3003

qemu-system-x86_64 -device qemu-xhci -device
usb-host,vendorid=0x0d46,productid=0x3003

libxl needs to be modified to change the arguments.  You *might* be
able to sneak around it by setting
device_model_args=["-device
usb-host,vendorid=0x0d46,productid=0x3003"] and dropping the host
device from usbdevice.
It may need to be
device_model_args=["-device","usb-host,vendorid=0x0d46,productid=0x3003"]
if spaces in arguments aren't handled properly.
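
The translation described above — from the legacy `host:VID:PID` syntax to the
`-device usb-host,...` arguments modern QEMU expects — can be sketched as
follows. This is an illustrative helper only (`usbdev_to_qemu_args` is
hypothetical, not libxl code):

```python
def usbdev_to_qemu_args(spec):
    """Translate a legacy xl usbdevice entry into modern QEMU -device
    arguments. Hypothetical helper sketching the mapping libxl would
    need to perform; not actual libxl code."""
    if spec == "tablet":
        return ["-device", "usb-tablet"]
    if spec.startswith("host:"):
        # 'host:0d46:3003' -> vendorid=0x0d46, productid=0x3003
        vendor, product = spec[len("host:"):].split(":")
        return ["-device",
                f"usb-host,vendorid=0x{vendor},productid=0x{product}"]
    raise ValueError(f"unrecognised usbdevice spec: {spec!r}")

print(usbdev_to_qemu_args("host:0d46:3003"))
# → ['-device', 'usb-host,vendorid=0x0d46,productid=0x3003']
```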

Regards,
Jason

> I'm not quite sure if this ever worked (without trad), but if it did, it
> was some years ago... perhaps at the times of xen 4.1 ?
>
>
> Am 06.01.2020 um 11:23 schrieb Durrant, Paul:
> >> -Original Message-
> >> From: win-pv-devel  On Behalf
> >> Of Steffen Einsle
> >> Sent: 05 January 2020 00:44
> >> To: win-pv-de...@lists.xenproject.org
> >> Subject: [win-pv-devel] Driver 9.0.0 no keyboard in vncviewer, USB-
> >> passthru only with qemu-traditional
> >>
> >> Hello,
> >>
> >> I just installed a Windows 2019 Server with the new 9.0.0 PV drivers
> >> under xen 4.12.1. I use gentoo and since I need usb-passthru I have to
> >> use the qemu-traditional useflag (or device_model_version =
> >> 'qemu-xen-traditional').
> >>
> >> - USB-passthru works only with qemu-traditional
> > That seems odd, but I guess not many people use USB passthru so it could 
> > have got broken with upstream somewhere along the way.
> >> Is there a general trick to get USB-passthru working with qemu-xen?
> >> (without qemu-traditional my usbdevice = ['tablet', 'host:0d46:3003']
> >> prevents domu creation - device-model-exited-error)
> >I think that is probably something to post on xen-users or xen-devel. 
> > Have you ever had USB passthrough working with upstream QEMU? There's 
> > nothing at https://wiki.xenproject.org/wiki/Xen_USB_Passthrough to suggest 
> > it is only supported using trad so if it is broken it needs fixing. What 
> > does your qemu log (under /var/log/xen) say was the reason for failure? 
> > (I'm guessing it was probably malformed command line, which would mean 
> > there's a bug in libxl).
> > Paul
>
>
>


Re: [Xen-devel] [PATCH v2 00/20] VM forking

2020-01-08 Thread Tamas K Lengyel
On Wed, Jan 8, 2020 at 11:37 AM Roger Pau Monné  wrote:
>
> On Wed, Jan 08, 2020 at 11:14:46AM -0700, Tamas K Lengyel wrote:
> > On Wed, Jan 8, 2020 at 11:01 AM Roger Pau Monné  
> > wrote:
> > >
> > > On Wed, Jan 08, 2020 at 08:32:22AM -0700, Tamas K Lengyel wrote:
> > > > On Wed, Jan 8, 2020 at 8:08 AM Roger Pau Monné  
> > > > wrote:
> > > > >
> > > > > On Tue, Dec 31, 2019 at 09:36:01AM -0700, Tamas K Lengyel wrote:
> > > > > > On Tue, Dec 31, 2019 at 9:08 AM Tamas K Lengyel 
> > > > > >  wrote:
> > > > > > >
> > > > > > > On Tue, Dec 31, 2019 at 8:11 AM Roger Pau Monné 
> > > > > > >  wrote:
> > > > > > > >
> > > > > > > > On Tue, Dec 31, 2019 at 08:00:17AM -0700, Tamas K Lengyel wrote:
> > > > > > > > > On Tue, Dec 31, 2019 at 3:40 AM Roger Pau Monné 
> > > > > > > > >  wrote:
> > > > > > > > > >
> > > > > > > > > > On Mon, Dec 30, 2019 at 05:37:38PM -0700, Tamas K Lengyel 
> > > > > > > > > > wrote:
> > > > > > > > > > > On Mon, Dec 30, 2019 at 5:20 PM Julien Grall 
> > > > > > > > > > >  wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > Hi,
> > > > > > > > > > > >
> > > > > > > > > > > > On Mon, 30 Dec 2019, 20:49 Tamas K Lengyel, 
> > > > > > > > > > > >  wrote:
> > > > > > > > > > > >>
> > > > > > > > > > > >> On Mon, Dec 30, 2019 at 11:43 AM Julien Grall 
> > > > > > > > > > > >>  wrote:
> > > > > > > > > > > >> But keep in mind that the "fork-vm" command even with 
> > > > > > > > > > > >> this update
> > > > > > > > > > > >> would still not produce for you a "fully functional" 
> > > > > > > > > > > >> VM on its own.
> > > > > > > > > > > >> The user still has to produce a new VM config file, 
> > > > > > > > > > > >> create the new
> > > > > > > > > > > >> disk, save the QEMU state, etc.
> > > > > > > > > >
> > > > > > > > > > IMO the default behavior of the fork command should be to 
> > > > > > > > > > leave the
> > > > > > > > > > original VM paused, so that you can continue using the same 
> > > > > > > > > > disk and
> > > > > > > > > > network config in the fork and you won't need to pass a new 
> > > > > > > > > > config
> > > > > > > > > > file.
> > > > > > > > > >
> > > > > > > > > > As Julien already said, maybe I wasn't clear in my previous 
> > > > > > > > > > replies:
> > > > > > > > > > I'm not asking you to implement all this, it's fine if the
> > > > > > > > > > implementation of the fork-vm xl command requires you to 
> > > > > > > > > > pass certain
> > > > > > > > > > options, and that the default behavior is not implemented.
> > > > > > > > > >
> > > > > > > > > > We need an interface that's sane, and that's designed to be 
> > > > > > > > > > easy and
> > > > > > > > > > comprehensive to use, not an interface built around what's 
> > > > > > > > > > currently
> > > > > > > > > > implemented.
> > > > > > > > >
> > > > > > > > > OK, so I think that would look like "xl fork-vm <parent_domid>" with
> > > > > > > > > additional options for things like name, disk, vlan, or a 
> > > > > > > > > completely
> > > > > > > > > new config, all of which are currently not implemented, + an
> > > > > > > > > additional option to not launch QEMU at all, which would be 
> > > > > > > > > the only
> > > > > > > > > one currently working. Also keeping the separate "xl 
> > > > > > > > > fork-launch-dm"
> > > > > > > > > as is. Is that what we are talking about?
> > > > > > > >
> > > > > > > > I think fork-launch-vm should just be an option of fork-vm (ie:
> > > > > > > > --launch-dm-only or some such). I don't think there's a reason 
> > > > > > > > to have
> > > > > > > > a separate top-level command to just launch the device model.
> > > > > > >
> > > > > > > It's just that the fork-launch-dm needs the domid of the fork, 
> > > > > > > while
> > > > > > > the fork-vm needs the parent's domid. But I guess we can 
> > > > > > > interpret the
> > > > > > > "domid" required input differently depending on which sub-option 
> > > > > > > is
> > > > > > > specified for the command. Let's see how it pans out.
> > > > > >
> > > > > > How does the following look for the interface?
> > > > > >
> > > > > > { "fork-vm",
> > > > > >   &main_fork_vm, 0, 1,
> > > > > >   "Fork a domain from the running parent domid",
> > > > > >   "[options] <parent_domid>",
> > > > > >   "-h                 Print this help.\n"
> > > > > >   "-N <name>          Assign name to VM fork.\n"
> > > > > >   "-D <disk>          Assign disk to VM fork.\n"
> > > > > >   "-B <bridge>        Assign bridge to VM fork.\n"
> > > > > >   "-V <vlan>          Assign vlan to VM fork.\n"
> > > > >
> > > > > IMO I think the name of fork is the only useful option. Being able to
> > > > > assign disks or bridges from the command line seems quite complicated.
> > > > > What about VMs with multiple disks? Or VMs with multiple nics on
> > > > > different bridges?
> > > > >
> > > > > I think it's easier for both the implementation and the user to just
> > > > > use a config file in that case.
> > > >
> > > > I agree but it sounded to me you 

Re: [Xen-devel] [PATCH v2 00/20] VM forking

2020-01-08 Thread Tamas K Lengyel
On Wed, Jan 8, 2020 at 11:44 AM Roger Pau Monné  wrote:
>
> On Wed, Jan 08, 2020 at 11:23:29AM -0700, Tamas K Lengyel wrote:
> > > > > > Why do you need a config file for launching the Qemu device model?
> > > > > > Doesn't the save-file contain all the information?
> > > > >
> > > > > The config is used to populate xenstore, not just for QEMU. The QEMU
> > > > > save file doesn't contain the xl config. This is not a full VM save
> > > > > file, it is only the QEMU state that gets dumped with
> > > > > xen-save-devices-state.
> > > >
> > > > TBH I think it would be easier to have something like my proposal
> > > > below, where you tell xl the parent and the forked VM names and xl
> > > > does the rest. Even better would be to not have to tell xl the parent
> > > > VM name (since I guess this is already tracked internally somewhere?).
> > >
> > > The forked VM has no "name" when it's created. For performance reasons
> > > when the VM fork is created with "--launch-dm no" we explicitly want
> > > to avoid touching Xenstore. Even parsing the config file would be
> > > unneeded overhead at that stage.
> >
> > And to answer your question, no, the parent VM's name is not recorded
> > anywhere for the fork. Technically not even the parent's domain id is
> > kept by Xen. The fork only keeps a pointer to the parent's "struct
> > domain"
>
> There's the domain_id field inside of struct domain, so it seems quite
> easy to get the parent domid from the fork if there's a pointer to the
> parent's struct domain.
>
> > So right now there is no hypercall interface to retrieve a
> > fork's parent's ID - it is assumed the tools using the interface are
> > keeping track of that. Could this information be dumped into Xenstore
> > as well? Yes. But we specifically want to be able to create the fork as
> > fast as possible without any unnecessary overhead.
>
> I think it would be nice to identify forked domains using
> XEN_DOMCTL_getdomaininfo: you could add a parent_domid field to
> xen_domctl_getdomaininfo and if it's set to something different than
> DOMID_INVALID then the domain is a fork of the given domid.
>
> Not saying it should be done now, but AFAICT getting the parent's
> domid is feasible and doesn't require xenstore.
>

Of course it could be done. I was just pointing out that it's not
currently kept separately and there is no interface to retrieve it.
But TBH I have lost the train of thought on why we would need that in the
first place? When QEMU is being launched the fork is already created
and QEMU doesn't need to know anything about the parent.

Tamas


Re: [Xen-devel] [PATCH 3/6] x86/boot: Remove the preconstructed low 16M superpage mappings

2020-01-08 Thread Andrew Cooper
On 08/01/2020 11:23, Jan Beulich wrote:
>>>  This would then also ease shrinking the build
>>> time mappings further, e.g. to the low 1Mb (instead of touching
>>> several of the places you touch now, it would again mainly be an
>>> adjustment to BOOTSTRAP_MAP_BASE, alongside the assembly file
>>> changes needed).
>> ... as you correctly identify here, it is a property of the prebuilt
>> tables (in l?_identmap[]), not a property of where we chose to put the
>> dynamic boot mappings (in the l?_bootmap[]).  Another change (blocked
>> behind the above bug) moves BOOTSTRAP_MAP_BASE to be 1G to reduce the
>> chance of an offset from a NULL pointer hitting a present mapping.
> Right, BOOTSTRAP_MAP_BASE was (ab)used for a 2nd purpose. But this
> would better be dealt with by introducing a new manifest constant
> (e.g. PREBUILT_MAP_LIMIT) instead of open-coding 2Mb everywhere.

I'm hoping to get rid of even this, (although it is complicated by
CONFIG_VIDEO's blind use of the legacy VGA range).

> Plus there's (aiui) a PREBUILT_MAP_LIMIT <= BOOTSTRAP_MAP_BASE
> requirement, which would better be verified (e.g. by a BUILD_BUG_ON())
> then.

Is there?  I don't see a real connection between the two, even in this
patch.

~Andrew


[Xen-devel] [ovmf test] 145799: regressions - FAIL

2020-01-08 Thread osstest service owner
flight 145799 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145799/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 145767

version targeted for testing:
 ovmf 972d88726410e21b1fff1a528854202c67e97ef1
baseline version:
 ovmf 70911f1f4aee0366b6122f2b90d367ec0f066beb

Last test of basis   145767  2020-01-08 00:39:09 Z0 days
Failing since145774  2020-01-08 02:50:20 Z0 days3 attempts
Testing same since   145790  2020-01-08 09:10:30 Z0 days2 attempts


People who touched revisions under test:
  Ashish Singhal 
  Pavana.K 
  Siyuan Fu 
  Siyuan, Fu 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  fail



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 972d88726410e21b1fff1a528854202c67e97ef1
Author: Ashish Singhal 
Date:   Tue Dec 24 10:57:47 2019 +0800

MdeModulePkg: Add EDK2 Platform Boot Manager Protocol

Add edk2 platform boot manager protocol which would have platform
specific refreshes to the auto enumerated as well as NV boot options
for the platform.

Signed-off-by: Ashish Singhal 
Reviewed-by: Ray Ni 

commit c9d72628432126cbce58a48b440e4944baa4beab
Author: Pavana.K 
Date:   Thu Jan 2 20:30:27 2020 +

CryptoPkg: Support for SHA384 & SHA512 RSA signing schemes

BZ: https://bugzilla.tianocore.org/show_bug.cgi?id=2389

Currently RSA signing scheme support is available for MD5, SHA-1 or
SHA-256 algorithms.The fix is to extend this support for SHA384 and
SHA512.

Cc: Liming Gao 
Cc: Jian J Wang 
Cc: Bob Feng 

Signed-off-by: Pavana.K 
Reviewed-by: Jian J Wang 

commit 396e791059f37062cbee85696e2b4186ec72a9e3
Author: Siyuan, Fu 
Date:   Fri Jan 3 14:59:27 2020 +0800

UefiCpuPkg: Always load microcode patch on AP processor.

This patch updates the microcode loader to always perform a microcode
detect and load on both BSP and AP processor. This is to fix a potential
microcode revision mismatch issue in below situation:
1. Assume there are two microcode co-exists in flash: one production
   version and one debug version microcode.
2. FIT loads production microcode to BSP and all AP.
3. UefiCpuPkg loader loads debug microcode to BSP, and skip the loading
   on AP.
As a result, different microcode patches are loaded to BSP and AP, and
trigger microcode mismatch error during OS boot.

BZ link: https://bugzilla.tianocore.org/show_bug.cgi?id=2431

Cc: Eric Dong 
Cc: Ray Ni 
Signed-off-by: Siyuan Fu 
Reviewed-by: Eric Dong 

commit 08a475df10b75f84cdeb9b11e38f8eee9b5c048d
Author: Siyuan Fu 
Date:   Fri Jan 3 15:11:51 2020 +0800

UefiCpuPkg: Remove alignment check when calculate microcode size.

This patch removes the unnecessary alignment check on microcode patch
TotalSize introduced by commit d786a172. The TotalSize has already been
checked with 1K alignment and MAX_ADDRESS in previous code as below:

if ( (UINTN)MicrocodeEntryPoint > (MAX_ADDRESS - TotalSize) ||
 ((UINTN)MicrocodeEntryPoint + TotalSize) > MicrocodeEnd ||
 (DataSize & 0x3) != 0 ||
 (TotalSize & (SIZE_1KB - 1)) != 0 ||
 TotalSize < DataSize
   ) {

Cc: Eric Dong 
Cc: Ray Ni 
Cc: Hao A Wu 
Signed-off-by: Siyuan Fu 
Reviewed-by: Ray Ni 
Reviewed-by: Eric Dong 
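
The `(TotalSize & (SIZE_1KB - 1)) != 0` condition quoted above is the
standard power-of-two alignment test: AND-ing a value with `boundary - 1`
isolates its low-order bits, which must all be zero for the value to be
aligned. A minimal sketch:

```python
SIZE_1KB = 1024

def is_1kb_aligned(total_size):
    # For a power-of-two boundary, (boundary - 1) is an all-ones mask
    # covering the low bits; an aligned value has none of them set.
    return (total_size & (SIZE_1KB - 1)) == 0

print(is_1kb_aligned(2048))  # → True
print(is_1kb_aligned(2052))  # → False
```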


[Xen-devel] [qemu-mainline test] 145808: regressions - FAIL

2020-01-08 Thread osstest service owner
flight 145808 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145808/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm   6 xen-buildfail REGR. vs. 144861
 build-arm64   6 xen-buildfail REGR. vs. 144861
 build-amd64   6 xen-buildfail REGR. vs. 144861
 build-amd64-xsm   6 xen-buildfail REGR. vs. 144861
 build-i386-xsm6 xen-buildfail REGR. vs. 144861
 build-i3866 xen-buildfail REGR. vs. 144861
 build-armhf   6 xen-buildfail REGR. vs. 144861

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl   1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)   blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-pvshim 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)   blocked  n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 build-arm64-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)  blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-shadow 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-shadow1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvshim1 build-check(1)   blocked  n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 build-armhf-libvirt   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-i386-xl-xsm1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 build-i386-libvirt    1 build-check(1)   blocked  n/a

Re: [Xen-devel] [PATCH v2 00/20] VM forking

2020-01-08 Thread Roger Pau Monné
On Wed, Jan 08, 2020 at 11:23:29AM -0700, Tamas K Lengyel wrote:
> > > > > Why do you need a config file for launching the Qemu device model?
> > > > > Doesn't the save-file contain all the information?
> > > >
> > > > The config is used to populate xenstore, not just for QEMU. The QEMU
> > > > save file doesn't contain the xl config. This is not a full VM save
> > > > file, it is only the QEMU state that gets dumped with
> > > > xen-save-devices-state.
> > >
> > > TBH I think it would be easier to have something like my proposal
> > > below, where you tell xl the parent and the forked VM names and xl
> > > does the rest. Even better would be to not have to tell xl the parent
> > > VM name (since I guess this is already tracked internally somewhere?).
> >
> > The forked VM has no "name" when it's created. For performance reasons
> > when the VM fork is created with "--launch-dm no" we explicitly want
> > to avoid touching Xenstore. Even parsing the config file would be
> > unneeded overhead at that stage.
> 
> And to answer your question, no, the parent VM's name is not recorded
> anywhere for the fork. Technically not even the parent's domain id is
> kept by Xen. The fork only keeps a pointer to the parent's "struct
> domain"

There's the domain_id field inside of struct domain, so it seems quite
easy to get the parent domid from the fork if there's a pointer to the
parent's struct domain.

> So right now there is no hypercall interface to retrieve a
> fork's parent's ID - it is assumed the tools using the interface are
> keeping track of that. Could this information be dumped into Xenstore
> as well? Yes. But we specifically want to be able to create the fork as
> fast as possible without any unnecessary overhead.

I think it would be nice to identify forked domains using
XEN_DOMCTL_getdomaininfo: you could add a parent_domid field to
xen_domctl_getdomaininfo and if it's set to something different than
DOMID_INVALID then the domain is a fork of the given domid.

Not saying it should be done now, but AFAICT getting the parent's
domid is feasible and doesn't require xenstore.
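
Roger's suggested convention can be sketched as below. Note that the
`parent_domid` field is hypothetical — it does not exist in
`xen_domctl_getdomaininfo` as of this thread — while `DOMID_INVALID` is
Xen's reserved invalid-domid value:

```python
DOMID_INVALID = 0x7FF4  # Xen's reserved invalid-domid sentinel

def fork_parent(info):
    """Return the parent domid if this getdomaininfo-style record
    describes a fork, else None. 'parent_domid' is a hypothetical
    field, per the suggestion above."""
    parent = info.get("parent_domid", DOMID_INVALID)
    return parent if parent != DOMID_INVALID else None

print(fork_parent({"domain": 5, "parent_domid": 3}))  # → 3
print(fork_parent({"domain": 5}))                     # → None
```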

Roger.


Re: [Xen-devel] [PATCH v2 00/20] VM forking

2020-01-08 Thread Roger Pau Monné
On Wed, Jan 08, 2020 at 11:14:46AM -0700, Tamas K Lengyel wrote:
> On Wed, Jan 8, 2020 at 11:01 AM Roger Pau Monné  wrote:
> >
> > On Wed, Jan 08, 2020 at 08:32:22AM -0700, Tamas K Lengyel wrote:
> > > On Wed, Jan 8, 2020 at 8:08 AM Roger Pau Monné  
> > > wrote:
> > > >
> > > > On Tue, Dec 31, 2019 at 09:36:01AM -0700, Tamas K Lengyel wrote:
> > > > > On Tue, Dec 31, 2019 at 9:08 AM Tamas K Lengyel  
> > > > > wrote:
> > > > > >
> > > > > > On Tue, Dec 31, 2019 at 8:11 AM Roger Pau Monné 
> > > > > >  wrote:
> > > > > > >
> > > > > > > On Tue, Dec 31, 2019 at 08:00:17AM -0700, Tamas K Lengyel wrote:
> > > > > > > > On Tue, Dec 31, 2019 at 3:40 AM Roger Pau Monné 
> > > > > > > >  wrote:
> > > > > > > > >
> > > > > > > > > On Mon, Dec 30, 2019 at 05:37:38PM -0700, Tamas K Lengyel 
> > > > > > > > > wrote:
> > > > > > > > > > On Mon, Dec 30, 2019 at 5:20 PM Julien Grall 
> > > > > > > > > >  wrote:
> > > > > > > > > > >
> > > > > > > > > > > Hi,
> > > > > > > > > > >
> > > > > > > > > > > On Mon, 30 Dec 2019, 20:49 Tamas K Lengyel, 
> > > > > > > > > > >  wrote:
> > > > > > > > > > >>
> > > > > > > > > > >> On Mon, Dec 30, 2019 at 11:43 AM Julien Grall 
> > > > > > > > > > >>  wrote:
> > > > > > > > > > >> But keep in mind that the "fork-vm" command even with 
> > > > > > > > > > >> this update
> > > > > > > > > > >> would still not produce for you a "fully functional" VM 
> > > > > > > > > > >> on its own.
> > > > > > > > > > >> The user still has to produce a new VM config file, 
> > > > > > > > > > >> create the new
> > > > > > > > > > >> disk, save the QEMU state, etc.
> > > > > > > > >
> > > > > > > > > IMO the default behavior of the fork command should be to 
> > > > > > > > > leave the
> > > > > > > > > original VM paused, so that you can continue using the same 
> > > > > > > > > disk and
> > > > > > > > > network config in the fork and you won't need to pass a new 
> > > > > > > > > config
> > > > > > > > > file.
> > > > > > > > >
> > > > > > > > > As Julien already said, maybe I wasn't clear in my previous 
> > > > > > > > > replies:
> > > > > > > > > I'm not asking you to implement all this, it's fine if the
> > > > > > > > > implementation of the fork-vm xl command requires you to pass 
> > > > > > > > > certain
> > > > > > > > > options, and that the default behavior is not implemented.
> > > > > > > > >
> > > > > > > > > We need an interface that's sane, and that's designed to be 
> > > > > > > > > easy and
> > > > > > > > > comprehensive to use, not an interface built around what's 
> > > > > > > > > currently
> > > > > > > > > implemented.
> > > > > > > >
> > > > > > > > OK, so I think that would look like "xl fork-vm <parent_domid>" with
> > > > > > > > additional options for things like name, disk, vlan, or a 
> > > > > > > > completely
> > > > > > > > new config, all of which are currently not implemented, + an
> > > > > > > > additional option to not launch QEMU at all, which would be the 
> > > > > > > > only
> > > > > > > > one currently working. Also keeping the separate "xl 
> > > > > > > > fork-launch-dm"
> > > > > > > > as is. Is that what we are talking about?
> > > > > > >
> > > > > > > I think fork-launch-vm should just be an option of fork-vm (ie:
> > > > > > > --launch-dm-only or some such). I don't think there's a reason to 
> > > > > > > have
> > > > > > > a separate top-level command to just launch the device model.
> > > > > >
> > > > > > It's just that the fork-launch-dm needs the domid of the fork, while
> > > > > > the fork-vm needs the parent's domid. But I guess we can interpret 
> > > > > > the
> > > > > > "domid" required input differently depending on which sub-option is
> > > > > > specified for the command. Let's see how it pans out.
> > > > >
> > > > > How does the following look for the interface?
> > > > >
> > > > > { "fork-vm",
> > > > >   &main_fork_vm, 0, 1,
> > > > >   "Fork a domain from the running parent domid",
> > > > >   "[options] <parent_domid>",
> > > > >   "-h                 Print this help.\n"
> > > > >   "-N <name>          Assign name to VM fork.\n"
> > > > >   "-D <disk>          Assign disk to VM fork.\n"
> > > > >   "-B <bridge>        Assign bridge to VM fork.\n"
> > > > >   "-V <vlan>          Assign vlan to VM fork.\n"
> > > >
> > > > IMO I think the name of fork is the only useful option. Being able to
> > > > assign disks or bridges from the command line seems quite complicated.
> > > > What about VMs with multiple disks? Or VMs with multiple nics on
> > > > different bridges?
> > > >
> > > > I think it's easier for both the implementation and the user to just
> > > > use a config file in that case.
> > >
> > > I agree but it sounded to me you guys wanted to have a "complete"
> > > interface even if it's unimplemented. This is what a complete
> > > interface would look to me.
> >
> > I would add those options afterwards if there's a need for them. I was
> > mainly concerned about introducing a top level command (ie: fork-vm)
> > that 

Re: [Xen-devel] [PATCH v2 00/20] VM forking

2020-01-08 Thread Tamas K Lengyel
> > > > Why do you need a config file for launching the Qemu device model?
> > > > Doesn't the save-file contain all the information?
> > >
> > > The config is used to populate xenstore, not just for QEMU. The QEMU
> > > save file doesn't contain the xl config. This is not a full VM save
> > > file, it is only the QEMU state that gets dumped with
> > > xen-save-devices-state.
> >
> > TBH I think it would be easier to have something like my proposal
> > below, where you tell xl the parent and the forked VM names and xl
> > does the rest. Even better would be to not have to tell xl the parent
> > VM name (since I guess this is already tracked internally somewhere?).
>
> The forked VM has no "name" when it's created. For performance reasons
> when the VM fork is created with "--launch-dm no" we explicitly want
> to avoid touching Xenstore. Even parsing the config file would be
> unneeded overhead at that stage.

And to answer your question, no, the parent VM's name is not recorded
anywhere for the fork. Technically not even the parent's domain id is
kept by Xen. The fork only keeps a pointer to the parent's "struct
domain". So right now there is no hypercall interface to retrieve a
fork's parent's ID - it is assumed the tools using the interface are
keeping track of that. Could this information be dumped into Xenstore
as well? Yes. But we specifically want to be able to create the fork as
fast as possible without any unnecessary overhead.

Tamas


Re: [Xen-devel] [PATCH v2 00/20] VM forking

2020-01-08 Thread Tamas K Lengyel
On Wed, Jan 8, 2020 at 11:01 AM Roger Pau Monné  wrote:
>
> On Wed, Jan 08, 2020 at 08:32:22AM -0700, Tamas K Lengyel wrote:
> > On Wed, Jan 8, 2020 at 8:08 AM Roger Pau Monné  wrote:
> > >
> > > On Tue, Dec 31, 2019 at 09:36:01AM -0700, Tamas K Lengyel wrote:
> > > > On Tue, Dec 31, 2019 at 9:08 AM Tamas K Lengyel  
> > > > wrote:
> > > > >
> > > > > On Tue, Dec 31, 2019 at 8:11 AM Roger Pau Monné 
> > > > >  wrote:
> > > > > >
> > > > > > On Tue, Dec 31, 2019 at 08:00:17AM -0700, Tamas K Lengyel wrote:
> > > > > > > On Tue, Dec 31, 2019 at 3:40 AM Roger Pau Monné 
> > > > > > >  wrote:
> > > > > > > >
> > > > > > > > On Mon, Dec 30, 2019 at 05:37:38PM -0700, Tamas K Lengyel wrote:
> > > > > > > > > On Mon, Dec 30, 2019 at 5:20 PM Julien Grall 
> > > > > > > > >  wrote:
> > > > > > > > > >
> > > > > > > > > > Hi,
> > > > > > > > > >
> > > > > > > > > > On Mon, 30 Dec 2019, 20:49 Tamas K Lengyel, 
> > > > > > > > > >  wrote:
> > > > > > > > > >>
> > > > > > > > > >> On Mon, Dec 30, 2019 at 11:43 AM Julien Grall 
> > > > > > > > > >>  wrote:
> > > > > > > > > >> But keep in mind that the "fork-vm" command even with this 
> > > > > > > > > >> update
> > > > > > > > > >> would still not produce for you a "fully functional" VM on 
> > > > > > > > > >> its own.
> > > > > > > > > >> The user still has to produce a new VM config file, create 
> > > > > > > > > >> the new
> > > > > > > > > >> disk, save the QEMU state, etc.
> > > > > > > >
> > > > > > > > IMO the default behavior of the fork command should be to leave 
> > > > > > > > the
> > > > > > > > original VM paused, so that you can continue using the same 
> > > > > > > > disk and
> > > > > > > > network config in the fork and you won't need to pass a new 
> > > > > > > > config
> > > > > > > > file.
> > > > > > > >
> > > > > > > > As Julien already said, maybe I wasn't clear in my previous 
> > > > > > > > replies:
> > > > > > > > I'm not asking you to implement all this, it's fine if the
> > > > > > > > implementation of the fork-vm xl command requires you to pass 
> > > > > > > > certain
> > > > > > > > options, and that the default behavior is not implemented.
> > > > > > > >
> > > > > > > > We need an interface that's sane, and that's designed to be 
> > > > > > > > easy and
> > > > > > > > comprehensive to use, not an interface built around what's 
> > > > > > > > currently
> > > > > > > > implemented.
> > > > > > >
> > > > > > > OK, so I think that would look like "xl fork-vm " 
> > > > > > > with
> > > > > > > additional options for things like name, disk, vlan, or a 
> > > > > > > completely
> > > > > > > new config, all of which are currently not implemented, + an
> > > > > > > additional option to not launch QEMU at all, which would be the 
> > > > > > > only
> > > > > > > one currently working. Also keeping the separate "xl 
> > > > > > > fork-launch-dm"
> > > > > > > as is. Is that what we are talking about?
> > > > > >
> > > > > > I think fork-launch-vm should just be an option of fork-vm (ie:
> > > > > > --launch-dm-only or some such). I don't think there's a reason to 
> > > > > > have
> > > > > > a separate top-level command to just launch the device model.
> > > > >
> > > > > It's just that the fork-launch-dm needs the domid of the fork, while
> > > > > the fork-vm needs the parent's domid. But I guess we can interpret the
> > > > > "domid" required input differently depending on which sub-option is
> > > > > specified for the command. Let's see how it pans out.
> > > >
> > > > How does the following look for the interface?
> > > >
> > > > { "fork-vm",
> > > >   _fork_vm, 0, 1,
> > > >   "Fork a domain from the running parent domid",
> > > >   "[options] ",
> > > >   "-h   Print this help.\n"
> > > >   "-N Assign name to VM fork.\n"
> > > >   "-D Assign disk to VM fork.\n"
> > > >   "-B  > > >   "-V Assign vlan to VM fork.\n"
> > >
> > > IMO I think the name of fork is the only useful option. Being able to
> > > assign disks or bridges from the command line seems quite complicated.
> > > What about VMs with multiple disks? Or VMs with multiple nics on
> > > different bridges?
> > >
> > > I think it's easier for both the implementation and the user to just
> > > use a config file in that case.
> >
> > I agree but it sounded to me you guys wanted to have a "complete"
> > interface even if it's unimplemented. This is what a complete
> > interface would look to me.
>
> I would add those options afterwards if there's a need for them. I was
> mainly concerned about introducing a top level command (ie: fork-vm)
> that would require calling other commands in order to get a functional
> fork. I'm not so concerned about having all the possible options
> listed now, as long as the default behavior of fork-vm is something
> sane that produces a working fork, even if not fully implemented at
> this stage.

OK

> > > Why do you need 

Re: [Xen-devel] [PATCH] x86/flush: use APIC ALLBUT destination shorthand when possible

2020-01-08 Thread Roger Pau Monné
On Wed, Jan 08, 2020 at 02:54:57PM +0100, Jan Beulich wrote:
> On 08.01.2020 14:30, Roger Pau Monné  wrote:
> > On Fri, Jan 03, 2020 at 01:55:51PM +0100, Jan Beulich wrote:
> >> On 03.01.2020 13:34, Roger Pau Monné wrote:
> >>> On Fri, Jan 03, 2020 at 01:08:20PM +0100, Jan Beulich wrote:
>  On 24.12.2019 13:44, Roger Pau Monne wrote:
>  Further a question on lock nesting: Since the commit message
>  doesn't say anything in this regard, did you check there are no
>  TLB flush invocations with the get_cpu_maps() lock held?
> >>>
> >>> The CPU maps lock is a recursive one, so it should be fine to attempt
> >>> a TLB flush with the lock already held.
> >>
> >> When already held by the same CPU - sure. It being a recursive
> >> one (which I paid attention to when writing my earlier reply)
> >> doesn't make it (together with any other one) immune against
> >> ABBA deadlocks, though.
> > 
> > There's no possibility of a deadlock here because get_cpu_maps does a
> > trylock, so if another cpu is holding the lock the flush will just
> > fallback to not using the shorthand.
> 
> Well, with the _exact_ arrangements (flush_lock used only in one
> place, and cpu_add_remove_lock only used in ways similar to how it
> is used now), there's no such risk, I agree. But there's nothing
> at all making sure this doesn't change. Hence, as said, at the very
> least this needs reasoning about in the description (or a code
> comment).

I'm afraid you will have to bear with me, but I'm still not sure how
the addition of get_cpu_maps in flush_area_mask can lead to deadlocks.
As said above get_cpu_maps does a trylock, which means that it will
never deadlock, and that's the only way to lock cpu_add_remove_lock.

Thanks, Roger.


Re: [Xen-devel] [PATCH v2 00/20] VM forking

2020-01-08 Thread Roger Pau Monné
On Wed, Jan 08, 2020 at 04:34:49PM +, George Dunlap wrote:
> On 12/31/19 3:11 PM, Roger Pau Monné wrote:
> > On Tue, Dec 31, 2019 at 08:00:17AM -0700, Tamas K Lengyel wrote:
> >> On Tue, Dec 31, 2019 at 3:40 AM Roger Pau Monné  
> >> wrote:
> >>>
> >>> On Mon, Dec 30, 2019 at 05:37:38PM -0700, Tamas K Lengyel wrote:
>  On Mon, Dec 30, 2019 at 5:20 PM Julien Grall  
>  wrote:
> >
> > Hi,
> >
> > On Mon, 30 Dec 2019, 20:49 Tamas K Lengyel,  wrote:
> >>
> >> On Mon, Dec 30, 2019 at 11:43 AM Julien Grall  wrote:
> >> But keep in mind that the "fork-vm" command even with this update
> >> would still not produce for you a "fully functional" VM on its own.
> >> The user still has to produce a new VM config file, create the new
> >> disk, save the QEMU state, etc.
> >>>
> >>> IMO the default behavior of the fork command should be to leave the
> >>> original VM paused, so that you can continue using the same disk and
> >>> network config in the fork and you won't need to pass a new config
> >>> file.
> >>>
> >>> As Julien already said, maybe I wasn't clear in my previous replies:
> >>> I'm not asking you to implement all this, it's fine if the
> >>> implementation of the fork-vm xl command requires you to pass certain
> >>> options, and that the default behavior is not implemented.
> >>>
> >>> We need an interface that's sane, and that's designed to be easy and
> >>> comprehensive to use, not an interface built around what's currently
> >>> implemented.
> >>
> >> OK, so I think that would look like "xl fork-vm " with
> >> additional options for things like name, disk, vlan, or a completely
> >> new config, all of which are currently not implemented, + an
> >> additional option to not launch QEMU at all, which would be the only
> >> one currently working. Also keeping the separate "xl fork-launch-dm"
> >> as is. Is that what we are talking about?
> > 
> > I think fork-launch-vm should just be an option of fork-vm (ie:
> > --launch-dm-only or some such). I don't think there's a reason to have
> > a separate top-level command to just launch the device model.
> 
> So first of all, Tamas -- do you actually need to exec xl here?  Would
> it make sense for these to start out simply as libxl functions that are
> called by your system?
> 
> I actually disagree that we want a single command to do all of these.
> If we did want `exec xl` to be one of the supported interfaces, I think
> it would break down something like this:
> 
> `xl fork-domain`: Only forks the domain.
> `xl fork-launch-dm`: (or attach-dm?): Start up and attach the
> devicemodel to the domain
> 
> Then `xl fork` (or maybe `xl fork-vm`) would be something implemented in
> the future that would fork the entire domain.

I don't have a strong opinion on whether we should have a bunch of
fork-* commands or a single one. My preference would be for a single
one because I think other commands can be implemented as options.

What I would like to prevent is ending up with something like
fork-domain and fork-vm commands, which look like aliases, and can
lead to confusion.

Roger.


Re: [Xen-devel] [PATCH v2 00/20] VM forking

2020-01-08 Thread Roger Pau Monné
On Wed, Jan 08, 2020 at 08:32:22AM -0700, Tamas K Lengyel wrote:
> On Wed, Jan 8, 2020 at 8:08 AM Roger Pau Monné  wrote:
> >
> > On Tue, Dec 31, 2019 at 09:36:01AM -0700, Tamas K Lengyel wrote:
> > > On Tue, Dec 31, 2019 at 9:08 AM Tamas K Lengyel  
> > > wrote:
> > > >
> > > > On Tue, Dec 31, 2019 at 8:11 AM Roger Pau Monné  
> > > > wrote:
> > > > >
> > > > > On Tue, Dec 31, 2019 at 08:00:17AM -0700, Tamas K Lengyel wrote:
> > > > > > On Tue, Dec 31, 2019 at 3:40 AM Roger Pau Monné 
> > > > > >  wrote:
> > > > > > >
> > > > > > > On Mon, Dec 30, 2019 at 05:37:38PM -0700, Tamas K Lengyel wrote:
> > > > > > > > On Mon, Dec 30, 2019 at 5:20 PM Julien Grall 
> > > > > > > >  wrote:
> > > > > > > > >
> > > > > > > > > Hi,
> > > > > > > > >
> > > > > > > > > On Mon, 30 Dec 2019, 20:49 Tamas K Lengyel, 
> > > > > > > > >  wrote:
> > > > > > > > >>
> > > > > > > > >> On Mon, Dec 30, 2019 at 11:43 AM Julien Grall 
> > > > > > > > >>  wrote:
> > > > > > > > >> But keep in mind that the "fork-vm" command even with this 
> > > > > > > > >> update
> > > > > > > > >> would still not produce for you a "fully functional" VM on 
> > > > > > > > >> its own.
> > > > > > > > >> The user still has to produce a new VM config file, create 
> > > > > > > > >> the new
> > > > > > > > >> disk, save the QEMU state, etc.
> > > > > > >
> > > > > > > IMO the default behavior of the fork command should be to leave 
> > > > > > > the
> > > > > > > original VM paused, so that you can continue using the same disk 
> > > > > > > and
> > > > > > > network config in the fork and you won't need to pass a new config
> > > > > > > file.
> > > > > > >
> > > > > > > As Julien already said, maybe I wasn't clear in my previous 
> > > > > > > replies:
> > > > > > > I'm not asking you to implement all this, it's fine if the
> > > > > > > implementation of the fork-vm xl command requires you to pass 
> > > > > > > certain
> > > > > > > options, and that the default behavior is not implemented.
> > > > > > >
> > > > > > > We need an interface that's sane, and that's designed to be easy 
> > > > > > > and
> > > > > > > comprehensive to use, not an interface built around what's 
> > > > > > > currently
> > > > > > > implemented.
> > > > > >
> > > > > > OK, so I think that would look like "xl fork-vm " with
> > > > > > additional options for things like name, disk, vlan, or a completely
> > > > > > new config, all of which are currently not implemented, + an
> > > > > > additional option to not launch QEMU at all, which would be the only
> > > > > > one currently working. Also keeping the separate "xl fork-launch-dm"
> > > > > > as is. Is that what we are talking about?
> > > > >
> > > > > I think fork-launch-vm should just be an option of fork-vm (ie:
> > > > > --launch-dm-only or some such). I don't think there's a reason to have
> > > > > a separate top-level command to just launch the device model.
> > > >
> > > > It's just that the fork-launch-dm needs the domid of the fork, while
> > > > the fork-vm needs the parent's domid. But I guess we can interpret the
> > > > "domid" required input differently depending on which sub-option is
> > > > specified for the command. Let's see how it pans out.
> > >
> > > How does the following look for the interface?
> > >
> > > { "fork-vm",
> > >   _fork_vm, 0, 1,
> > >   "Fork a domain from the running parent domid",
> > >   "[options] ",
> > >   "-h   Print this help.\n"
> > >   "-N Assign name to VM fork.\n"
> > >   "-D Assign disk to VM fork.\n"
> > >   "-B  > >   "-V Assign vlan to VM fork.\n"
> >
> > IMO I think the name of fork is the only useful option. Being able to
> > assign disks or bridges from the command line seems quite complicated.
> > What about VMs with multiple disks? Or VMs with multiple nics on
> > different bridges?
> >
> > I think it's easier for both the implementation and the user to just
> > use a config file in that case.
> 
> I agree but it sounded to me you guys wanted to have a "complete"
> interface even if it's unimplemented. This is what a complete
> interface would look to me.

I would add those options afterwards if there's a need for them. I was
mainly concerned about introducing a top level command (ie: fork-vm)
that would require calling other commands in order to get a functional
fork. I'm not so concerned about having all the possible options
listed now, as long as the default behavior of fork-vm is something
sane that produces a working fork, even if not fully implemented at
this stage.

> >
> > >   "-C   Use config file for VM fork.\n"
> > >   "-Q   Use qemu save file for VM fork.\n"
> > >   "--launch-dm Launch device model (QEMU) for VM 
> > > fork.\n"
> > >   "--fork-reset Reset VM fork.\n"
> > >   "-p   Do not unpause VMs after fork."
> >
> > I think the default 

Re: [Xen-devel] [PATCH v5 1/6] arm/arm64/xen: hypercall.h add includes guards

2020-01-08 Thread Pavel Tatashin
On Mon, Jan 6, 2020 at 12:19 PM Stefano Stabellini
 wrote:
>
> On Thu, 2 Jan 2020, Pavel Tatashin wrote:
> > The arm and arm64 versions of hypercall.h are missing the include
> > guards. This is needed because C inlines for privcmd_call are going to
> > be added to the files.
> >
> > Signed-off-by: Pavel Tatashin 
> > Reviewed-by: Julien Grall 
>
> Acked-by: Stefano Stabellini 


Thank you,
Pasha


Re: [Xen-devel] [PATCH v3 1/5] x86/hyperv: setup hypercall page

2020-01-08 Thread Andrew Cooper
On 08/01/2020 17:43, Wei Liu wrote:
> On Tue, Jan 07, 2020 at 03:42:14PM +, Wei Liu wrote:
>> On Sun, Jan 05, 2020 at 09:57:56PM +, Andrew Cooper wrote:
>> [...]
> The locked bit is probably a good idea, but one aspect missing here is
> the check to see whether the hypercall page is already enabled, which I
> expect is for a kexec crash scenario.
>
> However, the most important point is the one which describes the #GP
> properties of the guest trying to modify the page.  This can only be
> achieved with an EPT/NPT mapping lacking the W permission, which will
> shatter host superpages.   Therefore, putting it in .text is going to be
> rather poor, perf wise.
>
> I also note that Xen's implementation of the Viridian hypercall page
> doesn't conform to these properties, and wants fixing.  It is going to
> need a new kind of identification for the page (probably a new p2m type)
> which injects #GP if we ever see an EPT_VIOLATION/NPT_FAULT against it.
>
> As for suggestions here, I'm struggling to find any memory map details
> exposed in the Viridian interface, and therefore which gfn is best to
> choose.  I have a sinking feeling that the answer is ACPI...
 TLFS only says "go find one suitable page yourself" without further
 hints.

 Since we're still quite far away from a functioning system, finding a
 most suitable page isn't my top priority at this point. If there is a
 simple way to extrapolate suitable information from ACPI, that would be
 great. If it requires writing a set of functionalities, than that will
 need to wait till later.
>>> To cope with the "one is already established and it is already locked"
>>> case, the only option is to have a fixmap entry which can be set
>>> dynamically.  The problem is that the fixmap region is marked NX and 64G
>>> away from .text.
>>>
>>> Possibly the least bad option is to have some build-time space (so 0 or
>>> 4k depending on CONFIG_HYPERV) between the per-cpu stubs and
>>> XEN_VIRT_END, which operates like the fixmap, but ends up as X/RO mappings.
>>>
>> OK. This is probably not too difficult. 
>>
> I had a closer look at this today and want to make sure I understand
> what you had in mind.
>
> Suppose we set aside some space in linker script. Using the following
> will give you a WAX section. I still need to add CONFIG_GUEST around it,
> but you get the idea.
>
>
> diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
> index 111edb5360..a7af321139 100644
> --- a/xen/arch/x86/xen.lds.S
> +++ b/xen/arch/x86/xen.lds.S
> @@ -305,6 +305,15 @@ SECTIONS
> . = ALIGN(POINTER_ALIGN);
> __bss_end = .;
>} :text
> +
> +  . = ALIGN(SECTION_ALIGN);
> +  DECL_SECTION(.text.hypercall_page) {
> +   . = ALIGN(PAGE_SIZE);
> +   __hypercall_page_start = .;
> +   . = . + PAGE_SIZE;
> +   __hypercall_page_end = .;
> +  } :text=0x9090
> +
>_end = . ;
>  
>. = ALIGN(SECTION_ALIGN);
>
>
> And then? Use map_pages_to_xen(..., PAGE_HYPERVISOR_RX) to point that
> address to (MAXPHYSADDR-PAGE_SIZE)?

Ah no.  This still puts the hypercall page (or at least, space for it)
in the Xen image itself, which is something we are trying to avoid.

What I meant was to basically copy(/extend) the existing fixmap
implementation, calling it fixmap_x() (or something better), and put
FIXMAP_X_SIZE as an additional gap in the calculation
alloc_stub_page().  Even the current fixmap code has an ifdef example
for CONFIG_XEN_GUEST.

You'd then end up with something like
__set_fixmap_x(FIXMAP_X_HYPERV_HYPERCALL_PAGE, mfn) which uses _RX in
the underlying call to map_pages_to_xen()

~Andrew


[Xen-devel] Updating https://wiki.xenproject.org/wiki/Outreach_Program_Projects

2020-01-08 Thread Lars Kurth
Hi all,

the deadline for GSoC mentoring orgs is approaching again and I think there is 
a good chance we might get in (usually we get in every 3 years and the last 
time we got in in 2020). We do however need to get 
https://wiki.xenproject.org/wiki/Outreach_Program_Projects into a decent state 
PRIOR to the application deadline around Feb 5th

And this year the list is potentially in a worse state than usual, at 
least in terms of e-mail addresses that may be wrong, etc. 

To make things a little easier, look out for your name below

@George: is the project below still applicable - I saw quite a lot of activity 
around this indicating that maybe the project is done or should be changed
https://wiki.xenproject.org/wiki/Outreach_Program_Projects#golang_bindings_for_libxl
@George: another one against you
https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Add_Centos_Virt_SIG_Xen_packages_test_to_the_CentOS_CI_loop

@Paul: This is against your Citrix address - would you still support this 
project from within AWS? There was also some work from postgrads as far as I 
recall
https://wiki.xenproject.org/wiki/Outreach_Program_Projects#KDD_.28Windows_Debugger_Stub.29_enhancements
 

@Stefano, @Julien: the 5 projects below are against you - are these still valid?
@Julien: these are against your Arm address
https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Xen_Hypervisor
- 
https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Xen_on_ARM:_Trap_.26_sanitize_ID_registers_.28ID_PFR0.2C_ID_DFR0.2C_etc.29
- 
https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Xen_on_ARM.2C_dom0less:_configurable_memory_layout_for_guests
- https://wiki.xenproject.org/wiki/Outreach_Program_Projects#ARMv8.1_atomics
- 
https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Xen_on_ARM:_dynamic_virtual_memory_layout
- 
https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Xen_on_ARM:_Performance_Counters_Virtualization

@Simon, @Felipe: the 4 projects below are against you - are these still valid? 
Or have they been implemented?
https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Unikraft
- 
https://wiki.xenproject.org/wiki/Outreach_Program_Projects#New_Platform_Support
- 
https://wiki.xenproject.org/wiki/Outreach_Program_Projects#FreeBSD.27s_Network_Stack_Port
- https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Go_Language_Support
- 
https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Enhanced_Profiling_and_Tracing_Support

@Roger: is this still valid?
https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Add_more_FreeBSD_testing_to_osstest

Regards
Lars




Re: [Xen-devel] [PATCH v3 1/5] x86/hyperv: setup hypercall page

2020-01-08 Thread Wei Liu
On Tue, Jan 07, 2020 at 03:42:14PM +, Wei Liu wrote:
> On Sun, Jan 05, 2020 at 09:57:56PM +, Andrew Cooper wrote:
> [...]
> > >
> > >> The locked bit is probably a good idea, but one aspect missing here is
> > >> the check to see whether the hypercall page is already enabled, which I
> > >> expect is for a kexec crash scenario.
> > >>
> > >> However, the most important point is the one which describes the #GP
> > >> properties of the guest trying to modify the page.  This can only be
> > >> achieved with an EPT/NPT mapping lacking the W permission, which will
> > >> shatter host superpages.   Therefore, putting it in .text is going to be
> > >> rather poor, perf wise.
> > >>
> > >> I also note that Xen's implementation of the Viridian hypercall page
> > >> doesn't conform to these properties, and wants fixing.  It is going to
> > >> need a new kind of identification for the page (probably a new p2m type)
> > >> which injects #GP if we ever see an EPT_VIOLATION/NPT_FAULT against it.
> > >>
> > >> As for suggestions here, I'm struggling to find any memory map details
> > >> exposed in the Viridian interface, and therefore which gfn is best to
> > >> choose.  I have a sinking feeling that the answer is ACPI...
> > > TLFS only says "go find one suitable page yourself" without further
> > > hints.
> > >
> > > Since we're still quite far away from a functioning system, finding a
> > > most suitable page isn't my top priority at this point. If there is a
> > > simple way to extrapolate suitable information from ACPI, that would be
> > > great. If it requires writing a set of functionalities, then that will
> > > need to wait till later.
> > 
> > To cope with the "one is already established and it is already locked"
> > case, the only option is to have a fixmap entry which can be set
> > dynamically.  The problem is that the fixmap region is marked NX and 64G
> > away from .text.
> > 
> > Possibly the least bad option is to have some build-time space (so 0 or
> > 4k depending on CONFIG_HYPERV) between the per-cpu stubs and
> > XEN_VIRT_END, which operates like the fixmap, but ends up as X/RO mappings.
> > 
> 
> OK. This is probably not too difficult. 
> 

I had a closer look at this today and want to make sure I understand
what you had in mind.

Suppose we set aside some space in linker script. Using the following
will give you a WAX section. I still need to add CONFIG_GUEST around it,
but you get the idea.


diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index 111edb5360..a7af321139 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -305,6 +305,15 @@ SECTIONS
. = ALIGN(POINTER_ALIGN);
__bss_end = .;
   } :text
+
+  . = ALIGN(SECTION_ALIGN);
+  DECL_SECTION(.text.hypercall_page) {
+   . = ALIGN(PAGE_SIZE);
+   __hypercall_page_start = .;
+   . = . + PAGE_SIZE;
+   __hypercall_page_end = .;
+  } :text=0x9090
+
   _end = . ;
 
   . = ALIGN(SECTION_ALIGN);


And then? Use map_pages_to_xen(..., PAGE_HYPERVISOR_RX) to point that
address to (MAXPHYSADDR-PAGE_SIZE)?

Wei.


Re: [Xen-devel] [PATCH 1/2] x86/hvm: improve performance of HVMOP_flush_tlbs

2020-01-08 Thread Roger Pau Monné
On Fri, Jan 03, 2020 at 04:17:14PM +0100, Jan Beulich wrote:
> On 24.12.2019 14:26, Roger Pau Monne wrote:
> > There's no need to call paging_update_cr3 unless CR3 trapping is
> > enabled, and that's only the case when using shadow paging or when
> > requested for introspection purposes, otherwise there's no need to
> > pause all the vCPUs of the domain in order to perform the flush.
> > 
> > Check whether CR3 trapping is currently in use in order to decide
> > whether the vCPUs should be paused, otherwise just perform the flush.
> 
> First of all - with the commit introducing the pausing not saying
> anything on the "why", you must have gained some understanding
> there. Could you share this?

hap_update_cr3 does a "v->arch.hvm.hw_cr[3] = v->arch.hvm.guest_cr[3]"
unconditionally, and such access would be racy if the vCPU is running
and also modifying cr3 at the same time AFAICT.

Just pausing each vCPU before calling paging_update_cr3 should be fine
and would have a smaller performance penalty.

> I can't see why this was needed, and
> sh_update_cr3() also doesn't look to have any respective ASSERT()
> or alike. I'm having even more trouble seeing why in HAP mode the
> pausing would be needed.
>
> As a result I wonder whether, rather than determining whether
> pausing is needed inside the function, this shouldn't be a flag
> in struct paging_mode.
>
> Next I seriously doubt introspection hooks should be called here.
> Introspection should be about guest actions, and us calling
> paging_update_cr3() is an implementation detail of Xen, not
> something the guest controls. Even more, there not being any CR3
> change here I wonder whether the call by the hooks to
> hvm_update_guest_cr3() couldn't be suppressed altogether in this
> case. Quite possibly in the shadow case there could be more
> steps that aren't really needed, so perhaps a separate hook might
> be on order.

Right, I guess just having a hook that does a flush would be enough.
Let me try to propose something slightly better.

Thanks, Roger.


Re: [Xen-devel] [PATCH v2] x86/mem_sharing: Fix RANDCONFIG build

2020-01-08 Thread Tamas K Lengyel
On Wed, Jan 8, 2020 at 10:24 AM Andrew Cooper  wrote:
>
> Travis reports: https://travis-ci.org/andyhhp/xen/jobs/633751811
>
>   mem_sharing.c:361:13: error: 'rmap_has_entries' defined but not used 
> [-Werror=unused-function]
>static bool rmap_has_entries(const struct page_info *page)
>^
>   cc1: all warnings being treated as errors
>
> This happens in a release build (disables MEM_SHARING_AUDIT) when
> CONFIG_MEM_SHARING is enabled.
>
> Expand both trivial helpers into their single callsite.
>
> Signed-off-by: Andrew Cooper 

Thanks!

Acked-by: Tamas K Lengyel 


Re: [Xen-devel] [PATCH v2 00/20] VM forking

2020-01-08 Thread Tamas K Lengyel
> >> I actually disagree that we want a single command to do all of these.
> >> If we did want `exec xl` to be one of the supported interfaces, I think
> >> it would break down something like this:
> >>
> >> `xl fork-domain`: Only forks the domain.
> >> `xl fork-launch-dm`: (or attach-dm?): Start up and attach the
> >> devicemodel to the domain
> >>
> >> Then `xl fork` (or maybe `xl fork-vm`) would be something implemented in
> >> the future that would fork the entire domain.
> >
> > I really don't have a strong opinion about this either way. I can see
> > it working either way. Having them all bundled under a single
> > top-level command doesn't pollute the help text when someone is just
> > looking at what xl can do in general. Makes that command a lot more
> > complex for sure but I don't think it's too bad.
>
> One thing I don't like about having a single command is that since
> you're not planning on implementing the end-to-end "vm fork" command,
> then when running the base "fork-vm" command, you'll have to print an
> error message that says "This command is not available in its
> completeness; you'll have to implement your own via fork-vm --domain,
> fork-vm --save-dm, and fork-vm --launch-dm."
>
> Which we could do, but seem a bit strange. :-)

Yea, it's not a single step to get to a fully functional fork but it's close:
1. pause parent vm
2. generate qemu_save_file
3. xl fork-vm -C config -Q qemu_save_file 

For the second fork - provided it has its own config file ready to go
- it is enough to just run step 3. Technically we could integrate all
three steps into one, and then the user would only have to generate
the new config file. But I found this setup to be "good enough"
already.

Tamas


[Xen-devel] [RFC PATCH 0/3] Live update boot memory management

2020-01-08 Thread David Woodhouse
When doing a live update, Xen needs to be very careful not to scribble
on pages which contain guest memory or state information for the
domains which are being preserved.

The information about which pages are in use is contained in the live
update state passed from the previous Xen — which is mostly just a
guest-transparent live migration data stream, except that it points to
the page tables in place in memory while traditional live migration
obviously copies the pages separately.

Our initial implementation actually prepended a list of 'in-use' ranges
to the live update state, and made the boot allocator treat them the
same as 'bad pages'. That worked well enough for initial development
but wouldn't scale to a live production system, mainly because the boot
allocator has a limit of 512 memory ranges that it can keep track of,
and a real system would end up more fragmented than that.

My other concern with that approach is that it required two passes over
the domain-owned pages. We have to do a later pass *anyway*, as we set
up ownership in the frametable for each page — and that has to happen
after we've managed to allocate a 'struct domain' for each page_info to
point to. If we want to keep the pause time due to a live update down
to a bare minimum, doing two passes over the full set of domain pages
isn't my favourite strategy.

So we've settled on a simpler approach — reserve a contiguous region
of physical memory which *won't* be used for domain pages. Let the boot
allocator see *only* that region of memory, and plug the rest of the
memory in later only after doing a full pass of the live update state.

This means that we have to ensure the reserved region is large enough,
but ultimately we had that problem either way — even if we were
processing the actual free ranges, if the page_info grew and we didn't
have enough contiguous space for the new frametable we were hosed
anyway.

So the straw man patch ends up being really simple, as a seed for
bikeshedding. Just take a 'liveupdate=' region on the command line,
which kexec(8) can find from the running Xen. The initial Xen needs to
ensure that it *won't* allocate any pages from that range which will
subsequently need to be preserved across live update, which isn't done
yet. We just need to make sure that any page which might be given to
share_xen_page_with_guest() is allocated appropriately.

The part which actually hands over the live update state isn't included
yet, so this really does just *defer* the addition of the memory until
a little bit later in __start_xen(). Actually taking ranges out of it
will come later.


David Woodhouse (3):
  x86/setup: Don't skip 2MiB underneath relocated Xen image
  x86/boot: Reserve live update boot memory
  Add KEXEC_RANGE_MA_LIVEUPDATE

 xen/arch/x86/machine_kexec.c |  15 --
 xen/arch/x86/setup.c | 122 +++
 xen/include/public/kexec.h   |   1 +
 3 files changed, 124 insertions(+), 14 deletions(-)



[Xen-devel] [RFC PATCH 1/3] x86/setup: Don't skip 2MiB underneath relocated Xen image

2020-01-08 Thread David Woodhouse
From: David Woodhouse 

Set 'e' correctly to reflect the location that Xen is actually relocated
to from its default 2MiB location, rather than 2MiB below that.

Signed-off-by: David Woodhouse 
---
 xen/arch/x86/setup.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 501f3f5e4b..47e065e5fe 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1077,9 +1077,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 unsigned long pte_update_limit;
 
 /* Select relocation address. */
-e = end - reloc_size;
-xen_phys_start = e;
-bootsym(trampoline_xen_phys_start) = e;
+xen_phys_start = end - reloc_size;
+e = xen_phys_start + XEN_IMG_OFFSET;
+bootsym(trampoline_xen_phys_start) = xen_phys_start;
 
 /*
   * No PTEs pointing above this address are candidates for relocation.
@@ -1096,7 +1096,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
  * data until after we have switched to the relocated pagetables!
  */
 barrier();
-move_memory(e + XEN_IMG_OFFSET, XEN_IMG_OFFSET, _end - _start, 1);
+move_memory(e, XEN_IMG_OFFSET, _end - _start, 1);
 
 /* Walk initial pagetables, relocating page directory entries. */
 pl4e = __va(__pa(idle_pg_table));
-- 
2.21.0



[Xen-devel] [RFC PATCH 3/3] Add KEXEC_RANGE_MA_LIVEUPDATE

2020-01-08 Thread David Woodhouse
From: David Woodhouse 

This allows kexec userspace to tell the next Xen where the range is,
on its command line.
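To illustrate the intended userspace flow (this is an assumption about the eventual kexec tooling, not code from this series): query the live update range from the running Xen and turn it into the liveupdate= argument for the next Xen. query_liveupdate_range() below is a hypothetical stub; real code would ask the hypervisor, e.g. via xc_kexec_get_range() with the new range type:

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical stub: pretend the hypervisor reported this range.
 * Real code would query KEXEC_RANGE_MA_LIVEUPDATE instead of
 * returning made-up example values.
 */
static int query_liveupdate_range(uint64_t *start, uint64_t *size)
{
    *start = 0x40000000ULL;   /* example values only */
    *size  = 0x10000000ULL;   /* 256 MiB */
    return 0;
}

/* Turn the range into the size@start form parse_liveupdate() accepts. */
static int build_lu_arg(char *buf, size_t len)
{
    uint64_t start, size;
    int n;

    if (query_liveupdate_range(&start, &size))
        return -1;
    n = snprintf(buf, len, "liveupdate=0x%" PRIx64 "@0x%" PRIx64,
                 size, start);
    return (n < 0 || (size_t)n >= len) ? -1 : 0;
}
```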

Signed-off-by: David Woodhouse 
---
 xen/arch/x86/machine_kexec.c | 15 ---
 xen/arch/x86/setup.c |  2 +-
 xen/include/public/kexec.h   |  1 +
 3 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/machine_kexec.c b/xen/arch/x86/machine_kexec.c
index b70d5a6a86..f0c4617234 100644
--- a/xen/arch/x86/machine_kexec.c
+++ b/xen/arch/x86/machine_kexec.c
@@ -184,11 +184,20 @@ void machine_kexec(struct kexec_image *image)
 image->head, image->entry_maddr, reloc_flags);
 }
 
+extern unsigned long lu_bootmem_start, lu_bootmem_size;
+
 int machine_kexec_get(xen_kexec_range_t *range)
 {
-   if (range->range != KEXEC_RANGE_MA_XEN)
-   return -EINVAL;
-   return machine_kexec_get_xen(range);
+switch (range->range) {
+case KEXEC_RANGE_MA_XEN:
+return machine_kexec_get_xen(range);
+case KEXEC_RANGE_MA_LIVEUPDATE:
+range->start = lu_bootmem_start;
+range->size = lu_bootmem_size;
+return 0;
+default:
+return -EINVAL;
+}
 }
 
 void arch_crash_save_vmcoreinfo(void)
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 650d70c1fc..11c1ba8e91 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -678,7 +678,7 @@ static unsigned int __init copy_bios_e820(struct e820entry *map, unsigned int li
 return n;
 }
 
-static unsigned long lu_bootmem_start, lu_bootmem_size, lu_data;
+unsigned long lu_bootmem_start, lu_bootmem_size, lu_data;
 
 static int __init parse_liveupdate(const char *str)
 {
diff --git a/xen/include/public/kexec.h b/xen/include/public/kexec.h
index 3f2a118381..298381af8d 100644
--- a/xen/include/public/kexec.h
+++ b/xen/include/public/kexec.h
@@ -150,6 +150,7 @@ typedef struct xen_kexec_load_v1 {
 #define KEXEC_RANGE_MA_EFI_MEMMAP 5 /* machine address and size of
  * of the EFI Memory Map */
#define KEXEC_RANGE_MA_VMCOREINFO 6 /* machine address and size of vmcoreinfo */
+#define KEXEC_RANGE_MA_LIVEUPDATE 7 /* Boot mem for live update */
 
 /*
  * Find the address and size of certain memory areas
-- 
2.21.0



[Xen-devel] [RFC PATCH 2/3] x86/boot: Reserve live update boot memory

2020-01-08 Thread David Woodhouse
From: David Woodhouse 

For live update to work, it will need a region of memory that can be
given to the boot allocator while it parses the state information from
the previous Xen and works out which of the other pages of memory it
can consume.

Reserve that like the crashdump region, and accept it on the command
line. Use only that region for early boot, and register the remaining
RAM (all of it for now, until the real live update happens) later.
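As a toy model of this two-phase hand-over (illustrative only — give_range() loosely mirrors what init_boot_pages() does for Xen's real boot allocator, and the bump allocation is a gross simplification): the allocator first sees only the reserved region, early allocations come from there, and the rest of RAM only becomes usable once it is donated after the live update pass:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_RANGES 8

/* Free ranges the toy allocator currently knows about. */
static struct { uint64_t start, end; } freelist[MAX_RANGES];
static int nr_ranges;

/* Loosely analogous to init_boot_pages(): donate [ps, pe) to the allocator. */
static void give_range(uint64_t ps, uint64_t pe)
{
    assert(nr_ranges < MAX_RANGES && ps < pe);
    freelist[nr_ranges].start = ps;
    freelist[nr_ranges].end = pe;
    nr_ranges++;
}

/* Trivial bump allocation from the first range with enough space. */
static uint64_t alloc_bytes(uint64_t bytes)
{
    for (int i = 0; i < nr_ranges; i++)
        if (freelist[i].end - freelist[i].start >= bytes) {
            uint64_t p = freelist[i].start;
            freelist[i].start += bytes;
            return p;
        }
    return 0;   /* nothing large enough donated yet */
}
```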

Signed-off-by: David Woodhouse 
---
 xen/arch/x86/setup.c | 114 ---
 1 file changed, 107 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 47e065e5fe..650d70c1fc 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -678,6 +678,41 @@ static unsigned int __init copy_bios_e820(struct e820entry *map, unsigned int li
 return n;
 }
 
+static unsigned long lu_bootmem_start, lu_bootmem_size, lu_data;
+
+static int __init parse_liveupdate(const char *str)
+{
+const char *cur;
+lu_bootmem_size = parse_size_and_unit(cur = str, &str);
+if (!lu_bootmem_size || cur == str)
+return -EINVAL;
+
+if (!*str) {
+printk("Live update size 0x%lx\n", lu_bootmem_size);
+return 0;
+}
+if (*str != '@')
+return -EINVAL;
+lu_bootmem_start = parse_size_and_unit(cur = str + 1, &str);
+if (!lu_bootmem_start || cur == str)
+return -EINVAL;
+
+printk("Live update area 0x%lx-0x%lx (0x%lx)\n", lu_bootmem_start,
+   lu_bootmem_start + lu_bootmem_size, lu_bootmem_size);
+
+if (!*str)
+return 0;
+if (*str != ':')
+return -EINVAL;
+lu_data = simple_strtoull(cur = str + 1, &str, 0);
+if (!lu_data || cur == str)
+return -EINVAL;
+
+printk("Live update data at 0x%lx\n", lu_data);
+return 0;
+}
+custom_param("liveupdate", parse_liveupdate);
+
 void __init noreturn __start_xen(unsigned long mbi_p)
 {
 char *memmap_type = NULL;
@@ -687,7 +722,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 module_t *mod;
 unsigned long nr_pages, raw_max_page, modules_headroom, module_map[1];
 int i, j, e820_warn = 0, bytes = 0;
-bool acpi_boot_table_init_done = false, relocated = false;
+bool acpi_boot_table_init_done = false, relocated = false, lu_reserved = false;
 int ret;
 struct ns16550_defaults ns16550 = {
 .data_bits = 8,
@@ -980,6 +1015,22 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 set_kexec_crash_area_size((u64)nr_pages << PAGE_SHIFT);
 kexec_reserve_area(&boot_e820);
 
+if ( lu_bootmem_start )
+{
+/* XX: Check it's in usable memory first */
+reserve_e820_ram(&boot_e820, lu_bootmem_start, lu_bootmem_start + lu_bootmem_size);
+
+/* Since it will already be out of the e820 map by the time the first
+ * loop over physical memory runs, map it manually here. */
+set_pdx_range(lu_bootmem_start >> PAGE_SHIFT,
+  (lu_bootmem_start + lu_bootmem_size) >> PAGE_SHIFT);
+map_pages_to_xen((unsigned long)__va(lu_bootmem_start),
+ maddr_to_mfn(lu_bootmem_start),
+ PFN_DOWN(lu_bootmem_size), PAGE_HYPERVISOR);
+
+lu_reserved = true;
+}
+
 initial_images = mod;
 nr_initial_images = mbi->mods_count;
 
@@ -1204,6 +1255,16 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 printk("New Xen image base address: %#lx\n", xen_phys_start);
 }
 
+/* Is the region suitable for the live update bootmem region? */
+if ( lu_bootmem_size && !lu_bootmem_start && e < limit )
+{
+end = consider_modules(s, e, lu_bootmem_size, mod, mbi->mods_count + relocated, -1);
+if ( end )
+{
+e = lu_bootmem_start = end - lu_bootmem_size;
+}
+}
+
 /* Is the region suitable for relocating the multiboot modules? */
 for ( j = mbi->mods_count - 1; j >= 0; j-- )
 {
@@ -1267,6 +1328,15 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 if ( !xen_phys_start )
 panic("Not enough memory to relocate Xen\n");
 
+if ( lu_bootmem_start )
+{
+if ( !lu_reserved )
+reserve_e820_ram(&boot_e820, lu_bootmem_start, lu_bootmem_start + lu_bootmem_size);
+printk("LU bootmem: 0x%lx - 0x%lx\n", lu_bootmem_start, lu_bootmem_start + lu_bootmem_size);
+init_boot_pages(lu_bootmem_start, lu_bootmem_start + lu_bootmem_size);
+lu_reserved = true;
+}
+
 /* This needs to remain in sync with xen_in_range(). */
 reserve_e820_ram(&boot_e820, __pa(_stext), __pa(__2M_rwdata_end));
 
@@ -1278,8 +1348,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 xenheap_max_mfn(PFN_DOWN(highmem_start - 1));
 
 /*
- * Walk every RAM region and map it in its entirety (on x86/64, at least)
- * and notify it to the boot allocator.
+ * Walk every 

[Xen-devel] [PATCH v2] x86/mem_sharing: Fix RANDCONFIG build

2020-01-08 Thread Andrew Cooper
Travis reports: https://travis-ci.org/andyhhp/xen/jobs/633751811

  mem_sharing.c:361:13: error: 'rmap_has_entries' defined but not used [-Werror=unused-function]
   static bool rmap_has_entries(const struct page_info *page)
   ^
  cc1: all warnings being treated as errors

This happens in a release build (disables MEM_SHARING_AUDIT) when
CONFIG_MEM_SHARING is enabled.

Expand both trivial helpers into their single callsite.

Signed-off-by: Andrew Cooper 
---
CC: Tamas K Lengyel 

v2:
 * Expand, rather than mark as __maybe_unused
---
 xen/arch/x86/mm/mem_sharing.c | 16 ++--
 1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index ddf1f0f9f9..64dd3689df 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -351,18 +351,6 @@ static gfn_info_t *rmap_retrieve(uint16_t domain_id, unsigned long gfn,
 return NULL;
 }
 
-/* Returns true if the rmap has only one entry. O(1) complexity. */
-static bool rmap_has_one_entry(const struct page_info *page)
-{
-return rmap_count(page) == 1;
-}
-
-/* Returns true if the rmap has any entries. O(1) complexity. */
-static bool rmap_has_entries(const struct page_info *page)
-{
-return rmap_count(page) != 0;
-}
-
 /*
  * The iterator hides the details of how the rmap is implemented. This
  * involves splitting the list_for_each_safe macro into two steps.
@@ -531,7 +519,7 @@ static int audit(void)
 }
 
 /* Check we have a list */
-if ( (!pg->sharing) || !rmap_has_entries(pg) )
+if ( (!pg->sharing) || rmap_count(pg) == 0 )
 {
 MEM_SHARING_DEBUG("mfn %lx shared, but empty gfn list!\n",
   mfn_x(mfn));
@@ -1220,7 +1208,7 @@ int __mem_sharing_unshare_page(struct domain *d,
   * Do the accounting first. If anything fails below, we have bigger
   * fish to fry. First, remove the gfn from the list.
  */
-last_gfn = rmap_has_one_entry(page);
+last_gfn = rmap_count(page) == 1;
 if ( last_gfn )
 {
 /*
-- 
2.11.0



[Xen-devel] [xen-unstable-smoke test] 145806: tolerable all pass

2020-01-08 Thread osstest service owner
flight 145806 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/145806/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  3840e98f3e72b7b92071089a042cd7cf5be72732
baseline version:
 xen  4dde27b6e0a0b0dcb8fdfc7580fbd9c976aa103f

Last test of basis   145752  2020-01-07 18:00:34 Z    0 days
Testing same since   145806  2020-01-08 15:00:58 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  George Dunlap 
  Jan Beulich 
  Juergen Gross 
  Marek Marczykowski-Górecki 
  Wei Liu 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

ssh: Could not resolve hostname xenbits.xen.org: Temporary failure in name 
resolution
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

commit 3840e98f3e72b7b92071089a042cd7cf5be72732
Author: Jan Beulich 
Date:   Wed Jan 8 15:04:36 2020 +0100

libxl: don't needlessly report "highmem" in use

Due to the unconditional updating of dom->highmem_end in
libxl__domain_device_construct_rdm() I've observed on a 2Gb HVM guest
with a passed through device (without overly large BARs, and with no RDM
ranges at all)

(d2) RAM in high memory; setting high_mem resource base to 1
...
(d2) E820 table:
(d2)  [00]: : - :000a: RAM
(d2)  HOLE: :000a - :000d
(d2)  [01]: :000d - :0010: RESERVED
(d2)  [02]: :0010 - :7f80: RAM
(d2)  HOLE: :7f80 - :fc00
(d2)  [03]: :fc00 - 0001:: RESERVED
(d2)  [04]: 0001: - 0001:: RAM

both of which aren't really appropriate in this case. Arrange for this
to not happen.

Signed-off-by: Jan Beulich 
Acked-by: Wei Liu 

commit fe4df51ff776c8e543879ed552ace34d217e048d
Author: Jan Beulich 
Date:   Wed Jan 8 15:03:58 2020 +0100

x86/mm: re-order a few conditionals

is_{hvm,pv}_*() can be expensive now, so where possible evaluate cheaper
conditions first.

Signed-off-by: Jan Beulich 
Acked-by: Andrew Cooper 

commit a4cde0266d4287650ec62d8f850e4f84359e5e4f
Author: Jan Beulich 
Date:   Wed Jan 8 15:03:19 2020 +0100

x86/mm: rename and tidy create_pae_xen_mappings()

After dad74b0f9e ("i386: fix handling of Xen entries in final L2 page
table") and the removal of 32-bit support the function doesn't modify
state anymore, and hence its name has been misleading. Change its name,
constify parameters and a local variable, and make it return bool.

Also drop the call to it from mod_l3_entry(): The function explicitly
disallows 32-bit domains to modify slot 3. This way we also won't
re-check slot 3 when a slot other than slot 3 changes. Doing so has
needlessly disallowed making some L2 table recursively link back to an
L2 used in some L3's 3rd slot, as we check for the type ref count to be
1. (Note that allowing dynamic changes of L3 entries in the way we do is
bogus anyway, as that's not how L3s behave in the native and EPT cases:
They get re-evaluated only upon CR3 reloads. NPT is 

Re: [Xen-devel] [PATCH v2 00/20] VM forking

2020-01-08 Thread George Dunlap
On 1/8/20 5:06 PM, Tamas K Lengyel wrote:
> On Wed, Jan 8, 2020 at 9:34 AM George Dunlap  wrote:
>>
>> On 12/31/19 3:11 PM, Roger Pau Monné wrote:
>>> On Tue, Dec 31, 2019 at 08:00:17AM -0700, Tamas K Lengyel wrote:
 On Tue, Dec 31, 2019 at 3:40 AM Roger Pau Monné  
 wrote:
>
> On Mon, Dec 30, 2019 at 05:37:38PM -0700, Tamas K Lengyel wrote:
>> On Mon, Dec 30, 2019 at 5:20 PM Julien Grall  
>> wrote:
>>>
>>> Hi,
>>>
>>> On Mon, 30 Dec 2019, 20:49 Tamas K Lengyel,  wrote:

 On Mon, Dec 30, 2019 at 11:43 AM Julien Grall  wrote:
 But keep in mind that the "fork-vm" command even with this update
 would still not produce for you a "fully functional" VM on its own.
 The user still has to produce a new VM config file, create the new
 disk, save the QEMU state, etc.
>
> IMO the default behavior of the fork command should be to leave the
> original VM paused, so that you can continue using the same disk and
> network config in the fork and you won't need to pass a new config
> file.
>
> As Julien already said, maybe I wasn't clear in my previous replies:
> I'm not asking you to implement all this, it's fine if the
> implementation of the fork-vm xl command requires you to pass certain
> options, and that the default behavior is not implemented.
>
> We need an interface that's sane, and that's designed to be easy and
> comprehensive to use, not an interface built around what's currently
> implemented.

 OK, so I think that would look like "xl fork-vm " with
 additional options for things like name, disk, vlan, or a completely
 new config, all of which are currently not implemented, + an
 additional option to not launch QEMU at all, which would be the only
 one currently working. Also keeping the separate "xl fork-launch-dm"
 as is. Is that what we are talking about?
>>>
>>> I think fork-launch-vm should just be an option of fork-vm (ie:
>>> --launch-dm-only or some such). I don't think there's a reason to have
>>> a separate top-level command to just launch the device model.
>>
>> So first of all, Tamas -- do you actually need to exec xl here?  Would
>> it make sense for these to start out simply as libxl functions that are
>> called by your system?
> 
> For my current tools & tests - no. I don't start QEMU for the forks at
> all. So at this point I don't even need libxl. But I can foresee that
> at some point in the future it may become necessary in case we want
> allow the forked VM to touch emulated devices. Wiring QEMU up and
> making the system functional as a whole I found it easier to do it via
> xl. There is just way too many moving components involved to do that
> any other way.
> 
>>
>> I actually disagree that we want a single command to do all of these.
>> If we did want `exec xl` to be one of the supported interfaces, I think
>> it would break down something like this:
>>
>> `xl fork-domain`: Only forks the domain.
>> `xl fork-launch-dm`: (or attach-dm?): Start up and attach the
>> devicemodel to the domain
>>
>> Then `xl fork` (or maybe `xl fork-vm`) would be something implemented in
>> the future that would fork the entire domain.
> 
> I really don't have a strong opinion about this either way. I can see
> it working either way. Having them all bundled under a single
> top-level comment doesn't pollute the help text when someone is just
> looking at what xl can do in general. Makes that command a lot more
> complex for sure but I don't think it's too bad.

One thing I don't like about having a single command is that since
you're not planning on implementing the end-to-end "vm fork" command,
then when running the base "fork-vm" command, you'll have to print an
error message that says "This command is not available in its
completeness; you'll have to implement your own via fork-vm --domain,
fork-vm --save-dm, and fork-vm --launch-dm."

Which we could do, but seem a bit strange. :-)

>> Then have `xl fork-launch-dm` either take a filename (saved from the
>> previous step) or a parent domain id (in which case it would arrange to
>> save the file itself).
>>
>> Although in fact, is there any reason we couldn't store the parent
>> domain ID in xenstore, so that `xl fork-launch-dm` could find the parent
>> by itself?  (Although that, of course, is something that could be added
>> later if it's not something Tamas needs.)
> 
> Could be done. But I store ID internally in my tools anyway since I
> need it to initialize VMI. So having it in Xenstore is not required
> for me. In fact I would prefer to leave Xenstore out of these
> operations as much as possible cause it would slow things down. In my
> latest tests forking is down to 0.0007s, having to touch Xenstore for
> each would slow things down considerably.

Right, that makes sense.

 -George


[Xen-devel] [PATCH v4 17/18] x86/mem_sharing: reset a fork

2020-01-08 Thread Tamas K Lengyel
Implement a hypercall that allows a fork to shed all memory that got allocated
for it during its execution and reload its vCPU context from the parent VM.
This allows the forked VM to be reset into the same state the parent VM is in,
faster than creating a new fork would be. Measurements show about a 2x
speedup during normal fuzzing operations. Performance may vary depending on how
much memory got allocated for the forked VM. If it has been completely
deduplicated from the parent VM then creating a new fork would likely be more
performant.
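A minimal model of the reset semantics (illustrative only — the names and the toy struct are mine, not the hypervisor's): every page the fork populated since forking is dropped, and the vCPU state is re-copied from the parent, so subsequent accesses fault back to the parent's shared copies:

```c
#include <assert.h>
#include <string.h>

#define NR_PAGES 16

/* Toy VM: which gfns the fork has private copies of, plus vCPU state. */
struct vm {
    int populated[NR_PAGES];   /* 1 = fork owns a private copy */
    int vcpu_state;            /* stand-in for registers, TSC, params */
};

static void fork_reset(const struct vm *parent, struct vm *fork)
{
    /* Drop every private copy; accesses fault back to the parent's pages. */
    memset(fork->populated, 0, sizeof(fork->populated));
    /* Roughly the role of hvm_copy_context_and_params() + fork_tsc(). */
    fork->vcpu_state = parent->vcpu_state;
}
```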

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/mem_sharing.c | 79 +++
 xen/include/public/memory.h   |  1 +
 2 files changed, 80 insertions(+)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index d544801681..aaa678da14 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1607,6 +1607,62 @@ static int mem_sharing_fork(struct domain *d, struct domain *cd)
 return 0;
 }
 
+/*
+ * The fork reset operation is intended to be used on short-lived forks only.
+ * There is no hypercall continuation operation implemented for this reason.
+ * For forks that obtain a larger memory footprint it is likely going to be
+ * more performant to create a new fork instead of resetting an existing one.
+ *
+ * TODO: In case this hypercall would become useful on forks with larger memory
+ * footprints the hypercall continuation should be implemented.
+ */
+static int mem_sharing_fork_reset(struct domain *d, struct domain *cd)
+{
+int rc;
+struct p2m_domain* p2m = p2m_get_hostp2m(cd);
+struct page_info *page, *tmp;
+
+if ( !d->controller_pause_count &&
+ (rc = domain_pause_by_systemcontroller(d)) )
+return rc;
+
+page_list_for_each_safe(page, tmp, &cd->page_list)
+{
+p2m_type_t p2mt;
+p2m_access_t p2ma;
+gfn_t gfn;
+mfn_t mfn = page_to_mfn(page);
+
+if ( !mfn_valid(mfn) )
+continue;
+
+gfn = mfn_to_gfn(cd, mfn);
+mfn = __get_gfn_type_access(p2m, gfn_x(gfn), &p2mt, &p2ma,
+0, NULL, false);
+
+if ( !p2m_is_ram(p2mt) || p2m_is_shared(p2mt) )
+continue;
+
+/* take an extra reference */
+if ( !get_page(page, cd) )
+continue;
+
+rc = p2m->set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
+p2m_invalid, p2m_access_rwx, -1);
+ASSERT(!rc);
+
+put_page_alloc_ref(page);
+put_page(page);
+}
+
+if ( (rc = hvm_copy_context_and_params(d, cd)) )
+return rc;
+
+fork_tsc(d, cd);
+
+return 0;
+}
+
 int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 {
 int rc;
@@ -1909,6 +1965,29 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 break;
 }
 
+case XENMEM_sharing_op_fork_reset:
+{
+struct domain *pd;
+
+rc = -EINVAL;
+if ( mso.u.fork._pad[0] || mso.u.fork._pad[1] ||
+ mso.u.fork._pad[2] )
+goto out;
+
+rc = -ENOSYS;
+if ( !d->parent )
+goto out;
+
+rc = rcu_lock_live_remote_domain_by_id(d->parent->domain_id, &pd);
+if ( rc )
+goto out;
+
+rc = mem_sharing_fork_reset(pd, d);
+
+rcu_unlock_domain(pd);
+break;
+}
+
 default:
 rc = -ENOSYS;
 break;
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 90a3f4498e..e3d063e22e 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -483,6 +483,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_access_op_t);
 #define XENMEM_sharing_op_audit 7
 #define XENMEM_sharing_op_range_share   8
 #define XENMEM_sharing_op_fork  9
+#define XENMEM_sharing_op_fork_reset10
 
 #define XENMEM_SHARING_OP_S_HANDLE_INVALID  (-10)
 #define XENMEM_SHARING_OP_C_HANDLE_INVALID  (-9)
-- 
2.20.1



[Xen-devel] [PATCH v4 16/18] xen/mem_access: Use __get_gfn_type_access in set_mem_access

2020-01-08 Thread Tamas K Lengyel
Use __get_gfn_type_access instead of p2m->get_entry to trigger page-forking
when the mem_access permission is being set on a page that has not yet been
copied over from the parent.

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/mem_access.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
index 320b9fe621..9caf08a5b2 100644
--- a/xen/arch/x86/mm/mem_access.c
+++ b/xen/arch/x86/mm/mem_access.c
@@ -303,11 +303,10 @@ static int set_mem_access(struct domain *d, struct p2m_domain *p2m,
 ASSERT(!ap2m);
 #endif
 {
-mfn_t mfn;
 p2m_access_t _a;
 p2m_type_t t;
-
-mfn = p2m->get_entry(p2m, gfn, &t, &_a, 0, NULL, NULL);
+mfn_t mfn = __get_gfn_type_access(p2m, gfn_x(gfn), &t, &_a,
+  P2M_ALLOC, NULL, false);
 rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, t, a, -1);
 }
 
-- 
2.20.1



[Xen-devel] [PATCH v4 14/18] x86/mem_sharing: check page type count earlier

2020-01-08 Thread Tamas K Lengyel
Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/mem_sharing.c | 13 ++---
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index baa3e35ded..ecbe40545d 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -652,19 +652,18 @@ static int page_make_sharable(struct domain *d,
 return -EBUSY;
 }
 
-/* Change page type and count atomically */
-if ( !get_page_and_type(page, d, PGT_shared_page) )
+/* Check if page is already typed and bail early if it is */
+if ( (page->u.inuse.type_info & PGT_count_mask) != 1 )
 {
 spin_unlock(&d->page_alloc_lock);
-return -EINVAL;
+return -EEXIST;
 }
 
-/* Check it wasn't already sharable and undo if it was */
-if ( (page->u.inuse.type_info & PGT_count_mask) != 1 )
+/* Change page type and count atomically */
+if ( !get_page_and_type(page, d, PGT_shared_page) )
 {
 spin_unlock(&d->page_alloc_lock);
-put_page_and_type(page);
-return -EEXIST;
+return -EINVAL;
 }
 
 /*
-- 
2.20.1



[Xen-devel] [PATCH v4 13/18] x86/mem_sharing: Skip xen heap pages in memshr nominate

2020-01-08 Thread Tamas K Lengyel
Trying to share these would fail anyway; better to skip them early.

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/mem_sharing.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index b8a9228ecf..baa3e35ded 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -852,6 +852,11 @@ static int nominate_page(struct domain *d, gfn_t gfn,
 if ( !p2m_is_sharable(p2mt) )
 goto out;
 
+/* Skip xen heap pages */
+page = mfn_to_page(mfn);
+if ( !page || is_xen_heap_page(page) )
+goto out;
+
 /* Check if there are mem_access/remapped altp2m entries for this page */
 if ( altp2m_active(d) )
 {
@@ -882,7 +887,6 @@ static int nominate_page(struct domain *d, gfn_t gfn,
 }
 
 /* Try to convert the mfn to the sharable type */
-page = mfn_to_page(mfn);
 ret = page_make_sharable(d, page, expected_refcnt);
 if ( ret )
 goto out;
-- 
2.20.1



[Xen-devel] [PATCH v4 11/18] x86/mem_sharing: ASSERT that p2m_set_entry succeeds

2020-01-08 Thread Tamas K Lengyel
Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/mem_sharing.c | 42 +--
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 93e7605900..3f36cd6bbc 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1117,11 +1117,19 @@ int add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
 goto err_unlock;
 }
 
+/*
+ * Must succeed, we just read the entry and hold the p2m lock
+ * via get_two_gfns.
+ */
 ret = p2m_set_entry(p2m, _gfn(cgfn), smfn, PAGE_ORDER_4K,
 p2m_ram_shared, a);
+ASSERT(!ret);
 
-/* Tempted to turn this into an assert */
-if ( ret )
+/*
+ * There is a chance we're plugging a hole where a paged out
+ * page was.
+ */
+if ( p2m_is_paging(cmfn_type) && (cmfn_type != p2m_ram_paging_out) )
 {
 mem_sharing_gfn_destroy(spage, cd, gfn_info);
 put_page_and_type(spage);
@@ -1129,29 +1137,21 @@ int add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
 else
 {
 /*
- * There is a chance we're plugging a hole where a paged out
- * page was.
+ * Further, there is a chance this was a valid page.
+ * Don't leak it.
  */
-if ( p2m_is_paging(cmfn_type) && (cmfn_type != p2m_ram_paging_out) )
+if ( mfn_valid(cmfn) )
 {
-atomic_dec(&cd->paged_pages);
-/*
- * Further, there is a chance this was a valid page.
- * Don't leak it.
- */
-if ( mfn_valid(cmfn) )
+struct page_info *cpage = mfn_to_page(cmfn);
+
+if ( !get_page(cpage, cd) )
 {
-struct page_info *cpage = mfn_to_page(cmfn);
-
-if ( !get_page(cpage, cd) )
-{
-domain_crash(cd);
-ret = -EOVERFLOW;
-goto err_unlock;
-}
-put_page_alloc_ref(cpage);
-put_page(cpage);
+domain_crash(cd);
+ret = -EOVERFLOW;
+goto err_unlock;
 }
+put_page_alloc_ref(cpage);
+put_page(cpage);
 }
 }
 
-- 
2.20.1



[Xen-devel] [PATCH v4 18/18] xen/tools: VM forking toolstack side

2020-01-08 Thread Tamas K Lengyel
Add the necessary bits to implement "xl fork-vm" commands. The command allows
the user to specify how to launch the device model, allowing for a late-launch
model in which the user can execute the fork without the device model and
decide to launch it only later.

Signed-off-by: Tamas K Lengyel 
---
v4: combine xl commands as suboptions to xl fork-vm
---
 docs/man/xl.1.pod.in  |  36 ++
 tools/libxc/include/xenctrl.h |  13 ++
 tools/libxc/xc_memshr.c   |  22 
 tools/libxl/libxl.h   |   7 +
 tools/libxl/libxl_create.c| 237 +++---
 tools/libxl/libxl_dm.c|   2 +-
 tools/libxl/libxl_dom.c   |  83 
 tools/libxl/libxl_internal.h  |   1 +
 tools/libxl/libxl_types.idl   |   1 +
 tools/xl/xl.h |   5 +
 tools/xl/xl_cmdtable.c|  12 ++
 tools/xl/xl_saverestore.c |  96 ++
 tools/xl/xl_vmcontrol.c   |   8 ++
 13 files changed, 419 insertions(+), 104 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index d4b5e8e362..22cc4149b0 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -694,6 +694,42 @@ Leave the domain paused after creating the snapshot.
 
 =back
 
+=item B [I] I
+
+Create a fork of a running VM. The domain will be paused after the operation
+and needs to remain paused while forks of it exist.
+
+B
+
+=over 4
+
+=item B<-p>
+
+Leave the fork paused after creating it.
+
+=item B<--launch-dm>
+
Specify whether the device model (QEMU) should be launched for the fork. Late
launch allows starting the device model for an already running fork.
+
+=item B<-C>
+
+The config file to use when launching the device model. Currently required when
+launching the device model.
+
+=item B<-Q>
+
+The qemu save file to use when launching the device model.  Currently required
+when launching the device model.
+
+=item B<--fork-reset>
+
Perform a reset operation of an already running fork. Note that resetting may
be less performant than creating a new fork, depending on how much memory the
fork has deduplicated during its runtime.
+
+=back
+
 =item B [I]
 
 Display the number of shared pages for a specified domain. If no domain is
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 75f191ae3a..ffb0bb9a42 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2221,6 +2221,19 @@ int xc_memshr_range_share(xc_interface *xch,
   uint64_t first_gfn,
   uint64_t last_gfn);
 
+int xc_memshr_fork(xc_interface *xch,
+   uint32_t source_domain,
+   uint32_t client_domain);
+
+/*
+ * Note: this function is only intended to be used on short-lived forks that
+ * haven't yet acquired a lot of memory. In case the fork has a lot of memory
+ * it is likely more performant to create a new fork with xc_memshr_fork.
+ *
+ * With VMs that have a lot of memory this call may block for a long time.
+ */
+int xc_memshr_fork_reset(xc_interface *xch, uint32_t forked_domain);
+
 /* Debug calls: return the number of pages referencing the shared frame backing
  * the input argument. Should be one or greater.
  *
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index 97e2e6a8d9..d0e4ee225b 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -239,6 +239,28 @@ int xc_memshr_debug_gref(xc_interface *xch,
 return xc_memshr_memop(xch, domid, &mso);
 }
 
+int xc_memshr_fork(xc_interface *xch, uint32_t pdomid, uint32_t domid)
+{
+xen_mem_sharing_op_t mso;
+
+memset(&mso, 0, sizeof(mso));
+
+mso.op = XENMEM_sharing_op_fork;
+mso.u.fork.parent_domain = pdomid;
+
+return xc_memshr_memop(xch, domid, &mso);
+}
+
+int xc_memshr_fork_reset(xc_interface *xch, uint32_t domid)
+{
+xen_mem_sharing_op_t mso;
+
+memset(&mso, 0, sizeof(mso));
+mso.op = XENMEM_sharing_op_fork_reset;
+
+return xc_memshr_memop(xch, domid, &mso);
+}
+
 int xc_memshr_audit(xc_interface *xch)
 {
 xen_mem_sharing_op_t mso;
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 54abb9db1f..75cb070587 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1536,6 +1536,13 @@ int libxl_domain_create_new(libxl_ctx *ctx, 
libxl_domain_config *d_config,
 const libxl_asyncop_how *ao_how,
 const libxl_asyncprogress_how *aop_console_how)
 LIBXL_EXTERNAL_CALLERS_ONLY;
+int libxl_domain_fork_vm(libxl_ctx *ctx, uint32_t pdomid, uint32_t *domid)
+ LIBXL_EXTERNAL_CALLERS_ONLY;
+int libxl_domain_fork_launch_dm(libxl_ctx *ctx, libxl_domain_config *d_config,
+uint32_t domid,
+const libxl_asyncprogress_how *aop_console_how)
+LIBXL_EXTERNAL_CALLERS_ONLY;
+int libxl_domain_fork_reset(libxl_ctx *ctx, uint32_t domid);
 int libxl_domain_create_restore(libxl_ctx *ctx, 

[Xen-devel] [PATCH v4 12/18] x86/mem_sharing: Enable mem_sharing on first memop

2020-01-08 Thread Tamas K Lengyel
It is wasteful to require separate hypercalls to enable sharing on both the
parent and the client domain during VM forking. To speed things up we enable
sharing on the first memop in case it wasn't already enabled.

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/mem_sharing.c | 36 +--
 1 file changed, 22 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 3f36cd6bbc..b8a9228ecf 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1412,6 +1412,24 @@ static int range_share(struct domain *d, struct domain 
*cd,
 return rc;
 }
 
+static inline int mem_sharing_control(struct domain *d, bool enable)
+{
+if ( enable )
+{
+if ( unlikely(!is_hvm_domain(d)) )
+return -ENOSYS;
+
+if ( unlikely(!hap_enabled(d)) )
+return -ENODEV;
+
+if ( unlikely(is_iommu_enabled(d)) )
+return -EXDEV;
+}
+
+d->arch.hvm.mem_sharing.enabled = enable;
+return 0;
+}
+
 int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 {
 int rc;
@@ -1433,10 +1451,8 @@ int 
mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 if ( rc )
 goto out;
 
-/* Only HAP is supported */
-rc = -ENODEV;
-if ( !mem_sharing_enabled(d) )
-goto out;
+if ( !mem_sharing_enabled(d) && (rc = mem_sharing_control(d, true)) )
+return rc;
 
 switch ( mso.op )
 {
@@ -1703,18 +1719,10 @@ int mem_sharing_domctl(struct domain *d, struct 
xen_domctl_mem_sharing_op *mec)
 {
 int rc;
 
-/* Only HAP is supported */
-if ( !hap_enabled(d) )
-return -ENODEV;
-
-switch ( mec->op )
+switch( mec->op )
 {
 case XEN_DOMCTL_MEM_SHARING_CONTROL:
-rc = 0;
-if ( unlikely(is_iommu_enabled(d) && mec->u.enable) )
-rc = -EXDEV;
-else
-d->arch.hvm.mem_sharing_enabled = mec->u.enable;
+rc = mem_sharing_control(d, mec->u.enable);
 break;
 
 default:
-- 
2.20.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 15/18] xen/mem_sharing: VM forking

2020-01-08 Thread Tamas K Lengyel
VM forking is the process of creating a domain with an empty memory space and a
parent domain specified from which to populate the memory when necessary. For
the new domain to be functional the VM state is copied over as part of the fork
operation (HVM params, hap allocation, etc).

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/hvm/hvm.c|   2 +-
 xen/arch/x86/mm/mem_sharing.c | 204 ++
 xen/arch/x86/mm/p2m.c |  11 +-
 xen/include/asm-x86/mem_sharing.h |  20 ++-
 xen/include/public/memory.h   |   5 +
 xen/include/xen/sched.h   |   1 +
 6 files changed, 239 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5d24ceb469..3241e2a5ac 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1909,7 +1909,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long 
gla,
 }
 #endif
 
-/* Spurious fault? PoD and log-dirty also take this path. */
+/* Spurious fault? PoD, log-dirty and VM forking also take this path. */
 if ( p2m_is_ram(p2mt) )
 {
 rc = 1;
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index ecbe40545d..d544801681 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -22,11 +22,13 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -36,6 +38,9 @@
 #include 
 #include 
 #include 
+#include 
+#include 
+#include 
 #include 
 
 #include "mm-locks.h"
@@ -1433,6 +1438,175 @@ static inline int mem_sharing_control(struct domain *d, 
bool enable)
 return 0;
 }
 
+/*
+ * Forking a page only gets called when the VM faults due to no entry being
+ * in the EPT for the access. Depending on the type of access we either
+ * populate the physmap with a shared entry for read-only access or
+ * fork the page if it's a write access.
+ *
+ * The client p2m is already locked so we only need to lock
+ * the parent's here.
+ */
+int mem_sharing_fork_page(struct domain *d, gfn_t gfn, bool unsharing)
+{
+int rc = -ENOENT;
+shr_handle_t handle;
+struct domain *parent;
+struct p2m_domain *p2m;
+unsigned long gfn_l = gfn_x(gfn);
+mfn_t mfn, new_mfn;
+p2m_type_t p2mt;
+struct page_info *page;
+
+if ( !mem_sharing_is_fork(d) )
+return -ENOENT;
+
+parent = d->parent;
+
+if ( !unsharing )
+{
+/* For read-only accesses we just add a shared entry to the physmap */
+while ( parent )
+{
+if ( !(rc = nominate_page(parent, gfn, 0, &handle)) )
+break;
+
+parent = parent->parent;
+}
+
+if ( !rc )
+{
+/* The client's p2m is already locked */
+struct p2m_domain *pp2m = p2m_get_hostp2m(parent);
+
+p2m_lock(pp2m);
+rc = add_to_physmap(parent, gfn_l, handle, d, gfn_l, false);
+p2m_unlock(pp2m);
+
+if ( !rc )
+return 0;
+}
+}
+
+/*
+ * If it's a write access (ie. unsharing) or if adding a shared entry to
+ * the physmap failed we'll fork the page directly.
+ */
+p2m = p2m_get_hostp2m(d);
+parent = d->parent;
+
+while ( parent )
+{
+mfn = get_gfn_query(parent, gfn_l, &p2mt);
+
+if ( mfn_valid(mfn) && p2m_is_any_ram(p2mt) )
+break;
+
+put_gfn(parent, gfn_l);
+parent = parent->parent;
+}
+
+if ( !parent )
+return -ENOENT;
+
+if ( !(page = alloc_domheap_page(d, 0)) )
+{
+put_gfn(parent, gfn_l);
+return -ENOMEM;
+}
+
+new_mfn = page_to_mfn(page);
+copy_domain_page(new_mfn, mfn);
+set_gpfn_from_mfn(mfn_x(new_mfn), gfn_l);
+
+put_gfn(parent, gfn_l);
+
+return p2m->set_entry(p2m, gfn, new_mfn, PAGE_ORDER_4K, p2m_ram_rw,
+  p2m->default_access, -1);
+}
+
+static int bring_up_vcpus(struct domain *cd, struct cpupool *cpupool)
+{
+int ret;
+unsigned int i;
+
+if ( (ret = cpupool_move_domain(cd, cpupool)) )
+return ret;
+
+for ( i = 0; i < cd->max_vcpus; i++ )
+{
+if ( cd->vcpu[i] )
+continue;
+
+if ( !vcpu_create(cd, i) )
+return -EINVAL;
+}
+
+domain_update_node_affinity(cd);
+return 0;
+}
+
+static int fork_hap_allocation(struct domain *d, struct domain *cd)
+{
+int rc;
+bool preempted;
+unsigned long mb = hap_get_allocation(d);
+
+if ( mb == hap_get_allocation(cd) )
+return 0;
+
+paging_lock(cd);
+rc = hap_set_allocation(cd, mb << (20 - PAGE_SHIFT), &preempted);
+paging_unlock(cd);
+
+if ( rc )
+return rc;
+
+if ( preempted )
+return -ERESTART;
+
+return 0;
+}
+
+static void fork_tsc(struct domain *d, struct domain *cd)
+{
+uint32_t tsc_mode;
+uint32_t gtsc_khz;
+uint32_t incarnation;
+uint64_t elapsed_nsec;
+
+

[Xen-devel] [PATCH v4 07/18] x86/mem_sharing: Use INVALID_MFN and p2m_is_shared in relinquish_shared_pages

2020-01-08 Thread Tamas K Lengyel
While using _mfn(0) is of no consequence during teardown, INVALID_MFN is the
correct value that should be used.

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/mem_sharing.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 3aa61c30e6..95e75ff298 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1326,7 +1326,7 @@ int relinquish_shared_pages(struct domain *d)
 break;
 
 mfn = p2m->get_entry(p2m, _gfn(gfn), &t, &a, 0, NULL, NULL);
-if ( mfn_valid(mfn) && t == p2m_ram_shared )
+if ( mfn_valid(mfn) && p2m_is_shared(t) )
 {
 /* Does not fail with ENOMEM given the DESTROY flag */
 BUG_ON(__mem_sharing_unshare_page(
@@ -1336,7 +1336,7 @@ int relinquish_shared_pages(struct domain *d)
  * unshare.  Must succeed: we just read the old entry and
  * we hold the p2m lock.
  */
-set_rc = p2m->set_entry(p2m, _gfn(gfn), _mfn(0), PAGE_ORDER_4K,
+set_rc = p2m->set_entry(p2m, _gfn(gfn), INVALID_MFN, PAGE_ORDER_4K,
 p2m_invalid, p2m_access_rwx, -1);
 ASSERT(!set_rc);
 count += 0x10;
-- 
2.20.1



[Xen-devel] [PATCH v4 06/18] x86/mem_sharing: define mem_sharing_domain to hold some scattered variables

2020-01-08 Thread Tamas K Lengyel
Create struct mem_sharing_domain under hvm_domain and move mem sharing
variables into it from p2m_domain and hvm_domain.

Expose the mem_sharing_enabled macro to be used consistently across Xen.

Remove some duplicate calls to mem_sharing_enabled in mem_sharing.c

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/mem_sharing.c | 10 --
 xen/drivers/passthrough/pci.c |  3 +--
 xen/include/asm-x86/hvm/domain.h  |  6 +-
 xen/include/asm-x86/mem_sharing.h | 16 
 xen/include/asm-x86/p2m.h |  4 
 5 files changed, 26 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index f6187403a0..3aa61c30e6 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -197,9 +197,6 @@ static shr_handle_t get_next_handle(void)
 return x + 1;
 }
 
-#define mem_sharing_enabled(d) \
-(is_hvm_domain(d) && (d)->arch.hvm.mem_sharing_enabled)
-
 static atomic_t nr_saved_mfns   = ATOMIC_INIT(0);
 static atomic_t nr_shared_mfns  = ATOMIC_INIT(0);
 
@@ -1309,6 +1306,7 @@ int __mem_sharing_unshare_page(struct domain *d,
 int relinquish_shared_pages(struct domain *d)
 {
 int rc = 0;
+struct mem_sharing_domain *msd = &d->arch.hvm.mem_sharing;
 struct p2m_domain *p2m = p2m_get_hostp2m(d);
 unsigned long gfn, count = 0;
 
@@ -1316,7 +1314,7 @@ int relinquish_shared_pages(struct domain *d)
 return 0;
 
 p2m_lock(p2m);
-for ( gfn = p2m->next_shared_gfn_to_relinquish;
+for ( gfn = msd->next_shared_gfn_to_relinquish;
   gfn <= p2m->max_mapped_pfn; gfn++ )
 {
 p2m_access_t a;
@@ -1351,7 +1349,7 @@ int relinquish_shared_pages(struct domain *d)
 {
 if ( hypercall_preempt_check() )
 {
-p2m->next_shared_gfn_to_relinquish = gfn + 1;
+msd->next_shared_gfn_to_relinquish = gfn + 1;
 rc = -ERESTART;
 break;
 }
@@ -1437,7 +1435,7 @@ int 
mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 
 /* Only HAP is supported */
 rc = -ENODEV;
-if ( !hap_enabled(d) || !d->arch.hvm.mem_sharing_enabled )
+if ( !mem_sharing_enabled(d) )
 goto out;
 
 switch ( mso.op )
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index c07a63981a..65d1d457ff 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1498,8 +1498,7 @@ static int assign_device(struct domain *d, u16 seg, u8 
bus, u8 devfn, u32 flag)
 /* Prevent device assign if mem paging or mem sharing have been 
  * enabled for this domain */
 if ( d != dom_io &&
- unlikely((is_hvm_domain(d) &&
-   d->arch.hvm.mem_sharing_enabled) ||
+ unlikely(mem_sharing_enabled(d) ||
   vm_event_check_ring(d->vm_event_paging) ||
   p2m_get_hostp2m(d)->global_logdirty) )
 return -EXDEV;
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index bcc5621797..8f70ba2b1a 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -29,6 +29,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -156,7 +157,6 @@ struct hvm_domain {
 
 struct viridian_domain *viridian;
 
-bool_t mem_sharing_enabled;
 bool_t qemu_mapcache_invalidate;
 bool_t is_s3_suspended;
 
@@ -192,6 +192,10 @@ struct hvm_domain {
 struct vmx_domain vmx;
 struct svm_domain svm;
 };
+
+#ifdef CONFIG_MEM_SHARING
+struct mem_sharing_domain mem_sharing;
+#endif
 };
 
 #endif /* __ASM_X86_HVM_DOMAIN_H__ */
diff --git a/xen/include/asm-x86/mem_sharing.h 
b/xen/include/asm-x86/mem_sharing.h
index cf7848709f..13114b6346 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -26,6 +26,20 @@
 
 #ifdef CONFIG_MEM_SHARING
 
+struct mem_sharing_domain
+{
+bool enabled;
+
+/*
+ * When releasing shared gfn's in a preemptible manner, recall where
+ * to resume the search.
+ */
+unsigned long next_shared_gfn_to_relinquish;
+};
+
+#define mem_sharing_enabled(d) \
+(hap_enabled(d) && (d)->arch.hvm.mem_sharing.enabled)
+
 /* Auditing of memory sharing code? */
 #ifndef NDEBUG
 #define MEM_SHARING_AUDIT 1
@@ -104,6 +118,8 @@ int relinquish_shared_pages(struct domain *d);
 
 #else
 
+#define mem_sharing_enabled(d) false
+
 static inline unsigned int mem_sharing_get_nr_saved_mfns(void)
 {
 return 0;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 7399c4a897..8defa90306 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -305,10 +305,6 @@ struct p2m_domain {
 unsigned long min_remapped_gfn;
 unsigned long max_remapped_gfn;
 
-/* When releasing shared gfn's in a preemptible manner, recall where
- * to resume the search */
-

[Xen-devel] [PATCH v4 04/18] x86/mem_sharing: drop flags from mem_sharing_unshare_page

2020-01-08 Thread Tamas K Lengyel
All callers pass 0 in.

Signed-off-by: Tamas K Lengyel 
Reviewed-by: Wei Liu 
---
 xen/arch/x86/hvm/hvm.c| 2 +-
 xen/arch/x86/mm/p2m.c | 5 ++---
 xen/common/memory.c   | 2 +-
 xen/include/asm-x86/mem_sharing.h | 8 +++-
 4 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 24f08d7043..38e9006c92 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1898,7 +1898,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long 
gla,
 if ( npfec.write_access && (p2mt == p2m_ram_shared) )
 {
 ASSERT(p2m_is_hostp2m(p2m));
-sharing_enomem = mem_sharing_unshare_page(currd, gfn, 0);
+sharing_enomem = mem_sharing_unshare_page(currd, gfn);
 rc = 1;
 goto out_put_gfn;
 }
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 3119269073..baea632acc 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -515,7 +515,7 @@ mfn_t __get_gfn_type_access(struct p2m_domain *p2m, 
unsigned long gfn_l,
  * Try to unshare. If we fail, communicate ENOMEM without
  * sleeping.
  */
-if ( mem_sharing_unshare_page(p2m->domain, gfn_l, 0) < 0 )
+if ( mem_sharing_unshare_page(p2m->domain, gfn_l) < 0 )
 mem_sharing_notify_enomem(p2m->domain, gfn_l, false);
 mfn = p2m->get_entry(p2m, gfn, t, a, q, page_order, NULL);
 }
@@ -896,8 +896,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t 
mfn,
 {
 /* Do an unshare to cleanly take care of all corner cases. */
 int rc;
-rc = mem_sharing_unshare_page(p2m->domain,
-  gfn_x(gfn_add(gfn, i)), 0);
+rc = mem_sharing_unshare_page(p2m->domain, gfn_x(gfn_add(gfn, i)));
 if ( rc )
 {
 p2m_unlock(p2m);
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 309e872edf..c7d2bac452 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -352,7 +352,7 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
  * might be the only one using this shared page, and we need to
  * trigger proper cleanup. Once done, this is like any other page.
  */
-rc = mem_sharing_unshare_page(d, gmfn, 0);
+rc = mem_sharing_unshare_page(d, gmfn);
 if ( rc )
 {
 mem_sharing_notify_enomem(d, gmfn, false);
diff --git a/xen/include/asm-x86/mem_sharing.h 
b/xen/include/asm-x86/mem_sharing.h
index af2a1038b5..cf7848709f 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -69,10 +69,9 @@ int __mem_sharing_unshare_page(struct domain *d,
uint16_t flags);
 
 static inline int mem_sharing_unshare_page(struct domain *d,
-   unsigned long gfn,
-   uint16_t flags)
+   unsigned long gfn)
 {
-int rc = __mem_sharing_unshare_page(d, gfn, flags);
+int rc = __mem_sharing_unshare_page(d, gfn, 0);
 BUG_ON(rc && (rc != -ENOMEM));
 return rc;
 }
@@ -115,8 +114,7 @@ static inline unsigned int 
mem_sharing_get_nr_shared_mfns(void)
 return 0;
 }
 
-static inline int mem_sharing_unshare_page(struct domain *d, unsigned long gfn,
-   uint16_t flags)
+static inline int mem_sharing_unshare_page(struct domain *d, unsigned long gfn)
 {
 ASSERT_UNREACHABLE();
 return -EOPNOTSUPP;
-- 
2.20.1



[Xen-devel] [PATCH v4 00/18] VM forking

2020-01-08 Thread Tamas K Lengyel
The following series implements VM forking for Intel HVM guests to allow for
the fast creation of identical VMs without the associated high startup costs
of booting or restoring the VM from a savefile.

JIRA issue: https://xenproject.atlassian.net/browse/XEN-89

The fork operation is implemented as part of the "xl fork-vm" command:
xl fork-vm -C <config_file_for_fork> -Q <qemu_save_file> <parent_domid>

By default a fully functional fork is created. The user is in charge however to
create the appropriate config file for the fork and to generate the QEMU save
file before the fork-vm call is made. The config file needs to give the
fork a new name at minimum but other settings may also require changes.

The interface also allows splitting the forking into two steps:
xl fork-vm --launch-dm no \
   -p <parent_domid>
xl fork-vm --launch-dm late \
   -C <config_file_for_fork> \
   -Q <qemu_save_file> \
   <fork_domid>

The split creation model is useful when the VM needs to be created as fast as
possible. The forked VM can be unpaused without the device model being launched
and then be monitored and accessed via VMI. Note however that without its
device model running the fork (depending on what is executing in the VM) is
bound to misbehave or even crash when it tries to access devices that would be
emulated by QEMU. We anticipate that for certain use-cases this is an
acceptable trade-off, for example when fuzzing code segments that don't access
such devices.

Launching the device model requires the QEMU Xen savefile to be generated
manually from the parent VM. This can be accomplished simply by connecting to
its QMP socket and issuing the "xen-save-devices-state" command. For example
using the standard tool socat these commands can be used to generate the file:
socat - UNIX-CONNECT:/var/run/xen/qmp-libxl-<parent_domid>
{ "execute": "qmp_capabilities" }
{ "execute": "xen-save-devices-state", \
"arguments": { "filename": "/path/to/save/qemu_state", \
"live": false} }

At runtime the forked VM starts running with an empty p2m which gets lazily
populated when the VM generates EPT faults, similar to how altp2m views are
populated. If the memory access is a read-only access, the p2m entry is
populated with a memory shared entry with its parent. For write memory accesses
or in case memory sharing wasn't possible (for example in case a reference is
held by a third party), a new page is allocated and the page contents are
copied over from the parent VM. Forks can be further forked if needed, thus
allowing for further memory savings.

A VM fork reset hypercall is also added that allows the fork to be reset to the
state it was just after a fork, also accessible via xl:
xl fork-vm --fork-reset -p <fork_domid>

This is an optimization for cases where the forks are very short-lived and run
without a device model, so resetting saves some time compared to creating a
brand new fork provided the fork has not acquired a lot of memory. If the fork
has a lot of memory deduplicated it is likely going to be faster to create a
new fork from scratch and asynchronously destroying the old one.

The series has been tested with both Linux and Windows VMs and functions as
expected. VM forking time has been measured to be 0.0007s, device model launch
to be around 1s depending largely on the number of devices being emulated. Fork
resets have been measured to be 0.0001s under the optimal circumstances.

Patches 1-2 implement changes to existing internal Xen APIs to make VM forking
possible.

Patches 3-14 are code cleanups and adjustments to the Xen memory sharing
subsystem with no functional changes.

Patch 15 adds the hypervisor-side code implementing VM forking.

Patch 16 is integration of mem_access with forked VMs.

Patch 17 implements the VM fork reset operation hypervisor side bits.

Patch 18 adds the toolstack-side code implementing VM forking and reset.

Tamas K Lengyel (18):
  x86/hvm: introduce hvm_copy_context_and_params
  xen/x86: Make hap_get_allocation accessible
  x86/mem_sharing: make get_two_gfns take locks conditionally
  x86/mem_sharing: drop flags from mem_sharing_unshare_page
  x86/mem_sharing: don't try to unshare twice during page fault
  x86/mem_sharing: define mem_sharing_domain to hold some scattered
variables
  x86/mem_sharing: Use INVALID_MFN and p2m_is_shared in
relinquish_shared_pages
  x86/mem_sharing: Make add_to_physmap static and shorten name
  x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool
  x86/mem_sharing: Replace MEM_SHARING_DEBUG with gdprintk
  x86/mem_sharing: ASSERT that p2m_set_entry succeeds
  x86/mem_sharing: Enable mem_sharing on first memop
  x86/mem_sharing: Skip xen heap pages in memshr nominate
  x86/mem_sharing: check page type count earlier
  xen/mem_sharing: VM forking
  xen/mem_access: Use __get_gfn_type_access in set_mem_access
  x86/mem_sharing: reset a fork
  xen/tools: VM forking toolstack side

 docs/man/xl.1.pod.in  |  36 +++
 tools/libxc/include/xenctrl.h |  13 

[Xen-devel] [PATCH v4 09/18] x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool

2020-01-08 Thread Tamas K Lengyel
MEM_SHARING_DESTROY_GFN is used on the 'flags' bitfield during unsharing.
However, the bitfield is not used for anything else, so just convert it to a
bool instead.

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/mem_sharing.c | 9 -
 xen/include/asm-x86/mem_sharing.h | 5 ++---
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 84b9f130b9..0435a7f803 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1182,7 +1182,7 @@ err_out:
  */
 int __mem_sharing_unshare_page(struct domain *d,
unsigned long gfn,
-   uint16_t flags)
+   bool destroy)
 {
 p2m_type_t p2mt;
 mfn_t mfn;
@@ -1238,7 +1238,7 @@ int __mem_sharing_unshare_page(struct domain *d,
  * If the GFN is getting destroyed drop the references to MFN
  * (possibly freeing the page), and exit early.
  */
-if ( flags & MEM_SHARING_DESTROY_GFN )
+if ( destroy )
 {
 if ( !last_gfn )
 mem_sharing_gfn_destroy(page, d, gfn_info);
@@ -1329,9 +1329,8 @@ int relinquish_shared_pages(struct domain *d)
 mfn = p2m->get_entry(p2m, _gfn(gfn), &t, &a, 0, NULL, NULL);
 if ( mfn_valid(mfn) && p2m_is_shared(t) )
 {
-/* Does not fail with ENOMEM given the DESTROY flag */
-BUG_ON(__mem_sharing_unshare_page(
-   d, gfn, MEM_SHARING_DESTROY_GFN));
+/* Does not fail with ENOMEM given "destroy" is set to true */
+BUG_ON(__mem_sharing_unshare_page(d, gfn, true));
 /*
  * Clear out the p2m entry so no one else may try to
  * unshare.  Must succeed: we just read the old entry and
diff --git a/xen/include/asm-x86/mem_sharing.h 
b/xen/include/asm-x86/mem_sharing.h
index 13114b6346..c915fd973f 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -76,16 +76,15 @@ struct page_sharing_info
 unsigned int mem_sharing_get_nr_saved_mfns(void);
 unsigned int mem_sharing_get_nr_shared_mfns(void);
 
-#define MEM_SHARING_DESTROY_GFN   (1<<1)
 /* Only fails with -ENOMEM. Enforce it with a BUG_ON wrapper. */
 int __mem_sharing_unshare_page(struct domain *d,
unsigned long gfn,
-   uint16_t flags);
+   bool destroy);
 
 static inline int mem_sharing_unshare_page(struct domain *d,
unsigned long gfn)
 {
-int rc = __mem_sharing_unshare_page(d, gfn, 0);
+int rc = __mem_sharing_unshare_page(d, gfn, false);
 BUG_ON(rc && (rc != -ENOMEM));
 return rc;
 }
-- 
2.20.1



[Xen-devel] [PATCH v4 10/18] x86/mem_sharing: Replace MEM_SHARING_DEBUG with gdprintk

2020-01-08 Thread Tamas K Lengyel
Using XENLOG_ERR level since this is only used in debug paths (i.e. it's
expected the user already has loglvl=all set).

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/mem_sharing.c | 86 +--
 1 file changed, 43 insertions(+), 43 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 0435a7f803..93e7605900 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -49,9 +49,6 @@ typedef struct pg_lock_data {
 
 static DEFINE_PER_CPU(pg_lock_data_t, __pld);
 
-#define MEM_SHARING_DEBUG(_f, _a...)  \
-debugtrace_printk("mem_sharing_debug: %s(): " _f, __func__, ##_a)
-
 /* Reverse map defines */
 #define RMAP_HASHTAB_ORDER  0
 #define RMAP_HASHTAB_SIZE   \
@@ -494,19 +491,19 @@ static int audit(void)
 /* If we can't lock it, it's definitely not a shared page */
 if ( !mem_sharing_page_lock(pg) )
 {
-MEM_SHARING_DEBUG(
-"mfn %lx in audit list, but cannot be locked (%lx)!\n",
-mfn_x(mfn), pg->u.inuse.type_info);
-errors++;
-continue;
+gdprintk(XENLOG_ERR,
+ "mfn %lx in audit list, but cannot be locked (%lx)!\n",
+ mfn_x(mfn), pg->u.inuse.type_info);
+   errors++;
+   continue;
 }
 
 /* Check if the MFN has correct type, owner and handle. */
 if ( (pg->u.inuse.type_info & PGT_type_mask) != PGT_shared_page )
 {
-MEM_SHARING_DEBUG(
-"mfn %lx in audit list, but not PGT_shared_page (%lx)!\n",
-mfn_x(mfn), pg->u.inuse.type_info & PGT_type_mask);
+gdprintk(XENLOG_ERR,
+ "mfn %lx in audit list, but not PGT_shared_page (%lx)!\n",
+ mfn_x(mfn), pg->u.inuse.type_info & PGT_type_mask);
 errors++;
 continue;
 }
@@ -514,24 +511,24 @@ static int audit(void)
 /* Check the page owner. */
 if ( page_get_owner(pg) != dom_cow )
 {
-MEM_SHARING_DEBUG("mfn %lx shared, but wrong owner %pd!\n",
-  mfn_x(mfn), page_get_owner(pg));
-errors++;
+   gdprintk(XENLOG_ERR, "mfn %lx shared, but wrong owner (%hu)!\n",
+mfn_x(mfn), page_get_owner(pg)->domain_id);
+   errors++;
 }
 
 /* Check the m2p entry */
 if ( !SHARED_M2P(get_gpfn_from_mfn(mfn_x(mfn))) )
 {
-MEM_SHARING_DEBUG("mfn %lx shared, but wrong m2p entry (%lx)!\n",
-  mfn_x(mfn), get_gpfn_from_mfn(mfn_x(mfn)));
-errors++;
+   gdprintk(XENLOG_ERR, "mfn %lx shared, but wrong m2p entry (%lx)!\n",
+mfn_x(mfn), get_gpfn_from_mfn(mfn_x(mfn)));
+   errors++;
 }
 
 /* Check we have a list */
 if ( (!pg->sharing) || !rmap_has_entries(pg) )
 {
-MEM_SHARING_DEBUG("mfn %lx shared, but empty gfn list!\n",
-  mfn_x(mfn));
+gdprintk(XENLOG_ERR, "mfn %lx shared, but empty gfn list!\n",
+ mfn_x(mfn));
 errors++;
 continue;
 }
@@ -550,24 +547,26 @@ static int audit(void)
 d = get_domain_by_id(g->domain);
 if ( d == NULL )
 {
-MEM_SHARING_DEBUG("Unknown dom: %hu, for PFN=%lx, MFN=%lx\n",
-  g->domain, g->gfn, mfn_x(mfn));
+gdprintk(XENLOG_ERR,
+ "Unknown dom: %hu, for PFN=%lx, MFN=%lx\n",
+ g->domain, g->gfn, mfn_x(mfn));
 errors++;
 continue;
 }
 o_mfn = get_gfn_query_unlocked(d, g->gfn, &t);
 if ( !mfn_eq(o_mfn, mfn) )
 {
-MEM_SHARING_DEBUG("Incorrect P2M for d=%hu, PFN=%lx."
-  "Expecting MFN=%lx, got %lx\n",
-  g->domain, g->gfn, mfn_x(mfn), mfn_x(o_mfn));
+gdprintk(XENLOG_ERR, "Incorrect P2M for d=%hu, PFN=%lx."
+ "Expecting MFN=%lx, got %lx\n",
+ g->domain, g->gfn, mfn_x(mfn), mfn_x(o_mfn));
 errors++;
 }
 if ( t != p2m_ram_shared )
 {
-MEM_SHARING_DEBUG("Incorrect P2M type for d=%hu, PFN=%lx MFN=%lx."
-  "Expecting t=%d, got %d\n",
-  g->domain, g->gfn, mfn_x(mfn), p2m_ram_shared, t);
+gdprintk(XENLOG_ERR,
+ "Incorrect P2M type for d=%hu, PFN=%lx MFN=%lx."
+ "Expecting t=%d, got %d\n",
+ g->domain, g->gfn, mfn_x(mfn), p2m_ram_shared, t);
 errors++;
 }

[Xen-devel] [PATCH v4 08/18] x86/mem_sharing: Make add_to_physmap static and shorten name

2020-01-08 Thread Tamas K Lengyel
It's not being called from outside mem_sharing.c

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/mem_sharing.c | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 95e75ff298..84b9f130b9 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1069,8 +1069,9 @@ err_out:
 return ret;
 }
 
-int mem_sharing_add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
-   struct domain *cd, unsigned long cgfn, bool lock)
+static
+int add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
+   struct domain *cd, unsigned long cgfn, bool lock)
 {
 struct page_info *spage;
 int ret = -EINVAL;
@@ -1582,7 +1583,7 @@ int 
mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 sh  = mso.u.share.source_handle;
 cgfn= mso.u.share.client_gfn;
 
-rc = mem_sharing_add_to_physmap(d, sgfn, sh, cd, cgfn, true);
+rc = add_to_physmap(d, sgfn, sh, cd, cgfn, true);
 
 rcu_unlock_domain(cd);
 }
-- 
2.20.1



[Xen-devel] [PATCH v4 03/18] x86/mem_sharing: make get_two_gfns take locks conditionally

2020-01-08 Thread Tamas K Lengyel
During VM forking the client lock will already be taken.

Signed-off-by: Tamas K Lengyel 
Acked-by: Andrew Cooper 
---
 xen/arch/x86/mm/mem_sharing.c | 11 ++-
 xen/include/asm-x86/p2m.h | 10 +-
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index ddf1f0f9f9..f6187403a0 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -955,7 +955,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, 
shr_handle_t sh,
 unsigned long put_count = 0;
 
 get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
- cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
+ cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg, true);
 
 /*
  * This tricky business is to avoid two callers deadlocking if
@@ -1073,7 +1073,7 @@ err_out:
 }
 
 int mem_sharing_add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
-   struct domain *cd, unsigned long cgfn)
+   struct domain *cd, unsigned long cgfn, bool lock)
 {
 struct page_info *spage;
 int ret = -EINVAL;
@@ -1085,7 +1085,7 @@ int mem_sharing_add_to_physmap(struct domain *sd, 
unsigned long sgfn, shr_handle
 struct two_gfns tg;
 
 get_two_gfns(sd, _gfn(sgfn), &smfn_type, NULL, &smfn,
- cd, _gfn(cgfn), &cmfn_type, &a, &cmfn, 0, &tg);
+ cd, _gfn(cgfn), &cmfn_type, &a, &cmfn, 0, &tg, lock);
 
 /* Get the source shared page, check and lock */
 ret = XENMEM_SHARING_OP_S_HANDLE_INVALID;
@@ -1162,7 +1162,8 @@ int mem_sharing_add_to_physmap(struct domain *sd, 
unsigned long sgfn, shr_handle
 err_unlock:
 mem_sharing_page_unlock(spage);
 err_out:
-put_two_gfns(&tg);
+if ( lock )
+put_two_gfns(&tg);
 return ret;
 }
 
@@ -1583,7 +1584,7 @@ int 
mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 sh  = mso.u.share.source_handle;
 cgfn= mso.u.share.client_gfn;
 
-rc = mem_sharing_add_to_physmap(d, sgfn, sh, cd, cgfn);
+rc = mem_sharing_add_to_physmap(d, sgfn, sh, cd, cgfn, true);
 
 rcu_unlock_domain(cd);
 }
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 94285db1b4..7399c4a897 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -539,7 +539,7 @@ struct two_gfns {
 static inline void get_two_gfns(struct domain *rd, gfn_t rgfn,
 p2m_type_t *rt, p2m_access_t *ra, mfn_t *rmfn, struct domain *ld,
 gfn_t lgfn, p2m_type_t *lt, p2m_access_t *la, mfn_t *lmfn,
-p2m_query_t q, struct two_gfns *rval)
+p2m_query_t q, struct two_gfns *rval, bool lock)
 {
 mfn_t   *first_mfn, *second_mfn, scratch_mfn;
 p2m_access_t*first_a, *second_a, scratch_a;
@@ -569,10 +569,10 @@ do {\
 #undef assign_pointers
 
 /* Now do the gets */
-*first_mfn  = get_gfn_type_access(p2m_get_hostp2m(rval->first_domain),
-  gfn_x(rval->first_gfn), first_t, first_a, q, NULL);
-*second_mfn = get_gfn_type_access(p2m_get_hostp2m(rval->second_domain),
-  gfn_x(rval->second_gfn), second_t, second_a, q, NULL);
+*first_mfn  = __get_gfn_type_access(p2m_get_hostp2m(rval->first_domain),
+gfn_x(rval->first_gfn), first_t, first_a, q, NULL, lock);
+*second_mfn = __get_gfn_type_access(p2m_get_hostp2m(rval->second_domain),
+gfn_x(rval->second_gfn), second_t, second_a, q, NULL, lock);
 }
 
 static inline void put_two_gfns(struct two_gfns *arg)
-- 
2.20.1


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v4 05/18] x86/mem_sharing: don't try to unshare twice during page fault

2020-01-08 Thread Tamas K Lengyel
An unshare of the page was already attempted in get_gfn_type_access. If that
attempt didn't work, trying again is pointless. Don't try to send a vm_event
again either; simply check whether a ring is present.

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/hvm/hvm.c | 28 ++--
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 38e9006c92..5d24ceb469 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -38,6 +38,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1702,11 +1703,14 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
 struct domain *currd = curr->domain;
 struct p2m_domain *p2m, *hostp2m;
 int rc, fall_through = 0, paged = 0;
-int sharing_enomem = 0;
 vm_event_request_t *req_ptr = NULL;
 bool sync = false;
 unsigned int page_order;
 
+#ifdef CONFIG_MEM_SHARING
+bool sharing_enomem = false;
+#endif
+
 /* On Nested Virtualization, walk the guest page table.
  * If this succeeds, all is fine.
  * If this fails, inject a nested page fault into the guest.
@@ -1894,14 +1898,16 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
 if ( p2m_is_paged(p2mt) || (p2mt == p2m_ram_paging_out) )
 paged = 1;
 
-/* Mem sharing: unshare the page and try again */
-if ( npfec.write_access && (p2mt == p2m_ram_shared) )
+#ifdef CONFIG_MEM_SHARING
+/* Mem sharing: if still shared on write access then its enomem */
+if ( npfec.write_access && p2m_is_shared(p2mt) )
 {
 ASSERT(p2m_is_hostp2m(p2m));
-sharing_enomem = mem_sharing_unshare_page(currd, gfn);
+sharing_enomem = true;
 rc = 1;
 goto out_put_gfn;
 }
+#endif
 
 /* Spurious fault? PoD and log-dirty also take this path. */
 if ( p2m_is_ram(p2mt) )
@@ -1955,19 +1961,21 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
  */
 if ( paged )
 p2m_mem_paging_populate(currd, gfn);
+
+#ifdef CONFIG_MEM_SHARING
 if ( sharing_enomem )
 {
-int rv;
-
-if ( (rv = mem_sharing_notify_enomem(currd, gfn, true)) < 0 )
+if ( !vm_event_check_ring(currd->vm_event_share) )
 {
-gdprintk(XENLOG_ERR, "Domain %hu attempt to unshare "
- "gfn %lx, ENOMEM and no helper (rc %d)\n",
- currd->domain_id, gfn, rv);
+gprintk(XENLOG_ERR, "Domain %pd attempt to unshare "
+"gfn %lx, ENOMEM and no helper\n",
+currd, gfn);
 /* Crash the domain */
 rc = 0;
 }
 }
+#endif
+
 if ( req_ptr )
 {
 if ( monitor_traps(curr, sync, req_ptr) < 0 )
-- 
2.20.1



[Xen-devel] [PATCH v4 02/18] xen/x86: Make hap_get_allocation accessible

2020-01-08 Thread Tamas K Lengyel
During VM forking we'll copy the parent domain's parameters to the client,
including the HAP shadow memory setting that is used for storing the domain's
EPT. We'll copy this in the hypervisor instead of doing it during toolstack launch
to allow the domain to start executing and unsharing memory before (or
even completely without) the toolstack.

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/mm/hap/hap.c | 3 +--
 xen/include/asm-x86/hap.h | 1 +
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d93f3451c..c7c7ff6e99 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -321,8 +321,7 @@ static void hap_free_p2m_page(struct domain *d, struct page_info *pg)
 }
 
 /* Return the size of the pool, rounded up to the nearest MB */
-static unsigned int
-hap_get_allocation(struct domain *d)
+unsigned int hap_get_allocation(struct domain *d)
 {
 unsigned int pg = d->arch.paging.hap.total_pages
 + d->arch.paging.hap.p2m_pages;
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index b94bfb4ed0..1bf07e49fe 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -45,6 +45,7 @@ int   hap_track_dirty_vram(struct domain *d,
 
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
 int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted);
+unsigned int hap_get_allocation(struct domain *d);
 
 #endif /* XEN_HAP_H */
 
-- 
2.20.1



[Xen-devel] [PATCH v4 01/18] x86/hvm: introduce hvm_copy_context_and_params

2020-01-08 Thread Tamas K Lengyel
Currently the hvm parameters are only accessible via the HVMOP hypercalls. In
this patch we introduce a new function that can copy both the hvm context and
parameters directly into a target domain.

Signed-off-by: Tamas K Lengyel 
---
 xen/arch/x86/hvm/hvm.c| 241 +-
 xen/include/asm-x86/hvm/hvm.h |   2 +
 2 files changed, 152 insertions(+), 91 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4723f5d09c..24f08d7043 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4067,16 +4067,17 @@ static int hvmop_set_evtchn_upcall_vector(
 }
 
 static int hvm_allow_set_param(struct domain *d,
-   const struct xen_hvm_param *a)
+   uint32_t index,
+   uint64_t new_value)
 {
-uint64_t value = d->arch.hvm.params[a->index];
+uint64_t value = d->arch.hvm.params[index];
 int rc;
 
 rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_set_param);
 if ( rc )
 return rc;
 
-switch ( a->index )
+switch ( index )
 {
 /* The following parameters can be set by the guest. */
 case HVM_PARAM_CALLBACK_IRQ:
@@ -4109,7 +4110,7 @@ static int hvm_allow_set_param(struct domain *d,
 if ( rc )
 return rc;
 
-switch ( a->index )
+switch ( index )
 {
 /* The following parameters should only be changed once. */
 case HVM_PARAM_VIRIDIAN:
@@ -4119,7 +4120,7 @@ static int hvm_allow_set_param(struct domain *d,
 case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
 case HVM_PARAM_ALTP2M:
 case HVM_PARAM_MCA_CAP:
-if ( value != 0 && a->value != value )
+if ( value != 0 && new_value != value )
 rc = -EEXIST;
 break;
 default:
@@ -4129,49 +4130,32 @@ static int hvm_allow_set_param(struct domain *d,
 return rc;
 }
 
-static int hvmop_set_param(
-XEN_GUEST_HANDLE_PARAM(xen_hvm_param_t) arg)
+static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
 {
 struct domain *curr_d = current->domain;
-struct xen_hvm_param a;
-struct domain *d;
-struct vcpu *v;
 int rc;
+struct vcpu *v;
 
-if ( copy_from_guest(, arg, 1) )
-return -EFAULT;
-
-if ( a.index >= HVM_NR_PARAMS )
+if ( index >= HVM_NR_PARAMS )
 return -EINVAL;
 
-/* Make sure the above bound check is not bypassed during speculation. */
-block_speculation();
-
-d = rcu_lock_domain_by_any_id(a.domid);
-if ( d == NULL )
-return -ESRCH;
-
-rc = -EINVAL;
-if ( !is_hvm_domain(d) )
-goto out;
-
-rc = hvm_allow_set_param(d, );
+rc = hvm_allow_set_param(d, index, value);
 if ( rc )
 goto out;
 
-switch ( a.index )
+switch ( index )
 {
 case HVM_PARAM_CALLBACK_IRQ:
-hvm_set_callback_via(d, a.value);
+hvm_set_callback_via(d, value);
 hvm_latch_shinfo_size(d);
 break;
 case HVM_PARAM_TIMER_MODE:
-if ( a.value > HVMPTM_one_missed_tick_pending )
+if ( value > HVMPTM_one_missed_tick_pending )
 rc = -EINVAL;
 break;
 case HVM_PARAM_VIRIDIAN:
-if ( (a.value & ~HVMPV_feature_mask) ||
- !(a.value & HVMPV_base_freq) )
+if ( (value & ~HVMPV_feature_mask) ||
+ !(value & HVMPV_base_freq) )
 rc = -EINVAL;
 break;
 case HVM_PARAM_IDENT_PT:
@@ -4181,7 +4165,7 @@ static int hvmop_set_param(
  */
 if ( !paging_mode_hap(d) || !cpu_has_vmx )
 {
-d->arch.hvm.params[a.index] = a.value;
+d->arch.hvm.params[index] = value;
 break;
 }
 
@@ -4196,7 +4180,7 @@ static int hvmop_set_param(
 
 rc = 0;
 domain_pause(d);
-d->arch.hvm.params[a.index] = a.value;
+d->arch.hvm.params[index] = value;
 for_each_vcpu ( d, v )
 paging_update_cr3(v, false);
 domain_unpause(d);
@@ -4205,23 +4189,23 @@ static int hvmop_set_param(
 break;
 case HVM_PARAM_DM_DOMAIN:
 /* The only value this should ever be set to is DOMID_SELF */
-if ( a.value != DOMID_SELF )
+if ( value != DOMID_SELF )
 rc = -EINVAL;
 
-a.value = curr_d->domain_id;
+value = curr_d->domain_id;
 break;
 case HVM_PARAM_ACPI_S_STATE:
 rc = 0;
-if ( a.value == 3 )
+if ( value == 3 )
 hvm_s3_suspend(d);
-else if ( a.value == 0 )
+else if ( value == 0 )
 hvm_s3_resume(d);
 else
 rc = -EINVAL;
 
 break;
 case HVM_PARAM_ACPI_IOPORTS_LOCATION:
-rc = pmtimer_change_ioport(d, a.value);
+rc = pmtimer_change_ioport(d, value);
 break;
 case HVM_PARAM_MEMORY_EVENT_CR0:
 case HVM_PARAM_MEMORY_EVENT_CR3:
@@ -4236,24 +4220,24 @@ static int hvmop_set_param(
 rc = xsm_hvm_param_nested(XSM_PRIV, d);
 if ( rc )
 

Re: [Xen-devel] [PATCH 4/6] x86/boot: Clean up l?_bootmap[] construction

2020-01-08 Thread Andrew Cooper
On 08/01/2020 16:55, Jan Beulich wrote:
> On 08.01.2020 17:15, Andrew Cooper wrote:
>> On 08/01/2020 11:38, Jan Beulich wrote:
>>> As said - I'm going to try to not stand in the way of you re-
>>> arranging this, but
>>> - the new code should not break silently when (in particular)
>>>   l2_bootmap[] changes
>> What practical changes do you think could be done here?  I can't spot
>> any which would be helpful.
>>
>> A BUILD_BUG_ON() doesn't work.  The most likely case for something going
>> wrong here is an edit to x86_64.S and no equivalent edit to page.h,
>> which a BUILD_BUG_ON() wouldn't spot.  head.S similarly has no useful
>> protections which could be added.
> Well, the fundamental assumption is that the .S files and the
> C declaration of l?_bootmap[] are kept in sync. No BUILD_BUG_ON()
> can cover a mistake made there. But rather than using the literal
> 4 as you did, an ARRAY_SIZE() construct should be usable to either
> replace it, or amend it with a BUILD_BUG_ON().

You are aware that ARRAY_SIZE(l2_bootmap) is 2048 and
ARRAY_SIZE(l3_bootmap) is 512, neither of which would be correct here?

~Andrew


Re: [Xen-devel] [PATCH v2 00/20] VM forking

2020-01-08 Thread Tamas K Lengyel
On Wed, Jan 8, 2020 at 9:34 AM George Dunlap  wrote:
>
> On 12/31/19 3:11 PM, Roger Pau Monné wrote:
> > On Tue, Dec 31, 2019 at 08:00:17AM -0700, Tamas K Lengyel wrote:
> >> On Tue, Dec 31, 2019 at 3:40 AM Roger Pau Monné  
> >> wrote:
> >>>
> >>> On Mon, Dec 30, 2019 at 05:37:38PM -0700, Tamas K Lengyel wrote:
>  On Mon, Dec 30, 2019 at 5:20 PM Julien Grall  
>  wrote:
> >
> > Hi,
> >
> > On Mon, 30 Dec 2019, 20:49 Tamas K Lengyel,  wrote:
> >>
> >> On Mon, Dec 30, 2019 at 11:43 AM Julien Grall  wrote:
> >> But keep in mind that the "fork-vm" command even with this update
> >> would still not produce for you a "fully functional" VM on its own.
> >> The user still has to produce a new VM config file, create the new
> >> disk, save the QEMU state, etc.
> >>>
> >>> IMO the default behavior of the fork command should be to leave the
> >>> original VM paused, so that you can continue using the same disk and
> >>> network config in the fork and you won't need to pass a new config
> >>> file.
> >>>
> >>> As Julien already said, maybe I wasn't clear in my previous replies:
> >>> I'm not asking you to implement all this, it's fine if the
> >>> implementation of the fork-vm xl command requires you to pass certain
> >>> options, and that the default behavior is not implemented.
> >>>
> >>> We need an interface that's sane, and that's designed to be easy and
> >>> comprehensive to use, not an interface built around what's currently
> >>> implemented.
> >>
> >> OK, so I think that would look like "xl fork-vm " with
> >> additional options for things like name, disk, vlan, or a completely
> >> new config, all of which are currently not implemented, + an
> >> additional option to not launch QEMU at all, which would be the only
> >> one currently working. Also keeping the separate "xl fork-launch-dm"
> >> as is. Is that what we are talking about?
> >
> > I think fork-launch-vm should just be an option of fork-vm (ie:
> > --launch-dm-only or some such). I don't think there's a reason to have
> > a separate top-level command to just launch the device model.
>
> So first of all, Tamas -- do you actually need to exec xl here?  Would
> it make sense for these to start out simply as libxl functions that are
> called by your system?

For my current tools & tests - no. I don't start QEMU for the forks at
all. So at this point I don't even need libxl. But I can foresee that
at some point in the future it may become necessary in case we want to
allow the forked VM to touch emulated devices. I found it easier to wire
QEMU up and make the system functional as a whole via xl. There are just
too many moving components involved to do it any other way.

>
> I actually disagree that we want a single command to do all of these.
> If we did want `exec xl` to be one of the supported interfaces, I think
> it would break down something like this:
>
> `xl fork-domain`: Only forks the domain.
> `xl fork-launch-dm`: (or attach-dm?): Start up and attach the
> devicemodel to the domain
>
> Then `xl fork` (or maybe `xl fork-vm`) would be something implemented in
> the future that would fork the entire domain.

I really don't have a strong opinion about this either way. I can see
it working either way. Having them all bundled under a single
top-level command doesn't pollute the help text when someone is just
looking at what xl can do in general. It makes that command a lot more
complex for sure, but I don't think it's too bad.

>
> (This is similar to how `git am` works for instance; internally it runs
> several steps, including `git mailsplit`, `git mailinfo`, and `git
> apply-patch`, each of which can be called individually.)
>
> I think I would also have:
>
> `xl fork-save-dm`: Connect over qmp to the parent domain and save the dm
> file

Aye, could be done. For now I didn't bother since it's trivial to do
manually already.

>
> Then have `xl fork-launch-dm` either take a filename (saved from the
> previous step) or a parent domain id (in which case it would arrange to
> save the file itself).
>
> Although in fact, is there any reason we couldn't store the parent
> domain ID in xenstore, so that `xl fork-launch-dm` could find the parent
> by itself?  (Although that, of course, is something that could be added
> later if it's not something Tamas needs.)

Could be done. But I store the ID internally in my tools anyway since I
need it to initialize VMI, so having it in Xenstore is not required
for me. In fact I would prefer to leave Xenstore out of these
operations as much as possible because it would slow things down. In my
latest tests forking is down to 0.0007s; having to touch Xenstore for
each fork would slow things down considerably.

Thanks,
Tamas


[Xen-devel] [PATCH] x86/boot: Rationalise stack handling during early boot

2020-01-08 Thread Andrew Cooper
The top (numerically higher addresses) of cpu0_stack[] contains the BSP's
cpu_info block.  Logic in Xen expects this to be initialised to 0, but this
area of stack is also used during early boot.

Update the head.S code to avoid using the cpu_info block.  Additionally,
update the stack_start variable to match, which avoids __high_start() and
efi_arch_post_exit_boot() needing to make the adjustment manually.

Finally, leave a big warning by the BIOS BSS initialisation, because it is by
no means obvious that the stack doesn't survive the REP STOS.

Signed-off-by: Andrew Cooper 
---
CC: Jan Beulich 
CC: Wei Liu 
CC: Roger Pau Monné 
---
 xen/arch/x86/boot/head.S| 10 +++---
 xen/arch/x86/boot/x86_64.S  |  3 +--
 xen/arch/x86/efi/efi-boot.h | 13 +++--
 xen/arch/x86/smpboot.c  |  2 +-
 4 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index 8d0ffbd1b0..2382b61dd4 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -400,7 +400,7 @@ __pvh_start:
 sub $sym_offs(1b), %esi
 
 /* Set up stack. */
-lea STACK_SIZE + sym_esi(cpu0_stack), %esp
+lea STACK_SIZE - CPUINFO_sizeof + sym_esi(cpu0_stack), %esp
 
 mov %ebx, sym_esi(pvh_start_info_pa)
 
@@ -447,7 +447,7 @@ __start:
 sub $sym_offs(1b), %esi
 
 /* Set up stack. */
-lea STACK_SIZE + sym_esi(cpu0_stack), %esp
+lea STACK_SIZE - CPUINFO_sizeof + sym_esi(cpu0_stack), %esp
 
 /* Bootloaders may set multiboot{1,2}.mem_lower to a nonzero value. */
 xor %edx,%edx
@@ -616,7 +616,11 @@ trampoline_setup:
 cmpb$0,sym_fs(efi_platform)
 jnz 1f
 
-/* Initialize BSS (no nasty surprises!). */
+/*
+ * Initialise the BSS.
+ *
+ * !!! WARNING - also zeroes the current stack !!!
+ */
 mov $sym_offs(__bss_start),%edi
 mov $sym_offs(__bss_end),%ecx
 push%fs
diff --git a/xen/arch/x86/boot/x86_64.S b/xen/arch/x86/boot/x86_64.S
index b54d3aceea..0acf5e860c 100644
--- a/xen/arch/x86/boot/x86_64.S
+++ b/xen/arch/x86/boot/x86_64.S
@@ -16,7 +16,6 @@ ENTRY(__high_start)
 mov %rcx,%cr4
 
 mov stack_start(%rip),%rsp
-or  $(STACK_SIZE-CPUINFO_sizeof),%rsp
 
 /* Reset EFLAGS (subsumes CLI and CLD). */
 pushq   $0
@@ -42,7 +41,7 @@ multiboot_ptr:
 .long   0
 
 GLOBAL(stack_start)
-.quad   cpu0_stack
+.quad   cpu0_stack + STACK_SIZE - CPUINFO_sizeof
 
 .section .data.page_aligned, "aw", @progbits
 .align PAGE_SIZE, 0
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index 676d616ff8..8debdc7ca8 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -249,23 +249,24 @@ static void __init noreturn efi_arch_post_exit_boot(void)
"or $"__stringify(X86_CR4_PGE)", %[cr4]\n\t"
"mov%[cr4], %%cr4\n\t"
 #endif
-   "movabs $__start_xen, %[rip]\n\t"
"lgdt   boot_gdtr(%%rip)\n\t"
-   "movstack_start(%%rip), %%rsp\n\t"
"mov%[ds], %%ss\n\t"
"mov%[ds], %%ds\n\t"
"mov%[ds], %%es\n\t"
"mov%[ds], %%fs\n\t"
"mov%[ds], %%gs\n\t"
-   "movl   %[cs], 8(%%rsp)\n\t"
-   "mov%[rip], (%%rsp)\n\t"
-   "lretq  %[stkoff]-16"
+
+   /* Jump to higher mappings. */
+   "movstack_start(%%rip), %%rsp\n\t"
+   "movabs $__start_xen, %[rip]\n\t"
+   "push   %[cs]\n\t"
+   "push   %[rip]\n\t"
+   "lretq"
: [rip] "=" (efer/* any dead 64-bit variable */),
  [cr4] "+" (cr4)
: [cr3] "r" (idle_pg_table),
  [cs] "ir" (__HYPERVISOR_CS),
  [ds] "r" (__HYPERVISOR_DS),
- [stkoff] "i" (STACK_SIZE - sizeof(struct cpu_info)),
  "D" ()
: "memory" );
 unreachable();
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 7e29704080..0d0526e2b2 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -554,7 +554,7 @@ static int do_boot_cpu(int apicid, int cpu)
 printk("Booting processor %d/%d eip %lx\n",
cpu, apicid, start_eip);
 
-stack_start = stack_base[cpu];
+stack_start = stack_base[cpu] + STACK_SIZE - sizeof(struct cpu_info);
 
 /* This grunge runs the startup process for the targeted processor. */
 
-- 
2.11.0



Re: [Xen-devel] [PATCH] MAINTAINERS: fix malformed entry

2020-01-08 Thread Jan Beulich
On 08.01.2020 17:57, Juergen Gross wrote:
> MAINTAINERS entries tagged with "L:" should have a pure mail address
> as the second word. Fix a malformed entry. Otherwise add_maintainers.pl
> will produce an empty "Cc:" line.
> 
> Signed-off-by: Juergen Gross 

Acked-by: Jan Beulich 

Of course the alternative would be to make the script less picky.

Jan


[Xen-devel] [PATCH] MAINTAINERS: fix malformed entry

2020-01-08 Thread Juergen Gross
MAINTAINERS entries tagged with "L:" should have a pure mail address
as the second word. Fix a malformed entry. Otherwise add_maintainers.pl
will produce an empty "Cc:" line.

Signed-off-by: Juergen Gross 
---
 MAINTAINERS | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index eaea4620e2..a42fef6ab9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -173,7 +173,7 @@ ARINC653 SCHEDULER
 M: Josh Whitehead 
 M: Stewart Hildebrand 
 S: Supported
-L: DornerWorks Xen-Devel 
+L: xen-de...@dornerworks.com
 F: xen/common/sched_arinc653.c
 F: tools/libxc/xc_arinc653.c
 
-- 
2.16.4



Re: [Xen-devel] [PATCH 4/6] x86/boot: Clean up l?_bootmap[] construction

2020-01-08 Thread Jan Beulich
On 08.01.2020 17:15, Andrew Cooper wrote:
> On 08/01/2020 11:38, Jan Beulich wrote:
>> As said - I'm going to try to not stand in the way of you re-
>> arranging this, but
>> - the new code should not break silently when (in particular)
>>   l2_bootmap[] changes
> 
> What practical changes do you think could be done here?  I can't spot
> any which would be helpful.
> 
> A BUILD_BUG_ON() doesn't work.  The most likely case for something going
> wrong here is an edit to x86_64.S and no equivalent edit to page.h,
> which a BUILD_BUG_ON() wouldn't spot.  head.S similarly has no useful
> protections which could be added.

Well, the fundamental assumption is that the .S files and the
C declaration of l?_bootmap[] are kept in sync. No BUILD_BUG_ON()
can cover a mistake made there. But rather than using the literal
4 as you did, an ARRAY_SIZE() construct should be usable to either
replace it, or amend it with a BUILD_BUG_ON().

Jan


[Xen-devel] scripts/add_maintainers.pl adding empty Cc: lines

2020-01-08 Thread Jürgen Groß

Just had a chat with Lars on IRC, which might be of common
interest (and Lars asked me to post it to xen-devel):

(17:00:16) juergen_gross: lars_kurth: any idea why 
./scripts/add_maintainers.pl would add a "Cc:" without a mail address to 
a patch? Happened e.g. in my series "[PATCH v2 0/9] xen: scheduler 
cleanups" (cover-letter, patches 1, 2, 7 and 9)
(17:01:58) lars_kurth: juergen_gross: oh, an actual bug! Let me look at 
the code

(17:02:19) lars_kurth: juergen_gross:  is it missing some e-mails?
(17:02:34) juergen_gross: git send-email seems to remove those empty Cc: 
lines
(17:02:53) juergen_gross: I'm not aware of a mail address missing. Let 
me double check
(17:06:56) juergen_gross: lars_kurth: hmm, shouldn't the MAINTAINERS 
entry "L:  DornerWorks Xen-Devel " result 
in a Cc:?

(17:08:17) lars_kurth: Let me have a look
(17:13:16) juergen_gross: lars_kurth: at least the related file is 
touched exactly by the affected patches (and not by any not affected patch)
(17:13:36) lars_kurth: Looking at the series the most likely cause of 
this is the L: entry - need to look at the code
(17:15:21) lars_kurth: juergen_gross: it's also an odd one because it 
changes MAINTAINERS and renames a lot of files, which may be the cause 
for the empty spaces
(17:15:52) juergen_gross: lars_kurth: in Linux MAINTAINERS all "L:" 
entries just have a mail address as first word after the "L:" (not "bla 
bla ")

(17:16:11) lars_kurth: Ah yes: let me look at that code
(17:21:29) lars_kurth: juergen_gross: I think that is in fact the issue
(17:27:16) lars_kurth: juergen_gross: I can't fix this with some 
debugging. Could you copy this conversation into a mail on xen-devel@ 
such that I remember

(17:27:43) lars_kurth: uergen_gross: with=without
(17:29:36) lars_kurth:  juergen_gross: I think what happens is that 
get_maintainer.pl and add_maintainer.pl process these lines differently, 
but add_maintainer.pl also checks against output created from 
get_maintainer.pl
(17:44:58) juergen_gross: lars_kurth: what about doing it the easy way? 
With a modified MAINTAINERS file (using "L: xen-de...@dornerworks.com") 
everything is fine. I can send a patch in case you agree.
(17:46:41) lars_kurth: juergen_gross: let's do that first, but I still 
would like to fix the underlying issue at some point - asking for you to 
send the IRC log, as I cleared my history by mistake (when I was typing 
a reply I slipped from shift to ctrl, which did it)



Juergen


Re: [Xen-devel] [PATCH v2] xen/x86: clear per cpu stub page information in cpu_smpboot_free()

2020-01-08 Thread Jan Beulich
On 08.01.2020 16:29, Jürgen Groß wrote:
> On 08.01.20 16:21, Jan Beulich wrote:
>> On 08.01.2020 15:34, Juergen Gross wrote:
>>> cpu_smpboot_free() removes the stubs for the cpu going offline, but it
>>> isn't clearing the related percpu variables. This will result in
>>> crashes when a stub page is released due to all related cpus gone
>>> offline and one of those cpus going online later.
>>>
>>> Fix that by clearing stubs.addr and stubs.mfn in order to allocate a
>>> new stub page when needed.
>>
>> I was really hoping for you to mention CPU parking here. How about
>>
>> "Fix that by clearing stubs.mfn (and also stubs.addr just to be on
>>   the safe side) in order to allocate a new stub page when needed,
>>   irrespective of whether the CPU gets parked or removed."
>>
>>> --- a/xen/arch/x86/smpboot.c
>>> +++ b/xen/arch/x86/smpboot.c
>>> @@ -945,6 +945,8 @@ static void cpu_smpboot_free(unsigned int cpu, bool remove)
>>>(per_cpu(stubs.addr, cpu) | ~PAGE_MASK) + 1);
>>>   if ( i == STUBS_PER_PAGE )
>>>   free_domheap_page(mfn_to_page(mfn));
>>> +per_cpu(stubs.addr, cpu) = 0;
>>> +per_cpu(stubs.mfn, cpu) = 0;
>>
>> Looking more closely, I think I'd prefer these two lines (of which
>> the addr one isn't strictly needed anyway) to move ahead of the
>> if().
>>
>> If you agree, I'll be happy to do both while committing.
> 
> I agree.
> 
> I'm not sure the addr clearing can be omitted. This might result in
> problems when an early error happens during onlining in
> cpu_smpboot_alloc(), thus skipping the call of alloc_stub_page().
> The subsequent call of cpu_smpboot_free() will then overwrite mfn 0.

Oh, good point.

Jan


Re: [Xen-devel] [PATCH v2 00/20] VM forking

2020-01-08 Thread George Dunlap
On 12/31/19 3:11 PM, Roger Pau Monné wrote:
> On Tue, Dec 31, 2019 at 08:00:17AM -0700, Tamas K Lengyel wrote:
>> On Tue, Dec 31, 2019 at 3:40 AM Roger Pau Monné  wrote:
>>>
>>> On Mon, Dec 30, 2019 at 05:37:38PM -0700, Tamas K Lengyel wrote:
 On Mon, Dec 30, 2019 at 5:20 PM Julien Grall  
 wrote:
>
> Hi,
>
> On Mon, 30 Dec 2019, 20:49 Tamas K Lengyel,  wrote:
>>
>> On Mon, Dec 30, 2019 at 11:43 AM Julien Grall  wrote:
>> But keep in mind that the "fork-vm" command even with this update
>> would still not produce for you a "fully functional" VM on its own.
>> The user still has to produce a new VM config file, create the new
>> disk, save the QEMU state, etc.
>>>
>>> IMO the default behavior of the fork command should be to leave the
>>> original VM paused, so that you can continue using the same disk and
>>> network config in the fork and you won't need to pass a new config
>>> file.
>>>
>>> As Julien already said, maybe I wasn't clear in my previous replies:
>>> I'm not asking you to implement all this, it's fine if the
>>> implementation of the fork-vm xl command requires you to pass certain
>>> options, and that the default behavior is not implemented.
>>>
>>> We need an interface that's sane, and that's designed to be easy and
>>> comprehensive to use, not an interface built around what's currently
>>> implemented.
>>
>> OK, so I think that would look like "xl fork-vm " with
>> additional options for things like name, disk, vlan, or a completely
>> new config, all of which are currently not implemented, + an
>> additional option to not launch QEMU at all, which would be the only
>> one currently working. Also keeping the separate "xl fork-launch-dm"
>> as is. Is that what we are talking about?
> 
> I think fork-launch-vm should just be an option of fork-vm (ie:
> --launch-dm-only or some such). I don't think there's a reason to have
> a separate top-level command to just launch the device model.

So first of all, Tamas -- do you actually need to exec xl here?  Would
it make sense for these to start out simply as libxl functions that are
called by your system?

I actually disagree that we want a single command to do all of these.
If we did want `exec xl` to be one of the supported interfaces, I think
it would break down something like this:

`xl fork-domain`: Only forks the domain.
`xl fork-launch-dm`: (or attach-dm?): Start up and attach the
devicemodel to the domain

Then `xl fork` (or maybe `xl fork-vm`) would be something implemented in
the future that would fork the entire domain.

(This is similar to how `git am` works for instance; internally it runs
several steps, including `git mailsplit`, `git mailinfo`, and `git
apply-patch`, each of which can be called individually.)

I think I would also have:

`xl fork-save-dm`: Connect over qmp to the parent domain and save the dm
file

Then have `xl fork-launch-dm` either take a filename (saved from the
previous step) or a parent domain id (in which case it would arrange to
save the file itself).

Although in fact, is there any reason we couldn't store the parent
domain ID in xenstore, so that `xl fork-launch-dm` could find the parent
by itself?  (Although that, of course, is something that could be added
later if it's not something Tamas needs.)

Thoughts?

 -George


Re: [Xen-devel] [PATCH 4/6] x86/boot: Clean up l?_bootmap[] construction

2020-01-08 Thread Andrew Cooper
On 08/01/2020 11:38, Jan Beulich wrote:
>> The purpose of this was to make the handling of l?_bootmap[] as
>> consistent as possible between the various environments.  The pagetables
>> themselves are common, and should be used consistently.
> I don't think I can wholeheartedly agree here: l?_bootmap[] are
> throw-away page tables (living in .init), and with the non-EFI and
> EFI boot paths being so different anyway, them using the available
> tables differently is not a big issue imo. This heavy difference of
> other aspects was also why back then I decided to be as defensive
> towards l2_bootmap[] size changes as possible in code which doesn't
> really need it to be multiple pages.

This description suggests that you haven't spotted the rather more
subtle bug, which will trip up anyone trying to develop here in the future.

This scheme is incompatible with trying to map a second object (e.g. the
trampoline) into the bootmap, because depending on alignment, it may
overlap with the PTEs which mapped Xen.  There also typically isn't an
l3_bootmap[0] => l2_bootmap[0] because of where xen.efi is loaded in memory.

>
> As said - I'm going to try to not stand in the way of you re-
> arranging this, but
> - the new code should not break silently when (in particular)
>   l2_bootmap[] changes

What practical changes do you think could be done here?  I can't spot
any which would be helpful.

A BUILD_BUG_ON() doesn't work.  The most likely case for something going
wrong here is an edit to x86_64.S and no equivalent edit to page.h,
which a BUILD_BUG_ON() wouldn't spot.  head.S similarly has no useful
protections which could be added.

~Andrew


Re: [Xen-devel] [XEN PATCH 2/2] automation: Cache sub-project git tree in build jobs

2020-01-08 Thread Wei Liu
On Mon, Jan 06, 2020 at 02:34:02PM +, Anthony PERARD wrote:
> On Fri, Jan 03, 2020 at 02:29:07PM +, Wei Liu wrote:
> > On Thu, Dec 19, 2019 at 02:42:17PM +, Anthony PERARD wrote:
> > > GitLab has a caching capability, see [1]. Let's use it to avoid using
> > > the Internet too often.
> > > 
> > > The cache is set up so that when xen.git/Config.mk is changed, the
> > > cache will need to be recreated. This has been chosen because that is
> > > where the information about how to clone sub-project trees is encoded
> > > (revisions). That may not work for qemu-xen tree which usually is
> > > `master', but that should be fine for now.
> > > 
> > > The cache is populated with "git bundle" files, each containing a mirror
> > > of the original repo that can be cloned from. If the bundle exists, the
> > > script has the Xen makefiles clone from it; otherwise they clone from
> > > the original URL and the bundle is created just after.
> > > 
> > > We have more than one runner in GitLab, and no shared cache between
> > > them, so every build job will be responsible for creating the cache.
> > > 
> > > [1] https://docs.gitlab.com/ee/ci/yaml/README.html#cache
> > > 
> > > Signed-off-by: Anthony PERARD 
> > 
> > This is a good improvement.
> > 
> > Have you run this in Gitlab CI? Can you point me to a run?
> 
> I have used the CI to develop the patch, so yes, I have a run of it. But
> it is a run made with my WIP branch; still, it should be the same result
> as if it were done with the final patch:
> https://gitlab.com/xen-project/people/anthonyper/xen/pipelines/104343621

This looks good to me.

> 
> > > diff --git a/automation/scripts/prepare-cache.sh 
> > > b/automation/scripts/prepare-cache.sh
> > > new file mode 100755
> > > index ..017f1b8f0672
> > > --- /dev/null
> > > +++ b/automation/scripts/prepare-cache.sh
> > > @@ -0,0 +1,52 @@
> > > +#!/bin/bash
> > > +
> > > +set -ex
> > > +
> > > +cachedir="${CI_PROJECT_DIR:=`pwd`}/ci_cache"
> > > +mkdir -p "$cachedir"
> > > +
> > > +declare -A r
> > > +r[extras/mini-os]=MINIOS_UPSTREAM_URL
> > > +r[tools/qemu-xen-dir]=QEMU_UPSTREAM_URL
> > > +r[tools/qemu-xen-traditional-dir]=QEMU_TRADITIONAL_URL
> > > +r[tools/firmware/ovmf-dir]=OVMF_UPSTREAM_URL
> > > +r[tools/firmware/seabios-dir]=SEABIOS_UPSTREAM_URL
> > 
> > Does this mean if in the future we add or remove trees we will need to
> > modify this part in the same commit?
> 
> We would need to modify the script when trees are removed, because I
> haven't thought of that. But when trees are added, the script can be
> changed in a follow-up.
> 
> Ideally, we would use the Makefiles to discover the git clones that can
> be cached, but that's not possible just yet.
> 
> In the meantime, I think I should make the script more robust against
> removal of trees, so it doesn't have to be modified in the same commit.

OK. I'm expecting a new version then.

Wei.


Re: [Xen-devel] [PATCH v2] arm64: xen: Use modern annotations for assembly functions

2020-01-08 Thread Will Deacon
On Thu, Dec 19, 2019 at 01:07:50PM -0800, Stefano Stabellini wrote:
> On Thu, 19 Dec 2019, Mark Brown wrote:
> > In an effort to clarify and simplify the annotation of assembly functions
> > in the kernel new macros have been introduced. These replace ENTRY and
> > ENDPROC. Update the annotations in the xen code to the new macros.
> > 
> > Signed-off-by: Mark Brown 
> > Reviewed-by: Julien Grall 
> > Reviewed-by: Stefano Stabellini 
> 
> Thank you!
> 
> > ---
> >  arch/arm64/xen/hypercall.S | 8 
> >  1 file changed, 4 insertions(+), 4 deletions(-)

Is this going via the Xen tree, or shall I queue it along with the other
asm annotation patches in the arm64 tree? I don't see it in -next yet.

Cheers,

Will


Re: [Xen-devel] [PATCH v3 3/5] x86/hyperv: provide percpu hypercall input page

2020-01-08 Thread Wei Liu
On Wed, Jan 08, 2020 at 11:55:03AM +0100, Jan Beulich wrote:
> On 07.01.2020 18:27, Wei Liu wrote:
> > On Tue, Jan 07, 2020 at 06:08:19PM +0100, Jan Beulich wrote:
> >> On 07.01.2020 17:33, Wei Liu wrote:
> >>> On Mon, Jan 06, 2020 at 11:27:18AM +0100, Jan Beulich wrote:
>  On 05.01.2020 17:47, Wei Liu wrote:
> > Hyper-V's input/output arguments must be 8-byte aligned and must not
> > cross a page boundary. The easiest way to satisfy those requirements is
> > to use a percpu page.
> 
>  I'm not sure "easiest" is really true here. Others could consider adding
>  __aligned() attributes as easy or even easier (by being even more
>  transparent to use sites). Could we settle on "One way ..."?
> >>>
> >>> Do you mean something like
> >>>
> >>>struct foo __aligned(8);
> >>
> >> If this is in a header and ...
> >>
> >>>hv_do_hypercall(OP, virt_to_maddr(), ...);
> >>
> >> ... this in actual code, then yes.
> >>
> >>> ?
> >>>
> >>> I don't think this is transparent to user sites. Plus, foo is on stack
> >>> which is 1) difficult to get its maddr,
> >>
> >> It being on the stack may indeed complicate getting its machine address
> >> (if not now, then down the road) - valid point.
> >>
> >>> 2) may cross page boundary.
> >>
> >> The __aligned() of course needs to be large enough to avoid this
> >> happening.
> > 
> > For this alignment to be large enough, it will need to be of PAGE_SIZE,
> > right? Wouldn't that blow up Xen's stack easily?  Given we only have two
> > pages for that.
> 
> Why PAGE_SIZE? For example, a 24-byte structure won't cross a page
> boundary if aligned to 32 bytes.
> 

You're right.

I said PAGE_SIZE because I was too lazy to calculate the size of every
structure. That's tedious and error-prone.

> > In light of these restrictions, the approach I take in the original
> > patch should be okay.
> > 
> > I'm fine with changing the wording to "One way ..." -- if that's the
> > only objection you have after this mail.
> 
> Well, the goal was to (a) check whether alternatives have been considered
> (and if not, to consider them) and then (b) if we stick to your approach,
> slightly change the wording as suggested.

I think the determining factor here is the difficulty of getting the
maddr of a stack variable. I will stick with this approach and change
the wording.

Wei.

> 
> Jan


Re: [Xen-devel] [PATCH 2/6] x86/boot: Map the trampoline as read-only

2020-01-08 Thread Andrew Cooper
On 08/01/2020 11:08, Jan Beulich wrote:
> On 07.01.2020 20:04, Andrew Cooper wrote:
>> On 07/01/2020 16:19, Jan Beulich wrote:
>>> On 07.01.2020 16:51, Andrew Cooper wrote:
 On 07/01/2020 15:21, Jan Beulich wrote:
> On 06.01.2020 16:54, Andrew Cooper wrote:
>> c/s ec92fcd1d08, which caused the trampoline GDT Access bits to be set,
>> removed the final writes which occurred between enabling paging and 
>> switching
>> to the high mappings.  There don't plausibly need to be any memory
>> writes in the few instructions it takes to perform this transition.
>>
>> As a consequence, we can remove the RWX mapping of the trampoline.  It 
>> is RX
>> via its identity mapping below 1M, and RW via the directmap.
>>
>> Signed-off-by: Andrew Cooper 
> Reviewed-by: Jan Beulich 
>
>> This probably wants backporting, alongside ec92fcd1d08 if it hasn't yet.
> This is just cleanup, largely cosmetic in nature. It could be argued
> that once the directmap has disappeared this can serve as additional
> proof that the trampoline range has no (intended) writable mappings
> anymore, but prior to that point I don't see much further benefit.
> Could you expand on the reasons why you see both as backporting
> candidates?
 Defence in depth.

 An RWX mapping is very attractive for an attacker who's broken into Xen
 and is looking to expand the damage they can do.
>>> Such an attacker is typically in the position though to make
>>> themselves RWX mappings.
>> This is one example of a possibility.  I wouldn't put it in the "likely"
>> category, and it definitely isn't a guarantee.
>>
>>>  Having as little as possible is only
>>> complicating their job, not making it impossible, I would say.
>> Yes, and?
>>
>> This is the entire point of defence in depth.  Make an attackers job harder.
>>
>> Enforcing W^X is universally considered a good thing from a security
>> perspective, because it removes a load of trivial cases where a
>> stack over-write can easily be turned into arbitrary code execution.
> Then let me ask the question differently: Did we backport any of the
> earlier RWX elimination changes? I don't recall us doing so.

I don't know if we did or not.

> Please
> don't get me wrong - I'm happy to be convinced of the backport need,
> but as always I'd like to take such a decision in a consistent (and
> hence sufficiently predictable) manner, or alternatively with a good
> enough reason to ignore this general goal.

If we didn't, then we really ought to have done.  There are real,
concrete security nice-to-haves from it.

~Andrew


Re: [Xen-devel] [PATCH 2/2] Revert "tools/libxc: disable x2APIC when using nested virtualization"

2020-01-08 Thread Wei Liu
On Wed, Jan 08, 2020 at 11:38:57AM +0100, Roger Pau Monne wrote:
> This reverts commit 7b3c5b70a32303b46d0d051e695f18d72cce5ed0 and
> re-enables the usage of x2APIC with nested virtualization.
> 
> Signed-off-by: Roger Pau Monné 

Acked-by: Wei Liu 

(subject to acceptance of patch 1, of course)

> ---
>  tools/libxc/xc_cpuid_x86.c | 11 ---
>  1 file changed, 11 deletions(-)
> 
> diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xc_cpuid_x86.c
> index ac38c1406e..2540aa1e1c 100644
> --- a/tools/libxc/xc_cpuid_x86.c
> +++ b/tools/libxc/xc_cpuid_x86.c
> @@ -653,17 +653,6 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t 
> domid,
>  p->extd.itsc = true;
>  p->basic.vmx = true;
>  p->extd.svm = true;
> -
> -/*
> - * BODGE: don't announce x2APIC mode when using nested 
> virtualization,
> - * as it doesn't work properly. This should be removed once the
> - * underlying bug(s) are fixed.
> - */
> -rc = xc_hvm_param_get(xch, domid, HVM_PARAM_NESTEDHVM, );
> -if ( rc )
> -goto out;
> -if ( val )
> -p->basic.x2apic = false;
>  }
>  
>  rc = x86_cpuid_copy_to_buffer(p, leaves, _leaves);
> -- 
> 2.24.1
> 


Re: [Xen-devel] Making save/restore optional in toolstack, for edge/embedded derivatives

2020-01-08 Thread Marek Marczykowski-Górecki
On Wed, Jan 08, 2020 at 03:20:36PM +, Wei Liu wrote:
> On Thu, Jan 02, 2020 at 01:51:21PM -0500, Rich Persaud wrote:
> > Linux stubdom patches currently require qemu in dom0 for consoles [1],
> > due to the upstream toolstack need for save/restore.  Until a
> > long-term solution is available (multiple console support in
> > xenconsoled), would tools maintainers consider a patch that made
> > save/restore build-time configurable for the toolstack?  This would
> > avoid Xen edge/embedded derivatives having to patch downstream to
> > remove save/restore, e.g. to avoid qemu in dom0.
> 
> Re multiple console support, I think that's added back in 2017 for Arm
> guests. What is missing?
> 
> (Not suggesting it is fit for purpose as-is)

No, it only adds support for multiple console _types_. The key thing is,
those are statically defined in the code. I've tried to repurpose it to
support up to 3 (or 4) consoles, but it's rather ugly and Ian(?) didn't
like it. Refactoring it for a dynamic number of consoles is much more
work...

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?



Re: [Xen-devel] PV DRM doesn't work without auto_translated_physmap feature in Dom0

2020-01-08 Thread Santucco

Thank you very much for all your answers. 
>Wednesday, 8 January 2020, 10:54 +03:00 from Oleksandr Andrushchenko < 
>oleksandr_andrushche...@epam.com >:
> 
>On 1/6/20 10:38 AM, Jürgen Groß wrote:
>> On 06.01.20 08:56, Santucco wrote:
>>> Hello,
>>>
>>> I’m trying to use the vdispl interface from a PV OS, but it doesn’t work.
>>> Configuration details:
>>>  Xen 4.12.1
>>>  Dom0: Linux 4.20.17-gentoo #13 SMP Sat Dec 28 11:12:24 MSK 2019
>>> x86_64 Intel(R) Celeron(R) CPU N3050 @ 1.60GHz GenuineIntel GNU/Linux
>>>  DomU: x86 Plan9, PV
>>>  displ_be as a backend for vdispl and vkb
>>>
>>> when VM starts, displ_be reports about an error:
>>> gnttab: error: ioctl DMABUF_EXP_FROM_REFS failed: Invalid argument
>>> (displ_be.log:221)
>>>
>>> related Dom0 output is:
>>> [  191.579278] Cannot provide dma-buf: use_ptemode 1
>>> (dmesg.create.log:123)
>>
>> This seems to be a limitation of the xen dma-buf driver. It was written
>> initially for use on ARM, where PV is not available.
>This is true and we never tried/targeted PV domains with this
>implementation,
>so if there is a need for that, someone has to take a look at the proper
>implementation for PV…
Have I got you right that there is no proper implementation :-)?
>>
>> CC-ing Oleksandr Andrushchenko who is the author of that driver. He
>> should be able to tell us what would be needed to enable PV dom0.
>>
>> Depending on your use case it might be possible to use PVH dom0, but
>> support for this mode is "experimental" only and some features are not
>> yet working.
>>
>Well, one of the possible workarounds is to drop the zero-copying use-case
>(this is why the display backend tries to create dma-bufs from grants passed
>by the guest domain and fails because of "Cannot provide dma-buf:
>use_ptemode 1").
>So, in this case the display backend will do memory copying for the incoming
>frames
>and won't touch the DMABUF_EXP_FROM_REFS ioctl.
>To do so just disable zero-copying while building the backend [1]
 
Thanks, I have just tried the workaround.  The backend has failed in another 
place, not related to dma-buf.
Anyway, it is enough to continue debugging my frontend implementation.
 
Do you know how big the performance penalty is in comparison with the 
zero-copy variant?
 
Does it make sense to set up a dedicated HVM domain running Linux only for the 
purpose of the vdispl and vkbd backends? Is there hope this approach will work?
>>
>> Juergen
>>
>[1]  https://github.com/xen-troops/displ_be/blob/master/CMakeLists.txt#L12
 
Best regards,
  Alexander Sychev

Re: [Xen-devel] [PATCH v2 00/20] VM forking

2020-01-08 Thread Tamas K Lengyel
On Wed, Jan 8, 2020 at 8:08 AM Roger Pau Monné  wrote:
>
> On Tue, Dec 31, 2019 at 09:36:01AM -0700, Tamas K Lengyel wrote:
> > On Tue, Dec 31, 2019 at 9:08 AM Tamas K Lengyel  wrote:
> > >
> > > On Tue, Dec 31, 2019 at 8:11 AM Roger Pau Monné  
> > > wrote:
> > > >
> > > > On Tue, Dec 31, 2019 at 08:00:17AM -0700, Tamas K Lengyel wrote:
> > > > > On Tue, Dec 31, 2019 at 3:40 AM Roger Pau Monné 
> > > > >  wrote:
> > > > > >
> > > > > > On Mon, Dec 30, 2019 at 05:37:38PM -0700, Tamas K Lengyel wrote:
> > > > > > > On Mon, Dec 30, 2019 at 5:20 PM Julien Grall 
> > > > > > >  wrote:
> > > > > > > >
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > On Mon, 30 Dec 2019, 20:49 Tamas K Lengyel, 
> > > > > > > >  wrote:
> > > > > > > >>
> > > > > > > >> On Mon, Dec 30, 2019 at 11:43 AM Julien Grall  
> > > > > > > >> wrote:
> > > > > > > >> But keep in mind that the "fork-vm" command even with this 
> > > > > > > >> update
> > > > > > > >> would still not produce for you a "fully functional" VM on its 
> > > > > > > >> own.
> > > > > > > >> The user still has to produce a new VM config file, create the 
> > > > > > > >> new
> > > > > > > >> disk, save the QEMU state, etc.
> > > > > >
> > > > > > IMO the default behavior of the fork command should be to leave the
> > > > > > original VM paused, so that you can continue using the same disk and
> > > > > > network config in the fork and you won't need to pass a new config
> > > > > > file.
> > > > > >
> > > > > > As Julien already said, maybe I wasn't clear in my previous replies:
> > > > > > I'm not asking you to implement all this, it's fine if the
> > > > > > implementation of the fork-vm xl command requires you to pass 
> > > > > > certain
> > > > > > options, and that the default behavior is not implemented.
> > > > > >
> > > > > > We need an interface that's sane, and that's designed to be easy and
> > > > > > comprehensive to use, not an interface built around what's currently
> > > > > > implemented.
> > > > >
> > > > > OK, so I think that would look like "xl fork-vm " with
> > > > > additional options for things like name, disk, vlan, or a completely
> > > > > new config, all of which are currently not implemented, + an
> > > > > additional option to not launch QEMU at all, which would be the only
> > > > > one currently working. Also keeping the separate "xl fork-launch-dm"
> > > > > as is. Is that what we are talking about?
> > > >
> > > > I think fork-launch-vm should just be an option of fork-vm (ie:
> > > > --launch-dm-only or some such). I don't think there's a reason to have
> > > > a separate top-level command to just launch the device model.
> > >
> > > It's just that the fork-launch-dm needs the domid of the fork, while
> > > the fork-vm needs the parent's domid. But I guess we can interpret the
> > > "domid" required input differently depending on which sub-option is
> > > specified for the command. Let's see how it pans out.
> >
> > How does the following look for the interface?
> >
> > { "fork-vm",
> >   _fork_vm, 0, 1,
> >   "Fork a domain from the running parent domid",
> >   "[options] ",
> >   "-h   Print this help.\n"
> >   "-N Assign name to VM fork.\n"
> >   "-D Assign disk to VM fork.\n"
> >   "-B Assign bridge to VM fork.\n"
> >   "-V Assign vlan to VM fork.\n"
>
> IMO I think the name of fork is the only useful option. Being able to
> assign disks or bridges from the command line seems quite complicated.
> What about VMs with multiple disks? Or VMs with multiple nics on
> different bridges?
>
> I think it's easier for both the implementation and the user to just
> use a config file in that case.

I agree, but it sounded to me like you guys wanted to have a "complete"
interface even if it's unimplemented. This is what a complete
interface would look like to me.

>
> >   "-C   Use config file for VM fork.\n"
> >   "-Q   Use qemu save file for VM fork.\n"
> >   "--launch-dm Launch device model (QEMU) for VM 
> > fork.\n"
> >   "--fork-reset Reset VM fork.\n"
> >   "-p   Do not unpause VMs after fork."
>
> I think the default behaviour should be to leave the original VM
> paused and the forked one running, and hence this should be:

That is the default. I guess the text saying "VMs" was incorrectly
worded; it just means don't unpause the fork after it's created. The
parent always remains paused.

>
> "-p   Leave forked VM paused."
> "-u   Leave parent VM unpaused."

But you shouldn't unpause the parent VM at all. It should remain
paused as long as there are forks running that were split from it.
Unpausing it will lead to subtle and unexplainable crashes in the fork,
since the fork will now use pages that are from a different execution
path. Technically in the future it would be possible to unpause the VM
but it 

Re: [Xen-devel] [PATCH v2] xen/x86: clear per cpu stub page information in cpu_smpboot_free()

2020-01-08 Thread Jürgen Groß

On 08.01.20 16:21, Jan Beulich wrote:

On 08.01.2020 15:34, Juergen Gross wrote:

cpu_smpboot_free() removes the stubs for the cpu going offline, but it
isn't clearing the related percpu variables. This will result in
crashes when a stub page is released after all related cpus have gone
offline and one of those cpus goes online again later.

Fix that by clearing stubs.addr and stubs.mfn in order to allocate a
new stub page when needed.


I was really hoping for you to mention CPU parking here. How about

"Fix that by clearing stubs.mfn (and also stubs.addr just to be on
  the safe side) in order to allocate a new stub page when needed,
  irrespective of whether the CPU gets parked or removed."


--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -945,6 +945,8 @@ static void cpu_smpboot_free(unsigned int cpu, bool remove)
   (per_cpu(stubs.addr, cpu) | ~PAGE_MASK) + 1);
  if ( i == STUBS_PER_PAGE )
  free_domheap_page(mfn_to_page(mfn));
+per_cpu(stubs.addr, cpu) = 0;
+per_cpu(stubs.mfn, cpu) = 0;


Looking more closely, I think I'd prefer these two lines (of which
the addr one isn't strictly needed anyway) to move ahead of the
if().

If you agree, I'll be happy to do both while committing.


I agree.

I'm not sure the addr clearing can be omitted. It might result in
problems when, during onlining, an early error happens in
cpu_smpboot_alloc(), thus skipping the call of alloc_stub_page().
The subsequent call of cpu_smpboot_free() will then overwrite mfn 0.


Juergen



Re: [Xen-devel] [XEN PATCH v2 2/6] xen: Have Kconfig check $(CC)'s version

2020-01-08 Thread Jan Beulich
On 08.01.2020 15:47, Anthony PERARD wrote:
> On Mon, Jan 06, 2020 at 03:34:43PM +0100, Jan Beulich wrote:
>> On 06.01.2020 15:01, Anthony PERARD wrote:
>>> On Fri, Jan 03, 2020 at 05:42:18PM +0100, Jan Beulich wrote:
 Wouldn't both better
 have a "depends on CC_IS_*" line instead? This would then also
 result (afaict) in no CONFIG_CLANG_VERSION in .config if building
 with gcc (and vice versa), instead of a bogus CONFIG_CLANG_VERSION=0.
>>>
>>> It sounds attractive to remove variables from .config, but it is equally
>>> attractive to always have a variable set. It can be used
>>> unconditionally when always set (without risking invalid syntax, for
>>> example).
>>
>> Hmm, yes, as long as we don't have (by mechanical conversion) or gain
>> constructs like
>>
>> #if CONFIG_GCC_VERSION < 5 /* must be gcc 4.x */
>>
>> Plus - what's CONFIG_CC_IS_{GCC,CLANG} good for then? The same can
>> then be achieved by comparing CONFIG_{GCC,CLANG}_VERSION against zero.
> 
> Sure, but it is much easier to understand what "ifdef CONFIG_CC_IS_GCC"
> is actually checking than it is to understand what
> "[ $CONFIG_GCC_VERSION -ne 0 ]" is for. In the second form, it isn't
> immediately obvious to humans that we are simply checking which compiler
> is in use.

And I wasn't really suggesting to drop the CC_IS_* ones. What I
dislike is the duplication resulting from the *_VERSION ones not
having a "depends on CC_IS_*".

Jan


[Xen-devel] [PATCH v2 6/9] xen/sched: replace null scheduler percpu-variable with pdata hook

2020-01-08 Thread Juergen Gross
Instead of having its own percpu variable for per-cpu private data, the
null scheduler should use the generic scheduler interface for that purpose.

Signed-off-by: Juergen Gross 
---
 xen/common/sched/null.c | 89 +
 1 file changed, 60 insertions(+), 29 deletions(-)

diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c
index b99f1e3c65..3161ac2e62 100644
--- a/xen/common/sched/null.c
+++ b/xen/common/sched/null.c
@@ -89,7 +89,6 @@ struct null_private {
 struct null_pcpu {
 struct sched_unit *unit;
 };
-DEFINE_PER_CPU(struct null_pcpu, npc);
 
 /*
  * Schedule unit
@@ -159,32 +158,48 @@ static void null_deinit(struct scheduler *ops)
 ops->sched_data = NULL;
 }
 
-static void init_pdata(struct null_private *prv, unsigned int cpu)
+static void init_pdata(struct null_private *prv, struct null_pcpu *npc,
+   unsigned int cpu)
 {
 /* Mark the pCPU as free, and with no unit assigned */
 cpumask_set_cpu(cpu, >cpus_free);
-per_cpu(npc, cpu).unit = NULL;
+npc->unit = NULL;
 }
 
 static void null_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
 {
 struct null_private *prv = null_priv(ops);
 
-/* alloc_pdata is not implemented, so we want this to be NULL. */
-ASSERT(!pdata);
+ASSERT(pdata);
 
-init_pdata(prv, cpu);
+init_pdata(prv, pdata, cpu);
 }
 
 static void null_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 {
 struct null_private *prv = null_priv(ops);
+struct null_pcpu *npc = pcpu;
 
-/* alloc_pdata not implemented, so this must have stayed NULL */
-ASSERT(!pcpu);
+ASSERT(npc);
 
 cpumask_clear_cpu(cpu, >cpus_free);
-per_cpu(npc, cpu).unit = NULL;
+npc->unit = NULL;
+}
+
+static void *null_alloc_pdata(const struct scheduler *ops, int cpu)
+{
+struct null_pcpu *npc;
+
+npc = xzalloc(struct null_pcpu);
+if ( npc == NULL )
+return ERR_PTR(-ENOMEM);
+
+return npc;
+}
+
+static void null_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
+{
+xfree(pcpu);
 }
 
 static void *null_alloc_udata(const struct scheduler *ops,
@@ -268,6 +283,7 @@ pick_res(struct null_private *prv, const struct sched_unit 
*unit)
 unsigned int bs;
 unsigned int cpu = sched_unit_master(unit), new_cpu;
 cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
+struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
 ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
@@ -286,8 +302,7 @@ pick_res(struct null_private *prv, const struct sched_unit 
*unit)
  * don't, so we get to keep in the scratch cpumask what we have just
  * put in it.)
  */
-if ( likely((per_cpu(npc, cpu).unit == NULL ||
- per_cpu(npc, cpu).unit == unit)
+if ( likely((npc->unit == NULL || npc->unit == unit)
 && cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) )
 {
 new_cpu = cpu;
@@ -336,9 +351,11 @@ pick_res(struct null_private *prv, const struct sched_unit 
*unit)
 static void unit_assign(struct null_private *prv, struct sched_unit *unit,
 unsigned int cpu)
 {
+struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
+
 ASSERT(is_unit_online(unit));
 
-per_cpu(npc, cpu).unit = unit;
+npc->unit = unit;
 sched_set_res(unit, get_sched_res(cpu));
 cpumask_clear_cpu(cpu, >cpus_free);
 
@@ -363,12 +380,13 @@ static bool unit_deassign(struct null_private *prv, 
struct sched_unit *unit)
 unsigned int bs;
 unsigned int cpu = sched_unit_master(unit);
 struct null_unit *wvc;
+struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
 ASSERT(list_empty(_unit(unit)->waitq_elem));
-ASSERT(per_cpu(npc, cpu).unit == unit);
+ASSERT(npc->unit == unit);
 ASSERT(!cpumask_test_cpu(cpu, >cpus_free));
 
-per_cpu(npc, cpu).unit = NULL;
+npc->unit = NULL;
 cpumask_set_cpu(cpu, >cpus_free);
 
 dprintk(XENLOG_G_INFO, "%d <-- NULL (%pdv%d)\n", cpu, unit->domain,
@@ -436,7 +454,7 @@ static spinlock_t *null_switch_sched(struct scheduler 
*new_ops,
  */
 ASSERT(!local_irq_is_enabled());
 
-init_pdata(prv, cpu);
+init_pdata(prv, pdata, cpu);
 
 return >_lock;
 }
@@ -446,6 +464,7 @@ static void null_unit_insert(const struct scheduler *ops,
 {
 struct null_private *prv = null_priv(ops);
 struct null_unit *nvc = null_unit(unit);
+struct null_pcpu *npc;
 unsigned int cpu;
 spinlock_t *lock;
 
@@ -462,6 +481,7 @@ static void null_unit_insert(const struct scheduler *ops,
  retry:
 sched_set_res(unit, pick_res(prv, unit));
 cpu = sched_unit_master(unit);
+npc = get_sched_res(cpu)->sched_priv;
 
 spin_unlock(lock);
 
@@ -471,7 +491,7 @@ static void null_unit_insert(const struct scheduler *ops,
 cpupool_domain_master_cpumask(unit->domain));
 
 /* If the pCPU is free, we assign unit to it */
-if ( 

[Xen-devel] [PATCH v2 9/9] xen/sched: add const qualifier where appropriate

2020-01-08 Thread Juergen Gross
Make use of the const qualifier more often in scheduling code.

Signed-off-by: Juergen Gross 
Reviewed-by: Dario Faggioli 
---
 xen/common/sched/arinc653.c |  4 ++--
 xen/common/sched/core.c | 25 +++---
 xen/common/sched/cpupool.c  |  2 +-
 xen/common/sched/credit.c   | 44 --
 xen/common/sched/credit2.c  | 52 +++--
 xen/common/sched/null.c | 17 ---
 xen/common/sched/rt.c   | 32 ++--
 xen/include/xen/sched.h |  9 
 8 files changed, 96 insertions(+), 89 deletions(-)

diff --git a/xen/common/sched/arinc653.c b/xen/common/sched/arinc653.c
index bce8021e3f..5421918221 100644
--- a/xen/common/sched/arinc653.c
+++ b/xen/common/sched/arinc653.c
@@ -608,7 +608,7 @@ static struct sched_resource *
 a653sched_pick_resource(const struct scheduler *ops,
 const struct sched_unit *unit)
 {
-cpumask_t *online;
+const cpumask_t *online;
 unsigned int cpu;
 
 /*
@@ -639,7 +639,7 @@ a653_switch_sched(struct scheduler *new_ops, unsigned int 
cpu,
   void *pdata, void *vdata)
 {
 struct sched_resource *sr = get_sched_res(cpu);
-arinc653_unit_t *svc = vdata;
+const arinc653_unit_t *svc = vdata;
 
 ASSERT(!pdata && svc && is_idle_unit(svc->unit));
 
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d32b9b1baa..944164d78a 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -175,7 +175,7 @@ static inline struct scheduler *dom_scheduler(const struct 
domain *d)
 
 static inline struct scheduler *unit_scheduler(const struct sched_unit *unit)
 {
-struct domain *d = unit->domain;
+const struct domain *d = unit->domain;
 
 if ( likely(d->cpupool != NULL) )
 return d->cpupool->sched;
@@ -202,7 +202,7 @@ static inline struct scheduler *vcpu_scheduler(const struct 
vcpu *v)
 }
 #define VCPU2ONLINE(_v) cpupool_domain_master_cpumask((_v)->domain)
 
-static inline void trace_runstate_change(struct vcpu *v, int new_state)
+static inline void trace_runstate_change(const struct vcpu *v, int new_state)
 {
 struct { uint32_t vcpu:16, domain:16; } d;
 uint32_t event;
@@ -220,7 +220,7 @@ static inline void trace_runstate_change(struct vcpu *v, 
int new_state)
 __trace_var(event, 1/*tsc*/, sizeof(d), );
 }
 
-static inline void trace_continue_running(struct vcpu *v)
+static inline void trace_continue_running(const struct vcpu *v)
 {
 struct { uint32_t vcpu:16, domain:16; } d;
 
@@ -302,7 +302,8 @@ void sched_guest_idle(void (*idle) (void), unsigned int cpu)
 atomic_dec(_cpu(sched_urgent_count, cpu));
 }
 
-void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
+void vcpu_runstate_get(const struct vcpu *v,
+   struct vcpu_runstate_info *runstate)
 {
 spinlock_t *lock;
 s_time_t delta;
@@ -324,7 +325,7 @@ void vcpu_runstate_get(struct vcpu *v, struct 
vcpu_runstate_info *runstate)
 uint64_t get_cpu_idle_time(unsigned int cpu)
 {
 struct vcpu_runstate_info state = { 0 };
-struct vcpu *v = idle_vcpu[cpu];
+const struct vcpu *v = idle_vcpu[cpu];
 
 if ( cpu_online(cpu) && v )
 vcpu_runstate_get(v, );
@@ -392,7 +393,7 @@ static void sched_free_unit_mem(struct sched_unit *unit)
 
 static void sched_free_unit(struct sched_unit *unit, struct vcpu *v)
 {
-struct vcpu *vunit;
+const struct vcpu *vunit;
 unsigned int cnt = 0;
 
 /* Don't count to be released vcpu, might be not in vcpu list yet. */
@@ -522,7 +523,7 @@ static unsigned int sched_select_initial_cpu(const struct 
vcpu *v)
 
 int sched_init_vcpu(struct vcpu *v)
 {
-struct domain *d = v->domain;
+const struct domain *d = v->domain;
 struct sched_unit *unit;
 unsigned int processor;
 
@@ -913,7 +914,7 @@ static void sched_unit_move_locked(struct sched_unit *unit,
unsigned int new_cpu)
 {
 unsigned int old_cpu = unit->res->master_cpu;
-struct vcpu *v;
+const struct vcpu *v;
 
 rcu_read_lock(&sched_res_rculock);
 
@@ -1090,7 +1091,7 @@ static bool sched_check_affinity_broken(const struct 
sched_unit *unit)
 return false;
 }
 
-static void sched_reset_affinity_broken(struct sched_unit *unit)
+static void sched_reset_affinity_broken(const struct sched_unit *unit)
 {
 struct vcpu *v;
 
@@ -1176,7 +1177,7 @@ void restore_vcpu_affinity(struct domain *d)
 int cpu_disable_scheduler(unsigned int cpu)
 {
 struct domain *d;
-struct cpupool *c;
+const struct cpupool *c;
 cpumask_t online_affinity;
 int ret = 0;
 
@@ -1251,8 +1252,8 @@ out:
 static int cpu_disable_scheduler_check(unsigned int cpu)
 {
 struct domain *d;
-struct vcpu *v;
-struct cpupool *c;
+const struct vcpu *v;
+const struct cpupool *c;
 
 c = get_sched_res(cpu)->cpupool;
 if ( c == NULL )
diff --git a/xen/common/sched/cpupool.c 

[Xen-devel] [PATCH v2 8/9] xen/sched: eliminate sched_tick_suspend() and sched_tick_resume()

2020-01-08 Thread Juergen Gross
sched_tick_suspend() and sched_tick_resume() only call rcu-related
functions, so eliminate them and do the rcu_idle_timer*() calls in
rcu_idle_[enter|exit]().

Signed-off-by: Juergen Gross 
Reviewed-by: Dario Faggioli 
Acked-by: Julien Grall 
Acked-by: Andrew Cooper 
---
 xen/arch/arm/domain.c |  6 +++---
 xen/arch/x86/acpi/cpu_idle.c  | 15 ---
 xen/arch/x86/cpu/mwait-idle.c |  8 
 xen/common/rcupdate.c |  7 +--
 xen/common/sched/core.c   | 12 
 xen/include/xen/rcupdate.h|  3 ---
 xen/include/xen/sched.h   |  2 --
 7 files changed, 20 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index c0a13aa0ab..aa3df3b3ba 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -46,8 +46,8 @@ static void do_idle(void)
 {
 unsigned int cpu = smp_processor_id();
 
-sched_tick_suspend();
-/* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */
+rcu_idle_enter(cpu);
+/* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */
 process_pending_softirqs();
 
 local_irq_disable();
@@ -58,7 +58,7 @@ static void do_idle(void)
 }
 local_irq_enable();
 
-sched_tick_resume();
+rcu_idle_exit(cpu);
 }
 
 void idle_loop(void)
diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index 5edd1844f4..2676f0d7da 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -599,7 +599,8 @@ void update_idle_stats(struct acpi_processor_power *power,
 
 static void acpi_processor_idle(void)
 {
-struct acpi_processor_power *power = processor_powers[smp_processor_id()];
+unsigned int cpu = smp_processor_id();
+struct acpi_processor_power *power = processor_powers[cpu];
 struct acpi_processor_cx *cx = NULL;
 int next_state;
 uint64_t t1, t2 = 0;
@@ -648,8 +649,8 @@ static void acpi_processor_idle(void)
 
 cpufreq_dbs_timer_suspend();
 
-sched_tick_suspend();
-/* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */
+rcu_idle_enter(cpu);
+/* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */
 process_pending_softirqs();
 
 /*
@@ -658,10 +659,10 @@ static void acpi_processor_idle(void)
  */
 local_irq_disable();
 
-if ( !cpu_is_haltable(smp_processor_id()) )
+if ( !cpu_is_haltable(cpu) )
 {
 local_irq_enable();
-sched_tick_resume();
+rcu_idle_exit(cpu);
 cpufreq_dbs_timer_resume();
 return;
 }
@@ -786,7 +787,7 @@ static void acpi_processor_idle(void)
 /* Now in C0 */
 power->last_state = &power->states[0];
 local_irq_enable();
-sched_tick_resume();
+rcu_idle_exit(cpu);
 cpufreq_dbs_timer_resume();
 return;
 }
@@ -794,7 +795,7 @@ static void acpi_processor_idle(void)
 /* Now in C0 */
 power->last_state = &power->states[0];
 
-sched_tick_resume();
+rcu_idle_exit(cpu);
 cpufreq_dbs_timer_resume();
 
 if ( cpuidle_current_governor->reflect )
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 52413e6da1..f49b04c45b 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -755,8 +755,8 @@ static void mwait_idle(void)
 
cpufreq_dbs_timer_suspend();
 
-   sched_tick_suspend();
-   /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */
+   rcu_idle_enter(cpu);
+   /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */
process_pending_softirqs();
 
/* Interrupts must be disabled for C2 and higher transitions. */
@@ -764,7 +764,7 @@ static void mwait_idle(void)
 
if (!cpu_is_haltable(cpu)) {
local_irq_enable();
-   sched_tick_resume();
+   rcu_idle_exit(cpu);
cpufreq_dbs_timer_resume();
return;
}
@@ -806,7 +806,7 @@ static void mwait_idle(void)
if (!(lapic_timer_reliable_states & (1 << cstate)))
lapic_timer_on();
 
-   sched_tick_resume();
+   rcu_idle_exit(cpu);
cpufreq_dbs_timer_resume();
 
if ( cpuidle_current_governor->reflect )
diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index a56103c6f7..cb712c8690 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -459,7 +459,7 @@ int rcu_needs_cpu(int cpu)
  * periodically poke rcu_pending(), so that it will invoke the callback
  * not too late after the end of the grace period.
  */
-void rcu_idle_timer_start()
+static void rcu_idle_timer_start(void)
 {
 struct rcu_data *rdp = &this_cpu(rcu_data);
 
@@ -475,7 +475,7 @@ void rcu_idle_timer_start()
 rdp->idle_timer_active = true;
 }
 
-void rcu_idle_timer_stop()
+static void rcu_idle_timer_stop(void)
 {
 struct rcu_data *rdp = &this_cpu(rcu_data);
 
@@ -633,10 +633,13 @@ void rcu_idle_enter(unsigned int cpu)
  * See the comment before cpumask_andnot() in  

[Xen-devel] [PATCH v2 3/9] xen/sched: cleanup sched.h

2020-01-08 Thread Juergen Gross
There are some items in include/xen/sched.h which can be moved to
private.h as they are scheduler private.

Signed-off-by: Juergen Gross 
Reviewed-by: Dario Faggioli 
---
 xen/common/sched/core.c|  2 +-
 xen/common/sched/private.h | 13 +
 xen/include/xen/sched.h| 17 -
 3 files changed, 14 insertions(+), 18 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 2fae959e90..4153d110be 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1346,7 +1346,7 @@ int vcpu_set_hard_affinity(struct vcpu *v, const 
cpumask_t *affinity)
 return vcpu_set_affinity(v, affinity, v->sched_unit->cpu_hard_affinity);
 }
 
-int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity)
+static int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity)
 {
 return vcpu_set_affinity(v, affinity, v->sched_unit->cpu_soft_affinity);
 }
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index a702fd23b1..edce354dc7 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -533,6 +533,7 @@ static inline void sched_unit_unpause(const struct 
sched_unit *unit)
 struct cpupool
 {
 int  cpupool_id;
+#define CPUPOOLID_NONE-1
 unsigned int n_dom;
 cpumask_var_tcpu_valid;  /* all cpus assigned to pool */
 cpumask_var_tres_valid;  /* all scheduling resources of pool */
@@ -618,5 +619,17 @@ affinity_balance_cpumask(const struct sched_unit *unit, 
int step,
 
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
+void schedule_dump(struct cpupool *c);
+struct scheduler *scheduler_get_default(void);
+struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr);
+void scheduler_free(struct scheduler *sched);
+int cpu_disable_scheduler(unsigned int cpu);
+int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
+int schedule_cpu_rm(unsigned int cpu);
+int sched_move_domain(struct domain *d, struct cpupool *c);
+struct cpupool *cpupool_get_by_id(int poolid);
+void cpupool_put(struct cpupool *pool);
+int cpupool_add_domain(struct domain *d, int poolid);
+void cpupool_rm_domain(struct domain *d);
 
 #endif /* __XEN_SCHED_IF_H__ */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d3adc69ab9..b4c2e4f7c2 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -687,7 +687,6 @@ int  sched_init_vcpu(struct vcpu *v);
 void sched_destroy_vcpu(struct vcpu *v);
 int  sched_init_domain(struct domain *d, int poolid);
 void sched_destroy_domain(struct domain *d);
-int sched_move_domain(struct domain *d, struct cpupool *c);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
 int  sched_id(void);
@@ -920,19 +919,10 @@ static inline bool sched_has_urgent_vcpu(void)
 return atomic_read(&this_cpu(sched_urgent_count));
 }
 
-struct scheduler;
-
-struct scheduler *scheduler_get_default(void);
-struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr);
-void scheduler_free(struct scheduler *sched);
-int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
-int schedule_cpu_rm(unsigned int cpu);
 void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value);
-int cpu_disable_scheduler(unsigned int cpu);
 void sched_setup_dom0_vcpus(struct domain *d);
 int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason);
 int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity);
-int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity);
 void restore_vcpu_affinity(struct domain *d);
 int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
  struct xen_domctl_vcpuaffinity *vcpuaff);
@@ -1065,17 +1055,10 @@ extern enum cpufreq_controller {
 FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
 } cpufreq_controller;
 
-#define CPUPOOLID_NONE-1
-
-struct cpupool *cpupool_get_by_id(int poolid);
-void cpupool_put(struct cpupool *pool);
-int cpupool_add_domain(struct domain *d, int poolid);
-void cpupool_rm_domain(struct domain *d);
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 int cpupool_get_id(const struct domain *d);
 cpumask_t *cpupool_valid_cpus(struct cpupool *pool);
-void schedule_dump(struct cpupool *c);
 extern void dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
-- 
2.16.4


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [RFC PATCH V2 09/11] xen: Clear IRQD_IRQ_STARTED flag during shutdown PIRQs

2020-01-08 Thread Thomas Gleixner
Anchal Agarwal  writes:

> shutdown_pirq is invoked during the hibernation path, and hence
> PIRQs should be restarted during resume.
> Before commit 020db9d3c1dc0a ("xen/events: Fix interrupt lost during
> irq_disable and irq_enable"), startup_pirq was automatically called
> during irq_enable; after that commit, PIRQs no longer get explicitly
> started once resumed from hibernation.
>
> chip->irq_startup is called on resume only if IRQD_IRQ_STARTED is unset
> during irq_startup. This flag gets cleared by free_irq->irq_shutdown
> during suspend, but free_irq() never gets explicitly called for
> ioapic-edge and ioapic-level interrupts, as the respective drivers do
> nothing during suspend/resume. So we shut them down explicitly in the
> syscore_suspend path in the first place, to clear the IRQ<->event
> channel mapping. shutdown_pirq being called explicitly during suspend
> does not clear this flag, hence .irq_enable is called in irq_startup
> during resume instead, and PIRQs never start up.

What? 

> +void irq_state_clr_started(struct irq_desc *desc)
>  {
>   irqd_clear(>irq_data, IRQD_IRQ_STARTED);
>  }
> +EXPORT_SYMBOL_GPL(irq_state_clr_started);

This is core internal state and not supposed to be fiddled with by
drivers.

irq_chip has irq_suspend/resume/pm_shutdown callbacks for a reason.

Thanks,

   tglx


[Xen-devel] [PATCH v2 1/9] xen/sched: move schedulers and cpupool coding to dedicated directory

2020-01-08 Thread Juergen Gross
Move sched*.c and cpupool.c to a new directory common/sched.

Signed-off-by: Juergen Gross 
---
V2:
- renamed sources (Dario Faggioli, Andrew Cooper)
---
 MAINTAINERS   |  8 +--
 xen/common/Kconfig| 66 +--
 xen/common/Makefile   |  8 +--
 xen/common/sched/Kconfig  | 65 ++
 xen/common/sched/Makefile |  7 +++
 xen/common/{sched_arinc653.c => sched/arinc653.c} |  0
 xen/common/{compat/schedule.c => sched/compat.c}  |  2 +-
 xen/common/{schedule.c => sched/core.c}   |  2 +-
 xen/common/{ => sched}/cpupool.c  |  0
 xen/common/{sched_credit.c => sched/credit.c} |  0
 xen/common/{sched_credit2.c => sched/credit2.c}   |  0
 xen/common/{sched_null.c => sched/null.c} |  0
 xen/common/{sched_rt.c => sched/rt.c} |  0
 13 files changed, 80 insertions(+), 78 deletions(-)
 create mode 100644 xen/common/sched/Kconfig
 create mode 100644 xen/common/sched/Makefile
 rename xen/common/{sched_arinc653.c => sched/arinc653.c} (100%)
 rename xen/common/{compat/schedule.c => sched/compat.c} (97%)
 rename xen/common/{schedule.c => sched/core.c} (99%)
 rename xen/common/{ => sched}/cpupool.c (100%)
 rename xen/common/{sched_credit.c => sched/credit.c} (100%)
 rename xen/common/{sched_credit2.c => sched/credit2.c} (100%)
 rename xen/common/{sched_null.c => sched/null.c} (100%)
 rename xen/common/{sched_rt.c => sched/rt.c} (100%)

diff --git a/MAINTAINERS b/MAINTAINERS
index eaea4620e2..9d2ac631ba 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -174,7 +174,7 @@ M:  Josh Whitehead 
 M: Stewart Hildebrand 
 S: Supported
 L: DornerWorks Xen-Devel 
-F: xen/common/sched_arinc653.c
+F: xen/common/sched/arinc653.c
 F: tools/libxc/xc_arinc653.c
 
 ARM (W/ VIRTUALISATION EXTENSIONS) ARCHITECTURE
@@ -212,7 +212,7 @@ CPU POOLS
 M: Juergen Gross 
 M: Dario Faggioli 
 S: Supported
-F: xen/common/cpupool.c
+F: xen/common/sched/cpupool.c
 
 DEVICE TREE
 M: Stefano Stabellini 
@@ -378,13 +378,13 @@ RTDS SCHEDULER
 M: Dario Faggioli 
 M: Meng Xu 
 S: Supported
-F: xen/common/sched_rt.c
+F: xen/common/sched/rt.c
 
 SCHEDULING
 M: George Dunlap 
 M: Dario Faggioli 
 S: Supported
-F: xen/common/sched*
+F: xen/common/sched/
 
 SEABIOS UPSTREAM
 M: Wei Liu 
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index b3d161d057..9d6d09eb37 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -275,71 +275,7 @@ config ARGO
 
  If unsure, say N.
 
-menu "Schedulers"
-   visible if EXPERT = "y"
-
-config SCHED_CREDIT
-   bool "Credit scheduler support"
-   default y
-   ---help---
- The traditional credit scheduler is a general purpose scheduler.
-
-config SCHED_CREDIT2
-   bool "Credit2 scheduler support"
-   default y
-   ---help---
- The credit2 scheduler is a general purpose scheduler that is
- optimized for lower latency and higher VM density.
-
-config SCHED_RTDS
-   bool "RTDS scheduler support (EXPERIMENTAL)"
-   default y
-   ---help---
- The RTDS scheduler is a soft and firm real-time scheduler for
- multicore, targeted for embedded, automotive, graphics and gaming
- in the cloud, and general low-latency workloads.
-
-config SCHED_ARINC653
-   bool "ARINC653 scheduler support (EXPERIMENTAL)"
-   default DEBUG
-   ---help---
- The ARINC653 scheduler is a hard real-time scheduler for single
- cores, targeted for avionics, drones, and medical devices.
-
-config SCHED_NULL
-   bool "Null scheduler support (EXPERIMENTAL)"
-   default y
-   ---help---
- The null scheduler is a static, zero overhead scheduler,
- for when there always are less vCPUs than pCPUs, typically
- in embedded or HPC scenarios.
-
-choice
-   prompt "Default Scheduler?"
-   default SCHED_CREDIT2_DEFAULT
-
-   config SCHED_CREDIT_DEFAULT
-   bool "Credit Scheduler" if SCHED_CREDIT
-   config SCHED_CREDIT2_DEFAULT
-   bool "Credit2 Scheduler" if SCHED_CREDIT2
-   config SCHED_RTDS_DEFAULT
-   bool "RT Scheduler" if SCHED_RTDS
-   config SCHED_ARINC653_DEFAULT
-   bool "ARINC653 Scheduler" if SCHED_ARINC653
-   config SCHED_NULL_DEFAULT
-   bool "Null Scheduler" if SCHED_NULL
-endchoice
-
-config SCHED_DEFAULT
-   string
-   default "credit" if SCHED_CREDIT_DEFAULT
-   default "credit2" if SCHED_CREDIT2_DEFAULT
-   default "rtds" if SCHED_RTDS_DEFAULT
-   default "arinc653" if SCHED_ARINC653_DEFAULT
-   default "null" if SCHED_NULL_DEFAULT
-   default "credit2"
-
-endmenu
+source "common/sched/Kconfig"
 
 config CRYPTO
bool
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 

[Xen-devel] [PATCH v2 4/9] xen/sched: remove special cases for free cpus in schedulers

2020-01-08 Thread Juergen Gross
With the idle scheduler now taking care of all cpus not in any cpupool,
the special cases in the other schedulers for cpus without an associated
cpupool can be removed.

Signed-off-by: Juergen Gross 
---
 xen/common/sched/credit.c  |  7 ++-
 xen/common/sched/credit2.c | 30 --
 2 files changed, 2 insertions(+), 35 deletions(-)

diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index 4329d9df56..6b04f8f71c 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -1690,11 +1690,8 @@ csched_load_balance(struct csched_private *prv, int cpu,
 
 BUG_ON(get_sched_res(cpu) != snext->unit->res);
 
-/*
- * If this CPU is going offline, or is not (yet) part of any cpupool
- * (as it happens, e.g., during cpu bringup), we shouldn't steal work.
- */
-if ( unlikely(!cpumask_test_cpu(cpu, online) || c == NULL) )
+/* If this CPU is going offline, we shouldn't steal work.  */
+if ( unlikely(!cpumask_test_cpu(cpu, online)) )
 goto out;
 
 if ( snext->pri == CSCHED_PRI_IDLE )
diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index 65e8ab052e..849d254e04 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -2744,40 +2744,10 @@ static void
 csched2_unit_migrate(
 const struct scheduler *ops, struct sched_unit *unit, unsigned int new_cpu)
 {
-struct domain *d = unit->domain;
 struct csched2_unit * const svc = csched2_unit(unit);
 struct csched2_runqueue_data *trqd;
 s_time_t now = NOW();
 
-/*
- * Being passed a target pCPU which is outside of our cpupool is only
- * valid if we are shutting down (or doing ACPI suspend), and we are
- * moving everyone to BSP, no matter whether or not BSP is inside our
- * cpupool.
- *
- * And since there indeed is the chance that it is not part of it, all
- * we must do is remove _and_ unassign the unit from any runqueue, as
- * well as updating v->processor with the target, so that the suspend
- * process can continue.
- *
- * It will then be during resume that a new, meaningful, value for
- * v->processor will be chosen, and during actual domain unpause that
- * the unit will be assigned to and added to the proper runqueue.
- */
-if ( unlikely(!cpumask_test_cpu(new_cpu, 
cpupool_domain_master_cpumask(d))) )
-{
-ASSERT(system_state == SYS_STATE_suspend);
-if ( unit_on_runq(svc) )
-{
-runq_remove(svc);
-update_load(ops, svc->rqd, NULL, -1, now);
-}
-_runq_deassign(svc);
-sched_set_res(unit, get_sched_res(new_cpu));
-return;
-}
-
-/* If here, new_cpu must be a valid Credit2 pCPU, and in our affinity. */
 ASSERT(cpumask_test_cpu(new_cpu, &csched2_priv(ops)->initialized));
 ASSERT(cpumask_test_cpu(new_cpu, unit->cpu_hard_affinity));
 
-- 
2.16.4



[Xen-devel] [PATCH v2 2/9] xen/sched: make sched-if.h really scheduler private

2020-01-08 Thread Juergen Gross
include/xen/sched-if.h should be private to scheduler code, so move it
to common/sched/private.h and move the remaining use cases to
cpupool.c and core.c.

Signed-off-by: Juergen Gross 
Reviewed-by: Dario Faggioli 
---
V2:
- rename to private.h (Andrew Cooper)
---
 xen/arch/x86/dom0_build.c  |   5 +-
 xen/common/domain.c|  70 
 xen/common/domctl.c| 135 +--
 xen/common/sched/arinc653.c|   3 +-
 xen/common/sched/core.c| 191 -
 xen/common/sched/cpupool.c |  13 +-
 xen/common/sched/credit.c  |   2 +-
 xen/common/sched/credit2.c |   3 +-
 xen/common/sched/null.c|   3 +-
 .../xen/sched-if.h => common/sched/private.h}  |   3 -
 xen/common/sched/rt.c  |   3 +-
 xen/include/xen/domain.h   |   3 +
 xen/include/xen/sched.h|   7 +
 13 files changed, 228 insertions(+), 213 deletions(-)
 rename xen/{include/xen/sched-if.h => common/sched/private.h} (99%)

diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 28b964e018..56c2dee0fc 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -9,7 +9,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 
 #include 
@@ -227,9 +226,9 @@ unsigned int __init dom0_max_vcpus(void)
 dom0_nodes = node_online_map;
 for_each_node_mask ( node, dom0_nodes )
 cpumask_or(&dom0_cpus, &dom0_cpus, &node_to_cpumask(node));
-cpumask_and(&dom0_cpus, &dom0_cpus, cpupool0->cpu_valid);
+cpumask_and(&dom0_cpus, &dom0_cpus, cpupool_valid_cpus(cpupool0));
 if ( cpumask_empty(&dom0_cpus) )
-cpumask_copy(&dom0_cpus, cpupool0->cpu_valid);
+cpumask_copy(&dom0_cpus, cpupool_valid_cpus(cpupool0));
 
 max_vcpus = cpumask_weight(&dom0_cpus);
 if ( opt_dom0_max_vcpus_min > max_vcpus )
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 0b1103fdb2..71a7c2776f 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -10,7 +10,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
@@ -565,75 +564,6 @@ void __init setup_system_domains(void)
 #endif
 }
 
-void domain_update_node_affinity(struct domain *d)
-{
-cpumask_var_t dom_cpumask, dom_cpumask_soft;
-cpumask_t *dom_affinity;
-const cpumask_t *online;
-struct sched_unit *unit;
-unsigned int cpu;
-
-/* Do we have vcpus already? If not, no need to update node-affinity. */
-if ( !d->vcpu || !d->vcpu[0] )
-return;
-
-if ( !zalloc_cpumask_var(&dom_cpumask) )
-return;
-if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
-{
-free_cpumask_var(dom_cpumask);
-return;
-}
-
-online = cpupool_domain_master_cpumask(d);
-
-spin_lock(&d->node_affinity_lock);
-
-/*
- * If d->auto_node_affinity is true, let's compute the domain's
- * node-affinity and update d->node_affinity accordingly. if false,
- * just leave d->auto_node_affinity alone.
- */
-if ( d->auto_node_affinity )
-{
-/*
- * We want the narrowest possible set of pcpus (to get the narrowest
- * possible set of nodes). What we need is the cpumask of where the
- * domain can run (the union of the hard affinity of all its vcpus),
- * and the full mask of where it would prefer to run (the union of
- * the soft affinity of all its various vcpus). Let's build them.
- */
-for_each_sched_unit ( d, unit )
-{
-cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity);
-cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
-   unit->cpu_soft_affinity);
-}
-/* Filter out non-online cpus */
-cpumask_and(dom_cpumask, dom_cpumask, online);
-ASSERT(!cpumask_empty(dom_cpumask));
-/* And compute the intersection between hard, online and soft */
-cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
-
-/*
- * If not empty, the intersection of hard, soft and online is the
- * narrowest set we want. If empty, we fall back to hard
- */
-dom_affinity = cpumask_empty(dom_cpumask_soft) ?
-   dom_cpumask : dom_cpumask_soft;
-
-nodes_clear(d->node_affinity);
-for_each_cpu ( cpu, dom_affinity )
-node_set(cpu_to_node(cpu), d->node_affinity);
-}
-
-spin_unlock(&d->node_affinity_lock);
-
-free_cpumask_var(dom_cpumask_soft);
-free_cpumask_var(dom_cpumask);
-}
-
-
 int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity)
 {
 /* Being disjoint with the system is just wrong. */
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 650310e874..8b819f56e5 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -11,7 

[Xen-devel] [PATCH v2 5/9] xen/sched: use scratch cpumask instead of allocating it on the stack

2020-01-08 Thread Juergen Gross
In the rt scheduler there are three instances of cpumasks allocated on
the stack. Replace them by using cpumask_scratch.

Signed-off-by: Juergen Gross 
---
 xen/common/sched/rt.c | 56 ++-
 1 file changed, 37 insertions(+), 19 deletions(-)

diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index 8203b63a9d..d26f77f554 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -637,23 +637,38 @@ replq_reinsert(const struct scheduler *ops, struct 
rt_unit *svc)
  * and available resources
  */
 static struct sched_resource *
-rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
+rt_res_pick_locked(const struct sched_unit *unit, unsigned int locked_cpu)
 {
-cpumask_t cpus;
+cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu);
 cpumask_t *online;
 int cpu;
 
 online = cpupool_domain_master_cpumask(unit->domain);
-cpumask_and(&cpus, online, unit->cpu_hard_affinity);
+cpumask_and(cpus, online, unit->cpu_hard_affinity);
 
-cpu = cpumask_test_cpu(sched_unit_master(unit), &cpus)
+cpu = cpumask_test_cpu(sched_unit_master(unit), cpus)
 ? sched_unit_master(unit)
-: cpumask_cycle(sched_unit_master(unit), &cpus);
-ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
+: cpumask_cycle(sched_unit_master(unit), cpus);
+ASSERT( !cpumask_empty(cpus) && cpumask_test_cpu(cpu, cpus) );
 
 return get_sched_res(cpu);
 }
 
+/*
+ * Pick a valid resource for the unit vc
+ * Valid resource of a unit is the intersection of the unit's affinity
+ * and available resources
+ */
+static struct sched_resource *
+rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
+{
+struct sched_resource *res;
+
+res = rt_res_pick_locked(unit, unit->res->master_cpu);
+
+return res;
+}
+
 /*
  * Init/Free related code
  */
@@ -886,11 +901,14 @@ rt_unit_insert(const struct scheduler *ops, struct 
sched_unit *unit)
 struct rt_unit *svc = rt_unit(unit);
 s_time_t now;
 spinlock_t *lock;
+unsigned int cpu = smp_processor_id();
 
 BUG_ON( is_idle_unit(unit) );
 
 /* This is safe because unit isn't yet being scheduled */
-sched_set_res(unit, rt_res_pick(ops, unit));
+lock = pcpu_schedule_lock_irq(cpu);
+sched_set_res(unit, rt_res_pick_locked(unit, cpu));
+pcpu_schedule_unlock_irq(lock, cpu);
 
 lock = unit_schedule_lock_irq(unit);
 
@@ -1003,13 +1021,13 @@ burn_budget(const struct scheduler *ops, struct rt_unit 
*svc, s_time_t now)
  * lock is grabbed before calling this function
  */
 static struct rt_unit *
-runq_pick(const struct scheduler *ops, const cpumask_t *mask)
+runq_pick(const struct scheduler *ops, const cpumask_t *mask, unsigned int cpu)
 {
 struct list_head *runq = rt_runq(ops);
 struct list_head *iter;
 struct rt_unit *svc = NULL;
 struct rt_unit *iter_svc = NULL;
-cpumask_t cpu_common;
+cpumask_t *cpu_common = cpumask_scratch_cpu(cpu);
 cpumask_t *online;
 
 list_for_each ( iter, runq )
@@ -1018,9 +1036,9 @@ runq_pick(const struct scheduler *ops, const cpumask_t 
*mask)
 
 /* mask cpu_hard_affinity & cpupool & mask */
 online = cpupool_domain_master_cpumask(iter_svc->unit->domain);
-cpumask_and(&cpu_common, online, iter_svc->unit->cpu_hard_affinity);
-cpumask_and(&cpu_common, mask, &cpu_common);
-if ( cpumask_empty(&cpu_common) )
+cpumask_and(cpu_common, online, iter_svc->unit->cpu_hard_affinity);
+cpumask_and(cpu_common, mask, cpu_common);
+if ( cpumask_empty(cpu_common) )
 continue;
 
 ASSERT( iter_svc->cur_budget > 0 );
@@ -1092,7 +1110,7 @@ rt_schedule(const struct scheduler *ops, struct 
sched_unit *currunit,
 }
 else
 {
-snext = runq_pick(ops, cpumask_of(sched_cpu));
+snext = runq_pick(ops, cpumask_of(sched_cpu), cur_cpu);
 
 if ( snext == NULL )
 snext = rt_unit(sched_idle_unit(sched_cpu));
@@ -1186,22 +1204,22 @@ runq_tickle(const struct scheduler *ops, struct rt_unit 
*new)
 struct rt_unit *iter_svc;
 struct sched_unit *iter_unit;
 int cpu = 0, cpu_to_tickle = 0;
-cpumask_t not_tickled;
+cpumask_t *not_tickled = cpumask_scratch_cpu(smp_processor_id());
 cpumask_t *online;
 
 if ( new == NULL || is_idle_unit(new->unit) )
 return;
 
 online = cpupool_domain_master_cpumask(new->unit->domain);
-cpumask_and(&not_tickled, online, new->unit->cpu_hard_affinity);
-cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
+cpumask_and(not_tickled, online, new->unit->cpu_hard_affinity);
+cpumask_andnot(not_tickled, not_tickled, &prv->tickled);
 
 /*
  * 1) If there are any idle CPUs, kick one.
  *For cache benefit,we first search new->cpu.
  *The same loop also find the one with lowest priority.
  */
-cpu = cpumask_test_or_cycle(sched_unit_master(new->unit), &not_tickled);
+cpu = 

[Xen-devel] [PATCH v2 7/9] xen/sched: switch scheduling to bool where appropriate

2020-01-08 Thread Juergen Gross
Scheduling code has several places using int or bool_t instead of bool.
Switch those.

Signed-off-by: Juergen Gross 
---
V2:
- rename bool "pos" to "first" (Dario Faggioli)
---
 xen/common/sched/arinc653.c |  8 
 xen/common/sched/core.c | 14 +++---
 xen/common/sched/cpupool.c  | 10 +-
 xen/common/sched/credit.c   | 12 ++--
 xen/common/sched/private.h  |  2 +-
 xen/common/sched/rt.c   | 18 +-
 xen/include/xen/sched.h |  6 +++---
 7 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/xen/common/sched/arinc653.c b/xen/common/sched/arinc653.c
index 8895d92b5e..bce8021e3f 100644
--- a/xen/common/sched/arinc653.c
+++ b/xen/common/sched/arinc653.c
@@ -75,7 +75,7 @@ typedef struct arinc653_unit_s
  * arinc653_unit_t pointer. */
 struct sched_unit * unit;
 /* awake holds whether the UNIT has been woken with vcpu_wake() */
-bool_t  awake;
+boolawake;
 /* list holds the linked list information for the list this UNIT
  * is stored in */
 struct list_headlist;
@@ -427,7 +427,7 @@ a653sched_alloc_udata(const struct scheduler *ops, struct 
sched_unit *unit,
  * will mark the UNIT awake.
  */
 svc->unit = unit;
-svc->awake = 0;
+svc->awake = false;
 if ( !is_idle_unit(unit) )
 list_add(&svc->list, &SCHED_PRIV(ops)->unit_list);
 update_schedule_units(ops);
@@ -473,7 +473,7 @@ static void
 a653sched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
 if ( AUNIT(unit) != NULL )
-AUNIT(unit)->awake = 0;
+AUNIT(unit)->awake = false;
 
 /*
  * If the UNIT being put to sleep is the same one that is currently
@@ -493,7 +493,7 @@ static void
 a653sched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
 if ( AUNIT(unit) != NULL )
-AUNIT(unit)->awake = 1;
+AUNIT(unit)->awake = true;
 
 cpu_raise_softirq(sched_unit_master(unit), SCHEDULE_SOFTIRQ);
 }
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 4153d110be..896f82f4d2 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -53,7 +53,7 @@ string_param("sched", opt_sched);
  * scheduler will give preference to partially idle package compared to
  * the full idle package, when picking pCPU to schedule vCPU.
  */
-bool_t sched_smt_power_savings = 0;
+bool sched_smt_power_savings;
 boolean_param("sched_smt_power_savings", sched_smt_power_savings);
 
 /* Default scheduling rate limit: 1ms
@@ -574,7 +574,7 @@ int sched_init_vcpu(struct vcpu *v)
 {
 get_sched_res(v->processor)->curr = unit;
 get_sched_res(v->processor)->sched_unit_idle = unit;
-v->is_running = 1;
+v->is_running = true;
 unit->is_running = true;
 unit->state_entry_time = NOW();
 }
@@ -983,7 +983,7 @@ static void sched_unit_migrate_finish(struct sched_unit 
*unit)
 unsigned long flags;
 unsigned int old_cpu, new_cpu;
 spinlock_t *old_lock, *new_lock;
-bool_t pick_called = 0;
+bool pick_called = false;
 struct vcpu *v;
 
 /*
@@ -1029,7 +1029,7 @@ static void sched_unit_migrate_finish(struct sched_unit 
*unit)
 if ( (new_lock == get_sched_res(new_cpu)->schedule_lock) &&
  cpumask_test_cpu(new_cpu, unit->domain->cpupool->cpu_valid) )
 break;
-pick_called = 1;
+pick_called = true;
 }
 else
 {
@@ -1037,7 +1037,7 @@ static void sched_unit_migrate_finish(struct sched_unit 
*unit)
  * We do not hold the scheduler lock appropriate for this vCPU.
  * Thus we cannot select a new CPU on this iteration. Try again.
  */
-pick_called = 0;
+pick_called = false;
 }
 
 sched_spin_unlock_double(old_lock, new_lock, flags);
@@ -2148,7 +2148,7 @@ static void sched_switch_units(struct sched_resource *sr,
 vcpu_runstate_change(vnext, vnext->new_state, now);
 }
 
-vnext->is_running = 1;
+vnext->is_running = true;
 
 if ( is_idle_vcpu(vnext) )
 vnext->sched_unit = next;
@@ -2219,7 +2219,7 @@ static void vcpu_context_saved(struct vcpu *vprev, struct vcpu *vnext)
     smp_wmb();
 
     if ( vprev != vnext )
-        vprev->is_running = 0;
+        vprev->is_running = false;
 }
 
 static void unit_context_saved(struct sched_resource *sr)
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 7b31ab0d61..c1396cfff4 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -154,7 +154,7 @@ static struct cpupool *alloc_cpupool_struct(void)
  * the searched id is returned
  * returns NULL if not found.
  */
-static struct cpupool *__cpupool_find_by_id(int id, int exact)
+static struct cpupool *__cpupool_find_by_id(int id, bool exact)
 {
     struct cpupool **q;
 
@@ -169,10 +169,10 @@ static struct cpupool *__cpupool_find_by_id(int id, int 

[Xen-devel] [PATCH v2 0/9] xen: scheduler cleanups

2020-01-08 Thread Juergen Gross
Move all scheduler-related hypervisor code to xen/common/sched/ and
perform a number of cleanups.

Juergen Gross (9):
  xen/sched: move schedulers and cpupool coding to dedicated directory
  xen/sched: make sched-if.h really scheduler private
  xen/sched: cleanup sched.h
  xen/sched: remove special cases for free cpus in schedulers
  xen/sched: use scratch cpumask instead of allocating it on the stack
  xen/sched: replace null scheduler percpu-variable with pdata hook
  xen/sched: switch scheduling to bool where appropriate
  xen/sched: eliminate sched_tick_suspend() and sched_tick_resume()
  xen/sched: add const qualifier where appropriate

 MAINTAINERS|   8 +-
 xen/arch/arm/domain.c  |   6 +-
 xen/arch/x86/acpi/cpu_idle.c   |  15 +-
 xen/arch/x86/cpu/mwait-idle.c  |   8 +-
 xen/arch/x86/dom0_build.c  |   5 +-
 xen/common/Kconfig |  66 +-
 xen/common/Makefile|   8 +-
 xen/common/domain.c|  70 --
 xen/common/domctl.c| 135 +--
 xen/common/rcupdate.c  |   7 +-
 xen/common/sched/Kconfig   |  65 ++
 xen/common/sched/Makefile  |   7 +
 xen/common/{sched_arinc653.c => sched/arinc653.c}  |  15 +-
 xen/common/{compat/schedule.c => sched/compat.c}   |   2 +-
 xen/common/{schedule.c => sched/core.c}| 246 ++---
 xen/common/{ => sched}/cpupool.c   |  23 +-
 xen/common/{sched_credit.c => sched/credit.c}  |  65 +++---
 xen/common/{sched_credit2.c => sched/credit2.c}|  85 +++
 xen/common/{sched_null.c => sched/null.c}  | 105 ++---
 .../xen/sched-if.h => common/sched/private.h}  |  18 +-
 xen/common/{sched_rt.c => sched/rt.c}  | 109 +
 xen/include/xen/domain.h   |   3 +
 xen/include/xen/rcupdate.h |   3 -
 xen/include/xen/sched.h|  39 ++--
 24 files changed, 568 insertions(+), 545 deletions(-)
 create mode 100644 xen/common/sched/Kconfig
 create mode 100644 xen/common/sched/Makefile
 rename xen/common/{sched_arinc653.c => sched/arinc653.c} (99%)
 rename xen/common/{compat/schedule.c => sched/compat.c} (97%)
 rename xen/common/{schedule.c => sched/core.c} (92%)
 rename xen/common/{ => sched}/cpupool.c (97%)
 rename xen/common/{sched_credit.c => sched/credit.c} (97%)
 rename xen/common/{sched_credit2.c => sched/credit2.c} (98%)
 rename xen/common/{sched_null.c => sched/null.c} (92%)
 rename xen/{include/xen/sched-if.h => common/sched/private.h} (96%)
 rename xen/common/{sched_rt.c => sched/rt.c} (94%)

-- 
2.16.4


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v1 0/4] basic KASAN support for Xen PV domains

2020-01-08 Thread Sergey Dyasli
This series makes it possible to boot and run Xen PV kernels (Dom0 and
DomU) with CONFIG_KASAN=y. It has been used internally for some time now
with good results for finding memory-corruption issues in the Dom0 kernel.

Only outline instrumentation is supported at the moment.

Sergey Dyasli (2):
  kasan: introduce set_pmd_early_shadow()
  x86/xen: add basic KASAN support for PV kernel

Ross Lagerwall (2):
  xen: teach KASAN about grant tables
  xen/netback: Fix grant copy across page boundary with KASAN

 arch/x86/mm/kasan_init_64.c   | 12 +++
 arch/x86/xen/Makefile |  7 
 arch/x86/xen/enlighten_pv.c   |  3 ++
 arch/x86/xen/mmu_pv.c | 39 
 drivers/net/xen-netback/common.h  |  2 +-
 drivers/net/xen-netback/netback.c | 59 +--
 drivers/xen/Makefile  |  2 ++
 drivers/xen/grant-table.c |  5 ++-
 include/xen/xen-ops.h |  4 +++
 kernel/Makefile   |  2 ++
 lib/Kconfig.kasan |  3 +-
 mm/kasan/init.c   | 25 -
 12 files changed, 141 insertions(+), 22 deletions(-)

-- 
2.17.1



[Xen-devel] [PATCH v1 3/4] xen: teach KASAN about grant tables

2020-01-08 Thread Sergey Dyasli
From: Ross Lagerwall 

Otherwise KASAN produces lots of false positives when a guest starts using
PV I/O devices.

Signed-off-by: Ross Lagerwall 
Signed-off-by: Sergey Dyasli 
---
RFC --> v1:
- Slightly clarified the commit message
---
 drivers/xen/grant-table.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 7b36b51cdb9f..ce95f7232de6 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1048,6 +1048,7 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			foreign = xen_page_foreign(pages[i]);
 			foreign->domid = map_ops[i].dom;
 			foreign->gref = map_ops[i].ref;
+			kasan_alloc_pages(pages[i], 0);
 			break;
 		}
 
@@ -1084,8 +1085,10 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 	if (ret)
 		return ret;
 
-	for (i = 0; i < count; i++)
+	for (i = 0; i < count; i++) {
 		ClearPageForeign(pages[i]);
+		kasan_free_pages(pages[i], 0);
+	}
 
 	return clear_foreign_p2m_mapping(unmap_ops, kunmap_ops, pages, count);
 }
-- 
2.17.1

