Re: [Xen-devel] Criteria / validation proposal: drop Xen

2019-07-11 Thread David Woodhouse
On Thu, 2019-07-11 at 14:19 -0700, Adam Williamson wrote:
> Yeah, that's where I was going to go next (there has already been a
> thread about this this morning). If what we care about is that Fedora
> boots on EC2, that's what we should have in the criteria, and what we
> should test.

While trying hard to avoid a "haha he would say that" response, I do
genuinely believe that's a reasonable canary and could cover most of
the use cases that various users, even outside EC2, would care about.

> IIRC, what we have right now is a somewhat vague setup where we just
> have 'local', 'ec2' and 'openstack' columns. The instructions for
> "Amazon Web Services" just say "Launch an instance with the AMI under
> test". So we could probably stand to tighten that up a bit, and define
> specific instance type(s) that we want to test/block on.

I think we can define a set of instance types that would cover what
makes sense to test. Do we still care about actual PV guests or only
HVM? I think it makes sense to test guests with Xen netback and blkback
rather than only ENA and NVMe, but Fedora probably wants to test the
latter two *anyway*.

Do we want to do this by making sure you have free credits to run the
appropriate tests directly... or is it better all round for us to just
do this on nightly builds for ourselves?

The latter brings me to a question that's been bugging me for a while —
how in $DEITY's name *do* I launch the latest official Fedora AMI
anyway? I can't find it through the normal GUI launch process and have
to go to getfedora.org and click around for a while before I find the
specific AMI ID for that region, and then manually enter that to
launch the instance. Can't we fix that so I can just select 'Fedora 30'
with a single click? Whose heads do I have to bash together to make
that work?





[Xen-devel] [xen-unstable test] 138892: regressions - FAIL

2019-07-11 Thread osstest service owner
flight 138892 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/138892/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  7 xen-boot fail REGR. vs. 138868
 test-arm64-arm64-examine11 examine-serial/bootloader fail REGR. vs. 138868

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 138868
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 138868
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 138868
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 138868
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 138868
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 138868
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 138868
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 138868
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 138868
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass

version targeted for testing:
 xen  8706d38479218dcf549a94516918c3e3b30a7bb0
baseline version:
 xen  b541287c3600713feaaaf7608cd405e7b2e4efd0

Last test of basis   138868  2019-07-09 15:18:41 Z    2 days

[Xen-devel] [PATCH] xen/pv: Fix a boot up hang triggered by int3 self test

2019-07-11 Thread Zhenzhong Duan
Commit 7457c0da024b ("x86/alternatives: Add int3_emulate_call()
selftest") reveals a bug in the Xen PV int3 assembly code. There is
a double pop of registers R11 and RCX corrupting the exception
frame, one in xen_int3 and the other in xen_xenint3.

We see the following hang at boot:

general protection fault:  [#1] SMP NOPTI
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.2.0+ #6
RIP: e030:int3_magic+0x0/0x7
Call Trace:
 alternative_instructions+0x3d/0x12e
 check_bugs+0x7c9/0x887
 ?__get_locked_pte+0x178/0x1f0
 start_kernel+0x4ff/0x535
 ?set_init_arg+0x55/0x55
 xen_start_kernel+0x571/0x57a

Fix it by removing xen_xenint3.
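
For context, each of these stubs comes from the xen_pv_trap macro in
xen-asm_64.S, which removes the %rcx/%r11 pair that Xen pushes on PV
exception entry before handing off to the native handler. Roughly (a
sketch of the macro's shape, not a verbatim quote):

.macro xen_pv_trap name
ENTRY(xen_\name)
	pop %rcx	/* discard the pair Xen pushed on PV entry */
	pop %r11
	jmp \name	/* continue at the native handler */
END(xen_\name)
.endm

With stubs generated for both int3 and xenint3, an int3 delivery that
goes through both pops the pair twice, shifting the exception frame by
16 bytes, which is the corruption described above.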

Signed-off-by: Zhenzhong Duan 
Cc: Boris Ostrovsky 
Cc: Juergen Gross 
Cc: Stefano Stabellini 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
---
 arch/x86/include/asm/traps.h | 2 +-
 arch/x86/xen/enlighten_pv.c  | 2 +-
 arch/x86/xen/xen-asm_64.S    | 1 -
 3 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h
index 7d6f3f3..f2bd284 100644
--- a/arch/x86/include/asm/traps.h
+++ b/arch/x86/include/asm/traps.h
@@ -40,7 +40,7 @@
 asmlinkage void xen_divide_error(void);
 asmlinkage void xen_xennmi(void);
 asmlinkage void xen_xendebug(void);
-asmlinkage void xen_xenint3(void);
+asmlinkage void xen_int3(void);
 asmlinkage void xen_overflow(void);
 asmlinkage void xen_bounds(void);
 asmlinkage void xen_invalid_op(void);
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 4722ba2..2138d69 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -596,7 +596,7 @@ struct trap_array_entry {
 
 static struct trap_array_entry trap_array[] = {
     { debug,          xen_xendebug,      true },
-    { int3,           xen_xenint3,       true },
+    { int3,           xen_int3,          true },
     { double_fault,   xen_double_fault,  true },
 #ifdef CONFIG_X86_MCE
     { machine_check,  xen_machine_check, true },
diff --git a/arch/x86/xen/xen-asm_64.S b/arch/x86/xen/xen-asm_64.S
index 1e9ef0b..ebf610b 100644
--- a/arch/x86/xen/xen-asm_64.S
+++ b/arch/x86/xen/xen-asm_64.S
@@ -32,7 +32,6 @@ xen_pv_trap divide_error
 xen_pv_trap debug
 xen_pv_trap xendebug
 xen_pv_trap int3
-xen_pv_trap xenint3
 xen_pv_trap xennmi
 xen_pv_trap overflow
 xen_pv_trap bounds
-- 
1.8.3.1



[Xen-devel] [ovmf test] 138896: all pass - PUSHED

2019-07-11 Thread osstest service owner
flight 138896 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/138896/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf 8df52631e53c73cbe5ef037155cc5b6bdc87f757
baseline version:
 ovmf f527942e6bdd9f198db90f2de99a0482e9be5b1b

Last test of basis   138877  2019-07-10 00:40:45 Z    2 days
Testing same since   138896  2019-07-10 20:29:44 Z    1 days    1 attempts


People who touched revisions under test:
  Alexander Graf 
  Bob Feng 
  Feng, Bob C 
  GregX Yeh 
  GregX Yeh 
  Ray Ni 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f527942e6b..8df52631e5  8df52631e53c73cbe5ef037155cc5b6bdc87f757 -> xen-tested-master


[Xen-devel] [libvirt test] 138895: regressions - FAIL

2019-07-11 Thread osstest service owner
flight 138895 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/138895/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-qcow2 15 guest-start/debian.repeat fail REGR. vs. 138876

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 138876
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 138876
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-checkfail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass

version targeted for testing:
 libvirt  4cc5679e9121be04bab4fc9b91f4ff353198d76b
baseline version:
 libvirt  2a5bc136393863689bb8f54cb14342d3fe17e227

Last test of basis   138876  2019-07-10 00:24:07 Z    2 days
Testing same since   138895  2019-07-10 20:25:35 Z    1 days    1 attempts


People who touched revisions under test:
  Daniel P. Berrangé 
  Eric Blake 
  Peter Krempa 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-arm64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-libvirt-xsm pass
 test-arm64-arm64-libvirt-xsm pass
 test-amd64-i386-libvirt-xsm  pass
 test-amd64-amd64-libvirt pass
 test-arm64-arm64-libvirt pass
 test-armhf-armhf-libvirt pass
 test-amd64-i386-libvirt  pass
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair pass
 test-arm64-arm64-libvirt-qcow2   fail
 test-armhf-armhf-libvirt-raw pass
 test-amd64-amd64-libvirt-vhd pass





Not pushing.


commit 4cc5679e9121be04bab4fc9b91f4ff353198d76b
Author: Eric Blake 
Date:   Tue Jul 9 09:02:35 

Re: [Xen-devel] [PATCH V4 1/9] block: add a helper function to read nr_sects

2019-07-11 Thread Martin K. Petersen

Hi Chaitanya,

> +static inline sector_t bdev_nr_sects(struct block_device *bdev)
> +{
> +	return part_nr_sects_read(bdev->bd_part);
> +}

Can bdev end up being NULL in any of the call sites?

Otherwise no objections.

-- 
Martin K. Petersen  Oracle Linux Engineering
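
(For illustration, a NULL-tolerant variant along the lines Martin is
hinting at could look like the sketch below; this is hypothetical and
not part of the patch.)

/* Hypothetical defensive variant: tolerate a NULL bdev at call sites
 * that cannot prove the block device is still open.
 */
static inline sector_t bdev_nr_sects(struct block_device *bdev)
{
	if (!bdev)
		return 0;
	return part_nr_sects_read(bdev->bd_part);
}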


Re: [Xen-devel] [PATCH v7] x86/emulate: Send vm_event from emulate

2019-07-11 Thread Jan Beulich
On 11.07.2019 19:13, Tamas K Lengyel wrote:
>> @@ -629,6 +697,14 @@ static void *hvmemul_map_linear_addr(
>>
>>              ASSERT(p2mt == p2m_ram_logdirty || !p2m_is_readonly(p2mt));
>>          }
>> +
>> +        if ( curr->arch.vm_event &&
>> +             curr->arch.vm_event->send_event &&
> 
> Why not fold these checks into hvm_emulate_send_vm_event since..

I had asked for at least the first of the checks to be pulled
out of the function, for the common case to be affected as
little as possible.

>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -3224,6 +3224,14 @@ static enum hvm_translation_result __hvm_copy(
>>          return HVMTRANS_bad_gfn_to_mfn;
>>      }
>>
>> +    if ( unlikely(v->arch.vm_event) &&
>> +         v->arch.vm_event->send_event &&
> 
> .. you seem to just repeat them here again?

I agree that the duplication makes no sense.

Jan
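
(The pattern Jan describes, sketched with the names from the patch
context above; illustrative only, not the actual code under review:
keep the cheap, almost-always-false test inline in the caller, and
move the send_event check into the helper itself.)

    /* The hot path pays only for one well-predicted branch; the helper,
     * which would check send_event internally, is called only when a
     * vm_event subscriber actually exists.
     */
    if ( unlikely(curr->arch.vm_event) &&
         hvm_emulate_send_vm_event(addr, gfn, pfec) )
    {
        err = ERR_PTR(~X86EMUL_RETRY);
        goto out;
    }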

[Xen-devel] [qemu-mainline test] 138890: tolerable FAIL - PUSHED

2019-07-11 Thread osstest service owner
flight 138890 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/138890/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 138799
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 138799
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 138799
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 138799
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 138799
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass

version targeted for testing:
 qemuu                6df2cdf44a82426f7a59dcb03f0dd2181ed7fdfa
baseline version:
 qemuu                d2c5f91ca944aaade642624397e1853801bbc744

Last test of basis   138799  2019-07-06 18:10:18 Z    5 days
Failing since        138825  2019-07-08 09:36:12 Z    3 days    3 attempts
Testing same since   138890  2019-07-10 12:47:03 Z    1 days    1 attempts


People who touched revisions under test:
  Alex Bennée 
  Alex Williamson 
  Alistair Francis 
  Christian Borntraeger 
  Christophe de Dinechin 
  Cornelia Huck 
  David Gibson 
  Dr. David Alan Gilbert 
  Eduardo Habkost 
  Eric Blake 
  Igor Mammedov 
  Jason Dillaman 
  John Snow 
  Julio Montes 
  Kevin Wolf 
  Laurent Desnogues 
  Li Qiang 
  Like Xu 
  Liran Alon 
  Markus Armbruster 
  Max Reitz 
  Paolo 

[Xen-devel] [linux-4.19 test] 138888: regressions - FAIL

2019-07-11 Thread osstest service owner
flight 138888 linux-4.19 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/138888/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine11 examine-serial/bootloader fail REGR. vs. 129313
 test-amd64-i386-qemut-rhel6hvm-amd 12 guest-start/redhat.repeat fail REGR. vs. 129313
 build-armhf-pvops 6 kernel-build fail REGR. vs. 129313

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-examine  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl   1 build-check(1)   blocked  n/a
 test-armhf-armhf-xl-vhd   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux                7a6bfa08b938d33ba0a2b80d4f717d4f0dbf9170
baseline version:
 linux                84df9525b0c27f3ebc2ebb1864fa62a97fdedb7d

Last test of basis   129313  2018-11-02 05:39:08 Z  251 days
Failing since        129412  2018-11-04 14:10:15 Z  249 days  155 attempts
Testing same since   138888  2019-07-10 10:15:20 Z    1 days    1 attempts


2246 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64   

[Xen-devel] [linux-next test] 138885: regressions - FAIL

2019-07-11 Thread osstest service owner
flight 138885 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/138885/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-win7-amd64  7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-i386-pvgrub  7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-examine   8 reboot   fail REGR. vs. 138849
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-xl-qemut-ws16-amd64  7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-xl7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-xl-shadow 7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-xl-pvshim7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-libvirt  7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-xl   7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-amd64-pvgrub  7 xen-bootfail REGR. vs. 138849
 test-amd64-i386-xl-raw7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-pair10 xen-boot/src_hostfail REGR. vs. 138849
 test-amd64-amd64-pair11 xen-boot/dst_hostfail REGR. vs. 138849
 test-amd64-i386-libvirt   7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-xl-qemuu-win10-i386  7 xen-boot  fail REGR. vs. 138849
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-xl-xsm7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-xl-qemut-win10-i386  7 xen-boot  fail REGR. vs. 138849
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  7 xen-bootfail REGR. vs. 138849
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-xl-qcow2 7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-xl-credit1   7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-xl-xsm   7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-qemuu-nested-amd  7 xen-bootfail REGR. vs. 138849
 test-amd64-amd64-libvirt-pair 10 xen-boot/src_host   fail REGR. vs. 138849
 test-amd64-amd64-libvirt-pair 11 xen-boot/dst_host   fail REGR. vs. 138849
 test-amd64-i386-libvirt-xsm   7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-boot   fail REGR. vs. 138849
 test-amd64-amd64-xl-shadow7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-xl-qemuu-ws16-amd64  7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-freebsd10-i386  7 xen-boot   fail REGR. vs. 138849
 test-amd64-amd64-xl-qemut-debianhvm-amd64  7 xen-bootfail REGR. vs. 138849
 test-amd64-amd64-libvirt-xsm  7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-pair 10 xen-boot/src_hostfail REGR. vs. 138849
 test-amd64-i386-pair 11 xen-boot/dst_hostfail REGR. vs. 138849
 test-amd64-amd64-pygrub   7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-libvirt-pair 10 xen-boot/src_hostfail REGR. vs. 138849
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-boot  fail REGR. vs. 138849
 test-amd64-i386-libvirt-pair 11 xen-boot/dst_hostfail REGR. vs. 138849
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-boot  fail REGR. vs. 138849
 test-amd64-amd64-xl-credit2   7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-xl-multivcpu  7 xen-bootfail REGR. vs. 138849
 test-amd64-amd64-libvirt-vhd  7 xen-boot fail REGR. vs. 138849
 test-amd64-amd64-xl-qemut-win10-i386  7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  7 xen-boot  fail REGR. vs. 138849
 test-amd64-amd64-xl-pvhv2-intel  7 xen-boot  fail REGR. vs. 138849
 test-amd64-amd64-xl-pvhv2-amd  7 xen-bootfail REGR. vs. 138849
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-xl-pvshim 7 xen-boot fail REGR. vs. 138849
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-boot fail REGR. vs. 138849
 

Re: [Xen-devel] Criteria / validation proposal: drop Xen

2019-07-11 Thread Adam Williamson
On Thu, 2019-07-11 at 21:43 +0100, Peter Robinson wrote:
> > On Mon, 2019-07-08 at 09:11 -0700, Adam Williamson wrote:
> > > It's worth noting that at least part of the justification for the
> > > criterion in the first place was that Amazon was using Xen for EC2, but
> > > that is no longer the case, most if not all EC2 instance types no
> > > longer use Xen.
> > 
> > I don't know where you got that particular piece of information. It
> > isn't correct. Most EC2 instance types still use Xen. The vast majority
> > of EC2 instances, by volume, are Xen.
> 
> Correct, it's only specific new instance types that use the KVM-based
> hypervisor, plus new HW like aarch64.
> 
> That being said, I don't believe testing that we can boot on Xen is
> actually useful these days for the AWS use case; it's likely different
> enough that the testing isn't useful. We'd be much better off testing
> that cloud images actually work on AWS than testing whether they boot
> on Xen.

Yeah, that's where I was going to go next (there has already been a
thread about this this morning). If what we care about is that Fedora
boots on EC2, that's what we should have in the criteria, and what we
should test.

IIRC, what we have right now is a somewhat vague setup where we just
have 'local', 'ec2' and 'openstack' columns. The instructions for
"Amazon Web Services" just say "Launch an instance with the AMI under
test". So we could probably stand to tighten that up a bit, and define
specific instance type(s) that we want to test/block on.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net



Re: [Xen-devel] Criteria / validation proposal: drop Xen

2019-07-11 Thread David Woodhouse
On Mon, 2019-07-08 at 09:11 -0700, Adam Williamson wrote:
> It's worth noting that at least part of the justification for the
> criterion in the first place was that Amazon was using Xen for EC2, but
> that is no longer the case, most if not all EC2 instance types no
> longer use Xen.

I don't know where you got that particular piece of information. It
isn't correct. Most EC2 instance types still use Xen. The vast majority
of EC2 instances, by volume, are Xen.



Re: [Xen-devel] Criteria / validation proposal: drop Xen

2019-07-11 Thread marma...@invisiblethingslab.com
On Thu, Jul 11, 2019 at 10:58:03AM -0700, Adam Williamson wrote:
> On Thu, 2019-07-11 at 09:57 -0500, Doug Goldstein wrote:
> > On 7/8/19 11:11 AM, Adam Williamson wrote:
> > > On Tue, 2019-05-21 at 11:14 -0700, Adam Williamson wrote:
> > > > > > > > "The release must boot successfully as Xen DomU with releases 
> > > > > > > > providing
> > > > > > > > a functional, supported Xen Dom0 and widely used cloud providers
> > > > > > > > utilizing Xen."
> > > > > > > > 
> > > > > > > > and change the 'milestone' for the test case -
> > > > > > > > https://fedoraproject.org/wiki/QA:Testcase_Boot_Methods_Xen_Para_Virt
> > > > > > > >  -
> > > > > > > > from Final to Optional.
> > > > > > > > 
> > > > > > > > Thoughts? Comments? Thanks!
> > > > > > > I would prefer for it to remain as it is.
> > > > > > This is only practical if it's going to be tested, and tested regularly
> > > > > > - not *only* on the final release candidate, right before we sign off
> > > > > > on the release. It needs to be tested regularly throughout the release
> > > > > > cycle, on the composes that are "nominated for testing".
> > > > > Would the proposal above work for you? I think it satisfies what you are
> > > > > looking for. We would also have someone who monitors these test results
> > > > > pro-actively.
> > > > In theory, yeah, but given the history here I'm somewhat sceptical. I'd
> > > > also say we still haven't really got a convincing case for why we
> > > > should continue to block the release (at least in theory) on Fedora
> > > > working in Xen when we don't block on any other virt stack apart from
> > > > our 'official' one, and we don't block on all sorts of other stuff we'd
> > > > "like to have working" either. Regardless of the testing issues, I'd
> > > > like to see that too if we're going to keep blocking on Xen...
> > > So, this died here. As things stand: I proposed removing the Xen
> > > criterion, Lars opposed, we discussed the testing situation a bit, and
> > > I said overall I'm still inclined to remove the criterion because
> > > there's no clear justification for it for Fedora any more. Xen working
> > > (or rather, Fedora working on Xen) is just not a key requirement for
> > > Fedora at present, AFAICS.
> > > 
> > > It's worth noting that at least part of the justification for the
> > > criterion in the first place was that Amazon was using Xen for EC2, but
> > > that is no longer the case, most if not all EC2 instance types no
> > > longer use Xen. Another consideration is that there was a time when KVM
> > > was still pretty new stuff and VirtualBox was not as popular as it is
> > > now, and Xen was still widely used for general hobbyist virtualization
> > > purposes; I don't believe that's really the case any more.
> > 
> > So I'll just point out this is false. Amazon very much uses Xen still 
> > and is investing in Xen still. In fact I'm writing this email from the 
> > XenSummit where Amazon is currently discussing their future development 
> > efforts for the Xen Project.
> 
> Sorry about that, it was just based on my best efforts at trying to
> figure it out; Amazon instance types don't all explicitly state exactly
> how they work.
> 
> Which EC2 instance types still use Xen?

I don't know what new instance types use Xen, but there are definitely
existing previous instance generations that are still running and not
going away anytime soon. From what I understand, they are still the
great majority of EC2.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab


Re: [Xen-devel] [GSoC-2019] About the crossbar and xen

2019-07-11 Thread Denis Obrezkov
Hi,

On 7/11/19 7:32 PM, Julien Grall wrote:

>>
>> What do you think?
> 
> Have you looked at the series I pointed out earlier on? It extends Xen
> to support other interrupt controller parents.
> 
Yes, but you said once that this patch series wasn't accepted because
the maintainers didn't like something. So, I wanted to understand
whether this approach is acceptable.

-- 
Regards, Denis Obrezkov


Re: [Xen-devel] [GSoC-2019] About the crossbar and xen

2019-07-11 Thread Hunyue Yau
[This mail incorporates comments raised on IRC. I have made some of this more 
verbose to provide context to people that haven't seen the IRC comments.]

This will be a bunch of facts on the am5. Someone else will have to
relate it back to Xen.

1 - The WUGen is a hardware block in the MPU subsystem that can turn
interrupts into wake-up events if the MPU is in certain deeper sleep
states. This applies only to certain interrupts. We can confirm this by
looking at the DT's register addresses. Per the TRM, they are registers
for the MPU's PRCM (Power/Reset/Clock Management). In short, this block
makes interrupts a possible wake-up source.

2 - Earlier kernels did not expose that HW block. See this patch that 
introduced the WUGen - 
https://github.com/torvalds/linux/commit/7136d457f365ecc93ddffcdd42ab49a8473f260b
I suspect looking at the before part of the patch should provide clues on how 
to handle the WUGen.


3 - This may be redundant but more definitions (in the human sense) here: 
https://www.mjmwired.net/kernel/Documentation/devicetree/bindings/interrupt-controller/ti,omap4-wugen-mpu

4 - For the UART case, I suspect the flow Denis pointed out is about right.
However, that may be different depending on the interrupt source.

Unknowns from my point of view - 

a - Does Xen virtualize power management? If so, this may have an impact.
I would not recommend adding PM virtualization in GSoC. It is tugging on a 
very long string. 

b - If Xen does not virtualize that, someone needs to decide how much to leave
to the guest.

c - I wonder if we can do a halfway hack where the kernel sets up the PM but
Xen hooks to get the interrupt. The HW will do its PM thing and Xen can 
process the interrupt.

Guesses/possible hacks -
- For the interrupts we care about, the crossbar can route things to the MPU
unconditionally. This would break the other HW blocks, but most of them aren't
needed for boot.
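
As a concrete sketch of the transitive parent walk Denis proposes in the
thread quoted below (UART -> crossbar -> wugen -> GIC), the lookup could
conceptually work like this. The dt_irq_parent() helper is hypothetical,
standing in for whatever Xen uses to follow the interrupt-parent
property; only struct dt_device_node is Xen's real type:

static bool irq_routes_to_gic(const struct dt_device_node *dev,
                              const struct dt_device_node *gic)
{
    const struct dt_device_node *parent = dt_irq_parent(dev);

    /* Climb interrupt-parent links until we reach the GIC or run out
     * of ancestors, e.g. UART -> crossbar -> wugen -> GIC.
     */
    while ( parent != NULL )
    {
        if ( parent == gic )
            return true;
        parent = dt_irq_parent(parent);
    }

    return false;
}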

On Thursday, July 11, 2019 18:32:22 Julien Grall wrote:
> On 7/11/19 1:50 PM, Denis Obrezkov wrote:
> > Hi,
> 
> Hi,
> 
> >>> I am interested whether we should do something with omap5-wugen-mpu. I
> >>> found that the crossbar is connected to the GIC. And on some diagrams in
> >>> the TRM it is connected via omap5-wugen-mpu. So, it is not clear to me
> >>> whether it should be handled in Xen.
> 
> I don't know much about omap5-wugen-mpu, so I will leave Hunyue and Iain
> to provide feedback here.
> 
> > Also, I am interested in how to add the crossbar. I can see two options,
> > as we discussed earlier. The first option is to remove the crossbar, but
> > that might cause some problems, since a guest might want to use it.
> > The second option is to expose the crossbar and intercept all the calls
> > to it. But the problem is that Xen currently supports only one
> > interrupt controller, and at the same time most of the SPI IRQs are
> > mapped to the crossbar. So, when Xen checks whether the main
> > interrupt controller is the same as the one to which external interrupts
> > are mapped, it fails
> > (xen/common/device_tree.c:dt_irq_translate()).
> > And I don't think that I should change the interrupt-parent property of
> > devices to map them to the GIC, because that is essentially the first
> > option mentioned above. So it seems that the interrupt-parent lookup
> > logic should probably be changed a bit, to find a GIC node not only in
> > the direct interrupt-parent but transitively in one of its ancestors:
> > 
> > UART -> crossbar -> wugen -> GIC
> > 
> > What do you think?
> 
> Have you looked at the series I pointed out earlier on? It extends Xen
> to support other interrupt controller parents.
> 
> Cheers,

-- 
Hunyue Yau
http://www.hy-research.com/


[Xen-devel] [xen-unstable-smoke test] 138907: tolerable all pass - PUSHED

2019-07-11 Thread osstest service owner
flight 138907 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/138907/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  38eeb3864de40aa568c48f9f26271c141c62b50b
baseline version:
 xen  c19434d9284e93e6f9aaec9a70f5f361adbfaba6

Last test of basis   138894  2019-07-10 19:01:03 Z    0 days
Testing same since   138907  2019-07-11 15:01:02 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 

jobs:
 build-arm64-xsm  pass
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-amd64-libvirt pass





Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c19434d928..38eeb3864d  38eeb3864de40aa568c48f9f26271c141c62b50b -> smoke


Re: [Xen-devel] Criteria / validation proposal: drop Xen

2019-07-11 Thread Adam Williamson
On Thu, 2019-07-11 at 09:57 -0500, Doug Goldstein wrote:
> On 7/8/19 11:11 AM, Adam Williamson wrote:
> > On Tue, 2019-05-21 at 11:14 -0700, Adam Williamson wrote:
> > > > > > > "The release must boot successfully as Xen DomU with releases 
> > > > > > > providing
> > > > > > > a functional, supported Xen Dom0 and widely used cloud providers
> > > > > > > utilizing Xen."
> > > > > > > 
> > > > > > > and change the 'milestone' for the test case -
> > > > > > > https://fedoraproject.org/wiki/QA:Testcase_Boot_Methods_Xen_Para_Virt -
> > > > > > > from Final to Optional.
> > > > > > > 
> > > > > > > Thoughts? Comments? Thanks!
> > > > > > I would prefer for it to remain as it is.
> > > > > This is only practical if it's going to be tested, and tested regularly
> > > > > - not *only* on the final release candidate, right before we sign off
> > > > > on the release. It needs to be tested regularly throughout the release
> > > > > cycle, on the composes that are "nominated for testing".
> > > > Would the proposal above work for you? I think it satisfies what you are
> > > > looking for. We would also have someone who monitors these test results
> > > > pro-actively.
> > > In theory, yeah, but given the history here I'm somewhat sceptical. I'd
> > > also say we still haven't really got a convincing case for why we
> > > should continue to block the release (at least in theory) on Fedora
> > > working in Xen when we don't block on any other virt stack apart from
> > > our 'official' one, and we don't block on all sorts of other stuff we'd
> > > "like to have working" either. Regardless of the testing issues, I'd
> > > like to see that too if we're going to keep blocking on Xen...
> > So, this died here. As things stand: I proposed removing the Xen
> > criterion, Lars opposed, we discussed the testing situation a bit, and
> > I said overall I'm still inclined to remove the criterion because
> > there's no clear justification for it for Fedora any more. Xen working
> > (or rather, Fedora working on Xen) is just not a key requirement for
> > Fedora at present, AFAICS.
> > 
> > It's worth noting that at least part of the justification for the
> > criterion in the first place was that Amazon was using Xen for EC2, but
> > that is no longer the case, most if not all EC2 instance types no
> > longer use Xen. Another consideration is that there was a time when KVM
> > was still pretty new stuff and VirtualBox was not as popular as it is
> > now, and Xen was still widely used for general hobbyist virtualization
> > purposes; I don't believe that's really the case any more.
> 
> So I'll just point out this is false. Amazon very much uses Xen still 
> and is investing in Xen still. In fact I'm writing this email from the 
> XenSummit where Amazon is currently discussing their future development 
> efforts for the Xen Project.

Sorry about that, it was just based on my best efforts at trying to
figure it out; Amazon instance types don't all explicitly state exactly
how they work.

Which EC2 instance types still use Xen?
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net



Re: [Xen-devel] [GSoC-2019] About the crossbar and xen

2019-07-11 Thread Julien Grall



On 7/11/19 1:50 PM, Denis Obrezkov wrote:
> Hi,

Hi,

>>> I am interested whether we should do something with omap5-wugen-mpu. I
>>> found that the crossbar is connected to the GIC. And on some diagrams in
>>> the TRM it is connected via omap5-wugen-mpu. So, it is not clear to me
>>> whether it should be handled in Xen.

I don't know much about omap5-wugen-mpu, so I will leave Hunyue and Iain
to provide feedback here.

> Also, I am interested in how to add the crossbar. I can see two options,
> as we discussed earlier. The first option is to remove the crossbar, but
> that might cause some problems, since a guest might want to use it.
> The second option is to expose the crossbar and intercept all the calls
> to it. But the problem is that Xen currently supports only one
> interrupt controller, and at the same time most of the SPI IRQs are
> mapped to the crossbar. So, when Xen checks whether the main
> interrupt controller is the same as the one to which external interrupts
> are mapped, it fails
> (xen/common/device_tree.c:dt_irq_translate()).
> And I don't think that I should change the interrupt-parent property of
> devices to map them to the GIC, because that is essentially the first
> option mentioned above. So it seems that the interrupt-parent lookup
> logic should probably be changed a bit, to find a GIC node not only in
> the direct interrupt-parent but transitively in one of its ancestors:
>
> UART -> crossbar -> wugen -> GIC
>
> What do you think?

Have you looked at the series I pointed out earlier on? It extends Xen
to support other interrupt controller parents.

Cheers,

--
Julien Grall


Re: [Xen-devel] [PATCH] x86: Get rid of p2m_host array allocation for HVM guests

2019-07-11 Thread Julien Grall

Hi Andrew,

On 7/11/19 3:25 PM, Andrew Cooper wrote:
> On 10/07/2019 14:25, Julien Grall wrote:
>>> However, in attempting to review this, I've got some bigger questions.
>>>
>>> All ARM and x86 HVM (and PVH) guests return true for
>>> xc_dom_translated(), so should take the fastpath out of xc_dom_p2m() and
>>> never read from dom->p2m_host[].  Therefore, I don't see why the
>>> majority of this patch is necessary.
>>
>> I agree that p2m_host will never get used by Arm. So this is a waste
>> of memory.
>>
>>> On the ARM side, this also means
>>> that dom->rambase_pfn isn't being used as intended, which suggests there
>>> is further cleanup/correction to be done here.
>>
>> I am not sure I follow this. Could you expand on it?
>
> dom->rambase_pfn was introduced for ARM, and the code which uses it in
> xc_dom_p2m() is dead (on ARM, not on x86).
>
> It isn't functioning as intended.

I am afraid I still don't follow... rambase_pfn is used in various
places in xc_dom_core.c and xc_dom_armzimageloader.c.


Cheers,

--
Julien Grall


Re: [Xen-devel] [PATCH v7] x86/emulate: Send vm_event from emulate

2019-07-11 Thread Tamas K Lengyel
> @@ -629,6 +697,14 @@ static void *hvmemul_map_linear_addr(
>
>              ASSERT(p2mt == p2m_ram_logdirty || !p2m_is_readonly(p2mt));
>          }
> +
> +        if ( curr->arch.vm_event &&
> +             curr->arch.vm_event->send_event &&

Why not fold these checks into hvm_emulate_send_vm_event since..

> +             hvm_emulate_send_vm_event(addr, gfn, pfec) )
> +        {
> +            err = ERR_PTR(~X86EMUL_RETRY);
> +            goto out;
> +        }
>      }
>
>      /* Entire access within a single frame? */
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 029eea3b85..783ebc3525 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3224,6 +3224,14 @@ static enum hvm_translation_result __hvm_copy(
>          return HVMTRANS_bad_gfn_to_mfn;
>      }
>
> +    if ( unlikely(v->arch.vm_event) &&
> +         v->arch.vm_event->send_event &&

.. you seem to just repeat them here again?

> +         hvm_emulate_send_vm_event(addr, gfn, pfec) )
> +    {
> +        put_page(page);
> +        return HVMTRANS_gfn_paged_out;
> +    }
> +
>      p = (char *)__map_domain_page(page) + (addr & ~PAGE_MASK);
>
>      if ( flags & HVMCOPY_to_guest )


Re: [Xen-devel] [PATCH] x86/smpboot: Remove redundant order calculations

2019-07-11 Thread Jan Beulich
On 11.07.2019 17:49, Andrew Cooper wrote:
> The GDT and IDT allocations are all order 0, and not going to change.
> 
> Use an explicit 0, instead of calling get_order_from_pages().  This
> allows for the removal of the 'order' local parameter in both
> cpu_smpboot_{alloc,free}().
> 
> While making this adjustment, rearrange cpu_smpboot_free() to fold the
> two "if ( remove )" clauses.  There is no explicit requirements for the
> order of free()s.
> 
> No practical change.
> 
> Signed-off-by: Andrew Cooper 

While I think that it was appropriate for the code to be independent
of actual (albeit never-changing) sizes, I have to agree that, with
the context-switch side change in, it's better to be consistent here.
Hence, despite my slight dislike:
Reviewed-by: Jan Beulich 

Jan


[Xen-devel] [linux-4.9 test] 138883: tolerable FAIL - PUSHED

2019-07-11 Thread osstest service owner
flight 138883 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/138883/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 138603
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 138603
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 138603
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 138603
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 138603
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux 

[Xen-devel] [PATCH] x86/smpboot: Remove redundant order calculations

2019-07-11 Thread Andrew Cooper
The GDT and IDT allocations are all order 0, and not going to change.

Use an explicit 0, instead of calling get_order_from_pages().  This
allows for the removal of the 'order' local parameter in both
cpu_smpboot_{alloc,free}().

While making this adjustment, rearrange cpu_smpboot_free() to fold the
two "if ( remove )" clauses.  There is no explicit requirements for the
order of free()s.

No practical change.

Signed-off-by: Andrew Cooper 
---
CC: Jan Beulich 
CC: Wei Liu 
CC: Roger Pau Monné 
---
 xen/arch/x86/smpboot.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 004285d14c..65e9ceeece 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -902,7 +902,7 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
  */
 static void cpu_smpboot_free(unsigned int cpu, bool remove)
 {
-unsigned int order, socket = cpu_to_socket(cpu);
+unsigned int socket = cpu_to_socket(cpu);
 struct cpuinfo_x86 *c = cpu_data;
 
 if ( cpumask_empty(socket_cpumask[socket]) )
@@ -944,16 +944,12 @@ static void cpu_smpboot_free(unsigned int cpu, bool 
remove)
 free_domheap_page(mfn_to_page(mfn));
 }
 
-order = get_order_from_pages(NR_RESERVED_GDT_PAGES);
-if ( remove )
-FREE_XENHEAP_PAGES(per_cpu(gdt_table, cpu), order);
-
-free_xenheap_pages(per_cpu(compat_gdt_table, cpu), order);
+FREE_XENHEAP_PAGE(per_cpu(compat_gdt_table, cpu));
 
 if ( remove )
 {
-order = get_order_from_bytes(IDT_ENTRIES * sizeof(idt_entry_t));
-FREE_XENHEAP_PAGES(idt_tables[cpu], order);
+FREE_XENHEAP_PAGE(per_cpu(gdt_table, cpu));
+FREE_XENHEAP_PAGE(idt_tables[cpu]);
 
 if ( stack_base[cpu] )
 {
@@ -965,7 +961,7 @@ static void cpu_smpboot_free(unsigned int cpu, bool remove)
 
 static int cpu_smpboot_alloc(unsigned int cpu)
 {
-unsigned int i, order, memflags = 0;
+unsigned int i, memflags = 0;
 nodeid_t node = cpu_to_node(cpu);
 seg_desc_t *gdt;
 unsigned long stub_page;
@@ -980,8 +976,7 @@ static int cpu_smpboot_alloc(unsigned int cpu)
 goto out;
 memguard_guard_stack(stack_base[cpu]);
 
-order = get_order_from_pages(NR_RESERVED_GDT_PAGES);
-gdt = per_cpu(gdt_table, cpu) ?: alloc_xenheap_pages(order, memflags);
+gdt = per_cpu(gdt_table, cpu) ?: alloc_xenheap_pages(0, memflags);
 if ( gdt == NULL )
 goto out;
 per_cpu(gdt_table, cpu) = gdt;
@@ -991,7 +986,7 @@ static int cpu_smpboot_alloc(unsigned int cpu)
 BUILD_BUG_ON(NR_CPUS > 0x10000);
 gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu;
 
-per_cpu(compat_gdt_table, cpu) = gdt = alloc_xenheap_pages(order, memflags);
+per_cpu(compat_gdt_table, cpu) = gdt = alloc_xenheap_pages(0, memflags);
 if ( gdt == NULL )
 goto out;
 per_cpu(compat_gdt_table_l1e, cpu) =
@@ -999,9 +994,8 @@ static int cpu_smpboot_alloc(unsigned int cpu)
 memcpy(gdt, boot_cpu_compat_gdt_table, NR_RESERVED_GDT_PAGES * PAGE_SIZE);
 gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu;
 
-order = get_order_from_bytes(IDT_ENTRIES * sizeof(idt_entry_t));
 if ( idt_tables[cpu] == NULL )
-idt_tables[cpu] = alloc_xenheap_pages(order, memflags);
+idt_tables[cpu] = alloc_xenheap_pages(0, memflags);
 if ( idt_tables[cpu] == NULL )
 goto out;
 memcpy(idt_tables[cpu], idt_table, IDT_ENTRIES * sizeof(idt_entry_t));
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] Criteria / validation proposal: drop Xen

2019-07-11 Thread Doug Goldstein


On 7/8/19 11:11 AM, Adam Williamson wrote:

On Tue, 2019-05-21 at 11:14 -0700, Adam Williamson wrote:

"The release must boot successfully as Xen DomU with releases providing
a functional, supported Xen Dom0 and widely used cloud providers
utilizing Xen."

and change the 'milestone' for the test case -
https://fedoraproject.org/wiki/QA:Testcase_Boot_Methods_Xen_Para_Virt -
from Final to Optional.

Thoughts? Comments? Thanks!

I would prefer for it to remain as it is.

This is only practical if it's going to be tested, and tested regularly
- not *only* on the final release candidate, right before we sign off
on the release. It needs to be tested regularly throughout the release
cycle, on the composes that are "nominated for testing".

Would the proposal above work for you? I think it satisfies what you are
looking for. We would also have someone who monitors these test results
pro-actively.

In theory, yeah, but given the history here I'm somewhat sceptical. I'd
also say we still haven't really got a convincing case for why we
should continue to block the release (at least in theory) on Fedora
working in Xen when we don't block on any other virt stack apart from
our 'official' one, and we don't block on all sorts of other stuff we'd
"like to have working" either. Regardless of the testing issues, I'd
like to see that too if we're going to keep blocking on Xen...

So, this died here. As things stand: I proposed removing the Xen
criterion, Lars opposed, we discussed the testing situation a bit, and
I said overall I'm still inclined to remove the criterion because
there's no clear justification for it for Fedora any more. Xen working
(or rather, Fedora working on Xen) is just not a key requirement for
Fedora at present, AFAICS.

It's worth noting that at least part of the justification for the
criterion in the first place was that Amazon was using Xen for EC2, but
that is no longer the case; most if not all EC2 instance types no
longer use Xen. Another consideration is that there was a time when KVM
was still pretty new stuff and VirtualBox was not as popular as it is
now, and Xen was still widely used for general hobbyist virtualization
purposes; I don't believe that's really the case any more.



So I'll just point out this is false. Amazon very much still uses Xen 
and is still investing in it. In fact I'm writing this email from the 
XenSummit where Amazon is currently discussing their future development 
efforts for the Xen Project.


--

Doug



___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH] x86: Get rid of p2m_host array allocation for HVM guests

2019-07-11 Thread Andrew Cooper
On 10/07/2019 14:25, Julien Grall wrote:
>>
>> However, in attempting to review this, I've got some bigger questions.
>>
>> All ARM and x86 HVM (and PVH) guests return true for
>> xc_dom_translated(), so should take the fastpath out of xc_dom_p2m() and
>> never read from dom->p2m_host[].  Therefore, I don't see why the
>> majority of this patch is necessary.
>
> I agree that p2m_host will never get used by Arm. So this is a waste
> of memory.
>
>>   On the ARM side, this also means
>> that dom->rambase_pfn isn't being used as intended, which suggests there
>> is further cleanup/correction to be done here.
>
> I am not sure I follow this. Could you expand on it?

dom->rambase_pfn was introduced for ARM, and the code which uses it in
xc_dom_p2m() is dead (on ARM, not on x86).

It isn't functioning as intended.
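
For reference, the function in question looks roughly like this
(paraphrased from tools/libxc/include/xc_dom.h; treat it as a sketch
rather than the exact code):

    static inline xen_pfn_t xc_dom_p2m(struct xc_dom_image *dom,
                                       xen_pfn_t pfn)
    {
        /* Translated guests (all ARM, and x86 HVM/PVH) take the identity
         * fastpath, so the rambase_pfn arithmetic and the p2m_host[]
         * load below are dead code for them. */
        if ( xc_dom_translated(dom) )
            return pfn;

        if ( pfn < dom->rambase_pfn ||
             pfn >= dom->rambase_pfn + dom->total_pages )
            return INVALID_PFN;

        return dom->p2m_host[pfn - dom->rambase_pfn];
    }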

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 00/60] xen: add core scheduling support

2019-07-11 Thread Dario Faggioli
On Tue, 2019-05-28 at 12:32 +0200, Juergen Gross wrote:
> Add support for core- and socket-scheduling in the Xen hypervisor.
> 
> [...]
>
> I have done some very basic performance testing: on a 4 cpu system
> (2 cores with 2 threads each) I did a "make -j 4" for building the
> Xen hypervisor. This test has been run in dom0, once with no other
> guest active and once with another guest with 4 vcpus running the
> same test. The results are (always elapsed time, system time, user time):
> 
> sched-gran=cpu,    no other guest: 116.10 177.65 207.84
> sched-gran=core,   no other guest: 114.04 175.47 207.45
> sched-gran=cpu,    other guest:    202.30 334.21 384.63
> sched-gran=core,   other guest:    207.24 293.04 371.37
> 
> The performance tests have been performed with credit2, the other
> schedulers are tested only briefly to be able to create a domain in a
> cpupool.
> 
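
(A side note for anyone wanting to reproduce these runs: as far as I
understand the series, the granularity is selected with the sched-gran=
Xen boot parameter it introduces, e.g.:

  sched-gran=core

on the hypervisor command line. The exact syntax may of course still
change between revisions.)
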
I have done some more, and I'd like to report the results here.

For those who are attending the Xen-Project Dev Summit and have seen
Juergen's talk about core-scheduling, these are the numbers he had in
his slides.

There are quite a few numbers, and there are multiple ways to show them.

We arranged them in two different ways, and I'm showing both. Since
it's quite likely that the rendering will be poor in mail clients, here
are the links to view them in a browser or to download the text files:

http://xenbits.xen.org/people/dariof/benchmarks/results/2019/07-July/xen/core-sched/mmtests/boxes/wayrath/summary.txt

http://xenbits.xen.org/people/dariof/benchmarks/results/2019/07-July/xen/core-sched/mmtests/boxes/wayrath/summary-5columns.txt

The two files contain the same numbers, but
'summary-5columns.txt' has them arranged in tables that show, for each
combination of benchmark and configuration, the differences between the
various options (i.e., no core-scheduling, core-scheduling not used,
core-scheduling in use, etc).

The 'summary.txt' file contains some more data (such as the results of
runs done on baremetal), arranged in different tables. It also contains
some of my thoughts and analysis about what the numbers tell us.

It's quite hard to come up with a concise summary, as results vary a
lot on a case by case basis, and there are a few things that need to be
investigated more.

I'll try anyway, but please, if you are interested in the subject, do
have a look at the numbers themselves, even if there's a lot of them:

- Overhead: the cost of having this patch series applied, and not 
  using core-scheduling, seems acceptable to me. In most cases, the 
  overhead is within the noise margin of the measurements. There are a 
  couple of benchmarks where this is not the case. But that means we 
  can go and try to figure out why this happens only there, and, 
  potentially, optimize and tune.

- PV vs. HVM: there seem to be some differences, in some of the 
  results, for different types of guest (well, for PV I used dom0). In 
  general, HVM seems to behave a little worse, i.e., suffers from more 
  overhead and perf degradation, but this is not the case for all 
  benchmarks, so it's hard to tell whether it's something specific or 
  an actual trend.
  I don't have the numbers for proper PV guests and for PVH. I expect 
  the former to be close to dom0 numbers and the latter to HVM 
  numbers, but I'll try to do those runs as well (as soon as the 
  testbox is free again).

- HT vs. noHT: even before considering core-scheduling at all, the 
  debate is still open about whether or not Hyperthreading helps in the 
  first place. These numbers show that this very much depends on the 
  workload and on the load, which is no big surprise.
  It is quite a solid trend, however, that when load is high (look, 
  for instance, at runs that saturate the CPU, or at oversubscribed 
  runs), Hyperthreading lets us achieve better results.

- Regressions versus no core-scheduling: this happens, as it could 
  have been expected. It does not happen 100% of the time, and 
  mileage may vary, but in most benchmarks and in most configurations, 
  we do regress.

- Core-scheduling vs. no-Hyperthreading: this is again a mixed bag. 
  There are cases where things are faster in one setup, and cases 
  where it is the other one that wins, especially in the non-
  overloaded case.

- Core-scheduling and overloading: when more vCPUs than pCPUs are used 
  (and there is actual overload, i.e., the vCPUs actually generate 
  more load than there are pCPUs to satisfy), core-scheduling shows 
  pretty decent performance. This is easy to see, comparing core-
  scheduling with no-Hyperthreading, in the overloaded cases. In most 
  benchmark, both the configuration perform worse than default, but 
  core-scheduling regresses a lot less than no-Hyperthreading. And 
  this, I think, is quite important!

- Core-scheduling and HT-aware scheduling: currently, the scheduler 
  tends to spread vCPUs among cores. That is, if we have 2 vCPUs and 2 
  cores with two threads each, the 

Re: [Xen-devel] [PATCH L1TF MDS GT v2 2/2] common/grant_table: harden version dependent accesses

2019-07-11 Thread Jan Beulich
On 10.07.2019 14:54, Norbert Manthey wrote:
> Guests can issue grant table operations and provide guest controlled
> data to them. This data is used as an index for memory loads after
> bounds checks have been done. Depending on the grant table version,
> the size of elements in containers differs. As the base data structure
> is a page, the number of elements per page also differs. Consequently,
> bounds checks are version dependent, so that speculative execution can
> happen past several stages: the bounds check as well as the version check.
> 
> This commit mitigates cases where out-of-bounds accesses could happen
> due to the version comparison. In cases where no different memory
> locations are accessed on the code paths that follow an if statement,
> no protection is required. No different memory locations are accessed
> in the following functions after a version check:
> 
>   * gnttab_setup_table: only calculated numbers are used, and then
>  function gnttab_grow_table is called, which is version protected
> 
>   * gnttab_transfer: the case that depends on the version check just gets
>  into copying a page or not
> 
>   * acquire_grant_for_copy: the unfixed comparison is on the abort path
>  and does not access other structures, and on the else branch
>  accesses only structures that have been validated before
> 
>   * gnttab_set_version: all accessible data is allocated for both versions

On v1 I did say "The very first loop is safe only because nr_grant_entries()
is." But anyway, ...

>  Furthermore, the functions gnttab_populate_status_frames and
>  gnttab_unpopulate_status_frames received a block_speculation
>  macro. Hence, this code will only be executed once the correct
>  version is visible in the architectural state.
> 
>   * gnttab_release_mappings: this function is called only during domain
> destruction and control is not returned to the guest
> 
>   * mem_sharing_gref_to_gfn: speculation will be stopped by the second if
> statement, as that places a barrier on any path to be executed.
> 
>   * gnttab_get_status_frame_mfn: no version dependent check, because all
> accesses, except the gt->status[idx], do not perform index-based
> accesses, or speculative out-of-bounds accesses in the
> gnttab_grow_table function call.
> 
>   * gnttab_usage_print: cannot be triggered by the guest
> 
> This is part of the speculative hardening effort.
> 
> Signed-off-by: Norbert Manthey 

Reviewed-by: Jan Beulich 

Jan
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH L1TF MDS GT v2 1/2] common/grant_table: harden bound accesses

2019-07-11 Thread Jan Beulich
On 10.07.2019 14:54, Norbert Manthey wrote:
> Guests can issue grant table operations and provide guest controlled
> data to them. This data is used as an index for memory loads after
> bounds checks have been done. To avoid speculative out-of-bounds
> accesses, we use the array_index_nospec macro where applicable, or the
> macro block_speculation. Note, the block_speculation macro is used on
> all paths in shared_entry_header and nr_grant_entries. This way, after
> a call to such a function, all bounds checks that happened before
> become architecturally visible, so that no additional protection is
> required for corresponding array accesses. As the way we introduce an
> lfence
> instruction might allow the compiler to reload certain values from
> memory multiple times, we try to avoid speculatively continuing
> execution with stale register data by moving relevant data into
> function local variables.
> 
> Speculative execution is not blocked in case one of the following
> properties is true:
>   - path cannot be triggered by the guest
>   - path does not return to the guest
>   - path does not result in an out-of-bounds access
>   - path cannot be executed repeatedly

Upon re-reading I think this last item isn't fully applicable: I think
you attach such an attribute to domain creation/destruction functions.
Those aren't executed repeatedly for a single guest, but e.g. rapid
rebooting of a guest (or several ones) may actually be able to train
such paths.
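
For readers following along, the per-access pattern the series applies
is, in minimal sketch form (illustrative code, not taken from the patch;
nr_grant_entries() and shared_entry_v1() are the existing grant table
helpers, as I understand them):

    /*
     * The architectural bounds check alone does not stop a mispredicted
     * branch from loading out of bounds, so the guest-supplied index is
     * additionally clamped with array_index_nospec() before the access.
     */
    static grant_entry_v1_t *sketch_lookup(struct grant_table *gt,
                                           grant_ref_t ref)
    {
        if ( ref >= nr_grant_entries(gt) )  /* architectural check */
            return NULL;

        ref = array_index_nospec(ref, nr_grant_entries(gt));

        return &shared_entry_v1(gt, ref);   /* safe even under speculation */
    }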

> @@ -2091,6 +2100,7 @@ gnttab_transfer(
>   struct page_info *page;
>   int i;
>   struct gnttab_transfer gop;
> +grant_ref_t ref;

This declaration would better live in the more narrow scope it's
(only) used in.

> @@ -2237,8 +2247,14 @@ gnttab_transfer(
>   */
>   spin_unlock(&e->page_alloc_lock);
>   okay = gnttab_prepare_for_transfer(e, d, gop.ref);
> +ref = gop.ref;

Other than in the earlier cases here you copy a variable that's
already local to the function. Is this really helpful?

Independent of this - is there a particular reason you latch the
value into the (second) local variable only after its first use? It
likely won't matter much, but it's a little puzzling nevertheless.

> -if ( unlikely(!okay || assign_pages(e, page, 0, MEMF_no_refcount)) )
> +/*
> + * Make sure the reference bound check in gnttab_prepare_for_transfer
> + * is respected and speculative execution is blocked accordingly
> + */
> +if ( unlikely(!evaluate_nospec(okay)) ||
> +unlikely(assign_pages(e, page, 0, MEMF_no_refcount)) )

If I can trust my mail UI (which I'm not sure I can) there's an
indentation issue here.

> @@ -3853,7 +3879,8 @@ static int gnttab_get_status_frame_mfn(struct domain *d,
>   return -EINVAL;
>   }
>   
> -*mfn = _mfn(virt_to_mfn(gt->status[idx]));
> +/* Make sure idx is bounded wrt nr_status_frames */
> +*mfn = _mfn(virt_to_mfn(gt->status[array_index_nospec(idx, 
> nr_status_frames(gt))]));

This and ...

> @@ -3882,7 +3909,8 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
>   return -EINVAL;
>   }
>   
> -*mfn = _mfn(virt_to_mfn(gt->shared_raw[idx]));
> +/* Make sure idx is bounded wrt nr_status_frames */
> +*mfn = _mfn(virt_to_mfn(gt->shared_raw[array_index_nospec(idx, 
> nr_grant_frames(gt))]));

... this line are too long now and hence need wrapping.

Jan
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [GSoC-2019] About the crossbar and xen

2019-07-11 Thread Denis Obrezkov
Hi,

>>
>> I am interested in whether we should do something with omap5-wugen-mpu.
>> I found that the crossbar is connected to the GIC, and in some diagrams
>> in the TRM it is connected via omap5-wugen-mpu. So it is not clear to me
>> whether it should be handled in Xen.
> 

Also, I am interested in how to add the crossbar. I can see two options,
as we discussed earlier. The first option is to remove the crossbar, but
that might cause some problems, since a guest might want to use it.
The second option is to expose the crossbar and intercept all the calls
to it. But the problem is that xen currently supports only one
interrupt-controller, and at the same time most of the SPI IRQs are
mapped to the crossbar. So, when xen checks whether the main
interrupt-controller is the same as the one to which external interrupts
are mapped, it fails
(xen/common/device_tree.c:dt_irq_translate()).
And I don't think that I should change the interrupt-parent property of
the devices to map them to the GIC, because that is essentially the first
option mentioned above. So it seems the interrupt-parent lookup logic
should probably be changed a bit, e.g. to find a GIC node not only in the
direct interrupt-parent but transitively in one of its ancestors:

UART -> crossbar -> wugen -> GIC
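
To make the idea concrete, the lookup could walk the chain roughly like
this (just a sketch: dt_irq_parent() and dt_node_is_gic() are
hypothetical helpers, standing in for whatever xen/common/device_tree.c
actually offers):

    /* Walk interrupt-parent links transitively until the GIC (or
     * nothing) is found, e.g. UART -> crossbar -> wugen -> GIC. */
    static const struct dt_device_node *
    find_gic_irq_ancestor(const struct dt_device_node *dev)
    {
        const struct dt_device_node *np = dt_irq_parent(dev);

        while ( np != NULL && !dt_node_is_gic(np) )
            np = dt_irq_parent(np);

        return np; /* NULL if there is no GIC ancestor */
    }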

What do you think?

-- 
Regards, Denis Obrezkov

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [linux-4.4 test] 138882: regressions - FAIL

2019-07-11 Thread osstest service owner
flight 138882 linux-4.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/138882/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-raw 10 debian-di-installfail REGR. vs. 138573

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1   7 xen-boot fail   never pass
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-arm64-arm64-xl   7 xen-boot fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm   7 xen-boot fail   never pass
 test-arm64-arm64-xl-seattle   7 xen-boot fail   never pass
 test-arm64-arm64-xl-credit2   7 xen-boot fail   never pass
 test-arm64-arm64-examine  8 reboot   fail   never pass
 test-arm64-arm64-libvirt-xsm  7 xen-boot fail   never pass
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-arm64-arm64-xl-thunderx  7 xen-boot fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 linux                7bbf48947605d6ccef21a896c4b44dc356dc8726
baseline version:
 linux                72d1ee93e9311c88809585a114c138bc6a43627a

Last test of basis   138573  2019-06-27 00:40:41 Z   14 days
Testing same since   138882  2019-07-10 08:11:16 Z    1 days    1 attempts


People who touched revisions under test:
  Adeodato Simó 
  Alejandro Jimenez 
  Alessio Balsini 
  Alexander Potapenko 
  Alexandra Winter 
  Alexandre Belloni 

[Xen-devel] [linux-4.14 test] 138881: tolerable FAIL - PUSHED

2019-07-11 Thread osstest service owner
flight 138881 linux-4.14 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/138881/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-pvshim12 guest-start  fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass

version targeted for testing:
 linux                aea8526edf59da3ff5306ca408e13d8f6ab89b34
baseline version:
 linux                e3c1b27308ae0472f27e07903181d6abfe0cb1d7

Last test of basis   138732  2019-07-03 11:39:48 Z    7 days
Testing same since   138881  2019-07-10 08:10:31 Z    0 days    1 attempts


People who