[Xen-devel] [xen-4.7-testing test] 110430: tolerable FAIL - PUSHED

2017-06-14 Thread osstest service owner
flight 110430 xen-4.7-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110430/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds   15 guest-start/debian.repeat fail blocked in 109620
 test-xtf-amd64-amd64-1  45 xtf/test-hvm64-lbr-tsx-vmentry fail like 109586
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail like 109620
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 109620
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 109620
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail like 109620
 test-armhf-armhf-libvirt 13 saverestore-support-check fail like 109620
 test-amd64-amd64-xl-qemut-ws16-amd64 9 windows-install fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 9 windows-install fail never pass
 test-amd64-i386-libvirt 12 migrate-support-check fail never pass
 test-amd64-amd64-xl-pvh-amd 11 guest-start fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm 13 saverestore-support-check fail never pass
 test-arm64-arm64-xl 12 migrate-support-check fail never pass
 test-arm64-arm64-xl 13 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 saverestore-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-xsm 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 13 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl 12 migrate-support-check fail never pass
 test-armhf-armhf-xl 13 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-vhd 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 12 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 9 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 9 windows-install fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 9 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 9 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 9 windows-install fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 9 windows-install fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass

version targeted for testing:
 xen  84cd8d3fbdfbc0655ad242da1d2fdadddf5be89e
baseline version:
 xen  7a0bf3eef7b9cc3958de61d537c699b200be4163

Last test of basis   109620  2017-05-19 16:27:47 Z   26 days
Failing since        110185  2017-06-09 12:23:59 Z    5 days    7 attempts
Testing same since   110430  2017-06-14 06:46:55 Z    0 days    1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Boris 

[Xen-devel] [qemu-mainline test] 110428: regressions - FAIL

2017-06-14 Thread osstest service owner
flight 110428 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110428/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd   5 xen-install  fail REGR. vs. 109975

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-start/win.repeat fail blocked in 109975
 test-armhf-armhf-libvirt 13 saverestore-support-check fail like 109975
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail like 109975
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail like 109975
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail like 109975
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeat fail like 109975
 test-amd64-i386-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 9 windows-install fail never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 13 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail never pass
 test-armhf-armhf-xl 12 migrate-support-check fail never pass
 test-armhf-armhf-xl 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-xsm 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 13 saverestore-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail never pass
 test-arm64-arm64-xl 12 migrate-support-check fail never pass
 test-arm64-arm64-xl 13 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 9 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 9 windows-install fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 9 windows-install fail never pass

version targeted for testing:
 qemuu  3f0602927b120a480b35dcf58cf6f95435b3ae91
baseline version:
 qemuu  c6e84fbd447a51e1161d74d71566a5f67b47eac5

Last test of basis   109975  2017-06-04 00:16:43 Z   11 days
Failing since        110013  2017-06-05 10:45:10 Z    9 days   13 attempts
Testing same since   110428  2017-06-14 05:53:59 Z    0 days    1 attempts


People who touched revisions under test:
  Aaron Larson 
  Abdallah Bouassida 
  Alex Bennée 
  Aurelien Jarno 
  Bruno Dominguez 
  Christian Borntraeger 
  Cornelia Huck 
  Cédric Le Goater 
  Daniel Barboza 
  Daniel P. Berrange 
  David Gibson 
  David Hildenbrand 
  Denis Plotnikov 
  Eduardo Habkost 
  Emilio G. Cota 
  Eric Auger 
  Eric Blake 

[Xen-devel] [linux-linus test] 110427: tolerable FAIL - PUSHED

2017-06-14 Thread osstest service owner
flight 110427 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110427/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail REGR. vs. 110380
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeat fail REGR. vs. 110380

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail like 110380
 test-armhf-armhf-libvirt 13 saverestore-support-check fail like 110380
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail like 110380
 test-amd64-amd64-xl-qemut-win7-amd64 15 guest-localmigrate/x10 fail like 110380
 test-amd64-i386-xl-qemut-win7-amd64 15 guest-localmigrate/x10 fail like 110380
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail like 110380
 test-amd64-amd64-xl-rtds 9 debian-install fail like 110380
 test-amd64-amd64-xl-qemut-ws16-amd64 9 windows-install fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 9 windows-install fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 13 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-xl 12 migrate-support-check fail never pass
 test-arm64-arm64-xl 13 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl 12 migrate-support-check fail never pass
 test-armhf-armhf-xl 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-vhd 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 12 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 9 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 9 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 9 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 9 windows-install fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 9 windows-install fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 9 windows-install fail never pass

version targeted for testing:
 linux  63f700aab4c11d46626de3cd051dae56cf7e9056
baseline version:
 linux  32c1431eea4881a6b17bd7c639315010aeefa452

Last test of basis   110380  2017-06-12 17:27:04 Z    2 days
Testing same since   110399  2017-06-13 06:37:48 Z    1 days    2 attempts


People who touched revisions under test:
  Christian Borntraeger 
  Cornelia Huck 
  Harald Freudenberger 
  Linus 

Re: [Xen-devel] [RFC PATCH] docs: add README.atomic

2017-06-14 Thread Stefano Stabellini
On Wed, 14 Jun 2017, Jan Beulich wrote:
> >>> Stefano Stabellini  06/14/17 8:45 PM >>>
> >On Wed, 14 Jun 2017, Jan Beulich wrote:
> >> > +What ACCESS_ONCE does *not* guarantee though is this access is done in a
> >> > +single instruction, so complex or non-native or unaligned data types are
> >> > +not guaranteed to be atomic. If for instance counter would be a 64-bit value
> >> > +on a 32-bit system, the compiler would probably generate two load instructions,
> >> > +which could end up in reading a wrong value if some other CPU changes the other
> >> > +half of the variable in between those two reads.
> >> > +However accessing _aligned and native_ data types is guaranteed to be atomic
> >> > +in the architectures supported by Xen, so ACCESS_ONCE is safe to use when
> >> > +these conditions are met.
> >> 
> >> As mentioned before, such a guarantee does not exist. Please only
> >> state what is really the case, i.e. we _expect_ compilers to behave
> >> this way.
> >
> >Regarding compilers support: do we state clearly in any docs or website
> >what are the compilers we actually support? I think this would be the
> >right opportunity to do it.
> 
> At the very least we state somewhere what gcc versions we support. However,
> I can't see the relation of such a statement to the discussion here.

The relation is that our "compiler expectations" shape what compilers we
support.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [seabios baseline-only test] 71564: tolerable FAIL

2017-06-14 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 71564 seabios real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71564/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop   fail blocked in 71557
 build-amd64-libvirt 5 libvirt-build fail like 71557
 build-i386-libvirt 5 libvirt-build fail like 71557
 test-amd64-amd64-xl-qemuu-winxpsp3 17 guest-start/win.repeat fail like 71557
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail like 71557
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 windows-install fail like 71557

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 seabios  7759d3a5be049eb8d0b4f7c6b1f1a0ba5e871cf3
baseline version:
 seabios  58953eb793b7f43f9cbb72bd7802922746235266

Last test of basis    71557  2017-06-13 06:20:39 Z    1 days
Testing same since    71564  2017-06-14 18:50:04 Z    0 days    1 attempts


People who touched revisions under test:
  Kevin O'Connor 
  Patrick Rudolph 
  Youness Alaoui 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmblocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-qemuu-nested-amdfail
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-qemuu-nested-intel  pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 fail
 test-amd64-amd64-xl-qemuu-winxpsp3   fail
 test-amd64-i386-xl-qemuu-winxpsp3pass



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.


commit 7759d3a5be049eb8d0b4f7c6b1f1a0ba5e871cf3
Author: Youness Alaoui 
Date:   Mon Jun 12 21:09:07 2017 -0400

nvme: Enable NVMe support for non-qemu hardware

NVMe support was tested on purism/librem13 laptops and SeaBIOS has
no problems in detecting and booting the drives.

This is a continuation of commit 235a8190 which was incomplete.

Signed-off-by: Youness Alaoui 
Signed-off-by: Kevin O'Connor 

commit e30d51cc58065513279cbe3288108555240e7c44
Author: Patrick Rudolph 
Date:   Mon May 29 19:25:14 2017 +0200

SeaVGABios/cbvga: Advertise compatible VESA modes

Advertise compatible VESA modes that are smaller than or equal to
coreboot's active framebuffer. Only modes that have the same Bpp
are advertised and can be selected.

Allows the Windows 7 bootloader NTLDR to show up in VESA mode.
Allows to show the Windows 7 boot logo.
Allows Windows to boot in safe mode and in normal boot using
VgaSave driver with resolution up to 1600x1200.

This most likely fixes other bootloaders and operating systems as well,
in case they are relying on VESA framebuffer support.

Signed-off-by: Patrick Rudolph 

commit 6b69446de71a6f8a472798a38c08881ec42a8518
Author: 

Re: [Xen-devel] [RFC PATCH 3/4] xl: introduce facility to run function with per-domain lock held

2017-06-14 Thread Wei Liu
On Wed, Jun 14, 2017 at 06:19:20PM +0100, Wei Liu wrote:
> Signed-off-by: Wei Liu 
> ---
>  tools/xl/xl.h   |  1 +
>  tools/xl/xl_utils.c | 19 +++
>  tools/xl/xl_utils.h |  3 +++
>  3 files changed, 23 insertions(+)
> 
> diff --git a/tools/xl/xl.h b/tools/xl/xl.h
> index 93ec4d7e4c..8d667ff444 100644
> --- a/tools/xl/xl.h
> +++ b/tools/xl/xl.h
> @@ -292,6 +292,7 @@ extern void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
>  
>  #define XL_GLOBAL_CONFIG XEN_CONFIG_DIR "/xl.conf"
>  #define XL_LOCK_FILE XEN_LOCK_DIR "/xl"
> +#define XL_DOMAIN_LOCK_FILE_FMT XEN_LOCK_DIR "/xl-%u"
>  
>  #endif /* XL_H */
>  
> diff --git a/tools/xl/xl_utils.c b/tools/xl/xl_utils.c
> index e7038ec324..bb32ba0a1f 100644
> --- a/tools/xl/xl_utils.c
> +++ b/tools/xl/xl_utils.c
> @@ -27,6 +27,25 @@
>  #include "xl.h"
>  #include "xl_utils.h"
>  
> +int with_lock(uint32_t domid, domain_fn fn, void *arg)
> +{
> +char filename[sizeof(XL_DOMAIN_LOCK_FILE_FMT)+15];
> +int fd_lock = -1;
> +int rc;
> +
> +snprintf(filename, sizeof(filename), XL_DOMAIN_LOCK_FILE_FMT, domid);
> +
> +rc = acquire_lock(filename, &fd_lock);
> +if (rc) goto out;
> +

It is necessary to check if the domain is still valid here. And we
should probably accept a string instead of domid in this function and
call find_domain, so that we can retry. Basically:

   retry:
   domid = find_domain();
   snprintf(...)
   rc = acquire_lock()
   if (rc) goto out;

   if (domain is not valid anymore) {
   release_lock();
   goto retry;
   }

   /* ... the rest ...*/

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [libvirt test] 110425: tolerable all pass - PUSHED

2017-06-14 Thread osstest service owner
flight 110425 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110425/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 13 saverestore-support-check fail like 110397
 test-armhf-armhf-libvirt-raw 12 saverestore-support-check fail like 110397
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-check fail like 110397
 test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-arm64-arm64-libvirt 12 migrate-support-check fail never pass
 test-arm64-arm64-libvirt 13 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 12 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check fail never pass

version targeted for testing:
 libvirt  992bf863fccfe1fa1d0c5a5277b9cee50abc48ef
baseline version:
 libvirt  2feb2fe2512771763000930b68b689750c124454

Last test of basis   110397  2017-06-13 04:21:48 Z    1 days
Testing same since   110425  2017-06-14 04:26:23 Z    0 days    1 attempts


People who touched revisions under test:
  Erik Skultety 
  Jiri Denemark 
  Michal Privoznik 
  Philipp Hahn 
  Xi Xu 
  Yi Wang 

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-arm64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-arm64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-libvirt-xsm pass
 test-arm64-arm64-libvirt-xsm pass
 test-armhf-armhf-libvirt-xsm pass
 test-amd64-i386-libvirt-xsm  pass
 test-amd64-amd64-libvirt pass
 test-arm64-arm64-libvirt pass
 test-armhf-armhf-libvirt pass
 test-amd64-i386-libvirt  pass
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair pass
 test-arm64-arm64-libvirt-qcow2   pass
 test-armhf-armhf-libvirt-raw pass
 test-amd64-amd64-libvirt-vhd pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master

[Xen-devel] [linux-4.9 test] 110423: regressions - FAIL

2017-06-14 Thread osstest service owner
flight 110423 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110423/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit2   6 xen-boot fail REGR. vs. 107358

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-pair 11 host-ping-check-xen/src_host fail pass in 110396
 test-amd64-i386-xl-qemut-win7-amd64 15 guest-localmigrate/x10 fail pass in 110396

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop   fail REGR. vs. 107358
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stopfail REGR. vs. 107358
 test-amd64-amd64-xl-rtds  9 debian-install   fail REGR. vs. 107358

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-start/win.repeat fail blocked in 107358
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail in 110396 like 107358
 test-armhf-armhf-libvirt-xsm 6 xen-boot fail like 107358
 test-armhf-armhf-xl-multivcpu 6 xen-boot fail like 107358
 test-armhf-armhf-xl-vhd 6 xen-boot fail like 107358
 test-armhf-armhf-xl-rtds 6 xen-boot fail like 107358
 test-armhf-armhf-libvirt-raw 6 xen-boot fail like 107358
 test-armhf-armhf-xl 6 xen-boot fail like 107358
 test-armhf-armhf-xl-xsm 6 xen-boot fail like 107358
 test-armhf-armhf-libvirt 6 xen-boot fail like 107358
 test-amd64-i386-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-i386-libvirt 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 9 windows-install fail never pass
 test-armhf-armhf-xl-arndale 6 xen-boot fail never pass
 test-arm64-arm64-xl 12 migrate-support-check fail never pass
 test-arm64-arm64-xl 13 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 9 windows-install fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-arm64-arm64-xl-credit2 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check fail never pass
 test-armhf-armhf-examine 6 reboot fail never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm 12 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemut-win10-i386 9 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 9 windows-install fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 9 windows-install fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 9 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 9 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 9 windows-install fail never pass

version targeted for testing:
 linux  f1aa865ae5d4608cbfbb02f42baa1ef5ed95fce2
baseline version:
 linux  37feaf8095d352014555b82adb4a04609ca17d3f

Last test of basis   107358  2017-04-10 19:42:52 Z   65 days
Failing since        107396  2017-04-12 11:15:19 Z   63 days   96 attempts
Testing same since   110082  2017-06-07 11:34:26 Z    7 days    9 attempts


605 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm  pass
 build-arm64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-arm64  pass
 build-armhf  pass
 build-i386 

Re: [Xen-devel] [PATCH v3 06/18] xen/pvcalls: handle commands from the frontend

2017-06-14 Thread Stefano Stabellini
On Mon, 12 Jun 2017, Boris Ostrovsky wrote:
> > +
> >  static void pvcalls_back_work(struct work_struct *work)
> >  {
> > +   struct pvcalls_fedata *priv = container_of(work,
> > +   struct pvcalls_fedata, register_work);
> > +   int notify, notify_all = 0, more = 1;
> > +   struct xen_pvcalls_request req;
> > +   struct xenbus_device *dev = priv->dev;
> > +
> > +   while (more) {
> > +   while (RING_HAS_UNCONSUMED_REQUESTS(&priv->ring)) {
> > +   RING_COPY_REQUEST(&priv->ring,
> > + priv->ring.req_cons++,
> > + &req);
> > +
> > +   if (!pvcalls_back_handle_cmd(dev, &req)) {
> > +   RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(
> > +   &priv->ring, notify);
> > +   notify_all += notify;
> > +   }
> > +   }
> > +
> > +   if (notify_all)
> > +   notify_remote_via_irq(priv->irq);
> > +
> > +   RING_FINAL_CHECK_FOR_REQUESTS(&priv->ring, more);
> > +   }
> >  }
> >  
> >  static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
> >  {
> > +   struct xenbus_device *dev = dev_id;
> > +   struct pvcalls_fedata *priv = NULL;
> > +
> > +   if (dev == NULL)
> > +   return IRQ_HANDLED;
> > +
> > +   priv = dev_get_drvdata(&dev->dev);
> > +   if (priv == NULL)
> > +   return IRQ_HANDLED;
> > +
> > +   /*
> > +* TODO: a small theoretical race exists if we try to queue work
> > +* after pvcalls_back_work checked for final requests and before
> > +* it returns. The queuing will fail, and pvcalls_back_work
> > +* won't do the work because it is about to return. In that
> > +* case, we lose the notification.
> > +*/
> > +   queue_work(priv->wq, &priv->register_work);
> 
> Would queuing delayed work (if queue_work() failed) help? And canceling
> it on next invocation of pvcalls_back_event()?

Looking at the implementation of queue_delayed_work_on and
queue_work_on, it looks like that if queue_work fails then also
queue_delayed_work would fail: they both test on
WORK_STRUCT_PENDING_BIT.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [xen-unstable-smoke test] 110455: tolerable trouble: broken/pass - PUSHED

2017-06-14 Thread osstest service owner
flight 110455 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110455/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt 12 migrate-support-check fail never pass
 test-armhf-armhf-xl 12 migrate-support-check fail never pass
 test-armhf-armhf-xl 13 saverestore-support-check fail never pass

version targeted for testing:
 xen  695bb5f504ab48c1d546446f104c1b6c0ead126d
baseline version:
 xen  c55667bd0ad8f04688abfd5c6317709dc00f88ab

Last test of basis   110440  2017-06-14 13:01:40 Z    0 days
Testing same since   110455  2017-06-14 19:02:27 Z    0 days    1 attempts


People who touched revisions under test:
  Andre Przywara 
  Julien Grall 
  Stefano Stabellini 
  Vijaya Kumar K 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable-smoke
+ revision=695bb5f504ab48c1d546446f104c1b6c0ead126d
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push xen-unstable-smoke 695bb5f504ab48c1d546446f104c1b6c0ead126d
+ branch=xen-unstable-smoke
+ revision=695bb5f504ab48c1d546446f104c1b6c0ead126d
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
++++ getconfig Repos
++++ perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable-smoke
+ qemuubranch=qemu-upstream-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ '[' xqemu-upstream-unstable = x ']'
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable-smoke
+ prevxenbranch=xen-4.9-testing
+ '[' x695bb5f504ab48c1d546446f104c1b6c0ead126d = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/xtf.git
++ : osst...@xenbits.xen.org:/home/xen/git/xtf.git
++ : git://xenbits.xen.org/xtf.git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/rumprun.git
++ : git://git.seabios.org/seabios.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/seabios.git
++ : 

Re: [Xen-devel] [PATCH] xen/include/asm-x86/hvm/svm/vmcb.h: Correction in comments.

2017-06-14 Thread Boris Ostrovsky
On 06/14/2017 03:19 PM, Dushyant Behl wrote:
> The VMEXIT codes listed from EXCEPTION_PF to EXCEPTION_XF had comments
> describing exitcodes slightly shifted from the expected values.
> The expected exitcode for page-fault is 78, which is 0x4E, and so on
> through exception XF.
>
> Signed-off-by: Dushyant Behl 


Reviewed-by: Boris Ostrovsky 



___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v3 11/18] xen/pvcalls: implement accept command

2017-06-14 Thread Stefano Stabellini
On Wed, 14 Jun 2017, Juergen Gross wrote:
> On 14/06/17 02:47, Stefano Stabellini wrote:
> > On Tue, 13 Jun 2017, Juergen Gross wrote:
> >> On 02/06/17 21:31, Stefano Stabellini wrote:
> >>> Implement the accept command by calling inet_accept. To avoid blocking
> >>> in the kernel, call inet_accept(O_NONBLOCK) from a workqueue, which gets
> >>> scheduled on sk_data_ready (for a passive socket, it means that there
> >>> are connections to accept).
> >>>
> >>> Use the reqcopy field to store the request. Accept the new socket from
> >>> the delayed work function, create a new sock_mapping for it, map
> >>> the indexes page and data ring, and reply to the other end. Allocate an
> >>> ioworker for the socket.
> >>>
> >>> Only support one outstanding blocking accept request for every socket at
> >>> any time.
> >>>
> >>> Add a field to sock_mapping to remember the passive socket from which an
> >>> active socket was created.
> >>>
> >>> Signed-off-by: Stefano Stabellini 
> >>> CC: boris.ostrov...@oracle.com
> >>> CC: jgr...@suse.com
> >>> ---
> >>>  drivers/xen/pvcalls-back.c | 109 
> >>> -
> >>>  1 file changed, 108 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> >>> index a75586e..f1173f4 100644
> >>> --- a/drivers/xen/pvcalls-back.c
> >>> +++ b/drivers/xen/pvcalls-back.c
> >>> @@ -65,6 +65,7 @@ struct pvcalls_ioworker {
> >>>  struct sock_mapping {
> >>>   struct list_head list;
> >>>   struct pvcalls_fedata *priv;
> >>> + struct sockpass_mapping *sockpass;
> >>>   struct socket *sock;
> >>>   uint64_t id;
> >>>   grant_ref_t ref;
> >>> @@ -275,10 +276,79 @@ static int pvcalls_back_release(struct 
> >>> xenbus_device *dev,
> >>>  
> >>>  static void __pvcalls_back_accept(struct work_struct *work)
> >>>  {
> >>> + struct sockpass_mapping *mappass = container_of(
> >>> + work, struct sockpass_mapping, register_work);
> >>> + struct sock_mapping *map;
> >>> + struct pvcalls_ioworker *iow;
> >>> + struct pvcalls_fedata *priv;
> >>> + struct socket *sock;
> >>> + struct xen_pvcalls_response *rsp;
> >>> + struct xen_pvcalls_request *req;
> >>> + int notify;
> >>> + int ret = -EINVAL;
> >>> + unsigned long flags;
> >>> +
> >>> + priv = mappass->priv;
> >>> + /* We only need to check the value of "cmd" atomically on read. */
> >>> + spin_lock_irqsave(&mappass->copy_lock, flags);
> >>> + req = &mappass->reqcopy;
> >>> + if (req->cmd != PVCALLS_ACCEPT) {
> >>> + spin_unlock_irqrestore(&mappass->copy_lock, flags);
> >>> + return;
> >>> + }
> >>> + spin_unlock_irqrestore(&mappass->copy_lock, flags);
> >>
> >> What about:
> >>	req = &mappass->reqcopy;
> >>if (ACCESS_ONCE(req->cmd) != PVCALLS_ACCEPT)
> >>return;
> >>
> >> I can't see the need for taking a lock here.
> > 
> > Sure, good idea
> > 
> > 
> >>> +
> >>> + sock = sock_alloc();
> >>> + if (sock == NULL)
> >>> + goto out_error;
> >>> + sock->type = mappass->sock->type;
> >>> + sock->ops = mappass->sock->ops;
> >>> +
> >>> + ret = inet_accept(mappass->sock, sock, O_NONBLOCK, true);
> >>> + if (ret == -EAGAIN) {
> >>> + sock_release(sock);
> >>> + goto out_error;
> >>> + }
> >>> +
> >>> + map = pvcalls_new_active_socket(priv,
> >>> + req->u.accept.id_new,
> >>> + req->u.accept.ref,
> >>> + req->u.accept.evtchn,
> >>> + sock);
> >>> + if (!map) {
> >>> + sock_release(sock);
> >>> + goto out_error;
> >>> + }
> >>> +
> >>> + map->sockpass = mappass;
> >>> + iow = &map->ioworker;
> >>> + atomic_inc(&map->read);
> >>> + atomic_inc(&map->io);
> >>> + queue_work_on(iow->cpu, iow->wq, &iow->register_work);
> >>> +
> >>> +out_error:
> >>> + rsp = RING_GET_RESPONSE(&priv->ring, priv->ring.rsp_prod_pvt++);
> >>> + rsp->req_id = req->req_id;
> >>> + rsp->cmd = req->cmd;
> >>> + rsp->u.accept.id = req->u.accept.id;
> >>> + rsp->ret = ret;
> >>> + RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&priv->ring, notify);
> >>> + if (notify)
> >>> + notify_remote_via_irq(priv->irq);
> >>> +
> >>> + spin_lock_irqsave(&mappass->copy_lock, flags);
> >>> + mappass->reqcopy.cmd = 0;
> >>> + spin_unlock_irqrestore(&mappass->copy_lock, flags);
> >>
> >> ACCESS_ONCE(mappass->reqcopy.cmd) = 0;
> > 
> > OK
> > 
> > 
> >>>  }
> >>>  
> >>>  static void pvcalls_pass_sk_data_ready(struct sock *sock)
> >>>  {
> >>> + struct sockpass_mapping *mappass = sock->sk_user_data;
> >>> +
> >>> + if (mappass == NULL)
> >>> + return;
> >>> +
> >>> + queue_work(mappass->wq, &mappass->register_work);
> >>>  }
> >>>  
> >>>  static int pvcalls_back_bind(struct xenbus_device *dev,
> >>> @@ -380,7 +450,44 @@ static int pvcalls_back_listen(struct xenbus_device 
> >>> *dev,
> >>>  static int pvcalls_back_accept(struct xenbus_device *dev,
> >>>  struct xen_pvcalls_request *req)
> >>>  {
> >>> - return 0;
> >>> + struct pvcalls_fedata *priv;
> >>> + struct 

Re: [Xen-devel] Incorrect Comment in xen/include/asm-x86/hvm/svm/vmcb.h

2017-06-14 Thread Dushyant Behl
On Wed, Jun 14, 2017 at 2:19 PM, Jan Beulich  wrote:
 On 13.06.17 at 17:49,  wrote:
>> Hi Everyone,
>>
>> I was looking at the SVM setup code in Xen when I noticed that some
>> comments describing the VMEXIT codes look wrong.
>> The processor exception exitcodes listed from VMEXIT_EXCEPTION_PF to
>> VM_EXCEPTION_XF seem to describe the hexadecimal exit code different
>> than the expected value.
>>
>> This section is taken from xen/include/asm-x86/hvm/svm/vmcb.h
>>
>> VMEXIT_EXCEPTION_PF  =  78, /* 0x4f, page-fault */
>> VMEXIT_EXCEPTION_15  =  79, /* 0x50, reserved */
>> VMEXIT_EXCEPTION_MF  =  80, /* 0x51, x87 floating-point exception-pending */
>> VMEXIT_EXCEPTION_AC  =  81, /* 0x52, alignment-check */
>> VMEXIT_EXCEPTION_MC  =  82, /* 0x53, machine-check */
>> VMEXIT_EXCEPTION_XF  =  83, /* 0x54, simd floating-point */
>>
>> The expected exception code for page-fault is 78 which should be 0x4E
>> in hexadecimal, same case for all the exceptions till XF.
>> If this needs correction please let me know, will be happy to submit a
>> patch.
>
> Please do; in fact I don't see why you didn't right away.

Done. Thanks. Next time I'll send one right away :)

-
Dushyant



[Xen-devel] [PATCH] xen/include/asm-x86/hvm/svm/vmcb.h: Correction in comments.

2017-06-14 Thread Dushyant Behl
The VMEXIT codes listed from EXCEPTION_PF to EXCEPTION_XF had comments
describing exitcodes slightly shifted from the expected values.
The expected exitcode for page-fault is 78, which is 0x4E, and so on
through exception XF.

Signed-off-by: Dushyant Behl 
---
 xen/include/asm-x86/hvm/svm/vmcb.h | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/include/asm-x86/hvm/svm/vmcb.h 
b/xen/include/asm-x86/hvm/svm/vmcb.h
index 6bbab1e..30a228b 100644
--- a/xen/include/asm-x86/hvm/svm/vmcb.h
+++ b/xen/include/asm-x86/hvm/svm/vmcb.h
@@ -244,12 +244,12 @@ enum VMEXIT_EXITCODE
 VMEXIT_EXCEPTION_NP  =  75, /* 0x4b, segment-not-present */
 VMEXIT_EXCEPTION_SS  =  76, /* 0x4c, stack */
 VMEXIT_EXCEPTION_GP  =  77, /* 0x4d, general-protection */
-VMEXIT_EXCEPTION_PF  =  78, /* 0x4f, page-fault */
-VMEXIT_EXCEPTION_15  =  79, /* 0x50, reserved */
-VMEXIT_EXCEPTION_MF  =  80, /* 0x51, x87 floating-point exception-pending 
*/
-VMEXIT_EXCEPTION_AC  =  81, /* 0x52, alignment-check */
-VMEXIT_EXCEPTION_MC  =  82, /* 0x53, machine-check */
-VMEXIT_EXCEPTION_XF  =  83, /* 0x54, simd floating-point */
+VMEXIT_EXCEPTION_PF  =  78, /* 0x4e, page-fault */
+VMEXIT_EXCEPTION_15  =  79, /* 0x4f, reserved */
+VMEXIT_EXCEPTION_MF  =  80, /* 0x50, x87 floating-point exception-pending 
*/
+VMEXIT_EXCEPTION_AC  =  81, /* 0x51, alignment-check */
+VMEXIT_EXCEPTION_MC  =  82, /* 0x52, machine-check */
+VMEXIT_EXCEPTION_XF  =  83, /* 0x53, simd floating-point */
 
 /* exceptions 20-31 (exitcodes 84-95) are reserved */
 
-- 
2.7.4
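The arithmetic behind the fix is mechanical: on SVM, an exception with vector N exits with code 64 + N (0x40 + N), as the surrounding enum entries confirm (e.g. VMEXIT_EXCEPTION_GP = 77 = 0x4d for vector 13). A standalone check of the corrected comments; the helper name is ours, purely for illustration:

```c
#include <assert.h>

/* SVM maps processor exception vector N to VMEXIT code 0x40 + N.
 * Page fault is vector 14, so its exitcode is 78 == 0x4e -- not
 * 0x4f as the old comments claimed. */
static int svm_exception_exitcode(int vector)
{
    return 0x40 + vector;
}
```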




Re: [Xen-devel] [PATCH v3 14/18] xen/pvcalls: disconnect and module_exit

2017-06-14 Thread Stefano Stabellini
On Wed, 14 Jun 2017, Boris Ostrovsky wrote:
> >>>  static int backend_disconnect(struct xenbus_device *dev)
> >>>  {
> >>> + struct pvcalls_fedata *priv;
> >>> + struct sock_mapping *map, *n;
> >>> + struct sockpass_mapping *mappass;
> >>> + struct radix_tree_iter iter;
> >>> + void **slot;
> >>> +
> >>> +
> >>> + priv = dev_get_drvdata(&dev->dev);
> 
> Can you also rename priv to something else (like fedata)? And in other
> routines too.

Yes, done



Re: [Xen-devel] [PATCH 2/2] xen/livepatch: Don't crash on encountering STN_UNDEF relocations

2017-06-14 Thread Konrad Rzeszutek Wilk
On Wed, Jun 14, 2017 at 07:33:57PM +0100, Andrew Cooper wrote:
> On 14/06/17 15:18, Konrad Rzeszutek Wilk wrote:
> > On Wed, Jun 14, 2017 at 04:24:00AM -0600, Jan Beulich wrote:
> > On 14.06.17 at 12:13,  wrote:
> >>> On 14/06/17 11:11, Jan Beulich wrote:
> >>> On 13.06.17 at 22:51,  wrote:
> > --- a/xen/arch/x86/livepatch.c
> > +++ b/xen/arch/x86/livepatch.c
> > @@ -170,14 +170,22 @@ int arch_livepatch_perform_rela(struct 
> > livepatch_elf 
> >>> *elf,
> >  uint8_t *dest = base->load_addr + r->r_offset;
> >  uint64_t val;
> >  
> > -if ( symndx > elf->nsym )
> > +if ( symndx == STN_UNDEF )
> > +val = 0;
> > +else if ( symndx > elf->nsym )
> >  {
> >  dprintk(XENLOG_ERR, LIVEPATCH "%s: Relative relocation 
> > wants 
> >>> symbol@%u which is past end!\n",
> >  elf->name, symndx);
> >  return -EINVAL;
> >  }
> > -
> > -val = r->r_addend + elf->sym[symndx].sym->st_value;
> > +else if ( !elf->sym[symndx].sym )
> > +{
> > +dprintk(XENLOG_ERR, LIVEPATCH "%s: No symbol@%u\n",
> > +elf->name, symndx);
> > +return -EINVAL;
> > +}
> > +else
> > +val = r->r_addend + elf->sym[symndx].sym->st_value;
>  I don't understand this: st_value for STN_UNDEF is going to be zero
>  (so far there's also no extension defined for the first entry, afaict),
>  so there should be no difference between hard-coding the zero and
>  reading the symbol table entry. Furthermore r_addend would still
>  need applying. And finally "val" is never being cast to a pointer, and
>  hence I miss the connection to whatever crash you've been
>  observing.
> >>> elf->sym[0].sym is the NULL pointer.
> >>>
> >>> ->st_value dereferences it.
> >> Ah, but that is then what you want to change (unless we decide
> >> to outright refuse STN_UNDEF, which still depends on why it's
> >> there in the first place).
> > That the !elf->sym[0].sym is very valid case.
> > And in that context the 'val=r->r_addend' makes sense.
> >
> > And from an EFI spec, the relocations can point to the SHN_UNDEF area (why
> > would it I have no clue) - but naturally we can't mess with that.
> >
> > But I am curious as Jan about this - and whether this is something that
> > could be constructed with a test-case?
> 
> Well - I've got a livepatch with such a relocation.  It is probably a
> livepatch build tools issue, but the question is whether Xen should ever
> accept such a livepatch or not (irrespective of whether this exact
> relocation is permitted within the ELF spec).

CC-ing Jamie

I would say no, as I can't find a good use-case for a relocation 
to point to the SHN_UNDEF symbol [0]. It feels to me as if somebody
would be mucking with a NULL pointer.

But perhaps if the addendum had a value it would make sense?

As in NULL + ?
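The decision chain under discussion can be sketched standalone; the struct names and error convention here are illustrative, not Xen's actual livepatch code. The key point from the thread is that `elf->sym[0].sym` is NULL for STN_UNDEF, so the pre-patch unconditional `->st_value` dereference is what crashed; whether the addend should still be applied for STN_UNDEF (Jan's question) is left as in the patch, which hard-codes zero:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative stand-ins for the livepatch ELF bookkeeping. */
struct elf_sym { uint64_t st_value; };
struct sym_ent { struct elf_sym *sym; };

static int rela_value(const struct sym_ent *symtab, unsigned int nsym,
                      unsigned int symndx, int64_t r_addend, uint64_t *val)
{
    if (symndx == 0) {               /* STN_UNDEF: patch hard-codes zero */
        *val = 0;
        return 0;
    }
    if (symndx > nsym)
        return -1;                   /* symbol index past end of table */
    if (!symtab[symndx].sym)
        return -1;                   /* entry present but symbol is NULL */
    *val = r_addend + symtab[symndx].sym->st_value;
    return 0;
}
```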




Re: [Xen-devel] [PATCH v3 1/4] doc, xen: document hypervisor sysfs nodes for xen

2017-06-14 Thread Stefano Stabellini
On Wed, 14 Jun 2017, Boris Ostrovsky wrote:
> + Stefano for ARM.
> 
> On 06/12/2017 10:21 AM, Juergen Gross wrote:
> > Today only a few sysfs nodes under /sys/hypervisor/ are documented
> > for Xen in Documentation/ABI/testing/sysfs-hypervisor-pmu.
> >
> > Add the remaining Xen sysfs nodes under /sys/hypervisor/ in a new
> > file Documentation/ABI/stable/sysfs-hypervisor-xen and add the Xen
> > specific sysfs docs to the MAINTAINERS file.
> >
> > Signed-off-by: Juergen Gross 
> > ---
> > V3:
> >   - added hint for hidden values where appropriate (Andrew Cooper)
> >
> > V2:
> >   - rename file to Documentation/ABI/stable/sysfs-hypervisor-xen in
> > order to reflect Xen dependency
> >   - leave pmu entries in old file under testing (Boris Ostrovsky)
> > ---
> >  Documentation/ABI/stable/sysfs-hypervisor-xen | 119 
> > ++
> >  MAINTAINERS   |   2 +
> >  2 files changed, 121 insertions(+)
> >  create mode 100644 Documentation/ABI/stable/sysfs-hypervisor-xen
> >
> > diff --git a/Documentation/ABI/stable/sysfs-hypervisor-xen 
> > b/Documentation/ABI/stable/sysfs-hypervisor-xen
> > new file mode 100644
> > index ..e413154128b8
> > --- /dev/null
> > +++ b/Documentation/ABI/stable/sysfs-hypervisor-xen
> > @@ -0,0 +1,119 @@
> > +What:  /sys/hypervisor/compilation/compile_date
> > +Date:  March 2009
> > +KernelVersion: 2.6.30
> > +Contact:   xen-de...@lists.xenproject.org
> > +Description:   If running under Xen:
> > +   Contains the build time stamp of the Xen hypervisor
> > +   Might return "" in case of special security settings
> > +   in the hypervisor.
> > +
> > +What:  /sys/hypervisor/compilation/compiled_by
> > +Date:  March 2009
> > +KernelVersion: 2.6.30
> > +Contact:   xen-de...@lists.xenproject.org
> > +Description:   If running under Xen:
> > +   Contains information who built the Xen hypervisor
> > +   Might return "" in case of special security settings
> > +   in the hypervisor.
> > +
> > +What:  /sys/hypervisor/compilation/compiler
> > +Date:  March 2009
> > +KernelVersion: 2.6.30
> > +Contact:   xen-de...@lists.xenproject.org
> > +Description:   If running under Xen:
> > +   Compiler which was used to build the Xen hypervisor
> > +   Might return "" in case of special security settings
> > +   in the hypervisor.
> > +
> > +What:  /sys/hypervisor/properties/capabilities
> > +Date:  March 2009
> > +KernelVersion: 2.6.30
> > +Contact:   xen-de...@lists.xenproject.org
> > +Description:   If running under Xen:
> > +   Space separated list of supported guest system types. Each type
> > +   is in the format: <class>-<major>.<minor>-<arch>
> > +   With:
> > +   <class>: "xen" -- x86: paravirtualized, arm: standard
> > +"hvm" -- x86 only: full virtualized
> 
> s/full/fully/
> 
> Other than that
> 
> Reviewed-by: Boris Ostrovsky 

Reviewed-by: Stefano Stabellini 


> > +   <major>: major guest interface version
> > +   <minor>: minor guest interface version
> > +   <arch>:  architecture, e.g.:
> > +"x86_32": 32 bit x86 guest without PAE
> > +"x86_32p": 32 bit x86 guest with PAE
> > +"x86_64": 64 bit x86 guest
> > +"armv7l": 32 bit arm guest
> > +"aarch64": 64 bit arm guest
> > +
> > +What:  /sys/hypervisor/properties/changeset
> > +Date:  March 2009
> > +KernelVersion: 2.6.30
> > +Contact:   xen-de...@lists.xenproject.org
> > +Description:   If running under Xen:
> > +   Changeset of the hypervisor (git commit)
> > +   Might return "" in case of special security settings
> > +   in the hypervisor.
> > +
> > +What:  /sys/hypervisor/properties/features
> > +Date:  March 2009
> > +KernelVersion: 2.6.30
> > +Contact:   xen-de...@lists.xenproject.org
> > +Description:   If running under Xen:
> > +   Features the Xen hypervisor supports for the guest as defined
> > +   in include/xen/interface/features.h printed as a hex value.
> > +
> > +What:  /sys/hypervisor/properties/pagesize
> > +Date:  March 2009
> > +KernelVersion: 2.6.30
> > +Contact:   xen-de...@lists.xenproject.org
> > +Description:   If running under Xen:
> > +   Default page size of the hypervisor printed as a hex value.
> > +   Might return "0" in case of special security settings
> > +   in the hypervisor.
> > +
> > +What:  /sys/hypervisor/properties/virtual_start
> > +Date:  March 2009
> > +KernelVersion: 2.6.30
> > +Contact:   

Re: [Xen-devel] [PATCH v4 2/4] xen: add sysfs node for guest type

2017-06-14 Thread Boris Ostrovsky

> Hmm, okay. Are you fine with the attached patch?


Reviewed-by: Boris Ostrovsky 




Re: [Xen-devel] [PATCH v4 2/4] xen: add sysfs node for guest type

2017-06-14 Thread Juergen Gross
On 14/06/17 19:43, Boris Ostrovsky wrote:
> 
>> --- a/Documentation/ABI/testing/sysfs-hypervisor-pmu
>> +++ b/Documentation/ABI/testing/sysfs-hypervisor-xen
>> @@ -1,8 +1,19 @@
>> +What:   /sys/hypervisor/guest_type
>> +Date:   May 2017
>> +KernelVersion:  4.13
>> +Contact:xen-de...@lists.xenproject.org
>> +Description:If running under Xen:
>> +Type of guest:
>> +"Xen": standard guest type on arm
>> +"HVM": fully virtualized guest (x86)
>> +"PV": paravirtualized guest (x86)
>> +"PVH": fully virtualized guest without legacy emulation (x86)
>> +
>>  
> 
> 
> 
>>  
>> +static ssize_t guest_type_show(struct hyp_sysfs_attr *attr, char *buffer)
>> +{
>> +const char *type = "???";
>> +
>> +switch (xen_domain_type) {
>> +case XEN_NATIVE:
>> +/* ARM only. */
>> +type = "Xen";
>> +break;
>> +case XEN_PV_DOMAIN:
>> +type = "PV";
>> +break;
>> +case XEN_HVM_DOMAIN:
>> +type = xen_pvh_domain() ? "PVH" : "HVM";
>> +break;
>> +}
> 
> I think we should return -EINVAL for unknown type. Or document "???" in
> the ABI document.

Hmm, okay. Are you fine with the attached patch?


Juergen
>From b8661036e7465eab99f988cec3fe37e35536eb40 Mon Sep 17 00:00:00 2001
From: Juergen Gross 
Date: Wed, 14 Jun 2017 17:12:45 +0200
Subject: [PATCH v5] xen: add sysfs node for guest type

Currently there is no reliable user interface inside a Xen guest to
determine its type (e.g. HVM, PV or PVH). Instead of letting user mode
try to determine this by various rather hacky mechanisms (parsing of
boot messages before they are gone, trying to make use of known subtle
differences in behavior of some instructions), add a sysfs node
/sys/hypervisor/guest_type to explicitly deliver this information as
it is known to the kernel.

Signed-off-by: Juergen Gross 
---
V4:
  - use xen_domain_type instead of introducing xen_guest_type
(Boris Ostrovsky)
V2:
  - remove PVHVM guest type (Andrew Cooper)
  - move description to Documentation/ABI/testing/sysfs-hypervisor-xen
(Boris Ostrovsky)
  - make xen_guest_type const char * (Jan Beulich)
  - modify standard ARM guest type to "Xen"
---
 .../{sysfs-hypervisor-pmu => sysfs-hypervisor-xen} | 15 --
 MAINTAINERS|  2 +-
 drivers/xen/sys-hypervisor.c   | 34 ++
 3 files changed, 48 insertions(+), 3 deletions(-)
 rename Documentation/ABI/testing/{sysfs-hypervisor-pmu => sysfs-hypervisor-xen} (67%)

diff --git a/Documentation/ABI/testing/sysfs-hypervisor-pmu b/Documentation/ABI/testing/sysfs-hypervisor-xen
similarity index 67%
rename from Documentation/ABI/testing/sysfs-hypervisor-pmu
rename to Documentation/ABI/testing/sysfs-hypervisor-xen
index 224faa105e18..c0edb3fdd6eb 100644
--- a/Documentation/ABI/testing/sysfs-hypervisor-pmu
+++ b/Documentation/ABI/testing/sysfs-hypervisor-xen
@@ -1,8 +1,19 @@
+What:		/sys/hypervisor/guest_type
+Date:		May 2017
+KernelVersion:	4.13
+Contact:	xen-de...@lists.xenproject.org
+Description:	If running under Xen:
+		Type of guest:
+		"Xen": standard guest type on arm
+		"HVM": fully virtualized guest (x86)
+		"PV": paravirtualized guest (x86)
+		"PVH": fully virtualized guest without legacy emulation (x86)
+
 What:		/sys/hypervisor/pmu/pmu_mode
 Date:		August 2015
 KernelVersion:	4.3
 Contact:	Boris Ostrovsky 
-Description:
+Description:	If running under Xen:
 		Describes mode that Xen's performance-monitoring unit (PMU)
 		uses. Accepted values are
 			"off"  -- PMU is disabled
@@ -17,7 +28,7 @@ What:   /sys/hypervisor/pmu/pmu_features
 Date:   August 2015
 KernelVersion:  4.3
 Contact:Boris Ostrovsky 
-Description:
+Description:	If running under Xen:
 		Describes Xen PMU features (as an integer). A set bit indicates
 		that the corresponding feature is enabled. See
 		include/xen/interface/xenpmu.h for available features
diff --git a/MAINTAINERS b/MAINTAINERS
index 68c31aebb79c..5630439429e6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13983,7 +13983,7 @@ F:	arch/x86/include/asm/xen/
 F:	include/xen/
 F:	include/uapi/xen/
 F:	Documentation/ABI/stable/sysfs-hypervisor-xen
-F:	Documentation/ABI/testing/sysfs-hypervisor-pmu
+F:	Documentation/ABI/testing/sysfs-hypervisor-xen
 
 XEN HYPERVISOR ARM
 M:	Stefano Stabellini 
diff --git a/drivers/xen/sys-hypervisor.c b/drivers/xen/sys-hypervisor.c
index 84106f9c456c..2f78f84a31e9 100644
--- a/drivers/xen/sys-hypervisor.c
+++ b/drivers/xen/sys-hypervisor.c
@@ -50,6 +50,35 @@ static int __init xen_sysfs_type_init(void)
 	return sysfs_create_file(hypervisor_kobj, &type_attr.attr);
 }
 
+static ssize_t guest_type_show(struct hyp_sysfs_attr *attr, char *buffer)
+{
+	const char *type;
+
+	switch (xen_domain_type) {
+	
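The mapping implemented by guest_type_show() in the patch above reduces to a small pure function. This standalone sketch mirrors it; the enum is an illustrative stand-in for the kernel's xen_domain_type, and the `pvh` flag stands in for xen_pvh_domain():

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-in for the kernel's xen_domain_type values. */
enum xen_domain_type { XEN_NATIVE, XEN_PV_DOMAIN, XEN_HVM_DOMAIN };

static const char *guest_type(enum xen_domain_type t, int pvh)
{
    switch (t) {
    case XEN_NATIVE:     return "Xen";              /* ARM only */
    case XEN_PV_DOMAIN:  return "PV";
    case XEN_HVM_DOMAIN: return pvh ? "PVH" : "HVM";
    }
    return "???";                                   /* unknown type */
}
```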

Re: [Xen-devel] [RFC PATCH] docs: add README.atomic

2017-06-14 Thread Jan Beulich
>>> Stefano Stabellini  06/14/17 8:45 PM >>>
>On Wed, 14 Jun 2017, Jan Beulich wrote:
>> > +What ACCESS_ONCE does *not* guarantee though is this access is done in a
>> > +single instruction, so complex or non-native or unaligned data types are
>> > +not guaranteed to be atomic. If for instance counter would be a 64-bit 
>> > value
>> > +on a 32-bit system, the compiler would probably generate two load 
>> > instructions,
>> > +which could end up in reading a wrong value if some other CPU changes the 
>> > other
>> > +half of the variable in between those two reads.
>> > +However accessing _aligned and native_ data types is guaranteed to be 
>> > atomic
>> > +in the architectures supported by Xen, so ACCESS_ONCE is safe to use when
>> > +these conditions are met.
>> 
>> As mentioned before, such a guarantee does not exist. Please only
>> state what is really the case, i.e. we _expect_ compilers to behave
>> this way.
>
>Regarding compilers support: do we state clearly in any docs or website
>what are the compilers we actually support? I think this would be the
>right opportunity to do it.

At the very least we state somewhere what gcc versions we support. However,
I can't see the relation of such a statement to the discussion here.

Jan





Re: [Xen-devel] [PATCH 2/2] xen/livepatch: Don't crash on encountering STN_UNDEF relocations

2017-06-14 Thread Jan Beulich
>>> Andrew Cooper  06/14/17 8:34 PM >>>
>Well - I've got a livepatch with such a relocation.  It is probably a
>livepatch build tools issue, but the question is whether Xen should ever
>accept such a livepatch or not (irrespective of whether this exact
>relocation is permitted within the ELF spec).

Since the spec explicitly mentions that case, I think we'd better support it.
But it wouldn't be the end of the world if we didn't, as presumably there
aren't that many use cases for it.

Jan




Re: [Xen-devel] [RFC PATCH] docs: add README.atomic

2017-06-14 Thread Stefano Stabellini
On Wed, 14 Jun 2017, Jan Beulich wrote:
> > +What ACCESS_ONCE does *not* guarantee though is this access is done in a
> > +single instruction, so complex or non-native or unaligned data types are
> > +not guaranteed to be atomic. If for instance counter would be a 64-bit 
> > value
> > +on a 32-bit system, the compiler would probably generate two load 
> > instructions,
> > +which could end up in reading a wrong value if some other CPU changes the 
> > other
> > +half of the variable in between those two reads.
> > +However accessing _aligned and native_ data types is guaranteed to be 
> > atomic
> > +in the architectures supported by Xen, so ACCESS_ONCE is safe to use when
> > +these conditions are met.
> 
> As mentioned before, such a guarantee does not exist. Please only
> state what is really the case, i.e. we _expect_ compilers to behave
> this way.

Regarding compilers support: do we state clearly in any docs or website
what are the compilers we actually support? I think this would be the
right opportunity to do it.
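For reference, ACCESS_ONCE is (in both Linux and Xen) a volatile cast: it forces the compiler to emit exactly one access at the source level, neither caching nor refetching the value, but says nothing about instruction count. As the thread notes, single-instruction behaviour for aligned, native-width types is expected of the compilers in use, not guaranteed. A minimal sketch:

```c
#include <assert.h>
#include <stdint.h>

/* The Linux/Xen definition: a volatile-qualified access to x. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

static uint32_t counter;   /* aligned, native width: expected to be
                            * read/written in a single instruction */

static uint32_t read_counter(void)
{
    return ACCESS_ONCE(counter);
}
```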



Re: [Xen-devel] [PATCH v12 00/34] arm64: Dom0 ITS emulation

2017-06-14 Thread Stefano Stabellini
On Wed, 14 Jun 2017, Andre Przywara wrote:
> Hi,
> 
> hopefully the final version, with only nits from v11 addressed.
> The same restriction as for the previous versions  still apply: the locking
> is considered somewhat insufficient and will be fixed by an upcoming rework.
> 
> Patches 01/34 and 02/34 should be applied for 4.9 still, since they fix
> existing bugs.
> 
> The minor comments on v11 have been addressed and the respective tags
> have been added. For a changelog see below (which omits typo fixes).
> 
> I dropped Julien's Acked-by from patch 25/34 (MAPD), since I changed
> it slightly after Stefano's comment.

I committed the series, thanks and congratulations!


> --
> This series adds support for emulation of an ARM GICv3 ITS interrupt
> controller. For hardware which relies on the ITS to provide interrupts for
> its peripherals this code is needed to get a machine booted into Dom0 at
> all. ITS emulation for DomUs is only really useful with PCI passthrough,
> which is not yet available for ARM. It is expected that this feature
> will be co-developed with the ITS DomU code. However this code drop here
> considered DomU emulation already, to keep later architectural changes
> to a minimum.
> 
> This is a technical preview version to allow early testing of the feature.
> Things not (properly) addressed in this release:
> - There is only support for Dom0 at the moment. DomU support is only really
> useful with PCI passthrough, which is not there yet for ARM.
> - The MOVALL command is not emulated. In our case there is really nothing
> to do here. We might need to revisit this in the future for DomU support.
> - The INVALL command might need some rework to be more efficient. Currently
> we iterate over all mapped LPIs, which might take a bit longer.
> - Indirect tables are not supported. This affects both the host and the
> virtual side.
> - The ITS tables inside (Dom0) guest memory cannot easily be protected
> at the moment (without restricting access to Xen as well). So for now
> we trust Dom0 not to touch this memory (which the spec forbids as well).
> - With malicious guests (DomUs) there is a possibility of an interrupt
> storm triggered by a device. We would need to investigate what that means
> for Xen and if there is a nice way to prevent this. Disabling the LPI on
> the host side would require command queuing, which has its downsides to
> be issued during runtime.
> - Dom0 should make sure that the ITS resources (number of LPIs, devices,
> events) later handed to a DomU are really limited, as a large number of
> them could mean much time spent in Xen to initialize, free or handle those.
> It is expected that the toolstack sets up a tailored ITS with just enough
> resources to accommodate the needs of the actual passthrough-ed device(s).
> - The command queue locking is currently suboptimal and should be made more
> fine-grained in the future, if possible.
> - Provide support for running with an IOMMU, to map the doorbell page
> to all devices.
> 
> 
> Some generic design principles:
> 
> * The current GIC code statically allocates structures for each supported
> IRQ (both for the host and the guest), which due to the potentially
> millions of LPI interrupts is not feasible to copy for the ITS.
> So we refrain from introducing the ITS as a first class Xen interrupt
> controller, also we don't hold struct irq_desc's or struct pending_irq's
> for each possible LPI.
> Fortunately LPIs are only interesting to guests, so we get away with
> storing only the virtual IRQ number and the guest VCPU for each allocated
> host LPI, which can be stashed into one uint64_t. This data is stored in
> a two-level table, which is both memory efficient and quick to access.
> We hook into the existing IRQ handling and VGIC code to avoid accessing
> the normal structures, providing alternative methods for getting the
> needed information (priority, is enabled?) for LPIs.
> Whenever a guest maps a device, we allocate the maximum required number
> of struct pending_irq's, so that any triggering LPI can find its data
> structure. Upon the guest actually mapping the LPI, this pointer to the
> corresponding pending_irq gets entered into a radix tree, so that it can
> be quickly looked up.
> 
> * On the guest side we (later will) have to deal with malicious guests
> trying to hog Xen with mapping requests for a lot of LPIs, for instance.
> As the ITS actually uses system memory for storing status information,
> we use this memory (which the guest has to provide) to naturally limit
> a guest. Whenever we need information from any of the ITS tables, we
> temporarily map them (which is cheap on arm64) and copy the required data.
> 
> * An obvious approach to handling some guest ITS commands would be to
> propagate them to the host, for instance to map devices and LPIs and
> to enable or disable LPIs.
> However this (later with DomU support) will create an attack vector, as
> a malicious 

[Xen-devel] [seabios test] 110421: tolerable FAIL - PUSHED

2017-06-14 Thread osstest service owner
flight 110421 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110421/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail like 110383
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail like 110383
 test-amd64-amd64-xl-qemuu-ws16-amd64  9 windows-install fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2 fail never pass
 test-amd64-i386-xl-qemuu-win10-i386  9 windows-install fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64  9 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386  9 windows-install fail never pass

version targeted for testing:
 seabios  7759d3a5be049eb8d0b4f7c6b1f1a0ba5e871cf3
baseline version:
 seabios  58953eb793b7f43f9cbb72bd7802922746235266

Last test of basis   110383  2017-06-12 19:20:34 Z    1 days
Testing same since   110398  2017-06-13 05:55:29 Z    1 days    2 attempts


People who touched revisions under test:
  Kevin O'Connor 
  Patrick Rudolph 
  Youness Alaoui 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-qemuu-nested-amdfail
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-ws16-amd64 fail
 test-amd64-i386-xl-qemuu-ws16-amd64  fail
 test-amd64-amd64-xl-qemuu-win10-i386 fail
 test-amd64-i386-xl-qemuu-win10-i386  fail
 test-amd64-amd64-qemuu-nested-intel  pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=seabios
+ revision=7759d3a5be049eb8d0b4f7c6b1f1a0ba5e871cf3
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
++++ getconfig Repos
++++ perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push seabios 
7759d3a5be049eb8d0b4f7c6b1f1a0ba5e871cf3
+ branch=seabios
+ revision=7759d3a5be049eb8d0b4f7c6b1f1a0ba5e871cf3
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
++++ getconfig Repos
++++ perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ 

Re: [Xen-devel] [PATCH 2/2] xen/livepatch: Don't crash on encountering STN_UNDEF relocations

2017-06-14 Thread Andrew Cooper
On 14/06/17 15:18, Konrad Rzeszutek Wilk wrote:
> On Wed, Jun 14, 2017 at 04:24:00AM -0600, Jan Beulich wrote:
> On 14.06.17 at 12:13,  wrote:
>>> On 14/06/17 11:11, Jan Beulich wrote:
>>> On 13.06.17 at 22:51,  wrote:
> --- a/xen/arch/x86/livepatch.c
> +++ b/xen/arch/x86/livepatch.c
> @@ -170,14 +170,22 @@ int arch_livepatch_perform_rela(struct 
> livepatch_elf 
>>> *elf,
>  uint8_t *dest = base->load_addr + r->r_offset;
>  uint64_t val;
>  
> -if ( symndx > elf->nsym )
> +if ( symndx == STN_UNDEF )
> +val = 0;
> +else if ( symndx > elf->nsym )
>  {
>  dprintk(XENLOG_ERR, LIVEPATCH "%s: Relative relocation wants 
>>> symbol@%u which is past end!\n",
>  elf->name, symndx);
>  return -EINVAL;
>  }
> -
> -val = r->r_addend + elf->sym[symndx].sym->st_value;
> +else if ( !elf->sym[symndx].sym )
> +{
> +dprintk(XENLOG_ERR, LIVEPATCH "%s: No symbol@%u\n",
> +elf->name, symndx);
> +return -EINVAL;
> +}
> +else
> +val = r->r_addend + elf->sym[symndx].sym->st_value;
 I don't understand this: st_value for STN_UNDEF is going to be zero
 (so far there's also no extension defined for the first entry, afaict),
 so there should be no difference between hard-coding the zero and
 reading the symbol table entry. Furthermore r_addend would still
 need applying. And finally "val" is never being cast to a pointer, and
 hence I miss the connection to whatever crash you've been
 observing.
>>> elf->sym[0].sym is the NULL pointer.
>>>
>>> ->st_value dereferences it.
>> Ah, but that is then what you want to change (unless we decide
>> to outright refuse STN_UNDEF, which still depends on why it's
>> there in the first place).
> The case where elf->sym[0].sym is NULL is a very valid one.
> And in that context 'val = r->r_addend' makes sense.
>
> And from the ELF spec, relocations can point to the SHN_UNDEF area (why
> they would, I have no clue) - but naturally we can't mess with that.
>
> But I am as curious as Jan about this - and whether this is something
> that could be constructed into a test case?

Well - I've got a livepatch with such a relocation.  It is probably a
livepatch build tools issue, but the question is whether Xen should ever
accept such a livepatch or not (irrespective of whether this exact
relocation is permitted within the ELF spec).
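For reference, the check ordering under discussion can be sketched as below,
with the addend still applied in the STN_UNDEF case as Konrad's follow-up
suggests; the structures are simplified stand-ins for the livepatch ELF
bookkeeping, not the Xen code itself:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define STN_UNDEF 0

/* Simplified stand-ins: sym[] has nsym + 1 entries, and sym[0] (the
 * STN_UNDEF slot) holds a NULL pointer, which is what made the original
 * code crash when it dereferenced ->st_value unconditionally. */
struct livepatch_sym { uint64_t st_value; };
struct livepatch_elf {
    unsigned int nsym;
    struct livepatch_sym **sym;
};

/* Sketch of the combined checks: symndx 0 resolves to st_value 0, so only
 * the addend remains; out-of-range indices and missing symbol entries are
 * rejected.  Returns 0 on success, -1 on error. */
static int resolve_rela(const struct livepatch_elf *elf, unsigned int symndx,
                        int64_t r_addend, uint64_t *val)
{
    if ( symndx == STN_UNDEF )
        *val = r_addend;            /* st_value for STN_UNDEF is 0 */
    else if ( symndx > elf->nsym )
        return -1;                  /* symbol index past end of table */
    else if ( !elf->sym[symndx] )
        return -1;                  /* no symbol entry was filled in */
    else
        *val = r_addend + elf->sym[symndx]->st_value;

    return 0;
}
```

Whether Xen should accept such a livepatch at all, rather than resolve it,
is exactly the open question in this thread.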

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v11 01/34] ARM: vGIC: avoid rank lock when reading priority

2017-06-14 Thread Julien Grall

Hi Stefano,

On 06/14/2017 07:15 PM, Stefano Stabellini wrote:

On Wed, 14 Jun 2017, Julien Grall wrote:

In any case, all those macros do not prevent re-ordering at the processor
level, nor do they guarantee read/write atomicity if the variable is
misaligned.


My understanding is that the unwritten assumption in Xen is that
variables are always aligned. You are right about processor level
reordering; in fact, when needed, we have to add barriers

I have read Andre's well written README.atomic, and he ends the
document stating the following:



This makes read and write accesses to ints and longs (and their respective
unsigned counterparts) naturally atomic.
However it would be beneficial to use atomic primitives anyway to annotate
the code as being concurrent and to prevent silent breakage when changing
the code.


with which I completely agree


Which means you are happy to use either ACCESS_ONCE or
read_atomic/write_atomic, as they are in the end exactly the same on the
compilers we support.


I do understand that both of them will produce the same output,
therefore, both work for this use-case.

I don't understand why anybody would prefer ACCESS_ONCE over
read/write_atomic, given that with ACCESS_ONCE as a contributor/reviewer
you additionally need to remember to check whether the argument is a
native data type. Basically, I see ACCESS_ONCE as "more work" for me.
Why do you think that ACCESS_ONCE is "better"?


Have you looked at the implementation of ACCESS_ONCE? You don't have to check
the data type when using ACCESS_ONCE. There are safety checks to avoid
misusing it.


It checks for a scalar type, not for native data type. They are not
always the same thing but I think they are on arm.
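As a sketch of what is being discussed: the core of ACCESS_ONCE is just a
volatile cast (the real Xen/Linux macro additionally wraps this with a
compile-time check that rejects non-scalar types). This is a simplified
illustration, not the actual macro:

```c
#include <assert.h>

/* Minimal sketch of the ACCESS_ONCE idea: the volatile cast forces the
 * compiler to emit exactly one load or store, and prevents it from
 * caching, tearing or re-reading the value across the access. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

/* A value another CPU may update concurrently, e.g. an IRQ priority. */
static unsigned int shared_priority = 0xa0;

static unsigned int read_priority(void)
{
    /* Exactly one read; no compiler-introduced double read. */
    return ACCESS_ONCE(shared_priority);
}
```

Per the discussion, on the compilers Xen supports this generates the same
code as the assembly-based read_atomic/write_atomic for aligned scalars.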

That's the goal of specification (such as AAPCS).


What I want to avoid is this split mind we currently have about atomic.
They are either all safe or not.


What split mind? Do you mean ACCESS_ONCE vs. read/write_atomic? So far,
there are no instances of ACCESS_ONCE in xen/arch/arm.


No. I mean there are places with the exact same construct that we
sometimes consider safe and sometimes not. We should have a clear common
answer rather than arguing differently every time.



As Andre suggested, we should probably import a lighter version of
WRITE_ONCE/READ_ONCE. They are, for a start, easier to understand than
read_atomic/write_atomic, which could be confused with
atomic_read/atomic_write (IIRC Jan agreed here).

The main goal is to avoid assembly code when it is deemed not necessary.


All right, this is one reason. Sorry if I seem unnecessarily contrarian,
but this is the first time I read a reason for this recent push for using
ACCESS_ONCE. You wrote that you preferred the read/write_atomic
functions yourself on Monday.


I preferred {read,write}_atomic because the prototype is nicer to use
than ACCESS_ONCE. Ideally we should introduce WRITE_ONCE/READ_ONCE,
improving the naming while also keeping a nice prototype.


--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 1/2] xen/livepatch: Clean up arch relocation handling

2017-06-14 Thread Andrew Cooper
On 14/06/17 15:02, Jan Beulich wrote:
 On 14.06.17 at 15:44,  wrote:
>> On Tue, Jun 13, 2017 at 09:51:35PM +0100, Andrew Cooper wrote:
>>> --- a/xen/arch/arm/arm32/livepatch.c
>>> +++ b/xen/arch/arm/arm32/livepatch.c
>>> @@ -224,21 +224,21 @@ int arch_livepatch_perform(struct livepatch_elf *elf,
>>> const struct livepatch_elf_sec *rela,
>>> bool use_rela)
>>>  {
>>> -const Elf_RelA *r_a;
>>> -const Elf_Rel *r;
>>> -unsigned int symndx, i;
>>> -uint32_t val;
>>> -void *dest;
>>> +unsigned int i;
>>>  int rc = 0;
>>>  
>>>  for ( i = 0; i < (rela->sec->sh_size / rela->sec->sh_entsize); i++ )
>>>  {
>>> +unsigned int symndx;
>>> +uint32_t val;
>>> +void *dest;
>>>  unsigned char type;
>>> -s32 addend = 0;
>>> +s32 addend;
>>>  
>>>  if ( use_rela )
>>>  {
>>> -r_a = rela->data + i * rela->sec->sh_entsize;
>>> +const Elf_RelA *r_a = rela->data + i * rela->sec->sh_entsize;
>>> +
>>>  symndx = ELF32_R_SYM(r_a->r_info);
>>>  type = ELF32_R_TYPE(r_a->r_info);
>>>  dest = base->load_addr + r_a->r_offset; /* P */
>>> @@ -246,10 +246,12 @@ int arch_livepatch_perform(struct livepatch_elf *elf,
>>>  }
>>>  else
>>>  {
>>> -r = rela->data + i * rela->sec->sh_entsize;
>>> +const Elf_Rel *r = rela->data + i * rela->sec->sh_entsize;
>>> +
>>>  symndx = ELF32_R_SYM(r->r_info);
>>>  type = ELF32_R_TYPE(r->r_info);
>>>  dest = base->load_addr + r->r_offset; /* P */
>>> +addend = get_addend(type, dest);
>>>  }
>>>  
>>>  if ( symndx > elf->nsym )
>>> @@ -259,13 +261,11 @@ int arch_livepatch_perform(struct livepatch_elf *elf,
>>>  return -EINVAL;
>>>  }
>>>  
>>> -if ( !use_rela )
>>> -addend = get_addend(type, dest);
>> This was added right after the symndx > elf->nsym check as a way to
>> make sure we won't dereference dest (because the symbol may be outside
>> the bounds).
> But symndx isn't being used here.

Indeed.  r->r_offset (and therefore dest) has no direct bearing on symndx.

Having said that, there is no sanity check that r->r_offset is within
base->load_addr + sec->sh_size in arm32, whereas both arm64 and x86
appear to do this check.
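The missing arm32 sanity check Andrew describes might look like the sketch
below; the structure and field names are simplified stand-ins for the
livepatch section bookkeeping, not the actual code:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the section being patched. */
struct section {
    uint64_t sh_size;   /* size of the loaded section */
};

/* The relocation target dest = load_addr + r_offset must leave room for
 * reloc_size bytes entirely within the section; written to avoid any
 * overflow in the comparison.  Returns 0 if the offset is sane. */
static int check_r_offset(const struct section *base, uint64_t r_offset,
                          unsigned int reloc_size)
{
    if ( r_offset >= base->sh_size ||
         reloc_size > base->sh_size - r_offset )
        return -1;

    return 0;
}
```

Both arm64 and x86 perform an equivalent bounds check before writing the
relocated value.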

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v12 25/34] ARM: vITS: handle MAPD command

2017-06-14 Thread Stefano Stabellini
On Wed, 14 Jun 2017, Andre Przywara wrote:
> The MAPD command maps a device by associating a memory region for
> storing ITEs with a certain device ID. Since it features a valid bit,
> MAPD also covers the "unmap" functionality, which we also cover here.
> We store the given guest physical address in the device table, and, if
> this command comes from Dom0, tell the host ITS driver about this new
> mapping, so it can issue the corresponding host MAPD command and create
> the required tables. We take care of rolling back actions should one
> step fail.
> Upon unmapping a device we make sure we clean up all associated
> resources and release the memory again.
> We use our existing guest memory access function to find the right ITT
> entry and store the mapping there (in guest memory).
> 
> Signed-off-by: Andre Przywara 

Acked-by: Stefano Stabellini 


> ---
>  xen/arch/arm/gic-v3-its.c|  17 +
>  xen/arch/arm/gic-v3-lpi.c|  17 +
>  xen/arch/arm/vgic-v3-its.c   | 142 +++
>  xen/include/asm-arm/gic_v3_its.h |   5 ++
>  4 files changed, 181 insertions(+)
> 
> diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
> index 38f0840..8864e0b 100644
> --- a/xen/arch/arm/gic-v3-its.c
> +++ b/xen/arch/arm/gic-v3-its.c
> @@ -859,6 +859,23 @@ struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
>  return get_event_pending_irq(d, vdoorbell_address, vdevid, eventid, NULL);
>  }
>  
> +int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
> + uint32_t vdevid, uint32_t eventid)
> +{
> +uint32_t host_lpi = INVALID_LPI;
> +
> +if ( !get_event_pending_irq(d, vdoorbell_address, vdevid, eventid,
> +&host_lpi) )
> +return -EINVAL;
> +
> +if ( host_lpi == INVALID_LPI )
> +return -EINVAL;
> +
> +gicv3_lpi_update_host_entry(host_lpi, d->domain_id, INVALID_LPI);
> +
> +return 0;
> +}
> +
> /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
>  void gicv3_its_dt_init(const struct dt_device_node *node)
>  {
> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
> index dc936fa..c3474f5 100644
> --- a/xen/arch/arm/gic-v3-lpi.c
> +++ b/xen/arch/arm/gic-v3-lpi.c
> @@ -215,6 +215,23 @@ out:
>  irq_exit();
>  }
>  
> +void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
> + uint32_t virt_lpi)
> +{
> +union host_lpi *hlpip, hlpi;
> +
> +ASSERT(host_lpi >= LPI_OFFSET);
> +
> +host_lpi -= LPI_OFFSET;
> +
> +hlpip = &lpi_data.host_lpis[host_lpi / HOST_LPIS_PER_PAGE][host_lpi % HOST_LPIS_PER_PAGE];
> +
> +hlpi.virt_lpi = virt_lpi;
> +hlpi.dom_id = domain_id;
> +
> +write_u64_atomic(&hlpip->data, hlpi.data);
> +}
> +
>  static int gicv3_lpi_allocate_pendtable(uint64_t *reg)
>  {
>  uint64_t val;
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index 4552bc9..d236bbe 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -159,6 +159,21 @@ static struct vcpu *get_vcpu_from_collection(struct 
> virt_its *its,
>  return its->d->vcpu[vcpu_id];
>  }
>  
> +/* Set the address of an ITT for a given device ID. */
> +static int its_set_itt_address(struct virt_its *its, uint32_t devid,
> +   paddr_t itt_address, uint32_t nr_bits)
> +{
> +paddr_t addr = get_baser_phys_addr(its->baser_dev);
> +dev_table_entry_t itt_entry = DEV_TABLE_ENTRY(itt_address, nr_bits);
> +
> +if ( devid >= its->max_devices )
> +return -ENOENT;
> +
> +return vgic_access_guest_memory(its->d,
> +addr + devid * sizeof(dev_table_entry_t),
> +&itt_entry, sizeof(itt_entry), true);
> +}
> +
>  /*
>   * Lookup the address of the Interrupt Translation Table associated with
>   * that device ID.
> @@ -375,6 +390,130 @@ out_unlock:
>  return ret;
>  }
>  
> +/* Must be called with the ITS lock held. */
> +static int its_discard_event(struct virt_its *its,
> + uint32_t vdevid, uint32_t vevid)
> +{
> +struct pending_irq *p;
> +unsigned long flags;
> +struct vcpu *vcpu;
> +uint32_t vlpi;
> +
> +ASSERT(spin_is_locked(&its->its_lock));
> +
> +if ( !read_itte(its, vdevid, vevid, &vcpu, &vlpi) )
> +return -ENOENT;
> +
> +if ( vlpi == INVALID_LPI )
> +return -ENOENT;
> +
> +/*
> + * TODO: This relies on the VCPU being correct in the ITS tables.
> + * This can be fixed by either using a per-IRQ lock or by using
> + * the VCPU ID from the pending_irq instead.
> + */
> +spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
> +
> +/* Remove the pending_irq from the tree. */
> +write_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
> +p = 

Re: [Xen-devel] [PATCH v11 01/34] ARM: vGIC: avoid rank lock when reading priority

2017-06-14 Thread Stefano Stabellini
On Wed, 14 Jun 2017, Julien Grall wrote:
> > > > > In any case, all those macros do not prevent re-ordering at the
> > > > > processor level, nor do they guarantee read/write atomicity if
> > > > > the variable is misaligned.
> > > > 
> > > > My understanding is that the unwritten assumption in Xen is that
> > > > variables are always aligned. You are right about processor level
> > > > reordering, in fact when needed we have to have barriers
> > > > 
> > > > I have read Andre's well written README.atomic, and he ends the
> > > > document stating the following:
> > > > 
> > > > 
> > > > > This makes read and write accesses to ints and longs (and their
> > > > > respective
> > > > > unsigned counterparts) naturally atomic.
> > > > > However it would be beneficial to use atomic primitives anyway to
> > > > > annotate
> > > > > the code as being concurrent and to prevent silent breakage when
> > > > > changing
> > > > > the code
> > > > 
> > > > with which I completely agree
> > > 
> > > Which means you are happy to use either ACCESS_ONCE or
> > > read_atomic/write_atomic as they are in the end exactly the same on
> > > the compilers we support.
> > 
> > I do understand that both of them will produce the same output,
> > therefore, both work for this use-case.
> > 
> > I don't understand why anybody would prefer ACCESS_ONCE over
> > read/write_atomic, given that with ACCESS_ONCE as a contributor/reviewer
> > you additionally need to remember to check whether the argument is a
> > native data type. Basically, I see ACCESS_ONCE as "more work" for me.
> > Why do you think that ACCESS_ONCE is "better"?
> 
> Have you looked at the implementation of ACCESS_ONCE? You don't have to check
> the data type when using ACCESS_ONCE. There are safety checks to avoid misusing
> it.

It checks for a scalar type, not for native data type. They are not
always the same thing but I think they are on arm.


> What I want to avoid is this split mind we currently have about atomic.
> They are either all safe or not.

What split mind? Do you mean ACCESS_ONCE vs. read/write_atomic? So far,
there are no instances of ACCESS_ONCE in xen/arch/arm.


> As Andre suggested, we should probably import a lighter version of
> WRITE_ONCE/READ_ONCE. They are first easier to understand than
> read_atomic/write_atomic that could be confused with atomic_read/atomic_write
> (IIRC Jan agreed here).
> 
> The main goal is to avoid assembly code when it is deemed not necessary.

All right, this is one reason. Sorry if I seem unnecessarily contrarian,
but this is the first time I read a reason for this recent push for using
ACCESS_ONCE. You wrote that you preferred the read/write_atomic
functions yourself on Monday.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v12 00/34] arm64: Dom0 ITS emulation

2017-06-14 Thread Julien Grall



On 06/14/2017 05:51 PM, Andre Przywara wrote:

Hi,


Hi Andre,


hopefully the final version, with only nits from v11 addressed.
The same restrictions as for the previous versions still apply: the locking
is considered somewhat insufficient and will be fixed by an upcoming rework.


I have acked the remaining patches.



Patches 01/34 and 02/34 should be applied for 4.9 still, since they fix
existing bugs.


I would like to see a bit more testing on #1 before considering a
backport. Let's merge it first into staging and see how it goes in
testing.


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v12 25/34] ARM: vITS: handle MAPD command

2017-06-14 Thread Julien Grall

Hi Andre,

On 06/14/2017 05:52 PM, Andre Przywara wrote:

The MAPD command maps a device by associating a memory region for
storing ITEs with a certain device ID. Since it features a valid bit,
MAPD also covers the "unmap" functionality, which we also cover here.
We store the given guest physical address in the device table, and, if
this command comes from Dom0, tell the host ITS driver about this new
mapping, so it can issue the corresponding host MAPD command and create
the required tables. We take care of rolling back actions should one
step fail.
Upon unmapping a device we make sure we clean up all associated
resources and release the memory again.
We use our existing guest memory access function to find the right ITT
entry and store the mapping there (in guest memory).

Signed-off-by: Andre Przywara 


Acked-by: Julien Grall 

Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v12 07/34] ARM: vGIC: introduce gic_remove_irq_from_queues()

2017-06-14 Thread Julien Grall

Hi Andre,

On 06/14/2017 05:51 PM, Andre Przywara wrote:

To avoid code duplication in a later patch, introduce a generic function
to remove a virtual IRQ from the VGIC.
Call that function instead of the open-coded version in vgic_migrate_irq().

Signed-off-by: Andre Przywara 


Acked-by: Julien Grall 

Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v12 06/34] ARM: vGIC: move irq_to_pending() calls under the VGIC VCPU lock

2017-06-14 Thread Julien Grall

Hi Andre,

On 06/14/2017 05:51 PM, Andre Przywara wrote:

So far irq_to_pending() is just a convenience function to lookup
statically allocated arrays. This will change with LPIs, which are
more dynamic, so the memory for their struct pending_irq might go away.
The proper answer to the issue of preventing stale pointers is
ref-counting, which requires more work and will be introduced with a
later rework.
For now move the irq_to_pending() calls that are used with LPIs under the
VGIC VCPU lock, and only use the returned pointer while holding the lock.
This prevents the memory from being freed while we use it.
For the sake of completeness we take care about all irq_to_pending()
users, even those which later will never deal with LPIs.
Document the limits of vgic_num_irqs().

Signed-off-by: Andre Przywara 


Acked-by: Julien Grall 

Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v12 05/34] ARM: vGIC: rework gic_remove_from_queues()

2017-06-14 Thread Julien Grall

Hi Andre,

On 06/14/2017 05:51 PM, Andre Przywara wrote:

The function name gic_remove_from_queues() was a bit of a misnomer,
since it just removes an IRQ from the pending queue, not both queues.
Rename the function to make this more clear, also give it a pointer to
a struct pending_irq directly and rely on the VGIC VCPU lock to be
already taken, so this can be used in more places. This results in the
lock being taken in the caller instead.
Replace the list removal in gic_clear_pending_irqs() with a call to
this function.

Signed-off-by: Andre Przywara 


Reviewed-by: Julien Grall 

Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v12 01/34] ARM: vGIC: avoid rank lock when reading priority

2017-06-14 Thread Julien Grall

Hi Andre,

On 06/14/2017 05:51 PM, Andre Przywara wrote:

When reading the priority value of a virtual interrupt, we were taking
the respective rank lock so far.
However for forwarded interrupts (Dom0 only so far) this may lead to a
deadlock with the following call chain:
- MMIO access to change the IRQ affinity, calling the ITARGETSR handler
- this handler takes the appropriate rank lock and calls vgic_store_itargetsr()
- vgic_store_itargetsr() will eventually call vgic_migrate_irq()
- if this IRQ is already in-flight, it will remove it from the old
   VCPU and inject it into the new one, by calling vgic_vcpu_inject_irq()
- vgic_vcpu_inject_irq will call vgic_get_virq_priority()
- vgic_get_virq_priority() tries to take the rank lock - again!
It seems like this code path has never been exercised before.

Fix this by avoiding taking the lock in vgic_get_virq_priority() (like we
do in vgic_get_target_vcpu()).
Actually we are just reading one byte, and priority changes while
interrupts are handled are a benign race that can happen on real hardware
too. So it is safe to just prevent the compiler from reading from the
struct more than once.

Signed-off-by: Andre Przywara 


Reviewed-by: Julien Grall 

Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v4 4/4] xen: add sysfs node for hypervisor build id

2017-06-14 Thread Boris Ostrovsky
On 06/14/2017 01:23 PM, Juergen Gross wrote:
> For support of Xen hypervisor live patching the hypervisor build id is
> needed. Add a node /sys/hypervisor/properties/buildid containing the
> information.
>
> Signed-off-by: Juergen Gross 

Reviewed-by: Boris Ostrovsky 


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v11 01/34] ARM: vGIC: avoid rank lock when reading priority

2017-06-14 Thread Julien Grall

Hi Stefano,

On 06/14/2017 06:32 PM, Stefano Stabellini wrote:

On Wed, 14 Jun 2017, Julien Grall wrote:

I don't understand your explanation. There are no PV protocols under
xen/, they are implemented in other repositories. I grepped for ACCESS
under xen/include/public, in case you referred to the PV protocol
headers, but couldn't find anything interesting.


Have a look at the pl011 emulation from Bhupinder. It will use plain '=' for
updating the PV drivers. So can you explain why it is fine there and not here?


Bhupinder's series is the first PV driver in the Xen codebase. It is
easy to forget that coding should/might have to be different compared to
Linux, which is the codebase I usually work with for PV drivers.

It is true that updating the indexes should be done atomically,
otherwise the other end might end up reading a wrong index value.



Furthermore implementation of
atomic_read/atomic_write in Linux (both ARM and x86) is based on
WRITE_ONCE/READ_ONCE, on Xen it is a simple assignment.


I don't follow why you are referring to Linux constructs in this
discussion about Xen atomic functions.


My point here is that Xen and Linux are very similar. Actually a lot of
the atomic code has been taken from Linux (have a look at our
atomic_read/atomic_write).

As the atomic code was added by you (likely from Linux), I don't
understand why you don't complain about the atomic implementation but
do about ACCESS_ONCE.


Linux and Xen are free to make different assumptions regarding
compilers.


It is not possible to claim that Xen makes different assumptions when
you import code from Linux without a deep review (indeed, most of the
code taken from Linux is used as-is).




FWIW I am not really complaining about either atomics or ACCESS_ONCE, I
just don't see the advantage of using ACCESS_ONCE over read/write_atomic
(see below).



In any case, all those macros do not prevent re-ordering at the processor
level, nor do they guarantee read/write atomicity if the variable is
misaligned.


My understanding is that the unwritten assumption in Xen is that
variables are always aligned. You are right about processor level
reordering, in fact when needed we have to have barriers

I have read Andre's well written README.atomic, and he ends the
document stating the following:



This makes read and write accesses to ints and longs (and their respective
unsigned counterparts) naturally atomic.
However it would be beneficial to use atomic primitives anyway to annotate
the code as being concurrent and to prevent silent breakage when changing
the code


with which I completely agree


Which means you are happy to use either ACCESS_ONCE or
read_atomic/write_atomic, as they are in the end exactly the same on the
compilers we support.


I do understand that both of them will produce the same output,
therefore, both work for this use-case.

I don't understand why anybody would prefer ACCESS_ONCE over
read/write_atomic, given that with ACCESS_ONCE as a contributor/reviewer
you additionally need to remember to check whether the argument is a
native data type. Basically, I see ACCESS_ONCE as "more work" for me.
Why do you think that ACCESS_ONCE is "better"?


Have you looked at the implementation of ACCESS_ONCE? You don't have to 
check the data type when using ACCESS_ONCE. There are safety checks to 
avoid misusing it.


What I want to avoid is this split mind we currently have about atomic.
They are either all safe or not.

As Andre suggested, we should probably import a lighter version of
WRITE_ONCE/READ_ONCE. They are, for a start, easier to understand than
read_atomic/write_atomic, which could be confused with
atomic_read/atomic_write (IIRC Jan agreed here).


The main goal is to avoid assembly code when it is deemed not necessary.



Regarding the "compiler with support": do we state clearly in any docs
or website what are the compilers we support? I think this would be the
right opportunity to do it.


That's a discussion to have on the README.atomics patch.

Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v4 2/4] xen: add sysfs node for guest type

2017-06-14 Thread Boris Ostrovsky

> --- a/Documentation/ABI/testing/sysfs-hypervisor-pmu
> +++ b/Documentation/ABI/testing/sysfs-hypervisor-xen
> @@ -1,8 +1,19 @@
> +What:		/sys/hypervisor/guest_type
> +Date:		May 2017
> +KernelVersion:	4.13
> +Contact:	xen-de...@lists.xenproject.org
> +Description: If running under Xen:
> + Type of guest:
> + "Xen": standard guest type on arm
> + "HVM": fully virtualized guest (x86)
> + "PV": paravirtualized guest (x86)
> + "PVH": fully virtualized guest without legacy emulation (x86)
> +
>  



>  
> +static ssize_t guest_type_show(struct hyp_sysfs_attr *attr, char *buffer)
> +{
> +	const char *type = "???";
> +
> +	switch (xen_domain_type) {
> +	case XEN_NATIVE:
> +		/* ARM only. */
> +		type = "Xen";
> +		break;
> +	case XEN_PV_DOMAIN:
> +		type = "PV";
> +		break;
> +	case XEN_HVM_DOMAIN:
> +		type = xen_pvh_domain() ? "PVH" : "HVM";
> +		break;
> +	}

I think we should return -EINVAL for unknown type. Or document "???" in
the ABI document.


-boris

> +	return sprintf(buffer, "%s\n", type);
> +}
>
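Boris's suggestion could be sketched as below: a stand-alone mock-up of the
patch hunk (with plain variables standing in for xen_domain_type and
xen_pvh_domain()), not the kernel code itself:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

enum xen_domain_type { XEN_NATIVE, XEN_PV_DOMAIN, XEN_HVM_DOMAIN };

/* Stand-ins for the kernel's xen_domain_type and xen_pvh_domain(). */
static enum xen_domain_type xen_domain_type = XEN_PV_DOMAIN;
static int xen_pvh;

/* Variant returning -EINVAL for an unknown type instead of printing "???",
 * so the unknown case never has to appear in the ABI document. */
static int guest_type_show(char *buffer)
{
    const char *type;

    switch (xen_domain_type) {
    case XEN_NATIVE:
        type = "Xen";            /* ARM only. */
        break;
    case XEN_PV_DOMAIN:
        type = "PV";
        break;
    case XEN_HVM_DOMAIN:
        type = xen_pvh ? "PVH" : "HVM";
        break;
    default:
        return -EINVAL;          /* unknown type: error, not "???" */
    }

    return sprintf(buffer, "%s\n", type);
}
```

The alternative, as noted, is to keep "???" and document it explicitly in
the ABI file.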


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v11 01/34] ARM: vGIC: avoid rank lock when reading priority

2017-06-14 Thread Stefano Stabellini
On Wed, 14 Jun 2017, Julien Grall wrote:
> On 06/13/2017 11:19 PM, Stefano Stabellini wrote:
> > On Tue, 13 Jun 2017, Julien Grall wrote:
> > > On 12/06/2017 23:34, Stefano Stabellini wrote:
> > > > On Mon, 12 Jun 2017, Julien Grall wrote:
> > > > > Hi Andre,
> > > > > 
> > > > > On 09/06/17 18:41, Andre Przywara wrote:
> > > > > > When reading the priority value of a virtual interrupt, we were
> > > > > > taking
> > > > > > the respective rank lock so far.
> > > > > > However for forwarded interrupts (Dom0 only so far) this may lead to
> > > > > > a
> > > > > > deadlock with the following call chain:
> > > > > > - MMIO access to change the IRQ affinity, calling the ITARGETSR
> > > > > > handler
> > > > > > - this handler takes the appropriate rank lock and calls
> > > > > > vgic_store_itargetsr()
> > > > > > - vgic_store_itargetsr() will eventually call vgic_migrate_irq()
> > > > > > - if this IRQ is already in-flight, it will remove it from the old
> > > > > >VCPU and inject it into the new one, by calling
> > > > > > vgic_vcpu_inject_irq()
> > > > > > - vgic_vcpu_inject_irq will call vgic_get_virq_priority()
> > > > > > - vgic_get_virq_priority() tries to take the rank lock - again!
> > > > > > It seems like this code path has never been exercised before.
> > > > > > 
> > > > > > Fix this by avoiding taking the lock in vgic_get_virq_priority()
> > > > > > (like
> > > > > > we
> > > > > > do in vgic_get_target_vcpu()).
> > > > > > Actually we are just reading one byte, and priority changes while
> > > > > > interrupts are handled are a benign race that can happen on real
> > > > > > hardware
> > > > > > too. So it is safe to just prevent the compiler from reading from
> > > > > > the
> > > > > > struct more than once.
> > > > > > 
> > > > > > Signed-off-by: Andre Przywara 
> > > > > > ---
> > > > > >   xen/arch/arm/vgic-v2.c | 13 -
> > > > > >   xen/arch/arm/vgic-v3.c | 11 +++
> > > > > >   xen/arch/arm/vgic.c|  8 +---
> > > > > >   3 files changed, 16 insertions(+), 16 deletions(-)
> > > > > > 
> > > > > > diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> > > > > > index dc9f95b..5370020 100644
> > > > > > --- a/xen/arch/arm/vgic-v2.c
> > > > > > +++ b/xen/arch/arm/vgic-v2.c
> > > > > > @@ -258,9 +258,9 @@ static int vgic_v2_distr_mmio_read(struct vcpu
> > > > > > *v,
> > > > > > mmio_info_t *info,
> > > > > >   if ( rank == NULL ) goto read_as_zero;
> > > > > > 
> > > > > >   vgic_lock_rank(v, rank, flags);
> > > > > > -ipriorityr = rank->ipriorityr[REG_RANK_INDEX(8,
> > > > > > - gicd_reg -
> > > > > > GICD_IPRIORITYR,
> > > > > > - DABT_WORD)];
> > > > > > +ipriorityr = ACCESS_ONCE(rank->ipriorityr[REG_RANK_INDEX(8,
> > > > > > + gicd_reg - GICD_IPRIORITYR,
> > > > > > + DABT_WORD)]);
> > > > > 
> > > > > The indentation is a bit odd. Can you introduce a temporary variable
> > > > > here?
> > > > > 
> > > > > >   vgic_unlock_rank(v, rank, flags);
> > > > > >   *r = vgic_reg32_extract(ipriorityr, info);
> > > > > > 
> > > > > > @@ -499,7 +499,7 @@ static int vgic_v2_distr_mmio_write(struct vcpu
> > > > > > *v,
> > > > > > mmio_info_t *info,
> > > > > > 
> > > > > >   case VRANGE32(GICD_IPRIORITYR, GICD_IPRIORITYRN):
> > > > > >   {
> > > > > > -uint32_t *ipriorityr;
> > > > > > +uint32_t *ipriorityr, priority;
> > > > > > 
> > > > > >   if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD )
> > > > > > goto
> > > > > > bad_width;
> > > > > >   rank = vgic_rank_offset(v, 8, gicd_reg - GICD_IPRIORITYR,
> > > > > > DABT_WORD);
> > > > > > @@ -508,7 +508,10 @@ static int vgic_v2_distr_mmio_write(struct vcpu
> > > > > > *v,
> > > > > > mmio_info_t *info,
> > > > > >   ipriorityr = &rank->ipriorityr[REG_RANK_INDEX(8,
> > > > > > gicd_reg -
> > > > > > GICD_IPRIORITYR,
> > > > > > DABT_WORD)];
> > > > > > -vgic_reg32_update(ipriorityr, r, info);
> > > > > > +priority = ACCESS_ONCE(*ipriorityr);
> > > > > > +vgic_reg32_update(&priority, r, info);
> > > > > > +ACCESS_ONCE(*ipriorityr) = priority;
> > > > > 
> > > > > This is a bit odd to read because of the dereferencing. I admit that I
> > > > > would
> > > > > prefer if you use read_atomic/write_atomic which are easier to
> > > > > understand
> > > > > (though the naming is confusing).
> > > > > 
> > > > > Let see what Stefano thinks here.
> > > > 
> > > > I also prefer *_atomic, especially given what Jan wrote about
> > > > ACCESS_ONCE:
> > > > 
> > > >Plus ACCESS_ONCE() doesn't enforce a single instruction to be used in
> > > >the resulting assembly).
> > > 
> > > I don't buy this 

Re: [Xen-devel] [PATCH] xen: allocate page for shared info page from low memory

2017-06-14 Thread Boris Ostrovsky
On 06/14/2017 01:11 PM, Juergen Gross wrote:
> On 14/06/17 18:58, Boris Ostrovsky wrote:
>> On 06/12/2017 07:53 AM, Juergen Gross wrote:
>>> In an HVM guest the kernel allocates the page for mapping the shared
>>> info structure via extend_brk() today. This will lead to a drop of
>>> performance as the underlying EPT entry will have to be split up into
>>> 4kB entries as the single shared info page is located in hypervisor
>>> memory.
>>>
>>> The issue has been detected by using the libmicro munmap test:
>>> unmapping 8kB of memory was faster by nearly a factor of two when no
>>> pv interfaces were active in the HVM guest.
>>>
>>> So instead of taking a page from memory which might be mapped via
>>> large EPT entries use a page which is already mapped via a 4kB EPT
>>> entry: we can take a page from the first 1MB of memory as the video
>>> memory at 640kB disallows using larger EPT entries.
>>>
>>> Signed-off-by: Juergen Gross 
>>> ---
>>>  arch/x86/xen/enlighten_hvm.c | 31 ---
>>>  arch/x86/xen/enlighten_pv.c  |  2 --
>>>  2 files changed, 24 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
>>> index a6d014f47e52..c19477b6e43a 100644
>>> --- a/arch/x86/xen/enlighten_hvm.c
>>> +++ b/arch/x86/xen/enlighten_hvm.c
>>> @@ -1,5 +1,6 @@
>>>  #include 
>>>  #include 
>>> +#include 
>>>  
>>>  #include 
>>>  #include 
>>> @@ -10,9 +11,11 @@
>>>  #include 
>>>  #include 
>>>  #include 
>>> +#include 
>>>  
>>>  #include 
>>>  #include 
>>> +#include 
>>>  
>>>  #include "xen-ops.h"
>>>  #include "mmu.h"
>>> @@ -22,20 +25,34 @@ void __ref xen_hvm_init_shared_info(void)
>>>  {
>>> int cpu;
>>> struct xen_add_to_physmap xatp;
>>> -   static struct shared_info *shared_info_page;
>>> +   u64 pa;
>>> +
>>> +   if (HYPERVISOR_shared_info == &xen_dummy_shared_info) {
>>> +   /*
>>> +* Search for a free page starting at 4kB physical address.
>>> +* Low memory is preferred to avoid an EPT large page split up
>>> +* by the mapping.
>>> +* Starting below X86_RESERVE_LOW (usually 64kB) is fine as
>>> +* the BIOS used for HVM guests is well behaved and won't
>>> +* clobber memory other than the first 4kB.
>>> +*/
>>> +   for (pa = PAGE_SIZE;
>>> +!e820__mapped_all(pa, pa + PAGE_SIZE, E820_TYPE_RAM) ||
>>> +memblock_is_reserved(pa);
>>> +pa += PAGE_SIZE)
>>> +   ;
>> Is it possible to never find a page here?
> Only if there is no memory available at all. :-)
>
> TBH: I expect this to _always_ succeed at the first loop iteration.



Reviewed-by: Boris Ostrovsky 




[Xen-devel] [PATCH v4 2/4] xen: add sysfs node for guest type

2017-06-14 Thread Juergen Gross
Currently there is no reliable user interface inside a Xen guest to
determine its type (e.g. HVM, PV or PVH). Instead of letting user mode
try to determine this by various rather hacky mechanisms (parsing of
boot messages before they are gone, trying to make use of known subtle
differences in behavior of some instructions), add a sysfs node
/sys/hypervisor/guest_type to explicitly deliver this information as
it is known to the kernel.

Signed-off-by: Juergen Gross 
---
V4:
  - use xen_domain_type instead of introducing xen_guest_type
(Boris Ostrovsky)
V2:
  - remove PVHVM guest type (Andrew Cooper)
  - move description to Documentation/ABI/testing/sysfs-hypervisor-xen
(Boris Ostrovsky)
  - make xen_guest_type const char * (Jan Beulich)
  - modify standard ARM guest type to "Xen"
---
 .../{sysfs-hypervisor-pmu => sysfs-hypervisor-xen} | 15 +--
 MAINTAINERS|  2 +-
 drivers/xen/sys-hypervisor.c   | 31 ++
 3 files changed, 45 insertions(+), 3 deletions(-)
 rename Documentation/ABI/testing/{sysfs-hypervisor-pmu => 
sysfs-hypervisor-xen} (67%)

diff --git a/Documentation/ABI/testing/sysfs-hypervisor-pmu 
b/Documentation/ABI/testing/sysfs-hypervisor-xen
similarity index 67%
rename from Documentation/ABI/testing/sysfs-hypervisor-pmu
rename to Documentation/ABI/testing/sysfs-hypervisor-xen
index 224faa105e18..c0edb3fdd6eb 100644
--- a/Documentation/ABI/testing/sysfs-hypervisor-pmu
+++ b/Documentation/ABI/testing/sysfs-hypervisor-xen
@@ -1,8 +1,19 @@
+What:  /sys/hypervisor/guest_type
+Date:  May 2017
+KernelVersion: 4.13
+Contact:   xen-de...@lists.xenproject.org
+Description:   If running under Xen:
+   Type of guest:
+   "Xen": standard guest type on arm
+   "HVM": fully virtualized guest (x86)
+   "PV": paravirtualized guest (x86)
+   "PVH": fully virtualized guest without legacy emulation (x86)
+
 What:  /sys/hypervisor/pmu/pmu_mode
 Date:  August 2015
 KernelVersion: 4.3
 Contact:   Boris Ostrovsky 
-Description:
+Description:   If running under Xen:
Describes mode that Xen's performance-monitoring unit (PMU)
uses. Accepted values are
"off"  -- PMU is disabled
@@ -17,7 +28,7 @@ What:   /sys/hypervisor/pmu/pmu_features
 Date:   August 2015
 KernelVersion:  4.3
 Contact:Boris Ostrovsky 
-Description:
+Description:   If running under Xen:
Describes Xen PMU features (as an integer). A set bit indicates
that the corresponding feature is enabled. See
include/xen/interface/xenpmu.h for available features
diff --git a/MAINTAINERS b/MAINTAINERS
index 68c31aebb79c..5630439429e6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13983,7 +13983,7 @@ F:  arch/x86/include/asm/xen/
 F: include/xen/
 F: include/uapi/xen/
 F: Documentation/ABI/stable/sysfs-hypervisor-xen
-F: Documentation/ABI/testing/sysfs-hypervisor-pmu
+F: Documentation/ABI/testing/sysfs-hypervisor-xen
 
 XEN HYPERVISOR ARM
 M: Stefano Stabellini 
diff --git a/drivers/xen/sys-hypervisor.c b/drivers/xen/sys-hypervisor.c
index 84106f9c456c..10400917e8e8 100644
--- a/drivers/xen/sys-hypervisor.c
+++ b/drivers/xen/sys-hypervisor.c
@@ -50,6 +50,32 @@ static int __init xen_sysfs_type_init(void)
	return sysfs_create_file(hypervisor_kobj, &type_attr.attr);
 }
 
+static ssize_t guest_type_show(struct hyp_sysfs_attr *attr, char *buffer)
+{
+   const char *type = "???";
+
+   switch (xen_domain_type) {
+   case XEN_NATIVE:
+   /* ARM only. */
+   type = "Xen";
+   break;
+   case XEN_PV_DOMAIN:
+   type = "PV";
+   break;
+   case XEN_HVM_DOMAIN:
+   type = xen_pvh_domain() ? "PVH" : "HVM";
+   break;
+   }
+   return sprintf(buffer, "%s\n", type);
+}
+
+HYPERVISOR_ATTR_RO(guest_type);
+
+static int __init xen_sysfs_guest_type_init(void)
+{
+   return sysfs_create_file(hypervisor_kobj, &guest_type_attr.attr);
+}
+
 /* xen version attributes */
 static ssize_t major_show(struct hyp_sysfs_attr *attr, char *buffer)
 {
@@ -471,6 +497,9 @@ static int __init hyper_sysfs_init(void)
ret = xen_sysfs_type_init();
if (ret)
goto out;
+   ret = xen_sysfs_guest_type_init();
+   if (ret)
+   goto guest_type_out;
ret = xen_sysfs_version_init();
if (ret)
goto version_out;
@@ -502,6 +531,8 @@ static int __init hyper_sysfs_init(void)
 comp_out:
sysfs_remove_group(hypervisor_kobj, _group);
 version_out:
+   sysfs_remove_file(hypervisor_kobj, &guest_type_attr.attr);
+guest_type_out:
	sysfs_remove_file(hypervisor_kobj, &type_attr.attr);
 out:

[Xen-devel] [PATCH v4 0/4] xen: add xen sysfs nodes

2017-06-14 Thread Juergen Gross
There is currently no stable interface allowing a user to determine
the Xen guest type from within the guest.

Add a sysfs node for that purpose, as the guest type information is
available to the kernel.

While doing this document all the other Xen related sysfs nodes.

Add another node to show the Xen hypervisor buildid in order to make
hypervisor live patching easier.

Juergen Gross (4):
  doc,xen: document hypervisor sysfs nodes for xen
  xen: add sysfs node for guest type
  xen: sync include/xen/interface/version.h
  xen: add sysfs node for hypervisor build id

 Documentation/ABI/stable/sysfs-hypervisor-xen  | 119 +
 .../{sysfs-hypervisor-pmu => sysfs-hypervisor-xen} |  24 -
 MAINTAINERS|   2 +
 drivers/xen/sys-hypervisor.c   |  59 ++
 include/xen/interface/version.h|  15 +++
 5 files changed, 217 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/ABI/stable/sysfs-hypervisor-xen
 rename Documentation/ABI/testing/{sysfs-hypervisor-pmu => 
sysfs-hypervisor-xen} (54%)

-- 
2.12.3




[Xen-devel] [PATCH v4 3/4] xen: sync include/xen/interface/version.h

2017-06-14 Thread Juergen Gross
Sync include/xen/interface/version.h with the Xen source.

Signed-off-by: Juergen Gross 
Reviewed-by: Boris Ostrovsky 
---
 include/xen/interface/version.h | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/include/xen/interface/version.h b/include/xen/interface/version.h
index 7ff6498679a3..145f12f9ecec 100644
--- a/include/xen/interface/version.h
+++ b/include/xen/interface/version.h
@@ -63,4 +63,19 @@ struct xen_feature_info {
 /* arg == xen_domain_handle_t. */
 #define XENVER_guest_handle 8
 
+#define XENVER_commandline 9
+struct xen_commandline {
+   char buf[1024];
+};
+
+/*
+ * Return value is the number of bytes written, or XEN_Exx on error.
+ * Calling with empty parameter returns the size of build_id.
+ */
+#define XENVER_build_id 10
+struct xen_build_id {
+   uint32_t    len; /* IN: size of buf[]. */
+   unsigned char   buf[];
+};
+
 #endif /* __XEN_PUBLIC_VERSION_H__ */
-- 
2.12.3




[Xen-devel] [PATCH v4 4/4] xen: add sysfs node for hypervisor build id

2017-06-14 Thread Juergen Gross
For support of Xen hypervisor live patching the hypervisor build id is
needed. Add a node /sys/hypervisor/properties/buildid containing the
information.

Signed-off-by: Juergen Gross 
---
V4:
  - send correct patch
---
 Documentation/ABI/testing/sysfs-hypervisor-xen | 11 +-
 drivers/xen/sys-hypervisor.c   | 28 ++
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/Documentation/ABI/testing/sysfs-hypervisor-xen 
b/Documentation/ABI/testing/sysfs-hypervisor-xen
index c0edb3fdd6eb..53b7b2ea7515 100644
--- a/Documentation/ABI/testing/sysfs-hypervisor-xen
+++ b/Documentation/ABI/testing/sysfs-hypervisor-xen
@@ -1,5 +1,5 @@
 What:  /sys/hypervisor/guest_type
-Date:  May 2017
+Date:  June 2017
 KernelVersion: 4.13
 Contact:   xen-de...@lists.xenproject.org
 Description:   If running under Xen:
@@ -32,3 +32,12 @@ Description: If running under Xen:
Describes Xen PMU features (as an integer). A set bit indicates
that the corresponding feature is enabled. See
include/xen/interface/xenpmu.h for available features
+
+What:  /sys/hypervisor/properties/buildid
+Date:  June 2017
+KernelVersion: 4.13
+Contact:   xen-de...@lists.xenproject.org
+Description:   If running under Xen:
+   Build id of the hypervisor, needed for hypervisor live patching.
+   Might return "" in case of special security settings
+   in the hypervisor.
diff --git a/drivers/xen/sys-hypervisor.c b/drivers/xen/sys-hypervisor.c
index 10400917e8e8..f8eeed46fbc3 100644
--- a/drivers/xen/sys-hypervisor.c
+++ b/drivers/xen/sys-hypervisor.c
@@ -353,12 +353,40 @@ static ssize_t features_show(struct hyp_sysfs_attr *attr, 
char *buffer)
 
 HYPERVISOR_ATTR_RO(features);
 
+static ssize_t buildid_show(struct hyp_sysfs_attr *attr, char *buffer)
+{
+   ssize_t ret;
+   struct xen_build_id *buildid;
+
+   ret = HYPERVISOR_xen_version(XENVER_build_id, NULL);
+   if (ret < 0) {
+   if (ret == -EPERM)
+   ret = sprintf(buffer, "");
+   return ret;
+   }
+
+   buildid = kmalloc(sizeof(*buildid) + ret, GFP_KERNEL);
+   if (!buildid)
+   return -ENOMEM;
+
+   buildid->len = ret;
+   ret = HYPERVISOR_xen_version(XENVER_build_id, buildid);
+   if (ret > 0)
+   ret = sprintf(buffer, "%s", buildid->buf);
+   kfree(buildid);
+
+   return ret;
+}
+
+HYPERVISOR_ATTR_RO(buildid);
+
 static struct attribute *xen_properties_attrs[] = {
	&capabilities_attr.attr,
	&changeset_attr.attr,
	&virtual_start_attr.attr,
	&pagesize_attr.attr,
	&features_attr.attr,
+	&buildid_attr.attr,
NULL
 };
 
-- 
2.12.3




[Xen-devel] [PATCH v4 1/4] doc, xen: document hypervisor sysfs nodes for xen

2017-06-14 Thread Juergen Gross
Today only a few sysfs nodes under /sys/hypervisor/ are documented
for Xen in Documentation/ABI/testing/sysfs-hypervisor-pmu.

Add the remaining Xen sysfs nodes under /sys/hypervisor/ in a new
file Documentation/ABI/stable/sysfs-hypervisor-xen and add the Xen
specific sysfs docs to the MAINTAINERS file.

Signed-off-by: Juergen Gross 
Reviewed-by: Boris Ostrovsky 
---
V4:
  - s/full/fully/ (Boris Ostrovsky)
V3:
  - added hint for hidden values where appropriate (Andrew Cooper)

V2:
  - rename file to Documentation/ABI/stable/sysfs-hypervisor-xen in
order to reflect Xen dependency
  - leave pmu entries in old file under testing (Boris Ostrovsky)
---
 Documentation/ABI/stable/sysfs-hypervisor-xen | 119 ++
 MAINTAINERS   |   2 +
 2 files changed, 121 insertions(+)
 create mode 100644 Documentation/ABI/stable/sysfs-hypervisor-xen

diff --git a/Documentation/ABI/stable/sysfs-hypervisor-xen 
b/Documentation/ABI/stable/sysfs-hypervisor-xen
new file mode 100644
index ..3cf5cdfcd9a8
--- /dev/null
+++ b/Documentation/ABI/stable/sysfs-hypervisor-xen
@@ -0,0 +1,119 @@
+What:  /sys/hypervisor/compilation/compile_date
+Date:  March 2009
+KernelVersion: 2.6.30
+Contact:   xen-de...@lists.xenproject.org
+Description:   If running under Xen:
+   Contains the build time stamp of the Xen hypervisor
+   Might return "" in case of special security settings
+   in the hypervisor.
+
+What:  /sys/hypervisor/compilation/compiled_by
+Date:  March 2009
+KernelVersion: 2.6.30
+Contact:   xen-de...@lists.xenproject.org
+Description:   If running under Xen:
+   Contains information who built the Xen hypervisor
+   Might return "" in case of special security settings
+   in the hypervisor.
+
+What:  /sys/hypervisor/compilation/compiler
+Date:  March 2009
+KernelVersion: 2.6.30
+Contact:   xen-de...@lists.xenproject.org
+Description:   If running under Xen:
+   Compiler which was used to build the Xen hypervisor
+   Might return "" in case of special security settings
+   in the hypervisor.
+
+What:  /sys/hypervisor/properties/capabilities
+Date:  March 2009
+KernelVersion: 2.6.30
+Contact:   xen-de...@lists.xenproject.org
+Description:   If running under Xen:
+   Space separated list of supported guest system types. Each type
+   is in the format: <class>-<major>.<minor>-<arch>
+   With:
+   <class>: "xen" -- x86: paravirtualized, arm: standard
+            "hvm" -- x86 only: fully virtualized
+   <major>: major guest interface version
+   <minor>: minor guest interface version
+   <arch>:  architecture, e.g.:
+"x86_32": 32 bit x86 guest without PAE
+"x86_32p": 32 bit x86 guest with PAE
+"x86_64": 64 bit x86 guest
+"armv7l": 32 bit arm guest
+"aarch64": 64 bit arm guest
+
+What:  /sys/hypervisor/properties/changeset
+Date:  March 2009
+KernelVersion: 2.6.30
+Contact:   xen-de...@lists.xenproject.org
+Description:   If running under Xen:
+   Changeset of the hypervisor (git commit)
+   Might return "" in case of special security settings
+   in the hypervisor.
+
+What:  /sys/hypervisor/properties/features
+Date:  March 2009
+KernelVersion: 2.6.30
+Contact:   xen-de...@lists.xenproject.org
+Description:   If running under Xen:
+   Features the Xen hypervisor supports for the guest as defined
+   in include/xen/interface/features.h printed as a hex value.
+
+What:  /sys/hypervisor/properties/pagesize
+Date:  March 2009
+KernelVersion: 2.6.30
+Contact:   xen-de...@lists.xenproject.org
+Description:   If running under Xen:
+   Default page size of the hypervisor printed as a hex value.
+   Might return "0" in case of special security settings
+   in the hypervisor.
+
+What:  /sys/hypervisor/properties/virtual_start
+Date:  March 2009
+KernelVersion: 2.6.30
+Contact:   xen-de...@lists.xenproject.org
+Description:   If running under Xen:
+   Virtual address of the hypervisor as a hex value.
+
+What:  /sys/hypervisor/type
+Date:  March 2009
+KernelVersion: 2.6.30
+Contact:   xen-de...@lists.xenproject.org
+Description:   If running under Xen:
+   Type of hypervisor:
+   "xen": Xen hypervisor
+
+What:  /sys/hypervisor/uuid
+Date:  March 2009
+KernelVersion: 2.6.30
+Contact:   xen-de...@lists.xenproject.org
+Description:   If running under Xen:
+   UUID of the guest as known to the Xen hypervisor.

Re: [Xen-devel] tags in backport commits

2017-06-14 Thread Stefano Stabellini
On Wed, 14 Jun 2017, Jan Beulich wrote:
> >>> On 13.06.17 at 19:41,  wrote:
> > On Tue, 13 Jun 2017, George Dunlap wrote:
> >> On 13/06/17 08:28, Jan Beulich wrote:
> >> > Furthermore - who would you mean to create these tags? In the
> >> > end I think it should be the person responsible for the respective
> >> > parts of the stable trees to decide if and how far such backports
> >> > ought to occur, so neither the person submitting the patch nor
> >> > the person committing the patch are in the position to give more
> >> > than a hint here (again speaking against using such tags for
> >> > automation).
> >> 
> >> We could require that the "stable" tag be acked by any stable tree
> >> maintainers that it affects.
> > 
> > Yes, that is where CC: sta...@xenproject.org comes into play. The people
> > at sta...@xenproject.org should ack or request a chance to the
> > backporting info. The first step would be to create a mailing list for
> > that.
> 
> Isn't Linux'es stable@ a fake address? I'm not really fancying getting
> yet another copy of patches through such a new alias or list, so I'd
> expect this to be a fake address in our case too.

Neither am I, but shouldn't emails be deduped by the mail server? In any
case, you are right, I realize that we don't actually need another
mailing list, just another rule in my procmail, so I am fine with
sta...@xenproject.org being a fake address, but let's keep in mind that
git send-email will try to send emails to sta...@xenproject.org, so if
we don't create it, the sender will get bounce backs?



[Xen-devel] [RFC PATCH 0/4] Per-domain locking in xl

2017-06-14 Thread Wei Liu
It has always been the case that different xl processes can manipulate the same
domain at the same time. This could be problematic.

This series attempts to provide a facility for xl to take a per-domain lock.
This lock should be used whenever xl manipulates an existing domain.

The last patch demonstrates the facility and serves as a template for further
refactoring. The refactoring is a bit tedious, so I would like to know
if people are happy with this approach before working on it further.

Cc: Jan Beulich 
Cc: Andrew Cooper 

Wei Liu (4):
  xl: move {acquire,release}_lock to xl_utils.c
  xl: make lock functions work with arbitrary files and fds
  xl: introduce facility to run function with per-domain lock held
  XXX a command to test the locking facility

 tools/xl/xl.c   | 19 ++-
 tools/xl/xl.h   |  6 +++-
 tools/xl/xl_cmdtable.c  |  5 +++
 tools/xl/xl_misc.c  | 37 +
 tools/xl/xl_utils.c | 88 +
 tools/xl/xl_utils.h |  6 
 tools/xl/xl_vmcontrol.c | 73 ++--
 7 files changed, 154 insertions(+), 80 deletions(-)

-- 
2.11.0




[Xen-devel] [RFC PATCH 4/4] XXX a command to test the locking facility

2017-06-14 Thread Wei Liu
Signed-off-by: Wei Liu 
---
 tools/xl/xl.h  |  2 ++
 tools/xl/xl_cmdtable.c |  5 +
 tools/xl/xl_misc.c | 37 +
 3 files changed, 44 insertions(+)

diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 8d667ff444..9cb135baf2 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -209,6 +209,8 @@ int main_psr_cat_show(int argc, char **argv);
 #endif
 int main_qemu_monitor_command(int argc, char **argv);
 
+int main_lock(int argc, char **argv);
+
 void help(const char *command);
 
 extern const char *common_domname;
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 30eb93c17f..01db5fb8f4 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -591,6 +591,11 @@ struct cmd_spec cmd_table[] = {
   "Issue a qemu monitor command to the device model of a domain",
   " ",
 },
+{ "lock",
  &main_lock, 0, 0,
+  "Lock a domain, prevent other xl processes from manipulating it",
+  "",
+},
 };
 
 int cmdtable_len = sizeof(cmd_table)/sizeof(struct cmd_spec);
diff --git a/tools/xl/xl_misc.c b/tools/xl/xl_misc.c
index 9c6227af23..772fd7adb4 100644
--- a/tools/xl/xl_misc.c
+++ b/tools/xl/xl_misc.c
@@ -14,6 +14,7 @@
 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -344,6 +345,42 @@ int main_config_update(int argc, char **argv)
 return 0;
 }
 
+struct lock_arg {
+uint32_t domid;
+};
+
+static int lock_fn(void *p)
+{
+struct lock_arg *arg = p;
+
+fprintf(stderr, "Now I have the lock for %u\n", arg->domid);
+
+sleep(10);
+
+fprintf(stderr, "Done\n");
+
+return 0;
+}
+
+int main_lock(int argc, char **argv)
+{
+uint32_t domid;
+struct lock_arg arg;
+int rc;
+
+domid = find_domain(argv[optind++]);
+
+fprintf(stderr, "About to lock %u\n", domid);
+
+arg.domid = domid;
+
rc = with_lock(domid, lock_fn, &arg);
+fprintf(stderr, "with_lock returned, rc = %d\n", rc);
+if (rc)
+return EXIT_FAILURE;
+return EXIT_SUCCESS;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.11.0




[Xen-devel] [RFC PATCH 2/4] xl: make lock functions work with arbitrary files and fds

2017-06-14 Thread Wei Liu
Rename the existing lock to xl_global lock. Refactor the functions to
take the filename and fd so that they can work with any filename and
fd.

No functional change.

Signed-off-by: Wei Liu 
---
 tools/xl/xl.c   | 19 ++-
 tools/xl/xl.h   |  3 ++-
 tools/xl/xl_utils.c | 38 --
 tools/xl/xl_utils.h |  4 ++--
 tools/xl/xl_vmcontrol.c |  6 +++---
 5 files changed, 37 insertions(+), 33 deletions(-)

diff --git a/tools/xl/xl.c b/tools/xl/xl.c
index 02179a6229..f6740b0a86 100644
--- a/tools/xl/xl.c
+++ b/tools/xl/xl.c
@@ -35,7 +35,8 @@ int force_execution;
 int autoballoon = -1;
 char *blkdev_start;
 int run_hotplug_scripts = 1;
-char *lockfile;
+char *xl_global_lockfile;
+int  xl_global_fd_lock = -1;
 char *default_vifscript = NULL;
 char *default_bridge = NULL;
 char *default_gatewaydev = NULL;
@@ -117,14 +118,14 @@ static void parse_global_config(const char *configfile,
if (!xlu_cfg_get_long (config, "run_hotplug_scripts", &l, 0))
 run_hotplug_scripts = l;
 
-if (!xlu_cfg_get_string (config, "lockfile", &buf, 0))
-lockfile = strdup(buf);
+if (!xlu_cfg_get_string (config, "xl_global_lockfile", &buf, 0))
+xl_global_lockfile = strdup(buf);
 else {
-lockfile = strdup(XL_LOCK_FILE);
+xl_global_lockfile = strdup(XL_LOCK_FILE);
 }
 
-if (!lockfile) {
-fprintf(stderr, "failed to allocate lockfile\n");
+if (!xl_global_lockfile) {
+fprintf(stderr, "failed to allocate xl_global_lockfile\n");
 exit(1);
 }
 
@@ -295,9 +296,9 @@ static void xl_ctx_free(void)
 xtl_logger_destroy((xentoollog_logger*)logger);
 logger = NULL;
 }
-if (lockfile) {
-free(lockfile);
-lockfile = NULL;
+if (xl_global_lockfile) {
+free(xl_global_lockfile);
+xl_global_lockfile = NULL;
 }
 }
 
diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index aa95b77146..93ec4d7e4c 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -265,7 +265,8 @@ extern int claim_mode;
 extern bool progress_use_cr;
 extern xentoollog_level minmsglevel;
 #define minmsglevel_default XTL_PROGRESS
-extern char *lockfile;
+extern char *xl_global_lockfile;
+extern int  xl_global_fd_lock;
 extern char *default_vifscript;
 extern char *default_bridge;
 extern char *default_gatewaydev;
diff --git a/tools/xl/xl_utils.c b/tools/xl/xl_utils.c
index 331a67bc95..e7038ec324 100644
--- a/tools/xl/xl_utils.c
+++ b/tools/xl/xl_utils.c
@@ -316,50 +316,51 @@ out:
 return ret;
 }
 
-static int fd_lock = -1;
-
-int acquire_lock(void)
+int acquire_lock(const char *lockfile, int *fd_lock)
 {
 int rc;
 struct flock fl;
 
 /* lock already acquired */
-if (fd_lock >= 0)
+if (*fd_lock >= 0)
 return ERROR_INVAL;
 
 fl.l_type = F_WRLCK;
 fl.l_whence = SEEK_SET;
 fl.l_start = 0;
 fl.l_len = 0;
-fd_lock = open(lockfile, O_WRONLY|O_CREAT, S_IWUSR);
-if (fd_lock < 0) {
-fprintf(stderr, "cannot open the lockfile %s errno=%d\n", lockfile, 
errno);
+*fd_lock = open(lockfile, O_WRONLY|O_CREAT, S_IWUSR);
+if (*fd_lock < 0) {
+fprintf(stderr, "cannot open the lockfile %s errno=%d\n",
+lockfile, errno);
 return ERROR_FAIL;
 }
-if (fcntl(fd_lock, F_SETFD, FD_CLOEXEC) < 0) {
-close(fd_lock);
-fprintf(stderr, "cannot set cloexec to lockfile %s errno=%d\n", 
lockfile, errno);
+if (fcntl(*fd_lock, F_SETFD, FD_CLOEXEC) < 0) {
+close(*fd_lock);
+fprintf(stderr, "cannot set cloexec to lockfile %s errno=%d\n",
+lockfile, errno);
 return ERROR_FAIL;
 }
 get_lock:
-rc = fcntl(fd_lock, F_SETLKW, &fl);
+rc = fcntl(*fd_lock, F_SETLKW, &fl);
 if (rc < 0 && errno == EINTR)
 goto get_lock;
 if (rc < 0) {
-fprintf(stderr, "cannot acquire lock %s errno=%d\n", lockfile, errno);
+fprintf(stderr, "cannot acquire lock %s errno=%d\n",
+lockfile, errno);
 rc = ERROR_FAIL;
 } else
 rc = 0;
 return rc;
 }
 
-int release_lock(void)
+int release_lock(const char *lockfile, int *fd_lock)
 {
 int rc;
 struct flock fl;
 
 /* lock not acquired */
-if (fd_lock < 0)
+if (*fd_lock < 0)
 return ERROR_INVAL;
 
 release_lock:
@@ -368,16 +369,17 @@ release_lock:
 fl.l_start = 0;
 fl.l_len = 0;
 
-rc = fcntl(fd_lock, F_SETLKW, &fl);
+rc = fcntl(*fd_lock, F_SETLKW, &fl);
 if (rc < 0 && errno == EINTR)
 goto release_lock;
 if (rc < 0) {
-fprintf(stderr, "cannot release lock %s, errno=%d\n", lockfile, errno);
+fprintf(stderr, "cannot release lock %s, errno=%d\n",
+lockfile, errno);
 rc = ERROR_FAIL;
 } else
 rc = 0;
-close(fd_lock);
-fd_lock = -1;
+close(*fd_lock);
+*fd_lock = -1;
 
 return rc;
 }
diff --git a/tools/xl/xl_utils.h b/tools/xl/xl_utils.h
index 

[Xen-devel] [RFC PATCH 1/4] xl: move {acquire, release}_lock to xl_utils.c

2017-06-14 Thread Wei Liu
Pure code motion, no functional change.

Signed-off-by: Wei Liu 
---
 tools/xl/xl_utils.c | 67 +
 tools/xl/xl_utils.h |  3 +++
 tools/xl/xl_vmcontrol.c | 67 -
 3 files changed, 70 insertions(+), 67 deletions(-)

diff --git a/tools/xl/xl_utils.c b/tools/xl/xl_utils.c
index 4503ac7ea0..331a67bc95 100644
--- a/tools/xl/xl_utils.c
+++ b/tools/xl/xl_utils.c
@@ -316,6 +316,73 @@ out:
 return ret;
 }
 
+static int fd_lock = -1;
+
+int acquire_lock(void)
+{
+int rc;
+struct flock fl;
+
+/* lock already acquired */
+if (fd_lock >= 0)
+return ERROR_INVAL;
+
+fl.l_type = F_WRLCK;
+fl.l_whence = SEEK_SET;
+fl.l_start = 0;
+fl.l_len = 0;
+fd_lock = open(lockfile, O_WRONLY|O_CREAT, S_IWUSR);
+if (fd_lock < 0) {
+fprintf(stderr, "cannot open the lockfile %s errno=%d\n", lockfile, 
errno);
+return ERROR_FAIL;
+}
+if (fcntl(fd_lock, F_SETFD, FD_CLOEXEC) < 0) {
+close(fd_lock);
+fprintf(stderr, "cannot set cloexec to lockfile %s errno=%d\n", 
lockfile, errno);
+return ERROR_FAIL;
+}
+get_lock:
+rc = fcntl(fd_lock, F_SETLKW, &fl);
+if (rc < 0 && errno == EINTR)
+goto get_lock;
+if (rc < 0) {
+fprintf(stderr, "cannot acquire lock %s errno=%d\n", lockfile, errno);
+rc = ERROR_FAIL;
+} else
+rc = 0;
+return rc;
+}
+
+int release_lock(void)
+{
+int rc;
+struct flock fl;
+
+/* lock not acquired */
+if (fd_lock < 0)
+return ERROR_INVAL;
+
+release_lock:
+fl.l_type = F_UNLCK;
+fl.l_whence = SEEK_SET;
+fl.l_start = 0;
+fl.l_len = 0;
+
+rc = fcntl(fd_lock, F_SETLKW, &fl);
+if (rc < 0 && errno == EINTR)
+goto release_lock;
+if (rc < 0) {
+fprintf(stderr, "cannot release lock %s, errno=%d\n", lockfile, errno);
+rc = ERROR_FAIL;
+} else
+rc = 0;
+close(fd_lock);
+fd_lock = -1;
+
+return rc;
+}
+
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/xl/xl_utils.h b/tools/xl/xl_utils.h
index 7b9ccca30a..3ee1543e56 100644
--- a/tools/xl/xl_utils.h
+++ b/tools/xl/xl_utils.h
@@ -146,6 +146,9 @@ uint32_t find_domain(const char *p) 
__attribute__((warn_unused_result));
 void print_bitmap(uint8_t *map, int maplen, FILE *stream);
 
 int do_daemonize(char *name, const char *pidfile);
+
+int acquire_lock(void);
+int release_lock(void);
 #endif /* XL_UTILS_H */
 
 /*
diff --git a/tools/xl/xl_vmcontrol.c b/tools/xl/xl_vmcontrol.c
index 89c2b25ded..9b5a4cb001 100644
--- a/tools/xl/xl_vmcontrol.c
+++ b/tools/xl/xl_vmcontrol.c
@@ -30,8 +30,6 @@
 #include "xl_utils.h"
 #include "xl_parse.h"
 
-static int fd_lock = -1;
-
 static void pause_domain(uint32_t domid)
 {
 libxl_domain_pause(ctx, domid);
@@ -550,71 +548,6 @@ static void autoconnect_vncviewer(uint32_t domid, int 
autopass)
 _exit(EXIT_FAILURE);
 }
 
-static int acquire_lock(void)
-{
-int rc;
-struct flock fl;
-
-/* lock already acquired */
-if (fd_lock >= 0)
-return ERROR_INVAL;
-
-fl.l_type = F_WRLCK;
-fl.l_whence = SEEK_SET;
-fl.l_start = 0;
-fl.l_len = 0;
-fd_lock = open(lockfile, O_WRONLY|O_CREAT, S_IWUSR);
-if (fd_lock < 0) {
-fprintf(stderr, "cannot open the lockfile %s errno=%d\n", lockfile, errno);
-return ERROR_FAIL;
-}
-if (fcntl(fd_lock, F_SETFD, FD_CLOEXEC) < 0) {
-close(fd_lock);
-fprintf(stderr, "cannot set cloexec to lockfile %s errno=%d\n", lockfile, errno);
-return ERROR_FAIL;
-}
-get_lock:
-rc = fcntl(fd_lock, F_SETLKW, &fl);
-if (rc < 0 && errno == EINTR)
-goto get_lock;
-if (rc < 0) {
-fprintf(stderr, "cannot acquire lock %s errno=%d\n", lockfile, errno);
-rc = ERROR_FAIL;
-} else
-rc = 0;
-return rc;
-}
-
-static int release_lock(void)
-{
-int rc;
-struct flock fl;
-
-/* lock not acquired */
-if (fd_lock < 0)
-return ERROR_INVAL;
-
-release_lock:
-fl.l_type = F_UNLCK;
-fl.l_whence = SEEK_SET;
-fl.l_start = 0;
-fl.l_len = 0;
-
-rc = fcntl(fd_lock, F_SETLKW, &fl);
-if (rc < 0 && errno == EINTR)
-goto release_lock;
-if (rc < 0) {
-fprintf(stderr, "cannot release lock %s, errno=%d\n", lockfile, errno);
-rc = ERROR_FAIL;
-} else
-rc = 0;
-close(fd_lock);
-fd_lock = -1;
-
-return rc;
-}
-
-
 static void autoconnect_console(libxl_ctx *ctx_ignored,
 libxl_event *ev, void *priv)
 {
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC PATCH 3/4] xl: introduce facility to run function with per-domain lock held

2017-06-14 Thread Wei Liu
Signed-off-by: Wei Liu 
---
 tools/xl/xl.h   |  1 +
 tools/xl/xl_utils.c | 19 +++
 tools/xl/xl_utils.h |  3 +++
 3 files changed, 23 insertions(+)

diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 93ec4d7e4c..8d667ff444 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -292,6 +292,7 @@ extern void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
 
 #define XL_GLOBAL_CONFIG XEN_CONFIG_DIR "/xl.conf"
 #define XL_LOCK_FILE XEN_LOCK_DIR "/xl"
+#define XL_DOMAIN_LOCK_FILE_FMT XEN_LOCK_DIR "/xl-%u"
 
 #endif /* XL_H */
 
diff --git a/tools/xl/xl_utils.c b/tools/xl/xl_utils.c
index e7038ec324..bb32ba0a1f 100644
--- a/tools/xl/xl_utils.c
+++ b/tools/xl/xl_utils.c
@@ -27,6 +27,25 @@
 #include "xl.h"
 #include "xl_utils.h"
 
+int with_lock(uint32_t domid, domain_fn fn, void *arg)
+{
+char filename[sizeof(XL_DOMAIN_LOCK_FILE_FMT)+15];
+int fd_lock = -1;
+int rc;
+
+snprintf(filename, sizeof(filename), XL_DOMAIN_LOCK_FILE_FMT, domid);
+
+rc = acquire_lock(filename, &fd_lock);
+if (rc) goto out;
+
+rc = fn(arg);
+
+release_lock(filename, &fd_lock);
+
+out:
+return rc;
+}
+
 void dolog(const char *file, int line, const char *func, char *fmt, ...)
 {
 va_list ap;
diff --git a/tools/xl/xl_utils.h b/tools/xl/xl_utils.h
index 18280d7e84..5e0d502fa6 100644
--- a/tools/xl/xl_utils.h
+++ b/tools/xl/xl_utils.h
@@ -149,6 +149,9 @@ int do_daemonize(char *name, const char *pidfile);
 
 int acquire_lock(const char *lockfile, int *fd_lock);
 int release_lock(const char *lockfile, int *fd_lock);
+
+typedef int (*domain_fn)(void *arg);
+int with_lock(uint32_t domid, domain_fn fn, void *arg);
 #endif /* XL_UTILS_H */
 
 /*
-- 
2.11.0




Re: [Xen-devel] [PATCH] xen: allocate page for shared info page from low memory

2017-06-14 Thread Juergen Gross
On 14/06/17 18:58, Boris Ostrovsky wrote:
> On 06/12/2017 07:53 AM, Juergen Gross wrote:
>> In a HVM guest the kernel allocates the page for mapping the shared
>> info structure via extend_brk() today. This will lead to a drop of
>> performance as the underlying EPT entry will have to be split up into
>> 4kB entries as the single shared info page is located in hypervisor
>> memory.
>>
>> The issue has been detected by using the libmicro munmap test:
>> unmapping 8kB of memory was faster by nearly a factor of two when no
>> pv interfaces were active in the HVM guest.
>>
>> So instead of taking a page from memory which might be mapped via
>> large EPT entries use a page which is already mapped via a 4kB EPT
>> entry: we can take a page from the first 1MB of memory as the video
>> memory at 640kB disallows using larger EPT entries.
>>
>> Signed-off-by: Juergen Gross 
>> ---
>>  arch/x86/xen/enlighten_hvm.c | 31 ---
>>  arch/x86/xen/enlighten_pv.c  |  2 --
>>  2 files changed, 24 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
>> index a6d014f47e52..c19477b6e43a 100644
>> --- a/arch/x86/xen/enlighten_hvm.c
>> +++ b/arch/x86/xen/enlighten_hvm.c
>> @@ -1,5 +1,6 @@
>>  #include 
>>  #include 
>> +#include 
>>  
>>  #include 
>>  #include 
>> @@ -10,9 +11,11 @@
>>  #include 
>>  #include 
>>  #include 
>> +#include 
>>  
>>  #include 
>>  #include 
>> +#include 
>>  
>>  #include "xen-ops.h"
>>  #include "mmu.h"
>> @@ -22,20 +25,34 @@ void __ref xen_hvm_init_shared_info(void)
>>  {
>>  int cpu;
>>  struct xen_add_to_physmap xatp;
>> -static struct shared_info *shared_info_page;
>> +u64 pa;
>> +
>> +if (HYPERVISOR_shared_info == &xen_dummy_shared_info) {
>> +/*
>> + * Search for a free page starting at 4kB physical address.
>> + * Low memory is preferred to avoid an EPT large page split up
>> + * by the mapping.
>> + * Starting below X86_RESERVE_LOW (usually 64kB) is fine as
>> + * the BIOS used for HVM guests is well behaved and won't
>> + * clobber memory other than the first 4kB.
>> + */
>> +for (pa = PAGE_SIZE;
>> + !e820__mapped_all(pa, pa + PAGE_SIZE, E820_TYPE_RAM) ||
>> + memblock_is_reserved(pa);
>> + pa += PAGE_SIZE)
>> +;
> 
> Is it possible to never find a page here?

Only if there is no memory available at all. :-)

TBH: I expect this to _always_ succeed at the first loop iteration.


Juergen



Re: [Xen-devel] [PATCH] xen: allocate page for shared info page from low memory

2017-06-14 Thread Boris Ostrovsky
On 06/12/2017 07:53 AM, Juergen Gross wrote:
> In a HVM guest the kernel allocates the page for mapping the shared
> info structure via extend_brk() today. This will lead to a drop of
> performance as the underlying EPT entry will have to be split up into
> 4kB entries as the single shared info page is located in hypervisor
> memory.
>
> The issue has been detected by using the libmicro munmap test:
> unmapping 8kB of memory was faster by nearly a factor of two when no
> pv interfaces were active in the HVM guest.
>
> So instead of taking a page from memory which might be mapped via
> large EPT entries use a page which is already mapped via a 4kB EPT
> entry: we can take a page from the first 1MB of memory as the video
> memory at 640kB disallows using larger EPT entries.
>
> Signed-off-by: Juergen Gross 
> ---
>  arch/x86/xen/enlighten_hvm.c | 31 ---
>  arch/x86/xen/enlighten_pv.c  |  2 --
>  2 files changed, 24 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
> index a6d014f47e52..c19477b6e43a 100644
> --- a/arch/x86/xen/enlighten_hvm.c
> +++ b/arch/x86/xen/enlighten_hvm.c
> @@ -1,5 +1,6 @@
>  #include 
>  #include 
> +#include 
>  
>  #include 
>  #include 
> @@ -10,9 +11,11 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #include 
>  #include 
> +#include 
>  
>  #include "xen-ops.h"
>  #include "mmu.h"
> @@ -22,20 +25,34 @@ void __ref xen_hvm_init_shared_info(void)
>  {
>   int cpu;
>   struct xen_add_to_physmap xatp;
> - static struct shared_info *shared_info_page;
> + u64 pa;
> +
> + if (HYPERVISOR_shared_info == &xen_dummy_shared_info) {
> + /*
> +  * Search for a free page starting at 4kB physical address.
> +  * Low memory is preferred to avoid an EPT large page split up
> +  * by the mapping.
> +  * Starting below X86_RESERVE_LOW (usually 64kB) is fine as
> +  * the BIOS used for HVM guests is well behaved and won't
> +  * clobber memory other than the first 4kB.
> +  */
> + for (pa = PAGE_SIZE;
> +  !e820__mapped_all(pa, pa + PAGE_SIZE, E820_TYPE_RAM) ||
> +  memblock_is_reserved(pa);
> +  pa += PAGE_SIZE)
> + ;

Is it possible to never find a page here?

-boris

> +
> + memblock_reserve(pa, PAGE_SIZE);
> + HYPERVISOR_shared_info = __va(pa);
> + }
>  
> - if (!shared_info_page)
> - shared_info_page = (struct shared_info *)
> - extend_brk(PAGE_SIZE, PAGE_SIZE);
>   xatp.domid = DOMID_SELF;
>   xatp.idx = 0;
>   xatp.space = XENMAPSPACE_shared_info;
> - xatp.gpfn = __pa(shared_info_page) >> PAGE_SHIFT;
> + xatp.gpfn = virt_to_pfn(HYPERVISOR_shared_info);
> + if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
>   BUG();
>  
> - HYPERVISOR_shared_info = (struct shared_info *)shared_info_page;
> -
>   /* xen_vcpu is a pointer to the vcpu_info struct in the shared_info
>* page, we use it in the event channel upcall and in some pvclock
>* related functions. We don't need the vcpu_info placement
> diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
> index f33eef4ebd12..a9a67ecf2c07 100644
> --- a/arch/x86/xen/enlighten_pv.c
> +++ b/arch/x86/xen/enlighten_pv.c
> @@ -89,8 +89,6 @@
>  
>  void *xen_initial_gdt;
>  
> -RESERVE_BRK(shared_info_page_brk, PAGE_SIZE);
> -
>  static int xen_cpu_up_prepare_pv(unsigned int cpu);
>  static int xen_cpu_dead_pv(unsigned int cpu);
>  




[Xen-devel] [PATCH v12 33/34] ARM: vITS: create and initialize virtual ITSes for Dom0

2017-06-14 Thread Andre Przywara
For each hardware ITS create and initialize a virtual ITS for Dom0.
We use the same memory mapped address to keep the doorbell working.
This introduces a function to initialize a virtual ITS.
We maintain a list of virtual ITSes, at the moment for the only
purpose of later being able to free them again.
We configure the virtual ITSes to match the hardware ones, that is, we
keep the number of device ID bits and event ID bits the same as the host
ITS.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/vgic-v3-its.c   | 77 
 xen/include/asm-arm/domain.h |  1 +
 2 files changed, 78 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 335272f..bfc5acc 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -52,6 +52,7 @@
  */
 struct virt_its {
 struct domain *d;
+struct list_head vits_list;
 paddr_t doorbell_address;
 unsigned int devid_bits;
 unsigned int evid_bits;
@@ -1454,6 +1455,46 @@ static const struct mmio_handler_ops vgic_its_mmio_handler = {
 .write = vgic_v3_its_mmio_write,
 };
 
+static int vgic_v3_its_init_virtual(struct domain *d, paddr_t guest_addr,
+unsigned int devid_bits,
+unsigned int evid_bits)
+{
+struct virt_its *its;
+uint64_t base_attr;
+
+its = xzalloc(struct virt_its);
+if ( !its )
+return -ENOMEM;
+
+base_attr  = GIC_BASER_InnerShareable << GITS_BASER_SHAREABILITY_SHIFT;
+base_attr |= GIC_BASER_CACHE_SameAsInner << GITS_BASER_OUTER_CACHEABILITY_SHIFT;
+base_attr |= GIC_BASER_CACHE_RaWaWb << GITS_BASER_INNER_CACHEABILITY_SHIFT;
+
+its->cbaser  = base_attr;
+base_attr |= 0ULL << GITS_BASER_PAGE_SIZE_SHIFT;/* 4K pages */
+its->baser_dev = GITS_BASER_TYPE_DEVICE << GITS_BASER_TYPE_SHIFT;
+its->baser_dev |= (sizeof(dev_table_entry_t) - 1) <<
+  GITS_BASER_ENTRY_SIZE_SHIFT;
+its->baser_dev |= base_attr;
+its->baser_coll  = GITS_BASER_TYPE_COLLECTION << GITS_BASER_TYPE_SHIFT;
+its->baser_coll |= (sizeof(coll_table_entry_t) - 1) <<
+   GITS_BASER_ENTRY_SIZE_SHIFT;
+its->baser_coll |= base_attr;
+its->d = d;
+its->doorbell_address = guest_addr + ITS_DOORBELL_OFFSET;
+its->devid_bits = devid_bits;
+its->evid_bits = evid_bits;
+spin_lock_init(&its->vcmd_lock);
+spin_lock_init(&its->its_lock);
+
+register_mmio_handler(d, &vgic_its_mmio_handler, guest_addr, SZ_64K, its);
+
+/* Register the virtual ITS to be able to clean it up later. */
+list_add_tail(&its->vits_list, &d->arch.vgic.vits_list);
+
+return 0;
+}
+
 unsigned int vgic_v3_its_count(const struct domain *d)
 {
 struct host_its *hw_its;
@@ -1469,16 +1510,52 @@ unsigned int vgic_v3_its_count(const struct domain *d)
 return ret;
 }
 
+/*
+ * For a hardware domain, this will iterate over the host ITSes
+ * and map one virtual ITS per host ITS at the same address.
+ */
 int vgic_v3_its_init_domain(struct domain *d)
 {
+int ret;
+
+INIT_LIST_HEAD(&d->arch.vgic.vits_list);
 spin_lock_init(&d->arch.vgic.its_devices_lock);
 d->arch.vgic.its_devices = RB_ROOT;
 
+if ( is_hardware_domain(d) )
+{
+struct host_its *hw_its;
+
+list_for_each_entry(hw_its, &host_its_list, entry)
+{
+/*
+ * For each host ITS create a virtual ITS using the same
+ * base and thus doorbell address.
+ * Use the same number of device ID and event ID bits as the host.
+ */
+ret = vgic_v3_its_init_virtual(d, hw_its->addr,
+   hw_its->devid_bits,
+   hw_its->evid_bits);
+if ( ret )
+return ret;
+else
+d->arch.vgic.has_its = true;
+}
+}
+
 return 0;
 }
 
 void vgic_v3_its_free_domain(struct domain *d)
 {
+struct virt_its *pos, *temp;
+
+list_for_each_entry_safe( pos, temp, &d->arch.vgic.vits_list, vits_list )
+{
+list_del(&pos->vits_list);
+xfree(pos);
+}
+
 ASSERT(RB_EMPTY_ROOT(&d->arch.vgic.its_devices));
 }
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index b33f54a..8dfc1d1 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -115,6 +115,7 @@ struct arch_domain
 spinlock_t its_devices_lock;/* Protects the its_devices tree */
 struct radix_tree_root pend_lpi_tree; /* Stores struct pending_irq's */
 rwlock_t pend_lpi_tree_lock;/* Protects the pend_lpi_tree */
+struct list_head vits_list; /* List of virtual ITSes */
 unsigned int intid_bits;
 /*
  * TODO: if there are more bool's being added below, consider
-- 
2.9.0



[Xen-devel] [PATCH v12 32/34] ARM: vITS: increase mmio_count for each ITS

2017-06-14 Thread Andre Przywara
Increase the count of MMIO regions needed by one for each ITS Dom0 has
to emulate. We emulate the ITSes 1:1 from the hardware, so the number
is the number of host ITSes.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/vgic-v3-its.c   | 15 +++
 xen/arch/arm/vgic-v3.c   |  3 +++
 xen/include/asm-arm/gic_v3_its.h |  7 +++
 3 files changed, 25 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index f853987..335272f 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -1454,6 +1454,21 @@ static const struct mmio_handler_ops vgic_its_mmio_handler = {
 .write = vgic_v3_its_mmio_write,
 };
 
+unsigned int vgic_v3_its_count(const struct domain *d)
+{
+struct host_its *hw_its;
+unsigned int ret = 0;
+
+/* Only Dom0 can use emulated ITSes so far. */
+if ( !is_hardware_domain(d) )
+return 0;
+
+list_for_each_entry(hw_its, &host_its_list, entry)
+ret++;
+
+return ret;
+}
+
 int vgic_v3_its_init_domain(struct domain *d)
 {
 spin_lock_init(&d->arch.vgic.its_devices_lock);
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 90a2ae3..4287ae1 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1814,6 +1814,9 @@ int vgic_v3_init(struct domain *d, int *mmio_count)
 /* GICD region + number of Redistributors */
 *mmio_count = vgic_v3_rdist_count(d) + 1;
 
+/* one region per ITS */
+*mmio_count += vgic_v3_its_count(d);
+
 register_vgic_ops(d, &v3_ops);
 
 return 0;
diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
index ce46a3f..459b6fe 100644
--- a/xen/include/asm-arm/gic_v3_its.h
+++ b/xen/include/asm-arm/gic_v3_its.h
@@ -137,6 +137,8 @@ void gicv3_its_dt_init(const struct dt_device_node *node);
 
 bool gicv3_its_host_has_its(void);
 
+unsigned int vgic_v3_its_count(const struct domain *d);
+
 void gicv3_do_LPI(unsigned int lpi);
 
 int gicv3_lpi_init_rdist(void __iomem * rdist_base);
@@ -194,6 +196,11 @@ static inline bool gicv3_its_host_has_its(void)
 return false;
 }
 
+static inline unsigned int vgic_v3_its_count(const struct domain *d)
+{
+return 0;
+}
+
 static inline void gicv3_do_LPI(unsigned int lpi)
 {
 /* We don't enable LPIs without an ITS. */
-- 
2.9.0




[Xen-devel] [PATCH] xen: idle_loop: either deal with tasklets or go idle

2017-06-14 Thread Dario Faggioli
In fact, there exist two kinds of tasklets: vCPU and
softirq context tasklets. When we want to do vCPU
context tasklet work, we force the idle vCPU (of a
particular pCPU) into execution, and run it from there.

This means there are two possible reasons for choosing
to run the idle vCPU:
1) we want a pCPU to go idle,
2) we want to run some vCPU context tasklet work.

If we're in case 2), it does not make sense to try to
see whether we can go idle, and only afterwards (as the
check _will_ fail), go processing tasklets.

This patch rearranges the code of the body of the idle
vCPUs, so that we actually check whether we are in
case 1) or 2), and act accordingly.

As a matter of fact, this also means that we do not
check for any tasklet work to be done, after waking up
from idle. This is not a problem, because:
a) for softirq context tasklets, if any is queued
   "during" wakeup from idle, TASKLET_SOFTIRQ is
   raised, and the call to do_softirq() (which is still
   happening *after* the wakeup) will take care of it;
b) for vCPU context tasklets, if any is queued "during"
   wakeup from idle, SCHEDULE_SOFTIRQ is raised and
   do_softirq() (happening after the wakeup) calls
   the scheduler. The scheduler sees that there is
   tasklet work pending and confirms the idle vCPU
   in execution, which then will get to execute
   do_tasklet().

Signed-off-by: Dario Faggioli 
---
Cc: Stefano Stabellini 
Cc: Julien Grall 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: Boris Ostrovsky 
---
 xen/arch/arm/domain.c |   21 ++---
 xen/arch/x86/domain.c |   12 +---
 xen/common/tasklet.c  |   10 +-
 xen/include/xen/tasklet.h |   12 +++-
 4 files changed, 35 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 76310ed..0ceeb5b 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -41,20 +41,27 @@ DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 
 void idle_loop(void)
 {
+unsigned int cpu = smp_processor_id();
+
 for ( ; ; )
 {
-if ( cpu_is_offline(smp_processor_id()) )
+if ( cpu_is_offline(cpu) )
 stop_cpu();
 
-local_irq_disable();
-if ( cpu_is_haltable(smp_processor_id()) )
+/* Are we here for running vcpu context tasklets, or for idling? */
+if ( unlikely(tasklet_work_to_do(cpu)) )
+do_tasklet(cpu);
+else
 {
-dsb(sy);
-wfi();
+local_irq_disable();
+if ( cpu_is_haltable(cpu) )
+{
+dsb(sy);
+wfi();
+}
+local_irq_enable();
 }
-local_irq_enable();
 
-do_tasklet();
 do_softirq();
 /*
  * We MUST be last (or before dsb, wfi). Otherwise after we get the
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 49388f4..d06700d 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -112,12 +112,18 @@ static void play_dead(void)
 
 static void idle_loop(void)
 {
+unsigned int cpu = smp_processor_id();
+
 for ( ; ; )
 {
-if ( cpu_is_offline(smp_processor_id()) )
+if ( cpu_is_offline(cpu) )
 play_dead();
-(*pm_idle)();
-do_tasklet();
+
+/* Are we here for running vcpu context tasklets, or for idling? */
+if ( unlikely(tasklet_work_to_do(cpu)) )
+do_tasklet(cpu);
+else
+(*pm_idle)();
 do_softirq();
 /*
  * We MUST be last (or before pm_idle). Otherwise after we get the
diff --git a/xen/common/tasklet.c b/xen/common/tasklet.c
index 365a777..0465751 100644
--- a/xen/common/tasklet.c
+++ b/xen/common/tasklet.c
@@ -104,19 +104,11 @@ static void do_tasklet_work(unsigned int cpu, struct list_head *list)
 }
 
 /* VCPU context work */
-void do_tasklet(void)
+void do_tasklet(unsigned int cpu)
 {
-unsigned int cpu = smp_processor_id();
 unsigned long *work_to_do = &per_cpu(tasklet_work_to_do, cpu);
 struct list_head *list = _cpu(tasklet_list, cpu);
 
-/*
- * Work must be enqueued *and* scheduled. Otherwise there is no work to
- * do, and/or scheduler needs to run to update idle vcpu priority.
- */
-if ( likely(*work_to_do != (TASKLET_enqueued|TASKLET_scheduled)) )
-return;
-
 spin_lock_irq(_lock);
 
 do_tasklet_work(cpu, list);
diff --git a/xen/include/xen/tasklet.h b/xen/include/xen/tasklet.h
index 8c3de7e..1a3f861 100644
--- a/xen/include/xen/tasklet.h
+++ b/xen/include/xen/tasklet.h
@@ -40,9 +40,19 @@ DECLARE_PER_CPU(unsigned long, tasklet_work_to_do);
 #define TASKLET_enqueued   (1ul << _TASKLET_enqueued)
 #define TASKLET_scheduled  (1ul << _TASKLET_scheduled)
 
+static inline bool tasklet_work_to_do(unsigned int cpu)
+{
+/*
+ * Work must be enqueued *and* scheduled. Otherwise there is no work to
+ * do, and/or scheduler needs to run to update idle vcpu priority.
+ */
+return per_cpu(tasklet_work_to_do, cpu) == (TASKLET_enqueued|TASKLET_scheduled);
+}

[Xen-devel] [PATCH v12 25/34] ARM: vITS: handle MAPD command

2017-06-14 Thread Andre Przywara
The MAPD command maps a device by associating a memory region for
storing ITEs with a certain device ID. Since it features a valid bit,
MAPD also provides the "unmap" functionality, which we handle here as well.
We store the given guest physical address in the device table, and, if
this command comes from Dom0, tell the host ITS driver about this new
mapping, so it can issue the corresponding host MAPD command and create
the required tables. We take care of rolling back actions should one
step fail.
Upon unmapping a device we make sure we clean up all associated
resources and release the memory again.
We use our existing guest memory access function to find the right ITT
entry and store the mapping there (in guest memory).

Signed-off-by: Andre Przywara 
---
 xen/arch/arm/gic-v3-its.c|  17 +
 xen/arch/arm/gic-v3-lpi.c|  17 +
 xen/arch/arm/vgic-v3-its.c   | 142 +++
 xen/include/asm-arm/gic_v3_its.h |   5 ++
 4 files changed, 181 insertions(+)

diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index 38f0840..8864e0b 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -859,6 +859,23 @@ struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
 return get_event_pending_irq(d, vdoorbell_address, vdevid, eventid, NULL);
 }
 
+int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
+ uint32_t vdevid, uint32_t eventid)
+{
+uint32_t host_lpi = INVALID_LPI;
+
+if ( !get_event_pending_irq(d, vdoorbell_address, vdevid, eventid,
+&host_lpi) )
+return -EINVAL;
+
+if ( host_lpi == INVALID_LPI )
+return -EINVAL;
+
+gicv3_lpi_update_host_entry(host_lpi, d->domain_id, INVALID_LPI);
+
+return 0;
+}
+
 /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
 void gicv3_its_dt_init(const struct dt_device_node *node)
 {
diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index dc936fa..c3474f5 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -215,6 +215,23 @@ out:
 irq_exit();
 }
 
+void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
+ uint32_t virt_lpi)
+{
+union host_lpi *hlpip, hlpi;
+
+ASSERT(host_lpi >= LPI_OFFSET);
+
+host_lpi -= LPI_OFFSET;
+
+hlpip = &lpi_data.host_lpis[host_lpi / HOST_LPIS_PER_PAGE][host_lpi % HOST_LPIS_PER_PAGE];
+
+hlpi.virt_lpi = virt_lpi;
+hlpi.dom_id = domain_id;
+
+write_u64_atomic(&hlpip->data, hlpi.data);
+}
+
 static int gicv3_lpi_allocate_pendtable(uint64_t *reg)
 {
 uint64_t val;
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 4552bc9..d236bbe 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -159,6 +159,21 @@ static struct vcpu *get_vcpu_from_collection(struct virt_its *its,
 return its->d->vcpu[vcpu_id];
 }
 
+/* Set the address of an ITT for a given device ID. */
+static int its_set_itt_address(struct virt_its *its, uint32_t devid,
+   paddr_t itt_address, uint32_t nr_bits)
+{
+paddr_t addr = get_baser_phys_addr(its->baser_dev);
+dev_table_entry_t itt_entry = DEV_TABLE_ENTRY(itt_address, nr_bits);
+
+if ( devid >= its->max_devices )
+return -ENOENT;
+
+return vgic_access_guest_memory(its->d,
+addr + devid * sizeof(dev_table_entry_t),
+&itt_entry, sizeof(itt_entry), true);
+}
+
 /*
  * Lookup the address of the Interrupt Translation Table associated with
  * that device ID.
@@ -375,6 +390,130 @@ out_unlock:
 return ret;
 }
 
+/* Must be called with the ITS lock held. */
+static int its_discard_event(struct virt_its *its,
+ uint32_t vdevid, uint32_t vevid)
+{
+struct pending_irq *p;
+unsigned long flags;
+struct vcpu *vcpu;
+uint32_t vlpi;
+
+ASSERT(spin_is_locked(>its_lock));
+
+if ( !read_itte(its, vdevid, vevid, &vcpu, &vlpi) )
+return -ENOENT;
+
+if ( vlpi == INVALID_LPI )
+return -ENOENT;
+
+/*
+ * TODO: This relies on the VCPU being correct in the ITS tables.
+ * This can be fixed by either using a per-IRQ lock or by using
+ * the VCPU ID from the pending_irq instead.
+ */
+spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
+
+/* Remove the pending_irq from the tree. */
+write_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
+p = radix_tree_delete(&its->d->arch.vgic.pend_lpi_tree, vlpi);
+write_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
+
+if ( !p )
+{
+spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
+
+return -ENOENT;
+}
+
+/* Cleanup the pending_irq and disconnect it from the LPI. */
+gic_remove_irq_from_queues(vcpu, p);
+vgic_init_pending_irq(p, INVALID_LPI);
+
+spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);

[Xen-devel] [PATCH v12 28/34] ARM: vITS: handle MOVI command

2017-06-14 Thread Andre Przywara
The MOVI command moves the interrupt affinity from one redistributor
(read: VCPU) to another.
Migration of "live" LPIs is not yet implemented, but we store
the changed affinity in our virtual ITTE and the pending_irq.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/vgic-v3-its.c | 69 ++
 1 file changed, 69 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 2f911dc..07ee1b1 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -651,6 +651,69 @@ out_remove_mapping:
 return ret;
 }
 
+static int its_handle_movi(struct virt_its *its, uint64_t *cmdptr)
+{
+uint32_t devid = its_cmd_get_deviceid(cmdptr);
+uint32_t eventid = its_cmd_get_id(cmdptr);
+uint16_t collid = its_cmd_get_collection(cmdptr);
+unsigned long flags;
+struct pending_irq *p;
+struct vcpu *ovcpu, *nvcpu;
+uint32_t vlpi;
+int ret = -1;
+
+spin_lock(&its->its_lock);
+/* Check for a mapped LPI and get the LPI number. */
+if ( !read_itte(its, devid, eventid, &ovcpu, &vlpi) )
+goto out_unlock;
+
+if ( vlpi == INVALID_LPI )
+goto out_unlock;
+
+/* Check the new collection ID and get the new VCPU pointer */
+nvcpu = get_vcpu_from_collection(its, collid);
+if ( !nvcpu )
+goto out_unlock;
+
+p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
+devid, eventid);
+if ( unlikely(!p) )
+goto out_unlock;
+
+/*
+ * TODO: This relies on the VCPU being correct in the ITS tables.
+ * This can be fixed by either using a per-IRQ lock or by using
+ * the VCPU ID from the pending_irq instead.
+ */
+spin_lock_irqsave(&ovcpu->arch.vgic.lock, flags);
+
+/* Update our cached vcpu_id in the pending_irq. */
+p->lpi_vcpu_id = nvcpu->vcpu_id;
+
+spin_unlock_irqrestore(&ovcpu->arch.vgic.lock, flags);
+
+/*
+ * TODO: Investigate if and how to migrate an already pending LPI. This
+ * is not really critical, as these benign races happen in hardware too
+ * (an affinity change may come too late for a just fired IRQ), but may
+ * simplify the code if we can keep the IRQ's associated VCPU in sync,
+ * so that we don't have to deal with special cases anymore.
+ * Migrating those LPIs is not easy to do at the moment anyway, but should
+ * become easier with the introduction of a per-IRQ lock.
+ */
+
+/* Now store the new collection in the translation table. */
+if ( !write_itte(its, devid, eventid, collid, vlpi) )
+goto out_unlock;
+
+ret = 0;
+
+out_unlock:
+spin_unlock(&its->its_lock);
+
+return ret;
+}
+
 #define ITS_CMD_BUFFER_SIZE(baser)  ((((baser) & 0xff) + 1) << 12)
 #define ITS_CMD_OFFSET(reg) ((reg) & GENMASK(19, 5))
 
@@ -703,6 +766,12 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
 case GITS_CMD_MAPTI:
 ret = its_handle_mapti(its, command);
 break;
+case GITS_CMD_MOVALL:
+gdprintk(XENLOG_G_INFO, "vGITS: ignoring MOVALL command\n");
+break;
+case GITS_CMD_MOVI:
+ret = its_handle_movi(its, command);
+break;
 case GITS_CMD_SYNC:
 /* We handle ITS commands synchronously, so we ignore SYNC. */
 break;
-- 
2.9.0




[Xen-devel] [PATCH v12 27/34] ARM: vITS: handle MAPTI/MAPI command

2017-06-14 Thread Andre Przywara
The MAPTI command associates a DeviceID/EventID pair with an LPI/CPU
pair and actually instantiates LPI interrupts. MAPI is just a variant
of this command, where the LPI ID is the same as the event ID.
We connect the already allocated host LPI to this virtual LPI, so that
any triggering LPI on the host can be quickly forwarded to a guest.
Beside entering the domain and the virtual LPI number in the respective
host LPI entry, we also initialize and add the already allocated
struct pending_irq to our radix tree, so that we can now easily find it
by its virtual LPI number.
We also read the property table to update the enabled bit and the
priority for our new LPI, as we might have missed this during an earlier
INVALL call (which only checks mapped LPIs). But we make sure that the
property table is actually valid, as all redistributors might still
be disabled at this point.
Since write_itte() now sees its first usage, we change the declaration
to static.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/gic-v3-its.c|  27 
 xen/arch/arm/vgic-v3-its.c   | 145 ++-
 xen/include/asm-arm/gic_v3_its.h |   3 +
 3 files changed, 173 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index 8864e0b..3d863cd 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -876,6 +876,33 @@ int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
 return 0;
 }
 
+/*
+ * Connects the event ID for an already assigned device to the given VCPU/vLPI
+ * pair. The corresponding physical LPI is already mapped on the host side
+ * (when assigning the physical device to the guest), so we just connect the
+ * target VCPU/vLPI pair to that interrupt to inject it properly if it fires.
+ * Returns a pointer to the already allocated struct pending_irq that is
+ * meant to be used by that event.
+ */
+struct pending_irq *gicv3_assign_guest_event(struct domain *d,
+ paddr_t vdoorbell_address,
+ uint32_t vdevid, uint32_t eventid,
+ uint32_t virt_lpi)
+{
+struct pending_irq *pirq;
+uint32_t host_lpi = INVALID_LPI;
+
+pirq = get_event_pending_irq(d, vdoorbell_address, vdevid, eventid,
+ &host_lpi);
+
+if ( !pirq )
+return NULL;
+
+gicv3_lpi_update_host_entry(host_lpi, d->domain_id, virt_lpi);
+
+return pirq;
+}
+
 /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
 void gicv3_its_dt_init(const struct dt_device_node *node)
 {
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index d236bbe..2f911dc 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -253,8 +253,8 @@ static bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
  * If vcpu_ptr is provided, returns the VCPU belonging to that collection.
  * Must be called with the ITS lock held.
  */
-bool write_itte(struct virt_its *its, uint32_t devid,
-uint32_t evid, uint32_t collid, uint32_t vlpi)
+static bool write_itte(struct virt_its *its, uint32_t devid,
+   uint32_t evid, uint32_t collid, uint32_t vlpi)
 {
 paddr_t addr;
 struct vits_itte itte;
@@ -390,6 +390,44 @@ out_unlock:
 return ret;
 }
 
+/*
+ * For a given virtual LPI read the enabled bit and priority from the virtual
+ * property table and update the virtual IRQ's state in the given pending_irq.
+ * Must be called with the respective VGIC VCPU lock held.
+ */
+static int update_lpi_property(struct domain *d, struct pending_irq *p)
+{
+paddr_t addr;
+uint8_t property;
+int ret;
+
+/*
+ * If no redistributor has its LPIs enabled yet, we can't access the
+ * property table. In this case we just can't update the properties,
+ * but this should not be an error from an ITS point of view.
+ * The control flow dependency here and a barrier instruction on the
+ * write side make sure we can access these without taking a lock.
+ */
+if ( !d->arch.vgic.rdists_enabled )
+return 0;
+
+addr = d->arch.vgic.rdist_propbase & GENMASK(51, 12);
+
+ret = vgic_access_guest_memory(d, addr + p->irq - LPI_OFFSET,
+   &property, sizeof(property), false);
+if ( ret )
+return ret;
+
+write_atomic(&p->lpi_priority, property & LPI_PROP_PRIO_MASK);
+
+if ( property & LPI_PROP_ENABLED )
+set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
+else
+clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
+
+return 0;
+}
+
 /* Must be called with the ITS lock held. */
 static int its_discard_event(struct virt_its *its,
  uint32_t vdevid, uint32_t vevid)
@@ -514,6 +552,105 @@ static int its_handle_mapd(struct virt_its 

[Xen-devel] [PATCH v12 22/34] ARM: vITS: handle INT command

2017-06-14 Thread Andre Przywara
The INT command sets a given LPI identified by a DeviceID/EventID pair
as pending and thus triggers it to be injected.
As read_itte() now finally has a user, we add the static keyword.

Signed-off-by: Andre Przywara 
Reviewed-by: Julien Grall 
---
 xen/arch/arm/vgic-v3-its.c | 29 +++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 36910aa..8a2a0d2 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -186,8 +186,8 @@ static paddr_t its_get_itte_address(struct virt_its *its,
  * address and puts the result in vcpu_ptr and vlpi_ptr.
  * Must be called with the ITS lock held.
  */
-bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
-   struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
+static bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
+  struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
 {
 paddr_t addr;
 struct vits_itte itte;
@@ -259,6 +259,28 @@ static uint64_t its_cmd_mask_field(uint64_t *its_cmd, 
unsigned int word,
 #define its_cmd_get_validbit(cmd)   its_cmd_mask_field(cmd, 2, 63,  1)
 #define its_cmd_get_ittaddr(cmd)(its_cmd_mask_field(cmd, 2, 8, 44) << 
8)
 
+static int its_handle_int(struct virt_its *its, uint64_t *cmdptr)
+{
+uint32_t devid = its_cmd_get_deviceid(cmdptr);
+uint32_t eventid = its_cmd_get_id(cmdptr);
+struct vcpu *vcpu;
+uint32_t vlpi;
+bool ret;
+
+spin_lock(&its->its_lock);
+ret = read_itte(its, devid, eventid, &vcpu, &vlpi);
+spin_unlock(&its->its_lock);
+if ( !ret )
+return -1;
+
+if ( vlpi == INVALID_LPI )
+return -1;
+
+vgic_vcpu_inject_lpi(its->d, vlpi);
+
+return 0;
+}
+
 #define ITS_CMD_BUFFER_SIZE(baser)  ((((baser) & 0xff) + 1) << 12)
 #define ITS_CMD_OFFSET(reg) ((reg) & GENMASK(19, 5))
 
@@ -295,6 +317,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
 
 switch ( its_cmd_get_command(command) )
 {
+case GITS_CMD_INT:
+ret = its_handle_int(its, command);
+break;
 case GITS_CMD_SYNC:
 /* We handle ITS commands synchronously, so we ignore SYNC. */
 break;
-- 
2.9.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v12 18/34] ARM: vGIC: advertise LPI support

2017-06-14 Thread Andre Przywara
To let a guest know about the availability of virtual LPIs, set the
respective bits in the virtual GIC registers and let a guest control
the LPI enable bit.
Only report the LPI capability if there is at least one ITS emulated
for that guest (which depends on the host having an ITS at the moment).
For Dom0 we report the same number of interrupt identifiers as the
host, whereas DomUs get a number fixed at 10 bits for the moment, which
covers all SPIs. Also we fix a slight inaccuracy here, since the
number of interrupt identifiers specified in GICD_TYPER depends on the
stream interface and is independent of the number of actually wired
SPIs.
This also removes a "TBD" comment, as we now populate the processor
number in the GICR_TYPER register, which will be used by the ITS
emulation later on.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/vgic-v3.c | 83 +-
 1 file changed, 76 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 30981b2..90a2ae3 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -170,8 +170,19 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
 switch ( gicr_reg )
 {
 case VREG32(GICR_CTLR):
-/* We have not implemented LPI's, read zero */
-goto read_as_zero_32;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto read_as_zero_32;
+if ( dabt.size != DABT_WORD ) goto bad_width;
+
+spin_lock_irqsave(&v->arch.vgic.lock, flags);
+*r = vgic_reg32_extract(!!(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED),
+info);
+spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+return 1;
+}
 
 case VREG32(GICR_IIDR):
 if ( dabt.size != DABT_WORD ) goto bad_width;
@@ -183,16 +194,20 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
 uint64_t typer, aff;
 
 if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
-/* TBD: Update processor id in [23:8] when ITS support is added */
 aff = (MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 3) << 56 |
MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 2) << 48 |
MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 1) << 40 |
MPIDR_AFFINITY_LEVEL(v->arch.vmpidr, 0) << 32);
 typer = aff;
+/* We use the VCPU ID as the redistributor ID in bits[23:8] */
+typer |= v->vcpu_id << GICR_TYPER_PROC_NUM_SHIFT;
 
 if ( v->arch.vgic.flags & VGIC_V3_RDIST_LAST )
 typer |= GICR_TYPER_LAST;
 
+if ( v->domain->arch.vgic.has_its )
+typer |= GICR_TYPER_PLPIS;
+
 *r = vgic_reg64_extract(typer, info);
 
 return 1;
@@ -426,6 +441,40 @@ static uint64_t sanitize_pendbaser(uint64_t reg)
 return reg;
 }
 
+static void vgic_vcpu_enable_lpis(struct vcpu *v)
+{
+uint64_t reg = v->domain->arch.vgic.rdist_propbase;
+unsigned int nr_lpis = BIT((reg & 0x1f) + 1);
+
+/* rdists_enabled is protected by the domain lock. */
+ASSERT(spin_is_locked(&v->domain->arch.vgic.lock));
+
+if ( nr_lpis < LPI_OFFSET )
+nr_lpis = 0;
+else
+nr_lpis -= LPI_OFFSET;
+
+if ( !v->domain->arch.vgic.rdists_enabled )
+{
+v->domain->arch.vgic.nr_lpis = nr_lpis;
+/*
+ * Make sure nr_lpis is visible before rdists_enabled.
+ * We read nr_lpis (and rdist_propbase) outside of the lock in
+ * other functions, but guard those accesses by rdists_enabled, so
+ * make sure these are consistent.
+ */
+smp_mb();
+v->domain->arch.vgic.rdists_enabled = true;
+/*
+ * Make sure the per-domain rdists_enabled flag has been set before
+ * enabling this particular redistributor.
+ */
+smp_mb();
+}
+
+v->arch.vgic.flags |= VGIC_V3_LPIS_ENABLED;
+}
+
 static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
   uint32_t gicr_reg,
   register_t r)
@@ -436,8 +485,26 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
 switch ( gicr_reg )
 {
 case VREG32(GICR_CTLR):
-/* LPI's not implemented */
-goto write_ignore_32;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto write_ignore_32;
+if ( dabt.size != DABT_WORD ) goto bad_width;
+
+vgic_lock(v);   /* protects rdists_enabled */
+spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+/* LPIs can only be enabled once, but never disabled again. */
+if ( (r & GICR_CTLR_ENABLE_LPIS) &&
+ !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) )
+vgic_vcpu_enable_lpis(v);
+
+  

[Xen-devel] [PATCH v12 15/34] ARM: vGICv3: handle virtual LPI pending and property tables

2017-06-14 Thread Andre Przywara
Allow a guest to provide the address and size for the memory regions
it has reserved for the GICv3 pending and property tables.
We sanitise the various fields of the respective redistributor
registers.
The MMIO read and write accesses are protected by locks, to avoid any
changing of the property or pending table address while a redistributor
is live and also to protect the non-atomic vgic_reg64_extract() function
on the MMIO read side.

Signed-off-by: Andre Przywara 
Reviewed-by: Julien Grall 
---
 xen/arch/arm/vgic-v3.c   | 164 +++
 xen/include/asm-arm/domain.h |   9 +++
 2 files changed, 161 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 0b4669f..c53fa9c 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -233,12 +233,29 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
 goto read_reserved;
 
 case VREG64(GICR_PROPBASER):
-/* LPI's not implemented */
-goto read_as_zero_64;
+if ( !v->domain->arch.vgic.has_its )
+goto read_as_zero_64;
+if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
+
+vgic_lock(v);
+*r = vgic_reg64_extract(v->domain->arch.vgic.rdist_propbase, info);
+vgic_unlock(v);
+return 1;
 
 case VREG64(GICR_PENDBASER):
-/* LPI's not implemented */
-goto read_as_zero_64;
+{
+unsigned long flags;
+
+if ( !v->domain->arch.vgic.has_its )
+goto read_as_zero_64;
+if ( !vgic_reg64_check_access(dabt) ) goto bad_width;
+
+spin_lock_irqsave(&v->arch.vgic.lock, flags);
+*r = vgic_reg64_extract(v->arch.vgic.rdist_pendbase, info);
+*r &= ~GICR_PENDBASER_PTZ;   /* WO, reads as 0 */
+spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+return 1;
+}
 
 case 0x0080:
 goto read_reserved;
@@ -335,11 +352,95 @@ read_unknown:
 return 1;
 }
 
+static uint64_t vgic_sanitise_field(uint64_t reg, uint64_t field_mask,
+int field_shift,
+uint64_t (*sanitise_fn)(uint64_t))
+{
+uint64_t field = (reg & field_mask) >> field_shift;
+
+field = sanitise_fn(field) << field_shift;
+
+return (reg & ~field_mask) | field;
+}
+
+/* We want to avoid outer shareable. */
+static uint64_t vgic_sanitise_shareability(uint64_t field)
+{
+switch ( field )
+{
+case GIC_BASER_OuterShareable:
+return GIC_BASER_InnerShareable;
+default:
+return field;
+}
+}
+
+/* Avoid any inner non-cacheable mapping. */
+static uint64_t vgic_sanitise_inner_cacheability(uint64_t field)
+{
+switch ( field )
+{
+case GIC_BASER_CACHE_nCnB:
+case GIC_BASER_CACHE_nC:
+return GIC_BASER_CACHE_RaWb;
+default:
+return field;
+}
+}
+
+/* Non-cacheable or same-as-inner are OK. */
+static uint64_t vgic_sanitise_outer_cacheability(uint64_t field)
+{
+switch ( field )
+{
+case GIC_BASER_CACHE_SameAsInner:
+case GIC_BASER_CACHE_nC:
+return field;
+default:
+return GIC_BASER_CACHE_nC;
+}
+}
+
+static uint64_t sanitize_propbaser(uint64_t reg)
+{
+reg = vgic_sanitise_field(reg, GICR_PROPBASER_SHAREABILITY_MASK,
+  GICR_PROPBASER_SHAREABILITY_SHIFT,
+  vgic_sanitise_shareability);
+reg = vgic_sanitise_field(reg, GICR_PROPBASER_INNER_CACHEABILITY_MASK,
+  GICR_PROPBASER_INNER_CACHEABILITY_SHIFT,
+  vgic_sanitise_inner_cacheability);
+reg = vgic_sanitise_field(reg, GICR_PROPBASER_OUTER_CACHEABILITY_MASK,
+  GICR_PROPBASER_OUTER_CACHEABILITY_SHIFT,
+  vgic_sanitise_outer_cacheability);
+
+reg &= ~GICR_PROPBASER_RES0_MASK;
+
+return reg;
+}
+
+static uint64_t sanitize_pendbaser(uint64_t reg)
+{
+reg = vgic_sanitise_field(reg, GICR_PENDBASER_SHAREABILITY_MASK,
+  GICR_PENDBASER_SHAREABILITY_SHIFT,
+  vgic_sanitise_shareability);
+reg = vgic_sanitise_field(reg, GICR_PENDBASER_INNER_CACHEABILITY_MASK,
+  GICR_PENDBASER_INNER_CACHEABILITY_SHIFT,
+  vgic_sanitise_inner_cacheability);
+reg = vgic_sanitise_field(reg, GICR_PENDBASER_OUTER_CACHEABILITY_MASK,
+  GICR_PENDBASER_OUTER_CACHEABILITY_SHIFT,
+  vgic_sanitise_outer_cacheability);
+
+reg &= ~GICR_PENDBASER_RES0_MASK;
+
+return reg;
+}
+
 static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
   uint32_t gicr_reg,
   register_t r)
 {
 struct hsr_dabt dabt = info->dabt;
+

[Xen-devel] [PATCH v12 26/34] ARM: GICv3: handle unmapped LPIs

2017-06-14 Thread Andre Przywara
When LPIs get unmapped by a guest, they might still be in some LR of
some VCPU. Nevertheless we remove the corresponding pending_irq
(possibly freeing it), and detect this case (irq_to_pending() returns
NULL) when the LR gets cleaned up later.
However a *new* LPI may get mapped with the same number while the old
LPI is *still* in some LR. To avoid getting the wrong state, we mark
every newly mapped LPI as PRISTINE, which means: has never been in an
LR before. If we detect the LPI in an LR anyway, it must have been an
older one, which we can simply retire.
Before inserting such a PRISTINE LPI into an LR, we must make sure that
it's not already in another LR, as the architecture forbids two
interrupts with the same virtual IRQ number on one CPU.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/gic.c | 51 ++
 xen/include/asm-arm/vgic.h |  6 ++
 2 files changed, 53 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 9d473d7..288e740 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -375,6 +375,8 @@ static inline void gic_set_lr(int lr, struct pending_irq *p,
 {
 ASSERT(!local_irq_is_enabled());
 
+clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, >status);
+
 gic_hw_ops->update_lr(lr, p, state);
 
 set_bit(GIC_IRQ_GUEST_VISIBLE, >status);
@@ -440,6 +442,40 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
 #endif
 }
 
+/*
+ * Find an unused LR to insert an IRQ into, starting with the LR given
+ * by @lr. If this new interrupt is a PRISTINE LPI, scan the other LRs to
+ * avoid inserting the same IRQ twice. This situation can occur when an
+ * event gets discarded while the LPI is in an LR, and a new LPI with the
+ * same number gets mapped quickly afterwards.
+ */
+static unsigned int gic_find_unused_lr(struct vcpu *v,
+   struct pending_irq *p,
+   unsigned int lr)
+{
+unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
+unsigned long *lr_mask = (unsigned long *) &this_cpu(lr_mask);
+struct gic_lr lr_val;
+
+ASSERT(spin_is_locked(&v->arch.vgic.lock));
+
+if ( unlikely(test_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status)) )
+{
+unsigned int used_lr;
+
+for_each_set_bit(used_lr, lr_mask, nr_lrs)
+{
+gic_hw_ops->read_lr(used_lr, &lr_val);
+if ( lr_val.virq == p->irq )
+return used_lr;
+}
+}
+
+lr = find_next_zero_bit(lr_mask, nr_lrs, lr);
+
+return lr;
+}
+
 void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
 unsigned int priority)
 {
@@ -455,7 +491,8 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
 
 if ( v == current && list_empty(>arch.vgic.lr_pending) )
 {
-i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
+i = gic_find_unused_lr(v, p, 0);
+
 if (i < nr_lrs) {
 set_bit(i, &this_cpu(lr_mask));
 gic_set_lr(i, p, GICH_LR_PENDING);
@@ -478,8 +515,14 @@ static void gic_update_one_lr(struct vcpu *v, int i)
 gic_hw_ops->read_lr(i, &lr_val);
 irq = lr_val.virq;
 p = irq_to_pending(v, irq);
-/* An LPI might have been unmapped, in which case we just clean up here. */
-if ( unlikely(!p) )
+/*
+ * An LPI might have been unmapped, in which case we just clean up here.
+ * If that LPI is marked as PRISTINE, the information in the LR is bogus,
+ * as it belongs to a previous, already unmapped LPI. So we discard it
+ * here as well.
+ */
+if ( unlikely(!p ||
+  test_and_clear_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status)) )
 {
 ASSERT(is_lpi(irq));
 
@@ -589,7 +632,7 @@ static void gic_restore_pending_irqs(struct vcpu *v)
 inflight_r = &v->arch.vgic.inflight_irqs;
 list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
 {
-lr = find_next_zero_bit(&this_cpu(lr_mask), nr_lrs, lr);
+lr = gic_find_unused_lr(v, p, lr);
 if ( lr >= nr_lrs )
 {
 /* No more free LRs: find a lower priority irq to evict */
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 6a23249..9ff713c 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -60,12 +60,18 @@ struct pending_irq
  * vcpu while it is still inflight and on an GICH_LR register on the
  * old vcpu.
  *
+ * GIC_IRQ_GUEST_PRISTINE_LPI: the IRQ is a newly mapped LPI, which
+ * has never been in an LR before. This means that any trace of an
+ * LPI with the same number in an LR must be from an older LPI, which
+ * has been unmapped before.
+ *
  */
 #define GIC_IRQ_GUEST_QUEUED   0
 #define GIC_IRQ_GUEST_ACTIVE   1
 #define GIC_IRQ_GUEST_VISIBLE  2
 #define GIC_IRQ_GUEST_ENABLED  3
 #define GIC_IRQ_GUEST_MIGRATING   4
#define GIC_IRQ_GUEST_PRISTINE_LPI  5

[Xen-devel] [PATCH] xen: idle_loop: either deal with tasklets or go idle

2017-06-14 Thread Dario Faggioli
Hi,

following up on this:

 https://lists.xen.org/archives/html/xen-devel/2017-06/msg01260.html

I did make a patch that moves do_tasklet() up a bit, within idle_loop().

While there, I did a bit more than that... let's see what you guys think. :-D

I've verified that this builds on ARM too, but have not run it (while I did
that, for x86):

 https://travis-ci.org/fdario/xen/builds/242888986

Thanks and Regards,
Dario
---
Dario Faggioli (1):
  xen: idle_loop: either deal with tasklets or go idle

 xen/arch/arm/domain.c |   21 ++---
 xen/arch/x86/domain.c |   12 +---
 xen/common/tasklet.c  |   10 +-
 xen/include/xen/tasklet.h |   12 +++-
 4 files changed, 35 insertions(+), 20 deletions(-)
--
<> (Raistlin Majere)
-
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



[Xen-devel] [PATCH v12 29/34] ARM: vITS: handle DISCARD command

2017-06-14 Thread Andre Przywara
The DISCARD command drops the connection between a DeviceID/EventID
and an LPI/collection pair.
We mark the respective structure entries as not allocated and make
sure that any queued IRQs are removed.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/vgic-v3-its.c | 26 ++
 1 file changed, 26 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 07ee1b1..ad22bde 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -714,6 +714,29 @@ out_unlock:
 return ret;
 }
 
+static int its_handle_discard(struct virt_its *its, uint64_t *cmdptr)
+{
+uint32_t devid = its_cmd_get_deviceid(cmdptr);
+uint32_t eventid = its_cmd_get_id(cmdptr);
+int ret;
+
+spin_lock(&its->its_lock);
+
+/* Remove from the radix tree and remove the host entry. */
+ret = its_discard_event(its, devid, eventid);
+if ( ret )
+goto out_unlock;
+
+/* Remove from the guest's ITTE. */
+if ( !write_itte(its, devid, eventid, UNMAPPED_COLLECTION, INVALID_LPI) )
+ret = -1;
+
+out_unlock:
+spin_unlock(&its->its_lock);
+
+return ret;
+}
+
 #define ITS_CMD_BUFFER_SIZE(baser)  ((((baser) & 0xff) + 1) << 12)
 #define ITS_CMD_OFFSET(reg) ((reg) & GENMASK(19, 5))
 
@@ -753,6 +776,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
 case GITS_CMD_CLEAR:
 ret = its_handle_clear(its, command);
 break;
+case GITS_CMD_DISCARD:
+ret = its_handle_discard(its, command);
+break;
 case GITS_CMD_INT:
 ret = its_handle_int(its, command);
 break;
-- 
2.9.0




[Xen-devel] [PATCH v12 31/34] ARM: vITS: handle INVALL command

2017-06-14 Thread Andre Przywara
The INVALL command instructs an ITS to invalidate the configuration
data for all LPIs associated with a given redistributor (read: VCPU).
This is nasty to emulate exactly with our architecture, so we just
iterate over all mapped LPIs and filter for those from that particular
VCPU.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/vgic-v3-its.c | 79 ++
 1 file changed, 79 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 60cb807..f853987 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -504,6 +504,82 @@ out_unlock_its:
 return ret;
 }
 
+/*
+ * INVALL updates the per-LPI configuration status for every LPI mapped to
+ * a particular redistributor.
+ * We iterate over all mapped LPIs in our radix tree and update those.
+ */
+static int its_handle_invall(struct virt_its *its, uint64_t *cmdptr)
+{
+uint32_t collid = its_cmd_get_collection(cmdptr);
+struct vcpu *vcpu;
+struct pending_irq *pirqs[16];
+uint64_t vlpi = 0;  /* 64-bit to catch overflows */
+unsigned int nr_lpis, i;
+unsigned long flags;
+int ret = 0;
+
+/*
+ * As this implementation walks over all mapped LPIs, it might take
+ * too long for a real guest, so we might want to revisit this
+ * implementation for DomUs.
+ * However this command is very rare, also we don't expect many
+ * LPIs to be actually mapped, so it's fine for Dom0 to use.
+ */
+ASSERT(is_hardware_domain(its->d));
+
+/*
+ * If no redistributor has its LPIs enabled yet, we can't access the
+ * property table, so there is no point in executing this command.
+ * The control flow dependency here and a barrier instruction on the
+ * write side make sure we can access these without taking a lock.
+ */
+if ( !its->d->arch.vgic.rdists_enabled )
+return 0;
+
+spin_lock(&its->its_lock);
+vcpu = get_vcpu_from_collection(its, collid);
+spin_unlock(&its->its_lock);
+
+spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
+read_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
+
+do
+{
+int err;
+
+nr_lpis = radix_tree_gang_lookup(&its->d->arch.vgic.pend_lpi_tree,
+ (void **)pirqs, vlpi,
+ ARRAY_SIZE(pirqs));
+
+for ( i = 0; i < nr_lpis; i++ )
+{
+/* We only care about LPIs on our VCPU. */
+if ( pirqs[i]->lpi_vcpu_id != vcpu->vcpu_id )
+continue;
+
+vlpi = pirqs[i]->irq;
+/* If that fails for a single LPI, carry on to handle the rest. */
+err = update_lpi_property(its->d, pirqs[i]);
+if ( !err )
+update_lpi_vgic_status(vcpu, pirqs[i]);
+else
+ret = err;
+}
+/*
+ * Loop over the next gang of pending_irqs until we reach the end of
+ * a (fully populated) tree or the lookup function returns fewer LPIs
+ * than it has been asked for.
+ */
+} while ( (++vlpi < its->d->arch.vgic.nr_lpis) &&
+  (nr_lpis == ARRAY_SIZE(pirqs)) );
+
+read_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
+spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
+
+return ret;
+}
+
 /* Must be called with the ITS lock held. */
 static int its_discard_event(struct virt_its *its,
  uint32_t vdevid, uint32_t vevid)
@@ -861,6 +937,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
 case GITS_CMD_INV:
 ret = its_handle_inv(its, command);
 break;
+case GITS_CMD_INVALL:
+ret = its_handle_invall(its, command);
+break;
 case GITS_CMD_MAPC:
 ret = its_handle_mapc(its, command);
 break;
-- 
2.9.0




[Xen-devel] [PATCH v12 30/34] ARM: vITS: handle INV command

2017-06-14 Thread Andre Przywara
The INV command instructs the ITS to update the configuration data for
a given LPI by re-reading its entry from the property table.
We don't need to care so much about the priority value, but enabling
or disabling an LPI has some effect: We remove or push virtual LPIs
to their VCPUs, also check the virtual pending bit if an LPI gets enabled.

Signed-off-by: Andre Przywara 
Reviewed-by: Stefano Stabellini 
---
 xen/arch/arm/vgic-v3-its.c | 79 ++
 1 file changed, 79 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index ad22bde..60cb807 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -428,6 +428,82 @@ static int update_lpi_property(struct domain *d, struct pending_irq *p)
 return 0;
 }
 
+/*
+ * Checks whether an LPI that got enabled or disabled needs to change
+ * something in the VGIC (added or removed from the LR or queues).
+ * We don't disable the underlying physical LPI, because this requires
+ * queueing a host LPI command, which we can't afford to do on behalf
+ * of a guest.
+ * Must be called with the VCPU VGIC lock held.
+ */
+static void update_lpi_vgic_status(struct vcpu *v, struct pending_irq *p)
+{
+ASSERT(spin_is_locked(>arch.vgic.lock));
+
+if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
+{
+if ( !list_empty(&p->inflight) &&
+ !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+gic_raise_guest_irq(v, p->irq, p->lpi_priority);
+}
+else
+gic_remove_from_lr_pending(v, p);
+}
+
+static int its_handle_inv(struct virt_its *its, uint64_t *cmdptr)
+{
+struct domain *d = its->d;
+uint32_t devid = its_cmd_get_deviceid(cmdptr);
+uint32_t eventid = its_cmd_get_id(cmdptr);
+struct pending_irq *p;
+unsigned long flags;
+struct vcpu *vcpu;
+uint32_t vlpi;
+int ret = -1;
+
+/*
+ * If no redistributor has its LPIs enabled yet, we can't access the
+ * property table, so there is no point in executing this command.
+ * The control flow dependency here and a barrier instruction on the
+ * write side make sure we can access these without taking a lock.
+ */
+if ( !d->arch.vgic.rdists_enabled )
+return 0;
+
+spin_lock(&its->its_lock);
+
+/* Translate the event into a vCPU/vLPI pair. */
+if ( !read_itte(its, devid, eventid, &vcpu, &vlpi) )
+goto out_unlock_its;
+
+if ( vlpi == INVALID_LPI )
+goto out_unlock_its;
+
+p = gicv3_its_get_event_pending_irq(d, its->doorbell_address,
+devid, eventid);
+if ( unlikely(!p) )
+goto out_unlock_its;
+
+spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
+
+/* Read the property table and update our cached status. */
+if ( update_lpi_property(d, p) )
+goto out_unlock;
+
+/* Check whether the LPI needs to go on a VCPU. */
+update_lpi_vgic_status(vcpu, p);
+
+ret = 0;
+
+out_unlock:
+spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
+
+out_unlock_its:
+spin_unlock(&its->its_lock);
+
+return ret;
+}
+
 /* Must be called with the ITS lock held. */
 static int its_discard_event(struct virt_its *its,
  uint32_t vdevid, uint32_t vevid)
@@ -782,6 +858,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
 case GITS_CMD_INT:
 ret = its_handle_int(its, command);
 break;
+case GITS_CMD_INV:
+ret = its_handle_inv(its, command);
+break;
 case GITS_CMD_MAPC:
 ret = its_handle_mapc(its, command);
 break;
-- 
2.9.0




[Xen-devel] [PATCH v12 34/34] ARM: vITS: create ITS subnodes for Dom0 DT

2017-06-14 Thread Andre Przywara
Dom0 expects all ITSes in the system to be propagated to its device
tree to be able to use MSIs.
Create Dom0 DT nodes for each hardware ITS, keeping the register frame
address the same, as the doorbell address that the Dom0 drivers program
into the BARs has to match the hardware.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/gic-v3-its.c| 73 
 xen/arch/arm/gic-v3.c|  4 ++-
 xen/include/asm-arm/gic_v3_its.h | 12 +++
 3 files changed, 88 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index 3d863cd..2d36030 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -20,6 +20,7 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -903,6 +904,78 @@ struct pending_irq *gicv3_assign_guest_event(struct domain *d,
 return pirq;
 }
 
+/*
+ * Create the respective guest DT nodes from a list of host ITSes.
+ * This copies the reg property, so the guest sees the ITS at the same address
+ * as the host.
+ */
+int gicv3_its_make_hwdom_dt_nodes(const struct domain *d,
+  const struct dt_device_node *gic,
+  void *fdt)
+{
+uint32_t len;
+int res;
+const void *prop = NULL;
+const struct dt_device_node *its = NULL;
+const struct host_its *its_data;
+
+if ( list_empty(_its_list) )
+return 0;
+
+/* The sub-nodes require the ranges property */
+prop = dt_get_property(gic, "ranges", &len);
+if ( !prop )
+{
+printk(XENLOG_ERR "Can't find ranges property for the gic node\n");
+return -FDT_ERR_XEN(ENOENT);
+}
+
+res = fdt_property(fdt, "ranges", prop, len);
+if ( res )
+return res;
+
+list_for_each_entry(its_data, _its_list, entry)
+{
+its = its_data->dt_node;
+
+res = fdt_begin_node(fdt, its->name);
+if ( res )
+return res;
+
+res = fdt_property_string(fdt, "compatible", "arm,gic-v3-its");
+if ( res )
+return res;
+
+res = fdt_property(fdt, "msi-controller", NULL, 0);
+if ( res )
+return res;
+
+if ( its->phandle )
+{
+res = fdt_property_cell(fdt, "phandle", its->phandle);
+if ( res )
+return res;
+}
+
+/* Use the same reg regions as the ITS node in host DTB. */
+prop = dt_get_property(its, "reg", &len);
+if ( !prop )
+{
+printk(XENLOG_ERR "GICv3: Can't find ITS reg property.\n");
+res = -FDT_ERR_XEN(ENOENT);
+return res;
+}
+
+res = fdt_property(fdt, "reg", prop, len);
+if ( res )
+return res;
+
+fdt_end_node(fdt);
+}
+
+return res;
+}
+
 /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
 void gicv3_its_dt_init(const struct dt_device_node *node)
 {
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index d539d6c..c927306 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1172,8 +1172,10 @@ static int gicv3_make_hwdom_dt_node(const struct domain *d,
 
 res = fdt_property(fdt, "reg", new_cells, len);
 xfree(new_cells);
+if ( res )
+return res;
 
-return res;
+return gicv3_its_make_hwdom_dt_nodes(d, gic, fdt);
 }
 
 static const hw_irq_controller gicv3_host_irq_type = {
diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
index 459b6fe..1fac1c7 100644
--- a/xen/include/asm-arm/gic_v3_its.h
+++ b/xen/include/asm-arm/gic_v3_its.h
@@ -158,6 +158,11 @@ int gicv3_its_setup_collection(unsigned int cpu);
 int vgic_v3_its_init_domain(struct domain *d);
 void vgic_v3_its_free_domain(struct domain *d);
 
+/* Create the appropriate DT nodes for a hardware domain. */
+int gicv3_its_make_hwdom_dt_nodes(const struct domain *d,
+  const struct dt_device_node *gic,
+  void *fdt);
+
 /*
  * Map a device on the host by allocating an ITT on the host (ITS).
  * "nr_event" specifies how many events (interrupts) this device will need.
@@ -242,6 +247,13 @@ static inline void vgic_v3_its_free_domain(struct domain *d)
 {
 }
 
+static inline int gicv3_its_make_hwdom_dt_nodes(const struct domain *d,
+const struct dt_device_node *gic,
+void *fdt)
+{
+return 0;
+}
+
 #endif /* CONFIG_HAS_ITS */
 
 #endif
-- 
2.9.0




[Xen-devel] [PATCH v12 24/34] ARM: vITS: handle CLEAR command

2017-06-14 Thread Andre Przywara
This introduces the ITS command handler for the CLEAR command, which
clears the pending state of an LPI.
This removes a not-yet injected, but already queued IRQ from a VCPU.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/vgic-v3-its.c | 55 ++
 1 file changed, 55 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 14cb1f0..4552bc9 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -52,6 +52,7 @@
  */
 struct virt_its {
 struct domain *d;
+paddr_t doorbell_address;
 unsigned int devid_bits;
 unsigned int evid_bits;
 spinlock_t vcmd_lock;   /* Protects the virtual command buffer, which */
@@ -323,6 +324,57 @@ static int its_handle_mapc(struct virt_its *its, uint64_t 
*cmdptr)
 return 0;
 }
 
+/*
+ * CLEAR removes the pending state from an LPI.
+ */
+static int its_handle_clear(struct virt_its *its, uint64_t *cmdptr)
+{
+uint32_t devid = its_cmd_get_deviceid(cmdptr);
+uint32_t eventid = its_cmd_get_id(cmdptr);
+struct pending_irq *p;
+struct vcpu *vcpu;
+uint32_t vlpi;
+unsigned long flags;
+int ret = -1;
+
+spin_lock(&its->its_lock);
+
+/* Translate the DevID/EvID pair into a vCPU/vLPI pair. */
+if ( !read_itte(its, devid, eventid, &vcpu, &vlpi) )
+goto out_unlock;
+
+p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
+devid, eventid);
+/* Protect against an invalid LPI number. */
+if ( unlikely(!p) )
+goto out_unlock;
+
+/*
+ * TODO: This relies on the VCPU being correct in the ITS tables.
+ * This can be fixed by either using a per-IRQ lock or by using
+ * the VCPU ID from the pending_irq instead.
+ */
+spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
+
+/*
+ * If the LPI is already visible on the guest, it is too late to
+ * clear the pending state. However this is a benign race that can
+ * happen on real hardware, too: If the LPI has already been forwarded
+ * to a CPU interface, a CLEAR request reaching the redistributor has
+ * no effect on that LPI anymore. Since LPIs are edge triggered and
+ * have no active state, we don't need to care about this here.
+ */
+if ( !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+gic_remove_irq_from_queues(vcpu, p);
+
+spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);
+ret = 0;
+
+out_unlock:
+spin_unlock(&its->its_lock);
+
+return ret;
+}
+
 #define ITS_CMD_BUFFER_SIZE(baser)  ((((baser) & 0xff) + 1) << 12)
 #define ITS_CMD_OFFSET(reg) ((reg) & GENMASK(19, 5))
 
@@ -359,6 +411,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
 
 switch ( its_cmd_get_command(command) )
 {
+case GITS_CMD_CLEAR:
+ret = its_handle_clear(its, command);
+break;
 case GITS_CMD_INT:
 ret = its_handle_int(its, command);
 break;
-- 
2.9.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH v12 14/34] ARM: GICv3: forward pending LPIs to guests

2017-06-14 Thread Andre Przywara
Upon receiving an LPI on the host, we need to find the right VCPU and
virtual IRQ number to get this IRQ injected.
Iterate our two-level LPI table to find the domain ID and the virtual
LPI number quickly when the host takes an LPI. We then look up the
right VCPU in the struct pending_irq.
We use the existing injection function to let the GIC emulation deal
with this interrupt.
This introduces a do_LPI() as a hardware gic_ops.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/gic-v2.c|  7 
 xen/arch/arm/gic-v3-lpi.c| 79 
 xen/arch/arm/gic-v3.c|  1 +
 xen/arch/arm/gic.c   |  8 +++-
 xen/include/asm-arm/domain.h |  3 +-
 xen/include/asm-arm/gic.h|  2 +
 xen/include/asm-arm/gic_v3_its.h | 10 +
 7 files changed, 108 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 270a136..ffbe47c 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -1217,6 +1217,12 @@ static int __init gicv2_init(void)
 return 0;
 }
 
+static void gicv2_do_LPI(unsigned int lpi)
+{
+/* No LPIs in a GICv2 */
+BUG();
+}
+
 const static struct gic_hw_operations gicv2_ops = {
 .info= &gicv2_info,
 .init= gicv2_init,
@@ -1244,6 +1250,7 @@ const static struct gic_hw_operations gicv2_ops = {
 .make_hwdom_madt = gicv2_make_hwdom_madt,
 .map_hwdom_extra_mappings = gicv2_map_hwdown_extra_mappings,
 .iomem_deny_access   = gicv2_iomem_deny_access,
+.do_LPI  = gicv2_do_LPI,
 };
 
 /* Set up the GIC */
diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index dbaf45a..dc936fa 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -136,6 +136,85 @@ uint64_t gicv3_get_redist_address(unsigned int cpu, bool use_pta)
 return per_cpu(lpi_redist, cpu).redist_id << 16;
 }
 
+void vgic_vcpu_inject_lpi(struct domain *d, unsigned int virq)
+{
+/*
+ * TODO: this assumes that the struct pending_irq stays valid all of
+ * the time. We cannot properly protect this with the current locking
+ * scheme, but the future per-IRQ lock will solve this problem.
+ */
+struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
+unsigned int vcpu_id;
+
+if ( !p )
+return;
+
+vcpu_id = ACCESS_ONCE(p->lpi_vcpu_id);
+if ( vcpu_id >= d->max_vcpus )
+  return;
+
+vgic_vcpu_inject_irq(d->vcpu[vcpu_id], virq);
+}
+
+/*
+ * Handle incoming LPIs, which are a bit special, because they are potentially
+ * numerous and also only get injected into guests. Treat them specially here,
+ * by just looking up their target vCPU and virtual LPI number and handing
+ * them over to the injection function.
+ * Please note that LPIs are edge-triggered only and have no active state,
+ * so spurious interrupts on the host side are no issue (we can just ignore
+ * them).
+ * Also a guest cannot expect that interrupts which haven't been fully
+ * configured yet will reach the CPU, so we don't need to care about this
+ * special case.
+ */
+void gicv3_do_LPI(unsigned int lpi)
+{
+struct domain *d;
+union host_lpi *hlpip, hlpi;
+
+irq_enter();
+
+/* EOI the LPI already. */
+WRITE_SYSREG32(lpi, ICC_EOIR1_EL1);
+
+/* Find out if a guest mapped something to this physical LPI. */
+hlpip = gic_get_host_lpi(lpi);
+if ( !hlpip )
+goto out;
+
+hlpi.data = read_u64_atomic(&hlpip->data);
+
+/*
+ * Unmapped events are marked with an invalid LPI ID. We can safely
+ * ignore them, as they have no further state and no-one can expect
+ * to see them if they have not been mapped.
+ */
+if ( hlpi.virt_lpi == INVALID_LPI )
+goto out;
+
+d = rcu_lock_domain_by_id(hlpi.dom_id);
+if ( !d )
+goto out;
+
+/*
+ * TODO: Investigate what to do here for potential interrupt storms.
+ * As we keep all host LPIs enabled, for disabling LPIs we would need
+ * to queue a ITS host command, which we avoid so far during a guest's
+ * runtime. Also re-enabling would trigger a host command upon the
+ * guest sending a command, which could be an attack vector for
+ * hogging the host command queue.
+ * See the thread around here for some background:
+ * https://lists.xen.org/archives/html/xen-devel/2016-12/msg3.html
+ */
+vgic_vcpu_inject_lpi(d, hlpi.virt_lpi);
+
+rcu_unlock_domain(d);
+
+out:
+irq_exit();
+}
+
 static int gicv3_lpi_allocate_pendtable(uint64_t *reg)
 {
 uint64_t val;
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index fc3614e..d539d6c 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1692,6 +1692,7 @@ static const struct gic_hw_operations gicv3_ops = {
 .make_hwdom_dt_node  = gicv3_make_hwdom_dt_node,
 .make_hwdom_madt = 

[Xen-devel] [PATCH v12 21/34] ARM: vITS: provide access to struct pending_irq

2017-06-14 Thread Andre Przywara
For each device we allocate one struct pending_irq for each virtual
event (MSI).
Provide a helper function which returns the pointer to the appropriate
struct, to be able to find the right struct when given a virtual
deviceID/eventID pair.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/gic-v3-its.c| 59 
 xen/include/asm-arm/gic_v3_its.h |  4 +++
 2 files changed, 63 insertions(+)

diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index aebc257..38f0840 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -800,6 +800,65 @@ out:
 return ret;
 }
 
+/* Must be called with the its_device_lock held. */
+static struct its_device *get_its_device(struct domain *d, paddr_t vdoorbell,
+ uint32_t vdevid)
+{
+struct rb_node *node = d->arch.vgic.its_devices.rb_node;
+struct its_device *dev;
+
+ASSERT(spin_is_locked(&d->arch.vgic.its_devices_lock));
+
+while (node)
+{
+int cmp;
+
+dev = rb_entry(node, struct its_device, rbnode);
+cmp = compare_its_guest_devices(dev, vdoorbell, vdevid);
+
+if ( !cmp )
+return dev;
+
+if ( cmp > 0 )
+node = node->rb_left;
+else
+node = node->rb_right;
+}
+
+return NULL;
+}
+
+static struct pending_irq *get_event_pending_irq(struct domain *d,
+ paddr_t vdoorbell_address,
+ uint32_t vdevid,
+ uint32_t eventid,
+ uint32_t *host_lpi)
+{
+struct its_device *dev;
+struct pending_irq *pirq = NULL;
+
+spin_lock(&d->arch.vgic.its_devices_lock);
+dev = get_its_device(d, vdoorbell_address, vdevid);
+if ( dev && eventid < dev->eventids )
+{
+pirq = &dev->pend_irqs[eventid];
+if ( host_lpi )
+*host_lpi = dev->host_lpi_blocks[eventid / LPI_BLOCK] +
+(eventid % LPI_BLOCK);
+}
+spin_unlock(&d->arch.vgic.its_devices_lock);
+
+return pirq;
+}
+
+struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
+paddr_t vdoorbell_address,
+uint32_t vdevid,
+uint32_t eventid)
+{
+return get_event_pending_irq(d, vdoorbell_address, vdevid, eventid, NULL);
+}
+
 /* Scan the DT for any ITS nodes and create a list of host ITSes out of it. */
 void gicv3_its_dt_init(const struct dt_device_node *node)
 {
diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
index 5db7d04..be67726 100644
--- a/xen/include/asm-arm/gic_v3_its.h
+++ b/xen/include/asm-arm/gic_v3_its.h
@@ -171,6 +171,10 @@ void gicv3_free_host_lpi_block(uint32_t first_lpi);
 
 void vgic_vcpu_inject_lpi(struct domain *d, unsigned int virq);
 
+struct pending_irq *gicv3_its_get_event_pending_irq(struct domain *d,
+paddr_t vdoorbell_address,
+uint32_t vdevid,
+uint32_t eventid);
 #else
 
 static inline void gicv3_its_dt_init(const struct dt_device_node *node)
-- 
2.9.0




[Xen-devel] [PATCH v12 23/34] ARM: vITS: handle MAPC command

2017-06-14 Thread Andre Przywara
The MAPC command associates a given collection ID with a given
redistributor, thus mapping collections to VCPUs.
We just store the vcpu_id in the collection table for that.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/vgic-v3-its.c | 45 +
 1 file changed, 45 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 8a2a0d2..14cb1f0 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -115,6 +115,25 @@ static paddr_t get_baser_phys_addr(uint64_t reg)
 }
 
 /* Must be called with the ITS lock held. */
+static int its_set_collection(struct virt_its *its, uint16_t collid,
+  coll_table_entry_t vcpu_id)
+{
+paddr_t addr = get_baser_phys_addr(its->baser_coll);
+
+/* The collection table entry must be able to store a VCPU ID. */
+BUILD_BUG_ON(BIT(sizeof(coll_table_entry_t) * 8) < MAX_VIRT_CPUS);
+
+ASSERT(spin_is_locked(&its->its_lock));
+
+if ( collid >= its->max_collections )
+return -ENOENT;
+
+return vgic_access_guest_memory(its->d,
+addr + collid * sizeof(coll_table_entry_t),
+&vcpu_id, sizeof(vcpu_id), true);
+}
+
+/* Must be called with the ITS lock held. */
 static struct vcpu *get_vcpu_from_collection(struct virt_its *its,
  uint16_t collid)
 {
@@ -281,6 +300,29 @@ static int its_handle_int(struct virt_its *its, uint64_t *cmdptr)
 return 0;
 }
 
+static int its_handle_mapc(struct virt_its *its, uint64_t *cmdptr)
+{
+uint32_t collid = its_cmd_get_collection(cmdptr);
+uint64_t rdbase = its_cmd_mask_field(cmdptr, 2, 16, 44);
+
+if ( collid >= its->max_collections )
+return -1;
+
+if ( rdbase >= its->d->max_vcpus )
+return -1;
+
+spin_lock(&its->its_lock);
+
+if ( its_cmd_get_validbit(cmdptr) )
+its_set_collection(its, collid, rdbase);
+else
+its_set_collection(its, collid, UNMAPPED_COLLECTION);
+
+spin_unlock(&its->its_lock);
+
+return 0;
+}
+
 #define ITS_CMD_BUFFER_SIZE(baser)  ((((baser) & 0xff) + 1) << 12)
 #define ITS_CMD_OFFSET(reg) ((reg) & GENMASK(19, 5))
 
@@ -320,6 +362,9 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
 case GITS_CMD_INT:
 ret = its_handle_int(its, command);
 break;
+case GITS_CMD_MAPC:
+ret = its_handle_mapc(its, command);
+break;
 case GITS_CMD_SYNC:
 /* We handle ITS commands synchronously, so we ignore SYNC. */
 break;
-- 
2.9.0




[Xen-devel] [PATCH v12 20/34] ARM: vITS: introduce translation table walks

2017-06-14 Thread Andre Przywara
The ITS stores the target (v)CPU and the (virtual) LPI number in tables.
Introduce functions to walk those tables and translate a device ID /
event ID pair into a virtual LPI / vCPU pair.
We map those tables on demand - which is cheap on arm64 - and copy the
respective entries before using them, to avoid the guest tampering with
them meanwhile.

To allow compiling without warnings, we declare two functions as
non-static for the moment, which two later patches will fix.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/vgic-v3-its.c | 140 +
 1 file changed, 140 insertions(+)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 5481791..36910aa 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -83,6 +83,7 @@ struct vits_itte
  * Each entry just contains the VCPU ID of the respective vCPU.
  */
 typedef uint16_t coll_table_entry_t;
+#define UNMAPPED_COLLECTION  ((coll_table_entry_t)~0)
 
 /*
  * Our device table encodings:
@@ -99,6 +100,145 @@ typedef uint64_t dev_table_entry_t;
 #define GITS_BASER_RO_MASK   (GITS_BASER_TYPE_MASK | \
   (0x1fL << GITS_BASER_ENTRY_SIZE_SHIFT))
 
+/*
+ * The physical address is encoded slightly differently depending on
+ * the used page size: the highest four bits are stored in the lowest
+ * four bits of the field for 64K pages.
+ */
+static paddr_t get_baser_phys_addr(uint64_t reg)
+{
+if ( reg & BIT(9) )
+return (reg & GENMASK(47, 16)) |
+((reg & GENMASK(15, 12)) << 36);
+else
+return reg & GENMASK(47, 12);
+}
+
+/* Must be called with the ITS lock held. */
+static struct vcpu *get_vcpu_from_collection(struct virt_its *its,
+ uint16_t collid)
+{
+paddr_t addr = get_baser_phys_addr(its->baser_coll);
+coll_table_entry_t vcpu_id;
+int ret;
+
+ASSERT(spin_is_locked(&its->its_lock));
+
+if ( collid >= its->max_collections )
+return NULL;
+
+ret = vgic_access_guest_memory(its->d,
+   addr + collid * sizeof(coll_table_entry_t),
+   &vcpu_id, sizeof(coll_table_entry_t), false);
+if ( ret )
+return NULL;
+
+if ( vcpu_id == UNMAPPED_COLLECTION || vcpu_id >= its->d->max_vcpus )
+return NULL;
+
+return its->d->vcpu[vcpu_id];
+}
+
+/*
+ * Lookup the address of the Interrupt Translation Table associated with
+ * that device ID.
+ * TODO: add support for walking indirect tables.
+ */
+static int its_get_itt(struct virt_its *its, uint32_t devid,
+   dev_table_entry_t *itt)
+{
+paddr_t addr = get_baser_phys_addr(its->baser_dev);
+
+if ( devid >= its->max_devices )
+return -EINVAL;
+
+return vgic_access_guest_memory(its->d,
+addr + devid * sizeof(dev_table_entry_t),
+itt, sizeof(*itt), false);
+}
+
+/*
+ * Lookup the address of the Interrupt Translation Table associated with
+ * a device ID and return the address of the ITTE belonging to the event ID
+ * (which is an index into that table).
+ */
+static paddr_t its_get_itte_address(struct virt_its *its,
+uint32_t devid, uint32_t evid)
+{
+dev_table_entry_t itt;
+int ret;
+
+ret = its_get_itt(its, devid, &itt);
+if ( ret )
+return INVALID_PADDR;
+
+if ( evid >= DEV_TABLE_ITT_SIZE(itt) ||
+ DEV_TABLE_ITT_ADDR(itt) == INVALID_PADDR )
+return INVALID_PADDR;
+
+return DEV_TABLE_ITT_ADDR(itt) + evid * sizeof(struct vits_itte);
+}
+
+/*
+ * Queries the collection and device tables to get the vCPU and virtual
+ * LPI number for a given guest event. This first accesses the guest memory
+ * to resolve the address of the ITTE, then reads the ITTE entry at this
+ * address and puts the result in vcpu_ptr and vlpi_ptr.
+ * Must be called with the ITS lock held.
+ */
+bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
+   struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
+{
+paddr_t addr;
+struct vits_itte itte;
+struct vcpu *vcpu;
+
+ASSERT(spin_is_locked(&its->its_lock));
+
+addr = its_get_itte_address(its, devid, evid);
+if ( addr == INVALID_PADDR )
+return false;
+
+if ( vgic_access_guest_memory(its->d, addr, &itte, sizeof(itte), false) )
+return false;
+
+vcpu = get_vcpu_from_collection(its, itte.collection);
+if ( !vcpu )
+return false;
+
+*vcpu_ptr = vcpu;
+*vlpi_ptr = itte.vlpi;
+return true;
+}
+
+/*
+ * Queries the collection and device tables to translate the device ID and
+ * event ID and find the appropriate ITTE. The given collection ID and the
+ * virtual LPI number are then stored into that entry.
+ * If vcpu_ptr is provided, returns the VCPU belonging to that 

[Xen-devel] [PATCH v12 19/34] ARM: vITS: add command handling stub and MMIO emulation

2017-06-14 Thread Andre Przywara
Emulate the memory mapped ITS registers and provide a stub to introduce
the ITS command handling framework (but without actually emulating any
commands at this time).
This fixes a misnomer in our virtual ITS structure, where the spec is
confusingly using ID_bits in GITS_TYPER to denote the number of event IDs
(in contrast to GICD_TYPER, where it means number of LPIs).

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/vgic-v3-its.c   | 588 ++-
 xen/include/asm-arm/gic_v3_its.h |   3 +
 2 files changed, 590 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 065ffe2..5481791 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -19,6 +19,16 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+/*
+ * Locking order:
+ *
+ * its->vcmd_lock (protects the command queue)
+ * its->its_lock (protects the translation tables)
+ * d->its_devices_lock   (protects the device RB tree)
+ * v->vgic.lock  (protects the struct pending_irq)
+ * d->pend_lpi_tree_lock (protects the radix tree)
+ */
+
 #include 
 #include 
 #include 
@@ -43,7 +53,7 @@
 struct virt_its {
 struct domain *d;
 unsigned int devid_bits;
-unsigned int intid_bits;
+unsigned int evid_bits;
 spinlock_t vcmd_lock;   /* Protects the virtual command buffer, which */
 uint64_t cwriter;   /* consists of CWRITER and CREADR and those   */
 uint64_t creadr;/* shadow variables cwriter and creadr. */
@@ -53,6 +63,7 @@ struct virt_its {
 uint64_t baser_dev, baser_coll; /* BASER0 and BASER1 for the guest */
 unsigned int max_collections;
 unsigned int max_devices;
+/* changing "enabled" requires to hold *both* the vcmd_lock and its_lock */
 bool enabled;
 };
 
@@ -67,6 +78,581 @@ struct vits_itte
 uint16_t pad;
 };
 
+/*
+ * Our collection table encoding:
+ * Each entry just contains the VCPU ID of the respective vCPU.
+ */
+typedef uint16_t coll_table_entry_t;
+
+/*
+ * Our device table encodings:
+ * Contains the guest physical address of the Interrupt Translation Table in
+ * bits [51:8], and the size of it is encoded as the number of bits minus one
+ * in the lowest 5 bits of the word.
+ */
+typedef uint64_t dev_table_entry_t;
+#define DEV_TABLE_ITT_ADDR(x) ((x) & GENMASK(51, 8))
+#define DEV_TABLE_ITT_SIZE(x) (BIT(((x) & GENMASK(4, 0)) + 1))
+#define DEV_TABLE_ENTRY(addr, bits) \
+(((addr) & GENMASK(51, 8)) | (((bits) - 1) & GENMASK(4, 0)))
+
+#define GITS_BASER_RO_MASK   (GITS_BASER_TYPE_MASK | \
+  (0x1fL << GITS_BASER_ENTRY_SIZE_SHIFT))
+
+/**
+ * Functions that handle ITS commands *
+ **/
+
+static uint64_t its_cmd_mask_field(uint64_t *its_cmd, unsigned int word,
+   unsigned int shift, unsigned int size)
+{
+return (its_cmd[word] >> shift) & GENMASK(size - 1, 0);
+}
+
+#define its_cmd_get_command(cmd)its_cmd_mask_field(cmd, 0,  0,  8)
+#define its_cmd_get_deviceid(cmd)   its_cmd_mask_field(cmd, 0, 32, 32)
+#define its_cmd_get_size(cmd)   its_cmd_mask_field(cmd, 1,  0,  5)
+#define its_cmd_get_id(cmd) its_cmd_mask_field(cmd, 1,  0, 32)
+#define its_cmd_get_physical_id(cmd)its_cmd_mask_field(cmd, 1, 32, 32)
+#define its_cmd_get_collection(cmd) its_cmd_mask_field(cmd, 2,  0, 16)
+#define its_cmd_get_target_addr(cmd)its_cmd_mask_field(cmd, 2, 16, 32)
+#define its_cmd_get_validbit(cmd)   its_cmd_mask_field(cmd, 2, 63,  1)
+#define its_cmd_get_ittaddr(cmd)(its_cmd_mask_field(cmd, 2, 8, 44) << 8)
+
+#define ITS_CMD_BUFFER_SIZE(baser)  ((((baser) & 0xff) + 1) << 12)
+#define ITS_CMD_OFFSET(reg) ((reg) & GENMASK(19, 5))
+
+static void dump_its_command(uint64_t *command)
+{
+gdprintk(XENLOG_WARNING, "  cmd 0x%02lx: %016lx %016lx %016lx %016lx\n",
+ its_cmd_get_command(command),
+ command[0], command[1], command[2], command[3]);
+}
+
+/*
+ * Must be called with the vcmd_lock held.
+ * TODO: Investigate whether we can be smarter here and don't need to hold
+ * the lock all of the time.
+ */
+static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
+{
+paddr_t addr = its->cbaser & GENMASK(51, 12);
+uint64_t command[4];
+
+ASSERT(spin_is_locked(&its->vcmd_lock));
+
+if ( its->cwriter >= ITS_CMD_BUFFER_SIZE(its->cbaser) )
+return -1;
+
+while ( its->creadr != its->cwriter )
+{
+int ret;
+
+ret = vgic_access_guest_memory(d, addr + its->creadr,
+   command, sizeof(command), false);
+if ( ret )
+return 

[Xen-devel] [PATCH v12 10/34] ARM: GIC: export and extend vgic_init_pending_irq()

2017-06-14 Thread Andre Przywara
For LPIs we later want to dynamically allocate struct pending_irqs.
So beside needing to initialize the struct from there we also need
to clean it up and re-initialize it later on.
Export vgic_init_pending_irq() and extend it to be reusable.

Signed-off-by: Andre Przywara 
Reviewed-by: Julien Grall 
---
 xen/arch/arm/vgic.c| 3 ++-
 xen/include/asm-arm/vgic.h | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index cb7ab3b..d7c4f32 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -60,8 +60,9 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq)
 return vgic_get_rank(v, rank);
 }
 
-static void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
+void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
 {
+memset(p, 0, sizeof(*p));
 INIT_LIST_HEAD(&p->inflight);
 INIT_LIST_HEAD(&p->lr_queue);
 p->irq = virq;
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 65d2322..9dc487b 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -305,6 +305,7 @@ extern struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq);
 extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq);
 extern void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
+extern void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq);
 extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
 extern struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq);
 extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int s);
-- 
2.9.0




[Xen-devel] [PATCH v12 13/34] ARM: GIC: ITS: remove no longer needed VCPU ID in host LPI entry

2017-06-14 Thread Andre Przywara
So far we have stored the VCPU ID in the host LPI entry to get easy
access to the VCPU that a forwarded LPI interrupt should be injected to.
However this creates a redundancy, since we already keep the target VCPU
in the struct pending_irq, which we can easily look up given the
domain and the virtual LPI number.
Apart from removing the redundancy, this avoids having to update this
information later and to keep it in sync in a race-free fashion.
Since this information has not been used so far, this patch does not
change any behaviour; it just removes the declaration and initialization.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/gic-v3-lpi.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index 292f2d0..dbaf45a 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -47,7 +47,7 @@ union host_lpi {
 struct {
 uint32_t virt_lpi;
 uint16_t dom_id;
-uint16_t vcpu_id;
+uint16_t pad;
 };
 };
 
@@ -417,7 +417,6 @@ int gicv3_allocate_host_lpi_block(struct domain *d, uint32_t *first_lpi)
  */
 hlpi.virt_lpi = INVALID_LPI;
 hlpi.dom_id = d->domain_id;
-hlpi.vcpu_id = INVALID_VCPU_ID;
 write_u64_atomic(&lpi_data.host_lpis[chunk][lpi_idx + i].data,
  hlpi.data);
 
-- 
2.9.0




[Xen-devel] [PATCH v12 00/34] arm64: Dom0 ITS emulation

2017-06-14 Thread Andre Przywara
Hi,

hopefully the final version, with only nits from v11 addressed.
The same restrictions as for the previous versions still apply: the locking
is considered somewhat insufficient and will be fixed by an upcoming rework.

Patches 01/34 and 02/34 should be applied for 4.9 still, since they fix
existing bugs.

The minor comments on v11 have been addressed and the respective tags
have been added. For a changelog see below (which omits typo fixes).

I dropped Julien's Acked-by from patch 25/34 (MAPD), since I changed
it slightly after Stefano's comment.

Cheers,
Andre

--
This series adds support for emulation of an ARM GICv3 ITS interrupt
controller. For hardware which relies on the ITS to provide interrupts for
its peripherals this code is needed to get a machine booted into Dom0 at
all. ITS emulation for DomUs is only really useful with PCI passthrough,
which is not yet available for ARM. It is expected that this feature
will be co-developed with the ITS DomU code. However this code drop here
considered DomU emulation already, to keep later architectural changes
to a minimum.

This is a technical preview version to allow early testing of the feature.
Things not (properly) addressed in this release:
- There is only support for Dom0 at the moment. DomU support is only really
useful with PCI passthrough, which is not there yet for ARM.
- The MOVALL command is not emulated. In our case there is really nothing
to do here. We might need to revisit this in the future for DomU support.
- The INVALL command might need some rework to be more efficient. Currently
we iterate over all mapped LPIs, which might take a bit longer.
- Indirect tables are not supported. This affects both the host and the
virtual side.
- The ITS tables inside (Dom0) guest memory cannot easily be protected
at the moment (without restricting access to Xen as well). So for now
we trust Dom0 not to touch this memory (which the spec forbids as well).
- With malicious guests (DomUs) there is a possibility of an interrupt
storm triggered by a device. We would need to investigate what that means
for Xen and if there is a nice way to prevent this. Disabling the LPI on
the host side would require command queuing, which has its downsides to
be issued during runtime.
- Dom0 should make sure that the ITS resources (number of LPIs, devices,
events) later handed to a DomU are really limited, as a large number of
them could mean much time spend in Xen to initialize, free or handle those.
It is expected that the toolstack sets up a tailored ITS with just enough
resources to accommodate the needs of the actual passthrough-ed device(s).
- The command queue locking is currently suboptimal and should be made more
fine-grained in the future, if possible.
- Provide support for running with an IOMMU, to map the doorbell page
to all devices.


Some generic design principles:

* The current GIC code statically allocates structures for each supported
IRQ (both for the host and the guest), which due to the potentially
millions of LPI interrupts is not feasible to copy for the ITS.
So we refrain from introducing the ITS as a first class Xen interrupt
controller, also we don't hold struct irq_desc's or struct pending_irq's
for each possible LPI.
Fortunately LPIs are only interesting to guests, so we get away with
storing only the virtual IRQ number and the guest VCPU for each allocated
host LPI, which can be stashed into one uint64_t. This data is stored in
a two-level table, which is both memory efficient and quick to access.
We hook into the existing IRQ handling and VGIC code to avoid accessing
the normal structures, providing alternative methods for getting the
needed information (priority, is enabled?) for LPIs.
Whenever a guest maps a device, we allocate the maximum required number
of struct pending_irq's, so that any triggering LPI can find its data
structure. Upon the guest actually mapping the LPI, this pointer to the
corresponding pending_irq gets entered into a radix tree, so that it can
be quickly looked up.

* On the guest side we (later will) have to deal with malicious guests
trying to hog Xen with mapping requests for a lot of LPIs, for instance.
As the ITS actually uses system memory for storing status information,
we use this memory (which the guest has to provide) to naturally limit
a guest. Whenever we need information from any of the ITS tables, we
temporarily map them (which is cheap on arm64) and copy the required data.

* An obvious approach to handling some guest ITS commands would be to
propagate them to the host, for instance to map devices and LPIs and
to enable or disable LPIs.
However this (later with DomU support) will create an attack vector, as
a malicious guest could try to fill the host command queue with
propagated commands.
So we try to avoid this situation: Dom0 sending a device mapping (MAPD)
command is the only time we allow queuing commands to the host ITS command
queue, as this seems to be the only 

[Xen-devel] [PATCH v12 02/34] ARM: GICv3: enable ITS on the host

2017-06-14 Thread Andre Przywara
Even though the ITS emulation is not yet in place, the host ITS already
gets initialized and Xen tries to map the host collections.
However for commands to be processed we need to *enable* the ITS, which
will be done in a later patch not yet merged.
So those MAPC commands are not processed and run into a timeout, leading
to a panic on machines which advertise an ITS in their DT.
This patch just enables the ITS (but not the LPIs on each redistributor),
to get those MAPC commands executed.

This fixes booting Xen on ARM64 machines with an ITS and the
(EXPERT) ITS Kconfig option enabled.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/gic-v3-its.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
index 07280b3..aebc257 100644
--- a/xen/arch/arm/gic-v3-its.c
+++ b/xen/arch/arm/gic-v3-its.c
@@ -505,6 +505,10 @@ static int gicv3_its_init_single_its(struct host_its *hw_its)
 return -ENOMEM;
 writeq_relaxed(0, hw_its->its_base + GITS_CWRITER);
 
+/* Now enable interrupt translation and command processing on that ITS. */
+reg = readl_relaxed(hw_its->its_base + GITS_CTLR);
+writel_relaxed(reg | GITS_CTLR_ENABLE, hw_its->its_base + GITS_CTLR);
+
 return 0;
 }
 
-- 
2.9.0




[Xen-devel] [PATCH v12 17/34] ARM: vGICv3: re-use vgic_reg64_check_access

2017-06-14 Thread Andre Przywara
vgic_reg64_check_access() checks for a valid access width of a 64-bit
MMIO register, which is useful beyond the current GICv3 emulation only.
Move this function to the vgic-emul.h to be easily reusable.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/vgic-v3.c  | 9 -
 xen/include/asm-arm/vgic-emul.h | 9 +
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index c53fa9c..30981b2 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -161,15 +161,6 @@ static void vgic_store_irouter(struct domain *d, struct 
vgic_irq_rank *rank,
 }
 }
 
-static inline bool vgic_reg64_check_access(struct hsr_dabt dabt)
-{
-/*
- * 64 bits registers can be accessible using 32-bit and 64-bit unless
- * stated otherwise (See 8.1.3 ARM IHI 0069A).
- */
-return ( dabt.size == DABT_DOUBLE_WORD || dabt.size == DABT_WORD );
-}
-
 static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v, mmio_info_t *info,
  uint32_t gicr_reg,
  register_t *r)
diff --git a/xen/include/asm-arm/vgic-emul.h b/xen/include/asm-arm/vgic-emul.h
index 184a1f0..e52fbaa 100644
--- a/xen/include/asm-arm/vgic-emul.h
+++ b/xen/include/asm-arm/vgic-emul.h
@@ -12,6 +12,15 @@
 #define VRANGE32(start, end) start ... end + 3
 #define VRANGE64(start, end) start ... end + 7
 
+/*
+ * 64 bits registers can be accessible using 32-bit and 64-bit unless
+ * stated otherwise (See 8.1.3 ARM IHI 0069A).
+ */
+static inline bool vgic_reg64_check_access(struct hsr_dabt dabt)
+{
+return ( dabt.size == DABT_DOUBLE_WORD || dabt.size == DABT_WORD );
+}
+
 #endif /* __ASM_ARM_VGIC_EMUL_H__ */
 
 /*
-- 
2.9.0




[Xen-devel] [PATCH v12 06/34] ARM: vGIC: move irq_to_pending() calls under the VGIC VCPU lock

2017-06-14 Thread Andre Przywara
So far irq_to_pending() is just a convenience function to lookup
statically allocated arrays. This will change with LPIs, which are
more dynamic, so the memory for their struct pending_irq might go away.
The proper answer to the issue of preventing stale pointers is
ref-counting, which requires more work and will be introduced with
a later rework.
For now move the irq_to_pending() calls that are used with LPIs under the
VGIC VCPU lock, and only use the returned pointer while holding the lock.
This prevents the memory from being freed while we use it.
For the sake of completeness we take care of all irq_to_pending()
users, even those which will never deal with LPIs.
Document the limits of vgic_num_irqs().

Signed-off-by: Andre Przywara 
---
 xen/arch/arm/vgic.c| 42 --
 xen/include/asm-arm/vgic.h |  5 +
 2 files changed, 37 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 04d821a..f2f423f 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -234,23 +234,29 @@ static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
 bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
 {
 unsigned long flags;
-struct pending_irq *p = irq_to_pending(old, irq);
+struct pending_irq *p;
+
+spin_lock_irqsave(&old->arch.vgic.lock, flags);
+
+p = irq_to_pending(old, irq);
 
 /* nothing to do for virtual interrupts */
 if ( p->desc == NULL )
+{
+spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
 return true;
+}
 
 /* migration already in progress, no need to do anything */
 if ( test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
 {
 gprintk(XENLOG_WARNING, "irq %u migration failed: requested while in progress\n", irq);
+spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
 return false;
 }
 
 perfc_incr(vgic_irq_migrates);
 
-spin_lock_irqsave(&old->arch.vgic.lock, flags);
-
 if ( list_empty(&p->inflight) )
 {
 irq_set_affinity(p->desc, cpumask_of(new->processor));
@@ -285,6 +291,17 @@ void arch_move_irqs(struct vcpu *v)
 struct vcpu *v_target;
 int i;
 
+/*
+ * We don't migrate LPIs at the moment.
+ * If we ever do, we must make sure that the struct pending_irq does
+ * not go away, as there is no lock preventing this here.
+ * To ensure this, we check if the loop below ever touches LPIs.
+ * At the moment vgic_num_irqs() just covers SPIs, as it's mostly used
+ * for allocating the pending_irq and irq_desc arrays, in which LPIs
+ * don't participate.
+ */
+ASSERT(!is_lpi(vgic_num_irqs(d) - 1));
+
 for ( i = 32; i < vgic_num_irqs(d); i++ )
 {
 v_target = vgic_get_target_vcpu(v, i);
@@ -299,6 +316,7 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 {
 const unsigned long mask = r;
 struct pending_irq *p;
+struct irq_desc *desc;
 unsigned int irq;
 unsigned long flags;
 int i = 0;
@@ -307,17 +325,19 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
 irq = i + (32 * n);
 v_target = vgic_get_target_vcpu(v, irq);
+
+spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
 p = irq_to_pending(v_target, irq);
 clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
 gic_remove_from_lr_pending(v_target, p);
+desc = p->desc;
 spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
 
-if ( p->desc != NULL )
+if ( desc != NULL )
 {
-spin_lock_irqsave(&p->desc->lock, flags);
-p->desc->handler->disable(p->desc);
-spin_unlock_irqrestore(&p->desc->lock, flags);
+spin_lock_irqsave(&desc->lock, flags);
+desc->handler->disable(desc);
+spin_unlock_irqrestore(&desc->lock, flags);
 }
 i++;
 }
@@ -352,9 +372,9 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
 while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
 irq = i + (32 * n);
 v_target = vgic_get_target_vcpu(v, irq);
+spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
 p = irq_to_pending(v_target, irq);
 set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
 if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
 gic_raise_guest_irq(v_target, irq, p->priority);
 spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
@@ -463,7 +483,7 @@ void vgic_clear_pending_irqs(struct vcpu *v)
 void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
 {
 uint8_t priority;
-struct pending_irq *iter, *n = irq_to_pending(v, virq);
+struct pending_irq *iter, *n;
 unsigned long flags;
 bool running;
 
@@ -471,6 +491,8 @@ void 

[Xen-devel] [PATCH v12 05/34] ARM: vGIC: rework gic_remove_from_queues()

2017-06-14 Thread Andre Przywara
The function name gic_remove_from_queues() was a bit of a misnomer,
since it just removes an IRQ from the pending queue, not both queues.
Rename the function to make this more clear, also give it a pointer to
a struct pending_irq directly and rely on the VGIC VCPU lock being
already taken, so it can be used in more places. As a result, the lock
is now taken in the caller instead.
Replace the list removal in gic_clear_pending_irqs() with a call to
this function.

Signed-off-by: Andre Przywara 
---
 xen/arch/arm/gic.c| 12 
 xen/arch/arm/vgic.c   |  5 -
 xen/include/asm-arm/gic.h |  2 +-
 3 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index da19130..6c0c9c3 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -400,15 +400,11 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, struct pending_irq *n)
 list_add_tail(&n->lr_queue, &v->arch.vgic.lr_pending);
 }
 
-void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
+void gic_remove_from_lr_pending(struct vcpu *v, struct pending_irq *p)
 {
-struct pending_irq *p = irq_to_pending(v, virtual_irq);
-unsigned long flags;
+ASSERT(spin_is_locked(&v->arch.vgic.lock));
 
-spin_lock_irqsave(&v->arch.vgic.lock, flags);
-if ( !list_empty(&p->lr_queue) )
-list_del_init(&p->lr_queue);
-spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+list_del_init(&p->lr_queue);
 }
 
 void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
@@ -609,7 +605,7 @@ void gic_clear_pending_irqs(struct vcpu *v)
 
 v->arch.lr_mask = 0;
 list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
-list_del_init(&p->lr_queue);
+gic_remove_from_lr_pending(v, p);
 }
 
 int gic_events_need_delivery(void)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 18fe420..04d821a 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -309,7 +309,10 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 v_target = vgic_get_target_vcpu(v, irq);
 p = irq_to_pending(v_target, irq);
 clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-gic_remove_from_queues(v_target, irq);
+spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
+gic_remove_from_lr_pending(v_target, p);
+spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
+
 if ( p->desc != NULL )
 {
 spin_lock_irqsave(&p->desc->lock, flags);
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 836a103..3130634 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -243,7 +243,7 @@ extern void init_maintenance_interrupt(void);
 extern void gic_raise_guest_irq(struct vcpu *v, unsigned int irq,
 unsigned int priority);
 extern void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq);
-extern void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq);
+extern void gic_remove_from_lr_pending(struct vcpu *v, struct pending_irq *p);
 
 /* Accept an interrupt from the GIC and dispatch its handler */
 extern void gic_interrupt(struct cpu_user_regs *regs, int is_fiq);
-- 
2.9.0




[Xen-devel] [PATCH v12 11/34] ARM: vGIC: cache virtual LPI priority in struct pending_irq

2017-06-14 Thread Andre Przywara
We enhance struct pending_irq to cache the priority information
for LPIs. Reading the information from there is faster than accessing
the property table from guest memory. Also it uses some padding area in
the struct, so it does not require more memory.
This introduces the function to retrieve the LPI priority as a vgic_ops.
Also this moves the vgic_get_virq_priority() call in
vgic_vcpu_inject_irq() to happen after the NULL check of the pending_irq
pointer, so we can rely on the pointer in the new function.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/vgic-v2.c |  7 +++
 xen/arch/arm/vgic-v3.c | 11 +++
 xen/arch/arm/vgic.c| 10 +++---
 xen/include/asm-arm/vgic.h |  2 ++
 4 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 488e6fa..4f8dee4 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -712,11 +712,18 @@ static struct pending_irq *vgic_v2_lpi_to_pending(struct domain *d,
 BUG();
 }
 
+static int vgic_v2_lpi_get_priority(struct domain *d, unsigned int vlpi)
+{
+/* Dummy function, no LPIs on a VGICv2. */
+BUG();
+}
+
 static const struct vgic_ops vgic_v2_ops = {
 .vcpu_init   = vgic_v2_vcpu_init,
 .domain_init = vgic_v2_domain_init,
 .domain_free = vgic_v2_domain_free,
 .lpi_to_pending = vgic_v2_lpi_to_pending,
+.lpi_get_priority = vgic_v2_lpi_get_priority,
 .max_vcpus = 8,
 };
 
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 9dee2df..0b4669f 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1577,12 +1577,23 @@ static struct pending_irq *vgic_v3_lpi_to_pending(struct domain *d,
 return pirq;
 }
 
+/* Retrieve the priority of an LPI from its struct pending_irq. */
+static int vgic_v3_lpi_get_priority(struct domain *d, uint32_t vlpi)
+{
+struct pending_irq *p = vgic_v3_lpi_to_pending(d, vlpi);
+
+ASSERT(p);
+
+return p->lpi_priority;
+}
+
 static const struct vgic_ops v3_ops = {
 .vcpu_init   = vgic_v3_vcpu_init,
 .domain_init = vgic_v3_domain_init,
 .domain_free = vgic_v3_domain_free,
 .emulate_reg  = vgic_v3_emulate_reg,
 .lpi_to_pending = vgic_v3_lpi_to_pending,
+.lpi_get_priority = vgic_v3_lpi_get_priority,
 /*
  * We use both AFF1 and AFF0 in (v)MPIDR. Thus, the max number of CPU
  * that can be supported is up to 4096(==256*16) in theory.
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index d7c4f32..204e0d9 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -227,8 +227,13 @@ struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
 
 static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
 {
-struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
+struct vgic_irq_rank *rank;
+
+/* LPIs don't have a rank, also store their priority separately. */
+if ( is_lpi(virq) )
+return v->domain->arch.vgic.handler->lpi_get_priority(v->domain, virq);
 
+rank = vgic_rank_irq(v, virq);
 return ACCESS_ONCE(rank->priority[virq & INTERRUPT_RANK_MASK]);
 }
 
@@ -503,8 +508,6 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
 unsigned long flags;
 bool running;
 
-priority = vgic_get_virq_priority(v, virq);
-
 spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
 n = irq_to_pending(v, virq);
@@ -530,6 +533,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
 goto out;
 }
 
+priority = vgic_get_virq_priority(v, virq);
 n->priority = priority;
 
 /* the irq is enabled */
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 9dc487b..d1fcea1 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -72,6 +72,7 @@ struct pending_irq
 #define GIC_INVALID_LR (uint8_t)~0
 uint8_t lr;
 uint8_t priority;
+uint8_t lpi_priority;   /* Caches the priority if this is an LPI. */
 /* inflight is used to append instances of pending_irq to
  * vgic.inflight_irqs */
 struct list_head inflight;
@@ -136,6 +137,7 @@ struct vgic_ops {
 bool (*emulate_reg)(struct cpu_user_regs *regs, union hsr hsr);
 /* lookup the struct pending_irq for a given LPI interrupt */
 struct pending_irq *(*lpi_to_pending)(struct domain *d, unsigned int vlpi);
+int (*lpi_get_priority)(struct domain *d, uint32_t vlpi);
 /* Maximum number of vCPU supported */
 const unsigned int max_vcpus;
 };
-- 
2.9.0




[Xen-devel] [PATCH v12 12/34] ARM: vGIC: add LPI VCPU ID to struct pending_irq

2017-06-14 Thread Andre Przywara
The target CPU for an LPI is encoded in the interrupt translation table
entry, so it can't easily be derived from just an LPI number (short of
walking *all* tables and finding the matching LPI).
To avoid this in case we need to know the VCPU (for the INVALL command,
for instance), put the VCPU ID in the struct pending_irq, so that it is
easily accessible.
We use the remaining 8 bits of padding space for that to avoid enlarging
the size of struct pending_irq. The number of VCPUs is limited to 127
at the moment anyway, which we also confirm with a BUILD_BUG_ON.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/vgic.c| 4 
 xen/include/asm-arm/vgic.h | 1 +
 2 files changed, 5 insertions(+)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 204e0d9..c097bd4 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -62,10 +62,14 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq)
 
 void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
 {
+/* The lpi_vcpu_id field must be big enough to hold a VCPU ID. */
+BUILD_BUG_ON(BIT(sizeof(p->lpi_vcpu_id) * 8) < MAX_VIRT_CPUS);
+
 memset(p, 0, sizeof(*p));
 INIT_LIST_HEAD(&p->inflight);
 INIT_LIST_HEAD(&p->lr_queue);
 p->irq = virq;
+p->lpi_vcpu_id = INVALID_VCPU_ID;
 }
 
 static void vgic_rank_init(struct vgic_irq_rank *rank, uint8_t index,
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index d1fcea1..33b2fb5 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -73,6 +73,7 @@ struct pending_irq
 uint8_t lr;
 uint8_t priority;
 uint8_t lpi_priority;   /* Caches the priority if this is an LPI. */
+uint8_t lpi_vcpu_id;/* The VCPU for an LPI. */
 /* inflight is used to append instances of pending_irq to
  * vgic.inflight_irqs */
 struct list_head inflight;
-- 
2.9.0




[Xen-devel] [PATCH v12 16/34] ARM: introduce vgic_access_guest_memory()

2017-06-14 Thread Andre Przywara
From: Vijaya Kumar K 

This function allows copying a chunk of data from and to guest physical
memory. It looks up the associated page from the guest's p2m tree
and maps this page temporarily for the time of the access.
This function was originally written by Vijaya as part of an earlier series:
https://patchwork.kernel.org/patch/8177251

Signed-off-by: Vijaya Kumar K 
Signed-off-by: Andre Przywara 
Reviewed-by: Julien Grall 
---
 xen/arch/arm/vgic.c| 50 ++
 xen/include/asm-arm/vgic.h |  3 +++
 2 files changed, 53 insertions(+)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index c097bd4..789c58b 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -636,6 +637,55 @@ void vgic_free_virq(struct domain *d, unsigned int virq)
 }
 
 /*
+ * Temporarily map one physical guest page and copy data to or from it.
+ * The data to be copied cannot cross a page boundary.
+ */
+int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
+ uint32_t size, bool is_write)
+{
+struct page_info *page;
+uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
+p2m_type_t p2mt;
+void *p;
+
+/* Do not cross a page boundary. */
+if ( size > (PAGE_SIZE - offset) )
+{
+printk(XENLOG_G_ERR "d%d: vITS: memory access would cross page boundary\n",
+   d->domain_id);
+return -EINVAL;
+}
+
+page = get_page_from_gfn(d, paddr_to_pfn(gpa), &p2mt, P2M_ALLOC);
+if ( !page )
+{
+printk(XENLOG_G_ERR "d%d: vITS: Failed to get table entry\n",
+   d->domain_id);
+return -EINVAL;
+}
+
+if ( !p2m_is_ram(p2mt) )
+{
+put_page(page);
+printk(XENLOG_G_ERR "d%d: vITS: memory used by the ITS should be RAM.",
+   d->domain_id);
+return -EINVAL;
+}
+
+p = __map_domain_page(page);
+
+if ( is_write )
+memcpy(p + offset, buf, size);
+else
+memcpy(buf, p + offset, size);
+
+unmap_domain_page(p);
+put_page(page);
+
+return 0;
+}
+
+/*
  * Local variables:
  * mode: C
  * c-file-style: "BSD"
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 33b2fb5..6a23249 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -320,6 +320,9 @@ extern void register_vgic_ops(struct domain *d, const struct vgic_ops *ops);
 int vgic_v2_init(struct domain *d, int *mmio_count);
 int vgic_v3_init(struct domain *d, int *mmio_count);
 
+int vgic_access_guest_memory(struct domain *d, paddr_t gpa, void *buf,
+ uint32_t size, bool_t is_write);
+
 extern int domain_vgic_register(struct domain *d, int *mmio_count);
 extern int vcpu_vgic_free(struct vcpu *v);
 extern bool vgic_to_sgi(struct vcpu *v, register_t sgir,
-- 
2.9.0




[Xen-devel] [PATCH v12 01/34] ARM: vGIC: avoid rank lock when reading priority

2017-06-14 Thread Andre Przywara
When reading the priority value of a virtual interrupt, we were taking
the respective rank lock so far.
However for forwarded interrupts (Dom0 only so far) this may lead to a
deadlock with the following call chain:
- MMIO access to change the IRQ affinity, calling the ITARGETSR handler
- this handler takes the appropriate rank lock and calls vgic_store_itargetsr()
- vgic_store_itargetsr() will eventually call vgic_migrate_irq()
- if this IRQ is already in-flight, it will remove it from the old
  VCPU and inject it into the new one, by calling vgic_vcpu_inject_irq()
- vgic_vcpu_inject_irq will call vgic_get_virq_priority()
- vgic_get_virq_priority() tries to take the rank lock - again!
It seems like this code path has never been exercised before.

Fix this by avoiding taking the lock in vgic_get_virq_priority() (like we
do in vgic_get_target_vcpu()).
Actually we are just reading one byte, and priority changes while
interrupts are handled are a benign race that can happen on real hardware
too. So it is safe to just prevent the compiler from reading from the
struct more than once.

Signed-off-by: Andre Przywara 
---
 xen/arch/arm/vgic-v2.c | 13 -
 xen/arch/arm/vgic-v3.c | 11 +++
 xen/arch/arm/vgic.c|  8 +---
 3 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index dc9f95b..9fa42e1 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -252,15 +252,15 @@ static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
 case VRANGE32(GICD_IPRIORITYR, GICD_IPRIORITYRN):
 {
 uint32_t ipriorityr;
+uint8_t rank_index;
 
 if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
 rank = vgic_rank_offset(v, 8, gicd_reg - GICD_IPRIORITYR, DABT_WORD);
 if ( rank == NULL ) goto read_as_zero;
+rank_index = REG_RANK_INDEX(8, gicd_reg - GICD_IPRIORITYR, DABT_WORD);
 
 vgic_lock_rank(v, rank, flags);
-ipriorityr = rank->ipriorityr[REG_RANK_INDEX(8,
- gicd_reg - GICD_IPRIORITYR,
- DABT_WORD)];
+ipriorityr = ACCESS_ONCE(rank->ipriorityr[rank_index]);
 vgic_unlock_rank(v, rank, flags);
 *r = vgic_reg32_extract(ipriorityr, info);
 
@@ -499,7 +499,7 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
 
 case VRANGE32(GICD_IPRIORITYR, GICD_IPRIORITYRN):
 {
-uint32_t *ipriorityr;
+uint32_t *ipriorityr, priority;
 
 if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
 rank = vgic_rank_offset(v, 8, gicd_reg - GICD_IPRIORITYR, DABT_WORD);
@@ -508,7 +508,10 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
 ipriorityr = &rank->ipriorityr[REG_RANK_INDEX(8,
   gicd_reg - GICD_IPRIORITYR,
   DABT_WORD)];
-vgic_reg32_update(ipriorityr, r, info);
+priority = ACCESS_ONCE(*ipriorityr);
+vgic_reg32_update(&priority, r, info);
+ACCESS_ONCE(*ipriorityr) = priority;
+
 vgic_unlock_rank(v, rank, flags);
 return 1;
 }
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index d10757a..9018ddc 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -515,14 +515,15 @@ static int __vgic_v3_distr_common_mmio_read(const char *name, struct vcpu *v,
 case VRANGE32(GICD_IPRIORITYR, GICD_IPRIORITYRN):
 {
 uint32_t ipriorityr;
+uint8_t rank_index;
 
 if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
 rank = vgic_rank_offset(v, 8, reg - GICD_IPRIORITYR, DABT_WORD);
 if ( rank == NULL ) goto read_as_zero;
+rank_index = REG_RANK_INDEX(8, reg - GICD_IPRIORITYR, DABT_WORD);
 
 vgic_lock_rank(v, rank, flags);
-ipriorityr = rank->ipriorityr[REG_RANK_INDEX(8, reg - GICD_IPRIORITYR,
- DABT_WORD)];
+ipriorityr = ACCESS_ONCE(rank->ipriorityr[rank_index]);
 vgic_unlock_rank(v, rank, flags);
 
 *r = vgic_reg32_extract(ipriorityr, info);
@@ -630,7 +631,7 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
 
 case VRANGE32(GICD_IPRIORITYR, GICD_IPRIORITYRN):
 {
-uint32_t *ipriorityr;
+uint32_t *ipriorityr, priority;
 
 if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
 rank = vgic_rank_offset(v, 8, reg - GICD_IPRIORITYR, DABT_WORD);
@@ -638,7 +639,9 @@ static int __vgic_v3_distr_common_mmio_write(const char *name, struct vcpu *v,
 vgic_lock_rank(v, rank, flags);
 ipriorityr = &rank->ipriorityr[REG_RANK_INDEX(8, reg - GICD_IPRIORITYR,
  

[Xen-devel] [PATCH v12 04/34] ARM: GICv3: setup number of LPI bits for a GICv3 guest

2017-06-14 Thread Andre Przywara
The host supports a certain number of LPI identifiers, as stored in
the GICD_TYPER register.
Store this number from the hardware register in vgic_v3_hw to allow
injecting the very same number into a guest (Dom0).
DomUs get the legacy number of 10 bits here, since for now they only see
SPIs, so they do not need more. This should be revisited once we get
proper DomU ITS support.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
---
 xen/arch/arm/gic-v3.c|  6 +-
 xen/arch/arm/vgic-v3.c   | 16 +++-
 xen/include/asm-arm/domain.h |  1 +
 xen/include/asm-arm/vgic.h   |  3 ++-
 4 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index eda3410..fc3614e 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1597,6 +1597,7 @@ static int __init gicv3_init(void)
 {
 int res, i;
 uint32_t reg;
+unsigned int intid_bits;
 
 if ( !cpu_has_gicv3 )
 {
@@ -1640,8 +1641,11 @@ static int __init gicv3_init(void)
i, r->base, r->base + r->size);
 }
 
+reg = readl_relaxed(GICD + GICD_TYPER);
+intid_bits = GICD_TYPE_ID_BITS(reg);
+
 vgic_v3_setup_hw(dbase, gicv3.rdist_count, gicv3.rdist_regions,
- gicv3.rdist_stride);
+ gicv3.rdist_stride, intid_bits);
 gicv3_init_v2();
 
 spin_lock_init(&gicv3.lock);
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 9018ddc..474cca7 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -57,18 +57,21 @@ static struct {
 unsigned int nr_rdist_regions;
 const struct rdist_region *regions;
 uint32_t rdist_stride; /* Re-distributor stride */
+unsigned int intid_bits;  /* Number of interrupt ID bits */
 } vgic_v3_hw;
 
 void vgic_v3_setup_hw(paddr_t dbase,
   unsigned int nr_rdist_regions,
   const struct rdist_region *regions,
-  uint32_t rdist_stride)
+  uint32_t rdist_stride,
+  unsigned int intid_bits)
 {
 vgic_v3_hw.enabled = 1;
 vgic_v3_hw.dbase = dbase;
 vgic_v3_hw.nr_rdist_regions = nr_rdist_regions;
 vgic_v3_hw.regions = regions;
 vgic_v3_hw.rdist_stride = rdist_stride;
+vgic_v3_hw.intid_bits = intid_bits;
 }
 
 static struct vcpu *vgic_v3_irouter_to_vcpu(struct domain *d, uint64_t irouter)
@@ -1485,6 +1488,8 @@ static int vgic_v3_domain_init(struct domain *d)
 
 first_cpu += size / d->arch.vgic.rdist_stride;
 }
+
+d->arch.vgic.intid_bits = vgic_v3_hw.intid_bits;
 }
 else
 {
@@ -1500,6 +1505,15 @@ static int vgic_v3_domain_init(struct domain *d)
 d->arch.vgic.rdist_regions[0].base = GUEST_GICV3_GICR0_BASE;
 d->arch.vgic.rdist_regions[0].size = GUEST_GICV3_GICR0_SIZE;
 d->arch.vgic.rdist_regions[0].first_cpu = 0;
+
+/*
+ * TODO: only SPIs for now, adjust this when guests need LPIs.
+ * Please note that this value just describes the bits required
+ * in the stream interface, which is of no real concern for our
+ * emulation. So we just go with "10" here to cover all eventual
+ * SPIs (even if the guest implements less).
+ */
+d->arch.vgic.intid_bits = 10;
 }
 
 ret = vgic_v3_its_init_domain(d);
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 6de8082..7c3829d 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -111,6 +111,7 @@ struct arch_domain
 uint32_t rdist_stride;  /* Re-Distributor stride */
 struct rb_root its_devices; /* Devices mapped to an ITS */
 spinlock_t its_devices_lock;/* Protects the its_devices tree */
+unsigned int intid_bits;
 #endif
 } vgic;
 
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index 544867a..df75064 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -346,7 +346,8 @@ struct rdist_region;
 void vgic_v3_setup_hw(paddr_t dbase,
   unsigned int nr_rdist_regions,
   const struct rdist_region *regions,
-  uint32_t rdist_stride);
+  uint32_t rdist_stride,
+  unsigned int intid_bits);
 #endif
 
 #endif /* __ASM_ARM_VGIC_H__ */
-- 
2.9.0




[Xen-devel] [PATCH v12 03/34] ARM: GICv3: enable LPIs on the host

2017-06-14 Thread Andre Przywara
Now that the host part of the ITS code is in place, we can enable the
LPIs on each redistributor to get the show rolling.
At this point there would be no LPIs mapped, as guests don't know about
the ITS yet.

Signed-off-by: Andre Przywara 
Acked-by: Stefano Stabellini 
---
 xen/arch/arm/gic-v3.c | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index a559e5e..eda3410 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -620,6 +620,21 @@ static int gicv3_enable_redist(void)
 return 0;
 }
 
+/* Enable LPIs on this redistributor (only useful when the host has an ITS). */
+static bool gicv3_enable_lpis(void)
+{
+uint32_t val;
+
+val = readl_relaxed(GICD_RDIST_BASE + GICR_TYPER);
+if ( !(val & GICR_TYPER_PLPIS) )
+return false;
+
+val = readl_relaxed(GICD_RDIST_BASE + GICR_CTLR);
+writel_relaxed(val | GICR_CTLR_ENABLE_LPIS, GICD_RDIST_BASE + GICR_CTLR);
+
+return true;
+}
+
 static int __init gicv3_populate_rdist(void)
 {
 int i;
@@ -731,11 +746,14 @@ static int gicv3_cpu_init(void)
 if ( gicv3_enable_redist() )
 return -ENODEV;
 
+/* If the host has any ITSes, enable LPIs now. */
 if ( gicv3_its_host_has_its() )
 {
 ret = gicv3_its_setup_collection(smp_processor_id());
 if ( ret )
 return ret;
+if ( !gicv3_enable_lpis() )
+return -EBUSY;
 }
 
 /* Set priority on PPI and SGI interrupts */
-- 
2.9.0




[Xen-devel] [PATCH v12 09/34] ARM: GICv3: introduce separate pending_irq structs for LPIs

2017-06-14 Thread Andre Przywara
For the same reason that allocating a struct irq_desc for each
possible LPI is not an option, having a struct pending_irq for each LPI
is also not feasible. We only care about mapped LPIs, so we can get away
with having struct pending_irq's only for them.
Maintain a radix tree per domain where we drop the pointer to the
respective pending_irq. The index used is the virtual LPI number.
The memory for the actual structures has been allocated already per
device at device mapping time.
Teach the existing VGIC functions to find the right pointer when being
given a virtual LPI number.

Signed-off-by: Andre Przywara 
Acked-by: Julien Grall 
Reviewed-by: Stefano Stabellini 
---
 xen/arch/arm/vgic-v2.c   |  8 
 xen/arch/arm/vgic-v3.c   | 30 ++
 xen/arch/arm/vgic.c  |  2 ++
 xen/include/asm-arm/domain.h |  2 ++
 xen/include/asm-arm/vgic.h   |  2 ++
 5 files changed, 44 insertions(+)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 9fa42e1..488e6fa 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -705,10 +705,18 @@ static void vgic_v2_domain_free(struct domain *d)
 /* Nothing to be cleanup for this driver */
 }
 
+static struct pending_irq *vgic_v2_lpi_to_pending(struct domain *d,
+  unsigned int vlpi)
+{
+/* Dummy function, no LPIs on a VGICv2. */
+BUG();
+}
+
 static const struct vgic_ops vgic_v2_ops = {
 .vcpu_init   = vgic_v2_vcpu_init,
 .domain_init = vgic_v2_domain_init,
 .domain_free = vgic_v2_domain_free,
+.lpi_to_pending = vgic_v2_lpi_to_pending,
 .max_vcpus = 8,
 };
 
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index 474cca7..9dee2df 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -1457,6 +1457,9 @@ static int vgic_v3_domain_init(struct domain *d)
 d->arch.vgic.nr_regions = rdist_count;
 d->arch.vgic.rdist_regions = rdist_regions;
 
+rwlock_init(&d->arch.vgic.pend_lpi_tree_lock);
+radix_tree_init(&d->arch.vgic.pend_lpi_tree);
+
 /*
  * Domain 0 gets the hardware address.
  * Guests get the virtual platform layout.
@@ -1545,14 +1548,41 @@ static int vgic_v3_domain_init(struct domain *d)
 static void vgic_v3_domain_free(struct domain *d)
 {
 vgic_v3_its_free_domain(d);
+/*
+ * It is expected that at this point all actual ITS devices have been
+ * cleaned up already. The struct pending_irq's, for which the pointers
+ * have been stored in the radix tree, are allocated and freed by device.
+ * On device unmapping all the entries are removed from the tree and
+ * the backing memory is freed.
+ */
+radix_tree_destroy(&d->arch.vgic.pend_lpi_tree, NULL);
 xfree(d->arch.vgic.rdist_regions);
 }
 
+/*
+ * Looks up a virtual LPI number in our tree of mapped LPIs. This will return
+ * the corresponding struct pending_irq, which we also use to store the
+ * enabled and pending bit plus the priority.
+ * Returns NULL if an LPI cannot be found (or no LPIs are supported).
+ */
+static struct pending_irq *vgic_v3_lpi_to_pending(struct domain *d,
+  unsigned int lpi)
+{
+struct pending_irq *pirq;
+
+read_lock(&d->arch.vgic.pend_lpi_tree_lock);
+pirq = radix_tree_lookup(&d->arch.vgic.pend_lpi_tree, lpi);
+read_unlock(&d->arch.vgic.pend_lpi_tree_lock);
+
+return pirq;
+}
+
 static const struct vgic_ops v3_ops = {
 .vcpu_init   = vgic_v3_vcpu_init,
 .domain_init = vgic_v3_domain_init,
 .domain_free = vgic_v3_domain_free,
 .emulate_reg  = vgic_v3_emulate_reg,
+.lpi_to_pending = vgic_v3_lpi_to_pending,
 /*
  * We use both AFF1 and AFF0 in (v)MPIDR. Thus, the max number of CPU
  * that can be supported is up to 4096(==256*16) in theory.
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 9cc9563..cb7ab3b 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -469,6 +469,8 @@ struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
  * are used for SPIs; the rests are used for per cpu irqs */
 if ( irq < 32 )
 n = &v->arch.vgic.pending_irqs[irq];
+else if ( is_lpi(irq) )
+n = v->domain->arch.vgic.handler->lpi_to_pending(v->domain, irq);
 else
 n = &v->domain->arch.vgic.pending_irqs[irq - 32];
 return n;
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 7c3829d..3d8e84c 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -111,6 +111,8 @@ struct arch_domain
 uint32_t rdist_stride;  /* Re-Distributor stride */
 struct rb_root its_devices; /* Devices mapped to an ITS */
 spinlock_t its_devices_lock;/* Protects the its_devices tree */
+struct radix_tree_root pend_lpi_tree; /* Stores struct pending_irq's */
+rwlock_t 

[Xen-devel] [PATCH v12 08/34] ARM: GIC: Add checks for NULL pointer pending_irq's

2017-06-14 Thread Andre Przywara
For LPIs the struct pending_irq's are dynamically allocated and the
pointers will be stored in a radix tree. Since an LPI can be "unmapped"
at any time, teach the VGIC how to deal with irq_to_pending() returning
a NULL pointer.
We just do nothing in this case or clean up the LR if the virtual LPI
number was still in an LR.

Those are all call sites for irq_to_pending(), as per:
"git grep irq_to_pending", and their evaluations:
(PROTECTED means: added NULL check and bailing out)

xen/arch/arm/gic.c:
gic_route_irq_to_guest(): only called for SPIs, added ASSERT()
gic_remove_irq_from_guest(): only called for SPIs, added ASSERT()
gic_remove_from_lr_pending(): PROTECTED, called within VCPU VGIC lock
gic_raise_inflight_irq(): PROTECTED, called under VCPU VGIC lock
gic_raise_guest_irq(): PROTECTED, called under VCPU VGIC lock
gic_update_one_lr(): PROTECTED, called under VCPU VGIC lock

xen/arch/arm/vgic.c:
vgic_migrate_irq(): not called for LPIs (virtual IRQs), added ASSERT()
arch_move_irqs(): not iterating over LPIs, LPI ASSERT already in place
vgic_disable_irqs(): not called for LPIs, added ASSERT()
vgic_enable_irqs(): not called for LPIs, added ASSERT()
vgic_vcpu_inject_irq(): PROTECTED, moved under VCPU VGIC lock

xen/include/asm-arm/event.h:
local_events_need_delivery_nomask(): only called for a PPI, added ASSERT()

xen/include/asm-arm/vgic.h:
(prototype)

Signed-off-by: Andre Przywara 
Reviewed-by: Julien Grall 
Acked-by: Stefano Stabellini 
---
 xen/arch/arm/gic.c  | 26 --
 xen/arch/arm/vgic.c | 21 +
 xen/include/asm-arm/event.h |  3 +++
 3 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index a59591d..e1dfd66 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -148,6 +148,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
 /* Caller has already checked that the IRQ is an SPI */
 ASSERT(virq >= 32);
 ASSERT(virq < vgic_num_irqs(d));
+ASSERT(!is_lpi(virq));
 
 vgic_lock_rank(v_target, rank, flags);
 
@@ -184,6 +185,7 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
 ASSERT(spin_is_locked(&desc->lock));
 ASSERT(test_bit(_IRQ_GUEST, &desc->status));
 ASSERT(p->desc == desc);
+ASSERT(!is_lpi(virq));
 
 vgic_lock_rank(v_target, rank, flags);
 
@@ -420,6 +422,10 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
 {
 struct pending_irq *n = irq_to_pending(v, virtual_irq);
 
+/* If an LPI has been removed meanwhile, there is nothing left to raise. */
+if ( unlikely(!n) )
+return;
+
 ASSERT(spin_is_locked(&v->arch.vgic.lock));
 
 if ( list_empty(&n->lr_queue) )
@@ -439,20 +445,25 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int virtual_irq,
 {
 int i;
 unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;
+struct pending_irq *p = irq_to_pending(v, virtual_irq);
 
 ASSERT(spin_is_locked(&v->arch.vgic.lock));
 
+if ( unlikely(!p) )
+/* An unmapped LPI does not need to be raised. */
+return;
+
 if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
 {
 i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
 if (i < nr_lrs) {
 set_bit(i, &this_cpu(lr_mask));
-gic_set_lr(i, irq_to_pending(v, virtual_irq), GICH_LR_PENDING);
+gic_set_lr(i, p, GICH_LR_PENDING);
 return;
 }
 }
 
-gic_add_to_lr_pending(v, irq_to_pending(v, virtual_irq));
+gic_add_to_lr_pending(v, p);
 }
 
 static void gic_update_one_lr(struct vcpu *v, int i)
@@ -467,6 +478,17 @@ static void gic_update_one_lr(struct vcpu *v, int i)
 gic_hw_ops->read_lr(i, &lr_val);
 irq = lr_val.virq;
 p = irq_to_pending(v, irq);
+/* An LPI might have been unmapped, in which case we just clean up here. */
+if ( unlikely(!p) )
+{
+ASSERT(is_lpi(irq));
+
+gic_hw_ops->clear_lr(i);
+clear_bit(i, &this_cpu(lr_mask));
+
+return;
+}
+
 if ( lr_val.state & GICH_LR_ACTIVE )
 {
 set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 9771463..9cc9563 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -236,6 +236,9 @@ bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
 unsigned long flags;
 struct pending_irq *p;
 
+/* This will never be called for an LPI, as we don't migrate them. */
+ASSERT(!is_lpi(irq));
+
 spin_lock_irqsave(&old->arch.vgic.lock, flags);
 
 p = irq_to_pending(old, irq);
@@ -320,6 +323,9 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 int i = 0;
 struct vcpu *v_target;
 
+/* LPIs will never be disabled via this function. */
+ASSERT(!is_lpi(32 * n + 31));
+
 while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
 irq = i + (32 * n);
   

[Xen-devel] [PATCH v12 07/34] ARM: vGIC: introduce gic_remove_irq_from_queues()

2017-06-14 Thread Andre Przywara
To avoid code duplication in a later patch, introduce a generic function
to remove a virtual IRQ from the VGIC.
Call that function instead of the open-coded version in vgic_migrate_irq().

Signed-off-by: Andre Przywara 
---
 xen/arch/arm/gic.c| 9 +
 xen/arch/arm/vgic.c   | 4 +---
 xen/include/asm-arm/gic.h | 1 +
 3 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 6c0c9c3..a59591d 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -407,6 +407,15 @@ void gic_remove_from_lr_pending(struct vcpu *v, struct pending_irq *p)
 list_del_init(&p->lr_queue);
 }
 
+void gic_remove_irq_from_queues(struct vcpu *v, struct pending_irq *p)
+{
+ASSERT(spin_is_locked(&v->arch.vgic.lock));
+
+clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
+list_del_init(&p->inflight);
+gic_remove_from_lr_pending(v, p);
+}
+
 void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq)
 {
 struct pending_irq *n = irq_to_pending(v, virtual_irq);
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index f2f423f..9771463 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -266,9 +266,7 @@ bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
 /* If the IRQ is still lr_pending, re-inject it to the new vcpu */
 if ( !list_empty(&p->lr_queue) )
 {
-clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
-list_del_init(&p->lr_queue);
-list_del_init(&p->inflight);
+gic_remove_irq_from_queues(old, p);
 irq_set_affinity(p->desc, cpumask_of(new->processor));
 spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
 vgic_vcpu_inject_irq(new, irq);
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 3130634..7b2e98c 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -244,6 +244,7 @@ extern void gic_raise_guest_irq(struct vcpu *v, unsigned int irq,
 unsigned int priority);
 extern void gic_raise_inflight_irq(struct vcpu *v, unsigned int virtual_irq);
 extern void gic_remove_from_lr_pending(struct vcpu *v, struct pending_irq *p);
+extern void gic_remove_irq_from_queues(struct vcpu *v, struct pending_irq *p);
 
 /* Accept an interrupt from the GIC and dispatch its handler */
 extern void gic_interrupt(struct cpu_user_regs *regs, int is_fiq);
-- 
2.9.0


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v11 10/34] ARM: GIC: export and extend vgic_init_pending_irq()

2017-06-14 Thread Julien Grall

Hi Andre,

On 06/14/2017 04:54 PM, Andre Przywara wrote:

Hi,

On 12/06/17 16:36, Julien Grall wrote:

Hi Andre,

On 09/06/17 18:41, Andre Przywara wrote:

For LPIs we later want to dynamically allocate struct pending_irqs.
So besides needing to initialize the struct from there, we also need
to clean it up and re-initialize it later on.
Export vgic_init_pending_irq() and extend it to be reusable.

Signed-off-by: Andre Przywara 
---
  xen/arch/arm/vgic.c| 4 +++-
  xen/include/asm-arm/vgic.h | 1 +
  2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 2e4820f..7e8dba6 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -60,8 +60,10 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq)
  return vgic_get_rank(v, rank);
  }

-static void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
+void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
  {
+memset(p, 0, sizeof(*p));


So for initialization, we will clear the memory twice which looks rather
pointless (see the current caller).

We probably want to drop the memset or replace xzalloc by xalloc in the
caller. I would be ok to see this change in a follow-up patch. Assuming
you will send a patch:


So I checked the callers and now moved the memset from here to
its_discard_event(), just before the call to vgic_init_pending_irq().
That should be safe, because:
1) For the existing code (initialising SGIs/PPIs and SPIs) we always
zero pending_irq anyway, either by xzalloc or by an explicit memset.
2) The call in its_discard_event() now has an explicit memset before the
call.
3) Allocating struct pending_irqs for LPI upon mapping a device already
uses xzalloc, so they are initially zeroed. Before we re-use a struct,
we call its_discard_event(), which zeroes it as described in 2)


The place I am most concerned about is the MAPTI handling, because you
would call vgic_init_pending_irq() assuming the memory has already been
zeroed. It is not straightforward, when looking at the code, to see who
did that.


I would prefer to keep the memset in vgic_init_pending_irq and avoid it 
in the caller. This is more future proof.



So I merged the change (remove memset here, put it in
its_discard_event()) into the new series.
Please tell me if that is too dangerous and I can back it out again.
Let's go for a follow-up patch rather than doing it in this series. I
don't want to delay the series just for that.


Cheers,

--
Julien Grall



[Xen-devel] [xen-4.9-testing test] 110417: tolerable FAIL - PUSHED

2017-06-14 Thread osstest service owner
flight 110417 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110417/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-localmigrate/x10 fail like 110374
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stopfail like 110392
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stopfail like 110392
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop fail like 110392
 test-amd64-amd64-xl-rtds  9 debian-install   fail  like 110392
 test-armhf-armhf-xl-rtds 15 guest-start/debian.repeatfail  like 110392
 test-amd64-amd64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64  9 windows-installfail never pass
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-xsm  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64  9 windows-installfail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-arm64-arm64-libvirt-xsm 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-xsm  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 saverestore-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  12 migrate-support-checkfail   never pass
 test-arm64-arm64-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  11 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemut-win10-i386  9 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386  9 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386  9 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386  9 windows-install fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64  9 windows-install fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64  9 windows-install fail never pass

version targeted for testing:
 xen  91503b282eff582d74927ed25668fae65fd228ba
baseline version:
 xen  89b71d14621850c6c4b87a2cb3476efb069aeca9

Last test of basis   110392  2017-06-13 01:47:18 Z1 days
Testing same since   110417  2017-06-13 21:31:36 Z0 days1 attempts


People who touched revisions under test:
  Armando Vega 
  Ian Jackson 
  Jan Beulich 
  Wei Liu 

jobs:
 build-amd64-xsm   

Re: [Xen-devel] [PATCH v11 10/34] ARM: GIC: export and extend vgic_init_pending_irq()

2017-06-14 Thread Andre Przywara
Hi,

On 12/06/17 16:36, Julien Grall wrote:
> Hi Andre,
> 
> On 09/06/17 18:41, Andre Przywara wrote:
>> For LPIs we later want to dynamically allocate struct pending_irqs.
>> So besides needing to initialize the struct from there, we also need
>> to clean it up and re-initialize it later on.
>> Export vgic_init_pending_irq() and extend it to be reusable.
>>
>> Signed-off-by: Andre Przywara 
>> ---
>>  xen/arch/arm/vgic.c| 4 +++-
>>  xen/include/asm-arm/vgic.h | 1 +
>>  2 files changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>> index 2e4820f..7e8dba6 100644
>> --- a/xen/arch/arm/vgic.c
>> +++ b/xen/arch/arm/vgic.c
>> @@ -60,8 +60,10 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq)
>>  return vgic_get_rank(v, rank);
>>  }
>>
>> -static void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
>> +void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
>>  {
>> +memset(p, 0, sizeof(*p));
> 
> So for initialization, we will clear the memory twice which looks rather
> pointless (see the current caller).
> 
> We probably want to drop the memset or replace xzalloc by xalloc in the
> caller. I would be ok to see this change in a follow-up patch. Assuming
> you will send a patch:

So I checked the callers and now moved the memset from here to
its_discard_event(), just before the call to vgic_init_pending_irq().
That should be safe, because:
1) For the existing code (initialising SGIs/PPIs and SPIs) we always
zero pending_irq anyway, either by xzalloc or by an explicit memset.
2) The call in its_discard_event() now has an explicit memset before the
call.
3) Allocating struct pending_irqs for LPI upon mapping a device already
uses xzalloc, so they are initially zeroed. Before we re-use a struct,
we call its_discard_event(), which zeroes it as described in 2)

So I merged the change (remove memset here, put it in
its_discard_event()) into the new series.
Please tell me if that is too dangerous and I can back it out again.

Cheers,
Andre.



[Xen-devel] [xen-unstable-smoke test] 110440: tolerable trouble: broken/pass - PUSHED

2017-06-14 Thread osstest service owner
flight 110440 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/110440/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  c55667bd0ad8f04688abfd5c6317709dc00f88ab
baseline version:
 xen  3db971fa33fa2ee3989859b455213bb33bac7e05

Last test of basis   110436  2017-06-14 10:01:12 Z0 days
Testing same since   110440  2017-06-14 13:01:40 Z0 days1 attempts


People who touched revisions under test:
  Ian Jackson 
  Jan Beulich 
  Konrad Rzeszutek Wilk 
  Wei Liu 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-arm64-arm64-xl-xsm  broken  
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable-smoke
+ revision=c55667bd0ad8f04688abfd5c6317709dc00f88ab
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x '!=' x/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/home/osstest/repos/lock
++ exec with-lock-ex -w /home/osstest/repos/lock ./ap-push xen-unstable-smoke 
c55667bd0ad8f04688abfd5c6317709dc00f88ab
+ branch=xen-unstable-smoke
+ revision=c55667bd0ad8f04688abfd5c6317709dc00f88ab
+ . ./cri-lock-repos
++ . ./cri-common
+++ . ./cri-getconfig
+++ umask 002
+++ getrepos
 getconfig Repos
 perl -e '
use Osstest;
readglobalconfig();
print $c{"Repos"} or die $!;
'
+++ local repos=/home/osstest/repos
+++ '[' -z /home/osstest/repos ']'
+++ '[' '!' -d /home/osstest/repos ']'
+++ echo /home/osstest/repos
++ repos=/home/osstest/repos
++ repos_lock=/home/osstest/repos/lock
++ '[' x/home/osstest/repos/lock '!=' x/home/osstest/repos/lock ']'
+ . ./cri-common
++ . ./cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable-smoke
+ qemuubranch=qemu-upstream-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ '[' xqemu-upstream-unstable = x ']'
+ select_prevxenbranch
++ ./cri-getprevxenbranch xen-unstable-smoke
+ prevxenbranch=xen-4.9-testing
+ '[' xc55667bd0ad8f04688abfd5c6317709dc00f88ab = x ']'
+ : tested/2.6.39.x
+ . ./ap-common
++ : osst...@xenbits.xen.org
+++ getconfig OsstestUpstream
+++ perl -e '
use Osstest;
readglobalconfig();
print $c{"OsstestUpstream"} or die $!;
'
++ :
++ : git://xenbits.xen.org/xen.git
++ : osst...@xenbits.xen.org:/home/xen/git/xen.git
++ : git://xenbits.xen.org/qemu-xen-traditional.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/xtf.git
++ : osst...@xenbits.xen.org:/home/xen/git/xtf.git
++ : git://xenbits.xen.org/xtf.git
++ : git://xenbits.xen.org/libvirt.git
++ : osst...@xenbits.xen.org:/home/xen/git/libvirt.git
++ : git://xenbits.xen.org/libvirt.git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : git
++ : git://xenbits.xen.org/osstest/rumprun.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/rumprun.git
++ : git://git.seabios.org/seabios.git
++ : osst...@xenbits.xen.org:/home/xen/git/osstest/seabios.git
++ : git://xenbits.xen.org/osstest/seabios.git
++ : 

Re: [Xen-devel] [PATCH 2/2] arm: traps: handle PSCI calls inside `smccc.c`

2017-06-14 Thread Julien Grall



On 06/14/2017 03:37 PM, Volodymyr Babchuk wrote:

Hi Julien,


Hi Volodymyr,



On 14.06.17 17:21, Julien Grall wrote:

PSCI is part of the HVC/SMC interface, so it should be handled in the
appropriate place: `smccc.c`. This patch just moves the PSCI handler
calls from `traps.c` to `smccc.c`.

PSCI is considered as two different "services" in terms of SMCCC.
The older PSCI 1.0 is treated as an "architecture service", while the
newer PSCI 2.0 is defined as a "standard secure service".

Also old accessors PSCI_ARG() and PSCI_RESULT_REG() were replaced
with generic set_user_reg()/get_user_reg() functions.
This is a call to split the patch into multiple small ones to ease the
review.


I like the idea of using SMCC for PSCI, and will review the code when 
it will be split.


Okay, then I'll will send a separate patch that reworks PSCI code in 
traps.c, because this change is not relevant for SMCCC patch series.


I would be ok if you append it to this series. After all, it is clean-up
to implement SMCC properly :).


Cheers,

--
Julien Grall



[Xen-devel] [ovmf baseline-only test] 71563: tolerable FAIL

2017-06-14 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 71563 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71563/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 build-amd64-libvirt   5 libvirt-buildfail   like 71560
 build-i386-libvirt5 libvirt-buildfail   like 71560

version targeted for testing:
 ovmf 46e2632b4e873dc191bf008c95b47340c8957a47
baseline version:
 ovmf 983f59932db28ae37b9f9e545c1258bc59aa71ca

Last test of basis71560  2017-06-13 18:22:27 Z0 days
Testing same since71563  2017-06-14 13:21:53 Z0 days1 attempts


People who touched revisions under test:
  Ruiyu Ni 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64  pass



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.


commit 46e2632b4e873dc191bf008c95b47340c8957a47
Author: Ruiyu Ni 
Date:   Tue Jun 13 16:23:18 2017 +0800

ShellBinPkg: Ia32/X64 Shell binary update.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ruiyu Ni 



Re: [Xen-devel] About the parameter list of tools/libxc/xc_domain.c:xc_domain_add_to_physmap()

2017-06-14 Thread Zhongze Liu
I didn't see your mail. Sorry.

2017-06-14 23:13 GMT+08:00 Zhongze Liu :
> 2017-06-14 22:42 GMT+08:00 Wei Liu :
>> On Wed, Jun 14, 2017 at 09:19:23PM +0800, Zhongze Liu wrote:
>>> Hi Xen developers,
>>>
>>> In tools/libxc/xc_domain.c:xc_domain_add_to_physmap() the .size field
>>> of the xen_add_to_physmap
>>> struct can't be controlled by the caller through the parameter list
>>> and wasn't explicitly initialized ( and
>>> thus default to zero ). This implicitly prevents the caller from doing
>>> an XENMEMSPACE_gmfn_range-
>>> call. Is it a mistake or is it intentionally done so?
>>
>>
>> The size parameter doesn't make much sense to me because you can only
>> specify one gpfn in xen_add_to_physmap_t. I guess that's an oversight
>> when designing the interface. But we couldn't change it once it was
>> released.
>>
>
> But according to lines 725 and 741 of xen/common/memory.c, the function
> maps a range from gpfn to gpfn+size into tdom.
>
>>
>> I guess what you really need is the _batch variant.
>
> Cheers,
>
> Zhongze Liu



Re: [Xen-devel] About the parameter list of tools/libxc/xc_domain.c:xc_domain_add_to_physmap()

2017-06-14 Thread Zhongze Liu
2017-06-14 22:42 GMT+08:00 Wei Liu :
> On Wed, Jun 14, 2017 at 09:19:23PM +0800, Zhongze Liu wrote:
>> Hi Xen developers,
>>
>> In tools/libxc/xc_domain.c:xc_domain_add_to_physmap() the .size field
>> of the xen_add_to_physmap
>> struct can't be controlled by the caller through the parameter list
>> and wasn't explicitly initialized ( and
>> thus default to zero ). This implicitly prevents the caller from doing
>> an XENMEMSPACE_gmfn_range-
>> call. Is it a mistake or is it intentionally done so?
>
>
> The size parameter doesn't make much sense to me because you can only
> specify one gpfn in xen_add_to_physmap_t. I guess that's an oversight
> when designing the interface. But we couldn't change it once it was
> released.
>

But according to lines 725 and 741 of xen/common/memory.c, the function
maps a range from gpfn to gpfn+size into tdom.

>
> I guess what you really need is the _batch variant.

Cheers,

Zhongze Liu



Re: [Xen-devel] About the parameter list of tools/libxc/xc_domain.c:xc_domain_add_to_physmap()

2017-06-14 Thread Wei Liu
On Wed, Jun 14, 2017 at 03:42:34PM +0100, Wei Liu wrote:
> On Wed, Jun 14, 2017 at 09:19:23PM +0800, Zhongze Liu wrote:
> > Hi Xen developers,
> > 
> > In tools/libxc/xc_domain.c:xc_domain_add_to_physmap() the .size field
> > of the xen_add_to_physmap
> > struct can't be controlled by the caller through the parameter list
> > and wasn't explicitly initialized ( and
> > thus default to zero ). This implicitly prevents the caller from doing
> > an XENMEMSPACE_gmfn_range-
> > call. Is it a mistake or is it intentionally done so?
> 
> 
> The size parameter doesn't make much sense to me because you can only
> specify one gpfn in xen_add_to_physmap_t. I guess that's an oversight
> when designing the interface. But we couldn't change it once it was
> released.

After reading the code more carefully, it seems that you can specify the
size parameter to get Xen to insert mappings for the range of pages from
gpfn to gpfn + size in the guest address space.

We don't use the interface like that in tree, but you can change the
code to do that if necessary. Just introduce a new function.

In any case, the batch function should work.



Re: [Xen-devel] [PATCH v3 2/4] xen: add sysfs node for guest type

2017-06-14 Thread Juergen Gross
On 14/06/17 17:01, Boris Ostrovsky wrote:
> On 06/14/2017 11:00 AM, Juergen Gross wrote:
>> On 14/06/17 16:48, Boris Ostrovsky wrote:
 diff --git a/drivers/xen/sys-hypervisor.c b/drivers/xen/sys-hypervisor.c
 index 84106f9c456c..d641e9970d5d 100644
 --- a/drivers/xen/sys-hypervisor.c
 +++ b/drivers/xen/sys-hypervisor.c
 @@ -50,6 +50,18 @@ static int __init xen_sysfs_type_init(void)
 return sysfs_create_file(hypervisor_kobj, &type_attr.attr);
  }
  
 +static ssize_t guest_type_show(struct hyp_sysfs_attr *attr, char *buffer)
 +{
 +  return sprintf(buffer, "%s\n", xen_guest_type);
 +}
>>>
>>> So I know I gave my R-b for this patch but can't we just key off
>>> xen_domain_type and not have xen_guest_type at all?
>> So we'd need to introduce XEN_PVH_DOMAIN and adjust xen_hvm_domain().
> 
> Can't we use xen_pvh_domain()?

Sure. I thought you meant to have the needed information all in
xen_domain_type.

I'll adjust the patch.


Juergen


