[Xen-devel] [qemu-mainline test] 116173: regressions - FAIL

2017-11-14 Thread osstest service owner
flight 116173 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116173/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm  20 guest-start/debian.repeat fail REGR. vs. 116126
 test-armhf-armhf-xl-cubietruck  6 xen-installfail REGR. vs. 116126

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2 17 guest-start/debian.repeatfail like 116107
 test-armhf-armhf-xl-vhd  15 guest-start/debian.repeatfail  like 116107
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116126
 test-amd64-amd64-xl-qcow219 guest-start/debian.repeatfail  like 116126
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116126
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 116126
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116126
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116126
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116126
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 qemuu 1fa0f627d03cd0d0755924247cafeb42969016bf
baseline version:
 qemuu 4ffa88c99c54d2a30f79e3dbecec50b023eff1c8

Last test of basis   116126  2017-11-13 00:49:34 Z    2 days
Failing since        116146  2017-11-13 18:53:48 Z    1 days    3 attempts
Testing same since   116173  2017-11-14 22:20:27 Z    0 days    1 attempts


People who touched revisions under test:
  Alberto Garcia 
  Alex Bennée 
  Alexey Kardashevskiy 
  Alistair Francis 
  Christian Borntraeger 
  Cornelia Huck 
  David Gibson 
  Emilio G. Cota 
  Eric Blake 
  Fam Zheng 
  Gerd Hoffmann 
  Greg Kurz 
  Jason Wang 
  Jeff Cody 
  Jens Freimann 
  Mao Zhongyi 
  Max Reitz 
  Mike Nawrocki 
  Peter Maydell 
  Philippe Mathieu-Daudé 
  Prasad J Pandit 
  Richard Henderson 
 

Re: [Xen-devel] Unable to create guest PV domain on OMAP5432

2017-11-14 Thread Jayadev Kumaran
Hello Andrii,

>> BTW, what is your dom0 system? Does it have bash?
> *dom0 uses a modified kernel(3.15) with Xen support and  default omap fs*


I made certain changes to my configuration file. Instead of trying to use a
disk, I want to bring the guest domain up from a ramdisk image. My new
configuration file looks like:

"
name = "android"

kernel = "/home/root/android/kernel"
ramdisk = "/home/root/android/ramdisk.img"
#bootloader = "/usr/lib/xen-4.4/bin/pygrub"

memory = 512
vcpus = 1

device_model_version = 'qemu-xen-traditional'

extra = "console=hvc0 rw init=/bin/sh earlyprintk=xenboot"

"

I'm able to create a guest domain as well.

root@omap5-evm:~# xl -vvv create android.cfg
Parsing config from android.cfg
libxl: debug: libxl_create.c:1646:do_domain_create: Domain 0:ao 0x46e30: create: how=(nil) callback=(nil) poller=0x46e90
libxl: debug: libxl_arm.c:87:libxl__arch_domain_prepare_config: Configure the domain
libxl: debug: libxl_arm.c:90:libxl__arch_domain_prepare_config:  - Allocate 0 SPIs
libxl: debug: libxl_create.c:987:initiate_domain_create: Domain 1:running bootloader
libxl: debug: libxl_bootloader.c:335:libxl__bootloader_run: Domain 1:no bootloader configured, using user supplied kernel
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x47780: deregister unregistered
(XEN) grant_table.c:1688:d0v0 Expanding d1 grant table from 0 to 1 frames
domainbuilder: detail: xc_dom_allocate: cmdline="console=hvc0 rw init=/bin/sh earlyprintk=xenboot", features=""
libxl: debug: libxl_dom.c:779:libxl__build_pv: pv kernel mapped 0 path /home/root/android/kernel
domainbuilder: detail: xc_dom_kernel_file: filename="/home/root/android/kernel"
domainbuilder: detail: xc_dom_malloc_filemap: 4782 kB
domainbuilder: detail: xc_dom_ramdisk_file: filename="/home/root/android/ramdisk.img"
domainbuilder: detail: xc_dom_malloc_filemap: 179 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.10, caps xen-3.0-armv7l
domainbuilder: detail: xc_dom_rambase_init: RAM starts at 4
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux zImage (ARM64) loader ...
domainbuilder: detail: xc_dom_probe_zimage64_kernel: kernel is not an arm64 Image
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux zImage (ARM32) loader ...
domainbuilder: detail: loader probe OK
domainbuilder: detail: xc_dom_parse_zimage32_kernel: called
domainbuilder: detail: xc_dom_parse_zimage32_kernel: xen-3.0-armv7l: 0x40008000 -> 0x404b3b28
libxl: debug: libxl_arm.c:866:libxl__prepare_dtb: constructing DTB for Xen version 4.10 guest
libxl: debug: libxl_arm.c:867:libxl__prepare_dtb:  - vGIC version: V2
libxl: debug: libxl_arm.c:321:make_chosen_node: /chosen/bootargs = console=hvc0 rw init=/bin/sh earlyprintk=xenboot
libxl: debug: libxl_arm.c:328:make_chosen_node: /chosen adding placeholder linux,initrd properties
libxl: debug: libxl_arm.c:441:make_memory_nodes: Creating placeholder node /memory@4000
libxl: debug: libxl_arm.c:441:make_memory_nodes: Creating placeholder node /memory@2
libxl: debug: libxl_arm.c:964:libxl__prepare_dtb: fdt total size 1394
domainbuilder: detail: xc_dom_devicetree_mem: called
libxl: debug: libxl_arm.c:1005:libxl__arch_domain_init_hw_description: Generating ACPI tables is disabled by user.
domainbuilder: detail: xc_dom_mem_init: mem 512 MB, pages 0x2 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x2 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: set_mode: guest xen-3.0-armv7l, address size 32
domainbuilder: detail: xc_dom_malloc: 1024 kB
domainbuilder: detail: populate_guest_memory: populating RAM @ 4000-6000 (512MB)
domainbuilder: detail: populate_one_size: populated 0x100/0x100 entries with shift 9
domainbuilder: detail: meminit: placing boot modules at 0x4800
domainbuilder: detail: meminit: ramdisk: 0x4800 -> 0x4802d000
domainbuilder: detail: meminit: devicetree: 0x4802d000 -> 0x4802e000
libxl: debug: libxl_arm.c:1073:libxl__arch_domain_finalise_hw_description: /chosen updating initrd properties to cover 4800-4802d000
libxl: debug: libxl_arm.c:1039:finalise_one_node: Populating placeholder node /memory@4000
libxl: debug: libxl_arm.c:1033:finalise_one_node: Nopping out placeholder node /memory@2
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x40008+0x4ac at 0xb6098000
domainbuilder: detail: xc_dom_alloc_segment:   kernel   : 0x40008000 -> 0x404b4000  (pfn 0x40008 + 0x4ac pages)
domainbuilder: detail: xc_dom_load_zimage_kernel: called
domainbuilder: detail: xc_dom_load_zimage_kernel: kernel seg 0x40008000-0x404b4000
domainbuilder: detail:

[Xen-devel] [seabios test] 116168: regressions - FAIL

2017-11-14 Thread osstest service owner
flight 116168 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116168/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 115539

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail pass in 
116148

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop  fail in 116148 like 115539
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115539
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 115539
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 seabios  63451fca13c75870e1703eb3e20584d91179aebc
baseline version:
 seabios  0ca6d6277dfafc671a5b3718cbeb5c78e2a888ea

Last test of basis   115539  2017-11-03 20:48:58 Z   11 days
Testing same since   115733  2017-11-10 17:19:59 Z    4 days    8 attempts


People who touched revisions under test:
  Kevin O'Connor 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-qemuu-nested-amdfail
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-ws16-amd64 fail
 test-amd64-i386-xl-qemuu-ws16-amd64  fail
 test-amd64-amd64-xl-qemuu-win10-i386 fail
 test-amd64-i386-xl-qemuu-win10-i386  fail
 test-amd64-amd64-qemuu-nested-intel  pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 63451fca13c75870e1703eb3e20584d91179aebc
Author: Kevin O'Connor 
Date:   Fri Nov 10 11:49:19 2017 -0500

docs: Note v1.11.0 release

Signed-off-by: Kevin O'Connor 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC PATCH 00/31] CPUFreq on ARM

2017-11-14 Thread Jassi Brar
On 15 November 2017 at 02:16, Oleksandr Tyshchenko  wrote:
> On Tue, Nov 14, 2017 at 12:49 PM, Andre Przywara
>  wrote:
>

> 3. Directly ported the SCPI protocol, mailbox infrastructure and the ARM SMC
> triggered mailbox driver. All components except the mailbox driver are in
> mainline Linux.

 Why do you actually need this mailbox framework?
>
It is unnecessary if you are always going to use one particular signal
mechanism, say SMC. However ...

 Actually I just
 proposed the SMC driver to make it fit into the Linux framework. All we
 actually need for SCPI is to write a simple command into some memory and
 "press a button". I don't see a need to import the whole Linux
 framework, especially as our mailbox usage is actually just a corner
 case of the mailbox's capability (namely a "single-bit" doorbell).
 The SMC use case is trivial to implement, and I believe using the Juno
 mailbox is similarly simple, for instance.
>
... It's going to be SMC and MHU now... and you talk about Rockchip as
well later. That becomes unwieldy.


>>
>>> The protocol relies on the mailbox feature, so I ported the mailbox too. I think
>>> it would be much easier for me to just add
>>> handling for a few required commands, issuing an SMC call directly, without any
>>> mailbox infrastructure involved.
>>> But I want to show what is going on and where these things come from.
>>
>> I appreciate that, but I think we already have enough "bloated" Linux +
>> glue code in Xen. And in particular the Linux mailbox framework is much
>> more powerful than we need for SCPI, so we have a lot of unneeded
>> functionality.
>
That is a painful misconception.
The mailbox API is designed to be (almost) transparently lightweight. Please
have a look at mbox_send_message() and see how negligible the overhead it
adds is for the "SMC controller" you compare against here: just integer
manipulations protected by a spinlock.
Of course, if your protocol needs async messaging you pay the price, but
that is only fair.


>> If we just want to support CPUfreq using SCPI via SMC/Juno MHU/Rockchip
>> mailbox, we can get away with a *much* simpler solution.
>
> Agree, but I am afraid that simplifying things now might lead to some
> difficulties when there is a need
> to integrate a slightly different mailbox IP. Also, we need to
> recheck whether SCMI, which we might want to support as well,
> has a similar mailbox interface.
>
Exactly.


>> - We would need to port mailbox drivers one-by-one anyway, so we could
>> as well implement the simple "press-the-button" subset for each mailbox
>> separately.
>
Is it about virtual controller?

>> The interface between the SCPI code and the mailbox is
>> probably just "signal_mailbox()".
>
After all, we should have the following to spread the nice feeling of
"supporting doorbell controllers"  :)

/* To be added to mailbox_client.h */
void signal_mailbox(struct mbox_chan *chan)
{
   /* A doorbell has no payload: sending an empty message just rings the
    * channel. */
   (void)mbox_send_message(chan, NULL);
   /* There is no completion to wait for, so report TX done straight away. */
   mbox_client_txdone(chan, 0);
}
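
For illustration only, a caller-side sketch of how such a doorbell helper
could be used for an SCPI-style request; the shared-memory pointer, payload
layout and function name below are hypothetical, not part of any proposal
here:

/* Hypothetical SCPI command path: write the command into the agreed
 * shared-memory area, then "press the button" via the doorbell helper. */
#include <linux/io.h>
#include <linux/mailbox_client.h>
#include <linux/types.h>

static void scpi_issue_command(struct mbox_chan *chan, void __iomem *shmem,
			       const void *payload, size_t len)
{
	memcpy_toio(shmem, payload, len);   /* 1. place the command */
	signal_mailbox(chan);               /* 2. ring the doorbell */
	/* 3. completion/response handling (polling or rx_callback) omitted */
}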


Cheers!

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [linux-4.1 baseline-only test] 72447: regressions - trouble: broken/fail/pass

2017-11-14 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 72447 linux-4.1 real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72447/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-i386 broken
 test-amd64-i386-freebsd10-i386  4 host-install(4)   broken REGR. vs. 72330
 test-armhf-armhf-examine 11 examine-serial/bootloader fail REGR. vs. 72330
 test-armhf-armhf-examine 12 examine-serial/kernel fail REGR. vs. 72330
 test-armhf-armhf-xl-midway   16 guest-start/debian.repeat fail REGR. vs. 72330
 test-amd64-amd64-xl-qcow219 guest-start/debian.repeat fail REGR. vs. 72330
 test-amd64-i386-xl-qemut-debianhvm-amd64 15 guest-saverestore.2 fail REGR. vs. 
72330
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 72330
 test-amd64-amd64-libvirt-vhd 17 guest-start/debian.repeat fail REGR. vs. 72330
 test-amd64-amd64-xl-qemut-win10-i386 16 guest-localmigrate/x10 fail REGR. vs. 
72330

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail REGR. vs. 72330
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail REGR. vs. 72330
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail REGR. vs. 72330

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2 17 guest-start/debian.repeat fail baseline 
untested
 test-amd64-amd64-qemuu-nested-intel 13 xen-install/l1fail blocked in 72330
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   like 72330
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail   like 72330
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   like 72330
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail like 72330
 test-amd64-amd64-examine  4 memdisk-try-append   fail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install fail never pass
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 10 windows-install fail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-midway   13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-midway   14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-installfail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win10-i386 17 guest-stop  fail never pass

version targeted for testing:
 linux 200d858d94b4d8ed7a287e3a3c2b860ae9e17e83
baseline version:
 linux b8342068e3011832d723aa379a3180d37a4d59df

Last test of basis   72330  

[Xen-devel] [linux-linus test] 116164: regressions - FAIL

2017-11-14 Thread osstest service owner
flight 116164 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116164/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore.2 fail REGR. vs. 
115643
 build-amd64-pvops 6 kernel-build fail REGR. vs. 115643

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked 
n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-rumprun-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-examine  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 115643
 test-amd64-i386-libvirt-qcow2 17 guest-start/debian.repeatfail like 115643
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115643
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 115643
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 115643
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 115643
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 

Re: [Xen-devel] [RFC v2 5/7] acpi:arm64: Add support for parsing IORT table

2017-11-14 Thread Goel, Sameer


On 11/8/2017 7:41 AM, Manish Jaggi wrote:
> Hi Sameer
> 
> On 9/21/2017 6:07 AM, Sameer Goel wrote:
>> Add support for parsing IORT table to initialize SMMU devices.
>> * The code for creating an SMMU device has been modified, so that the SMMU
>> device can be initialized.
>> * The NAMED NODE code has been commented out as this will need DOM0 kernel
>> support.
>> * ITS code has been included but it has not been tested.
>>
>> Signed-off-by: Sameer Goel 
> Follow-up of the discussions we had on IORT parsing and querying streamID and
> deviceID based on RID.
> I have extended your patchset with a patch that provides an alternative
> way of parsing the IORT into maps: {rid -> streamID}, {rid -> deviceID},
> which can be looked up directly to find the streamID for a RID. This
> will remove the need to traverse the IORT table again.
> 
> The test patch just describes the proposed flow and how the parsing and
> query code might fit in. I have not tested it.
> The code only compiles.
> 
> https://github.com/mjaggi-cavium/xen-wip/commit/df006d64bdbb5c8344de5a710da8bf64c9e8edd5
> (This repo has all 7 of your patches + test code patch merged.
> 
> Note: The commit text of the patch describes the basic flow /assumptions / 
> usage of functions.
> Please see the code along with the v2 design draft.
> [RFC] [Draft Design v2] ACPI/IORT Support in Xen.
> https://lists.xen.org/archives/html/xen-devel/2017-11/msg00512.html
> 
> I seek your advice on this. Please provide your feedback.
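
For illustration, the flat {RID -> streamID, deviceID} lookup Manish
describes above might look roughly like the following; the structure and
function names are hypothetical and not taken from the test patch:

/* Hypothetical flattened IORT mapping, built once at parse time so that
 * later queries do not need to walk the IORT again. */
#include <xen/types.h>
#include <xen/errno.h>

struct iort_rid_map {
    uint16_t rid_base;       /* first requester ID covered by this entry */
    uint32_t count;          /* number of consecutive RIDs covered */
    uint32_t streamid_base;  /* SMMU stream ID corresponding to rid_base */
    uint32_t deviceid_base;  /* ITS device ID corresponding to rid_base */
};

/* Linear search over the parsed entries; returns 0 on success. */
static int iort_map_rid(const struct iort_rid_map *map, unsigned int nr,
                        uint16_t rid, uint32_t *streamid, uint32_t *deviceid)
{
    unsigned int i;

    for ( i = 0; i < nr; i++ )
    {
        if ( rid >= map[i].rid_base && rid < map[i].rid_base + map[i].count )
        {
            *streamid = map[i].streamid_base + (rid - map[i].rid_base);
            *deviceid = map[i].deviceid_base + (rid - map[i].rid_base);
            return 0;
        }
    }

    return -ENODEV;
}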
I responded back on the other thread. I think we are fixing something that is 
not broken. I will try to post a couple of new RFCs and let's discuss this with 
incremental changes on the mailing list.

Thanks,
Sameer
> 
> Thanks
> Manish
> 
> 
>> ---
>>   xen/arch/arm/setup.c   |   3 +
>>   xen/drivers/acpi/Makefile  |   1 +
>>   xen/drivers/acpi/arm/Makefile  |   1 +
>>   xen/drivers/acpi/arm/iort.c    | 173 
>> +
>>   xen/drivers/passthrough/arm/smmu.c |   1 +
>>   xen/include/acpi/acpi_iort.h   |  17 ++--
>>   xen/include/asm-arm/device.h   |   2 +
>>   xen/include/xen/acpi.h |  21 +
>>   xen/include/xen/pci.h  |   8 ++
>>   9 files changed, 146 insertions(+), 81 deletions(-)
>>   create mode 100644 xen/drivers/acpi/arm/Makefile
>>
>> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
>> index 92f173b..4ba09b2 100644
>> --- a/xen/arch/arm/setup.c
>> +++ b/xen/arch/arm/setup.c
>> @@ -49,6 +49,7 @@
>>   #include 
>>   #include 
>>   #include 
>> +#include 
>>     struct bootinfo __initdata bootinfo;
>>   @@ -796,6 +797,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>>     tasklet_subsys_init();
>>   +    /* Parse the ACPI iort data */
>> +    acpi_iort_init();
>>     xsm_dt_init();
>>   diff --git a/xen/drivers/acpi/Makefile b/xen/drivers/acpi/Makefile
>> index 444b11d..e7ffd82 100644
>> --- a/xen/drivers/acpi/Makefile
>> +++ b/xen/drivers/acpi/Makefile
>> @@ -1,5 +1,6 @@
>>   subdir-y += tables
>>   subdir-y += utilities
>> +subdir-$(CONFIG_ARM) += arm
>>   subdir-$(CONFIG_X86) += apei
>>     obj-bin-y += tables.init.o
>> diff --git a/xen/drivers/acpi/arm/Makefile b/xen/drivers/acpi/arm/Makefile
>> new file mode 100644
>> index 000..7c039bb
>> --- /dev/null
>> +++ b/xen/drivers/acpi/arm/Makefile
>> @@ -0,0 +1 @@
>> +obj-y += iort.o
>> diff --git a/xen/drivers/acpi/arm/iort.c b/xen/drivers/acpi/arm/iort.c
>> index 2e368a6..7f54062 100644
>> --- a/xen/drivers/acpi/arm/iort.c
>> +++ b/xen/drivers/acpi/arm/iort.c
>> @@ -14,17 +14,47 @@
>>    * This file implements early detection/parsing of I/O mapping
>>    * reported to OS through firmware via I/O Remapping Table (IORT)
>>    * IORT document number: ARM DEN 0049A
>> + *
>> + * Based on Linux drivers/acpi/arm64/iort.c
>> + * => commit ca78d3173cff3503bcd15723b049757f75762d15
>> + *
>> + * Xen modification:
>> + * Sameer Goel 
>> + * Copyright (C) 2017, The Linux Foundation, All rights reserved.
>> + *
>>    */
>>   -#define pr_fmt(fmt)    "ACPI: IORT: " fmt
>> -
>> -#include 
>> -#include 
>> -#include 
>> -#include 
>> -#include 
>> -#include 
>> -#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +
>> +#include 
>> +
>> +/* Xen: Define compatibility functions */
>> +#define FW_BUG    "[Firmware Bug]: "
>> +#define pr_err(fmt, ...) printk(XENLOG_ERR fmt, ## __VA_ARGS__)
>> +#define pr_warn(fmt, ...) printk(XENLOG_WARNING fmt, ## __VA_ARGS__)
>> +
>> +/* Alias to Xen allocation helpers */
>> +#define kfree xfree
>> +#define kmalloc(size, flags)    _xmalloc(size, sizeof(void *))
>> +#define kzalloc(size, flags)    _xzalloc(size, sizeof(void *))
>> +
>> +/* Redefine WARN macros */
>> +#undef WARN
>> +#undef WARN_ON
>> +#define WARN(condition, format...) ({    \
>> +    int __ret_warn_on = !!(condition);    \
>> +    if 

[Xen-devel] [xen-unstable test] 116161: tolerable FAIL - PUSHED

2017-11-14 Thread osstest service owner
flight 116161 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116161/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 116108
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116132
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116132
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 116132
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 116132
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116132
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116132
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116132
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116132
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 116132
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass

version targeted for testing:
 xen  36c80e29e36eee02f20f18e7f32267442b18c8bd
baseline version:
 xen  3b2966e72c414592cd2c86c21a0d4664cf627b9c

Last test of basis   116132  2017-11-13 05:28:59 Z    1 days
Failing since        116150  2017-11-14 03:00:55 Z    0 days    2 attempts
Testing same since   116161  2017-11-14 16:48:27 Z    0 days    1 attempts


People who touched revisions under test:
  Anthony PERARD 
  Bhupinder Thakur 
  Ian Jackson 
  Jan Beulich 
  Julien Grall 
  Pawel Wieczorkiewicz 
  Wei Liu 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass
 build-armhf  

[Xen-devel] [linux-3.18 baseline-only test] 72446: regressions - FAIL

2017-11-14 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 72446 linux-3.18 real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72446/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install 
fail REGR. vs. 72416

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail like 72416
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail   like 72416
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   like 72416
 test-amd64-i386-freebsd10-amd64 11 guest-start fail like 72416
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   like 72416
 test-amd64-i386-libvirt-qcow2 17 guest-start/debian.repeatfail  like 72416
 test-armhf-armhf-xl-vhd  15 guest-start/debian.repeatfail   like 72416
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install  fail like 72416
 test-amd64-amd64-xl-qcow219 guest-start/debian.repeatfail   like 72416
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 72416
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail like 72416
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail like 72416
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 72416
 test-amd64-amd64-examine  4 memdisk-try-append   fail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-midway   13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-midway   14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win10-i386 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass

version targeted for testing:
 linux 943dc0b3ef9f0168494d6dca305cd0cf53a0b3d4
baseline version:
 linux 4f823316dac3de3463dfbea2be3812102a76e246

Last test of basis   72416  2017-11-03 07:25:57 Z   11 days
Testing same since   72446  2017-11-14 08:51:59 Z    0 days    1 attempts


People who touched revisions under test:
  Alexander Boyko 
  Andrew Morton 
  Andy Shevchenko 
  Arnd Bergmann 
  Ashish Samant 
  Boris 

[Xen-devel] [qemu-mainline test] 116156: regressions - FAIL

2017-11-14 Thread osstest service owner
flight 116156 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116156/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs. 
116126

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2 17 guest-start/debian.repeatfail like 116107
 test-armhf-armhf-xl-vhd  15 guest-start/debian.repeatfail  like 116107
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116126
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116126
 test-amd64-amd64-libvirt-vhd 17 guest-start/debian.repeatfail  like 116126
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116126
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116126
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116126
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 qemuu 2e550e31518f90cc9cb7e5c855a1995c317463a3
baseline version:
 qemuu 4ffa88c99c54d2a30f79e3dbecec50b023eff1c8

Last test of basis   116126  2017-11-13 00:49:34 Z    1 days
Failing since        116146  2017-11-13 18:53:48 Z    1 days    2 attempts
Testing same since   116156  2017-11-14 11:53:09 Z    0 days    1 attempts


People who touched revisions under test:
  Alistair Francis 
  Christian Borntraeger 
  Cornelia Huck 
  Eric Blake 
  Fam Zheng 
  Gerd Hoffmann 
  Peter Maydell 
  Philippe Mathieu-Daudé 
  Richard Henderson 
  Samuel Thibault 
  Sergio Lopez 
  Stefan Hajnoczi 
  Tao Wu 
  Vladimir Sementsov-Ogievskiy 
  Yi Min Zhao 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64

Re: [Xen-devel] [PATCH] xen/pvcalls: fix potential endless loop in pvcalls-front.c

2017-11-14 Thread Boris Ostrovsky
On 11/14/2017 04:11 AM, Juergen Gross wrote:
> On 13/11/17 19:33, Stefano Stabellini wrote:
>> On Mon, 13 Nov 2017, Juergen Gross wrote:
>>> On 11/11/17 00:57, Stefano Stabellini wrote:
 On Tue, 7 Nov 2017, Juergen Gross wrote:
> On 06/11/17 23:17, Stefano Stabellini wrote:
>> mutex_trylock() returns 1 if you take the lock and 0 if not. Assume you
>> take in_mutex on the first try, but you can't take out_mutex. Next times
>> you call mutex_trylock() in_mutex is going to fail. It's an endless
>> loop.
>>
>> Solve the problem by moving the two mutex_trylock calls to two separate
>> loops.
>>
>> Reported-by: Dan Carpenter 
>> Signed-off-by: Stefano Stabellini 
>> CC: boris.ostrov...@oracle.com
>> CC: jgr...@suse.com
>> ---
>>  drivers/xen/pvcalls-front.c | 5 +++--
>>  1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
>> index 0c1ec68..047dce7 100644
>> --- a/drivers/xen/pvcalls-front.c
>> +++ b/drivers/xen/pvcalls-front.c
>> @@ -1048,8 +1048,9 @@ int pvcalls_front_release(struct socket *sock)
>>   * is set to NULL -- we only need to wait for the 
>> existing
>>   * waiters to return.
>>   */
>> -while (!mutex_trylock(&map->active.in_mutex) ||
>> -   !mutex_trylock(&map->active.out_mutex))
>> +while (!mutex_trylock(&map->active.in_mutex))
>> +cpu_relax();
>> +while (!mutex_trylock(&map->active.out_mutex))
>>  cpu_relax();
> Any reason you don't just use mutex_lock()?
 Hi Juergen, sorry for the late reply.

 Yes, you are right. Given the patch, it would be just the same to use
 mutex_lock.

 This is where I realized that actually we have a problem: no matter if
 we use mutex_lock or mutex_trylock, there are no guarantees that we'll
 be the last to take the in/out_mutex. Other waiters could be still
 outstanding.

 We solved the same problem using a refcount in pvcalls_front_remove. In
 this case, I was thinking of reusing the mutex internal counter for
 efficiency, instead of adding one more refcount.

 For using the mutex as a refcount, there is really no need to call
 mutex_trylock or mutex_lock. I suggest checking on the mutex counter
 directly:


while (atomic_long_read(&map->active.in_mutex.owner) != 0UL ||
   atomic_long_read(&map->active.out_mutex.owner) != 0UL)
cpu_relax();

 Cheers,

 Stefano


 ---

 xen/pvcalls: fix potential endless loop in pvcalls-front.c

 mutex_trylock() returns 1 if you take the lock and 0 if not. Assume you
 take in_mutex on the first try, but you can't take out_mutex. Next time
 you call mutex_trylock() in_mutex is going to fail. It's an endless
 loop.

 Actually, we don't want to use mutex_trylock at all: we don't need to
 take the mutex, we only need to wait until the last mutex waiter/holder
 releases it.

 Instead of calling mutex_trylock or mutex_lock, just check on the mutex
 refcount instead.

 Reported-by: Dan Carpenter 
 Signed-off-by: Stefano Stabellini 
 CC: boris.ostrov...@oracle.com
 CC: jgr...@suse.com

 diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
 index 0c1ec68..9f33cb8 100644
 --- a/drivers/xen/pvcalls-front.c
 +++ b/drivers/xen/pvcalls-front.c
 @@ -1048,8 +1048,8 @@ int pvcalls_front_release(struct socket *sock)
 * is set to NULL -- we only need to wait for the existing
 * waiters to return.
 */
 -  while (!mutex_trylock(&map->active.in_mutex) ||
 - !mutex_trylock(&map->active.out_mutex))
 +  while (atomic_long_read(&map->active.in_mutex.owner) != 0UL ||
 + atomic_long_read(&map->active.out_mutex.owner) != 0UL)
>>> I don't like this.
>>>
>>> Can't you use a kref here? Even if it looks like more overhead it is
>>> much cleaner. There will be no questions regarding possible races,
>>> while an approach like yours will always smell racy (can't someone
>>> take the mutex just after the above test?).
>>>
>>> In no case you should make use of the mutex internals.
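
For comparison, a minimal sketch of the kref-based alternative suggested
here, with illustrative names only (this is not the actual pvcalls patch):
every sendmsg/recvmsg path holds a reference while it touches the rings, and
release just drops its own reference instead of looking at mutex internals.

/* Illustrative only -- names below are hypothetical, not from pvcalls-front.c. */
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>

struct active_mapping {
	struct kref users;	/* kref_init()'ed when the mapping is created */
	/* ... rings, mutexes, etc. ... */
};

static void active_mapping_free(struct kref *kref)
{
	struct active_mapping *map =
		container_of(kref, struct active_mapping, users);

	kfree(map);		/* last reference gone: safe to tear down */
}

static void sendmsg_path(struct active_mapping *map)
{
	kref_get(&map->users);	/* pin the mapping while we use it */
	/* ... access the in/out rings ... */
	kref_put(&map->users, active_mapping_free);
}

static void release_path(struct active_mapping *map)
{
	/* Drop the initial reference; the mapping is freed once all
	 * in-flight sendmsg/recvmsg callers have dropped theirs too. */
	kref_put(&map->users, active_mapping_free);
}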
>> Boris' suggestion solves that problem well. Would you be OK with the
>> proposed
>>
>> while(mutex_is_locked(&map->active.in_mutex.owner) ||
>>   mutex_is_locked(&map->active.out_mutex.owner))
>> cpu_relax();
>>
>> ?
> I'm not convinced there isn't a race.
>
> In pvcalls_front_recvmsg() sock->sk->sk_send_head is being read and only
> then 

Re: [Xen-devel] [RFC PATCH 00/31] CPUFreq on ARM

2017-11-14 Thread Oleksandr Tyshchenko
On Tue, Nov 14, 2017 at 12:49 PM, Andre Przywara
 wrote:
> Hi,
Hi Andre

>
> On 13/11/17 19:40, Oleksandr Tyshchenko wrote:
>> On Mon, Nov 13, 2017 at 5:21 PM, Andre Przywara
>>  wrote:
>>> Hi,
>> Hi Andre,
>>
>>>
>>> thanks very much for your work on this!
>> Thank you for your comments.
>>
>>>
>>> On 09/11/17 17:09, Oleksandr Tyshchenko wrote:
 From: Oleksandr Tyshchenko 

 Hi, all.

 The purpose of this RFC patch series is to add CPUFreq support to Xen on 
 ARM.
 Motivation of hypervisor based CPUFreq is to enable one of the main PM 
 use-cases in virtualized system powered by Xen hypervisor. Rationale 
 behind this activity is that CPU virtualization is done by hypervisor and 
 the guest OS doesn't actually know anything about physical CPUs because it 
 is running on virtual CPUs. It is quite clear that a decision about 
 frequency change should be taken by hypervisor as only it has information 
 about actual CPU load.
>>>
>>> Can you please sketch your usage scenario or workloads here? I can think
>>> of quite different scenarios (oversubscribed server vs. partitioning
>>> RTOS guests, for instance). The usefulness of CPUFreq and the trade-offs
>>> in the design are quite different between those.
>> We keep embedded use-cases in mind. For example, it is a system with
>> several domains,
>> where one domain has most critical SW running on and other domain(s)
>> are, let say, for entertainment purposes.
>> I think, the CPUFreq is useful where power consumption is a question.
>
> Does the SoC you use allow different frequencies for each core? Or is it
> one frequency for all cores? Most x86 CPU allow different frequencies
> for each core, AFAIK. Just having the same OPP for the whole SoC might
> limit the usefulness of this approach in general.
Good question. All cores in a cluster share the same clock. It is
impossible to set different frequencies on the cores inside one
cluster.

>
>>> In general I doubt that a hypervisor scheduling vCPUs is in a good
>>> position to make a decision on the proper frequency physical CPUs should
>>> run with. From all I know it's already hard for an OS kernel to make
>>> that call. So I would actually expect that guests provide some input,
>>> for instance by signalling OPP change request up to the hypervisor. This
>>> could then decide to act on it - or not.
>> Each running guest sees only part of the picture, but hypervisor has
>> the whole picture, it knows all about CPU, measures CPU load and able
>> to choose required CPU frequency to run on.
>
> But based on what data? All Xen sees is a vCPU trapping on MMIO, a
> hypercall or on WFI, for that matter. It does not know much more about
> the guest, especially it's rather clueless about what the guest OS
> actually intended to do.
> For instance Linux can track the actual utilization of a core by keeping
> statistics of runnable processes and monitoring their time slice usage.
> It can see that a certain process exhibits periodical, but bursty CPU
> usage, which may hint that is could run at lower frequency. Xen does not
> see this fine granular information.
>
>> I am wondering, does Xen
>> need additional input from guests to make a decision?
>
> I very much believe so. The guest OS is in a much better position to
> make that call.
>
>> BTW, currently guest domain on ARM doesn't even know how many physical
>> CPUs the system has and what are these OPPs. When creating guest
>> domain Xen inserts only dummy CPU nodes. All CPU info, such as clocks,
>> OPPs, thermal, etc are not passed to guest.
>
> Sure, because this is what virtualization is about. And I am not asking
> for unconditionally allowing any guest to change frequency.
> But there could be certain use cases where this could be considered:
> Think about your "critical SW" mentioned above, which is probably some
> RTOS, also possibly running on pinned vCPUs. For that
> (latency-sensitive) guest it might be well suited to run at a lower
> frequency for some time, but how should Xen know about this?
> "Normally" the best strategy to save power is to run as fast as
> possible, finish all outstanding work, then put the core to sleep.
> Because not running at all consumes much less energy than running at a
> reduced frequency. But this may not be suitable for an RTOS.
Saying "one domain has most critical SW running on" I meant hardware
domain/driver domain or even other
domain which perform some important tasks (disk, net, display, camera,
whatever) which treated by the whole system as critical
and must never fail. Other domains, for example, it might be Android
as well, are not critical at all from the system point of view.
Being honest, I haven't considered yet using CPUFreq in system where
some RT guest is present.
I think it is something that should be *thoroughly* investigated and
then worked out.
I am not familiar with RT 

[Xen-devel] [seabios test] 116154: regressions - FAIL

2017-11-14 Thread osstest service owner
flight 116154 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116154/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 115539

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail pass in 
116148

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop  fail in 116148 like 115539
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115539
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 115539
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 seabios  63451fca13c75870e1703eb3e20584d91179aebc
baseline version:
 seabios  0ca6d6277dfafc671a5b3718cbeb5c78e2a888ea

Last test of basis   115539  2017-11-03 20:48:58 Z   10 days
Testing same since   115733  2017-11-10 17:19:59 Z    4 days    7 attempts


People who touched revisions under test:
  Kevin O'Connor 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-qemuu-nested-amdfail
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-ws16-amd64 fail
 test-amd64-i386-xl-qemuu-ws16-amd64  fail
 test-amd64-amd64-xl-qemuu-win10-i386 fail
 test-amd64-i386-xl-qemuu-win10-i386  fail
 test-amd64-amd64-qemuu-nested-intel  pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 63451fca13c75870e1703eb3e20584d91179aebc
Author: Kevin O'Connor 
Date:   Fri Nov 10 11:49:19 2017 -0500

docs: Note v1.11.0 release

Signed-off-by: Kevin O'Connor 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [xen-unstable-smoke test] 116162: tolerable all pass - PUSHED

2017-11-14 Thread osstest service owner
flight 116162 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116162/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
baseline version:
 xen  36c80e29e36eee02f20f18e7f32267442b18c8bd

Last test of basis   116158  2017-11-14 14:02:48 Z    0 days
Testing same since   116162  2017-11-14 17:01:41 Z    0 days    1 attempts


People who touched revisions under test:
  Eric Chanudet 
  Min He 
  Yi Zhang 
  Yu Zhang 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To osst...@xenbits.xen.org:/home/xen/git/xen.git
   36c80e2..b9ee1fd  b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f -> smoke

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [linux-linus test] 116152: regressions - FAIL

2017-11-14 Thread osstest service owner
flight 116152 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116152/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit2  17 guest-start.2fail REGR. vs. 115628
 build-amd64-pvops 6 kernel-build fail REGR. vs. 115643

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds  6 xen-install  fail REGR. vs. 115643

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-win10-i386  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-rumprun-amd64  1 build-check(1)   blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-xsm   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)   blocked  n/a
 test-amd64-amd64-examine  1 build-check(1)   blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm  1 build-check(1)blocked n/a
 test-amd64-amd64-xl-rtds  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 115643
 test-amd64-i386-libvirt-qcow2 17 guest-start/debian.repeatfail like 115643
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115643
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 115643
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 115643
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 115643
 test-armhf-armhf-xl-vhd  15 guest-start/debian.repeatfail  like 115643
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 

[Xen-devel] [distros-debian-snapshot test] 72445: tolerable FAIL

2017-11-14 Thread Platform Team regression test user
flight 72445 distros-debian-snapshot real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72445/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-amd64-current-netinst-pygrub 10 debian-di-install fail like 72430
 test-amd64-i386-amd64-weekly-netinst-pygrub 10 debian-di-install fail like 72430
 test-amd64-amd64-i386-weekly-netinst-pygrub 10 debian-di-install fail like 72430
 test-amd64-amd64-amd64-weekly-netinst-pygrub 10 debian-di-install fail like 72430
 test-armhf-armhf-armhf-daily-netboot-pygrub 10 debian-di-install fail like 72430
 test-amd64-amd64-i386-daily-netboot-pygrub 10 debian-di-install fail like 72430
 test-amd64-amd64-amd64-daily-netboot-pvgrub 10 debian-di-install fail like 72430
 test-amd64-i386-amd64-daily-netboot-pygrub 10 debian-di-install fail like 72430
 test-amd64-i386-i386-daily-netboot-pvgrub 10 debian-di-install fail like 72430
 test-amd64-i386-i386-weekly-netinst-pygrub 10 debian-di-install fail like 72430
 test-amd64-i386-i386-current-netinst-pygrub 10 debian-di-install fail like 72430
 test-amd64-amd64-i386-current-netinst-pygrub 10 debian-di-install fail like 72430
 test-amd64-i386-amd64-current-netinst-pygrub 10 debian-di-install fail like 72430

baseline version:
 flight   72430

jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-amd64-daily-netboot-pvgrub  fail
 test-amd64-i386-i386-daily-netboot-pvgrubfail
 test-amd64-i386-amd64-daily-netboot-pygrub   fail
 test-armhf-armhf-armhf-daily-netboot-pygrub  fail
 test-amd64-amd64-i386-daily-netboot-pygrub   fail
 test-amd64-amd64-amd64-current-netinst-pygrubfail
 test-amd64-i386-amd64-current-netinst-pygrub fail
 test-amd64-amd64-i386-current-netinst-pygrub fail
 test-amd64-i386-i386-current-netinst-pygrub  fail
 test-amd64-amd64-amd64-weekly-netinst-pygrub fail
 test-amd64-i386-amd64-weekly-netinst-pygrub  fail
 test-amd64-amd64-i386-weekly-netinst-pygrub  fail
 test-amd64-i386-i386-weekly-netinst-pygrub   fail



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.




[Xen-devel] [xen-unstable-smoke test] 116158: tolerable all pass - PUSHED

2017-11-14 Thread osstest service owner
flight 116158 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116158/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  36c80e29e36eee02f20f18e7f32267442b18c8bd
baseline version:
 xen  20ed7c8177da2847d65bb3373c6f1263671322d4

Last test of basis   116143  2017-11-13 16:05:04 Z1 days
Testing same since   116158  2017-11-14 14:02:48 Z0 days1 attempts


People who touched revisions under test:
  Bhupinder Thakur 
  Ian Jackson 
  Julien Grall 
  Pawel Wieczorkiewicz 
  Wei Liu 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To osst...@xenbits.xen.org:/home/xen/git/xen.git
   20ed7c8..36c80e2  36c80e29e36eee02f20f18e7f32267442b18c8bd -> smoke



[Xen-devel] [xen-unstable test] 116150: regressions - FAIL

2017-11-14 Thread osstest service owner
flight 116150 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116150/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 17 guest-start/debian.repeat fail REGR. vs. 116132
 test-amd64-amd64-xl-qemut-win7-amd64 10 windows-install  fail REGR. vs. 116132

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116132
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116132
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 116132
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116132
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 116132
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 16 guest-localmigrate/x10 fail like 116132
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116132
 test-armhf-armhf-xl-vhd  15 guest-start/debian.repeatfail  like 116132
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116132
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116132
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 116132
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass

version targeted for testing:
 xen  20ed7c8177da2847d65bb3373c6f1263671322d4
baseline version:
 xen  3b2966e72c414592cd2c86c21a0d4664cf627b9c

Last test of basis   116132  2017-11-13 05:28:59 Z1 days
Testing same since   116150  2017-11-14 03:00:55 Z0 days1 attempts


People who touched revisions under test:
  Anthony PERARD 
  Jan Beulich 
  Wei Liu 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  

Re: [Xen-devel] [PATCH] x86/hvm: Fix rcu_unlock_domain call bypass

2017-11-14 Thread Adrian Pop
Hello,

On Tue, Nov 14, 2017 at 08:25:57AM -0700, Jan Beulich wrote:
> >>> On 14.11.17 at 16:11,  wrote:
> > rcu_lock_current_domain is called at the beginning of do_altp2m_op, but
> > the altp2m_vcpu_enable_notify subop handler might skip calling
> > rcu_unlock_domain, possibly hanging the domain altogether.
> 
> I fully agree with the change, but the description needs improvement.
> For one, why would the domain be hanging with
> 
> static inline struct domain *rcu_lock_current_domain(void)
> {
> return /*rcu_lock_domain*/(current->domain);
> }
> 
> ? And even if the lock function invocation wasn't commented
> out, all it does is preempt_disable(). That may cause an
> assertion to trigger in debug builds, but that's not a domain
> hang. Plus ...

Sorry, I was indeed referring to the preempt_count() assertion, only
using poor wording.  I had tested something else using
rcu_lock_domain_by_id() instead of rcu_lock_current_domain() which
triggered the assertion.

> 
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -4534,12 +4534,18 @@ static int do_altp2m_op(
> >  
> >  if ( a.u.enable_notify.pad || a.domain != DOMID_SELF ||
> >   a.u.enable_notify.vcpu_id != curr->vcpu_id )
> > +{
> >  rc = -EINVAL;
> > +break;
> > +}
> 
> ... you also change flow here, which is a second bug you address,
> but you fail to mention it.

OK.



Re: [Xen-devel] [PATCH v3 for-4.10 1/2] x86/mm: fix potential race conditions in map_pages_to_xen().

2017-11-14 Thread Yu Zhang



On 11/14/2017 8:32 PM, Julien Grall wrote:

Hi,

On 14/11/17 08:20, Jan Beulich wrote:

On 14.11.17 at 07:53,  wrote:

From: Min He 

In map_pages_to_xen(), a L2 page table entry may be reset to point to
a superpage, and its corresponding L1 page table need be freed in such
scenario, when these L1 page table entries are mapping to consecutive
page frames and having the same mapping flags.

However, variable `pl1e` is not protected by the lock before L1 page table
is enumerated. A race condition may happen if this code path is invoked
simultaneously on different CPUs.

For example, `pl1e` value on CPU0 may hold an obsolete value, pointing
to a page which has just been freed on CPU1. Besides, before this page
is reused, it will still be holding the old PTEs, referencing consecutive
page frames. Consequently the `free_xen_pagetable(l2e_to_l1e(ol2e))` will
be triggered on CPU0, resulting the unexpected free of a normal page.

This patch fixes the above problem by protecting the `pl1e` with the lock.

Also, there're other potential race conditions. For instance, the L2/L3
entry may be modified concurrently on different CPUs, by routines such as
map_pages_to_xen(), modify_xen_mappings() etc. To fix this, this patch will
check the _PAGE_PRESENT and _PAGE_PSE flags, after the spinlock is obtained,
for the corresponding L2/L3 entry.

Signed-off-by: Min He 
Signed-off-by: Yi Zhang 
Signed-off-by: Yu Zhang 


Reviewed-by: Jan Beulich 


Please try to have a cover letter in the future when you have multiple
patches. This will make it easier to give comments/release-acks for all
the patches. Anyway, for the 2 patches:


Oh, got it. Thanks for the suggestion. :-)

Yu



Release-acked-by: Julien Grall 

Cheers,






Re: [Xen-devel] [PATCH] x86/hvm: Fix rcu_unlock_domain call bypass

2017-11-14 Thread Andrew Cooper
On 14/11/17 15:11, Adrian Pop wrote:
> rcu_lock_current_domain is called at the beginning of do_altp2m_op, but
> the altp2m_vcpu_enable_notify subop handler might skip calling
> rcu_unlock_domain, possibly hanging the domain altogether.
>
> Signed-off-by: Adrian Pop 

Reviewed-by: Andrew Cooper 

CC'ing Julien.  This is 4.10 material IMO; it would be a security issue
if rcu_lock_current_domain() wasn't a nop in Xen.  Debug builds are also
liable to hit an assertion pertaining to the preempt_count() (which
again, is only ever read in debug builds).

> ---
>  xen/arch/x86/hvm/hvm.c | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 205b4cb685..0af498a312 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -4534,12 +4534,18 @@ static int do_altp2m_op(
>  
>  if ( a.u.enable_notify.pad || a.domain != DOMID_SELF ||
>   a.u.enable_notify.vcpu_id != curr->vcpu_id )
> +{
>  rc = -EINVAL;
> +break;
> +}
>  
>  if ( !gfn_eq(vcpu_altp2m(curr).veinfo_gfn, INVALID_GFN) ||
>   mfn_eq(get_gfn_query_unlocked(curr->domain,
>  a.u.enable_notify.gfn, ), INVALID_MFN) )
> -return -EINVAL;
> +{
> +rc = -EINVAL;
> +break;
> +}
>  
>  vcpu_altp2m(curr).veinfo_gfn = _gfn(a.u.enable_notify.gfn);
>  altp2m_vcpu_update_vmfunc_ve(curr);




Re: [Xen-devel] [PATCH] x86/hvm: Fix rcu_unlock_domain call bypass

2017-11-14 Thread Jan Beulich
>>> On 14.11.17 at 16:11,  wrote:
> rcu_lock_current_domain is called at the beginning of do_altp2m_op, but
> the altp2m_vcpu_enable_notify subop handler might skip calling
> rcu_unlock_domain, possibly hanging the domain altogether.

I fully agree with the change, but the description needs improvement.
For one, why would the domain be hanging with

static inline struct domain *rcu_lock_current_domain(void)
{
return /*rcu_lock_domain*/(current->domain);
}

? And even if the lock function invocation wasn't commented
out, all it does is preempt_disable(). That may cause an
assertion to trigger in debug builds, but that's not a domain
hang. Plus ...

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -4534,12 +4534,18 @@ static int do_altp2m_op(
>  
>  if ( a.u.enable_notify.pad || a.domain != DOMID_SELF ||
>   a.u.enable_notify.vcpu_id != curr->vcpu_id )
> +{
>  rc = -EINVAL;
> +break;
> +}

... you also change flow here, which is a second bug you address,
but you fail to mention it.

Jan
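
A toy model of the point above may help: the counter only matters in debug
builds, where an assertion catches an unbalanced preempt_disable(). Nothing
below is Xen code; all names are illustrative.

#include <assert.h>
#include <stdio.h>

static unsigned int preempt_count;

static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

int main(void)
{
    preempt_disable();
    /* Forgetting the matching preempt_enable() (as the skipped
     * rcu_unlock_domain would) leaves the count unbalanced.  A debug
     * build catches that with an assertion like the one below; a
     * release build never reads the counter, so there is no hang. */
    preempt_enable();

    assert(preempt_count == 0);
    printf("preempt count balanced: %u\n", preempt_count);
    return 0;
}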




[Xen-devel] [PATCH] x86/hvm: Fix rcu_unlock_domain call bypass

2017-11-14 Thread Adrian Pop
rcu_lock_current_domain is called at the beginning of do_altp2m_op, but
the altp2m_vcpu_enable_notify subop handler might skip calling
rcu_unlock_domain, possibly hanging the domain altogether.

Signed-off-by: Adrian Pop 
---
 xen/arch/x86/hvm/hvm.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 205b4cb685..0af498a312 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4534,12 +4534,18 @@ static int do_altp2m_op(
 
 if ( a.u.enable_notify.pad || a.domain != DOMID_SELF ||
  a.u.enable_notify.vcpu_id != curr->vcpu_id )
+{
 rc = -EINVAL;
+break;
+}
 
 if ( !gfn_eq(vcpu_altp2m(curr).veinfo_gfn, INVALID_GFN) ||
  mfn_eq(get_gfn_query_unlocked(curr->domain,
 a.u.enable_notify.gfn, ), INVALID_MFN) )
-return -EINVAL;
+{
+rc = -EINVAL;
+break;
+}
 
 vcpu_altp2m(curr).veinfo_gfn = _gfn(a.u.enable_notify.gfn);
 altp2m_vcpu_update_vmfunc_ve(curr);
-- 
2.15.0
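
A minimal, self-contained illustration of the bug class fixed above; the
helper names are hypothetical and this is not the Xen code. It shows why an
early return from a subop handler leaks the lock taken at entry, while
setting rc and breaking does not.

#include <errno.h>
#include <stdio.h>

/* Hypothetical stand-ins for rcu_lock_current_domain()/rcu_unlock_domain(). */
static int lock_depth;
static void lock_current_domain(void)   { lock_depth++; }
static void unlock_current_domain(void) { lock_depth--; }

static int do_some_op(int subop, int arg)
{
    int rc = 0;

    lock_current_domain();              /* taken once on entry */

    switch (subop) {
    case 0:
        if (arg < 0) {
            rc = -EINVAL;               /* set rc and break: the unlock still runs */
            break;
        }
        /* return -EINVAL;                 <- the bug: would skip the unlock */
        break;
    default:
        rc = -ENOSYS;
        break;
    }

    unlock_current_domain();            /* the single exit path */
    return rc;
}

int main(void)
{
    do_some_op(0, -1);
    printf("lock depth after call: %d (non-zero would mean a leaked lock)\n",
           lock_depth);
    return 0;
}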




Re: [Xen-devel] [PATCH] tools: xentoolcore_restrict_all: Do deregistration before close

2017-11-14 Thread Ian Jackson
Ross Lagerwall writes ("Re: [PATCH] tools: xentoolcore_restrict_all: Do 
deregistration before close"):
> On 11/14/2017 12:15 PM, Ian Jackson wrote:
> > + * Note for multi-threaded programs: If xentoolcore_restrict_all is
> > + * called concurrently with a function which /or closes Xen library
> 
> "which /or closes..." - Is this a typo?

Yes, fixed, thanks.

> > -close(h->fd);
> > xentoolcore__deregister_active_handle(&h->tc_ah);
> > +close(h->fd);
> >   
> 
> Since the rest of this file uses tabs, you may as well use tabs for this 
> line as well.

I didn't change the use of tabs vs. the use of spaces.

> Reviewed-by: Ross Lagerwall 

Thanks,
Ian.



Re: [Xen-devel] [PATCH for-4.10] tools: xentoolcore_restrict_all: Do deregistration before close

2017-11-14 Thread Ian Jackson
Julien Grall writes ("Re: [PATCH] tools: xentoolcore_restrict_all: Do 
deregistration before close"):
> I think this is 4.10 material, xentoolcore was introduced in this 
> release and it would be good to have it right from now. I want to 
> confirm that you are both happy with that?

Yes, absolutely.  Sorry, I forgot the for-4.10 tag in the Subject.

Ian.



Re: [Xen-devel] [PATCH] tools: xentoolcore_restrict_all: Do deregistration before close

2017-11-14 Thread Ross Lagerwall

On 11/14/2017 12:15 PM, Ian Jackson wrote:

Closing the fd before unhooking it from the list runs the risk that a
concurrent thread calls xentoolcore_restrict_all will operate on the
old fd value, which might refer to a new fd by then.  So we need to do
it in the other order.

Sadly this weakens the guarantee provided by xentoolcore_restrict_all
slight, but not (I think) in a problematic way.  It would be possible
to implement the previous guarantee, but it would involve replacing
all of the close() calls in all of the individual osdep parts of all
of the individual libraries with calls to a new function which does
dup2("/dev/null", thing->fd);
pthread_mutex_lock(_lock);
thing->fd = -1;
pthread_mutex_unlock(_lock);
close(fd);
which would be terribly tedious.
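
The fd-reuse hazard described above is easy to demonstrate in isolation; the
following standalone snippet is illustrative only and not part of the patch.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int old = open("/dev/null", O_RDONLY);
    printf("handle's fd = %d\n", old);

    close(old);                               /* close before unhooking ... */

    int reused = open("/dev/zero", O_RDONLY); /* ... and the number comes back */
    printf("new, unrelated fd = %d (same number: %s)\n",
           reused, reused == old ? "yes" : "no");

    close(reused);
    return 0;
}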


...

diff --git a/tools/libs/toolcore/include/xentoolcore.h 
b/tools/libs/toolcore/include/xentoolcore.h
index 8d28c2d..b3a3c93 100644
--- a/tools/libs/toolcore/include/xentoolcore.h
+++ b/tools/libs/toolcore/include/xentoolcore.h
@@ -39,6 +39,15 @@
   * fail (even though such a call is potentially meaningful).
   * (If called again with a different domid, it will necessarily fail.)
   *
+ * Note for multi-threaded programs: If xentoolcore_restrict_all is
+ * called concurrently with a function which /or closes Xen library


"which /or closes..." - Is this a typo?


+ * handles (e.g.  libxl_ctx_free, xs_close), the restriction is only
+ * guaranteed to be effective after all of the closing functions have
+ * returned, even if that is later than the return from
+ * xentoolcore_restrict_all.  (Of course if xentoolcore_restrict_all
+ * it is called concurrently with opening functions, the new handles
+ * might or might not be restricted.)
+ *
   *  
   *  IMPORTANT - IMPLEMENTATION STATUS
   *
diff --git a/tools/libs/toolcore/include/xentoolcore_internal.h 
b/tools/libs/toolcore/include/xentoolcore_internal.h
index dbdb1dd..04f5848 100644
--- a/tools/libs/toolcore/include/xentoolcore_internal.h
+++ b/tools/libs/toolcore/include/xentoolcore_internal.h
@@ -48,8 +48,10 @@
   * 4. ONLY THEN actually open the relevant fd or whatever
   *
   *   III. during the "close handle" function
- * 1. FIRST close the relevant fd or whatever
- * 2. call xentoolcore__deregister_active_handle
+ * 1. FIRST call xentoolcore__deregister_active_handle
+ * 2. close the relevant fd or whatever
+ *
+ * [ III(b). Do the same as III for error exit from the open function. ]
   *
   *   IV. in the restrict_callback function
   * * Arrange that the fd (or other handle) can no longer by used
diff --git a/tools/xenstore/xs.c b/tools/xenstore/xs.c
index 23f3f09..abffd9c 100644
--- a/tools/xenstore/xs.c
+++ b/tools/xenstore/xs.c
@@ -279,9 +279,9 @@ err:
saved_errno = errno;
  
  	if (h) {

+   xentoolcore__deregister_active_handle(&h->tc_ah);
if (h->fd >= 0)
close(h->fd);
-   xentoolcore__deregister_active_handle(&h->tc_ah);
}
free(h);
  
@@ -342,8 +342,8 @@ static void close_fds_free(struct xs_handle *h) {

close(h->watch_pipe[1]);
}
  
-close(h->fd);

xentoolcore__deregister_active_handle(&h->tc_ah);
+close(h->fd);
  


Since the rest of this file uses tabs, you may as well use tabs for this 
line as well.


Reviewed-by: Ross Lagerwall 



Re: [Xen-devel] [PATCH for-4.10] libs/evtchn: Remove active handler on clean-up or failure

2017-11-14 Thread Julien Grall

Hi Wei,

On 14/11/17 13:53, Wei Liu wrote:

On Tue, Nov 14, 2017 at 12:14:14PM +, Julien Grall wrote:

Hi,

On 14/11/17 11:51, Ian Jackson wrote:

Ross Lagerwall writes ("Re: [PATCH for-4.10] libs/evtchn: Remove active handler on 
clean-up or failure"):

On 11/10/2017 05:10 PM, Julien Grall wrote:

Commit 89d55473ed16543044a31d1e0d4660cf5a3f49df "xentoolcore_restrict_all:
Implement for libxenevtchn" added a call to register allowing to
restrict the event channel.

However, the call to deregister the handler was not performed if open
failed or when closing the event channel. This will result in corrupting
the list of handlers and potentially crashing the application later on.


Sorry for not spotting this during review.
The fix is correct as far as it goes, so:

Acked-by: Ian Jackson 


The call to xentoolcore_deregister_active_handle is done at the same
place as for the grants. But I am not convinced this is thread safe as
there are potential races between closing the event channel and restricting
the handler. Do we care about that?

...

However, I think it should call xentoolcore__deregister_active_handle()
_before_ calling osdep_evtchn_close() to avoid trying to restrict a
closed fd or some other fd that happens to have the same number.


You are right.  But this slightly weakens the guarantee provided by
xentoolcore_restrict_all.


I think all the other libs need to be fixed as well, unless there was a
reason it was done this way.


I will send a further patch.  In the meantime I suggest we apply
Julien's fix.


I am going to leave the decision to you and Wei. It feels a bit odd to
release-ack my patch :).


We can only commit patches that are both acked and release-acked. The
latter gives RM control over when the patch should be applied.
Sometimes it is better to wait until something else happens (like
getting the tree to a stable state).

That's how I used release-ack anyway.


I feel a bit odd release-acking my own patch, and for Arm patches I usually
deferred to Stefano the decision whether the patch is suitable for the
release.




For this particular patch, my interpretation of what you just said
is you've given us release-ack and we can apply this patch anytime. I
will commit it soon.


Thanks! I hope it will fix some osstest failures.

Cheers,

--
Julien Grall



Re: [Xen-devel] [PATCH] tools: xentoolcore_restrict_all: Do deregistration before close

2017-11-14 Thread Julien Grall

Hi,

On 14/11/17 14:02, Wei Liu wrote:

On Tue, Nov 14, 2017 at 12:15:42PM +, Ian Jackson wrote:

Closing the fd before unhooking it from the list runs the risk that a
concurrent thread calls xentoolcore_restrict_all will operate on the
old fd value, which might refer to a new fd by then.  So we need to do
it in the other order.

Sadly this weakens the guarantee provided by xentoolcore_restrict_all
slight, but not (I think) in a problematic way.  It would be possible


slightly


to implement the previous guarantee, but it would involve replacing
all of the close() calls in all of the individual osdep parts of all
of the individual libraries with calls to a new function which does
dup2("/dev/null", thing->fd);
pthread_mutex_lock(_lock);
thing->fd = -1;
pthread_mutex_unlock(_lock);
close(fd);
which would be terribly tedious.

Signed-off-by: Ian Jackson 


Acked-by: Wei Liu 


I think this is 4.10 material, xentoolcore was introduced in this 
release and it would be good to have it right from now. I want to 
confirm that you are both happy with that?


Cheers,

--
Julien Grall



[Xen-devel] [xen-unstable baseline-only test] 72444: regressions - FAIL

2017-11-14 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 72444 xen-unstable real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72444/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit2   6 xen-install   fail REGR. vs. 72439
 test-amd64-i386-freebsd10-amd64 11 guest-startfail REGR. vs. 72439
 test-amd64-i386-xl   20 guest-start/debian.repeat fail REGR. vs. 72439
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 72439
 test-amd64-amd64-xl-qcow219 guest-start/debian.repeat fail REGR. vs. 72439
 test-armhf-armhf-examine 11 examine-serial/bootloader fail REGR. vs. 72439
 test-armhf-armhf-examine 12 examine-serial/kernel fail REGR. vs. 72439

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail like 72439
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail like 72439
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   like 72439
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail   like 72439
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   like 72439
 test-amd64-i386-libvirt-qcow2 17 guest-start/debian.repeatfail  like 72439
 test-armhf-armhf-xl-vhd  15 guest-start/debian.repeatfail   like 72439
 test-amd64-amd64-libvirt-vhd 17 guest-start/debian.repeatfail   like 72439
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail like 72439
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail like 72439
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 72439
 test-amd64-amd64-examine  4 memdisk-try-append   fail   never pass
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-midway   13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-midway   14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 10 windows-install fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win10-i386 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 17 guest-stop fail never pass

version targeted for testing:
 xen  3b2966e72c414592cd2c86c21a0d4664cf627b9c
baseline version:
 xen  92f0d4392e73727819c5a83fcce447515efaf2f5

Last test of basis72439  2017-11-10 14:46:12 Z3 days
Testing same since72444  2017-11-14 02:17:32 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Jan Beulich 
  Roger Pau 

Re: [Xen-devel] [PATCH] tools: xentoolcore_restrict_all: Do deregistration before close

2017-11-14 Thread Wei Liu
On Tue, Nov 14, 2017 at 12:15:42PM +, Ian Jackson wrote:
> Closing the fd before unhooking it from the list runs the risk that a
> concurrent thread calls xentoolcore_restrict_all will operate on the
> old fd value, which might refer to a new fd by then.  So we need to do
> it in the other order.
> 
> Sadly this weakens the guarantee provided by xentoolcore_restrict_all
> slight, but not (I think) in a problematic way.  It would be possible

slightly

> to implement the previous guarantee, but it would involve replacing
> all of the close() calls in all of the individual osdep parts of all
> of the individual libraries with calls to a new function which does
>dup2("/dev/null", thing->fd);
>pthread_mutex_lock(_lock);
>thing->fd = -1;
>pthread_mutex_unlock(_lock);
>close(fd);
> which would be terribly tedious.
> 
> Signed-off-by: Ian Jackson 

Acked-by: Wei Liu 



Re: [Xen-devel] [PATCH 1/4 v3 for-4.10] libxl: Fix the bug introduced in commit "libxl: use correct type modifier for vuart_gfn"

2017-11-14 Thread Wei Liu
On Tue, Nov 14, 2017 at 05:12:26PM +0530, Bhupinder Thakur wrote:
> Hi,
> 
> On 14 Nov 2017 3:35 pm, "Wei Liu"  wrote:
> 
> > On Mon, Nov 13, 2017 at 03:56:23PM +, Julien Grall wrote:
> > > Hi Wei,
> > >
> > > Sorry I missed that e-mail.
> > >
> > > On 10/31/2017 10:07 AM, Wei Liu wrote:
> > > > Change the tag to for-4.10.
> > > >
> > > > Julien, this is needed to fix vuart emulation.
> > >
> > > To confirm, only patch #1 is candidate for Xen 4.10, right? The rest
> > will be
> > > queued for Xen 4.11?
> > >
> >
> > I think so.
> >
> > Bhupinder, can you confirm that?
> >
> 
> Yes. Only the first patch is required to fix the compilation issue.
> 

Thanks. I will commit the first patch and put the rest in my for-next
queue.



Re: [Xen-devel] [PATCH for-4.10] libs/evtchn: Remove active handler on clean-up or failure

2017-11-14 Thread Wei Liu
On Tue, Nov 14, 2017 at 12:14:14PM +, Julien Grall wrote:
> Hi,
> 
> On 14/11/17 11:51, Ian Jackson wrote:
> > Ross Lagerwall writes ("Re: [PATCH for-4.10] libs/evtchn: Remove active 
> > handler on clean-up or failure"):
> > > On 11/10/2017 05:10 PM, Julien Grall wrote:
> > > > Commit 89d55473ed16543044a31d1e0d4660cf5a3f49df 
> > > > "xentoolcore_restrict_all:
> > > > Implement for libxenevtchn" added a call to register allowing to
> > > > restrict the event channel.
> > > > 
> > > > However, the call to deregister the handler was not performed if open
> > > > failed or when closing the event channel. This will result in corrupting
> > > > the list of handlers and potentially crashing the application later on.
> > 
> > Sorry for not spotting this during review.
> > The fix is correct as far as it goes, so:
> > 
> > Acked-by: Ian Jackson 
> > 
> > > > The call to xentoolcore_deregister_active_handle is done at the same
> > > > place as for the grants. But I am not convinced this is thread safe as
> > > > there are potential races between closing the event channel and restricting
> > > > the handler. Do we care about that?
> > ...
> > > However, I think it should call xentoolcore__deregister_active_handle()
> > > _before_ calling osdep_evtchn_close() to avoid trying to restrict a
> > > closed fd or some other fd that happens to have the same number.
> > 
> > You are right.  But this slightly weakens the guarantee provided by
> > xentoolcore_restrict_all.
> > 
> > > I think all the other libs need to be fixed as well, unless there was a
> > > reason it was done this way.
> > 
> > I will send a further patch.  In the meantime I suggest we apply
> > Julien's fix.
> 
> I am going to leave the decision to you and Wei. It feels a bit odd to
> release-ack my patch :).

We can only commit patches that are both acked and release-acked. The
latter gives RM control over when the patch should be applied.
Sometimes it is better to wait until something else happens (like
getting the tree to a stable state).

That's how I used release-ack anyway.

For this particular patch, my interpretation of what you just said
is you've given us release-ack and we can apply this patch anytime. I
will commit it soon.



Re: [Xen-devel] [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough

2017-11-14 Thread Marek Marczykowski-Górecki
On Mon, Nov 13, 2017 at 03:41:24PM +, George Dunlap wrote:
> Signed-off-by: George Dunlap 
> ---
> CC: Ian Jackson 
> CC: Wei Liu 
> CC: Andrew Cooper 
> CC: Jan Beulich 
> CC: Stefano Stabellini 
> CC: Konrad Wilk 
> CC: Tim Deegan 
> CC: Rich Persaud 
> CC: Marek Marczykowski-Górecki 
> CC: Christopher Clark 
> CC: James McKenzie 
> ---
>  SUPPORT.md | 33 -
>  1 file changed, 32 insertions(+), 1 deletion(-)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index 3e352198ce..a8388f3dc5 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md

(...)

> @@ -522,6 +536,23 @@ Virtual Performance Management Unit for HVM guests
>  Disabled by default (enable with hypervisor command line option).
>  This feature is not security supported: see 
> http://xenbits.xen.org/xsa/advisory-163.html
>  
> +### x86/PCI Device Passthrough
> +
> +Status: Supported, with caveats
> +
> +Only systems using IOMMUs will be supported.

s/will be/are/ ?

> +
> +Not compatible with migration, altp2m, introspection, memory sharing, or 
> memory paging.
> +
> +Because of hardware limitations
> +(affecting any operating system or hypervisor),
> +it is generally not safe to use this feature 
> +to expose a physical device to completely untrusted guests.
> +However, this feature can still confer significant security benefit 
> +when used to remove drivers and backends from domain 0
> +(i.e., Driver Domains).
> +See docs/PCI-IOMMU-bugs.txt for more information.
> +
>  ### ARM/Non-PCI device passthrough
>  
>  Status: Supported

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?




Re: [Xen-devel] [RFC] [Draft Design v2] ACPI/IORT Support in Xen.

2017-11-14 Thread Julien Grall

Hi Manish,

On 08/11/17 14:38, Manish Jaggi wrote:

ACPI/IORT Support in Xen.
--
 Draft 2

Revision History:

Changes since v1-
- Modified IORT Parsing data structures.
- Added RID->StreamID and RID->DeviceID map as per Andre's suggestion.
- Added reference code which can be read along with this document.
- Removed domctl for DomU, it would be covered in PCI-PT design.

Introduction:
-

I had sent out patch series [0] to hide smmu from Dom0 IORT.
This document is a rework of the series as it:
(a) extends scope by adding parsing of IORT table once
and storing it in in-memory data structures, which can then be used
for querying. This would eliminate the need to parse complete iort
table multiple times.

(b) Generation of IORT for domains be independent using a set of
helper routines.

Index


1. What is IORT. What are its components ?
2. Current Support in Xen
3. IORT for Dom0
4. IORT for DomU
5. Parsing of IORT in Xen
6. Generation of IORT
7. Implementation Phases
8. References

1. IORT Structure ?

IORT refers to Input Output remapping table. It is essentially used to find
information about the IO topology (PCIRC-SMMU-ITS) and relationships between
devices.

A general structure of IORT [1]:
It has nodes for PCI RC, SMMU, ITS and Platform devices. Using an IORT table,
the relationship between RID -> StreamID -> DeviceId can be obtained.
The IORT table also describes the topology: which device is behind which SMMU
and which interrupt controller.

Some PCI RCs may not be behind an SMMU, and directly map RID->DeviceID.

RID is a requester ID in PCI context,
StreamID is the ID of the device in SMMU context,
DeviceID is the ID programmed in ITS.

Each iort_node contains an ID map array to translate one ID into another.
IDmap Entry {input_range, output_range, output_node_ref, id_count}
This array is associated with PCI RC, SMMU and Named component nodes,
and can reference an SMMU or ITS node.

2. Current Support of IORT
---
IORT is proposed to be used by Xen to setup SMMU's and platform devices
and for translating RID->StreamID and RID->DeviceID.


I am not sure I understand "to setup SMMU's and platform devices...".
With IORT, software can discover the list of SMMUs and the IDs to
configure the ITS and SMMUs for each device (e.g. PCI, integrated...) on
the platform. You will not be able to discover the list of platform
devices through it.


Also, it is not really "proposed". It is the only way to get that
information from ACPI.




It is proposed in this document to parse iort once and use the information
to translate RID without traversing IORT again and again.

Also Xen prepares an IORT table for dom0 based on host IORT.
For a DomU, an IORT table is proposed only in the case of device passthrough.

3. IORT for Dom0
-
IORT for Dom0 is based on the host IORT. A few nodes could be removed or
modified.

  For instance
- Host SMMU nodes should not be present, as only Xen should touch them.
- platform nodes (named components) may be controlled by xen command line.


I am not sure where this example comes from. As I said, there are no
plans to support Platform Device passthrough with ACPI. A better example
here would be removing PMCG.




4. IORT for DomU
-
IORT for a DomU should be generated by the toolstack. The IORT table is only
present in the case of device passthrough.

At a minimum domU IORT should include a single PCIRC and ITS Group.
Similar PCIRC can be added in DSDT.
The exact structure of DomU IORT would be covered along with PCI PT design.

5. Parsing of IORT in Xen
--
IORT nodes can be saved in structures so that IORT table parsing can be done
once and reused by all Xen subsystems, such as ITS / SMMU setup and domain
creation.

Proposed are the structures to hold IORT information. [4]

struct rid_map_struct {
 void *pcirc_node;
 u16 ib; /* Input base */
 u32 ob; /* Output base */
 u16 idc; /* Id Count */
  struct list_head entry;
};

struct iort_ref
{
 struct list_head rid_streamId_map;
 struct list_head rid_deviceId_map;
}iortref;

5.1 Functions to query StreamID and DeviceID from RID.

void query_streamId(void *pcirc_node, u16 rid, u32 *streamId);
void query_deviceId(void *pcirc_node, u16 rid, u32 *deviceId);

Adding a mapping is done via helper functions

int add_rid_streamId_map(void *pcirc_node, u32 ib, u32 ob, u32 idc)
int add_rid_deviceId_map(void *pcirc_node, u32 ib, u32 ob, u32 idc)

- rid-streamId map is straightforward and is created using pci_rc's idmap
- rid-deviceId map is created by translating streamIds to deviceIds.
  The fixup_rid_deviceId_map function does that. (See [6])

It is proposed that query functions should replace functions like
iort_node_map_rid which is currently used in linux and is imported in Xen
in the patchset [2][5]
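
One possible shape of query_streamId(), sketched against the structures above
using Xen's list helpers; this is illustrative only and the real routine may
differ.

void query_streamId(void *pcirc_node, u16 rid, u32 *streamId)
{
    struct rid_map_struct *map;

    list_for_each_entry ( map, &iortref.rid_streamId_map, entry )
    {
        if ( map->pcirc_node == pcirc_node &&
             rid >= map->ib && rid < (u32)map->ib + map->idc )
        {
            /* An ID map entry is a linear window:
             * output = output base + (rid - input base). */
            *streamId = map->ob + (rid - map->ib);
            return;
        }
    }
}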

5.2 Proposed Flow of parsing
The flow is based on the patchset in [5]. I have added a 

[Xen-devel] [libvirt test] 116153: regressions - FAIL

2017-11-14 Thread osstest service owner
flight 116153 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116153/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt6 libvirt-buildfail REGR. vs. 115476
 build-armhf-libvirt   6 libvirt-buildfail REGR. vs. 115476

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass

version targeted for testing:
 libvirt  f9d8b0270ff717c7882c6799e1cafabce11aa69a
baseline version:
 libvirt  1bf893406637e852daeaafec6617d3ee3716de25

Last test of basis   115476  2017-11-02 04:22:37 Z   12 days
Failing since115509  2017-11-03 04:20:26 Z   11 days   11 attempts
Testing same since   116153  2017-11-14 04:28:32 Z0 days1 attempts


People who touched revisions under test:
  Andrea Bolognani 
  Christian Ehrhardt 
  Daniel Veillard 
  Dawid Zamirski 
  Jim Fehlig 
  Jiri Denemark 
  John Ferlan 
  Michal Privoznik 
  Nikolay Shirokovskiy 
  Peter Krempa 
  Pino Toscano 
  Viktor Mihajlovski 
  Wim ten Have 
  xinhua.Cao 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  fail
 build-i386-libvirt   fail
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm   blocked 
 test-amd64-amd64-libvirt-xsm pass
 test-armhf-armhf-libvirt-xsm blocked 
 test-amd64-i386-libvirt-xsm  blocked 
 test-amd64-amd64-libvirt pass
 test-armhf-armhf-libvirt blocked 
 test-amd64-i386-libvirt  blocked 
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair blocked 
 test-amd64-i386-libvirt-qcow2blocked 
 test-armhf-armhf-libvirt-raw blocked 
 test-amd64-amd64-libvirt-vhd pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1590 lines long.)



Re: [Xen-devel] [PATCH v1 0/6] libxl: create standalone vkb device

2017-11-14 Thread Oleksandr Grytsov
On Wed, Nov 1, 2017 at 5:05 PM, Oleksandr Grytsov  wrote:

> From: Oleksandr Grytsov 
>
> Changes since initial:
>  * add setting backend-type to xenstore
>  * add id field to indentify the vkb device on backend side
>
> Oleksandr Grytsov (6):
>   libxl: move vkb device to libxl_vkb.c
>   libxl: fix vkb XS entry and type
>   libxl: add backend type and id to vkb
>   libxl: vkb add list and info functions
>   xl: add vkb config parser and CLI
>   docs: add vkb device to xl.cfg and xl
>
>  docs/man/xl.cfg.pod.5.in|  28 ++
>  docs/man/xl.pod.1.in|  22 +
>  tools/libxl/Makefile|   1 +
>  tools/libxl/libxl.h |  10 ++
>  tools/libxl/libxl_console.c |  53 ---
>  tools/libxl/libxl_create.c  |   3 +
>  tools/libxl/libxl_dm.c  |   1 +
>  tools/libxl/libxl_types.idl |  19 
>  tools/libxl/libxl_utils.h   |   3 +
>  tools/libxl/libxl_vkb.c | 226 ++
> ++
>  tools/xl/Makefile   |   2 +-
>  tools/xl/xl.h   |   3 +
>  tools/xl/xl_cmdtable.c  |  15 +++
>  tools/xl/xl_parse.c |  75 ++-
>  tools/xl/xl_parse.h |   2 +-
>  tools/xl/xl_vkb.c   | 142 
>  16 files changed, 549 insertions(+), 56 deletions(-)
>  create mode 100644 tools/libxl/libxl_vkb.c
>  create mode 100644 tools/xl/xl_vkb.c
>
> --
> 2.7.4
>
>
ping

-- 
Best Regards,
Oleksandr Grytsov.


Re: [Xen-devel] [PATCH v1 0/5] libxl: add PV sound device

2017-11-14 Thread Oleksandr Grytsov
On Wed, Nov 1, 2017 at 5:04 PM, Oleksandr Grytsov  wrote:

> From: Oleksandr Grytsov 
>
> This patch set adds PV sound device support to xl.cfg and xl.
> See sndif.h for protocol implementation details.
>
> Changes since initial:
>  * fix code style
>  * change unique-id from int to string (to make id more user readable)
>
> Oleksandr Grytsov (5):
>   libxl: add PV sound device
>   libxl: add vsnd list and info
>   xl: add PV sound condif parser
>   xl: add vsnd CLI commands
>   docs: add PV sound device config
>
>  docs/man/xl.cfg.pod.5.in | 150 
>  docs/man/xl.pod.1.in |  30 ++
>  tools/libxl/Makefile |   2 +-
>  tools/libxl/libxl.h  |  24 ++
>  tools/libxl/libxl_create.c   |   1 +
>  tools/libxl/libxl_internal.h |   1 +
>  tools/libxl/libxl_types.idl  |  83 +
>  tools/libxl/libxl_types_internal.idl |   1 +
>  tools/libxl/libxl_utils.h|   3 +
>  tools/libxl/libxl_vsnd.c | 699 ++
> +
>  tools/xl/Makefile|   2 +-
>  tools/xl/xl.h|   3 +
>  tools/xl/xl_cmdtable.c   |  15 +
>  tools/xl/xl_parse.c  | 246 
>  tools/xl/xl_parse.h  |   1 +
>  tools/xl/xl_vsnd.c   | 203 ++
>  16 files changed, 1462 insertions(+), 2 deletions(-)
>  create mode 100644 tools/libxl/libxl_vsnd.c
>  create mode 100644 tools/xl/xl_vsnd.c
>
> --
> 2.7.4
>
>
ping

-- 
Best Regards,
Oleksandr Grytsov.


Re: [Xen-devel] [PATCH v3 for-4.10 1/2] x86/mm: fix potential race conditions in map_pages_to_xen().

2017-11-14 Thread Julien Grall

Hi,

On 14/11/17 08:20, Jan Beulich wrote:

On 14.11.17 at 07:53,  wrote:

From: Min He 

In map_pages_to_xen(), a L2 page table entry may be reset to point to
a superpage, and its corresponding L1 page table need be freed in such
scenario, when these L1 page table entries are mapping to consecutive
page frames and having the same mapping flags.

However, variable `pl1e` is not protected by the lock before L1 page table
is enumerated. A race condition may happen if this code path is invoked
simultaneously on different CPUs.

For example, `pl1e` value on CPU0 may hold an obsolete value, pointing
to a page which has just been freed on CPU1. Besides, before this page
is reused, it will still be holding the old PTEs, referencing consecutive
page frames. Consequently the `free_xen_pagetable(l2e_to_l1e(ol2e))` will
be triggered on CPU0, resulting the unexpected free of a normal page.

This patch fixes the above problem by protecting the `pl1e` with the lock.

Also, there're other potential race conditions. For instance, the L2/L3
entry may be modified concurrently on different CPUs, by routines such as
map_pages_to_xen(), modify_xen_mappings() etc. To fix this, this patch will
check the _PAGE_PRESENT and _PAGE_PSE flags, after the spinlock is obtained,
for the corresponding L2/L3 entry.

Signed-off-by: Min He 
Signed-off-by: Yi Zhang 
Signed-off-by: Yu Zhang 


Reviewed-by: Jan Beulich 


Please try to have a cover letter in the future when you have multiple
patches. This will make it easier to give comments/release-acks for all
the patches. Anyway, for the 2 patches:


Release-acked-by: Julien Grall 

Cheers,

--
Julien Grall
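
As a generic illustration of the check-after-lock pattern the patch above
applies (hypothetical names, not the Xen page-table code): values read before
taking the lock must be re-validated once the lock is held, because another
CPU may have changed or freed what they refer to in the meantime.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct entry {                      /* stand-in for an L2 entry */
    bool present;
    bool superpage;
    void *l1_table;
};

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Detach the old L1 table only if, under the lock, the entry is still
 * present and not yet a superpage; otherwise another CPU got there
 * first and the table must not be freed here. */
static void *detach_l1_for_free(struct entry *e)
{
    void *l1 = NULL;

    pthread_mutex_lock(&table_lock);
    if (e->present && !e->superpage) {
        l1 = e->l1_table;
        e->l1_table = NULL;
        e->superpage = true;
    }
    pthread_mutex_unlock(&table_lock);

    return l1;                      /* caller frees it outside the lock */
}

int main(void)
{
    struct entry e = { .present = true, .superpage = false,
                       .l1_table = malloc(16) };

    void *first = detach_l1_for_free(&e);
    void *second = detach_l1_for_free(&e);   /* NULL: already collapsed */
    printf("first=%p second=%p\n", first, second);
    free(first);
    return 0;
}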



[Xen-devel] [PATCH] tools: xentoolcore_restrict_all: Do deregistration before close

2017-11-14 Thread Ian Jackson
Closing the fd before unhooking it from the list runs the risk that a
concurrent thread calls xentoolcore_restrict_all will operate on the
old fd value, which might refer to a new fd by then.  So we need to do
it in the other order.

Sadly this weakens the guarantee provided by xentoolcore_restrict_all
slight, but not (I think) in a problematic way.  It would be possible
to implement the previous guarantee, but it would involve replacing
all of the close() calls in all of the individual osdep parts of all
of the individual libraries with calls to a new function which does
   dup2("/dev/null", thing->fd);
   pthread_mutex_lock(_lock);
   thing->fd = -1;
   pthread_mutex_unlock(_lock);
   close(fd);
which would be terribly tedious.
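
Fleshed out, such a helper might look like the sketch below; the names
(struct thing, handle_lock) are hypothetical and this is not part of
xentoolcore, it only illustrates the rejected alternative.

#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

struct thing {
    int fd;
};

static pthread_mutex_t handle_lock = PTHREAD_MUTEX_INITIALIZER;

static void thing_close_fd(struct thing *thing)
{
    int fd = thing->fd;
    int nullfd = open("/dev/null", O_RDWR);

    if (nullfd >= 0)
        dup2(nullfd, fd);        /* neutralise the fd before giving it up */

    pthread_mutex_lock(&handle_lock);
    thing->fd = -1;              /* unhook the number under the lock */
    pthread_mutex_unlock(&handle_lock);

    close(fd);                   /* only now may the number be reused */
    if (nullfd >= 0)
        close(nullfd);
}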

Signed-off-by: Ian Jackson 
---
 tools/libs/call/core.c | 4 ++--
 tools/libs/devicemodel/core.c  | 4 ++--
 tools/libs/evtchn/core.c   | 4 ++--
 tools/libs/foreignmemory/core.c| 4 ++--
 tools/libs/gnttab/gnttab_core.c| 4 ++--
 tools/libs/toolcore/include/xentoolcore.h  | 9 +
 tools/libs/toolcore/include/xentoolcore_internal.h | 6 --
 tools/xenstore/xs.c| 4 ++--
 8 files changed, 25 insertions(+), 14 deletions(-)

diff --git a/tools/libs/call/core.c b/tools/libs/call/core.c
index b256fce..f3a3400 100644
--- a/tools/libs/call/core.c
+++ b/tools/libs/call/core.c
@@ -59,8 +59,8 @@ xencall_handle *xencall_open(xentoollog_logger *logger, 
unsigned open_flags)
 return xcall;
 
 err:
-osdep_xencall_close(xcall);
 xentoolcore__deregister_active_handle(&xcall->tc_ah);
+osdep_xencall_close(xcall);
 xtl_logger_destroy(xcall->logger_tofree);
 free(xcall);
 return NULL;
@@ -73,8 +73,8 @@ int xencall_close(xencall_handle *xcall)
 if ( !xcall )
 return 0;
 
-rc = osdep_xencall_close(xcall);
 xentoolcore__deregister_active_handle(&xcall->tc_ah);
+rc = osdep_xencall_close(xcall);
 buffer_release_cache(xcall);
 xtl_logger_destroy(xcall->logger_tofree);
 free(xcall);
diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
index b66d4f9..355b7de 100644
--- a/tools/libs/devicemodel/core.c
+++ b/tools/libs/devicemodel/core.c
@@ -68,8 +68,8 @@ xendevicemodel_handle *xendevicemodel_open(xentoollog_logger 
*logger,
 
 err:
 xtl_logger_destroy(dmod->logger_tofree);
-xencall_close(dmod->xcall);
 xentoolcore__deregister_active_handle(&dmod->tc_ah);
+xencall_close(dmod->xcall);
 free(dmod);
 return NULL;
 }
@@ -83,8 +83,8 @@ int xendevicemodel_close(xendevicemodel_handle *dmod)
 
 rc = osdep_xendevicemodel_close(dmod);
 
-xencall_close(dmod->xcall);
 xentoolcore__deregister_active_handle(&dmod->tc_ah);
+xencall_close(dmod->xcall);
 xtl_logger_destroy(dmod->logger_tofree);
 free(dmod);
 return rc;
diff --git a/tools/libs/evtchn/core.c b/tools/libs/evtchn/core.c
index 2dba58b..aff6ecf 100644
--- a/tools/libs/evtchn/core.c
+++ b/tools/libs/evtchn/core.c
@@ -55,8 +55,8 @@ xenevtchn_handle *xenevtchn_open(xentoollog_logger *logger, 
unsigned open_flags)
 return xce;
 
 err:
-osdep_evtchn_close(xce);
 xentoolcore__deregister_active_handle(&xce->tc_ah);
+osdep_evtchn_close(xce);
 xtl_logger_destroy(xce->logger_tofree);
 free(xce);
 return NULL;
@@ -69,8 +69,8 @@ int xenevtchn_close(xenevtchn_handle *xce)
 if ( !xce )
 return 0;
 
-rc = osdep_evtchn_close(xce);
 xentoolcore__deregister_active_handle(&xce->tc_ah);
+rc = osdep_evtchn_close(xce);
 xtl_logger_destroy(xce->logger_tofree);
 free(xce);
 return rc;
diff --git a/tools/libs/foreignmemory/core.c b/tools/libs/foreignmemory/core.c
index 79b24d2..7c8562a 100644
--- a/tools/libs/foreignmemory/core.c
+++ b/tools/libs/foreignmemory/core.c
@@ -57,8 +57,8 @@ xenforeignmemory_handle 
*xenforeignmemory_open(xentoollog_logger *logger,
 return fmem;
 
 err:
-osdep_xenforeignmemory_close(fmem);
 xentoolcore__deregister_active_handle(&fmem->tc_ah);
+osdep_xenforeignmemory_close(fmem);
 xtl_logger_destroy(fmem->logger_tofree);
 free(fmem);
 return NULL;
@@ -71,8 +71,8 @@ int xenforeignmemory_close(xenforeignmemory_handle *fmem)
 if ( !fmem )
 return 0;
 
-rc = osdep_xenforeignmemory_close(fmem);
 xentoolcore__deregister_active_handle(&fmem->tc_ah);
+rc = osdep_xenforeignmemory_close(fmem);
 xtl_logger_destroy(fmem->logger_tofree);
 free(fmem);
 return rc;
diff --git a/tools/libs/gnttab/gnttab_core.c b/tools/libs/gnttab/gnttab_core.c
index 5f761e5..98f1591 100644
--- a/tools/libs/gnttab/gnttab_core.c
+++ b/tools/libs/gnttab/gnttab_core.c
@@ -54,8 +54,8 @@ xengnttab_handle *xengnttab_open(xentoollog_logger *logger, 
unsigned open_flags)
 return xgt;
 
 err:
-osdep_gnttab_close(xgt);
 xentoolcore__deregister_active_handle(&xgt->tc_ah);
+osdep_gnttab_close(xgt);
 

Re: [Xen-devel] [PATCH for-4.10] libs/evtchn: Remove active handler on clean-up or failure

2017-11-14 Thread Ian Jackson
Ross Lagerwall writes ("Re: [PATCH for-4.10] libs/evtchn: Remove active handler 
on clean-up or failure"):
> Now that I look at it, a similar scenario can happen during open. Since 
> the handle is registered before it is actually opened, a concurrent 
> xentoolcore_restrict_all() will try to restrict a handle that is not 
> properly set up.

I think this is not a problem because the handle has thing->fd = -1.
So the restrict call will be a no-op (or give EBADF).
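
For illustration, a minimal sketch of a per-handle restrict callback that tolerates an already-closed handle; all names here are invented for the sketch and are not the actual xentoolcore implementation:

/* Hedged sketch only. */
struct example_handle {
    int fd;                      /* -1 once the handle has been closed */
};

static int example_restrict(struct example_handle *h, unsigned int domid)
{
    if (h->fd < 0)
        return 0;                /* already closed: nothing to restrict */

    /* otherwise limit h->fd to the given domid, e.g. by dup2()ing a
     * restricted fd over it (details omitted in this sketch) */
    return example_restrict_fd(h->fd, domid);   /* hypothetical helper */
}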

Ian.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] libs/evtchn: Remove active handler on clean-up or failure

2017-11-14 Thread Julien Grall

Hi,

On 14/11/17 11:51, Ian Jackson wrote:

Ross Lagerwall writes ("Re: [PATCH for-4.10] libs/evtchn: Remove active handler on 
clean-up or failure"):

On 11/10/2017 05:10 PM, Julien Grall wrote:

Commit 89d55473ed16543044a31d1e0d4660cf5a3f49df "xentoolcore_restrict_all:
Implement for libxenevtchn" added a call to register a handler allowing
the event channel to be restricted.

However, the call to deregister the handler was not performed if open
failed or when closing the event channel. This will corrupt the list of
handlers and potentially crash the application later on.


Sorry for not spotting this during review.
The fix is correct as far as it goes, so:

Acked-by: Ian Jackson 


The call to xentoolcore_deregister_active_handle is done at the same
place as for the grants. But I am not convinced this is thread safe, as
there is a potential race between closing the event channel and
restricting the handler. Do we care about that?

...

However, I think it should call xentoolcore__deregister_active_handle()
_before_ calling osdep_evtchn_close() to avoid trying to restrict a
closed fd or some other fd that happens to have the same number.


You are right.  But this slightly weakens the guarantee provided by
xentoolcore_restrict_all.


I think all the other libs need to be fixed as well, unless there was a
reason it was done this way.


I will send a further patch.  In the meantime I suggest we apply
Julien's fix.


I am going to leave the decision to you and Wei. It feels a bit odd to 
release-ack my patch :).


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] libs/evtchn: Remove active handler on clean-up or failure

2017-11-14 Thread Ross Lagerwall

On 11/14/2017 11:51 AM, Ian Jackson wrote:

Ross Lagerwall writes ("Re: [PATCH for-4.10] libs/evtchn: Remove active handler on 
clean-up or failure"):

On 11/10/2017 05:10 PM, Julien Grall wrote:

Commit 89d55473ed16543044a31d1e0d4660cf5a3f49df "xentoolcore_restrict_all:
Implement for libxenevtchn" added a call to register a handler allowing
the event channel to be restricted.

However, the call to deregister the handler was not performed if open
failed or when closing the event channel. This will corrupt the list of
handlers and potentially crash the application later on.


Sorry for not spotting this during review.
The fix is correct as far as it goes, so:

Acked-by: Ian Jackson 


The call to xentoolcore_deregister_active_handle is done at the same
place as for the grants. But I am not convinced this is thread safe, as
there is a potential race between closing the event channel and
restricting the handler. Do we care about that?

...

However, I think it should call xentoolcore__deregister_active_handle()
_before_ calling osdep_evtchn_close() to avoid trying to restrict a
closed fd or some other fd that happens to have the same number.


You are right.  But this slightly weakens the guarantee provided by
xentoolcore_restrict_all.



Now that I look at it, a similar scenario can happen during open. Since 
the handle is registered before it is actually opened, a concurrent 
xentoolcore_restrict_all() will try to restrict a handle that is not 
properly set up.


I think it is OK if xentoolcore_restrict_all() works with any open 
handle where a handle is defined as open if it has _completed_ the call 
to e.g. xenevtchn_open() and has not yet called xenevtchn_close().
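
One way to get that property would be to register the handle only after the osdep open has succeeded. A minimal sketch, simplified and not what the current code does (logger setup and exact osdep signatures are omitted):

xenevtchn_handle *example_evtchn_open(void)
{
    xenevtchn_handle *xce = calloc(1, sizeof(*xce));

    if (!xce)
        return NULL;

    if (osdep_evtchn_open(xce) < 0) {   /* sets up xce->fd etc. */
        free(xce);
        return NULL;                    /* never registered */
    }

    /* Only now is the handle fully usable: make it visible to
     * xentoolcore_restrict_all(). */
    xentoolcore__register_active_handle(&xce->tc_ah);
    return xce;
}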


--
Ross Lagerwall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops

2017-11-14 Thread Juergen Gross
On 14/11/17 12:43, Quan Xu wrote:
> 
> 
> On 2017/11/14 18:27, Juergen Gross wrote:
>> On 14/11/17 10:38, Quan Xu wrote:
>>>
>>> On 2017/11/14 15:30, Juergen Gross wrote:
 On 14/11/17 08:02, Quan Xu wrote:
> On 2017/11/13 18:53, Juergen Gross wrote:
>> On 13/11/17 11:06, Quan Xu wrote:
>>> From: Quan Xu 
>>>
>>> So far, pv_idle_ops.poll is the only op in pv_idle. .poll is called
>>> in the idle path and will poll for a while before we enter the real
>>> idle state.
>>>
>>> In virtualization, the idle path includes several heavy operations,
>>> including timer access (LAPIC timer or TSC deadline timer), which
>>> hurt performance, especially for latency-intensive workloads like
>>> message-passing tasks. The cost is mainly from the vmexit, which is
>>> a hardware context switch between the virtual machine and the
>>> hypervisor. Our solution is to poll for a while and not enter the
>>> real idle path if we get a schedule event during polling.
>>>
>>> Polling may waste CPU, so we adopt a smart polling mechanism to
>>> reduce useless polling.
>>>
>>> Signed-off-by: Yang Zhang 
>>> Signed-off-by: Quan Xu 
>>> Cc: Juergen Gross 
>>> Cc: Alok Kataria 
>>> Cc: Rusty Russell 
>>> Cc: Thomas Gleixner 
>>> Cc: Ingo Molnar 
>>> Cc: "H. Peter Anvin" 
>>> Cc: x...@kernel.org
>>> Cc: virtualizat...@lists.linux-foundation.org
>>> Cc: linux-ker...@vger.kernel.org
>>> Cc: xen-de...@lists.xenproject.org
>> Hmm, is the idle entry path really so critical to performance that a
>> new
>> pvops function is necessary?
> Juergen, Here is the data we get when running benchmark netperf:
>    1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
>   29031.6 bit/s -- 76.1 %CPU
>
>    2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
>   35787.7 bit/s -- 129.4 %CPU
>
>    3. w/ kvm dynamic poll:
>   35735.6 bit/s -- 200.0 %CPU
>
>    4. w/patch and w/ kvm dynamic poll:
>   42225.3 bit/s -- 198.7 %CPU
>
>    5. idle=poll
>   37081.7 bit/s -- 998.1 %CPU
>
>
>
>    With this patch, we improve performance by 23%. We could even
>    improve performance by 45.4% if we use the patch together with kvm
>    dynamic poll. Also, the CPU cost is much lower than in the
>    'idle=poll' case.
 I don't question the general idea. I just think pvops isn't the best
 way
 to implement it.

>> Wouldn't a function pointer, maybe guarded
>> by a static key, be enough? A further advantage would be that this
>> would
>> work on other architectures, too.
> I assume this feature will be ported to other archs.. a new pvops
> makes
>>>    sorry, a typo.. /other archs/other hypervisors/
>>>    it refers to hypervisors like Xen, HyperV and VMware..
>>>
> the code clean and easy to maintain. I also tried to add it to the
> existing pvops, but it doesn't match.
 You are aware that pvops is x86 only?
>>> yes, I'm aware..
>>>
 I really don't see the big difference in maintainability compared to
 the
 static key / function pointer variant:

 void (*guest_idle_poll_func)(void);
 struct static_key guest_idle_poll_key __read_mostly;

 static inline void guest_idle_poll(void)
 {
  if (static_key_false(&guest_idle_poll_key))
  guest_idle_poll_func();
 }
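
For context, a hedged sketch of how a guest-side component (e.g. KVM guest support, as suggested later in this thread) might hook into this static-key variant; the kvm_* wiring names below are illustrative only:

/* Hedged sketch: wiring a hypervisor guest into the static-key variant. */
static void kvm_guest_poll(void)
{
        /* poll briefly for pending work before real idle; details omitted */
}

static void __init kvm_setup_idle_poll(void)
{
        guest_idle_poll_func = kvm_guest_poll;
        static_key_slow_inc(&guest_idle_poll_key);  /* enable the fast path */
}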
>>>
>>>
>>> thank you for your sample code :)
>>> I agree there is no big difference.. I think we are discussing two
>>> things:
>>>   1) x86 VM on different hypervisors
>>>   2) different archs VM on kvm hypervisor
>>>
>>> What I want to do is x86 VM on different hypervisors, such as kvm / xen
>>> / hyperv ..
>> Why limit the solution to x86 if the more general solution isn't
>> harder?
>>
>> As you didn't give any reason why the pvops approach is better other
>> than you don't care for non-x86 platforms you won't get an "Ack" from
>> me for this patch.
> 
> 
> It just looks a little odd to me. I understand you care about non-x86
> archs.
> 
> Are you aware of 'pv_time_ops' for arm64/arm/x86 archs, defined in
>    - arch/arm64/include/asm/paravirt.h
>    - arch/x86/include/asm/paravirt_types.h
>    - arch/arm/include/asm/paravirt.h

Yes, I know. This is just a hack to make it compile. Other than the
same names this has nothing to do with pvops, but is just a function
vector.

> I am unfamiliar with arm code. IIUC, if you'd implement pv_idle_ops
> for the arm/arm64 archs, you'd define the same structure in
>    - arch/arm64/include/asm/paravirt.h or
>    - arch/arm/include/asm/paravirt.h
> 
> .. instead of a static key / function.
> 
> then 

Re: [Xen-devel] [PATCH for-4.10] libs/evtchn: Remove active handler on clean-up or failure

2017-11-14 Thread Ian Jackson
Ross Lagerwall writes ("Re: [PATCH for-4.10] libs/evtchn: Remove active handler 
on clean-up or failure"):
> On 11/10/2017 05:10 PM, Julien Grall wrote:
> > Commit 89d55473ed16543044a31d1e0d4660cf5a3f49df "xentoolcore_restrict_all:
> > Implement for libxenevtchn" added a call to register a handler allowing
> > the event channel to be restricted.
> > 
> > However, the call to deregister the handler was not performed if open
> > failed or when closing the event channel. This will corrupt the list of
> > handlers and potentially crash the application later on.

Sorry for not spotting this during review.
The fix is correct as far as it goes, so:

Acked-by: Ian Jackson 

> > The call to xentoolcore_deregister_active_handle is done at the same
> > place as for the grants. But I am not convinced this is thread safe, as
> > there is a potential race between closing the event channel and
> > restricting the handler. Do we care about that?
...
> However, I think it should call xentoolcore__deregister_active_handle() 
> _before_ calling osdep_evtchn_close() to avoid trying to restrict a 
> closed fd or some other fd that happens to have the same number.

You are right.  But this slightly weakens the guarantee provided by
xentoolcore_restrict_all.
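
To make the ordering concrete, a hedged sketch of the two variants being discussed (names follow the libraries loosely; this is illustration, not code copied from the tree):

/* Racy ordering (what the tree had): the fd number may be closed, and even
 * reused by an unrelated open(), while restrict_all can still reach it. */
int close_then_deregister(xenevtchn_handle *xce)
{
    int rc = osdep_evtchn_close(xce);               /* fd number now free */
    xentoolcore__deregister_active_handle(&xce->tc_ah);
    return rc;
}

/* Safer ordering (what the follow-up patch does): once deregistered,
 * restrict_all can no longer dup2() over an fd we are about to close. */
int deregister_then_close(xenevtchn_handle *xce)
{
    xentoolcore__deregister_active_handle(&xce->tc_ah);
    return osdep_evtchn_close(xce);
}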

> I think all the other libs need to be fixed as well, unless there was a 
> reason it was done this way.

I will send a further patch.  In the meantime I suggest we apply
Julien's fix.

Ian.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops

2017-11-14 Thread Quan Xu



On 2017/11/14 18:27, Juergen Gross wrote:

On 14/11/17 10:38, Quan Xu wrote:


On 2017/11/14 15:30, Juergen Gross wrote:

On 14/11/17 08:02, Quan Xu wrote:

On 2017/11/13 18:53, Juergen Gross wrote:

On 13/11/17 11:06, Quan Xu wrote:

From: Quan Xu 

So far, pv_idle_ops.poll is the only op in pv_idle. .poll is called
in the idle path and will poll for a while before we enter the real
idle state.

In virtualization, the idle path includes several heavy operations,
including timer access (LAPIC timer or TSC deadline timer), which
hurt performance, especially for latency-intensive workloads like
message-passing tasks. The cost is mainly from the vmexit, which is
a hardware context switch between the virtual machine and the
hypervisor. Our solution is to poll for a while and not enter the
real idle path if we get a schedule event during polling.

Polling may waste CPU, so we adopt a smart polling mechanism to
reduce useless polling.

Signed-off-by: Yang Zhang 
Signed-off-by: Quan Xu 
Cc: Juergen Gross 
Cc: Alok Kataria 
Cc: Rusty Russell 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: "H. Peter Anvin" 
Cc: x...@kernel.org
Cc: virtualizat...@lists.linux-foundation.org
Cc: linux-ker...@vger.kernel.org
Cc: xen-de...@lists.xenproject.org

Hmm, is the idle entry path really so critical to performance that a
new
pvops function is necessary?

Juergen, Here is the data we get when running benchmark netperf:
   1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
  29031.6 bit/s -- 76.1 %CPU

   2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
  35787.7 bit/s -- 129.4 %CPU

   3. w/ kvm dynamic poll:
  35735.6 bit/s -- 200.0 %CPU

   4. w/patch and w/ kvm dynamic poll:
  42225.3 bit/s -- 198.7 %CPU

   5. idle=poll
  37081.7 bit/s -- 998.1 %CPU



   With this patch, we improve performance by 23%. We could even improve
   performance by 45.4% if we use the patch together with kvm dynamic poll.
   Also, the CPU cost is much lower than in the 'idle=poll' case.

I don't question the general idea. I just think pvops isn't the best way
to implement it.


Wouldn't a function pointer, maybe guarded
by a static key, be enough? A further advantage would be that this
would
work on other architectures, too.

I assume this feature will be ported to other archs.. a new pvops makes

   sorry, a typo.. /other archs/other hypervisors/
   it refers to hypervisors like Xen, HyperV and VMware..


the code clean and easy to maintain. I also tried to add it to the
existing pvops, but it doesn't match.

You are aware that pvops is x86 only?

yes, I'm aware..


I really don't see the big difference in maintainability compared to the
static key / function pointer variant:

void (*guest_idle_poll_func)(void);
struct static_key guest_idle_poll_key __read_mostly;

static inline void guest_idle_poll(void)
{
 if (static_key_false(&guest_idle_poll_key))
     guest_idle_poll_func();
}



thank you for your sample code :)
I agree there is no big difference.. I think we are discussing two
things:
  1) x86 VM on different hypervisors
  2) different archs VM on kvm hypervisor

What I want to do is x86 VM on different hypervisors, such as kvm / xen
/ hyperv ..

Why limit the solution to x86 if the more general solution isn't
harder?

As you didn't give any reason why the pvops approach is better other
than you don't care for non-x86 platforms you won't get an "Ack" from
me for this patch.



It just looks a little odd to me. I understand you care about non-x86 archs.

Are you aware of 'pv_time_ops' for arm64/arm/x86 archs, defined in
   - arch/arm64/include/asm/paravirt.h
   - arch/x86/include/asm/paravirt_types.h
   - arch/arm/include/asm/paravirt.h

I am unfamiliar with arm code. IIUC, if you'd implement pv_idle_ops
for the arm/arm64 archs, you'd define the same structure in
   - arch/arm64/include/asm/paravirt.h or
   - arch/arm/include/asm/paravirt.h

.. instead of a static key / function.

then implement a real function in
   - arch/arm/kernel/paravirt.c.

Also I wonder HOW/WHERE to define a static key/function so as to benefit
both x86 and non-x86 archs?

Quan
Alibaba Cloud


And KVM would just need to set guest_idle_poll_func and enable the
static key. Works on non-x86 architectures, too.


.. referring to 'pv_mmu_ops', HyperV and Xen can implement their own
functions for 'pv_mmu_ops'.
I think it is the same for pv_idle_ops.

with the above explanation, do you still think I need to define the static
key/function pointer variant?

btw, any interest in porting it to a Xen HVM guest? :)

Maybe. But this should work for Xen on ARM, too.


Juergen




___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 1/4 v3 for-4.10] libxl: Fix the bug introduced in commit "libxl: use correct type modifier for vuart_gfn"

2017-11-14 Thread Bhupinder Thakur
Hi,

On 14 Nov 2017 3:35 pm, "Wei Liu"  wrote:

> On Mon, Nov 13, 2017 at 03:56:23PM +, Julien Grall wrote:
> > Hi Wei,
> >
> > Sorry I missed that e-mail.
> >
> > On 10/31/2017 10:07 AM, Wei Liu wrote:
> > > Change the tag to for-4.10.
> > >
> > > Julien, this is needed to fix vuart emulation.
> >
> > To confirm, only patch #1 is candidate for Xen 4.10, right? The rest
> will be
> > queued for Xen 4.11?
> >
>
> I think so.
>
> Bhupinder, can you confirm that?
>

Yes. Only the first patch is required to fix the compilation issue.

Regards,
Bhupinder
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [qemu-mainline test] 116146: regressions - FAIL

2017-11-14 Thread osstest service owner
flight 116146 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116146/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-xsm   7 xen-boot fail REGR. vs. 116126

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2 17 guest-start/debian.repeatfail like 116107
 test-armhf-armhf-xl-vhd  15 guest-start/debian.repeatfail  like 116107
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116126
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116126
 test-amd64-amd64-libvirt-vhd 17 guest-start/debian.repeatfail  like 116126
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116126
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 116126
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116126
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116126
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 qemuu7edaf99759017d3e175e37cffc3536e86a3bd380
baseline version:
 qemuu4ffa88c99c54d2a30f79e3dbecec50b023eff1c8

Last test of basis   116126  2017-11-13 00:49:34 Z1 days
Testing same since   116146  2017-11-13 18:53:48 Z0 days1 attempts


People who touched revisions under test:
  Alistair Francis 
  Christian Borntraeger 
  Cornelia Huck 
  Eric Blake 
  Peter Maydell 
  Philippe Mathieu-Daudé 
  Richard Henderson 
  Samuel Thibault 
  Sergio Lopez 
  Stefan Hajnoczi 
  Tao Wu 
  Vladimir Sementsov-Ogievskiy 
  Yi Min Zhao 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt   

Re: [Xen-devel] [RFC PATCH 00/31] CPUFreq on ARM

2017-11-14 Thread Andre Przywara
Hi,

On 13/11/17 19:40, Oleksandr Tyshchenko wrote:
> On Mon, Nov 13, 2017 at 5:21 PM, Andre Przywara
>  wrote:
>> Hi,
> Hi Andre
> 
>>
>> thanks very much for your work on this!
> Thank you for your comments.
> 
>>
>> On 09/11/17 17:09, Oleksandr Tyshchenko wrote:
>>> From: Oleksandr Tyshchenko 
>>>
>>> Hi, all.
>>>
>>> The purpose of this RFC patch series is to add CPUFreq support to Xen on 
>>> ARM.
>>> The motivation for hypervisor-based CPUFreq is to enable one of the main PM 
>>> use-cases in a virtualized system powered by the Xen hypervisor. The rationale 
>>> behind this activity is that CPU virtualization is done by the hypervisor and 
>>> the guest OS doesn't actually know anything about physical CPUs because it is 
>>> running on virtual CPUs. It is quite clear that a decision about a frequency 
>>> change should be taken by the hypervisor, as only it has information about the 
>>> actual CPU load.
>>
>> Can you please sketch your usage scenario or workloads here? I can think
>> of quite different scenarios (oversubscribed server vs. partitioning
>> RTOS guests, for instance). The usefulness of CPUFreq and the trade-offs
>> in the design are quite different between those.
> We keep embedded use-cases in mind. For example, it is a system with
> several domains,
> where one domain has the most critical SW running on it and other domain(s)
> are, let's say, for entertainment purposes.
> I think CPUFreq is useful wherever power consumption is a concern.

Does the SoC you use allow different frequencies for each core? Or is it
one frequency for all cores? Most x86 CPUs allow different frequencies
for each core, AFAIK. Just having the same OPP for the whole SoC might
limit the usefulness of this approach in general.

>> In general I doubt that a hypervisor scheduling vCPUs is in a good
>> position to make a decision on the proper frequency physical CPUs should
>> run with. From all I know it's already hard for an OS kernel to make
>> that call. So I would actually expect that guests provide some input,
>> for instance by signalling OPP change request up to the hypervisor. This
>> could then decide to act on it - or not.
> Each running guest sees only part of the picture, but the hypervisor has
> the whole picture: it knows all about the CPUs, measures CPU load and is
> able to choose the required CPU frequency to run at.

But based on what data? All Xen sees is a vCPU trapping on MMIO, a
hypercall or on WFI, for that matter. It does not know much more about
the guest, especially it's rather clueless about what the guest OS
actually intended to do.
For instance Linux can track the actual utilization of a core by keeping
statistics of runnable processes and monitoring their time slice usage.
It can see that a certain process exhibits periodical, but bursty CPU
usage, which may hint that is could run at lower frequency. Xen does not
see this fine granular information.

> I am wondering, does Xen
> need additional input from guests to make a decision?

I very much believe so. The guest OS is in a much better position to
make that call.

> BTW, currently a guest domain on ARM doesn't even know how many physical
> CPUs the system has or what their OPPs are. When creating a guest
> domain Xen inserts only dummy CPU nodes. All CPU info, such as clocks,
> OPPs, thermal, etc., is not passed to the guest.

Sure, because this is what virtualization is about. And I am not asking
for unconditionally allowing any guest to change frequency.
But there could be certain use cases where this could be considered:
Think about your "critical SW" mentioned above, which is probably some
RTOS, also possibly running on pinned vCPUs. For that
(latency-sensitive) guest it might be well suited to run at a lower
frequency for some time, but how should Xen know about this?
"Normally" the best strategy to save power is to run as fast as
possible, finish all outstanding work, then put the core to sleep.
Because not running at all consumes much less energy than running at a
reduced frequency. But this may not be suitable for an RTOS.

So I think we would need a combined approach:
a) Let an administrator (via tools running in Dom0) tell Xen about power
management strategies to use for certain guests. An RTOS could be
treated differently (lower, but constant frequency) than an
"entertainment" guest (varying frequency, based on guest OS input), also
differently than some background guest doing logging, OTA update, etc.
(constant high frequency, but putting cores to sleep instead as often as
possible).
b) Allow some guests (based on policy from (a)) to signal CPUFreq change
requests to the hypervisor. Xen takes those into account, though it may
decide to not act immediately on it, because it is going to schedule
another vCPU, for instance.
c) Have some way of actually realising certain OPPs. This could be via
an SCPI client in Xen, or some other way. Might be an implementation detail.
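
As a purely hypothetical illustration of the guest-visible side of (b), a request a guest might hand to Xen could look roughly like the sketch below; nothing like this exists in Xen today, and every name in it is invented:

/* Hypothetical sketch only -- no such interface exists. */
#include <stdint.h>

#define GUEST_CPUFREQ_HINT_MIN   0   /* run me as slowly as acceptable  */
#define GUEST_CPUFREQ_HINT_MAX   1   /* burst: finish work, then sleep  */
#define GUEST_CPUFREQ_HINT_LEVEL 2   /* request a specific OPP index    */

struct guest_cpufreq_hint {
    uint32_t vcpu;        /* which vCPU the hint is about            */
    uint32_t kind;        /* one of the GUEST_CPUFREQ_HINT_* values  */
    uint32_t opp_index;   /* only meaningful for ..._LEVEL           */
};

/* The guest would pass this via a (hypothetical) hypercall; Xen would be
 * free to honour, defer, or ignore it based on the per-domain policy
 * configured from Dom0 as in point (a) above. */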

>>> Although these required components (CPUFreq core, 

Re: [Xen-devel] [PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops

2017-11-14 Thread Juergen Gross
On 14/11/17 10:38, Quan Xu wrote:
> 
> 
> On 2017/11/14 15:30, Juergen Gross wrote:
>> On 14/11/17 08:02, Quan Xu wrote:
>>>
>>> On 2017/11/13 18:53, Juergen Gross wrote:
 On 13/11/17 11:06, Quan Xu wrote:
> From: Quan Xu 
>
> So far, pv_idle_ops.poll is the only op in pv_idle. .poll is called
> in the idle path and will poll for a while before we enter the real
> idle state.
>
> In virtualization, the idle path includes several heavy operations,
> including timer access (LAPIC timer or TSC deadline timer), which
> hurt performance, especially for latency-intensive workloads like
> message-passing tasks. The cost is mainly from the vmexit, which is
> a hardware context switch between the virtual machine and the
> hypervisor. Our solution is to poll for a while and not enter the
> real idle path if we get a schedule event during polling.
>
> Polling may waste CPU, so we adopt a smart polling mechanism to
> reduce useless polling.
>
> Signed-off-by: Yang Zhang 
> Signed-off-by: Quan Xu 
> Cc: Juergen Gross 
> Cc: Alok Kataria 
> Cc: Rusty Russell 
> Cc: Thomas Gleixner 
> Cc: Ingo Molnar 
> Cc: "H. Peter Anvin" 
> Cc: x...@kernel.org
> Cc: virtualizat...@lists.linux-foundation.org
> Cc: linux-ker...@vger.kernel.org
> Cc: xen-de...@lists.xenproject.org
 Hmm, is the idle entry path really so critical to performance that a
 new
 pvops function is necessary?
>>> Juergen, Here is the data we get when running benchmark netperf:
>>>   1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
>>>  29031.6 bit/s -- 76.1 %CPU
>>>
>>>   2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
>>>  35787.7 bit/s -- 129.4 %CPU
>>>
>>>   3. w/ kvm dynamic poll:
>>>  35735.6 bit/s -- 200.0 %CPU
>>>
>>>   4. w/patch and w/ kvm dynamic poll:
>>>  42225.3 bit/s -- 198.7 %CPU
>>>
>>>   5. idle=poll
>>>  37081.7 bit/s -- 998.1 %CPU
>>>
>>>
>>>
>>>   With this patch, we improve performance by 23%. We could even
>>>   improve performance by 45.4% if we use the patch together with kvm
>>>   dynamic poll. Also, the CPU cost is much lower than in the
>>>   'idle=poll' case.
>> I don't question the general idea. I just think pvops isn't the best way
>> to implement it.
>>
 Wouldn't a function pointer, maybe guarded
 by a static key, be enough? A further advantage would be that this
 would
 work on other architectures, too.
>>> I assume this feature will be ported to other archs.. a new pvops makes
> 
>   sorry, a typo.. /other archs/other hypervisors/
>   it refers to hypervisors like Xen, HyperV and VMware..
> 
>>> the code clean and easy to maintain. I also tried to add it to the
>>> existing pvops, but it doesn't match.
>> You are aware that pvops is x86 only?
> 
> yes, I'm aware..
> 
>> I really don't see the big difference in maintainability compared to the
>> static key / function pointer variant:
>>
>> void (*guest_idle_poll_func)(void);
>> struct static_key guest_idle_poll_key __read_mostly;
>>
>> static inline void guest_idle_poll(void)
>> {
>> if (static_key_false(&guest_idle_poll_key))
>>     guest_idle_poll_func();
>> }
> 
> 
> 
> thank you for your sample code :)
> I agree there is no big difference.. I think we are discussing two
> things:
>  1) x86 VM on different hypervisors
>  2) different archs VM on kvm hypervisor
> 
> What I want to do is x86 VM on different hypervisors, such as kvm / xen
> / hyperv ..

Why limit the solution to x86 if the more general solution isn't
harder?

As you didn't give any reason why the pvops approach is better other
than you don't care for non-x86 platforms you won't get an "Ack" from
me for this patch.

> 
>> And KVM would just need to set guest_idle_poll_func and enable the
>> static key. Works on non-x86 architectures, too.
>>
> 
> .. referring to 'pv_mmu_ops', HyperV and Xen can implement their own
> functions for 'pv_mmu_ops'.
> I think it is the same for pv_idle_ops.
> 
> with the above explanation, do you still think I need to define the static
> key/function pointer variant?
> 
> btw, any interest in porting it to a Xen HVM guest? :)

Maybe. But this should work for Xen on ARM, too.


Juergen

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops

2017-11-14 Thread Quan Xu



On 2017/11/14 16:22, Wanpeng Li wrote:

2017-11-14 16:15 GMT+08:00 Quan Xu :


On 2017/11/14 15:12, Wanpeng Li wrote:

2017-11-14 15:02 GMT+08:00 Quan Xu :


On 2017/11/13 18:53, Juergen Gross wrote:

On 13/11/17 11:06, Quan Xu wrote:

From: Quan Xu 

So far, pv_idle_ops.poll is the only op in pv_idle. .poll is called
in the idle path and will poll for a while before we enter the real
idle state.

In virtualization, the idle path includes several heavy operations,
including timer access (LAPIC timer or TSC deadline timer), which
hurt performance, especially for latency-intensive workloads like
message-passing tasks. The cost is mainly from the vmexit, which is
a hardware context switch between the virtual machine and the
hypervisor. Our solution is to poll for a while and not enter the
real idle path if we get a schedule event during polling.

Polling may waste CPU, so we adopt a smart polling mechanism to
reduce useless polling.

Signed-off-by: Yang Zhang 
Signed-off-by: Quan Xu 
Cc: Juergen Gross 
Cc: Alok Kataria 
Cc: Rusty Russell 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: "H. Peter Anvin" 
Cc: x...@kernel.org
Cc: virtualizat...@lists.linux-foundation.org
Cc: linux-ker...@vger.kernel.org
Cc: xen-de...@lists.xenproject.org

Hmm, is the idle entry path really so critical to performance that a new
pvops function is necessary?

Juergen, Here is the data we get when running benchmark netperf:
   1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
  29031.6 bit/s -- 76.1 %CPU

   2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
  35787.7 bit/s -- 129.4 %CPU

   3. w/ kvm dynamic poll:
  35735.6 bit/s -- 200.0 %CPU

Actually we can reduce the CPU utilization by sleeping for a period of
time, as has already been done in the poll logic of the IO subsystem;
then we can improve the algorithm in kvm instead of introducing another,
duplicate one in the kvm guest.

We really appreciate upstream's kvm dynamic poll mechanism, which is
really helpful for a lot of scenarios..

However, as the description said, in virtualization the idle path
includes several heavy operations, including timer access (LAPIC timer
or TSC deadline timer), which hurt performance, especially for
latency-intensive workloads like message-passing tasks. The cost is
mainly from the vmexit, which is a hardware context switch between the
virtual machine and the hypervisor.

For upstream's kvm dynamic poll mechanism, even if you could provide a
better algorithm, how could you bypass timer access (LAPIC timer or TSC
deadline timer), or a hardware context switch between the virtual
machine and the hypervisor? I know this is a tradeoff.

Furthermore, here is the data we get when running the contextswitch
benchmark to measure latency (lower is better):

1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
   3402.9 ns/ctxsw -- 199.8 %CPU

2. w/ patch and disable kvm dynamic poll:
   1163.5 ns/ctxsw -- 205.5 %CPU

3. w/ kvm dynamic poll:
   2280.6 ns/ctxsw -- 199.5 %CPU

so, these two solutions are quite similar, but not duplicates..

that's also why we add a generic idle poll before entering the real idle path.
When a reschedule event is pending, we can bypass the real idle path.
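
As a rough illustration of the idea being described, here is a minimal sketch of a bounded pre-idle poll; the window size, its adaptation policy and all names are assumptions for the sketch, not the actual patch:

/* Hedged sketch only. */
static unsigned int guest_poll_ns = 50000;       /* assumed default window */

static void guest_poll_before_idle(void)
{
        u64 start = ktime_get_ns();
        bool hit = false;

        /* Spin until work arrives or the poll window expires. */
        while (ktime_get_ns() - start < guest_poll_ns) {
                if (need_resched()) {
                        hit = true;              /* poll paid off: skip real idle */
                        break;
                }
                cpu_relax();
        }

        /* A "smart" variant grows the window after a hit and shrinks it
         * after a miss, bounding the CPU wasted on useless polling. */
        if (hit)
                guest_poll_ns = min(guest_poll_ns * 2, 500000u);
        else
                guest_poll_ns = max(guest_poll_ns / 2, 10000u);
}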


There is similar logic in the idle governor/driver, so how does this
patchset influence the decision in the idle governor/driver when
running on bare metal (power management is not exposed to the guest, so
we will not enter the idle driver in the guest)?



This is expected to take effect only when running as a virtual machine with
proper CONFIG_* enabled. This cannot work on bare metal even with proper
CONFIG_* enabled.

Quan
Alibaba Cloud

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [seabios test] 116148: regressions - FAIL

2017-11-14 Thread osstest service owner
flight 116148 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116148/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 115539

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 115539
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115539
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 115539
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 seabios  63451fca13c75870e1703eb3e20584d91179aebc
baseline version:
 seabios  0ca6d6277dfafc671a5b3718cbeb5c78e2a888ea

Last test of basis   115539  2017-11-03 20:48:58 Z   10 days
Testing same since   115733  2017-11-10 17:19:59 Z3 days6 attempts


People who touched revisions under test:
  Kevin O'Connor 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-qemuu-nested-amdfail
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-ws16-amd64 fail
 test-amd64-i386-xl-qemuu-ws16-amd64  fail
 test-amd64-amd64-xl-qemuu-win10-i386 fail
 test-amd64-i386-xl-qemuu-win10-i386  fail
 test-amd64-amd64-qemuu-nested-intel  pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit 63451fca13c75870e1703eb3e20584d91179aebc
Author: Kevin O'Connor 
Date:   Fri Nov 10 11:49:19 2017 -0500

docs: Note v1.11.0 release

Signed-off-by: Kevin O'Connor 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 1/4 v3 for-4.10] libxl: Fix the bug introduced in commit "libxl: use correct type modifier for vuart_gfn"

2017-11-14 Thread Wei Liu
On Mon, Nov 13, 2017 at 03:56:23PM +, Julien Grall wrote:
> Hi Wei,
> 
> Sorry I missed that e-mail.
> 
> On 10/31/2017 10:07 AM, Wei Liu wrote:
> > Change the tag to for-4.10.
> > 
> > Julien, this is needed to fix vuart emulation.
> 
> To confirm, only patch #1 is candidate for Xen 4.10, right? The rest will be
> queued for Xen 4.11?
> 

I think so. 

Bhupinder, can you confirm that?

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v5 02/29] Replace all occurances of __FUNCTION__ with __func__

2017-11-14 Thread Gerd Hoffmann
On Mon, Nov 13, 2017 at 02:34:42PM -0800, Alistair Francis wrote:
> Replace all occurrences of __FUNCTION__, except for the check in checkpatch,
> with the non-GCC-specific __func__.
> 
> One line in hcd-musb.c was manually tweaked to pass checkpatch.
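
For readers unfamiliar with the distinction, a minimal before/after of the kind of change this patch makes; the function below is invented for the example (__FUNCTION__ is a GCC extension, __func__ is standard C99):

#include <stdio.h>

static void example(void)       /* hypothetical function for illustration */
{
    /* before: printf("%s: starting\n", __FUNCTION__); */
    printf("%s: starting\n", __func__);
}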
> 
> Signed-off-by: Alistair Francis 
> Cc: Gerd Hoffmann 
> Cc: Andrzej Zaborowski 
> Cc: Stefano Stabellini 
> Cc: Anthony Perard 
> Cc: John Snow 
> Cc: Aurelien Jarno 
> Cc: Yongbok Kim 
> Cc: Peter Crosthwaite 
> Cc: Stefan Hajnoczi 
> Cc: Fam Zheng 
> Cc: Juan Quintela 
> Cc: "Dr. David Alan Gilbert" 
> Cc: qemu-...@nongnu.org
> Cc: qemu-bl...@nongnu.org
> Cc: xen-de...@lists.xenproject.org
> Reviewed-by: Eric Blake 
> Reviewed-by: Stefan Hajnoczi 
> Reviewed-by: Anthony PERARD 
> Reviewed-by: Juan Quintela 

Reviewed-by: Gerd Hoffmann 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops

2017-11-14 Thread Quan Xu



On 2017/11/14 15:30, Juergen Gross wrote:

On 14/11/17 08:02, Quan Xu wrote:


On 2017/11/13 18:53, Juergen Gross wrote:

On 13/11/17 11:06, Quan Xu wrote:

From: Quan Xu 

So far, pv_idle_ops.poll is the only op in pv_idle. .poll is called
in the idle path and will poll for a while before we enter the real
idle state.

In virtualization, the idle path includes several heavy operations,
including timer access (LAPIC timer or TSC deadline timer), which
hurt performance, especially for latency-intensive workloads like
message-passing tasks. The cost is mainly from the vmexit, which is
a hardware context switch between the virtual machine and the
hypervisor. Our solution is to poll for a while and not enter the
real idle path if we get a schedule event during polling.

Polling may waste CPU, so we adopt a smart polling mechanism to
reduce useless polling.

Signed-off-by: Yang Zhang 
Signed-off-by: Quan Xu 
Cc: Juergen Gross 
Cc: Alok Kataria 
Cc: Rusty Russell 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: "H. Peter Anvin" 
Cc: x...@kernel.org
Cc: virtualizat...@lists.linux-foundation.org
Cc: linux-ker...@vger.kernel.org
Cc: xen-de...@lists.xenproject.org

Hmm, is the idle entry path really so critical to performance that a new
pvops function is necessary?

Juergen, Here is the data we get when running benchmark netperf:
  1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
     29031.6 bit/s -- 76.1 %CPU

  2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
     35787.7 bit/s -- 129.4 %CPU

  3. w/ kvm dynamic poll:
     35735.6 bit/s -- 200.0 %CPU

  4. w/patch and w/ kvm dynamic poll:
     42225.3 bit/s -- 198.7 %CPU

  5. idle=poll
     37081.7 bit/s -- 998.1 %CPU



  With this patch, we improve performance by 23%. We could even improve
  performance by 45.4% if we use the patch together with kvm dynamic poll.
  Also, the CPU cost is much lower than in the 'idle=poll' case.

I don't question the general idea. I just think pvops isn't the best way
to implement it.


Wouldn't a function pointer, maybe guarded
by a static key, be enough? A further advantage would be that this would
work on other architectures, too.

I assume this feature will be ported to other archs.. a new pvops makes


  sorry, a typo.. /other archs/other hypervisors/
  it refers to hypervisors like Xen, HyperV and VMware..


the code clean and easy to maintain. I also tried to add it to the
existing pvops, but it doesn't match.

You are aware that pvops is x86 only?


yes, I'm aware..


I really don't see the big difference in maintainability compared to the
static key / function pointer variant:

void (*guest_idle_poll_func)(void);
struct static_key guest_idle_poll_key __read_mostly;

static inline void guest_idle_poll(void)
{
if (static_key_false(&guest_idle_poll_key))
guest_idle_poll_func();
}




thank you for your sample code :)
I agree there is no big difference.. I think we are discussing two
things:

 1) x86 VM on different hypervisors
 2) different archs VM on kvm hypervisor

What I want to do is x86 VM on different hypervisors, such as kvm / xen 
/ hyperv ..



And KVM would just need to set guest_idle_poll_func and enable the
static key. Works on non-x86 architectures, too.



.. referring to 'pv_mmu_ops', HyperV and Xen can implement their own
functions for 'pv_mmu_ops'.

I think it is the same for pv_idle_ops.

with the above explanation, do you still think I need to define the static
key/function pointer variant?

btw, any interest in porting it to a Xen HVM guest? :)

Quan
Alibaba Cloud

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [linux-4.1 test] 116145: tolerable FAIL - PUSHED

2017-11-14 Thread osstest service owner
flight 116145 linux-4.1 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116145/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 19 guest-start/debian.repeat fail in 116104 pass in 
116145
 test-armhf-armhf-xl-arndale   7 xen-boot fail in 116104 pass in 116145
 test-amd64-i386-libvirt-qcow2 18 guest-start.2   fail in 116124 pass in 116104
 test-amd64-amd64-libvirt-vhd 17 guest-start/debian.repeat fail in 116124 pass 
in 116145
 test-armhf-armhf-xl-vhd  15 guest-start/debian.repeat  fail pass in 115693
 test-amd64-i386-libvirt-qcow2 17 guest-start/debian.repeat fail pass in 116124

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail like 114646
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 114665
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 114665
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 114665
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 114665
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 114665
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 114665
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux200d858d94b4d8ed7a287e3a3c2b860ae9e17e83
baseline version:
 linuxb8342068e3011832d723aa379a3180d37a4d59df

Last test of basis   114665  2017-10-17 22:46:39 Z   27 days
Testing same since   115693  2017-11-09 08:22:41 Z5 days7 attempts


People who touched revisions under test:
  Adrian Salido 
  Afzal Mohammed 
  Al Viro 
  Alan Stern 
  Alden Tondettar 
  Alexander Potapenko 

Re: [Xen-devel] [PATCH] xen/pvcalls: fix potential endless loop in pvcalls-front.c

2017-11-14 Thread Juergen Gross
On 13/11/17 19:33, Stefano Stabellini wrote:
> On Mon, 13 Nov 2017, Juergen Gross wrote:
>> On 11/11/17 00:57, Stefano Stabellini wrote:
>>> On Tue, 7 Nov 2017, Juergen Gross wrote:
 On 06/11/17 23:17, Stefano Stabellini wrote:
> mutex_trylock() returns 1 if you take the lock and 0 if not. Assume you
> take in_mutex on the first try, but you can't take out_mutex. The next time
> you call mutex_trylock(), in_mutex is going to fail. It's an endless
> loop.
>
> Solve the problem by moving the two mutex_trylock calls to two separate
> loops.
>
> Reported-by: Dan Carpenter 
> Signed-off-by: Stefano Stabellini 
> CC: boris.ostrov...@oracle.com
> CC: jgr...@suse.com
> ---
>  drivers/xen/pvcalls-front.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
> index 0c1ec68..047dce7 100644
> --- a/drivers/xen/pvcalls-front.c
> +++ b/drivers/xen/pvcalls-front.c
> @@ -1048,8 +1048,9 @@ int pvcalls_front_release(struct socket *sock)
>* is set to NULL -- we only need to wait for the existing
>* waiters to return.
>*/
> - while (!mutex_trylock(&map->active.in_mutex) ||
> -!mutex_trylock(&map->active.out_mutex))
> + while (!mutex_trylock(&map->active.in_mutex))
> + cpu_relax();
> + while (!mutex_trylock(&map->active.out_mutex))
>   cpu_relax();

 Any reason you don't just use mutex_lock()?
>>>
>>> Hi Juergen, sorry for the late reply.
>>>
>>> Yes, you are right. Given the patch, it would be just the same to use
>>> mutex_lock.
>>>
>>> This is where I realized that actually we have a problem: no matter if
>>> we use mutex_lock or mutex_trylock, there are no guarantees that we'll
>>> be the last to take the in/out_mutex. Other waiters could be still
>>> outstanding.
>>>
>>> We solved the same problem using a refcount in pvcalls_front_remove. In
>>> this case, I was thinking of reusing the mutex internal counter for
>>> efficiency, instead of adding one more refcount.
>>>
>>> For using the mutex as a refcount, there is really no need to call
>>> mutex_trylock or mutex_lock. I suggest checking on the mutex counter
>>> directly:
>>>
>>>
>>> while (atomic_long_read(&map->active.in_mutex.owner) != 0UL ||
>>>atomic_long_read(&map->active.out_mutex.owner) != 0UL)
>>> cpu_relax();
>>>
>>> Cheers,
>>>
>>> Stefano
>>>
>>>
>>> ---
>>>
>>> xen/pvcalls: fix potential endless loop in pvcalls-front.c
>>>
>>> mutex_trylock() returns 1 if you take the lock and 0 if not. Assume you
>>> take in_mutex on the first try, but you can't take out_mutex. Next time
>>> you call mutex_trylock() in_mutex is going to fail. It's an endless
>>> loop.
>>>
>>> Actually, we don't want to use mutex_trylock at all: we don't need to
>>> take the mutex, we only need to wait until the last mutex waiter/holder
>>> releases it.
>>>
>>> Instead of calling mutex_trylock or mutex_lock, just check on the mutex
>>> refcount instead.
>>>
>>> Reported-by: Dan Carpenter 
>>> Signed-off-by: Stefano Stabellini 
>>> CC: boris.ostrov...@oracle.com
>>> CC: jgr...@suse.com
>>>
>>> diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
>>> index 0c1ec68..9f33cb8 100644
>>> --- a/drivers/xen/pvcalls-front.c
>>> +++ b/drivers/xen/pvcalls-front.c
>>> @@ -1048,8 +1048,8 @@ int pvcalls_front_release(struct socket *sock)
>>>  * is set to NULL -- we only need to wait for the existing
>>>  * waiters to return.
>>>  */
>>> -   while (!mutex_trylock(&map->active.in_mutex) ||
>>> -  !mutex_trylock(&map->active.out_mutex))
>>> +   while (atomic_long_read(&map->active.in_mutex.owner) != 0UL ||
>>> +  atomic_long_read(&map->active.out_mutex.owner) != 0UL)
>>
>> I don't like this.
>>
>> Can't you use a kref here? Even if it looks like more overhead it is
>> much cleaner. There will be no questions regarding possible races,
>> while an approach like yours will always smell racy (can't someone
>> take the mutex just after the above test?).
>>
>> In no case should you make use of the mutex internals.
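
For reference, a minimal sketch of the kref-style approach being suggested here; all names are illustrative and this is not the actual pvcalls-front code:

/* Hedged sketch: count users with a kref and wait for the last one,
 * instead of peeking at mutex internals. */
struct active_users {
        struct kref kref;
        struct completion all_gone;
};

static void active_users_release(struct kref *kref)
{
        struct active_users *u = container_of(kref, struct active_users, kref);

        complete(&u->all_gone);
}

/* send/recv paths wrap their work in:
 *   kref_get(&u->kref); ... do I/O ...; kref_put(&u->kref, active_users_release);
 */

static void wait_for_active_users(struct active_users *u)
{
        /* Drop the initial reference taken at setup, then wait. */
        kref_put(&u->kref, active_users_release);
        wait_for_completion(&u->all_gone);
}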
> 
> Boris' suggestion solves that problem well. Would you be OK with the
> proposed
> 
> while(mutex_is_locked(&map->active.in_mutex.owner) ||
>   mutex_is_locked(&map->active.out_mutex.owner))
> cpu_relax();
> 
> ?

I'm not convinced there isn't a race.

In pvcalls_front_recvmsg() sock->sk->sk_send_head is being read and only
then in_mutex is taken. What happens if pvcalls_front_release() resets
sk_send_head and manages to test the mutex before the mutex is locked?

Even in case this is impossible: the whole construct seems to be rather

[Xen-devel] [linux-3.18 test] 116140: tolerable FAIL - PUSHED

2017-11-14 Thread osstest service owner
flight 116140 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116140/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 17 guest-start/debian.repeat fail in 116106 pass 
in 116140
 test-amd64-amd64-xl-qcow219 guest-start/debian.repeat  fail pass in 116106
 test-amd64-amd64-i386-pvgrub 19 guest-start/debian.repeat  fail pass in 116121
 test-armhf-armhf-xl-multivcpu 16 guest-start/debian.repeat fail pass in 116121

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2 17 guest-start/debian.repeat fail in 116106 like 
115495
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 115495
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 115495
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 115495
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 115495
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115495
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 115495
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 115495
 test-armhf-armhf-xl-vhd  15 guest-start/debian.repeatfail  like 115495
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux943dc0b3ef9f0168494d6dca305cd0cf53a0b3d4
baseline version:
 linux4f823316dac3de3463dfbea2be3812102a76e246

Last test of basis   115495  2017-11-02 19:35:18 Z   11 days
Testing same since   115673  2017-11-08 09:43:38 Z5 days9 attempts


People who touched revisions under test:
  Alexander Boyko 
  Andrew Morton 
  Andy Shevchenko 
  Arnd Bergmann 
  Ashish Samant 
  

Re: [Xen-devel] [PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops

2017-11-14 Thread Wanpeng Li
2017-11-14 16:15 GMT+08:00 Quan Xu :
>
>
> On 2017/11/14 15:12, Wanpeng Li wrote:
>>
>> 2017-11-14 15:02 GMT+08:00 Quan Xu :
>>>
>>>
>>> On 2017/11/13 18:53, Juergen Gross wrote:

 On 13/11/17 11:06, Quan Xu wrote:
>
> From: Quan Xu 
>
> So far, pv_idle_ops.poll is the only op in pv_idle. .poll is called
> in the idle path and will poll for a while before we enter the real
> idle state.
>
> In virtualization, the idle path includes several heavy operations,
> including timer access (LAPIC timer or TSC deadline timer), which
> hurt performance, especially for latency-intensive workloads like
> message-passing tasks. The cost is mainly from the vmexit, which is
> a hardware context switch between the virtual machine and the
> hypervisor. Our solution is to poll for a while and not enter the
> real idle path if we get a schedule event during polling.
>
> Polling may waste CPU, so we adopt a smart polling mechanism to
> reduce useless polling.
>
> Signed-off-by: Yang Zhang 
> Signed-off-by: Quan Xu 
> Cc: Juergen Gross 
> Cc: Alok Kataria 
> Cc: Rusty Russell 
> Cc: Thomas Gleixner 
> Cc: Ingo Molnar 
> Cc: "H. Peter Anvin" 
> Cc: x...@kernel.org
> Cc: virtualizat...@lists.linux-foundation.org
> Cc: linux-ker...@vger.kernel.org
> Cc: xen-de...@lists.xenproject.org

 Hmm, is the idle entry path really so critical to performance that a new
 pvops function is necessary?
>>>
>>> Juergen, here is the data we got when running the netperf benchmark:
>>>   1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
>>>  29031.6 bit/s -- 76.1 %CPU
>>>
>>>   2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
>>>  35787.7 bit/s -- 129.4 %CPU
>>>
>>>   3. w/ kvm dynamic poll:
>>>  35735.6 bit/s -- 200.0 %CPU
>>
>> Actually, we can reduce the CPU utilization by sleeping for a period
>> of time, as is already done in the poll logic of the I/O subsystem;
>> then we can improve the algorithm in kvm instead of introducing
>> another duplicate one in the kvm guest.
>
> We really appreciate upstream's kvm dynamic poll mechanism, which is
> really helpful for a lot of scenarios.
>
> However, as the description says, in virtualization the idle path
> includes several heavy operations, including timer access (LAPIC timer
> or TSC deadline timer), which hurt performance, especially for
> latency-intensive workloads such as message-passing tasks. The cost
> comes mainly from the vmexit, which is a hardware context switch
> between the virtual machine and the hypervisor.
>
> For upstream's kvm dynamic poll mechanism, even if you could provide a
> better algorithm, how could you bypass the timer access (LAPIC timer or
> TSC deadline timer), or the hardware context switch between the virtual
> machine and the hypervisor? I know this is a tradeoff.
>
> Furthermore, here is the data we got when running the contextswitch
> benchmark to measure latency (lower is better):
>
> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
>   3402.9 ns/ctxsw -- 199.8 %CPU
>
> 2. w/ patch and disable kvm dynamic poll:
>   1163.5 ns/ctxsw -- 205.5 %CPU
>
> 3. w/ kvm dynamic poll:
>   2280.6 ns/ctxsw -- 199.5 %CPU
>
> So these two solutions are quite similar, but they are not duplicates.
>
> That is also why we add a generic idle poll before entering the real
> idle path: when a reschedule event is pending, we can bypass the real
> idle path.
>
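Concretely, the bypass described in the quoted text amounts to something like the
following in the guest's idle entry path. This is a minimal kernel-style sketch, not
the actual patch: pv_idle_poll(), poll_window_cycles and the use of default_idle()
as the "real idle" routine are assumptions for illustration, and the usual
<asm/msr.h>/<linux/sched.h> helpers (rdtsc(), need_resched(), cpu_relax()) are
assumed to be available.

    /* Illustrative sketch only -- names and numbers are not from the patch. */
    static u64 poll_window_cycles = 20000;     /* tuned by the "smart" logic */

    static bool pv_idle_poll(void)
    {
            u64 deadline = rdtsc() + poll_window_cycles;

            while (rdtsc() < deadline) {
                    if (need_resched())
                            return true;   /* schedule event arrived while polling */
                    cpu_relax();
            }
            return false;
    }

    static void guest_idle_enter(void)
    {
            /* Skip the expensive real-idle path (timer reprogramming,
             * HLT vmexit, ...) whenever polling already found work to do. */
            if (pv_idle_poll())
                    return;                /* back to the scheduler */

            default_idle();                /* real idle; may vmexit on HLT */
    }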

There is similar logic in the idle governor/driver, so how does this
patchset influence the decisions of the idle governor/driver when
running on bare metal? (Power management is not exposed to the guest, so
we will not enter the idle driver in the guest.)

Regards,
Wanpeng Li

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v3 for-4.10 2/2] x86/mm: fix a potential race condition in modify_xen_mappings().

2017-11-14 Thread Jan Beulich
>>> On 14.11.17 at 07:53,  wrote:
> In modify_xen_mappings(), an L1/L2 page table shall be freed
> if all entries of this page table are empty, and the corresponding
> L2/L3 PTE needs to be cleared in that scenario.
>
> However, concurrent paging structure modifications on different
> CPUs may cause the L2/L3 PTEs to already have been cleared or set
> to reference a superpage.
>
> Therefore the logic to enumerate the L1/L2 page table and to
> reset the corresponding L2/L3 PTE needs to be protected with a
> spinlock, and the _PAGE_PRESENT and _PAGE_PSE flags need to be
> checked after the lock is obtained.
> 
> Signed-off-by: Yu Zhang 

Reviewed-by: Jan Beulich 
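Abstracting away the Xen specifics, the fix follows a check-lock-recheck shape. The
sketch below uses simplified stand-in types and names (pte_t, pt_lock, PTE_PRESENT,
PTE_PSE, try_free_subtable) purely to show the idiom; it is not the actual Xen code.

    #include <stdbool.h>
    #include <pthread.h>

    #define PTE_PRESENT 0x1u
    #define PTE_PSE     0x80u

    typedef struct { unsigned long raw; } pte_t;

    static pthread_mutex_t pt_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Clear the higher-level entry and free its lower-level table, but only
     * if no other CPU has cleared the entry or replaced it with a superpage
     * mapping in the meantime. */
    static bool try_free_subtable(pte_t *slot)
    {
        pthread_mutex_lock(&pt_lock);

        pte_t e = *slot;                     /* re-read under the lock */
        if (!(e.raw & PTE_PRESENT) || (e.raw & PTE_PSE)) {
            /* Entry already cleared or turned into a superpage: bail out. */
            pthread_mutex_unlock(&pt_lock);
            return false;
        }

        /* ...enumerate the lower-level table here, still under the lock... */
        slot->raw = 0;                       /* clear the higher-level entry */
        pthread_mutex_unlock(&pt_lock);

        /* The real code would free the now-unreferenced table here. */
        return true;
    }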



___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v3 for-4.10 1/2] x86/mm: fix potential race conditions in map_pages_to_xen().

2017-11-14 Thread Jan Beulich
>>> On 14.11.17 at 07:53,  wrote:
> From: Min He 
> 
> In map_pages_to_xen(), an L2 page table entry may be reset to point to
> a superpage, and its corresponding L1 page table needs to be freed in
> that scenario, when the L1 page table entries map consecutive page
> frames and carry the same mapping flags.
>
> However, the variable `pl1e` is not protected by the lock before the L1
> page table is enumerated. A race condition may occur if this code path
> is invoked simultaneously on different CPUs.
>
> For example, the `pl1e` value on CPU0 may hold an obsolete value, pointing
> to a page which has just been freed on CPU1. Besides, before this page
> is reused, it will still hold the old PTEs, referencing consecutive
> page frames. Consequently, `free_xen_pagetable(l2e_to_l1e(ol2e))` will
> be triggered on CPU0, resulting in the unexpected freeing of a normal page.
>
> This patch fixes the above problem by protecting `pl1e` with the lock.
>
> Also, there are other potential race conditions. For instance, the L2/L3
> entry may be modified concurrently on different CPUs, by routines such as
> map_pages_to_xen(), modify_xen_mappings() etc. To fix this, the patch
> checks the _PAGE_PRESENT and _PAGE_PSE flags, after the spinlock is
> obtained, for the corresponding L2/L3 entry.
> 
> Signed-off-by: Min He 
> Signed-off-by: Yi Zhang 
> Signed-off-by: Yu Zhang 

Reviewed-by: Jan Beulich 
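The race here is about capturing a pointer to the L1 table before the lock is held;
by the time the lock is taken, the table may already have been freed and reused by
another CPU. Reusing the stand-in pte_t/pt_lock definitions from the sketch under
the previous patch, the racy and fixed shapes look roughly like this (illustrative
only, not the actual Xen source; entry_to_table() stands in for l2e_to_l1e()):

    /* Stand-in for l2e_to_l1e(): turn an L2 entry into its L1 table pointer. */
    static pte_t *entry_to_table(pte_t e)
    {
        return (pte_t *)(e.raw & ~0xfffUL);
    }

    /* Racy shape (before the fix): pl1e is derived before the lock is taken,
     * so it can point at a table another CPU has just freed and reused. */
    static void racy_walk(pte_t *pl2e)
    {
        pte_t *pl1e = entry_to_table(*pl2e);    /* BAD: read outside the lock */

        pthread_mutex_lock(&pt_lock);
        /* ...walking pl1e here may touch a stale, reused page... */
        (void)pl1e;
        pthread_mutex_unlock(&pt_lock);
    }

    /* Fixed shape: take the lock first, re-check the present/superpage bits
     * on the L2 entry, and only then derive and use the L1 table pointer. */
    static void safe_walk(pte_t *pl2e)
    {
        pthread_mutex_lock(&pt_lock);
        if ((pl2e->raw & PTE_PRESENT) && !(pl2e->raw & PTE_PSE)) {
            pte_t *pl1e = entry_to_table(*pl2e);
            /* ...enumerate or free the L1 table here, still under the lock... */
            (void)pl1e;
        }
        pthread_mutex_unlock(&pt_lock);
    }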



___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops

2017-11-14 Thread Quan Xu



On 2017/11/14 15:12, Wanpeng Li wrote:

2017-11-14 15:02 GMT+08:00 Quan Xu :


On 2017/11/13 18:53, Juergen Gross wrote:

On 13/11/17 11:06, Quan Xu wrote:

From: Quan Xu 

So far, pv_idle_ops.poll is the only op for pv_idle. .poll is called
in the idle path and polls for a while before we enter the real idle
state.

In virtualization, the idle path includes several heavy operations,
including timer access (LAPIC timer or TSC deadline timer), which
hurt performance, especially for latency-intensive workloads such as
message-passing tasks. The cost comes mainly from the vmexit, which is
a hardware context switch between the virtual machine and the
hypervisor. Our solution is to poll for a while and not enter the real
idle path if we receive a schedule event during polling.

Polling may waste CPU, so we adopt a smart polling mechanism to
reduce useless polling.

Signed-off-by: Yang Zhang 
Signed-off-by: Quan Xu 
Cc: Juergen Gross 
Cc: Alok Kataria 
Cc: Rusty Russell 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: "H. Peter Anvin" 
Cc: x...@kernel.org
Cc: virtualizat...@lists.linux-foundation.org
Cc: linux-ker...@vger.kernel.org
Cc: xen-de...@lists.xenproject.org
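For readers not following the series, the shape of the new pvops under discussion is
roughly the following; the field layout, signature and names here are guessed from
the description above, not taken from the actual patch:

    /* Rough sketch of the proposed hook (illustrative, not the real patch);
     * assumes the usual kernel bool type from <linux/types.h>. */
    struct pv_idle_ops {
            /*
             * Poll briefly before the real idle path; return true if a
             * schedule event arrived so the caller can skip real idle.
             */
            bool (*poll)(void);
    };

    /* Native/bare-metal default: never poll, always take the normal path. */
    static bool native_idle_poll(void)
    {
            return false;
    }

    struct pv_idle_ops pv_idle_ops = {
            .poll = native_idle_poll,
    };

    /* A KVM or Xen guest would then install its polling implementation at
     * boot, e.g. pv_idle_ops.poll = kvm_idle_poll; (name is hypothetical). */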

Hmm, is the idle entry path really so critical to performance that a new
pvops function is necessary?

Juergen, here is the data we got when running the netperf benchmark:
  1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
 29031.6 bit/s -- 76.1 %CPU

  2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
 35787.7 bit/s -- 129.4 %CPU

  3. w/ kvm dynamic poll:
 35735.6 bit/s -- 200.0 %CPU

Actually, we can reduce the CPU utilization by sleeping for a period
of time, as is already done in the poll logic of the I/O subsystem;
then we can improve the algorithm in kvm instead of introducing
another duplicate one in the kvm guest.
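The KVM-side mechanism referred to here (halt polling, tuned via halt_poll_ns)
already adapts its poll window rather than using a fixed one: roughly, the window
grows when the wakeup arrived shortly after polling gave up, and shrinks when the
vCPU ended up blocked for a long time anyway. A simplified sketch of that style of
adjustment follows; the names, factors and limits are illustrative assumptions, not
KVM's actual code.

    #include <stdbool.h>

    /* Illustrative adaptive poll window, in the spirit of KVM halt polling. */
    static unsigned long poll_ns;
    static const unsigned long poll_ns_max = 200000;   /* 200 us cap (assumed) */

    static void adjust_poll_window(bool poll_found_work, unsigned long blocked_ns)
    {
        if (poll_found_work)
            return;                          /* current window was good enough */

        if (blocked_ns < poll_ns_max) {
            /* The wakeup arrived shortly after we gave up polling, so a
             * slightly longer poll would have avoided the halt: grow. */
            poll_ns = poll_ns ? poll_ns * 2 : 10000;
            if (poll_ns > poll_ns_max)
                poll_ns = poll_ns_max;
        } else {
            /* We ended up blocked for a long time anyway: polling longer
             * would only have burned CPU, so shrink the window. */
            poll_ns /= 2;
        }
    }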

We really appreciate upstream's kvm dynamic poll mechanism, which is
really helpful for a lot of scenarios.

However, as the description says, in virtualization the idle path
includes several heavy operations, including timer access (LAPIC timer
or TSC deadline timer), which hurt performance, especially for
latency-intensive workloads such as message-passing tasks. The cost
comes mainly from the vmexit, which is a hardware context switch
between the virtual machine and the hypervisor.

For upstream's kvm dynamic poll mechanism, even if you could provide a
better algorithm, how could you bypass the timer access (LAPIC timer or
TSC deadline timer), or the hardware context switch between the virtual
machine and the hypervisor? I know this is a tradeoff.

Furthermore, here is the data we got when running the contextswitch
benchmark to measure latency (lower is better):

1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
  3402.9 ns/ctxsw -- 199.8 %CPU

2. w/ patch and disable kvm dynamic poll:
  1163.5 ns/ctxsw -- 205.5 %CPU

3. w/ kvm dynamic poll:
  2280.6 ns/ctxsw -- 199.5 %CPU

So these two solutions are quite similar, but they are not duplicates.

That is also why we add a generic idle poll before entering the real
idle path: when a reschedule event is pending, we can bypass the real
idle path.


Quan
Alibaba Cloud





Regards,
Wanpeng Li


  4. w/patch and w/ kvm dynamic poll:
 42225.3 bit/s -- 198.7 %CPU

  5. idle=poll
 37081.7 bit/s -- 998.1 %CPU



  With this patch, we improve performance by 23%; we could even improve
  performance by 45.4% if we use the patch together with kvm dynamic poll.
  Also, the CPU cost is much lower than in the 'idle=poll' case.


Wouldn't a function pointer, maybe guarded
by a static key, be enough? A further advantage would be that this would
work on other architectures, too.
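The alternative sketched here would look something like the following: a plain
function pointer that is only consulted when a static key has been enabled by the
hypervisor-specific setup code, so bare metal pays only a patched-out branch. This
is an illustration of the suggestion, not code from the series; the identifier names
are made up, while the jump-label API (<linux/jump_label.h>) is the standard kernel
one.

    #include <linux/jump_label.h>

    /* Illustrative alternative: no new pvops slot, just a guarded pointer. */
    static DEFINE_STATIC_KEY_FALSE(idle_poll_enabled);
    static bool (*idle_poll_fn)(void);

    /* Called from the generic idle path; on bare metal the static key is
     * never enabled, so this reduces to a patched-out branch. */
    static inline bool idle_poll(void)
    {
            if (static_branch_unlikely(&idle_poll_enabled) && idle_poll_fn)
                    return idle_poll_fn();
            return false;
    }

    /* Guest-side setup (e.g. KVM init code) would register its hook once: */
    static void register_idle_poll(bool (*fn)(void))
    {
            idle_poll_fn = fn;
            static_branch_enable(&idle_poll_enabled);
    }

Since nothing in this shape is x86-specific, other architectures could reuse the
same hook, which is the portability point made above.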


I assume this feature will be ported to other architectures. A new pvops
makes the code clean and easy to maintain. Also, I tried to add it into
an existing pvops, but it does not fit.



Quan
Alibaba Cloud


Juergen




___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel