[Xen-devel] [linux-4.9 test] 121711: trouble: broken/fail/pass
flight 121711 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/121711/

Failures and problems with tests :-(

Tests which did not succeed and are blocking, including tests which could not be run:
 test-amd64-amd64-xl-qemut-win7-amd64 broken

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-win7-amd64 4 host-install(4) broken pass in 121522
 test-amd64-amd64-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail in 121522 pass in 121711
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 14 guest-localmigrate fail pass in 121522

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail in 121522 like 121371
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 121371
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 121371
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 121371
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail like 121371
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start fail never pass
 test-amd64-i386-xl-pvshim 12 guest-start fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 14 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl 13 migrate-support-check fail never pass
 test-arm64-arm64-xl 14 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 14 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-xsm 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux f080bba272b1e3f9bbf0b6c
[Xen-devel] [PATCH v5 0/4] x86/PVHv2: Add memory map pointer to hvm_start_info struct
Here is the patch series for updating the canonical definition of the hvm_start_info struct, corresponding to the discussion happening on the linux-kernel and kvm mailing lists regarding Qemu/KVM use of the PVH entry point:

   KVM: x86: Allow Qemu/KVM to use PVH entry point
   https://lkml.org/lkml/2018/2/28/1121

Patch 1 contains all the changes to the hvm_start_info struct, and patches 2-4 modify Xen to use the new memory map fields of the structure.

Changes since v4:
 * Patch 1:
   - Addressed a couple of nits in the comments
 * Patches 2-4:
   - Rebase to upstream
   - Simplify interfaces
   - Avoid unnecessary dom->e820 allocation
   - Fix start_page size calculation (and make it applicable to both HVM and PVH)

Changes since v3:
 * Cleaned up hard tabs in start_info.h (patch 1)
 * Removed comment about "For PV guests only 0 allowed, for PVH 0 or 1 allowed" from start_info.h (patch 1)
 * Make the map available to both HVM and PVH guests (patches 2-4)
 * Re-organize libxl changes (patches 2-4)

Changes since v2:
 * Better definition of the memory map types, including addition of new symbols and tightening up the comments as suggested.
 * Added a couple of BUILD_BUG_ON() statements to the C code in patch #4 to document and verify the relationship between these memory types and e820 types.

Changes since v1:
 * Made updates to code comments as suggested by Jan and Roger, including better definition of the memory map type field.
 * Boris provided additional patches to populate the new fields in the hvm_start_info struct as Jan (and later Roger also) had requested.
Boris Ostrovsky (3):
  libxl/x86: Build e820 map earlier for HVM/PVH guests
  libxl: Store e820 map in xc_dom_image
  libxc: Pass e820 map to HVM/PVH guests via hvm_start_info

Maran Wilson (1):
  x86/PVHv2: Add memory map pointer to hvm_start_info struct

 tools/libxc/include/xc_dom.h                 |  7 +++-
 tools/libxc/xc_dom_x86.c                     | 29 +
 tools/libxl/libxl_arch.h                     | 10 +
 tools/libxl/libxl_arm.c                      | 11 +
 tools/libxl/libxl_create.c                   |  2 +-
 tools/libxl/libxl_dom.c                      | 18
 tools/libxl/libxl_internal.h                 |  2 +-
 tools/libxl/libxl_x86.c                      | 52 ++-
 xen/include/public/arch-x86/hvm/start_info.h | 63 +++-
 9 files changed, 143 insertions(+), 51 deletions(-)

--
1.8.3.1

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
[Xen-devel] [PATCH v5 3/4] libxl: Store e820 map in xc_dom_image
From: Boris Ostrovsky

We will later copy it to hvm_start_info.

(Also remove stale comment claiming that xc_dom_image.start_info_seg is only used for HVMlite guests)

Signed-off-by: Boris Ostrovsky
---
Cc: Ian Jackson
Cc: Wei Liu
Cc: Roger Pau Monné
Cc: Boris Ostrovsky
Cc: Maran Wilson
---
Changes in v5:
 * No need to allocate/copy to dom->e820, we can just point to the already allocated e820.
---
 tools/libxc/include/xc_dom.h | 7 ++-
 tools/libxl/libxl_x86.c      | 3 +++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/tools/libxc/include/xc_dom.h b/tools/libxc/include/xc_dom.h
index 491cad8..8a66889 100644
--- a/tools/libxc/include/xc_dom.h
+++ b/tools/libxc/include/xc_dom.h
@@ -99,7 +99,7 @@ struct xc_dom_image {
     struct xc_dom_seg p2m_seg;
     struct xc_dom_seg pgtables_seg;
     struct xc_dom_seg devicetree_seg;
-    struct xc_dom_seg start_info_seg; /* HVMlite only */
+    struct xc_dom_seg start_info_seg;
     xen_pfn_t start_info_pfn;
     xen_pfn_t console_pfn;
     xen_pfn_t xenstore_pfn;
@@ -224,6 +224,11 @@ struct xc_dom_image {
     /* Extra SMBIOS structures passed to HVMLOADER */
     struct xc_hvm_firmware_module smbios_module;

+#if defined(__i386__) || defined(__x86_64__)
+    struct e820entry *e820;
+    unsigned int e820_entries;
+#endif
+
     xen_pfn_t vuart_gfn;
 };

diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index a7c9704..78affdd 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -578,6 +578,9 @@ static int domain_construct_memmap(libxl__gc *gc,
         goto out;
     }

+    dom->e820 = e820;
+    dom->e820_entries = e820_entries;
+
 out:
     return rc;
 }
--
1.8.3.1
[Xen-devel] [PATCH v5 4/4] libxc: Pass e820 map to HVM/PVH guests via hvm_start_info
From: Boris Ostrovsky

Signed-off-by: Boris Ostrovsky
Signed-off-by: Maran Wilson
---
Cc: Ian Jackson
Cc: Wei Liu
Cc: Roger Pau Monné
Cc: Boris Ostrovsky
Cc: Maran Wilson
---
Changes in v5:
 * Fix calculation of start_info_size (and move it from under "if ( !dom->device_model )")
 * Rebase
---
 tools/libxc/xc_dom_x86.c | 29 +
 1 file changed, 29 insertions(+)

diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
index 8784d1a..e33a288 100644
--- a/tools/libxc/xc_dom_x86.c
+++ b/tools/libxc/xc_dom_x86.c
@@ -35,6 +35,8 @@
 #include
 #include

+#include
+
 #include "xg_private.h"
 #include "xc_dom.h"
 #include "xenctrl.h"
@@ -633,6 +635,9 @@ static int alloc_magic_pages_hvm(struct xc_dom_image *dom)
     start_info_size += HVMLOADER_MODULE_CMDLINE_SIZE * HVMLOADER_MODULE_MAX_COUNT;

+    start_info_size +=
+        dom->e820_entries * sizeof(struct hvm_memmap_table_entry);
+
     if ( !dom->device_model )
     {
         if ( dom->cmdline )
@@ -1665,7 +1670,9 @@ static int bootlate_hvm(struct xc_dom_image *dom)
     uint32_t domid = dom->guest_domid;
     xc_interface *xch = dom->xch;
     struct hvm_start_info *start_info;
+    size_t modsize;
     struct hvm_modlist_entry *modlist;
+    struct hvm_memmap_table_entry *memmap;
     unsigned int i;

     start_info = xc_map_foreign_range(xch, domid, dom->start_info_seg.pages <<
@@ -1720,7 +1727,29 @@ static int bootlate_hvm(struct xc_dom_image *dom)
             ((uintptr_t)modlist - (uintptr_t)start_info);
     }

+    /*
+     * Check a couple of XEN_HVM_MEMMAP_TYPEs to verify consistency with
+     * their corresponding e820 numerical values.
+     */
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_RAM != E820_RAM);
+    BUILD_BUG_ON(XEN_HVM_MEMMAP_TYPE_ACPI != E820_ACPI);
+
+    modsize = HVMLOADER_MODULE_MAX_COUNT *
+        (sizeof(*modlist) + HVMLOADER_MODULE_CMDLINE_SIZE);
+    memmap = (void *)modlist + modsize;
+
+    start_info->memmap_paddr = (dom->start_info_seg.pfn << PAGE_SHIFT) +
+        ((uintptr_t)modlist - (uintptr_t)start_info) + modsize;
+    start_info->memmap_entries = dom->e820_entries;
+    for ( i = 0; i < dom->e820_entries; i++ )
+    {
+        memmap[i].addr = dom->e820[i].addr;
+        memmap[i].size = dom->e820[i].size;
+        memmap[i].type = dom->e820[i].type;
+    }
+
     start_info->magic = XEN_HVM_START_MAGIC_VALUE;
+    start_info->version = 1;

     munmap(start_info, dom->start_info_seg.pages << XC_DOM_PAGE_SHIFT(dom));
--
1.8.3.1
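[Editor's note] The placement arithmetic in the hunk above can be sketched standalone. This is a hedged illustration, not the libxc code: the constants (the PAGE_SHIFT value, module count, cmdline size) and the `ex_` names are made-up stand-ins for the real HVMLOADER_* values; only the formula mirrors the patch — the memmap array starts immediately after the fixed-size module-list area, and memmap_paddr is the start_info segment base plus the modlist's offset plus that area's size.

```c
#include <stdint.h>

/* Illustrative stand-ins -- NOT the real HVMLOADER_* constants. */
#define EX_PAGE_SHIFT           12
#define EX_MODULE_MAX_COUNT     2
#define EX_MODULE_CMDLINE_SIZE  1024

/* Entry shapes mirroring the canonical layout: four uint64s for a module
 * entry; 8+8+4+4 bytes for a memmap entry. */
struct ex_modlist_entry { uint64_t paddr, size, cmdline_paddr, reserved; };
struct ex_memmap_entry  { uint64_t addr, size; uint32_t type, reserved; };

/*
 * Guest-physical address of the memmap table: segment base (pfn shifted to
 * a byte address) + offset of the modlist within the start_info area +
 * size of the whole module area, exactly as in bootlate_hvm() above.
 */
uint64_t ex_memmap_paddr(uint64_t start_info_pfn, uint64_t modlist_offset)
{
    uint64_t modsize = EX_MODULE_MAX_COUNT *
        (sizeof(struct ex_modlist_entry) + EX_MODULE_CMDLINE_SIZE);

    return (start_info_pfn << EX_PAGE_SHIFT) + modlist_offset + modsize;
}
```

With these example constants, a segment at pfn 0x1000 and a modlist 0x80 bytes into the area puts the memmap 0x840 bytes further on, at 0x10008c0.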
[Xen-devel] [PATCH v5 2/4] libxl/x86: Build e820 map earlier for HVM/PVH guests
From: Boris Ostrovsky

Since hvm_start_info has now been expanded to include the memory map (i.e. e820), we need to know the size of this map by the time we create dom->start_info_seg in alloc_magic_pages_hvm(). To do so we have to call libxl__arch_domain_construct_memmap() earlier, before xc_dom_build_image(). And since libxl__arch_domain_construct_memmap() is only used for x86, we can make this call from x86's libxl__arch_domain_finalise_hw_description(), at the same time removing its NOP definition from ARM code and renaming it and making it static in libxl_x86.c.

Signed-off-by: Boris Ostrovsky
---
Cc: Ian Jackson
Cc: Wei Liu
Cc: Roger Pau Monné
Cc: Boris Ostrovsky
Cc: Maran Wilson
---
Changes in v5:
 * Adjusted call interfaces to take into account the fact that libxl_domain_build_info is pointed to from libxl_domain_config.
---
 tools/libxl/libxl_arch.h     | 10 ++---
 tools/libxl/libxl_arm.c      | 11 ++
 tools/libxl/libxl_create.c   |  2 +-
 tools/libxl/libxl_dom.c      | 18 +++-
 tools/libxl/libxl_internal.h |  2 +-
 tools/libxl/libxl_x86.c      | 49 +++-
 6 files changed, 43 insertions(+), 49 deletions(-)

diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 784ec7f..e3b6f5f 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -41,7 +41,8 @@ int libxl__arch_domain_init_hw_description(libxl__gc *gc,
 /* finalize arch specific hardware description. */
 _hidden
 int libxl__arch_domain_finalise_hw_description(libxl__gc *gc,
-                                               libxl_domain_build_info *info,
+                                               uint32_t domid,
+                                               libxl_domain_config *d_config,
                                                struct xc_dom_image *dom);

 /* perform any pending hardware initialization */
@@ -62,13 +63,6 @@ int libxl__arch_vnuma_build_vmemrange(libxl__gc *gc,
 _hidden
 int libxl__arch_domain_map_irq(libxl__gc *gc, uint32_t domid, int irq);

-/* arch specific to construct memory mapping function */
-_hidden
-int libxl__arch_domain_construct_memmap(libxl__gc *gc,
-                                        libxl_domain_config *d_config,
-                                        uint32_t domid,
-                                        struct xc_dom_image *dom);
-
 _hidden
 void libxl__arch_domain_build_info_acpi_setdefault(
     libxl_domain_build_info *b_info);

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 906fd0d..fbe8786 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -1039,7 +1039,8 @@ static void finalise_one_node(libxl__gc *gc, void *fdt, const char *uname,
 }

 int libxl__arch_domain_finalise_hw_description(libxl__gc *gc,
-                                               libxl_domain_build_info *info,
+                                               uint32_t domid,
+                                               libxl_domain_config *d_config,
                                                struct xc_dom_image *dom)
 {
     void *fdt = dom->devicetree_blob;
@@ -1133,14 +1134,6 @@ int libxl__arch_domain_map_irq(libxl__gc *gc, uint32_t domid, int irq)
     return xc_domain_bind_pt_spi_irq(CTX->xch, domid, irq, irq);
 }

-int libxl__arch_domain_construct_memmap(libxl__gc *gc,
-                                        libxl_domain_config *d_config,
-                                        uint32_t domid,
-                                        struct xc_dom_image *dom)
-{
-    return 0;
-}
-
 void libxl__arch_domain_build_info_acpi_setdefault(
     libxl_domain_build_info *b_info)
 {

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index c43f391..2b5c7ee 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -488,7 +488,7 @@ int libxl__domain_build(libxl__gc *gc,
         break;
     case LIBXL_DOMAIN_TYPE_PV:
-        ret = libxl__build_pv(gc, domid, info, state);
+        ret = libxl__build_pv(gc, domid, d_config, state);
         if (ret)
             goto out;

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 2e29b52..8c3607b 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -698,9 +698,10 @@ static int set_vnuma_info(libxl__gc *gc, uint32_t domid,
 }

 static int libxl__build_dom(libxl__gc *gc, uint32_t domid,
-            libxl_domain_build_info *info, libxl__domain_build_state *state,
+            libxl_domain_config *d_config, libxl__domain_build_state *state,
             struct xc_dom_image *dom)
 {
+    libxl_domain_build_info *const info = &d_config->b_info;
     uint64_t mem_kb;
     int ret;

@@ -733,7 +734,7 @@ static int libxl__build_dom(libxl__gc *gc, uint32_t domid,
         LOGE(ERROR, "xc_dom_boot_mem_init failed");
         goto out;
     }
-    if ( (ret = libxl__arch_domain_finalise_hw_description(gc, info, dom)) != 0 ) {
+    if ( (ret =
[Xen-devel] [PATCH v5 1/4] x86/PVHv2: Add memory map pointer to hvm_start_info struct
The start info structure that is defined as part of the x86/HVM direct boot ABI and used for starting Xen PVH guests would be more versatile if it also included a way to pass information about the memory map to the guest. This would allow KVM guests to share the same entry point.

Signed-off-by: Maran Wilson
Reviewed-by: Roger Pau Monné
Acked-by: Jan Beulich
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Boris Ostrovsky
Cc: Roger Pau Monné
---
 xen/include/public/arch-x86/hvm/start_info.h | 63 +++-
 1 file changed, 62 insertions(+), 1 deletion(-)

diff --git a/xen/include/public/arch-x86/hvm/start_info.h b/xen/include/public/arch-x86/hvm/start_info.h
index 6484159..50af9ea 100644
--- a/xen/include/public/arch-x86/hvm/start_info.h
+++ b/xen/include/public/arch-x86/hvm/start_info.h
@@ -33,7 +33,7 @@
  *    | magic          | Contains the magic value XEN_HVM_START_MAGIC_VALUE
  *    |                | ("xEn3" with the 0x80 bit of the "E" set).
  *  4 +----------------+
- *    | version        | Version of this structure. Current version is 0. New
+ *    | version        | Version of this structure. Current version is 1. New
  *    |                | versions are guaranteed to be backwards-compatible.
  *  8 +----------------+
  *    | flags          | SIF_xxx flags.
@@ -48,6 +48,15 @@
  * 32 +----------------+
  *    | rsdp_paddr     | Physical address of the RSDP ACPI data structure.
  * 40 +----------------+
+ *    | memmap_paddr   | Physical address of the (optional) memory map. Only
+ *    |                | present in version 1 and newer of the structure.
+ * 48 +----------------+
+ *    | memmap_entries | Number of entries in the memory map table. Zero
+ *    |                | if there is no memory map being provided. Only
+ *    |                | present in version 1 and newer of the structure.
+ * 52 +----------------+
+ *    | reserved       | Version 1 and newer only.
+ * 56 +----------------+
  *
  * The layout of each entry in the module structure is the following:
  *
@@ -62,14 +71,52 @@
  *    | reserved       |
  * 32 +----------------+
  *
+ * The layout of each entry in the memory map table is as follows:
+ *
+ *  0 +----------------+
+ *    | addr           | Base address
+ *  8 +----------------+
+ *    | size           | Size of mapping in bytes
+ * 16 +----------------+
+ *    | type           | Type of mapping as defined between the hypervisor
+ *    |                | and guest. See XEN_HVM_MEMMAP_TYPE_* values below.
+ * 20 +----------------+
+ *    | reserved       |
+ * 24 +----------------+
+ *
  * The address and sizes are always a 64bit little endian unsigned integer.
  *
  * NB: Xen on x86 will always try to place all the data below the 4GiB
  * boundary.
+ *
+ * Version numbers of the hvm_start_info structure have evolved like this:
+ *
+ * Version 0:  Initial implementation.
+ *
+ * Version 1:  Added the memmap_paddr/memmap_entries fields (plus 4 bytes of
+ *             padding) to the end of the hvm_start_info struct. These new
+ *             fields can be used to pass a memory map to the guest. The
+ *             memory map is optional and so guests that understand version 1
+ *             of the structure must check that memmap_entries is non-zero
+ *             before trying to read the memory map.
  */
 #define XEN_HVM_START_MAGIC_VALUE 0x336ec578

 /*
+ * The values used in the type field of the memory map table entries are
+ * defined below and match the Address Range Types as defined in the "System
+ * Address Map Interfaces" section of the ACPI Specification. Please refer to
+ * section 15 in version 6.2 of the ACPI spec: http://uefi.org/specifications
+ */
+#define XEN_HVM_MEMMAP_TYPE_RAM       1
+#define XEN_HVM_MEMMAP_TYPE_RESERVED  2
+#define XEN_HVM_MEMMAP_TYPE_ACPI      3
+#define XEN_HVM_MEMMAP_TYPE_NVS       4
+#define XEN_HVM_MEMMAP_TYPE_UNUSABLE  5
+#define XEN_HVM_MEMMAP_TYPE_DISABLED  6
+#define XEN_HVM_MEMMAP_TYPE_PMEM      7
+
+/*
 * C representation of the x86/HVM start info layout.
 *
 * The canonical definition of this layout is above, this is just a way to
@@ -86,6 +133,13 @@ struct hvm_start_info {
     uint64_t cmdline_paddr;     /* Physical address of the command line.     */
     uint64_t rsdp_paddr;        /* Physical address of the RSDP ACPI data    */
                                 /* structure.                                */
+    /* All following fields only present in version 1 and newer */
+    uint64_t memmap_paddr;      /* Physical address of an array of           */
+                                /* hvm_memmap_table_entry.                   */
+    uint32_t memmap_entries;    /* Number of entries in the memmap table.    */
+                                /* Value will be zero if there is no memory  */
+                                /* map being provided.                       */
+    uint32_t reserved;          /* Must be zero.                             */
 };

 struct hvm_modlist_entry {
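[Editor's note] To make the versioning rules concrete, here is a hedged guest-side sketch (not part of the patch): a consumer that mirrors the struct layouts above and sums the RAM regions, returning 0 when the map is absent — which, per the comment block, a version-1-aware guest must check via memmap_entries. The helper name `total_ram_bytes` and the flat-pointer access (rather than mapping memmap_paddr) are illustrative assumptions.

```c
#include <stdint.h>

#define XEN_HVM_START_MAGIC_VALUE 0x336ec578
#define XEN_HVM_MEMMAP_TYPE_RAM   1

/* Mirrors the canonical layout documented above (version 1). */
struct hvm_start_info {
    uint32_t magic;             /* offset  0 */
    uint32_t version;           /* offset  4 */
    uint32_t flags;             /* offset  8 */
    uint32_t nr_modules;        /* offset 12 */
    uint64_t modlist_paddr;     /* offset 16 */
    uint64_t cmdline_paddr;     /* offset 24 */
    uint64_t rsdp_paddr;        /* offset 32 */
    /* Version 1 and newer only: */
    uint64_t memmap_paddr;      /* offset 40 */
    uint32_t memmap_entries;    /* offset 48 */
    uint32_t reserved;          /* offset 52 */
};

struct hvm_memmap_table_entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
    uint32_t reserved;
};

/*
 * Sum the bytes of RAM described by the optional memory map. A real guest
 * would translate memmap_paddr itself; here the table is passed in directly
 * as a pointer to keep the sketch self-contained.
 */
uint64_t total_ram_bytes(const struct hvm_start_info *si,
                         const struct hvm_memmap_table_entry *map)
{
    uint64_t total = 0;
    uint32_t i;

    if (si->magic != XEN_HVM_START_MAGIC_VALUE)
        return 0;
    /* The map is optional: check both the version and the entry count. */
    if (si->version < 1 || si->memmap_entries == 0)
        return 0;
    for (i = 0; i < si->memmap_entries; i++)
        if (map[i].type == XEN_HVM_MEMMAP_TYPE_RAM)
            total += map[i].size;
    return total;
}
```

Note that a version-0 producer never populates the fields past rsdp_paddr, so the version check must come before any read of memmap_paddr or memmap_entries.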
Re: [Xen-devel] [PATCH v3 0/5] sndif: add explicit back and front synchronization
ping

On 03/27/2018 08:41 AM, Oleksandr Andrushchenko wrote:

Hi, Konrad! Could you please review?

Thank you,
Oleksandr

On 03/21/2018 09:25 AM, Oleksandr Andrushchenko wrote:

On 03/21/2018 09:20 AM, Takashi Iwai wrote:

On Wed, 21 Mar 2018 08:15:36 +0100, Oleksandr Andrushchenko wrote:

On 03/20/2018 10:22 PM, Takashi Iwai wrote:

On Mon, 19 Mar 2018 08:22:19 +0100, Oleksandr Andrushchenko wrote:

From: Oleksandr Andrushchenko

Hello, all!

In order to provide explicit synchronization between backend and frontend, the following changes are introduced in the protocol:
 - bump protocol version to 2
 - add a new ring buffer for sending asynchronous events from backend to frontend, to report the number of bytes played by the frontend (XENSND_EVT_CUR_POS)
 - introduce trigger events for playback control: start/stop/pause/resume
 - add "req-" prefix to event-channel and ring-ref to unify naming of the Xen event channels for requests and events
 - add XENSND_OP_HW_PARAM_QUERY request to read/update the stream configuration space: the request passes desired intervals/formats for the stream parameters and the response returns the allowed intervals and formats mask that can be used.

Changes since v2:
 1. Konrad's r-b tag for version patch
 2. MAJOR: changed req/resp/evt packet sizes from 32 to 64 octets
 3. Reworked XENSND_OP_HW_PARAM_QUERY so it now sends all parameters at once, allowing the whole configuration space to be checked.
 4. Minor documentation cleanup (added missed "reserved" fields)

Changes since v1:
 1. Changed protocol version definition from string to integer, so it can easily be used in comparisons. Konrad, I have removed your r-b tag for the reason of this change.
 2. In order to provide explicit stream parameter negotiation between backend and frontend, the following changes are introduced in the protocol: add XENSND_OP_HW_PARAM_QUERY request to read/update the configuration space for the given parameter: the request passes the desired parameter interval (mask) and the response returns the min/max interval (mask) for the parameter to be used. Parameters supported by this request/response:
    - format mask
    - sample rate interval
    - number of channels interval
    - buffer size, interval, frames
    - period size, interval, frames

I can't judge exactly about the protocol without the actual FE/BE implementations, but the change looks good to me, especially if you've already tested something.

Thank you, I have tested the changes and need them to start upstreaming the frontend driver used to test the protocol. Do you mind if I put your Acked-by (or do you prefer Reviewed-by?) tag on these patches:
 [PATCH v3 4/5] sndif: Add explicit back and front synchronization
 [PATCH v3 5/5] sndif: Add explicit back and front parameter negotiation

Sure, feel free to take my ack:
 Reviewed-by: Takashi Iwai

Thank you, Takashi.
Please note that the changes are first to be merged into Xen, and then I'll prepare the same, but for the kernel.

If other people have no concern, let's go ahead with FE/BE stuff. Konrad, are you ok with the changes?

thanks,
Takashi

Thank you,
Oleksandr
[Xen-devel] [qemu-mainline test] 121705: regressions - FAIL
flight 121705 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/121705/

Regressions :-(

Tests which did not succeed and are blocking, including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1 fail REGR. vs. 120095
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1 fail REGR. vs. 120095

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail like 120095
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 120095
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 120095
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 120095
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 120095
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail like 120095
 test-amd64-i386-xl-pvshim 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 14 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl 13 migrate-support-check fail never pass
 test-arm64-arm64-xl 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass

version targeted for testing:
 qemuu f184de7553272223d6af731d7d623a7cebf710b5
baseline version:
 qemuu 6697439794f72b3501ee16bb95d16854f9981421

Last test of basis 120095 2018-02-28 13:46:33 Z 33 days
Failing since 120146 2018-03-02 10:10:57 Z 31 days 20 attempts
Testing same since 121644 2018-04-01 10:43:14 Z 1 days 2 attempts

People who touched revisions under test:
 Alberto Garcia
 Alex Bennée
 Alex Bennée
 Alex Williamson
 Alexey Kardashevskiy
 Alistair Francis
 Alistair Francis
 Andrew Jones
 Andrey Smirnov
 Anton Nefedov
 BALATON Zoltan
 Bastian Koppelmann
 Bastian Koppelmann (tricore)
 Bill Paul
 Brijesh Singh
 Bruce Rogers
 Chao Peng
 Christian Borntraeger
 Claudio Imbrenda
 Collin L. Walling
 Core
[Xen-devel] [libvirt test] 121707: tolerable all pass - PUSHED
flight 121707 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/121707/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail like 121380
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 121380
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 121380
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt 14 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass

version targeted for testing:
 libvirt 439c27b1ae35e0daab6e86fc6320ea1682a3aabd
baseline version:
 libvirt c595fc788e410ef27947804d18ca9a33362e3959

Last test of basis 121380 2018-03-30 15:36:24 Z 3 days
Testing same since 121707 2018-04-02 04:20:30 Z 0 days 1 attempts

People who touched revisions under test:
 Daniel Veillard
 Ján Tomko
 Michal Privoznik
 Pino Toscano

jobs:
 build-amd64-xsm pass
 build-arm64-xsm pass
 build-armhf-xsm pass
 build-i386-xsm pass
 build-amd64 pass
 build-arm64 pass
 build-armhf pass
 build-i386 pass
 build-amd64-libvirt pass
 build-arm64-libvirt pass
 build-armhf-libvirt pass
 build-i386-libvirt pass
 build-amd64-pvops pass
 build-arm64-pvops pass
 build-armhf-pvops pass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-libvirt-xsm pass
 test-arm64-arm64-libvirt-xsm pass
 test-armhf-armhf-libvirt-xsm pass
 test-amd64-i386-libvirt-xsm pass
 test-amd64-amd64-libvirt pass
 test-arm64-arm64-libvirt pass
 test-armhf-armhf-libvirt pass
 test-amd64-i386-libvirt pass
 test-amd64-amd64-libvirt-pair pass
 test-amd64-i386-libvirt-pair pass
 test-arm64-arm64-libvirt-qcow2 pass
 test-armhf-armhf-libvirt-raw pass
 test-amd64-amd64-libvirt-vhd pass

sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
 http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
 http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

Pushing r
[Xen-devel] [ovmf test] 121710: all pass - PUSHED
flight 121710 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/121710/

Perfect :-)
All tests in this flight passed as required

version targeted for testing:
 ovmf 5b91bf82c67b586b9588cbe4bbffa1588f6b5926
baseline version:
 ovmf 9c7d0d499296e444e39e9b6b34d8c121a325b295

Last test of basis 121669 2018-04-01 17:26:25 Z 1 days
Testing same since 121710 2018-04-02 06:30:22 Z 0 days 1 attempts

People who touched revisions under test:
 Heyi Guo
 Renhao Liang
 Yi Li

jobs:
 build-amd64-xsm pass
 build-i386-xsm pass
 build-amd64 pass
 build-i386 pass
 build-amd64-libvirt pass
 build-i386-libvirt pass
 build-amd64-pvops pass
 build-i386-pvops pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64 pass

sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
 http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
 http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

Pushing revision :
To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
 9c7d0d4992..5b91bf82c6 5b91bf82c67b586b9588cbe4bbffa1588f6b5926 -> xen-tested-master
[Xen-devel] [xen-4.9-testing test] 121704: FAIL
flight 121704 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/121704/

Failures and problems with tests :-(

Tests which did not succeed and are blocking, including tests which could not be run:
 test-xtf-amd64-amd64-3                            broken in 121358

Tests which are failing intermittently (not blocking):
 test-xtf-amd64-amd64-3          4 host-install(4)           broken in 121358 pass in 121704
 test-armhf-armhf-libvirt        6 xen-install               fail in 121358 pass in 121704
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore.2 fail in 121358 pass in 121704
 test-amd64-amd64-xl-qemuu-ovmf-amd64 16 guest-localmigrate/x10 fail in 121460 pass in 121704
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail in 121460 pass in 121704
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail in 121460 pass in 121704
 test-amd64-i386-xl-qemuu-ws16-amd64 16 guest-localmigrate/x10 fail in 121460 pass in 121704
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail pass in 121331
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail pass in 121358
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 16 guest-localmigrate/x10 fail pass in 121460
 test-armhf-armhf-xl-arndale     6 xen-install               fail pass in 121460

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop            fail in 121331 REGR. vs. 121015

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds       16 guest-start/debian.repeat fail blocked in 121015
 test-armhf-armhf-xl-rtds       12 guest-start               fail in 121331 like 121015
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop           fail in 121331 like 121015
 test-amd64-amd64-xl-qemut-ws16-amd64 18 guest-start/win.repeat fail in 121460 blocked in 121015
 test-armhf-armhf-xl-arndale    13 migrate-support-check     fail in 121460 never pass
 test-armhf-armhf-xl-arndale    14 saverestore-support-check fail in 121460 never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 16 guest-localmigrate/x10 fail like 121015
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop            fail like 121015
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop           fail like 121015
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop           fail like 121015
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop            fail like 121015
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop            fail like 121015
 test-amd64-amd64-libvirt       13 migrate-support-check     fail never pass
 test-amd64-i386-libvirt        13 migrate-support-check     fail never pass
 test-amd64-amd64-libvirt-xsm   13 migrate-support-check     fail never pass
 test-amd64-i386-libvirt-xsm    13 migrate-support-check     fail never pass
 test-arm64-arm64-xl            13 migrate-support-check     fail never pass
 test-arm64-arm64-xl            14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm        13 migrate-support-check     fail never pass
 test-arm64-arm64-xl-xsm        14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2    13 migrate-support-check     fail never pass
 test-arm64-arm64-libvirt-xsm   13 migrate-support-check     fail never pass
 test-arm64-arm64-xl-credit2    14 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm   14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-rtds       13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-rtds       14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd   12 migrate-support-check     fail never pass
 test-armhf-armhf-xl            13 migrate-support-check     fail never pass
 test-armhf-armhf-xl            14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm   13 migrate-support-check     fail never pass
 test-armhf-armhf-libvirt-xsm   14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt       13 migrate-support-check     fail never pass
 test-armhf-armhf-libvirt       14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-xsm        13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-xsm        14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu  13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-multivcpu  14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2    13 migrate-support-check     fail ne
[Xen-devel] [xen-4.7-testing test] 121700: FAIL
flight 121700 xen-4.7-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/121700/

Failures and problems with tests :-(

Tests which did not succeed and are blocking, including tests which could not be run:
 test-amd64-amd64-amd64-pvgrub                     broken in 121444
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm broken in 121444

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 4 host-install(4) broken in 121444 pass in 121700
 test-amd64-amd64-amd64-pvgrub   4 host-install(4)           broken in 121444 pass in 121700
 test-xtf-amd64-amd64-3         50 xtf/test-hvm64-lbr-tsx-vmentry fail in 121444 pass in 121700
 test-xtf-amd64-amd64-4         50 xtf/test-hvm64-lbr-tsx-vmentry fail pass in 121444

Tests which did not succeed, but are not blocking:
 test-xtf-amd64-amd64-5         50 xtf/test-hvm64-lbr-tsx-vmentry fail in 121444 like 121093
 test-xtf-amd64-amd64-2         50 xtf/test-hvm64-lbr-tsx-vmentry fail like 121093
 test-armhf-armhf-libvirt       14 saverestore-support-check fail like 121247
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop           fail like 121247
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop            fail like 121247
 test-armhf-armhf-libvirt-raw   13 saverestore-support-check fail like 121247
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop            fail like 121247
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop           fail like 121247
 test-armhf-armhf-libvirt-xsm   14 saverestore-support-check fail like 121247
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop           fail like 121247
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop            fail like 121247
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop           fail like 121247
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop            fail like 121247
 test-xtf-amd64-amd64-3         52 xtf/test-hvm64-memop-seg  fail never pass
 test-xtf-amd64-amd64-1         52 xtf/test-hvm64-memop-seg  fail never pass
 test-xtf-amd64-amd64-4         52 xtf/test-hvm64-memop-seg  fail never pass
 test-xtf-amd64-amd64-2         52 xtf/test-hvm64-memop-seg  fail never pass
 test-xtf-amd64-amd64-5         52 xtf/test-hvm64-memop-seg  fail never pass
 test-amd64-amd64-libvirt-xsm   13 migrate-support-check     fail never pass
 test-amd64-i386-libvirt        13 migrate-support-check     fail never pass
 test-amd64-i386-libvirt-xsm    13 migrate-support-check     fail never pass
 test-amd64-amd64-libvirt       13 migrate-support-check     fail never pass
 test-arm64-arm64-libvirt-xsm   13 migrate-support-check     fail never pass
 test-arm64-arm64-libvirt-xsm   14 saverestore-support-check fail never pass
 test-arm64-arm64-xl            13 migrate-support-check     fail never pass
 test-arm64-arm64-xl            14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm        13 migrate-support-check     fail never pass
 test-arm64-arm64-xl-xsm        14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2    13 migrate-support-check     fail never pass
 test-arm64-arm64-xl-credit2    14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale    13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-arndale    14 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-libvirt-vhd   12 migrate-support-check     fail never pass
 test-armhf-armhf-xl-multivcpu  13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-multivcpu  14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check     fail never pass
 test-armhf-armhf-xl            13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl            14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2    13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-credit2    14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt       13 migrate-support-check     fail never pass
 test-armhf-armhf-libvirt-raw   12 migrate-support-check     fail never pass
 test-armhf-armhf-xl-rtds       13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-rtds       14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-vhd        12 migrate-support-check     fail never pass
 test-armhf-armhf-xl-vhd        13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-xsm        13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-xsm        14 saverestore-support-check
[Xen-devel] [rumprun test] 121706: regressions - FAIL
flight 121706 rumprun real [real]
http://logs.test-lab.xenproject.org/osstest/logs/121706/

Regressions :-(

Tests which did not succeed and are blocking, including tests which could not be run:
 build-amd64-rumprun             6 rumprun-build             fail REGR. vs. 106754
 build-i386-rumprun              6 rumprun-build             fail REGR. vs. 106754

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumprun-amd64  1 build-check(1)            blocked n/a
 test-amd64-i386-rumprun-i386    1 build-check(1)            blocked n/a

version targeted for testing:
 rumprun              94bdf32ac57b84c1b42150d21f0ad79b3b5dd99c
baseline version:
 rumprun              c7f2f016becc1cd0e85da6e1b25a8e7f9fb2aa74

Last test of basis   106754  2017-03-18 04:21:25 Z  380 days
Testing same since   120360  2018-03-09 04:19:20 Z   24 days  21 attempts

People who touched revisions under test:
  Kent McLeod
  Naja Melan
  Sebastian Wicki
  Wei Liu

jobs:
 build-amd64                      pass
 build-i386                       pass
 build-amd64-pvops                pass
 build-i386-pvops                 pass
 build-amd64-rumprun              fail
 build-i386-rumprun               fail
 test-amd64-amd64-rumprun-amd64   blocked
 test-amd64-i386-rumprun-i386     blocked

sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
 http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
 http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

Not pushing.
commit 94bdf32ac57b84c1b42150d21f0ad79b3b5dd99c
Merge: 8fe40c8 b3c1033
Author: Kent McLeod
Date:   Fri Feb 16 09:15:45 2018 +1100

    Merge pull request #118 from kent-mcleod/stretch-linking-defaultpie

    Fix linking on Debian Stretch (gcc-6)

commit b3c1033b090b65e8e86999ddd063c174502aa3f0
Author: Kent McLeod
Date:   Wed Feb 14 16:43:16 2018 +1100

    Add further -no-pie checks to Rumprun build tools

    This builds upon the previous commit to add -no-pie anywhere the
    relocatable flag (-Wl,-r) is used to handle compilers that enable
    -pie by default (Such as Debian Stretch).

commit 8fe40c84edddfbf472b4a7cce960df749701174c
Merge: c7f2f01 685f4ab
Author: Sebastian Wicki
Date:   Fri Jan 5 15:04:18 2018 +0100

    Merge pull request #112 from najamelan/bugfix/gcc7-fallthrough

    Add the -Wimplicit-fallthrough=0 flag to allow compiling with GCC7

commit 685f4ab3b74b6f1e1b40bdd3d2c42efa44bf385d
Author: Naja Melan
Date:   Thu Jan 4 16:07:46 2018 +

    Make the disabling of the fallthrough warning dependent on GCC version

    This should prevent older gcc versions from choking on unknown
    argument. I have not tested this, just wrote the code directly on
    github. Use with caution.

commit 34056451174e8722b972229fefc1bf9e0b89a7da
Author: Naja Melan
Date:   Wed Jan 3 18:57:50 2018 +

    Add the -Wimplicit-fallthrough=0 flag to allow compiling with GCC7

    GCC7 comes with a new warning "implicit-fallthrough" which will
    prevent building the netbsd-src. For more information:
    https://dzone.com/articles/implicit-fallthrough-in-gcc-7

commit 35d81194b7feb75d20af3ba4fdb45ea76230852f
Author: Wei Liu
Date:   Wed Jun 7 16:30:00 2017 +0100

    Fix linking on Debian Stretch

    Provide cc-option. Use that to check if -no-pie is available and
    append it when necessary.

    Signed-off-by: Wei Liu

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
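The cc-option idiom the commits above rely on (probe whether the compiler accepts a flag before appending it) can be sketched in POSIX shell roughly as follows. The `cc_option` function name and the harness around it are illustrative, not the exact code from the rumprun build tools:

```shell
#!/bin/sh
# Probe whether the compiler accepts a flag: print the flag if a trivial
# compile with it succeeds, print nothing otherwise.  This mirrors the
# pattern used to append -no-pie only on toolchains (e.g. Debian Stretch
# gcc-6) that enable -pie by default.
cc_option() {
    flag="$1"
    if ${CC:-cc} "$flag" -x c -c /dev/null -o /dev/null >/dev/null 2>&1; then
        printf '%s' "$flag"
    fi
}

# Append -no-pie only when the compiler understands it; on older
# compilers EXTRA_CFLAGS simply stays empty.
EXTRA_CFLAGS="$(cc_option -no-pie)"
echo "EXTRA_CFLAGS='$EXTRA_CFLAGS'"
```

Because the probe swallows all compiler output and only keys off the exit status, the same script works on compilers that reject the flag, accept it, or merely warn about it.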
[Xen-devel] [linux-linus test] 121679: regressions - FAIL
flight 121679 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/121679/

Regressions :-(

Tests which did not succeed and are blocking, including tests which could not be run:
 test-amd64-i386-xl-xsm          7 xen-boot                  fail REGR. vs. 118324
 test-amd64-i386-libvirt         7 xen-boot                  fail REGR. vs. 118324
 test-amd64-i386-xl-qemuu-ovmf-amd64 7 xen-boot              fail REGR. vs. 118324
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 118324
 test-amd64-i386-xl-qemuu-win10-i386 7 xen-boot              fail REGR. vs. 118324
 test-amd64-i386-xl-qemut-win7-amd64 7 xen-boot              fail REGR. vs. 118324
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 7 xen-boot     fail REGR. vs. 118324
 test-amd64-i386-qemuu-rhel6hvm-amd 7 xen-boot               fail REGR. vs. 118324
 test-amd64-i386-xl-raw          7 xen-boot                  fail REGR. vs. 118324
 test-amd64-i386-xl-qemut-debianhvm-amd64 7 xen-boot         fail REGR. vs. 118324
 test-amd64-i386-qemut-rhel6hvm-amd 7 xen-boot               fail REGR. vs. 118324
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start              fail REGR. vs. 118324
 test-amd64-amd64-xl-pvhv2-amd  12 guest-start               fail REGR. vs. 118324
 test-amd64-i386-examine         8 reboot                    fail REGR. vs. 118324
 test-amd64-i386-xl-qemuu-ws16-amd64 7 xen-boot              fail REGR. vs. 118324
 test-amd64-i386-qemuu-rhel6hvm-intel 7 xen-boot             fail REGR. vs. 118324
 test-amd64-i386-xl-qemut-ws16-amd64 7 xen-boot              fail REGR. vs. 118324
 test-amd64-i386-pair           10 xen-boot/src_host         fail REGR. vs. 118324
 test-amd64-i386-pair           11 xen-boot/dst_host         fail REGR. vs. 118324
 test-amd64-i386-libvirt-pair   10 xen-boot/src_host         fail REGR. vs. 118324
 test-amd64-i386-libvirt-pair   11 xen-boot/dst_host         fail REGR. vs. 118324
 test-amd64-i386-xl-qemut-win10-i386 7 xen-boot              fail REGR. vs. 118324
 test-amd64-i386-rumprun-i386    7 xen-boot                  fail REGR. vs. 118324
 test-amd64-i386-qemut-rhel6hvm-intel 7 xen-boot             fail REGR. vs. 118324
 test-amd64-i386-xl-qemuu-win7-amd64 7 xen-boot              fail REGR. vs. 118324
 test-amd64-i386-xl              7 xen-boot                  fail REGR. vs. 118324
 test-amd64-i386-libvirt-xsm     7 xen-boot                  fail REGR. vs. 118324
 test-amd64-i386-freebsd10-i386  7 xen-boot                  fail REGR. vs. 118324
 test-amd64-i386-xl-qemuu-debianhvm-amd64 7 xen-boot         fail REGR. vs. 118324
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 7 xen-boot     fail REGR. vs. 118324
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 118324
 test-amd64-i386-freebsd10-amd64 7 xen-boot                  fail REGR. vs. 118324
 test-armhf-armhf-xl-cubietruck  6 xen-install               fail REGR. vs. 118324
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 16 guest-localmigrate/x10 fail REGR. vs. 118324

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm   14 saverestore-support-check fail like 118324
 test-armhf-armhf-libvirt       14 saverestore-support-check fail like 118324
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop           fail like 118324
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop           fail like 118324
 test-armhf-armhf-libvirt-raw   13 saverestore-support-check fail like 118324
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop           fail like 118324
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop           fail like 118324
 test-amd64-i386-xl-pvshim       7 xen-boot                  fail never pass
 test-amd64-amd64-libvirt       13 migrate-support-check     fail never pass
 test-amd64-amd64-libvirt-xsm   13 migrate-support-check     fail never pass
 test-arm64-arm64-xl-xsm        13 migrate-support-check     fail never pass
 test-arm64-arm64-xl-credit2    13 migrate-support-check     fail never pass
 test-arm64-arm64-xl-credit2    14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm        14 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm   13 migrate-support-check     fail never pass
 test-arm64-arm64-xl            13 migrate-support-check     fail never pass
 test-arm64-arm64-libvirt-xsm   14 saverestore-support-check fail never pass
 test-arm64-arm64-xl            14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-arndale    13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-arndale    14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd   12 migrate-support-check     fail never pass
 test-armhf-armhf-xl            13 migrate-support-check     fail never pass
 test-armhf-armhf-xl            14 saverestore-support-check fail
[Xen-devel] [PATCH v1] Xen-blkfront fixes to dynamically adjust ring.
Hi!

This patch allows dynamic reconfiguration of the three parameters that a Xen blkfront driver initially negotiates:

* max_indirect_segs: maximum number of indirect segments.
* max_ring_page_order: maximum order of pages to be used for the shared ring.
* max_queues: maximum number of queues (rings) to be used.

But storage backends, workloads, and guest memory sizes impose very different tuning requirements, and it is impossible to predict application characteristics centrally. It is therefore best to allow these settings to be adjusted dynamically, based on the workload, from inside the guest.

 drivers/block/xen-blkfront.c | 320 ---
 1 file changed, 304 insertions(+), 16 deletions(-)

Bob Liu (1):
  xen-blkfront: dynamic configuration of per-vbd resources

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
[Xen-devel] [PATCH v1] xen-blkfront: dynamic configuration of per-vbd resources
From: Bob Liu

The current VBD layer reserves buffer space for each attached device based on three statically configured settings which are read at boot time:

* max_indirect_segs: maximum number of indirect segments.
* max_ring_page_order: maximum order of pages to be used for the shared ring.
* max_queues: maximum number of queues (rings) to be used.

But storage backends, workloads, and guest memory sizes impose very different tuning requirements, and it is impossible to predict application characteristics centrally. It is therefore best to allow these settings to be adjusted dynamically, based on the workload, from inside the guest.

Usage:
Show current values:
 cat /sys/devices/vbd-xxx/max_indirect_segs
 cat /sys/devices/vbd-xxx/max_ring_page_order
 cat /sys/devices/vbd-xxx/max_queues

Write new values:
 echo <new value> > /sys/devices/vbd-xxx/max_indirect_segs
 echo <new value> > /sys/devices/vbd-xxx/max_ring_page_order
 echo <new value> > /sys/devices/vbd-xxx/max_queues

Signed-off-by: Bob Liu
Signed-off-by: Somasundaram Krishnasamy
Signed-off-by: Konrad Rzeszutek Wilk
---
 drivers/block/xen-blkfront.c | 320 ---
 1 file changed, 304 insertions(+), 16 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 92ec1bbece51..4ebd368f4d1a 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -46,6 +46,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -217,6 +218,11 @@ struct blkfront_info
 	/* Save uncomplete reqs and bios for migration. */
 	struct list_head requests;
 	struct bio_list bio_list;
+	/* For dynamic configuration. */
+	unsigned int reconfiguring:1;
+	int new_max_indirect_segments;
+	int new_max_ring_page_order;
+	int new_max_queues;
 };

 static unsigned int nr_minors;
@@ -1355,6 +1361,31 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	for (i = 0; i < info->nr_rings; i++)
 		blkif_free_ring(&info->rinfo[i]);
+	/* Remove old xenstore nodes. */
+	if (info->nr_ring_pages > 1)
+		xenbus_rm(XBT_NIL, info->xbdev->nodename, "ring-page-order");
+
+	if (info->nr_rings == 1) {
+		if (info->nr_ring_pages == 1) {
+			xenbus_rm(XBT_NIL, info->xbdev->nodename, "ring-ref");
+		} else {
+			for (i = 0; i < info->nr_ring_pages; i++) {
+				char ring_ref_name[RINGREF_NAME_LEN];
+
+				snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref%u", i);
+				xenbus_rm(XBT_NIL, info->xbdev->nodename, ring_ref_name);
+			}
+		}
+	} else {
+		xenbus_rm(XBT_NIL, info->xbdev->nodename, "multi-queue-num-queues");
+
+		for (i = 0; i < info->nr_rings; i++) {
+			char queuename[QUEUE_NAME_LEN];
+
+			snprintf(queuename, QUEUE_NAME_LEN, "queue-%u", i);
+			xenbus_rm(XBT_NIL, info->xbdev->nodename, queuename);
+		}
+	}
 	kfree(info->rinfo);
 	info->rinfo = NULL;
 	info->nr_rings = 0;
@@ -1778,10 +1809,18 @@ static int talk_to_blkback(struct xenbus_device *dev,
 	if (!info)
 		return -ENODEV;
-	max_page_order = xenbus_read_unsigned(info->xbdev->otherend,
-					      "max-ring-page-order", 0);
-	ring_page_order = min(xen_blkif_max_ring_order, max_page_order);
-	info->nr_ring_pages = 1 << ring_page_order;
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			   "max-ring-page-order", "%u", &max_page_order);
+	if (err != 1)
+		info->nr_ring_pages = 1;
+	else {
+		ring_page_order = min(xen_blkif_max_ring_order, max_page_order);
+		if (info->new_max_ring_page_order) {
+			BUG_ON(info->new_max_ring_page_order > max_page_order);
+			ring_page_order = info->new_max_ring_page_order;
+		}
+		info->nr_ring_pages = 1 << ring_page_order;
+	}

 	err = negotiate_mq(info);
 	if (err)
@@ -1903,6 +1942,10 @@ static int negotiate_mq(struct blkfront_info *info)
 	backend_max_queues = xenbus_read_unsigned(info->xbdev->otherend,
 						  "multi-queue-max-queues", 1);
 	info->nr_rings = min(backend_max_queues, xen_blkif_max_queues);
+	if (info->new_max_queues) {
+		BUG_ON(info->new_max_queues > backend_max_queues);
+		info->nr_rings = info->new_max_queues;
+	}
 	/* We need at least one ring. */
 	if (!info->nr_rings)
 		info->nr_rings = 1;
@@ -2261,6 +2304,8 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
  */
 static void blkfront_gather_backend_features(struct blkfront_info *info)
 {
+	int er
[Xen-devel] [xen-unstable test] 121682: regressions - FAIL
flight 121682 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/121682/

Regressions :-(

Tests which did not succeed and are blocking, including tests which could not be run:
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 121272
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 121272

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm   14 saverestore-support-check fail like 121272
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop           fail like 121272
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop            fail like 121272
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop           fail like 121272
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop            fail like 121272
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop           fail like 121272
 test-armhf-armhf-libvirt-raw   13 saverestore-support-check fail like 121272
 test-armhf-armhf-libvirt       14 saverestore-support-check fail like 121272
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop            fail like 121272
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop           fail like 121272
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start              fail never pass
 test-amd64-amd64-xl-pvhv2-amd  12 guest-start               fail never pass
 test-amd64-amd64-libvirt-xsm   13 migrate-support-check     fail never pass
 test-amd64-i386-xl-pvshim      12 guest-start               fail never pass
 test-amd64-amd64-libvirt       13 migrate-support-check     fail never pass
 test-amd64-i386-libvirt        13 migrate-support-check     fail never pass
 test-amd64-i386-libvirt-xsm    13 migrate-support-check     fail never pass
 test-arm64-arm64-xl            13 migrate-support-check     fail never pass
 test-arm64-arm64-xl            14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2    13 migrate-support-check     fail never pass
 test-arm64-arm64-libvirt-xsm   13 migrate-support-check     fail never pass
 test-arm64-arm64-xl-credit2    14 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm   14 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm        13 migrate-support-check     fail never pass
 test-arm64-arm64-xl-xsm        14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale    13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-arndale    14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-amd64-amd64-libvirt-vhd   12 migrate-support-check     fail never pass
 test-armhf-armhf-xl-rtds       13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-credit2    13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-rtds       14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2    14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm   13 migrate-support-check     fail never pass
 test-armhf-armhf-xl            13 migrate-support-check     fail never pass
 test-armhf-armhf-xl            14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-xsm        13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-xsm        14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu  13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-multivcpu  14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-raw   12 migrate-support-check     fail never pass
 test-armhf-armhf-libvirt       13 migrate-support-check     fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop            fail never pass
 test-armhf-armhf-xl-vhd        12 migrate-support-check     fail never pass
 test-armhf-armhf-xl-vhd        13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install       fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install      fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install      fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install       fail never pass

version targeted for testing:
 xen                  6bbcb226cebac90f8ce5ac901e000bfd3ad783c5
baseline version:
 xen                  eabb83121226d5a6a5a68da3a913ac0b5bb1e0cf

Last test of basis   121272  2018-03-25 16:16:07 Z  7 days
Failing since        121307  2018-03-27 00:5
[Xen-devel] [xen-4.6-testing test] 121686: regressions - FAIL
flight 121686 xen-4.6-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/121686/

Regressions :-(

Tests which did not succeed and are blocking, including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 17 debian-hvm-install/l1/l2 fail REGR. vs. 119227

Tests which are failing intermittently (not blocking):
 test-xtf-amd64-amd64-3         50 xtf/test-hvm64-lbr-tsx-vmentry fail in 121420 pass in 121686
 test-xtf-amd64-amd64-1         50 xtf/test-hvm64-lbr-tsx-vmentry fail pass in 121420
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 16 guest-localmigrate/x10 fail pass in 121420
 test-armhf-armhf-xl-credit2    16 guest-start/debian.repeat fail pass in 121420

Tests which did not succeed, but are not blocking:
 test-xtf-amd64-amd64-2         50 xtf/test-hvm64-lbr-tsx-vmentry fail in 121420 like 119187
 test-xtf-amd64-amd64-5         50 xtf/test-hvm64-lbr-tsx-vmentry fail in 121420 like 119227
 test-xtf-amd64-amd64-4         50 xtf/test-hvm64-lbr-tsx-vmentry fail in 121420 like 119227
 test-armhf-armhf-xl-rtds       12 guest-start               fail in 121420 like 119227
 test-armhf-armhf-xl-rtds       16 guest-start/debian.repeat fail like 119187
 test-armhf-armhf-libvirt       14 saverestore-support-check fail like 119227
 test-armhf-armhf-libvirt-xsm   14 saverestore-support-check fail like 119227
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop            fail like 119227
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop           fail like 119227
 test-armhf-armhf-libvirt-raw   13 saverestore-support-check fail like 119227
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop           fail like 119227
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop            fail like 119227
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop            fail like 119227
 test-xtf-amd64-amd64-2         37 xtf/test-hvm32pae-memop-seg fail never pass
 test-xtf-amd64-amd64-5         37 xtf/test-hvm32pae-memop-seg fail never pass
 test-xtf-amd64-amd64-4         37 xtf/test-hvm32pae-memop-seg fail never pass
 test-xtf-amd64-amd64-1         37 xtf/test-hvm32pae-memop-seg fail never pass
 test-xtf-amd64-amd64-2         52 xtf/test-hvm64-memop-seg  fail never pass
 test-xtf-amd64-amd64-5         52 xtf/test-hvm64-memop-seg  fail never pass
 test-xtf-amd64-amd64-4         52 xtf/test-hvm64-memop-seg  fail never pass
 test-xtf-amd64-amd64-1         52 xtf/test-hvm64-memop-seg  fail never pass
 test-xtf-amd64-amd64-3         37 xtf/test-hvm32pae-memop-seg fail never pass
 test-xtf-amd64-amd64-5         76 xtf/test-pv32pae-xsa-194  fail never pass
 test-xtf-amd64-amd64-2         76 xtf/test-pv32pae-xsa-194  fail never pass
 test-xtf-amd64-amd64-4         76 xtf/test-pv32pae-xsa-194  fail never pass
 test-xtf-amd64-amd64-1         76 xtf/test-pv32pae-xsa-194  fail never pass
 test-xtf-amd64-amd64-3         52 xtf/test-hvm64-memop-seg  fail never pass
 test-xtf-amd64-amd64-3         76 xtf/test-pv32pae-xsa-194  fail never pass
 test-amd64-amd64-libvirt-xsm   13 migrate-support-check     fail never pass
 test-amd64-i386-libvirt        13 migrate-support-check     fail never pass
 test-amd64-amd64-libvirt       13 migrate-support-check     fail never pass
 test-amd64-i386-libvirt-xsm    13 migrate-support-check     fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale    13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-arndale    14 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-rtds       13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-rtds       14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd   12 migrate-support-check     fail never pass
 test-armhf-armhf-xl-xsm        13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-xsm        14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt       13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-credit2    13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-credit2    14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm   13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-raw   12 migrate-support-check     fail never pass
 test-armhf-armhf-xl-vhd        12 migrate-support-check     fail never pass
 test-armhf-armhf-xl-vhd        13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu  13 migrate-support-check     fail never pass
 test-armhf-armhf-xl-multivcpu  14 saverestore-suppor
[Xen-devel] [RESEND PATCH v5 1/2] libxl: Implement the handler to handle unrecoverable AER errors
Implement the callback function to handle unrecoverable AER errors, and also the public APIs that can be used to register/unregister the handler. When an AER error occurs, the handler will forcibly remove the erring PCIe device from the guest. Signed-off-by: Venu Busireddy --- tools/libxl/libxl.h | 7 +++ tools/libxl/libxl_event.h| 7 +++ tools/libxl/libxl_internal.h | 8 +++ tools/libxl/libxl_pci.c | 123 +++ 4 files changed, 145 insertions(+) diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h index eca0ea2c50..99a3c8ae1f 100644 --- a/tools/libxl/libxl.h +++ b/tools/libxl/libxl.h @@ -1120,6 +1120,13 @@ void libxl_mac_copy(libxl_ctx *ctx, libxl_mac *dst, const libxl_mac *src); */ #define LIBXL_HAVE_PV_SHIM 1 +/* LIBXL_HAVE_AER_EVENTS_HANDLER + * + * If this is defined, libxl has the library functions called + * libxl_reg_aer_events_handler and libxl_unreg_aer_events_handler. + */ +#define LIBXL_HAVE_AER_EVENTS_HANDLER 1 + typedef char **libxl_string_list; void libxl_string_list_dispose(libxl_string_list *sl); int libxl_string_list_length(const libxl_string_list *sl); diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h index 1ea789e231..63c29ae800 100644 --- a/tools/libxl/libxl_event.h +++ b/tools/libxl/libxl_event.h @@ -184,6 +184,13 @@ void libxl_evdisable_domain_death(libxl_ctx *ctx, libxl_evgen_domain_death*); * may generate only a DEATH event. */ +typedef struct libxl__aer_watch libxl_aer_watch; +int libxl_reg_aer_events_handler(libxl_ctx *, uint32_t); + /* + * Registers a handler to handle the occurrence of unrecoverable AER errors. 
+ */ +void libxl_unreg_aer_events_handler(libxl_ctx *, uint32_t); + typedef struct libxl__evgen_disk_eject libxl_evgen_disk_eject; int libxl_evenable_disk_eject(libxl_ctx *ctx, uint32_t domid, const char *vdev, libxl_ev_user, libxl_evgen_disk_eject **evgen_out); diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h index 506687fbe9..7972490050 100644 --- a/tools/libxl/libxl_internal.h +++ b/tools/libxl/libxl_internal.h @@ -356,6 +356,14 @@ struct libxl__ev_child { LIBXL_LIST_ENTRY(struct libxl__ev_child) entry; }; +/* + * Structure used for AER event handling. + */ +struct libxl__aer_watch { +uint32_t domid; +libxl__ev_xswatch watch; +struct libxl__aer_watch *next; +}; /* * evgen structures, which are the state we use for generating diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c index 4755a0c93c..c121c9f8cc 100644 --- a/tools/libxl/libxl_pci.c +++ b/tools/libxl/libxl_pci.c @@ -1686,6 +1686,129 @@ static int libxl_device_pci_compare(libxl_device_pci *d1, return COMPARE_PCI(d1, d2); } +static void aer_backend_watch_callback(libxl__egc *egc, + libxl__ev_xswatch *watch, + const char *watch_path, + const char *event_path) +{ +EGC_GC; +libxl_aer_watch *aer_ws = CONTAINER_OF(watch, *aer_ws, watch); +int rc; +uint32_t dom, bus, dev, fn; +uint32_t domid = aer_ws->domid; +char *p, *path; +const char *aerFailedSBDF; +libxl_device_pci pcidev; + +/* Extract the backend directory. */ +path = libxl__strdup(gc, event_path); +p = strrchr(path, '/'); +if ((p == NULL) || (strcmp(p, "/aerFailedSBDF") != 0)) +return; +/* Truncate the string so it points to the backend directory. */ +*p = '\0'; + +/* Fetch the value of the failed PCI device. 
*/ +rc = libxl__xs_read_checked(gc, XBT_NULL, +GCSPRINTF("%s/aerFailedSBDF", path), &aerFailedSBDF); +if (rc || !aerFailedSBDF) +return; +LOGD(ERROR, domid, " aerFailedSBDF = %s", aerFailedSBDF); +sscanf(aerFailedSBDF, "%x:%x:%x.%x", &dom, &bus, &dev, &fn); + +libxl_device_pci_init(&pcidev); +pcidev_struct_fill(&pcidev, dom, bus, dev, fn, 0); +/* Forcibly remove the device from the guest */ +rc = libxl__device_pci_remove_common(gc, domid, &pcidev, 1); +if (rc) +LOGD(ERROR, domid, " libxl__device_pci_remove_common() failed, rc=x%x", +(unsigned int)rc); + +return; +} + +static libxl_aer_watch *manage_aer_ws_list(libxl_aer_watch *in, uint32_t domid) +{ +static libxl_aer_watch *aer_ws = NULL; +libxl_aer_watch *iter, *prev = NULL; + +if (in) { +if (aer_ws) +in->next = aer_ws; +iter = aer_ws = in; +} else { +iter = aer_ws; +while (iter) { +if (iter->domid == domid) { +if (prev) +prev->next = iter->next; +else +aer_ws = iter->next; +break; +} +prev = iter; +iter = iter->next; +} +} +return iter; +} + +static void store_aer_ws(libxl_aer_watch *aer_ws) +{ +manage_aer_ws_list(aer_ws, 0); +
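[Editorial note: the list bookkeeping that manage_aer_ws_list() above folds into a single insert-or-remove function can also be written as two explicit operations. The sketch below is illustrative only — standalone names, not the libxl API — using the pointer-to-pointer idiom to avoid the separate "prev" variable:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for struct libxl__aer_watch: one record per watched domain. */
struct aer_watch {
    uint32_t domid;
    struct aer_watch *next;
};

static struct aer_watch *aer_ws_head;

/* Push a new watch onto the list head (the "in != NULL" arm of
 * manage_aer_ws_list() in the patch). */
static void store_aer_ws(struct aer_watch *in)
{
    in->next = aer_ws_head;
    aer_ws_head = in;
}

/* Unlink and return the watch registered for domid, or NULL if none
 * (the "in == NULL" arm).  Walking with a pointer-to-pointer updates
 * either the head or the previous node's link uniformly. */
static struct aer_watch *remove_aer_ws(uint32_t domid)
{
    struct aer_watch **pp;

    for ( pp = &aer_ws_head; *pp; pp = &(*pp)->next )
    {
        if ( (*pp)->domid == domid )
        {
            struct aer_watch *found = *pp;

            *pp = found->next;
            found->next = NULL;
            return found;
        }
    }
    return NULL;
}
```

Whether one merged function or two small ones is preferable is a style call; the two-function form makes the register/unregister pairing in the public API more obvious.]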
[Xen-devel] [RESEND PATCH v5 0/2] Containing AER unrecoverable errors
This patch set is part of a set of patches that together allow containment of unrecoverable AER errors from PCIe devices assigned to guests in passthrough mode. The containment is achieved by forcibly removing the erring PCIe device from the guest. The original xen-pciback patch corresponding to this patch set is: https://lists.xen.org/archives/html/xen-devel/2017-06/msg03274.html. It will be reposted after this patch set is accepted. Changes in v5: * v4 worked only in the case of guests created using 'xl' command. Enhanced the fix to work for guests created using libvirt too. Changes in v4: * Made the following changes suggested by Wei Liu. - Combine multiple LIBXL_HAVE_* definitions into one. - Use libxl__calloc() instead of malloc(). Changes in v3: * Made the following changes suggested by Wei Liu. - Added LIBXL_HAVE macros to libxl.h. - Don't hard-code dom0's domid to 0. Instead, use libxl__get_domid(). - Corrected comments. * Made the following changes based on comments from Ian Jackson. - Got rid of the global variable aer_watch. - Added documentation (comments in code) for the new API calls. - Removed the unnecessary writes to xenstore. Changes in v2: - Instead of killing the guest and hiding the device, forcibly remove the device from the guest. Venu Busireddy (2): libxl: Implement the handler to handle unrecoverable AER errors xl: Register the AER event handler that handles AER errors tools/libxl/libxl.h | 7 +++ tools/libxl/libxl_create.c | 11 +++- tools/libxl/libxl_domain.c | 1 + tools/libxl/libxl_event.h| 7 +++ tools/libxl/libxl_internal.h | 8 +++ tools/libxl/libxl_pci.c | 123 +++ tools/xl/xl_vmcontrol.c | 14 - 7 files changed, 168 insertions(+), 3 deletions(-) ___ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel
[Xen-devel] [RESEND PATCH v5 2/2] xl: Register the AER event handler that handles AER errors
When a guest is created, register the AER event handler to handle the AER errors. When an AER error occurs, the handler will forcibly remove the erring PCIe device from the guest. Signed-off-by: Venu Busireddy Signed-off-by: Wim Ten Have --- tools/libxl/libxl_create.c | 11 +-- tools/libxl/libxl_domain.c | 1 + tools/xl/xl_vmcontrol.c| 14 +- 3 files changed, 23 insertions(+), 3 deletions(-) diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c index c498135246..2d247da5f0 100644 --- a/tools/libxl/libxl_create.c +++ b/tools/libxl/libxl_create.c @@ -1663,7 +1663,7 @@ static int do_domain_create(libxl_ctx *ctx, libxl_domain_config *d_config, { AO_CREATE(ctx, 0, ao_how); libxl__app_domain_create_state *cdcs; -int rc; +int rc, ao_rc; GCNEW(cdcs); cdcs->dcs.ao = ao; @@ -1698,7 +1698,14 @@ static int do_domain_create(libxl_ctx *ctx, libxl_domain_config *d_config, initiate_domain_create(egc, &cdcs->dcs); -return AO_INPROGRESS; +ao_rc = AO_INPROGRESS; +rc = libxl_reg_aer_events_handler(ctx, *domid); +if (rc) { +/* Log the error, and move on... 
*/ +LOGD(ERROR, *domid, +"libxl_reg_aer_events_handler() failed, rc = %d", rc); +} +return ao_rc; out_err: return AO_CREATE_FAIL(rc); diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c index 13b1c73d40..b8fb5e0349 100644 --- a/tools/libxl/libxl_domain.c +++ b/tools/libxl/libxl_domain.c @@ -906,6 +906,7 @@ void libxl__domain_destroy(libxl__egc *egc, libxl__domain_destroy_state *dds) STATE_AO_GC(dds->ao); uint32_t stubdomid = libxl_get_stubdom_id(CTX, dds->domid); +libxl_unreg_aer_events_handler(CTX, dds->domid); if (stubdomid) { dds->stubdom.ao = ao; dds->stubdom.domid = stubdomid; diff --git a/tools/xl/xl_vmcontrol.c b/tools/xl/xl_vmcontrol.c index 89c2b25ded..5bf415fa6e 100644 --- a/tools/xl/xl_vmcontrol.c +++ b/tools/xl/xl_vmcontrol.c @@ -945,8 +945,11 @@ start: libxl_domain_unpause(ctx, domid); ret = domid; /* caller gets success in parent */ -if (!daemonize && !monitor) +if (!daemonize && !monitor) { +/* Unregister aer events handler before returning/exiting */ +libxl_unreg_aer_events_handler(ctx, domid); goto out; +} if (dom_info->vnc) autoconnect_vncviewer(domid, vncautopass); @@ -958,9 +961,17 @@ start: ret = do_daemonize(name, NULL); free(name); if (ret) { +/* Unregister aer events handler before returning/exiting */ +libxl_unreg_aer_events_handler(ctx, domid); ret = (ret == 1) ? domid : ret; goto out; } +/* Child has new ctx. Re-register the events handler in child's ctx */ +ret = libxl_reg_aer_events_handler(ctx, domid); +if (ret) { +/* Log the error, and move on... */ +LOG("libxl_reg_aer_events_handler() failed, ret = %d", ret); +} need_daemon = 0; } LOG("Waiting for domain %s (domid %u) to die [pid %ld]", @@ -1059,6 +1070,7 @@ start: case LIBXL_EVENT_TYPE_DOMAIN_DEATH: LOG("Domain %u has been destroyed.", domid); +libxl_unreg_aer_events_handler(ctx, domid); libxl_event_free(ctx, event); ret = 0; goto out; ___ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel
Re: [Xen-devel] [PATCH v2 07/17] arm64: vgic-v3: Add ICV_EOIR1_EL1 handler
On 04/02/2018 04:33 PM, Manish Jaggi wrote: On 03/27/2018 03:48 PM, Marc Zyngier wrote: On 27/03/18 10:07, Manish Jaggi wrote: This patch is ported to xen from linux commit b6f49035b4bf6e2709f2a5fed3107f5438c1fd02 KVM: arm64: vgic-v3: Add ICV_EOIR1_EL1 handler Add a handler for writing the guest's view of the ICC_EOIR1_EL1 register. This involves dropping the priority of the interrupt, and deactivating it if required (EOImode == 0). Signed-off-by : Manish Jaggi --- xen/arch/arm/arm64/vgic-v3-sr.c | 136 xen/include/asm-arm/arm64/sysregs.h | 1 + xen/include/asm-arm/gic_v3_defs.h | 4 ++ 3 files changed, 141 insertions(+) diff --git a/xen/arch/arm/arm64/vgic-v3-sr.c b/xen/arch/arm/arm64/vgic-v3-sr.c index 026d64506f..e32ec01f56 100644 --- a/xen/arch/arm/arm64/vgic-v3-sr.c +++ b/xen/arch/arm/arm64/vgic-v3-sr.c @@ -33,6 +33,7 @@ #define ICC_IAR1_EL1_SPURIOUS 0x3ff #define VGIC_MAX_SPI 1019 +#define VGIC_MIN_LPI 8192 static int vgic_v3_bpr_min(void) { @@ -482,6 +483,137 @@ static void vreg_emulate_iar(struct cpu_user_regs *regs, const union hsr hsr) vgic_v3_read_iar(regs, hsr); } +static int vgic_v3_find_active_lr(int intid, uint64_t *lr_val) +{ + int i; + unsigned int used_lrs = gic_get_num_lrs(); This is quite a departure from the existing code. KVM always allocate LRs sequentially, and used_lrs represents the current upper bound. IIUC, Xen uses a function gic_find_unused_lr to find an unused LR. xen/arch/arm/gic.c: gic_raise_guest_irq gic_find_unused_lr Here, you seem to be looking at *all* the LRs. Is that safe? IIUC Xen does not maintain a used_lrs, it does have an lr_mask, but that is static in gic.c To do something like +for_each_set_bit(i, lr_mask, nr_lrs) + { + u64 val = __gic_v3_get_lr(i); + u8 lr_prio = (val & ICH_LR_PRIORITY_MASK) >> ICH_LR_PRIORITY_SHIFT; + /* Not pending in the state? 
*/ + if ((val & ICH_LR_STATE) != ICH_LR_PENDING_BIT) + continue; I need to do some jugglery to make lr_mask visible outside of xen/arch/arm/gic.c. The easiest would be to add an extern function; the harder way would be to add it in gic_hw_operations. - vgic_v3_highest_priority_lr is interested in used LRs which are in Pending state. - emulating IAR is done with interrupts disabled - iterating over all the LRs and finding which ones are in Pending. Just to add, I was answering for using num_lrs for used_lrs; the above was for the IAR flow. The same holds for the EOIR flow as well. The bigger point is that unless I add some jugglery to access a static value outside gic.c, this is the only solution. Stefano/Andre/Julien, please suggest if there is some better way... Are you guaranteed not to have any stale state? I would request Stefano/Andre/Julien to comment here... In any case, the change should be documented. + + for ( i = 0; i < used_lrs; i++ ) + { + uint64_t val = gicv3_ich_read_lr(i); + + if ( (val & ICH_LR_VIRTUAL_ID_MASK) == intid && + (val & ICH_LR_ACTIVE_BIT) ) + { + *lr_val = val; + return i; + } + } + + *lr_val = ICC_IAR1_EL1_SPURIOUS; + return -1; +} Thanks, M.
Re: [Xen-devel] [PATCH v2 07/17] arm64: vgic-v3: Add ICV_EOIR1_EL1 handler
On 03/27/2018 03:48 PM, Marc Zyngier wrote: On 27/03/18 10:07, Manish Jaggi wrote: This patch is ported to xen from linux commit b6f49035b4bf6e2709f2a5fed3107f5438c1fd02 KVM: arm64: vgic-v3: Add ICV_EOIR1_EL1 handler Add a handler for writing the guest's view of the ICC_EOIR1_EL1 register. This involves dropping the priority of the interrupt, and deactivating it if required (EOImode == 0). Signed-off-by : Manish Jaggi --- xen/arch/arm/arm64/vgic-v3-sr.c | 136 xen/include/asm-arm/arm64/sysregs.h | 1 + xen/include/asm-arm/gic_v3_defs.h | 4 ++ 3 files changed, 141 insertions(+) diff --git a/xen/arch/arm/arm64/vgic-v3-sr.c b/xen/arch/arm/arm64/vgic-v3-sr.c index 026d64506f..e32ec01f56 100644 --- a/xen/arch/arm/arm64/vgic-v3-sr.c +++ b/xen/arch/arm/arm64/vgic-v3-sr.c @@ -33,6 +33,7 @@ #define ICC_IAR1_EL1_SPURIOUS0x3ff #define VGIC_MAX_SPI 1019 +#define VGIC_MIN_LPI 8192 static int vgic_v3_bpr_min(void) { @@ -482,6 +483,137 @@ static void vreg_emulate_iar(struct cpu_user_regs *regs, const union hsr hsr) vgic_v3_read_iar(regs, hsr); } +static int vgic_v3_find_active_lr(int intid, uint64_t *lr_val) +{ +int i; +unsigned int used_lrs = gic_get_num_lrs(); This is quite a departure from the existing code. KVM always allocate LRs sequentially, and used_lrs represents the current upper bound. IIUC, Xen uses a function gic_find_unused_lr to find an unused LR. xen/arch/arm/gic.c: gic_raise_guest_irq gic_find_unused_lr Here, you seem to be looking at *all* the LRs. Is that safe? IIUC Xen does not maintain a used_lrs, it does have an lr_mask, but that is static in gic.c To do something like +for_each_set_bit(i, lr_mask, nr_lrs) + { + u64 val = __gic_v3_get_lr(i); + u8 lr_prio = (val & ICH_LR_PRIORITY_MASK) >> ICH_LR_PRIORITY_SHIFT; + /* Not pending in the state? 
*/ + if ((val & ICH_LR_STATE) != ICH_LR_PENDING_BIT) + continue; I need to do some jugglery to make lr_mask visible outside of xen/arch/arm/gic.c. The easiest would be to add an extern function; the harder way would be to add it in gic_hw_operations. - vgic_v3_highest_priority_lr is interested in used LRs which are in Pending state. - emulating IAR is done with interrupts disabled - iterating over all the LRs and finding which ones are in Pending. Are you guaranteed not to have any stale state? I would request Stefano/Andre/Julien to comment here... In any case, the change should be documented. + +for ( i = 0; i < used_lrs; i++ ) +{ +uint64_t val = gicv3_ich_read_lr(i); + +if ( (val & ICH_LR_VIRTUAL_ID_MASK) == intid && +(val & ICH_LR_ACTIVE_BIT) ) +{ +*lr_val = val; +return i; +} +} + +*lr_val = ICC_IAR1_EL1_SPURIOUS; +return -1; +} Thanks, M.
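[Editorial note: the LR walk under discussion can be exercised on the host with a mocked register file. Bit positions below follow the GICv3 ICH_LR<n>_EL2 layout; the helper names are invented for the sketch and are not the Xen functions:

```c
#include <assert.h>
#include <stdint.h>

/* ICH_LR<n>_EL2 fields (GICv3 architecture spec): vINTID in [31:0],
 * two-bit state in [63:62] (pending = 01b, active = 10b). */
#define ICH_LR_VIRTUAL_ID_MASK  ((1ULL << 32) - 1)
#define ICH_LR_STATE            (3ULL << 62)
#define ICH_LR_PENDING_BIT      (1ULL << 62)
#define ICH_LR_ACTIVE_BIT       (1ULL << 63)

#define NR_FAKE_LRS 4

/* Mocked LRs so the walk is testable outside the hypervisor. */
static uint64_t fake_lr[NR_FAKE_LRS];
static uint64_t read_lr(unsigned int i) { return fake_lr[i]; }

/* Mirrors the shape of vgic_v3_find_active_lr() in the patch: scan
 * every LR for an entry whose vINTID matches and whose state includes
 * Active; pending-only or invalid entries are skipped. */
static int find_active_lr(uint32_t intid, uint64_t *lr_val)
{
    unsigned int i;

    for ( i = 0; i < NR_FAKE_LRS; i++ )
    {
        uint64_t val = read_lr(i);

        if ( (val & ICH_LR_VIRTUAL_ID_MASK) == intid &&
             (val & ICH_LR_ACTIVE_BIT) )
        {
            *lr_val = val;
            return i;
        }
    }
    return -1;
}
```

The safety question raised above is visible here: walking all NR_FAKE_LRS slots is only correct if unused slots are guaranteed to hold an invalid state, which is exactly the "stale state" concern.]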
Re: [Xen-devel] stubdom --disable-pv-grub has no effect
On Mon, 2 Apr 2018 10:44:23 +0100, Wei Liu wrote: > I suppose you hit some sort of compile error because pv-grub has rotten? A post-build check scans the build log for warnings. Some of them are treated as fatal, and the otherwise successful build is marked as FAIL; no rpm package is provided. Since I have no need for pvgrub, disabling it seems like the obvious choice. Olaf
Re: [Xen-devel] stubdom --disable-pv-grub has no effect
On Mon, Apr 02, 2018 at 11:40:54AM +0200, Olaf Hering wrote: > On Sun, Apr 01, Wei Liu wrote: > > > No. That's a bug in our build system. > > Thanks. For some reason only gcc48-4.8.3.rpm from SLE_11 is affected, > not gcc43-4.3.4.rpm from SLE_11 nor gcc48-4.8.5.rpm from SLE_12. > This fixes my packages for the time being: > sed -i '/ stubdom install-grub/d' Makefile I suppose you hit some sort of compile error because pv-grub has rotten? The proper fix is, of course, to make sure stubdom is really disabled. I will see if I can get around to it at some point. Wei.
Re: [Xen-devel] [PATCH v2] x86/boot: Disable IBRS in intr/nmi exit path at bootup stage
On 2018/3/27 16:52, Jan Beulich wrote: On 27.03.18 at 06:52, wrote: After reset, IBRS is disabled by the processor, but an incoming intr/nmi leaves IBRS enabled after its exit. It's not necessary for bootup code to run with low performance with IBRS enabled. On ORACLE X6-2 (500GB/88 cpus, dom0 11GB/20 vcpus), we observed a 200s+ delay in construct_dom0. By initializing use_shadow_spec_ctrl with the result of (system_state < SYS_STATE_active), IBRS is disabled in the intr/nmi exit path at the bootup stage. The delay in construct_dom0 then drops to ~50s. When hot-onlining a CPU, we initialize IBRS early and set use_shadow_spec_ctrl to false to avoid Branch Target Injection from sibling threads. v2: Use (system_state < SYS_STATE_active) to initialize use_shadow_spec_ctrl instead of literal 1, per Jan. Please place revision information below the first --- marker. --- a/xen/include/asm-x86/spec_ctrl.h +++ b/xen/include/asm-x86/spec_ctrl.h @@ -32,8 +32,22 @@ extern uint8_t default_bti_ist_info; static inline void init_shadow_spec_ctrl_state(void) { struct cpu_info *info = get_cpu_info(); +uint32_t val = SPEC_CTRL_IBRS; Why do you need this variable? +/* Initialize IA32_SPEC_CTRL MSR for hotplugging cpu early */ +if ( system_state >= SYS_STATE_active ) +asm volatile (ALTERNATIVE(ASM_NOP3, "wrmsr", X86_FEATURE_XEN_IBRS_SET) + :: "a" (val), "c" (MSR_SPEC_CTRL), "d" (0) : "memory"); I can see the point of doing this, but the title of the patch doesn't cover it (I think this has been missing independent of your interrupt/NMI paths consideration). Further, INIT# (unlike RESET#) doesn't clear the register, so you may want/need to also clear the register in the X86_FEATURE_XEN_IBRS_CLEAR case. Also you don't need ASM_NOP3 here after 4008c71d7a ("x86/alt: Support for automatic padding calculations"). Additionally I think it would be better to keep the low and high parts of the value next to each other in the constraints, rather than putting the MSR index in the middle.
-info->shadow_spec_ctrl = info->use_shadow_spec_ctrl = 0; +info->shadow_spec_ctrl = 0; +/* + * We want to make sure we clear IBRS in interrupt exit path + * (DO_SPEC_CTRL_EXIT_TO_XEN) while dom0 is still booting to + * avoid unnecessary performance impact. As soon as dom0 has + * booted use_shadow_spec_ctrl will be cleared, for example, + * in idle routine. + */ +info->use_shadow_spec_ctrl = system_state < SYS_STATE_active; I think the code overall would be more readable if you had just a single condition (in if/else form). And then there is the question of whether to use < / >= or != / == : In the resume case, no guest vCPU-s are active (yet), so perhaps the latter would be better. In any event please give Andrew a chance to reply before you send another version, as he may have a different opinion and/or other valuable input. Hi Andrew, May I have your comments? If there are no further suggestions from you, I'll prepare the new version. Thanks, Zhenzhong
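[Editorial note: the "< vs !=" question in the review reduces to which system states should send the exit-to-Xen path through the shadow (IBRS-off) value. A tiny host-side sketch of the two predicates — the enum is abridged to the ordering that matters, not Xen's full definition:

```c
#include <assert.h>
#include <stdbool.h>

/* Abridged ordering of Xen's system_state; only the position of the
 * boot states relative to SYS_STATE_active matters here. */
enum sys_state {
    SYS_STATE_early_boot,
    SYS_STATE_boot,
    SYS_STATE_active,
    SYS_STATE_suspend,
    SYS_STATE_resume,
};

/* The patch's choice: use the shadow value (keep IBRS off on exit to
 * Xen) whenever we have not yet reached SYS_STATE_active. */
static bool use_shadow_lt(enum sys_state s)
{
    return s < SYS_STATE_active;
}

/* The alternative raised in review: treat every non-active state the
 * same, which also covers suspend/resume, when no guest vCPUs run. */
static bool use_shadow_ne(enum sys_state s)
{
    return s != SYS_STATE_active;
}
```

The two differ exactly in the states ordered after SYS_STATE_active, which is the resume case the reviewer points at.]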
Re: [Xen-devel] stubdom --disable-pv-grub has no effect
On Sun, Apr 01, Wei Liu wrote: > No. That's a bug in our build system. Thanks. For some reason only gcc48-4.8.3.rpm from SLE_11 is affected, not gcc43-4.3.4.rpm from SLE_11 nor gcc48-4.8.5.rpm from SLE_12. This fixes my packages for the time being: sed -i '/ stubdom install-grub/d' Makefile Olaf
Re: [Xen-devel] X86 Community Call - Wed Apr 11, 14:00 - 15:00 UTC - Call for Agenda Items
Hi Lars, Chao and I will be traveling and thus will miss the meeting. From our side, we propose continuing to discuss the features which we didn't cover last time, including SGX, SPP, PT-VMX and 288 vCPU. Best Regards John Ji -Original Message- From: George Dunlap [mailto:george.dun...@citrix.com] Sent: Wednesday, March 28, 2018 10:41 PM To: Lars Kurth ; xen-de...@lists.xensource.com Cc: committ...@xenproject.org; Juergen Gross ; Janakarajan Natarajan ; Tamas K Lengyel ; Wei Liu ; Andrew Cooper ; Daniel Kiper ; Roger Pau Monné ; Christopher Clark ; Rich Persaud ; Paul Durrant ; Jan Beulich ; Brian Woods ; intel-xen Subject: Re: X86 Community Call - Wed Apr 11, 14:00 - 15:00 UTC - Call for Agenda Items On 03/22/2018 10:22 AM, Lars Kurth wrote: > Hi all, > > please find attached > a) Meeting details (just a link with timezones) – the meeting invite will > follow when we have an agenda > Bridge details – will be sent with the meeting invite > I am thinking of using GotoMeeting, but want to try this with a > Linux only user before I commit > c) Call for agenda items > > A few suggestions were made, such as XPTI status (if applicable), PVH > status. Also we have some left-overs from the last call: see > https://lists.xenproject.org/archives/html/xen-devel/2018-03/threads.html#01571 > > Regards > Lars > > == Meeting Details == > Wed April 11, 15:00 - 16:00 UTC > > International meeting times: > https://www.timeanddate.com/worldclock/meetingdetails.html?year=2018&month=4&day=11&hour=14&min=0&sec=0&p1=224&p2=24&p3=179&p4=136&p5=37&p6=33 It looks like the above should say "15:00 - 16:00 BST"? I'll send agenda items closer to the time of the meeting. -George
Re: [Xen-devel] [PATCH v8] new config option vtsc_tolerance_khz to avoid TSC emulation
On Sun, Apr 01, 2018 at 10:29:58PM +0200, Olaf Hering wrote: > Add an option to control when vTSC emulation will be activated for a > domU with tsc_mode=default. Without such an option each TSC access from > domU will be emulated, which causes a significant performance drop for > workloads that make use of rdtsc. > > One option to avoid TSC emulation is to run domUs with tsc_mode=native. > This has the drawback that migrating a domU from a "2.3GHz" class host > to a "2.4GHz" class host may change the rate at which the TSC counter > increases; the domU may not be prepared for that. > > With the new option the host admin can decide how a domU should behave > when it is migrated across systems of the same class. Since there is > always some jitter when Xen calibrates the cpu_khz value, all hosts of > the same class will most likely have slightly different values. As a > result vTSC emulation is unavoidable. Data collected during the incident > which triggered this change showed a jitter of up to 200 KHz across > systems of the same class. > > Existing padding fields are reused to store vtsc_khz_tolerance as u16. > > Signed-off-by: Olaf Hering Reviewed-by: Wei Liu
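[Editorial note: the decision the new option enables can be sketched as a pure function — the name and signature below are hypothetical, not the actual Xen code. Emulate rdtsc only when the host's calibrated rate differs from the guest's recorded rate by more than the configured tolerance:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Emulate vTSC only if |host_khz - guest_khz| exceeds the tolerance.
 * With the observed jitter of up to 200 kHz between same-class hosts,
 * a tolerance of 200 would keep rdtsc native across such migrations. */
static bool need_vtsc_emulation(uint32_t host_khz, uint32_t guest_khz,
                                uint16_t tolerance_khz)
{
    uint32_t delta = host_khz > guest_khz ? host_khz - guest_khz
                                          : guest_khz - host_khz;

    return delta > tolerance_khz;
}
```

A tolerance of 0 (the default) preserves the old behaviour: any mismatch at all triggers emulation.]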