[PATCH v2] pci: fix handling of PCI bridges with subordinate bus number 0xff

2021-09-24 Thread Igor Druzhinin
ses(). Signed-off-by: Igor Druzhinin --- v2: - fix free_pdev() as well - style fixes --- xen/drivers/passthrough/pci.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c index fc4fa2e..d65cda8 100644 --- a/xen/driv

[PATCH] pci: fix handling of PCI bridges with subordinate bus number 0xff

2021-09-23 Thread Igor Druzhinin
ses(). Signed-off-by: Igor Druzhinin --- xen/drivers/passthrough/pci.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c index fc4fa2e..48b415c 100644 --- a/xen/drivers/passthrough/pci.c +++ b/xen/drivers/passthrough/pc

[PATCH v2] tools/libxc: use uint32_t for pirq in xc_domain_irq_permission

2021-07-12 Thread Igor Druzhinin
an Beulich Signed-off-by: Igor Druzhinin Acked-by: Christian Lindig --- Changes in v2: - extra wording for clarity in commit message (Julien) - change allow_access to bool (Andrew) - add padding (Jan) --- tools/include/xenctrl.h | 4 ++-- tools/libs/ctrl/xc_domain.c | 4 ++-- t
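
For readers skimming the thread, a minimal sketch of the interface change being discussed, reconstructed from the changelog above; the authoritative declaration lives in tools/include/xenctrl.h and may differ in detail:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct xc_interface_core xc_interface;   /* opaque handle, as in xenctrl.h */

    /*
     * before: int xc_domain_irq_permission(xc_interface *xch, uint32_t domid,
     *                                      uint8_t pirq, uint8_t allow_access);
     * i.e. pirq values above 255 could not be expressed by callers.
     */
    int xc_domain_irq_permission(xc_interface *xch, uint32_t domid,
                                 uint32_t pirq, bool allow_access);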

Re: [PATCH] tools/libxc: use uint32_t for pirq in xc_domain_irq_permission

2021-07-07 Thread Igor Druzhinin
On 07/07/2021 14:21, Julien Grall wrote: On 07/07/2021 14:14, Jan Beulich wrote: On 07.07.2021 14:59, Julien Grall wrote: On 07/07/2021 13:54, Jan Beulich wrote: On 07.07.2021 14:51, Julien Grall wrote: On 07/07/2021 02:02, Igor Druzhinin wrote: Current uint8_t for pirq argument

Re: [PATCH] tools/libxc: use uint32_t for pirq in xc_domain_irq_permission

2021-07-07 Thread Igor Druzhinin
On 08/07/2021 02:26, Andrew Cooper wrote: On 08/07/2021 02:14, Igor Druzhinin wrote: On 08/07/2021 02:11, Andrew Cooper wrote: On 08/07/2021 02:08, Igor Druzhinin wrote: On 07/07/2021 10:19, Andrew Cooper wrote: On 07/07/2021 08:46, Jan Beulich wrote: --- a/tools/include/xenctrl.h +++ b

Re: [PATCH] tools/libxc: use uint32_t for pirq in xc_domain_irq_permission

2021-07-07 Thread Igor Druzhinin
On 08/07/2021 02:11, Andrew Cooper wrote: On 08/07/2021 02:08, Igor Druzhinin wrote: On 07/07/2021 10:19, Andrew Cooper wrote: On 07/07/2021 08:46, Jan Beulich wrote: --- a/tools/include/xenctrl.h +++ b/tools/include/xenctrl.h @@ -1385,7 +1385,7 @@ int xc_domain_ioport_permission(xc_interface

Re: [PATCH] tools/libxc: use uint32_t for pirq in xc_domain_irq_permission

2021-07-07 Thread Igor Druzhinin
On 07/07/2021 10:19, Andrew Cooper wrote: On 07/07/2021 08:46, Jan Beulich wrote: --- a/tools/include/xenctrl.h +++ b/tools/include/xenctrl.h @@ -1385,7 +1385,7 @@ int xc_domain_ioport_permission(xc_interface *xch, int xc_domain_irq_permission(xc_interface *xch,

[PATCH] tools/libxc: use uint32_t for pirq in xc_domain_irq_permission

2021-07-06 Thread Igor Druzhinin
is release cycle. Signed-off-by: Igor Druzhinin --- tools/include/xenctrl.h | 2 +- tools/libs/ctrl/xc_domain.c | 2 +- tools/ocaml/libs/xc/xenctrl_stubs.c | 2 +- xen/include/public/domctl.h | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/too

Re: [PATCH] x86/AMD: make HT range dynamic for Fam17 and up

2021-06-18 Thread Igor Druzhinin
On 18/06/2021 18:15, Igor Druzhinin wrote: On 18/06/2021 17:00, Jan Beulich wrote: At the time of d838ac2539cf ("x86: don't allow Dom0 access to the HT address range") documentation correctly stated that the range was completely fixed. For Fam17 and newer, it lives at the top o

Re: [PATCH] x86/AMD: make HT range dynamic for Fam17 and up

2021-06-18 Thread Igor Druzhinin
On 18/06/2021 17:00, Jan Beulich wrote: At the time of d838ac2539cf ("x86: don't allow Dom0 access to the HT address range") documentation correctly stated that the range was completely fixed. For Fam17 and newer, it lives at the top of physical address space, though. From "Open-Source

BUG in 1f3d87c75129 ("x86/vpt: do not take pt_migrate rwlock in some cases")

2021-06-14 Thread Igor Druzhinin
Hi, Boris, Stephen, Roger, We have stress tested recent changes on staging-4.13, which include a backport of the subject patch. Since the backport is identical to the master branch and all of the pre-reqs are in place, we have no reason to believe the issue is any different on master. Here is what we

Re: [PATCH] xen-mapcache: avoid a race on memory map while using MAP_FIXED

2021-04-20 Thread Igor Druzhinin
889702-13104-1-git-send-email-igor.druzhi...@citrix.com Switched to a new branch 'test' 3102519 xen-mapcache: avoid a race on memory map while using MAP_FIXED === OUTPUT BEGIN === ERROR: Author email address is mangled by the mailing list #2: Author: Igor Druzhinin via total: 1 errors, 0 warnings, 21

Re: [PATCH] xen-mapcache: avoid a race on memory map while using MAP_FIXED

2021-04-20 Thread Igor Druzhinin
On 20/04/2021 09:53, Roger Pau Monné wrote: On Tue, Apr 20, 2021 at 04:35:02AM +0100, Igor Druzhinin wrote: When we're replacing the existing mapping there is possibility of a race on memory map with other threads doing mmap operations - the address being unmapped/re-mapped could be occupied

[PATCH] xen-mapcache: avoid a race on memory map while using MAP_FIXED

2021-04-19 Thread Igor Druzhinin
accesses to the replaced region - those might still fail with SIGBUS due to xenforeignmemory_map not being atomic. So we're still not expecting those. Tested-by: Anthony PERARD Signed-off-by: Igor Druzhinin --- hw/i386/xen/xen-mapcache.c | 15 ++- 1 file changed, 14 insertions(+), 1
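
As background for the race described above, a minimal illustration (not the QEMU xen-mapcache code itself) of why MAP_FIXED helps: a munmap()/mmap() pair leaves a window in which another thread's mmap() can claim the address, whereas a single mmap() with MAP_FIXED replaces the existing mapping atomically.

    #include <stddef.h>
    #include <sys/mman.h>

    /* Illustrative only; assumes 'addr' is page-aligned and already mapped. */
    static void *replace_mapping(void *addr, size_t size)
    {
        /*
         * Racy variant:
         *     munmap(addr, size);
         *     return mmap(addr, size, PROT_READ | PROT_WRITE,
         *                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
         * Between the two calls another thread may mmap() into [addr, addr+size).
         */

        /* Atomic variant: MAP_FIXED discards whatever is mapped at 'addr'. */
        return mmap(addr, size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    }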

[PATCH v5 1/2] x86/vtx: add LBR_SELECT to the list of LBR MSRs

2021-04-15 Thread Igor Druzhinin
This MSR exists since Nehalem / Silvermont and is actively used by Linux, for instance, to improve sampling efficiency. Signed-off-by: Igor Druzhinin --- Changes in v5: - added Silvermont+ LBR_SELECT support New patch in v4 as suggested by Andrew. --- xen/arch/x86/hvm/vmx/vmx.c | 20

[PATCH v5 2/2] x86/intel: insert Ice Lake-SP and Ice Lake-D model numbers

2021-04-15 Thread Igor Druzhinin
it shouldn't be present in the has_if_pschange_mc list. Provisionally assume the same to be the case for Ice Lake-D. Reviewed-by: Jan Beulich Signed-off-by: Igor Druzhinin --- No changes in v5. Changes in v4: - now based on SDM update - new LBR (0x1e0) does not seem to be exposed in the docs Changes

Re: [PATCH v4 1/2] x86/vtx: add LBR_SELECT to the list of LBR MSRs

2021-04-14 Thread Igor Druzhinin
On 14/04/2021 12:41, Jan Beulich wrote: On 14.04.2021 06:40, Igor Druzhinin wrote: --- a/xen/arch/x86/hvm/vmx/vmx.c +++ b/xen/arch/x86/hvm/vmx/vmx.c @@ -2915,14 +2915,16 @@ static const struct lbr_info { }, nh_lbr[] = { { MSR_IA32_LASTINTFROMIP, 1 }, { MSR_IA32_LASTINTTOIP
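
For context on the fragment quoted above: the vmx.c tables are plain { MSR, count } lists, so adding LBR_SELECT amounts to one extra row. A hedged, self-contained sketch follows; the struct layout and the two LASTINT MSRs come from the quote, while the LBR_SELECT constant name and its architectural index (0x1c8 on Nehalem and later) are assumptions rather than the exact Xen hunk.

    #include <stdint.h>

    #define MSR_IA32_LASTINTFROMIP  0x000001dd
    #define MSR_IA32_LASTINTTOIP    0x000001de
    #define MSR_LBR_SELECT          0x000001c8   /* assumed name for the new entry */

    static const struct lbr_info {
        uint32_t base, count;
    } nh_lbr[] = {
        { MSR_IA32_LASTINTFROMIP, 1 },
        { MSR_IA32_LASTINTTOIP,   1 },
        { MSR_LBR_SELECT,         1 },   /* the addition discussed in this series */
        { 0, 0 }
    };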

[PATCH v4 2/2] x86/intel: insert Ice Lake-SP and Ice Lake-D model numbers

2021-04-13 Thread Igor Druzhinin
it shouldn't be present in the has_if_pschange_mc list. Provisionally assume the same to be the case for Ice Lake-D while the advisory is not yet updated. Signed-off-by: Igor Druzhinin --- Changes in v4: - now based on SDM update - new LBR (0x1e0) does not seem to be exposed in the docs Changes in v3

[PATCH v4 1/2] x86/vtx: add LBR_SELECT to the list of LBR MSRs

2021-04-13 Thread Igor Druzhinin
This MSR exists since Nehalem and is actively used by Linux, for instance, to improve sampling efficiency. Signed-off-by: Igor Druzhinin --- New patch in v4 as suggested by Andrew. --- xen/arch/x86/hvm/vmx/vmx.c | 7 +-- xen/include/asm-x86/msr-index.h | 6 +- 2 files changed, 10

[PATCH] x86/vPMU: Extend vPMU support to version 5

2021-04-13 Thread Igor Druzhinin
Version 5 is backwards compatible with version 3. This allows enabling vPMU on Ice Lake CPUs. Signed-off-by: Igor Druzhinin --- xen/arch/x86/cpu/vpmu_intel.c | 7 --- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c

Re: [PATCH v3] x86/intel: insert Ice Lake-X (server) and Ice Lake-D model numbers

2021-04-08 Thread Igor Druzhinin
On 27/01/2021 09:52, Andrew Cooper wrote: On 23/12/2020 20:32, Igor Druzhinin wrote: LBR, C-state MSRs should correspond to Ice Lake desktop according to External Design Specification vol.2 for both models. Ice Lake-X is known to expose IF_PSCHANGE_MC_NO in IA32_ARCH_CAPABILITIES MSR

RE: Troubles analyzing crash dumps from xl dump-core

2021-03-10 Thread Igor Druzhinin
> On 30.01.21 19:53, Roman Shaposhnik wrote: > > On Fri, Jan 29, 2021 at 11:28 PM Jürgen Groß wrote: > >> > >> On 29.01.21 21:12, Roman Shaposhnik wrote: > >>> Hi! > >>> > >>> I'm trying to see how much mileage I can get out of > >>> crash(1) 7.2.8 (based on gdb 7.6) when it comes to analyzing

Re: [PATCH for-4.15] vtd: make sure QI/IR are disabled before initialisation

2021-03-08 Thread Igor Druzhinin
On 08/03/2021 08:18, Jan Beulich wrote: On 08.03.2021 08:00, Igor Druzhinin wrote: BIOS might pass control to Xen leaving QI and/or IR in enabled and/or partially configured state. In case of x2APIC code path where EIM is enabled early in boot - those are correctly disabled by Xen before any

[PATCH for-4.15] vtd: make sure QI/IR are disabled before initialisation

2021-03-07 Thread Igor Druzhinin
initialization failures on some ICX based platforms where QI is left pre-enabled and partially configured by BIOS. Unify the behaviour between x2APIC and xAPIC code paths keeping that in line with what Linux does. Signed-off-by: Igor Druzhinin --- xen/arch/x86/apic.c | 2 +- xen

[PATCH v2 2/2] tools/libxl: only set viridian flags on new domains

2021-02-03 Thread Igor Druzhinin
at destination side. That issue is now resurfaced by the latest commits (983524671 and 7e5cffcd1e) extending default viridian feature set making the values from the previous migration streams and those set at domain construction different. Suggested-by: Andrew Cooper Signed-off-by: Igor Druzhinin

[PATCH v2 1/2] tools/libxl: pass libxl__domain_build_state to libxl__arch_domain_create

2021-02-03 Thread Igor Druzhinin
No functional change. Signed-off-by: Igor Druzhinin --- New patch in v2 as requested. --- tools/libs/light/libxl_arch.h | 6 -- tools/libs/light/libxl_arm.c | 4 +++- tools/libs/light/libxl_dom.c | 2 +- tools/libs/light/libxl_x86.c | 6 -- 4 files changed, 12 insertions(+), 6

Re: [PATCH] tools/libxl: only set viridian flags on new domains

2021-02-02 Thread Igor Druzhinin
On 03/02/2021 04:01, Igor Druzhinin wrote: > Domains migrating or restoring should have the viridian HVM param key in > the migration stream already, and setting that twice results in Xen > returning -EEXIST on the second attempt later (during migration stream parsing) > in case the values

[PATCH] tools/libxl: only set viridian flags on new domains

2021-02-02 Thread Igor Druzhinin
at destination side. That issue is now resurfaced by the latest commits (983524671 and 7e5cffcd1e) extending default viridian feature set making the values from the previous migration streams and those set at domain construction different. Signed-off-by: Igor Druzhinin --- tools/libs/light

Re: [PATCH] xen/netback: avoid race in xenvif_rx_ring_slots_available()

2021-02-02 Thread Igor Druzhinin
ue held. > > Reported-by: Igor Druzhinin > Fixes: 23025393dbeb3b8b3 ("xen/netback: use lateeoi irq binding") > Cc: sta...@vger.kernel.org > Signed-off-by: Juergen Gross Appreciate a quick fix! Is this the only place that sort of race could happen now? Igor

Re: staging: unable to restore HVM with Viridian param set

2021-02-02 Thread Igor Druzhinin
On 02/02/2021 08:35, Paul Durrant wrote: >> -Original Message- >> From: Igor Druzhinin >> Sent: 02 February 2021 00:20 >> To: Andrew Cooper ; Tamas K Lengyel >> ; Xen-devel >> ; Wei Liu ; Ian Jackson >> ; Anthony >> PERARD ; Paul Durrant

dom0 crash in xenvif_rx_ring_slots_available

2021-02-01 Thread Igor Druzhinin
Juergen, We've got a crash report from one of our customers (see below) running a 4.4 kernel. The functions involved seem to be the new ones that came with XSA-332, and nothing like that has been reported before in their cloud. It appears there is some use-after-free happening on the skb in the following code

Re: staging: unable to restore HVM with Viridian param set

2021-02-01 Thread Igor Druzhinin
n 01/02/2021 22:57, Andrew Cooper wrote: > On 01/02/2021 22:51, Tamas K Lengyel wrote: >> Hi all, >> trying to restore a Windows VM saved on Xen 4.14 with Xen staging results in: >> >> # xl restore -p /shared/cfg/windows10.save >> Loading new save file /shared/cfg/windows10.save (new xl fmt info

Re: [PATCH v2 1/2] viridian: remove implicit limit of 64 VPs per partition

2021-01-25 Thread Igor Druzhinin
On 12/01/2021 04:17, Igor Druzhinin wrote: > TLFS 7.8.1 stipulates that "a virtual processor index must be less than > the maximum number of virtual processors per partition" that "can be obtained > through CPUID leaf 0x40000005". Furthermore, "Requirement
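
For reference, the leaf quoted above is the Viridian "Implementation Limits" leaf; a small guest-side probe of it might look like the following (illustrative only, and only meaningful inside a guest with the Viridian CPUID leaves exposed):

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* 0x40000005: EAX reports the maximum number of virtual processors per partition */
        __cpuid(0x40000005, eax, ebx, ecx, edx);
        printf("max VPs per partition: %u\n", eax);
        return 0;
    }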

Re: [PATCH] OvmfPkg/XenPlatformPei: Grab 64-bit PCI MMIO hole size from OVMF info table

2021-01-19 Thread Igor Druzhinin
On 19/01/2021 13:20, Anthony PERARD wrote: > On Mon, Jan 11, 2021 at 03:45:18AM +0000, Igor Druzhinin wrote: >> diff --git a/OvmfPkg/XenPlatformPei/MemDetect.c >> b/OvmfPkg/XenPlatformPei/MemDetect.c >> index 1f81eee..4175a2f 100644 >> --- a/OvmfPkg/XenPlatformPei/M

[PATCH] OvmfPkg/XenPlatformPei: Use CPUID to get physical address width on Xen

2021-01-12 Thread Igor Druzhinin
its directly from CPUID that should be what baremetal UEFI systems do. Signed-off-by: Igor Druzhinin --- OvmfPkg/OvmfXen.dsc| 3 + OvmfPkg/XenPlatformPei/MemDetect.c | 166 +++-- 2 files changed, 15 insertions(+), 154 deletions(-) diff --git a/Ovmf
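
The CPUID mechanism referred to in this summary is the standard one; a hedged sketch (not the OVMF code) of reading the physical address width, with the conventional fallback when the extended leaf is absent:

    #include <stdint.h>
    #include <cpuid.h>

    static uint8_t phys_addr_bits(void)
    {
        unsigned int eax, ebx, ecx, edx;

        __cpuid(0x80000000, eax, ebx, ecx, edx);
        if (eax < 0x80000008)
            return 36;                 /* conservative default without the leaf */

        __cpuid(0x80000008, eax, ebx, ecx, edx);
        return eax & 0xff;             /* CPUID.80000008H:EAX[7:0] = phys bits */
    }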

[PATCH v2 1/2] viridian: remove implicit limit of 64 VPs per partition

2021-01-11 Thread Igor Druzhinin
nges exposing ExProcessorMasks this allows a recent Windows VM with Viridian extension enabled to have more than 64 vCPUs without going into BSOD in some cases. Since we didn't expose the leaf before and to keep CPUID data consistent for incoming streams from previous Xen versions - let's keep it behind a

[PATCH v2 2/2] viridian: allow vCPU hotplug for Windows VMs

2021-01-11 Thread Igor Druzhinin
set the option on by default. Signed-off-by: Igor Druzhinin --- Changes on v2: - hide the bit under an option and expose it in libxl --- docs/man/xl.cfg.5.pod.in | 7 ++- tools/include/libxl.h| 6 ++ tools/libs/light/libxl_types.idl | 1 + tools/libs/light

Re: [PATCH] hvmloader: pass PCI MMIO layout to OVMF as an info table

2021-01-11 Thread Igor Druzhinin
On 11/01/2021 15:35, Laszlo Ersek wrote: > [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments > unless you have verified the sender and know the content is safe. > > On 01/11/21 16:26, Igor Druzhinin wrote: >> On 11/01/2021 15:21, Jan Beulich wrote: >&

Re: [PATCH] hvmloader: pass PCI MMIO layout to OVMF as an info table

2021-01-11 Thread Igor Druzhinin
On 11/01/2021 15:21, Jan Beulich wrote: > On 11.01.2021 15:49, Laszlo Ersek wrote: >> On 01/11/21 15:00, Igor Druzhinin wrote: >>> On 11/01/2021 09:27, Jan Beulich wrote: >>>> On 11.01.2021 05:53, Igor Druzhinin wrote: >>>>> We faced a problem wi

Re: [PATCH] hvmloader: pass PCI MMIO layout to OVMF as an info table

2021-01-11 Thread Igor Druzhinin
On 11/01/2021 14:14, Jan Beulich wrote: > On 11.01.2021 15:00, Igor Druzhinin wrote: >> On 11/01/2021 09:27, Jan Beulich wrote: >>> On 11.01.2021 05:53, Igor Druzhinin wrote: >>>> --- a/tools/firmware/hvmloader/ovmf.c >>>> +++ b/tools/firmware/hvmlo

Re: [PATCH] hvmloader: pass PCI MMIO layout to OVMF as an info table

2021-01-11 Thread Igor Druzhinin
On 11/01/2021 09:27, Jan Beulich wrote: > On 11.01.2021 05:53, Igor Druzhinin wrote: >> We faced a problem with passing through a PCI device with 64GB BAR to >> UEFI guest. The BAR is expectedly programmed into 64-bit PCI aperture at >> 64G address which pushes physical add

Re: [PATCH 1/2] viridian: remove implicit limit of 64 VPs per partition

2021-01-11 Thread Igor Druzhinin
On 11/01/2021 13:47, Paul Durrant wrote: >> -Original Message- >> From: Jan Beulich >> Sent: 11 January 2021 13:38 >> To: Igor Druzhinin ; p...@xen.org >> Cc: w...@xen.org; i...@xenproject.org; anthony.per...@citrix.com; >> andrew.coop...@citrix.c

Re: [PATCH 1/2] viridian: remove implicit limit of 64 VPs per partition

2021-01-11 Thread Igor Druzhinin
On 11/01/2021 09:16, Jan Beulich wrote: > On 11.01.2021 10:12, Paul Durrant wrote: >>> From: Paul Durrant >>> Sent: 11 January 2021 09:10 >>> From: Jan Beulich Sent: 11 January 2021 09:00 On 11.01.2021 09:45, Paul Durrant wrote: > You can add my R-b to the patch.

[PATCH] hvmloader: pass PCI MMIO layout to OVMF as an info table

2021-01-10 Thread Igor Druzhinin
) - extend the info structure with a new table. Since the structure was initially created to be extendable - the change is backward compatible. Signed-off-by: Igor Druzhinin --- Companion change in OVMF: https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg00516.html --- tools/firmware

[PATCH] OvmfPkg/XenPlatformPei: Grab 64-bit PCI MMIO hole size from OVMF info table

2021-01-10 Thread Igor Druzhinin
between OVMF and hvmloader preserving compatibility. Signed-off-by: Igor Druzhinin --- The change is backward compatible and has a companion change for hvmloader. Are we still waiting to clean up Xen stuff from the QEMU platform? Or do I need to duplicate my changes there (I hope not)? --- OvmfPkg

Re: [PATCH 1/2] viridian: remove implicit limit of 64 VPs per partition

2021-01-08 Thread Igor Druzhinin
On 08/01/2021 08:32, Paul Durrant wrote: >> -Original Message- >> From: Igor Druzhinin >> Sent: 08 January 2021 00:47 >> To: xen-devel@lists.xenproject.org >> Cc: p...@xen.org; w...@xen.org; i...@xenproject.org; >> anthony.per...@citrix.com; >

Re: [PATCH 1/2] viridian: remove implicit limit of 64 VPs per partition

2021-01-08 Thread Igor Druzhinin
On 08/01/2021 13:17, Jan Beulich wrote: > On 08.01.2021 12:27, Igor Druzhinin wrote: >> On 08/01/2021 09:19, Jan Beulich wrote: >>> On 08.01.2021 01:46, Igor Druzhinin wrote: >>>> --- a/tools/libs/light/libxl_x86.c >>>> +++ b/tools/libs/light/libxl

Re: [PATCH 1/2] viridian: remove implicit limit of 64 VPs per partition

2021-01-08 Thread Igor Druzhinin
On 08/01/2021 08:32, Paul Durrant wrote: >> -Original Message- >> From: Igor Druzhinin >> Sent: 08 January 2021 00:47 >> To: xen-devel@lists.xenproject.org >> Cc: p...@xen.org; w...@xen.org; i...@xenproject.org; >> anthony.per...@citrix.com; >

Re: [PATCH 2/2] viridian: allow vCPU hotplug for Windows VMs

2021-01-08 Thread Igor Druzhinin
On 08/01/2021 11:40, Paul Durrant wrote: >> -Original Message- >> From: Igor Druzhinin >> Sent: 08 January 2021 11:36 >> To: p...@xen.org; xen-devel@lists.xenproject.org >> Cc: w...@xen.org; i...@xenproject.org; anthony.per...@citrix.com; >> andr

Re: [PATCH 2/2] viridian: allow vCPU hotplug for Windows VMs

2021-01-08 Thread Igor Druzhinin
On 08/01/2021 11:33, Paul Durrant wrote: >> -Original Message- >> From: Igor Druzhinin >> Sent: 08 January 2021 11:30 >> To: p...@xen.org; xen-devel@lists.xenproject.org >> Cc: w...@xen.org; i...@xenproject.org; anthony.per...@citrix.com; >> andr

Re: [PATCH 2/2] viridian: allow vCPU hotplug for Windows VMs

2021-01-08 Thread Igor Druzhinin
On 08/01/2021 08:38, Paul Durrant wrote: >> -Original Message- >> From: Igor Druzhinin >> Sent: 08 January 2021 00:47 >> To: xen-devel@lists.xenproject.org >> Cc: p...@xen.org; w...@xen.org; i...@xenproject.org; >> anthony.per...@citrix.com; >

Re: [PATCH 1/2] viridian: remove implicit limit of 64 VPs per partition

2021-01-08 Thread Igor Druzhinin
On 08/01/2021 09:19, Jan Beulich wrote: > On 08.01.2021 01:46, Igor Druzhinin wrote: >> --- a/tools/libs/light/libxl_x86.c >> +++ b/tools/libs/light/libxl_x86.c >> @@ -336,7 +336,7 @@ static int hvm_set_viridian_features(libxl__gc *gc, >> uint32_t domid, >>

[PATCH 1/2] viridian: remove implicit limit of 64 VPs per partition

2021-01-07 Thread Igor Druzhinin
nges exposing ExProcessorMasks this allows a recent Windows VM with Viridian extension enabled to have more than 64 vCPUs without going into immediate BSOD. Since we didn't expose the leaf before and to keep CPUID data consistent for incoming streams from previous Xen versions - let's keep it behind a

[PATCH 2/2] viridian: allow vCPU hotplug for Windows VMs

2021-01-07 Thread Igor Druzhinin
discussion here: https://patchwork.kernel.org/project/qemu-devel/patch/1455364815-19586-1-git-send-email-...@openvz.org/ Signed-off-by: Igor Druzhinin --- xen/arch/x86/hvm/viridian/viridian.c | 6 +- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/xen/arch/x86/hvm/viridian/viridian.c

Re: [PATCH v3] x86/intel: insert Ice Lake-X (server) and Ice Lake-D model numbers

2021-01-06 Thread Igor Druzhinin
On 06/01/2021 11:04, Jan Beulich wrote: > On 23.12.2020 21:32, Igor Druzhinin wrote: >> LBR, C-state MSRs should correspond to Ice Lake desktop according to >> External Design Specification vol.2 for both models. >> >> Ice Lake-X is known to expose IF_PSCHANGE_MC_NO in

[PATCH v3] x86/intel: insert Ice Lake-X (server) and Ice Lake-D model numbers

2020-12-23 Thread Igor Druzhinin
and therefore it shouldn't be present in has_if_pschange_mc list. Provisionally assume the same to be the case for Ice Lake-D. Signed-off-by: Igor Druzhinin --- Changes in v3: - Add Ice Lake-D model numbers - Drop has_if_pschange_mc hunk following Tiger Lake related discussion - IF_PSCHANGE_MC_NO

Re: [PATCH v2] x86/intel: insert Ice Lake X (server) model numbers

2020-12-21 Thread Igor Druzhinin
On 21/12/2020 16:36, Jan Beulich wrote: > On 19.10.2020 04:47, Igor Druzhinin wrote: >> LBR, C-state MSRs and if_pschange_mc erratum applicability should correspond >> to Ice Lake desktop according to External Design Specification vol.2. >> >> Signed-off-by: Igor Druzhin

Re: [PATCH v3 1/2] x86/IRQ: make max number of guests for a shared IRQ configurable

2020-12-07 Thread Igor Druzhinin
On 07/12/2020 09:43, Jan Beulich wrote: > On 06.12.2020 18:43, Igor Druzhinin wrote: >> @@ -1633,11 +1640,12 @@ int pirq_guest_bind(struct vcpu *v, struct pirq >> *pirq, int will_share) >> goto retry; >> } >> >> -if ( action->nr_gues

[PATCH v3 1/2] x86/IRQ: make max number of guests for a shared IRQ configurable

2020-12-06 Thread Igor Druzhinin
is higher than 7 but could later be increased even more if necessary. Signed-off-by: Igor Druzhinin --- Changes in v2: - introduced a command line option as suggested - set initial default limit to 16 Changes in v3: - changed option name to use - instead of _ - used uchar instead of uint to utilize

[PATCH v3 2/2] x86/IRQ: allocate guest array of max size only for shareable IRQs

2020-12-06 Thread Igor Druzhinin
pplied with an array of that size. Since it's now less impactful to use a higher "irq-max-guests" value - bump the default to 32. That should give more headroom for future systems. Signed-off-by: Igor Druzhinin --- New in v2. Based on Jan's suggestion. Changes in v3: - almost none since I
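
As a usage note, the knob introduced by this series is a Xen boot-time parameter; a hypothetical way to raise it on a grub-based install (the value 64 is an arbitrary example, and GRUB_CMDLINE_XEN_DEFAULT is a distro convention rather than part of the patch):

    # /etc/default/grub
    GRUB_CMDLINE_XEN_DEFAULT="$GRUB_CMDLINE_XEN_DEFAULT irq-max-guests=64"
    # then regenerate the grub config and reboot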

[PATCH v2 1/2] x86/IRQ: make max number of guests for a shared IRQ configurable

2020-12-02 Thread Igor Druzhinin
is higher than 7 but could later be increased even more if necessary. Signed-off-by: Igor Druzhinin --- Changes in v2: - introduced a command line option as suggested - set the default limit to 16 for now --- docs/misc/xen-command-line.pandoc | 9 + xen/arch/x86/irq.c| 19

[PATCH v2 2/2] x86/IRQ: allocate guest array of max size only for shareable IRQs

2020-12-02 Thread Igor Druzhinin
pplied with an array of that size. Since it's now less impactful to use a higher "irq_max_guests" value - bump the default to 32. That should give more headroom for future systems. Signed-off-by: Igor Druzhinin --- New in v2. This is suggested by Jan and is optional for me. --- docs/misc/xen-c

Re: [PATCH] x86/IRQ: bump max number of guests for a shared IRQ to 31

2020-12-02 Thread Igor Druzhinin
On 02/12/2020 15:21, Jan Beulich wrote: > On 02.12.2020 15:53, Igor Druzhinin wrote: >> On 02/12/2020 09:25, Jan Beulich wrote: >>> Instead I'm wondering whether this wouldn't better be a Kconfig >>> setting (or even command line controllable). There don't look

Re: [PATCH] x86/IRQ: bump max number of guests for a shared IRQ to 31

2020-12-02 Thread Igor Druzhinin
On 02/12/2020 09:25, Jan Beulich wrote: > On 01.12.2020 00:59, Igor Druzhinin wrote: >> Current limit of 7 is too restrictive for modern systems where one GSI >> could be shared by potentially many PCI INTx sources where each of them >> corresponds to a device passed through t

[PATCH] x86/IRQ: bump max number of guests for a shared IRQ to 31

2020-11-30 Thread Igor Druzhinin
as interrupt pin for the majority of PCI devices behind a single router, resulting in overuse of a GSI. Signed-off-by: Igor Druzhinin --- If people think that would make sense - I can rework the array to a list of domain pointers to avoid the limit. --- xen/arch/x86/irq.c | 2 +- 1 file changed, 1

[PATCH v2] x86/intel: insert Ice Lake X (server) model numbers

2020-10-18 Thread Igor Druzhinin
LBR, C-state MSRs and if_pschange_mc erratum applicability should correspond to Ice Lake desktop according to External Design Specification vol.2. Signed-off-by: Igor Druzhinin --- Changes in v2: - keep partial sorting Andrew, since you have access to these documents, please review as you have

Re: [PATCH v2] hvmloader: flip "ACPI data" to "ACPI NVS" type for ACPI table region

2020-10-16 Thread Igor Druzhinin
On 16/10/2020 14:34, Sander Eikelenboom wrote: > On 16/10/2020 08:34, Jan Beulich wrote: >> On 16.10.2020 02:39, Igor Druzhinin wrote: >>> ACPI specification contains statements describing memory marked with regular >>> "ACPI data" type as reclaimable by th

[PATCH v2] hvmloader: flip "ACPI data" to "ACPI NVS" type for ACPI table region

2020-10-15 Thread Igor Druzhinin
biguity in it and is described by the spec as non-reclaimable (so cannot ever be treated like RAM). Signed-off-by: Igor Druzhinin --- Changes in v2: - Put the exact reasoning into a comment - Improved commit message --- tools/firmware/hvmloader/e820.c | 11 --- 1 file changed, 8 inser

Re: [PATCH 1/2] x86/intel: insert Ice Lake X (server) model numbers

2020-10-14 Thread Igor Druzhinin
On 14/10/2020 16:47, Jan Beulich wrote: > On 13.10.2020 05:02, Igor Druzhinin wrote: >> LBR, C-state MSRs and if_pschange_mc erratum applicability should correspond >> to Ice Lake desktop according to External Design Specification vol.2. > > Could you tell me where this

Re: [PATCH] hvmloader: flip "ACPI data" to ACPI NVS type for ACPI table region

2020-10-13 Thread Igor Druzhinin
On 13/10/2020 16:54, Jan Beulich wrote: > On 13.10.2020 17:47, Igor Druzhinin wrote: >> On 13/10/2020 16:35, Jan Beulich wrote: >>> On 13.10.2020 14:59, Igor Druzhinin wrote: >>>> On 13/10/2020 13:51, Jan Beulich wrote: >>>>> As a consequence I

Re: [PATCH] hvmloader: flip "ACPI data" to ACPI NVS type for ACPI table region

2020-10-13 Thread Igor Druzhinin
On 13/10/2020 16:35, Jan Beulich wrote: > On 13.10.2020 14:59, Igor Druzhinin wrote: >> On 13/10/2020 13:51, Jan Beulich wrote: >>> As a consequence I think we will also want to adjust Xen itself to >>> automatically disable ACPI when it ends up consuming E801 data. Or

Re: [PATCH] hvmloader: flip "ACPI data" to ACPI NVS type for ACPI table region

2020-10-13 Thread Igor Druzhinin
On 13/10/2020 13:51, Jan Beulich wrote: > On 13.10.2020 12:50, Igor Druzhinin wrote: >> ACPI specification contains statements describing memory marked with regular >> "ACPI data" type as reclaimable by the guest. Although the guest shouldn't >> really do

[PATCH] hvmloader: flip "ACPI data" to ACPI NVS type for ACPI table region

2020-10-13 Thread Igor Druzhinin
ential problems from using reclaimable memory type. Flip the type to "ACPI NVS" which doesn't have this ambiguity in it and is described by the spec as non-reclaimable (so cannot ever be treated like RAM). Signed-off-by: Igor Druzhinin --- tools/firmware/hvmloader/e820.c | 7 ---

[PATCH 1/2] x86/intel: insert Ice Lake X (server) model numbers

2020-10-12 Thread Igor Druzhinin
LBR, C-state MSRs and if_pschange_mc erratum applicability should correspond to Ice Lake desktop according to External Design Specification vol.2. Signed-off-by: Igor Druzhinin --- xen/arch/x86/acpi/cpu_idle.c | 1 + xen/arch/x86/hvm/vmx/vmx.c | 3 ++- 2 files changed, 3 insertions(+), 1

[PATCH 2/2] x86/mwait-idle: Customize IceLake server support

2020-10-12 Thread Igor Druzhinin
-by: Chen Yu Signed-off-by: Rafael J. Wysocki [Linux commit a472ad2bcea479ba068880125d7273fc95c14b70] Signed-off-by: Igor Druzhinin --- Applying this gives almost 100% boost in sysbench cpu test on Whitley SDP --- xen/arch/x86/cpu/mwait-idle.c | 28 1 file changed, 28

Re: [SUSPECTED SPAM]Xen-unstable :can't boot HVM guests, bisected to commit: "hvmloader: indicate ACPI tables with "ACPI data" type in e820"

2020-10-11 Thread Igor Druzhinin
On 11/10/2020 11:40, Igor Druzhinin wrote: > On 11/10/2020 10:43, Sander Eikelenboom wrote: >> On 11/10/2020 02:06, Igor Druzhinin wrote: >>> On 10/10/2020 18:51, Sander Eikelenboom wrote: >>>> Hi Igor/Jan, >>>> >>>> I tried to update my AMD m

Re: [SUSPECTED SPAM]Xen-unstable :can't boot HVM guests, bisected to commit: "hvmloader: indicate ACPI tables with "ACPI data" type in e820"

2020-10-11 Thread Igor Druzhinin
On 11/10/2020 10:43, Sander Eikelenboom wrote: > On 11/10/2020 02:06, Igor Druzhinin wrote: >> On 10/10/2020 18:51, Sander Eikelenboom wrote: >>> Hi Igor/Jan, >>> >>> I tried to update my AMD machine to current xen-unstable, but >>> unfortunately th

Re: [SUSPECTED SPAM]Xen-unstable :can't boot HVM guests, bisected to commit: "hvmloader: indicate ACPI tables with "ACPI data" type in e820"

2020-10-10 Thread Igor Druzhinin
On 10/10/2020 18:51, Sander Eikelenboom wrote: > Hi Igor/Jan, > > I tried to update my AMD machine to current xen-unstable, but > unfortunately the HVM guests don't boot after that. The guest keeps > using CPU-cycles but I don't get to a command prompt (or any output at > all). PVH guests run

[PATCH v4] hvmloader: indicate ACPI tables with "ACPI data" type in e820

2020-09-08 Thread Igor Druzhinin
PVH guests. 1MB should be enough for now but could be later extended if required. Signed-off-by: Igor Druzhinin --- Changes in v4: - gated new region creation on acpi_enabled - added a comment to explain reserved region start point Changes in v3: - switched from NVS to regular "ACPI data"
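
For readers unfamiliar with the e820 type values involved, a short background sketch (standard type numbers and a made-up address range, not the hvmloader diff): the point of the series is that the hvmloader-built ACPI tables get their own region reported as type 3 ("ACPI data") instead of plain Reserved, so kexec and similar consumers can locate and preserve them.

    #include <stdint.h>

    #define E820_RAM       1
    #define E820_RESERVED  2
    #define E820_ACPI      3   /* "ACPI data": the reclaimable ACPI tables */
    #define E820_NVS       4   /* "ACPI NVS": never to be treated as RAM */

    struct e820entry {
        uint64_t addr;
        uint64_t size;
        uint32_t type;
    } __attribute__((packed));

    /* e.g. a dedicated 1MB window for the ACPI tables, per the cover text */
    static const struct e820entry acpi_tables = {
        .addr = 0xfc000000ULL,   /* illustrative address only */
        .size = 0x00100000ULL,   /* 1MB */
        .type = E820_ACPI,
    };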

Re: [PATCH v3] hvmloader: indicate ACPI tables with "ACPI data" type in e820

2020-09-08 Thread Igor Druzhinin
On 08/09/2020 10:15, Jan Beulich wrote: > On 08.09.2020 01:42, Igor Druzhinin wrote: >> Changes in v3: >> - switched from NVS to regular "ACPI data" type by separating ACPI >> allocations >> into their own region >> - gave more information on particu

[PATCH v3] hvmloader: indicate ACPI tables with "ACPI data" type in e820

2020-09-07 Thread Igor Druzhinin
PVH guests. 1MB should be enough for now but could be later extended if required. Signed-off-by: Igor Druzhinin --- Changes in v3: - switched from NVS to regular "ACPI data" type by separating ACPI allocations into their own region - gave more information on particular kexec usecase that

Re: [PATCH v2.1] hvmloader: indicate dynamically allocated memory as ACPI NVS in e820

2020-09-04 Thread Igor Druzhinin
On 04/09/2020 15:40, Jan Beulich wrote: > On 04.09.2020 13:49, Igor Druzhinin wrote: >> On 04/09/2020 09:33, Jan Beulich wrote: >>> On 01.09.2020 04:50, Igor Druzhinin wrote: >>>> Guest kernel does need to know in some cases where the tables are located >>>

Re: [PATCH v2.1] hvmloader: indicate dynamically allocated memory as ACPI NVS in e820

2020-09-04 Thread Igor Druzhinin
On 04/09/2020 09:33, Jan Beulich wrote: > On 01.09.2020 04:50, Igor Druzhinin wrote: >> Guest kernel does need to know in some cases where the tables are located >> to treat these regions properly. One example is kexec process where >> the first kernel needs to pass firm

Re: [PATCH v2.1] hvmloader: indicate dynamically allocated memory as ACPI NVS in e820

2020-09-01 Thread Igor Druzhinin
On 01/09/2020 10:28, Roger Pau Monné wrote: > On Tue, Sep 01, 2020 at 03:50:34AM +0100, Igor Druzhinin wrote: >> Guest kernel does need to know in some cases where the tables are located >> to treat these regions properly. One example is kexec process where >> the first

[PATCH v2.1] hvmloader: indicate dynamically allocated memory as ACPI NVS in e820

2020-09-01 Thread Igor Druzhinin
PI reclaim (ACPI table) memory would avoid potential reuse of this memory by the guest taking into account this region may contain runtime structures like VM86 TSS, etc. If necessary, those can be moved away later and the region marked as reclaimable. Signed-off-by: Igor Druzhinin --- Chang

[PATCH v2] hvmloader: indicate dynamically allocated memory as ACPI NVS in e820

2020-09-01 Thread Igor Druzhinin
gular ACPI (ACPI table) memory would avoid potential reuse of this memory by the guest taking into account this region may contain runtime structures like VM86 TSS, etc. If necessary, those can be moved away later and the region marked as reclaimable. Signed-off-by: Igor Druzhinin --- tool

Re: [PATCH] hvmloader: indicate firmware tables as ACPI NVS in e820

2020-08-28 Thread Igor Druzhinin
On 28/08/2020 08:51, Jan Beulich wrote: > On 28.08.2020 02:13, Igor Druzhinin wrote: >> Guest kernel does need to know in some cases where the tables are located >> to treat these regions properly. One example is kexec process where >> the first kernel needs to pass firm

[PATCH] hvmloader: indicate firmware tables as ACPI NVS in e820

2020-08-27 Thread Igor Druzhinin
tial reuse of this memory by the guest. Switching from Reserved to ACPI NVS type for this memory would also mean its content is preserved across S4 (which is true for any ACPI type according to the spec). Signed-off-by: Igor Druzhinin --- tools/firmware/hvmloader/e820.c | 21 +-

Re: [PATCH] OvmfPkg: End timer interrupt later to avoid stack overflow under load

2020-06-16 Thread Igor Druzhinin
On 16/06/2020 19:42, Laszlo Ersek wrote: > If I understand correctly, TimerInterruptHandler() > [OvmfPkg/8254TimerDxe/Timer.c] currently does the following: > > - RaiseTPL (TPL_HIGH_LEVEL) --> mask interrupts from being delivered > > - mLegacy8259->EndOfInterrupt() --> permit the PIC to generate

[PATCH for-4.14 v3] tools/xen-ucode: return correct exit code on failed microcode update

2020-06-16 Thread Igor Druzhinin
Otherwise it's difficult to know if the operation failed inside the automation. While at it, also switch to returning 1 and 2 instead of errno to avoid incompatibilities between errno and special exit code numbers. Signed-off-by: Igor Druzhinin --- Changes in v3: - conventionally return 1 and 2
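
A quick usage illustration of the exit-code convention described above (the microcode blob path is a placeholder, not taken from the patch):

    xen-ucode /lib/firmware/intel-ucode/06-55-04
    echo $?   # with this patch: 0 on success, small fixed codes (1/2) otherwise,
              # instead of a raw errno value leaking out as the exit status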

Re: [PATCH for-4.14 v2] tools/xen-ucode: fix error code propagation of microcode load operation

2020-06-16 Thread Igor Druzhinin
On 16/06/2020 13:25, Jan Beulich wrote: > [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments > unless you have verified the sender and know the content is safe. > > On 16.06.2020 13:42, Igor Druzhinin wrote: >> @@ -62,8 +62,11 @@ int main(in

[PATCH for-4.14 v2] tools/xen-ucode: fix error code propagation of microcode load operation

2020-06-16 Thread Igor Druzhinin
Otherwise it's impossible to know the reason for a fault or blob rejection inside the automation. While at it, also change the return code of an incorrect invocation to EINVAL. Signed-off-by: Igor Druzhinin --- Changes in v2: - simply call "return errno". On Linux that seems to be safe as va

Re: [XEN PATCH for-4.14] tools/xen-ucode: fix error code propagation of microcode load operation

2020-06-12 Thread Igor Druzhinin
On 12/06/2020 17:53, Ian Jackson wrote: > Igor Druzhinin writes ("[PATCH] tools/xen-ucode: fix error code propagation > of microcode load operation"): >> Otherwise it's impossible to know the reason for a fault or blob rejection >> inside the automation. > ... >

[PATCH] tools/xen-ucode: fix error code propagation of microcode load operation

2020-06-12 Thread Igor Druzhinin
Otherwise it's impossible to know the reason for a fault or blob rejection inside the automation. Signed-off-by: Igor Druzhinin --- tools/misc/xen-ucode.c | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/tools/misc/xen-ucode.c b/tools/misc/xen-ucode.c index 0c257f4

Re: [PATCH for-4.14 v3] x86/svm: do not try to handle recalc NPT faults immediately

2020-06-04 Thread Igor Druzhinin
On 04/06/2020 11:50, Paul Durrant wrote: >> -Original Message- >> From: Jan Beulich >> Sent: 04 June 2020 11:34 >> To: p...@xen.org >> Cc: 'Igor Druzhinin' ; >> xen-devel@lists.xenproject.org; >> andrew.coop...@citrix.com; w...@xen.org; roger...

[PATCH for-4.14 v3] x86/svm: do not try to handle recalc NPT faults immediately

2020-06-03 Thread Igor Druzhinin
EPT implementation. Reviewed-by: Jan Beulich Reviewed-by: Roger Pau Monné Signed-off-by: Igor Druzhinin --- Changes in v2: - replace rc with recalc_done bool - updated comment in finish_type_change() - significantly extended commit description Changes in v3: - convert bool to int implic

Re: [PATCH v2] x86/svm: do not try to handle recalc NPT faults immediately

2020-06-03 Thread Igor Druzhinin
On 03/06/2020 12:48, Paul Durrant wrote: >> -Original Message- >> From: Igor Druzhinin >> Sent: 03 June 2020 12:45 >> To: p...@xen.org; 'Jan Beulich' >> Cc: xen-devel@lists.xenproject.org; andrew.coop...@citrix.com; w...@xen.org; >> roger@cit

Re: [PATCH v2] x86/svm: do not try to handle recalc NPT faults immediately

2020-06-03 Thread Igor Druzhinin
On 03/06/2020 12:28, Paul Durrant wrote: >> -Original Message- >> From: Jan Beulich >> Sent: 03 June 2020 12:22 >> To: p...@xen.org >> Cc: 'Igor Druzhinin' ; >> xen-devel@lists.xenproject.org; >> andrew.coop...@citrix.com; w...@xen.org; roger...

[PATCH v2] x86/svm: do not try to handle recalc NPT faults immediately

2020-06-02 Thread Igor Druzhinin
return a positive value - it's safe to replace ">= 0" with just "== 0" in VMEXIT_NPF handler. finish_type_change() is also not affected by the change as being able to deal with >0 return value of p2m->recalc from EPT implementation. Reviewed-by: Roger Pau Monné

Re: [PATCH] x86/svm: do not try to handle recalc NPT faults immediately

2020-05-29 Thread Igor Druzhinin
On 29/05/2020 16:17, Igor Druzhinin wrote: > On 29/05/2020 15:34, Jan Beulich wrote: >> On 29.05.2020 02:35, Igor Druzhinin wrote: >>> A recalculation NPT fault doesn't always require additional handling >>> in hvm_hap_nested_page_fault(), moreover in general case
