On Wed, Aug 19, 2020 at 09:12:02AM +0200, Jan Beulich wrote:
> On 13.08.2020 11:45, Roger Pau Monné wrote:
> > On Thu, Aug 13, 2020 at 09:10:31AM +0100, Paul Durrant wrote:
> >>> -----Original Message-----
> >>> From: Xen-devel <[email protected]> On Behalf Of David Woodhouse
> >>> Sent: 11 August 2020 14:25
> >>> To: Paul Durrant <[email protected]>; [email protected]; Roger Pau Monne <[email protected]>
> >>> Cc: Eslam Elnikety <[email protected]>; Andrew Cooper <[email protected]>; Shan Haitao <[email protected]>; Jan Beulich <[email protected]>
> >>> Subject: Re: [Xen-devel] [PATCH v2] x86/hvm: re-work viridian APIC assist code
> >>>
> >>> Resending this straw man patch at Roger's request, to restart discussion.
> >>>
> >>> Redux: In order to cope with the relatively rare case of unmaskable
> >>> legacy MSIs, each vlapic EOI takes a domain-global spinlock just to
> >>> iterate over all IRQs and determine that there's actually nothing to
> >>> do.
> >>>
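
[For concreteness, a minimal self-contained sketch of the shape of the
path being described. All names and types here are illustrative, not
Xen's actual ones, and a userspace mutex stands in for the spinlock:]

    /* Toy model of the hot path: every vLAPIC EOI takes a
     * domain-global lock and walks every bound pirq, almost always
     * to find nothing to do. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <pthread.h>

    struct pirq {
        uint8_t gvec;          /* guest vector the pirq is bound to */
        bool needs_unmask;     /* legacy unmaskable MSI awaiting EOI */
        struct pirq *next;
    };

    struct domain {
        pthread_mutex_t event_lock;   /* the domain-global lock */
        struct pirq *pirq_list;
    };

    /* Runs on every vLAPIC EOI, on every vCPU, for every vector. */
    static void vlapic_handle_EOI_slow(struct domain *d, uint8_t vector)
    {
        pthread_mutex_lock(&d->event_lock);
        for ( struct pirq *p = d->pirq_list; p; p = p->next )
            if ( p->gvec == vector && p->needs_unmask )
                p->needs_unmask = false;   /* stand-in for the unmask */
        pthread_mutex_unlock(&d->event_lock);
    }
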
> >>> In my testing, I observe that this drops Windows performance on passed-
> >>> through NVMe from 2.2M IOPS down to about 1.0M IOPS.
> >>>
> >>> I have a variant of this patch which just has a single per-domain "I
> >>> attached legacy unmaskable MSIs" flag, which is never cleared. The
> >>> patch below is per-vector (but Roger points out it should be per-vCPU
> >>> per-vector). I don't know that we really care enough to do more than
> >>> the single per-domain flag, which in real life would never be set
> >>> anyway unless you have crappy hardware, at which point you get back
> >>> to today's status quo.
> >>>
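
[The single-flag variant would look something like the sketch below,
reusing the toy types from the previous sketch; the extra field and
both function names are hypothetical:]

    /* Continuation of the sketch above, assuming struct domain gains:
     *     bool has_unmaskable_msi;   -- sticky, never cleared
     */
    static void pt_bind_unmaskable_msi(struct domain *d)
    {
        /* Called when a legacy unmaskable MSI is bound to the domain. */
        d->has_unmaskable_msi = true;
    }

    /* Runs on every vLAPIC EOI. */
    static void vlapic_handle_EOI(struct domain *d, uint8_t vector)
    {
        if ( !d->has_unmaskable_msi )
            return;                          /* fast path: nothing bound */
        vlapic_handle_EOI_slow(d, vector);   /* today's lock-and-scan */
    }
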
> >>> My main concern is that this code is fairly sparsely documented and I'm
> >>> only 99% sure that this code path really *is* only for unmaskable MSIs,
> >>> and doesn't have some other esoteric use. A second opinion on that
> >>> would be particularly welcome.
> >>>
> >>
> >> The loop appears to be there to handle the case where multiple
> >> devices assigned to a domain have MSIs programmed with the same
> >> dest/vector... which seems like an odd thing for a guest to do, but
> >> I guess it is at liberty to do so. Does it matter whether they are
> >> maskable or not?
> > 
> > Such a configuration would never work properly, as LAPIC vectors are
> > edge-triggered and thus can't be safely shared between devices?
> 
> Wait - there are two aspects here: vectors are difficult to share on
> the same CPU (though not impossible, if the devices and their drivers
> meet certain conditions). But the bitmap gets installed as a
> per-domain rather than a per-vcpu one, and using the same vector on
> different CPUs is definitely possible, as demonstrated by both Xen
> itself and Linux.
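
[To illustrate that distinction with made-up names: with a per-vCPU
bitmap, the same vector number on two different vCPUs gets two
independent bits, so one vCPU's EOI never triggers work for another's
use of that vector. A minimal sketch:]

    #include <stdbool.h>
    #include <stdint.h>

    #define NR_VECTORS    256
    #define BITS_PER_WORD (8 * sizeof(unsigned long))

    /* Hypothetical per-vCPU state: one bit per vector on THIS vCPU. */
    struct vcpu {
        unsigned long eoi_assist_map[NR_VECTORS / BITS_PER_WORD];
    };

    static bool vector_needs_eoi_work(const struct vcpu *v, uint8_t vec)
    {
        return (v->eoi_assist_map[vec / BITS_PER_WORD] >>
                (vec % BITS_PER_WORD)) & 1UL;
    }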

Yes, that's why I've requested that the bitmap be per-vcpu, but given
the work I'm doing on interrupt EOI callbacks, maybe this won't be
needed.

Thanks, Roger.
