Hi Marc,

On 5/26/20 6:11 PM, Marc Zyngier wrote:
> On a system that uses SPIs to implement MSIs (as it would be
> the case on a GICv2 system exposing a GICv2m to its guests),
> we deny the possibility of injecting SPIs on the in-atomic
> fast-path.
> 
> This results in a very large amount of context-switches
> (roughly equivalent to twice the interrupt rate) on the host,
> and suboptimal performance for the guest (as measured with
> a test workload involving a virtio interface backed by vhost-net).
> Given that GICv2 systems are usually on the low-end of the spectrum
> performance wise, they could do without the aggravation.
> 
> We solved this for GICv3+ITS by having a translation cache. But
> SPIs do not need any extra infrastructure, and can be immediately
> injected in the virtual distributor as the locking is already
> heavy enough that we don't need to worry about anything.
> 
> This halves the number of context switches for the same workload.
> 
> Signed-off-by: Marc Zyngier <[email protected]>
> ---
>  arch/arm64/kvm/vgic/vgic-irqfd.c | 20 ++++++++++++++++----
>  arch/arm64/kvm/vgic/vgic-its.c   |  3 +--
>  2 files changed, 17 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c b/arch/arm64/kvm/vgic/vgic-irqfd.c
> index d8cdfea5cc96..11a9f81115ab 100644
> --- a/arch/arm64/kvm/vgic/vgic-irqfd.c
> +++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
There is still a comment above this function saying:
 * Currently only direct MSI injection is supported.
It probably needs updating now that SPIs can be injected here as well.
> @@ -107,15 +107,27 @@ int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
>                             struct kvm *kvm, int irq_source_id, int level,
>                             bool line_status)
>  {
> -     if (e->type == KVM_IRQ_ROUTING_MSI && vgic_has_its(kvm) && level) {
> +     if (!level)
> +             return -EWOULDBLOCK;
> +
> +     switch (e->type) {
> +     case KVM_IRQ_ROUTING_MSI: {
>               struct kvm_msi msi;
>  
> +             if (!vgic_has_its(kvm))
> +                     return -EINVAL;
Shouldn't we return -EWOULDBLOCK by default?
QEMU does not use that path with GICv2m, but in kvm_set_routing_entry() I
don't see any check related to the ITS, so an MSI routing entry without an
ITS does not look impossible here.
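Something along these lines, maybe (untested, just to illustrate the
suggestion):

		if (!vgic_has_its(kvm))
			return -EWOULDBLOCK;

so that such an entry falls back to the non-atomic path instead of failing
the injection outright.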
> +
>               kvm_populate_msi(e, &msi);
> -             if (!vgic_its_inject_cached_translation(kvm, &msi))
> -                     return 0;
> +             return vgic_its_inject_cached_translation(kvm, &msi);
>       }
>  
> -     return -EWOULDBLOCK;
> +     case KVM_IRQ_ROUTING_IRQCHIP:
> +             /* Injecting SPIs is always possible in atomic context */
> +             return vgic_irqfd_set_irq(e, kvm, irq_source_id, 1, line_status);
What about the mutex_lock(&kvm->lock) called from within
vgic_irqfd_set_irq/kvm_vgic_inject_irq/vgic_lazy_init? Taking a mutex is
not safe in this in-atomic fast path.
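For reference, the chain I am worried about is roughly (paraphrased, if my
reading of the code is right):

	kvm_arch_set_irq_inatomic()
	  -> vgic_irqfd_set_irq()
	       -> kvm_vgic_inject_irq()
	            -> vgic_lazy_init()
	                 -> mutex_lock(&kvm->lock)   /* may sleep */

i.e. if the vgic is not yet initialized when the first injection arrives, we
could end up sleeping in atomic context.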
> +
> +     default:
> +             return -EWOULDBLOCK;
> +     }
>  }
>  
>  int kvm_vgic_setup_default_irq_routing(struct kvm *kvm)
> diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
> index c012a52b19f5..40cbaca81333 100644
> --- a/arch/arm64/kvm/vgic/vgic-its.c
> +++ b/arch/arm64/kvm/vgic/vgic-its.c
> @@ -757,9 +757,8 @@ int vgic_its_inject_cached_translation(struct kvm *kvm, struct kvm_msi *msi)
>  
>       db = (u64)msi->address_hi << 32 | msi->address_lo;
>       irq = vgic_its_check_cache(kvm, db, msi->devid, msi->data);
> -
>       if (!irq)
> -             return -1;
> +             return -EWOULDBLOCK;
>  
>       raw_spin_lock_irqsave(&irq->irq_lock, flags);
>       irq->pending_latch = true;
> 
Thanks

Eric

_______________________________________________
kvmarm mailing list
[email protected]
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm