On Tuesday 20 April 2010 23:54:01 Marcelo Tosatti wrote:
> The assigned device interrupt work handler calls kvm_set_irq, which
> can sleep (for example, waiting for the ioapic mutex), from an
> irq-disabled section.
> 
> https://bugzilla.kernel.org/show_bug.cgi?id=15725
> 
> Fix by dropping assigned_dev_lock (and re-enabling interrupts)
> before invoking kvm_set_irq for the KVM_DEV_IRQ_HOST_MSIX case. The
> other cases require neither the lock nor interrupts to be disabled
> (a new work instance will be queued in case of a concurrent
> interrupt).

Looks fine, but relying on a new work instance being queued sounds a
little tricky...

How about a local_irq_disable() at the beginning? I think that would
also ensure no concurrent interrupts can happen, at least on the
local CPU.
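
Roughly what I have in mind, as a sketch only (untested; the on-stack
pending[] array and its bound are made up for illustration):

	u32 pending[256];	/* made-up bound, assumes entries_nr <= 256 */
	int i, n = 0;

	/*
	 * With interrupts off, kvm_assigned_dev_intr() cannot preempt
	 * us on this CPU while we scan and clear the pending flags.
	 */
	local_irq_disable();
	spin_lock(&assigned_dev->assigned_dev_lock);
	for (i = 0; i < assigned_dev->entries_nr; i++) {
		if (!(guest_entries[i].flags & KVM_ASSIGNED_MSIX_PENDING))
			continue;
		guest_entries[i].flags &= ~KVM_ASSIGNED_MSIX_PENDING;
		pending[n++] = guest_entries[i].vector;
	}
	spin_unlock(&assigned_dev->assigned_dev_lock);
	local_irq_enable();

	/* Interrupts are back on, so kvm_set_irq() is free to sleep. */
	for (i = 0; i < n; i++)
		kvm_set_irq(assigned_dev->kvm, assigned_dev->irq_source_id,
			    pending[i], 1);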

> 
> KVM-Stable-Tag.
> Signed-off-by: Marcelo Tosatti <[email protected]>
> 
> diff --git a/virt/kvm/assigned-dev.c b/virt/kvm/assigned-dev.c
> index 47ca447..7ac7bbe 100644
> --- a/virt/kvm/assigned-dev.c
> +++ b/virt/kvm/assigned-dev.c
> @@ -64,24 +64,33 @@ static void kvm_assigned_dev_interrupt_work_handler(struct work_struct *work)
>  	assigned_dev = container_of(work, struct kvm_assigned_dev_kernel,
>  				    interrupt_work);
>  	kvm = assigned_dev->kvm;
> 
> -     spin_lock_irq(&assigned_dev->assigned_dev_lock);
>       if (assigned_dev->irq_requested_type & KVM_DEV_IRQ_HOST_MSIX) {
>               struct kvm_guest_msix_entry *guest_entries =
>                       assigned_dev->guest_msix_entries;

irq_requested_type and guest_msix_entries should also be protected by
the lock. So how about wrapping the second kvm_set_irq() with another
spin_lock()/spin_unlock() pair?
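
Sketched (untested), keeping the irq_requested_type read under the
lock while still calling kvm_set_irq() without it:

	unsigned long type;

	/*
	 * Snapshot the field under the lock, then drop the lock before
	 * the (possibly sleeping) kvm_set_irq() call.
	 */
	spin_lock_irq(&assigned_dev->assigned_dev_lock);
	type = assigned_dev->irq_requested_type;
	spin_unlock_irq(&assigned_dev->assigned_dev_lock);

	if (!(type & KVM_DEV_IRQ_HOST_MSIX))
		kvm_set_irq(assigned_dev->kvm, assigned_dev->irq_source_id,
			    assigned_dev->guest_irq, 1);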

> +
> +             spin_lock_irq(&assigned_dev->assigned_dev_lock);
>               for (i = 0; i < assigned_dev->entries_nr; i++) {
>                       if (!(guest_entries[i].flags &
>                                       KVM_ASSIGNED_MSIX_PENDING))
>                               continue;
>                       guest_entries[i].flags &= ~KVM_ASSIGNED_MSIX_PENDING;
> +                     /*
> +                      * If kvm_assigned_dev_intr sets pending for an
> +                      * entry smaller than this work instance is
> +                      * currently processing, a new work instance
> +                      * will be queued.
> +                      */
> +                     spin_unlock_irq(&assigned_dev->assigned_dev_lock);
>                       kvm_set_irq(assigned_dev->kvm,
>                                   assigned_dev->irq_source_id,
>                                   guest_entries[i].vector, 1);
> +                     spin_lock_irq(&assigned_dev->assigned_dev_lock);
>               }
> +             spin_unlock_irq(&assigned_dev->assigned_dev_lock);
>       } else
>               kvm_set_irq(assigned_dev->kvm, assigned_dev->irq_source_id,
>                           assigned_dev->guest_irq, 1);

Or could we make kvm_set_irq() atomic? Though the code path is a
little long for a spinlock.
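
If the ioapic mutex is what kvm_set_irq() ends up sleeping on, the
direction would be roughly this (sketch only, not a tested patch):

--- a/virt/kvm/ioapic.h
+++ b/virt/kvm/ioapic.h
-	struct mutex lock;
+	spinlock_t lock;

--- a/virt/kvm/ioapic.c
+++ b/virt/kvm/ioapic.c
-	mutex_lock(&ioapic->lock);
+	spin_lock(&ioapic->lock);

with every mutex_lock()/mutex_unlock() pair on ioapic->lock converted
and the critical sections audited so that nothing inside them sleeps.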

> 
> -     spin_unlock_irq(&assigned_dev->assigned_dev_lock);
>  }
> 
>  static irqreturn_t kvm_assigned_dev_intr(int irq, void *dev_id)

-- 
regards
Yang, Sheng