On Sun, Jun 28, 2009 at 08:50:28AM -0400, Gregory Haskins wrote:
> Michael S. Tsirkin wrote:
> > On Thu, Jun 25, 2009 at 09:28:27AM -0400, Gregory Haskins wrote:
> >   
> >> @@ -65,25 +134,39 @@ irqfd_wakeup(wait_queue_t *wait, unsigned mode, int sync, void *key)
> >>    unsigned long flags = (unsigned long)key;
> >>  
> >>    /*
> >> -   * Assume we will be called with interrupts disabled
> >> +   * Called with interrupts disabled
> >>     */
> >>    if (flags & POLLIN)
> >> -          /*
> >> -           * Defer the IRQ injection until later since we need to
> >> -           * acquire the kvm->lock to do so.
> >> -           */
> >> +          /* An event has been signaled, inject an interrupt */
> >>            schedule_work(&irqfd->inject);
> >>  
> >>    if (flags & POLLHUP) {
> >> -          /*
> >> -           * for now, just remove ourselves from the list and let
> >> -           * the rest dangle.  We will fix this up later once
> >> -           * the races in eventfd are fixed
> >> -           */
> >> +          /* The eventfd is closing, detach from KVM */
> >> +          struct kvm *kvm = irqfd->kvm;
> >> +          unsigned long flags;
> >> +
> >>            __remove_wait_queue(irqfd->wqh, &irqfd->wait);
> >> -          irqfd->wqh = NULL;
> >> +
> >> +          spin_lock_irqsave(&kvm->irqfds.lock, flags);
> >> +
> >> +          if (irqfd->active) {
> >> +                  /*
> >> +                   * If the item is still active we can be sure that
> >> +                   * no-one else is trying to shutdown this object at
> >> +                   * the same time.
> >> +                   *
> >> +                   * Defer the shutdown to a thread so we can flush
> >> +                   * all remaining inject jobs.  We use a slow-work
> >> +                   * item to prevent a deadlock against the work-queue
> >> +                   */
> >> +                  irqfd_deactivate(irqfd);
> >> +                  slow_work_enqueue(&irqfd->shutdown);
> >>     
> >
> > Greg, in your patch for slow-work module removal, you write:
> >   "Callers must ensure that their module has at least
> >   one reference held while the work is enqueued."
> >   Where does this guarantee come from, in this case?
> >   
> The general guarantee comes from the fact that a module naturally has to
> hold a reference in order to call the enqueue function in the first place;
> otherwise the calling code was already racy.  In this particular case, we
> can guarantee that the kvm vm fd is held while our slow-work is active,
> and all slow work is flushed before it is released.  (I guess I am
> assuming that VFS takes a module reference when an fd is opened, but I
> have not verified that it actually does.  If it doesn't, I suppose KVM
> is already racy w.r.t. unloading, independent of my patches)
> 
> -Greg
> 

that could be the case, as we have, for example:

static struct file_operations kvm_vm_fops = {
        .release        = kvm_vm_release,
        .unlocked_ioctl = kvm_vm_ioctl,
        .compat_ioctl   = kvm_vm_ioctl,
        .mmap           = kvm_vm_mmap,
};

with no owner field.

Avi, shouldn't we initialize the owner field to prevent the
kvm module from going away while files are open?
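
For reference, the usual idiom for pinning a module while an fd is open is
to set the owner field, so the open path can take a reference on it with
try_module_get() and drop it again on the final fput.  A minimal sketch of
what that might look like (illustrative only; whether .owner should be
THIS_MODULE or the module pointer that kvm_init() is handed would need
deciding, and the anon-inode path kvm uses for vm fds would have to actually
honour fops->owner for this to be sufficient):

static struct file_operations kvm_vm_fops = {
        .owner          = THIS_MODULE,  /* pin kvm.ko while a vm fd is open */
        .release        = kvm_vm_release,
        .unlocked_ioctl = kvm_vm_ioctl,
        .compat_ioctl   = kvm_vm_ioctl,
        .mmap           = kvm_vm_mmap,
};

Presumably kvm_vcpu_fops would want the same treatment.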

-- 
MST