On Fri, 20 May 2011 13:32:10 -0500
Shirish Pargaonkar <[email protected]> wrote:

> On Thu, May 19, 2011 at 3:22 PM, Jeff Layton <[email protected]> wrote:
> > Holding the spinlock while we call this function means that it can't
> > sleep, which really limits what it can do. Taking it out from under
> > the spinlock also means less contention for this global lock.
> >
> > Change the semantics such that the Global_MidLock is not held when
> > the callback is called. To do this requires that we take extra care
> > not to have sync_mid_result remove the mid from the list when the
> > mid is in a state where that has already happened. This prevents
> > list corruption when the mid is sitting on a private list for
> > reconnect or when cifsd is coming down.
> >
> > Reviewed-by: Pavel Shilovsky <[email protected]>
> > Signed-off-by: Jeff Layton <[email protected]>
> > ---
> >  fs/cifs/cifsglob.h  |    6 +++---
> >  fs/cifs/connect.c   |   30 ++++++++++++++++++++++++------
> >  fs/cifs/transport.c |   23 ++++++++---------------
> >  3 files changed, 35 insertions(+), 24 deletions(-)
> >

[...]

> >
> >        /* mark submitted MIDs for retry and issue callback */
> > -       cFYI(1, "%s: issuing mid callbacks", __func__);
> > +       INIT_LIST_HEAD(&retry_list);
> > +       cFYI(1, "%s: moving mids to private list", __func__);
> >        spin_lock(&GlobalMid_Lock);
> >        list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
> >                mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
> >                if (mid_entry->midState == MID_REQUEST_SUBMITTED)
> >                        mid_entry->midState = MID_RETRY_NEEDED;
> > +               list_move(&mid_entry->qhead, &retry_list);
> > +       }
> > +       spin_unlock(&GlobalMid_Lock);
> > +
> > +       cFYI(1, "%s: moving mids to private list", __func__);
> 
> Is this debug message duplicated here?
> 

Oops, yes. I'll plan to fix it. If there are no other problems with the
patch, it might be easiest to commit this as-is and I'll do another
patch to fix it. Otherwise, I can respin if Steve prefers.

> > +       list_for_each_safe(tmp, tmp2, &retry_list) {
> > +               mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
> >                list_del_init(&mid_entry->qhead);
> 
> And if we called list_move on this entry earlier, does this entry still
> exist on the qhead list, i.e. didn't list_move take care of the list_del part?
> 

list_move moves it from the pending_mid_q to the retry_list.
list_del_init removes it from the list and then makes the list_head
point to itself (i.e. turns it into an empty list).

We have to do the initial list_move with the spinlock held since another
thread could be modifying the pending_mid_q. After they all get moved
to the retry list, we can walk that list without holding the spinlock
since it's private.

-- 
Jeff Layton <[email protected]>