On Mon, 28 Jan 2008 22:02:49 -0500
"J. Bruce Fields" <[EMAIL PROTECTED]> wrote:

> On Mon, Jan 28, 2008 at 09:29:42PM -0500, Jeff Layton wrote:
> > On Mon, 28 Jan 2008 20:23:51 -0500
> > "J. Bruce Fields" <[EMAIL PROTECTED]> wrote:
> > 
> > > On Mon, Jan 28, 2008 at 08:12:10PM -0500, Jeff Layton wrote:
> > > > On Mon, 28 Jan 2008 19:05:17 -0500
> > > > "J. Bruce Fields" <[EMAIL PROTECTED]> wrote:
> > > > 
> > > > > On Mon, Jan 28, 2008 at 02:29:11PM -0500, Jeff Layton wrote:
> > > > > > It's possible to end up continually looping in
> > > > > > nlmsvc_retry_blocked() even when the lockd kthread has been
> > > > > > requested to come down. I've been able to reproduce this
> > > > > > fairly easily by having an RPC grant callback queued up to
> > > > > > an RPC client that's not bound. If the other host is
> > > > > > unresponsive, then the block never gets taken off the list,
> > > > > > and lockd continually respawns new RPCs.
> > > > > > 
> > > > > > If lockd is going down anyway, we don't want to keep
> > > > > > retrying the blocks, so allow it to break out of this loop.
> > > > > > Additionally, in that situation kthread_stop will have
> > > > > > already woken lockd, so when nlmsvc_retry_blocked returns,
> > > > > > have lockd check for kthread_stop() so that it doesn't end
> > > > > > up calling svc_recv and waiting for a long time.
> > > > > 
> > > > > Stupid question: how is this different from the
> > > > > kthread_stop() coming just after this kthread_should_stop()
> > > > > call but before we call svc_recv()?  What keeps us from
> > > > > waiting in svc_recv() indefinitely after kthread_stop()?
> > > > > 
> > > > 
> > > > Nothing at all, unfortunately :-(
> > > > 
> > > > Since we've taken signalling out of the lockd_down codepath,
> > > > this gets a lot trickier than it used to be. I'm starting to
> > > > wonder if we should go back to sending a signal on lockd_down.
> > > 
> > > OK, thanks.  So do I keep these patches, or not?  This sounds
> > > like a regression (if perhaps a very minor one--I'm not quite
> > > clear).  Help!
> > > 
> > 
> > Well, if we reintroduce signalling in lockd_down then this
> > particular problem goes away. It may be reasonable to add that back
> > into the mix for now.
> > 
> > As to the more general question of whether to keep these patches...
> > 
> > Most of them are pretty innocuous, but the patch to convert to
> > kthread API is the biggest change. 2.6.25 might be a bit aggressive
> > for that. It wouldn't hurt to let the conversion to kthread API
> > brew until 2.6.26.
> > 
> > lockd still has a use-after-free problem, though, so patch 1/2
> > here might be worth considering even without changing things over to
> > kthreads. I've not tested it on a kernel without the kthread
> > patches, however; I can do that if you think that's the right
> > approach.
> 
> Pfft, I was hoping you'd tell me what to do.
> 
> But, yes, it sounds like dropping the kthread conversion and sending
> in that one bugfix is probably wisest.  (But I assume we should keep
> the first of those three patches, "SUNRPC: spin svc_rqst
> initialization to its own function".)
> 

Yes. The first patch fixes a possible NULL pointer dereference so I
think we want that one here. This patch:

    Subject: [PATCH 2/4] SUNRPC: export svc_sock_update_bufs

...should be safe too, though it's not strictly needed until we convert
lockd to the kthread API.

So at this point we have 2 questions:

1) Is there a way to make sure we don't block in svc_recv() without
resorting to signals, and can we reasonably mix signaling with
kthread_stop()? There may be a chicken-and-egg problem involved here,
though I haven't thought it through yet...

2) Can an unresponsive client make lockd loop continuously in
nlmsvc_retry_blocked() and prevent it from serving new lock requests?

I'll try to get answers to both of these...

-- 
Jeff Layton <[EMAIL PROTECTED]>