4.9-stable review patch.  If anyone has any objections, please let me know.


From: "J. Bruce Fields" <bfie...@redhat.com>

[ Upstream commit efda760fe95ea15291853c8fa9235c32d319cd98 ]

As reported by David Jeffery: "a signal was sent to lockd while lockd
was shutting down from a request to stop nfs.  The signal causes lockd
to call restart_grace() which puts the lockd_net structure on the grace
list.  If this signal is received at the wrong time, it will occur after
lockd_down_net() has called locks_end_grace() but before
lockd_down_net() stops the lockd thread.  This leads to lockd putting
the lockd_net structure back on the grace list, then exiting without
anything removing it from the list."

So, perform the final locks_end_grace() from the lockd thread; this
ensures it's serialized with respect to restart_grace().

Reported-by: David Jeffery <djeff...@redhat.com>
Signed-off-by: J. Bruce Fields <bfie...@redhat.com>
Signed-off-by: Sasha Levin <alexander.le...@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>
---
 fs/lockd/svc.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -132,6 +132,8 @@ lockd(void *vrqstp)
 {
        int             err = 0;
        struct svc_rqst *rqstp = vrqstp;
+       struct net *net = &init_net;
+       struct lockd_net *ln = net_generic(net, lockd_net_id);
 
        /* try_to_freeze() is called from svc_recv() */
        set_freezable();
@@ -176,6 +178,8 @@ lockd(void *vrqstp)
        if (nlmsvc_ops)
                nlmsvc_invalidate_all();
 
+       cancel_delayed_work_sync(&ln->grace_period_end);
+       locks_end_grace(&ln->lockd_manager);
        return 0;
 }
 
@@ -270,8 +274,6 @@ static void lockd_down_net(struct svc_serv *serv, struct net *net)
        if (ln->nlmsvc_users) {
                if (--ln->nlmsvc_users == 0) {
                        nlm_shutdown_hosts_net(net);
-                       cancel_delayed_work_sync(&ln->grace_period_end);
-                       locks_end_grace(&ln->lockd_manager);
                        svc_shutdown_net(serv, net);
                        dprintk("lockd_down_net: per-net data destroyed; net=%p\n", net);
                }
