This is a note to let you know that I've just added the patch titled

    NFS: Ensure that we free the rpc_task after read and write cleanups are done

to the 3.7-stable tree which can be found at:
    
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     
nfs-ensure-that-we-free-the-rpc_task-after-read-and-write-cleanups-are-done.patch
and it can be found in the queue-3.7 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <[email protected]> know about it.


From 6db6dd7d3fd8f7c765dabc376493d6791ab28bd6 Mon Sep 17 00:00:00 2001
From: Trond Myklebust <[email protected]>
Date: Fri, 4 Jan 2013 12:47:04 -0500
Subject: NFS: Ensure that we free the rpc_task after read and write cleanups 
are done

From: Trond Myklebust <[email protected]>

commit 6db6dd7d3fd8f7c765dabc376493d6791ab28bd6 upstream.

This patch ensures that we free the rpc_task after the cleanup callbacks
are done in order to avoid a deadlock problem that can be triggered if
the callback needs to wait for another workqueue item to complete.

Signed-off-by: Trond Myklebust <[email protected]>
Cc: Weston Andros Adamson <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Bruce Fields <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
 fs/nfs/read.c  |   10 +++++++---
 fs/nfs/write.c |   10 +++++++---
 2 files changed, 14 insertions(+), 6 deletions(-)

--- a/fs/nfs/read.c
+++ b/fs/nfs/read.c
@@ -91,12 +91,16 @@ void nfs_readdata_release(struct nfs_rea
        put_nfs_open_context(rdata->args.context);
        if (rdata->pages.pagevec != rdata->pages.page_array)
                kfree(rdata->pages.pagevec);
-       if (rdata != &read_header->rpc_data)
-               kfree(rdata);
-       else
+       if (rdata == &read_header->rpc_data) {
                rdata->header = NULL;
+               rdata = NULL;
+       }
        if (atomic_dec_and_test(&hdr->refcnt))
                hdr->completion_ops->completion(hdr);
+       /* Note: we only free the rpc_task after callbacks are done.
+        * See the comment in rpc_free_task() for why
+        */
+       kfree(rdata);
 }
 EXPORT_SYMBOL_GPL(nfs_readdata_release);
 
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -126,12 +126,16 @@ void nfs_writedata_release(struct nfs_wr
        put_nfs_open_context(wdata->args.context);
        if (wdata->pages.pagevec != wdata->pages.page_array)
                kfree(wdata->pages.pagevec);
-       if (wdata != &write_header->rpc_data)
-               kfree(wdata);
-       else
+       if (wdata == &write_header->rpc_data) {
                wdata->header = NULL;
+               wdata = NULL;
+       }
        if (atomic_dec_and_test(&hdr->refcnt))
                hdr->completion_ops->completion(hdr);
+       /* Note: we only free the rpc_task after callbacks are done.
+        * See the comment in rpc_free_task() for why
+        */
+       kfree(wdata);
 }
 EXPORT_SYMBOL_GPL(nfs_writedata_release);
 


Patches currently in stable-queue which might be from [email protected] are

queue-3.7/nfs-fix-access-to-suid-sgid-executables.patch
queue-3.7/nfs-ensure-that-we-free-the-rpc_task-after-read-and-write-cleanups-are-done.patch
queue-3.7/nfs-avoid-dereferencing-null-pointer-in-initiate_bulk_draining.patch
queue-3.7/pnfs-increase-the-refcount-when-layoutget-fails-the-first-time.patch
queue-3.7/nfs-fix-null-checking-in-nfs_get_option_str.patch
queue-3.7/nfs-add-sequence_priviliged_ops-for-nfs4_proc_sequence.patch
queue-3.7/nfs-don-t-extend-writes-to-cover-entire-page-if-pagecache-is-invalid.patch
queue-3.7/nfs-don-t-zero-out-the-rest-of-the-page-if-we-hit-the-eof-on-a-dio-read.patch
queue-3.7/nfs-don-t-use-setpageerror-in-the-nfs-writeback-code.patch
queue-3.7/nfs-fix-calls-to-drop_nlink.patch
queue-3.7/nfsv4-check-for-buffer-length-in-__nfs4_get_acl_uncached.patch
queue-3.7/nfs-avoid-null-dereference-in-nfs_destroy_server.patch