commit: 05bf86b4ccfd0f197da61c67bd372111d15a6620
From: Hugh Dickins <[email protected]>
Date: Sat, 14 May 2011 12:06:42 -0700
Subject: [PATCH] tmpfs: fix race between swapoff and writepage

Shame on me!  Commit b1dea800ac39 "tmpfs: fix race between umount and
writepage" fixed the advertized race, but introduced another: as even
its comment makes clear, we cannot safely rely on a peek at list_empty()
while holding no lock - until info->swapped is set, shmem_unuse_inode()
may delete any formerly-swapped inode from the shmem_swaplist, which
in this case would leave a swap area impossible to swapoff.
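
To make the window concrete, here is a rough interleaving (a paraphrased
sketch, not the exact kernel source; shmem_writepage() is shown as it
stood after b1dea800ac39):

    CPU0: shmem_writepage()              CPU1: swapoff -> shmem_unuse_inode()
    -----------------------              ------------------------------------
    swap = get_swap_page();
    if (swap.val &&
        list_empty(&info->swaplist))
            /* false: the inode is still
             * on the list from an earlier
             * swapout, so the mutex and
             * list_move_tail are skipped */
                                         mutex_lock(&shmem_swaplist_mutex);
                                         spin_lock(&info->lock);
                                         /* info->swapped is still 0, so the
                                          * inode is pruned from the list */
                                         list_del_init(&info->swaplist);
                                         spin_unlock(&info->lock);
                                         mutex_unlock(&shmem_swaplist_mutex);
    spin_lock(&info->lock);
    /* the swap entry is installed and
     * info->swapped becomes 1, but the
     * inode is now off shmem_swaplist:
     * swapoff can never find this page,
     * so the swap area cannot be freed */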

Although I don't relish taking the mutex every time, I don't care much
for the alternatives either; and at least the peek at list_empty() in
shmem_evict_inode() (a hotter path since most inodes would never have
been swapped) remains safe, because we already truncated the whole file.
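
For contrast, the eviction-side peek stays safe because truncation has
already emptied the file; a minimal paraphrase of that path in
shmem_evict_inode() (details elided):

    /* The whole file was truncated above, so no page of this inode
     * can still be on its way to swap; if the inode is on the list,
     * take the mutex and delete it; nothing can re-add it now. */
    if (!list_empty(&info->swaplist)) {
            mutex_lock(&shmem_swaplist_mutex);
            list_del_init(&info->swaplist);
            mutex_unlock(&shmem_swaplist_mutex);
    }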

Signed-off-by: Hugh Dickins <[email protected]>
Cc: [email protected]
Signed-off-by: Linus Torvalds <[email protected]>
---
 mm/shmem.c |   10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 9e755c1..dfc7069 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1037,7 +1037,6 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
        struct address_space *mapping;
        unsigned long index;
        struct inode *inode;
-       bool unlock_mutex = false;
 
        BUG_ON(!PageLocked(page));
        mapping = page->mapping;
@@ -1072,15 +1071,14 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
         * we've taken the spinlock, because shmem_unuse_inode() will
         * prune a !swapped inode from the swaplist under both locks.
         */
-       if (swap.val && list_empty(&info->swaplist)) {
+       if (swap.val) {
                mutex_lock(&shmem_swaplist_mutex);
-               /* move instead of add in case we're racing */
-               list_move_tail(&info->swaplist, &shmem_swaplist);
-               unlock_mutex = true;
+               if (list_empty(&info->swaplist))
+                       list_add_tail(&info->swaplist, &shmem_swaplist);
        }
 
        spin_lock(&info->lock);
-       if (unlock_mutex)
+       if (swap.val)
                mutex_unlock(&shmem_swaplist_mutex);
 
        if (index >= info->next_index) {
