3.16.7-ckt11 -stable review patch.  If anyone has any objections, please let me 
know.

------------------

From: Lukas Czerner <[email protected]>

commit 0f2af21aae11972fa924374ddcf52e88347cf5a8 upstream.

Currently there is a bug in the zero range code which causes zero range
calls to allocate only the block-aligned portion of the range, while
ignoring the rest in some cases.

In some cases, namely if the end of the range is past i_size, we do
attempt to preallocate the last nonaligned block. However, this might
cause the kernel to BUG() on carefully designed zero range requests
on setups where page size > block size.

Fix this problem by first preallocating the entire range, including
the nonaligned edges, and converting the written extents to unwritten
in the next step. This approach also gives us the advantage of
having the range be as linearly contiguous as possible.

Signed-off-by: Lukas Czerner <[email protected]>
Signed-off-by: Theodore Ts'o <[email protected]>
Cc: Moritz Muehlenhoff <[email protected]>
Signed-off-by: Luis Henriques <[email protected]>
---
 fs/ext4/extents.c | 31 +++++++++++++++++++------------
 1 file changed, 19 insertions(+), 12 deletions(-)

diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 2a4b4f3b1ae2..cdfe574ba3d9 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4795,12 +4795,6 @@ static long ext4_zero_range(struct file *file, loff_t offset,
        else
                max_blocks -= lblk;
 
-       flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT |
-               EXT4_GET_BLOCKS_CONVERT_UNWRITTEN |
-               EXT4_EX_NOCACHE;
-       if (mode & FALLOC_FL_KEEP_SIZE)
-               flags |= EXT4_GET_BLOCKS_KEEP_SIZE;
-
        mutex_lock(&inode->i_mutex);
 
        /*
@@ -4817,15 +4811,28 @@ static long ext4_zero_range(struct file *file, loff_t offset,
                ret = inode_newsize_ok(inode, new_size);
                if (ret)
                        goto out_mutex;
-               /*
-                * If we have a partial block after EOF we have to allocate
-                * the entire block.
-                */
-               if (partial_end)
-                       max_blocks += 1;
        }
 
+       flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT;
+       if (mode & FALLOC_FL_KEEP_SIZE)
+               flags |= EXT4_GET_BLOCKS_KEEP_SIZE;
+
+       /* Preallocate the range including the unaligned edges */
+       if (partial_begin || partial_end) {
+               ret = ext4_alloc_file_blocks(file,
+                               round_down(offset, 1 << blkbits) >> blkbits,
+                               (round_up((offset + len), 1 << blkbits) -
+                                round_down(offset, 1 << blkbits)) >> blkbits,
+                               new_size, flags, mode);
+               if (ret)
+                       goto out_mutex;
+
+       }
+
+       /* Zero range excluding the unaligned edges */
        if (max_blocks > 0) {
+               flags |= (EXT4_GET_BLOCKS_CONVERT_UNWRITTEN |
+                         EXT4_EX_NOCACHE);
 
                /* Now release the pages and zero block aligned part of pages*/
                truncate_pagecache_range(inode, start, end - 1);
--