This is a note to let you know that I've just added the patch titled

    block: fix race between set_blocksize and read paths

to the 6.1-stable tree which can be found at:
    
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     block-fix-race-between-set_blocksize-and-read-paths.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <[email protected]> know about it.


>From [email protected] Tue Oct 21 16:13:58 
>2025
From: Mahmoud Adam <[email protected]>
Date: Tue, 21 Oct 2025 09:03:42 +0200
Subject: block: fix race between set_blocksize and read paths
To: <[email protected]>
Cc: <[email protected]>, <[email protected]>, "Darrick J. Wong" 
<[email protected]>, Christoph Hellwig <[email protected]>, Luis Chamberlain 
<[email protected]>, Shin'ichiro Kawasaki <[email protected]>, "Jens 
Axboe" <[email protected]>, Xiubo Li <[email protected]>, Ilya Dryomov 
<[email protected]>, Jeff Layton <[email protected]>, Alexander Viro 
<[email protected]>, Theodore Ts'o <[email protected]>, Andreas Dilger 
<[email protected]>, Jaegeuk Kim <[email protected]>, Chao Yu 
<[email protected]>, Christoph Hellwig <[email protected]>, Trond Myklebust 
<[email protected]>, Anna Schumaker <[email protected]>, "Ryusuke 
Konishi" <[email protected]>, "Matthew Wilcox (Oracle)" 
<[email protected]>, Andrew Morton <[email protected]>, "Hannes 
Reinecke" <[email protected]>, Damien Le Moal <[email protected]>, 
<[email protected]>, <[email protected]>, 
<[email protected]>, <[email protected]>, <linux-ext4
 @vger.kernel.org>, <[email protected]>, 
<[email protected]>, <[email protected]>, 
<[email protected]>, <[email protected]>
Message-ID: <[email protected]>

From: "Darrick J. Wong" <[email protected]>

commit c0e473a0d226479e8e925d5ba93f751d8df628e9 upstream.

With the new large sector size support, it's now the case that
set_blocksize can change i_blkbits and the folio order in a manner that
conflicts with a concurrent reader and causes a kernel crash.

Specifically, let's say that udev-worker calls libblkid to detect the
labels on a block device.  The read call can create an order-0 folio to
read the first 4096 bytes from the disk.  But then udev is preempted.

Next, someone tries to mount an 8k-sectorsize filesystem from the same
block device.  The filesystem calls set_blocksize, which sets i_blkbits
to 13 (an 8192-byte block size) and the minimum folio order to 1.
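
(For context, the order arithmetic works out as follows; this is a
sketch assuming 4096-byte pages, i.e. PAGE_SHIFT == 12, not code quoted
from the kernel:

	i_blkbits = blksize_bits(8192)      /* = 13 */
	min_order = i_blkbits - PAGE_SHIFT  /* = 13 - 12 = 1 */

so every folio in the mapping must now be at least order 1, i.e. two
pages, to cover a single 8192-byte block.)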

Now udev resumes, still holding the order-0 folio it allocated.  It
then tries to schedule a read bio, and do_mpage_readpage tries to
create bufferheads for the folio.  Unfortunately, blocks_per_folio == 0
because the page size is 4096 but the block size is 8192, so no
bufferheads are attached and the bh walk never sets bh->b_bdev.  We
then submit the bio with a NULL block device and crash.
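
In outline, the failing arithmetic looks like this (a sketch with
illustrative variable names and values, not the exact code from
fs/mpage.c):

	/* order-0 folio allocated before set_blocksize ran */
	folio_size       = 4096;                     /* PAGE_SIZE */
	i_blkbits        = 13;                       /* 8192-byte blocks now */
	blocks_per_folio = folio_size >> i_blkbits;  /* 4096 >> 13 == 0 */

With zero blocks per folio, the bufferhead setup loop never runs,
bh->b_bdev is never assigned, and the bio goes down to the driver with
a NULL block device.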

Therefore, truncate the page cache after flushing but before updating
i_blkbits.  However, that's not enough -- we also need to lock out file
IO and page faults during the update.  Take both the i_rwsem and the
invalidate_lock in exclusive mode for invalidations, and in shared mode
for read/write operations.
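
In outline, the locking protocol this patch establishes looks like the
following (a sketch distilled from the diff below, not a literal
excerpt; page fault paths already take the invalidate_lock in shared
mode, so only the invalidation and read/write paths need changes):

	/* invalidation side: set_blocksize, discard, zeroout, ... */
	inode_lock(bd_inode);              /* i_rwsem, exclusive */
	filemap_invalidate_lock(mapping);  /* invalidate_lock, exclusive */
	/* flush, truncate the pagecache, update i_blkbits */
	filemap_invalidate_unlock(mapping);
	inode_unlock(bd_inode);

	/* read/write side: blkdev_read_iter, buffered blkdev_write_iter */
	inode_lock_shared(bd_inode);       /* i_rwsem, shared */
	/* filemap_read() or generic_perform_write() */
	inode_unlock_shared(bd_inode);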

I don't know if this is the correct fix, but xfs/259 found it.

Signed-off-by: Darrick J. Wong <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Luis Chamberlain <[email protected]>
Tested-by: Shin'ichiro Kawasaki <[email protected]>
Link: https://lore.kernel.org/r/174543795699.4139148.2086129139322431423.stgit@frogsfrogsfrogs
Signed-off-by: Jens Axboe <[email protected]>
[use bdev->bd_inode instead & fix small contextual changes]
Signed-off-by: Mahmoud Adam <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
 block/bdev.c      |   17 +++++++++++++++++
 block/blk-zoned.c |    5 ++++-
 block/fops.c      |   16 ++++++++++++++++
 block/ioctl.c     |    6 ++++++
 4 files changed, 43 insertions(+), 1 deletion(-)

--- a/block/bdev.c
+++ b/block/bdev.c
@@ -147,9 +147,26 @@ int set_blocksize(struct block_device *b
 
        /* Don't change the size if it is same as current */
        if (bdev->bd_inode->i_blkbits != blksize_bits(size)) {
+               /*
+                * Flush and truncate the pagecache before we reconfigure the
+                * mapping geometry because folio sizes are variable now.  If a
+                * reader has already allocated a folio whose size is smaller
+                * than the new min_order but invokes readahead after the new
+                * min_order becomes visible, readahead will think there are
+                * "zero" blocks per folio and crash.  Take the inode and
+                * invalidation locks to avoid racing with
+                * read/write/fallocate.
+                */
+               inode_lock(bdev->bd_inode);
+               filemap_invalidate_lock(bdev->bd_inode->i_mapping);
+
                sync_blockdev(bdev);
+               kill_bdev(bdev);
+
                bdev->bd_inode->i_blkbits = blksize_bits(size);
                kill_bdev(bdev);
+               filemap_invalidate_unlock(bdev->bd_inode->i_mapping);
+               inode_unlock(bdev->bd_inode);
        }
        return 0;
 }
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -417,6 +417,7 @@ int blkdev_zone_mgmt_ioctl(struct block_
                op = REQ_OP_ZONE_RESET;
 
                /* Invalidate the page cache, including dirty pages. */
+               inode_lock(bdev->bd_inode);
                filemap_invalidate_lock(bdev->bd_inode->i_mapping);
                ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
                if (ret)
@@ -439,8 +440,10 @@ int blkdev_zone_mgmt_ioctl(struct block_
                               GFP_KERNEL);
 
 fail:
-       if (cmd == BLKRESETZONE)
+       if (cmd == BLKRESETZONE) {
                filemap_invalidate_unlock(bdev->bd_inode->i_mapping);
+               inode_unlock(bdev->bd_inode);
+       }
 
        return ret;
 }
--- a/block/fops.c
+++ b/block/fops.c
@@ -592,7 +592,14 @@ static ssize_t blkdev_write_iter(struct
                        ret = direct_write_fallback(iocb, from, ret,
                                        generic_perform_write(iocb, from));
        } else {
+               /*
+                * Take i_rwsem and invalidate_lock to avoid racing with
+                * set_blocksize changing i_blkbits/folio order and punching
+                * out the pagecache.
+                */
+               inode_lock_shared(bd_inode);
                ret = generic_perform_write(iocb, from);
+               inode_unlock_shared(bd_inode);
        }
 
        if (ret > 0)
@@ -605,6 +612,7 @@ static ssize_t blkdev_write_iter(struct
 static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
 {
        struct block_device *bdev = iocb->ki_filp->private_data;
+       struct inode *bd_inode = bdev->bd_inode;
        loff_t size = bdev_nr_bytes(bdev);
        loff_t pos = iocb->ki_pos;
        size_t shorted = 0;
@@ -652,7 +660,13 @@ static ssize_t blkdev_read_iter(struct k
                        goto reexpand;
        }
 
+       /*
+        * Take i_rwsem and invalidate_lock to avoid racing with set_blocksize
+        * changing i_blkbits/folio order and punching out the pagecache.
+        */
+       inode_lock_shared(bd_inode);
        ret = filemap_read(iocb, to, ret);
+       inode_unlock_shared(bd_inode);
 
 reexpand:
        if (unlikely(shorted))
@@ -695,6 +709,7 @@ static long blkdev_fallocate(struct file
        if ((start | len) & (bdev_logical_block_size(bdev) - 1))
                return -EINVAL;
 
+       inode_lock(inode);
        filemap_invalidate_lock(inode->i_mapping);
 
        /*
@@ -735,6 +750,7 @@ static long blkdev_fallocate(struct file
 
  fail:
        filemap_invalidate_unlock(inode->i_mapping);
+       inode_unlock(inode);
        return error;
 }
 
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -114,6 +114,7 @@ static int blk_ioctl_discard(struct bloc
            end > bdev_nr_bytes(bdev))
                return -EINVAL;
 
+       inode_lock(inode);
        filemap_invalidate_lock(inode->i_mapping);
        err = truncate_bdev_range(bdev, mode, start, end - 1);
        if (err)
@@ -121,6 +122,7 @@ static int blk_ioctl_discard(struct bloc
        err = blkdev_issue_discard(bdev, start >> 9, len >> 9, GFP_KERNEL);
 fail:
        filemap_invalidate_unlock(inode->i_mapping);
+       inode_unlock(inode);
        return err;
 }
 
@@ -146,12 +148,14 @@ static int blk_ioctl_secure_erase(struct
            end > bdev_nr_bytes(bdev))
                return -EINVAL;
 
+       inode_lock(bdev->bd_inode);
        filemap_invalidate_lock(bdev->bd_inode->i_mapping);
        err = truncate_bdev_range(bdev, mode, start, end - 1);
        if (!err)
                err = blkdev_issue_secure_erase(bdev, start >> 9, len >> 9,
                                                GFP_KERNEL);
        filemap_invalidate_unlock(bdev->bd_inode->i_mapping);
+       inode_unlock(bdev->bd_inode);
        return err;
 }
 
@@ -184,6 +188,7 @@ static int blk_ioctl_zeroout(struct bloc
                return -EINVAL;
 
        /* Invalidate the page cache, including dirty pages */
+       inode_lock(inode);
        filemap_invalidate_lock(inode->i_mapping);
        err = truncate_bdev_range(bdev, mode, start, end);
        if (err)
@@ -194,6 +199,7 @@ static int blk_ioctl_zeroout(struct bloc
 
 fail:
        filemap_invalidate_unlock(inode->i_mapping);
+       inode_unlock(inode);
        return err;
 }
 


Patches currently in stable-queue which might be from [email protected] are

queue-6.1/block-fix-race-between-set_blocksize-and-read-paths.patch
queue-6.1/filemap-add-a-kiocb_invalidate_pages-helper.patch
queue-6.1/fs-factor-out-a-direct_write_fallback-helper.patch
queue-6.1/direct_write_fallback-on-error-revert-the-ki_pos-update-from-buffered-write.patch
queue-6.1/filemap-update-ki_pos-in-generic_perform_write.patch
queue-6.1/filemap-add-a-kiocb_invalidate_post_direct_write-helper.patch
queue-6.1/nilfs2-fix-deadlock-warnings-caused-by-lock-dependency-in-init_nilfs.patch
queue-6.1/block-open-code-__generic_file_write_iter-for-blkdev-writes.patch


