This is a note to let you know that I've just added the patch titled

    dm-raid: fix lockdep warning in "pers->hot_add_disk"

to the 6.8-stable tree which can be found at:
    
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     dm-raid-fix-lockdep-waring-in-pers-hot_add_disk.patch
and it can be found in the queue-6.8 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <[email protected]> know about it.



commit 536e66694259aab8c1e869aa670b4b0d5805f925
Author: Yu Kuai <[email protected]>
Date:   Tue Mar 5 15:23:06 2024 +0800

    dm-raid: fix lockdep warning in "pers->hot_add_disk"
    
    [ Upstream commit 95009ae904b1e9dca8db6f649f2d7c18a6e42c75 ]
    
    The lockdep assert was added by commit a448af25becf ("md/raid10: remove
    rcu protection to access rdev from conf") in print_conf(), and I didn't
    notice that dm-raid calls "pers->hot_add_disk" without holding
    'reconfig_mutex'.
    
    "pers->hot_add_disk" read and write many fields that is protected by
    'reconfig_mutex', and raid_resume() already grab the lock in other
    contex. Hence fix this problem by protecting "pers->host_add_disk"
    with the lock.
    
    Fixes: 9092c02d9435 ("DM RAID: Add ability to restore transiently failed devices on resume")
    Fixes: a448af25becf ("md/raid10: remove rcu protection to access rdev from conf")
    Cc: [email protected] # v6.7+
    Signed-off-by: Yu Kuai <[email protected]>
    Signed-off-by: Xiao Ni <[email protected]>
    Acked-by: Mike Snitzer <[email protected]>
    Signed-off-by: Song Liu <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Sasha Levin <[email protected]>

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 063f1266ec462..d97355e9b9a6e 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -4091,7 +4091,9 @@ static void raid_resume(struct dm_target *ti)
                 * Take this opportunity to check whether any failed
                 * devices are reachable again.
                 */
+               mddev_lock_nointr(mddev);
                attempt_restore_of_faulty_devices(rs);
+               mddev_unlock(mddev);
        }
 
        if (test_and_clear_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags)) {
