2.6.35-longterm review patch. If anyone has any objections, please let me know.
------------------

From: NeilBrown <[email protected]>

commit 8f9e0ee38f75d4740daa9e42c8af628d33d19a02 upstream.

Commit 4044ba58dd15cb01797c4fd034f39ef4a75f7cc3 supposedly fixed a
problem where if a raid1 with just one good device gets a read-error
during recovery, the recovery would abort and immediately restart in
an infinite loop.

However it depended on raid1_remove_disk removing the spare device
from the array.  But that does not happen in this case.  So add a
test so that in the 'recovery_disabled' case, the device will be
removed.

This is suitable for any kernel since 2.6.29, which is when
recovery_disabled was introduced.

Reported-by: Sebastian Färber <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Andi Kleen <[email protected]>

---
 drivers/md/raid1.c |    1 +
 1 file changed, 1 insertion(+)

Index: linux/drivers/md/raid1.c
===================================================================
--- linux.orig/drivers/md/raid1.c
+++ linux/drivers/md/raid1.c
@@ -1208,6 +1208,7 @@ static int raid1_remove_disk(mddev_t *md
	 * is not possible.
	 */
	if (!test_bit(Faulty, &rdev->flags) &&
+	    !mddev->recovery_disabled &&
	    mddev->degraded < conf->raid_disks) {
		err = -EBUSY;
		goto abort;
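
For reviewers who want to see the decision in isolation: below is a minimal
user-space sketch of the condition the hunk changes.  The struct and the
can_remove() helper are invented for illustration and are not the kernel's
API; only the three-way test mirrors the patched raid1_remove_disk() logic.

	/*
	 * Hypothetical stand-alone model of the raid1_remove_disk()
	 * decision.  model_mddev and can_remove() are made up for this
	 * sketch; the condition itself matches the patched code.
	 */
	#include <stdio.h>
	#include <errno.h>

	struct model_mddev {
		int recovery_disabled;	/* set when recovery keeps failing */
		int degraded;		/* missing/failed members */
		int raid_disks;		/* configured members in the array */
	};

	/* 0 if a non-faulty device may be removed, -EBUSY otherwise. */
	static int can_remove(const struct model_mddev *mddev, int faulty)
	{
		if (!faulty &&
		    !mddev->recovery_disabled &&	/* the added test */
		    mddev->degraded < mddev->raid_disks)
			return -EBUSY;	/* still wanted for recovery */
		return 0;
	}

	int main(void)
	{
		/* Degraded raid1 whose recovery keeps aborting on a
		 * read error, so recovery_disabled has been set. */
		struct model_mddev m = { .recovery_disabled = 1,
					 .degraded = 1, .raid_disks = 2 };

		/* Without the !recovery_disabled test this returned
		 * -EBUSY, the spare was never removed, and recovery
		 * restarted forever; with it, removal succeeds. */
		printf("can_remove = %d\n", can_remove(&m, 0)); /* 0 */
		return 0;
	}
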
