Neil Brown wrote:
> 2.6.12 does support reducing the number of drives in a raid1, but it
> will only remove drives from the end of the list. e.g. if the
> state was
>     58604992 blocks [3/2] [UU_]
> then it would work. But as it is
>     58604992 blocks [3/2] [_UU]
> it won't. You could fail the last drive (hdc8) and then add it back
> in again. This would move it to the first slot, but it would cause a
> full resync which is a bit of a waste.

Thanks for your help! That's the route I took, and it worked ([2/2]
[UU]). The only hiccup was that when I rebooted, hdd2 was back in the
first slot by itself ([3/1] [U__]); I guess there was some contention
during device discovery. All I had to do was physically remove hdd,
though, and the remaining two came back as [2/2] [UU].
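
In case it helps anyone following along, the fail/remove/re-add cycle
only amounts to a few mdadm calls. A minimal Python sketch of it,
assuming the array is /dev/md0 (that name is just an example from my
side), would be roughly:

#!/usr/bin/env python
# Rough sketch of the fail/remove/re-add cycle (run as root).
# /dev/md0 is an assumed array name; /dev/hdc8 was the member sitting
# in the last slot on my box.  Substitute your own devices.
import subprocess

ARRAY = "/dev/md0"
MEMBER = "/dev/hdc8"

def mdadm(*args):
    # Run mdadm and raise if it returns a non-zero exit status.
    subprocess.check_call(["mdadm"] + list(args))

mdadm("--manage", ARRAY, "--fail", MEMBER)    # mark the member faulty
mdadm("--manage", ARRAY, "--remove", MEMBER)  # drop it from the array
mdadm("--manage", ARRAY, "--add", MEMBER)     # re-add; it lands in the
                                              # first free slot

Re-adding the partition is what moves it to the first slot and kicks
off the full resync Neil mentioned.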
> Since commit 6ea9c07c6c6d1c14d9757dd8470dc4c85bbe9f28 (about
> 2.6.13-rc4) raid1 will repack the devices to the start of the
> list when trying to change the number of devices.

I couldn't find a newer kernel RPM for FC3, and I was nervous about
building a new kernel myself and screwing up my system, so I took the
slot-rotate route instead. It only took about 20 minutes to resync (a
lot faster than building a new kernel would have been).
My main concern was that the resync would hit an unreadable sector on
the last remaining drive and I would lose the whole array. (That
didn't happen, though.) I looked for an mdadm command that would check
the remaining drive before I failed the one in the last slot, to help
avoid that worst-case scenario, but couldn't find one. Is there some
way to do that, for future reference?
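
The closest I came up with on my own was a plain read pass over the
remaining member before failing the other one, just to make sure every
block is at least readable. A rough Python sketch (the device name is
only a placeholder, and it needs root):

#!/usr/bin/env python
# Sketch: read every block of a raid member and report unreadable
# spots, so a later resync from it is less likely to hit a surprise
# read error.  /dev/hdd2 is a placeholder; point it at the member you
# intend to keep.
import os

DEVICE = "/dev/hdd2"
CHUNK = 1024 * 1024  # read 1 MiB at a time

fd = os.open(DEVICE, os.O_RDONLY)
offset = 0
bad = 0
try:
    while True:
        try:
            data = os.read(fd, CHUNK)
        except OSError as e:
            # Unreadable region: note it and skip ahead one chunk.
            print("read error at byte %d: %s" % (offset, e))
            bad += 1
            offset += CHUNK
            os.lseek(fd, offset, os.SEEK_SET)
            continue
        if not data:
            break  # end of device
        offset += len(data)
finally:
    os.close(fd)

print("scanned %d bytes, %d unreadable chunk(s)" % (offset, bad))

It is not a real surface scan, but a clean pass at least means the
kernel could read every block once, which was all I was after.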
Cheers,
11011011