In a RAID configuration with a spare disk, if one of the active disks
fails, the spare is automatically pulled in to replace the failed
disk. If I then remove the failed disk with raidhotremove and add a
replacement with raidhotadd, the new disk becomes the spare. Should I
update /etc/raidtab in that case, since it no longer matches the
actual state of the array? The device listed under spare-disk in
raidtab is no longer the real spare.
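To make the mismatch concrete, here is the kind of raidtab I mean
(device names are just placeholders for my setup). After /dev/sdd1
takes over for a failed /dev/sdb1 and a replacement is raidhotadd'ed,
the disk listed as spare-disk below is actually an active member:

```
raiddev /dev/md0
    raid-level              5
    nr-raid-disks           3
    nr-spare-disks          1
    persistent-superblock   1
    chunk-size              32
    device                  /dev/sda1
    raid-disk               0
    device                  /dev/sdb1
    raid-disk               1
    device                  /dev/sdc1
    raid-disk               2
    device                  /dev/sdd1
    spare-disk              0
```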
A related question: what happens if three drives are marked as bad in
a RAID-5 array with a hot spare? This happened to me because of a
hardware problem (most likely APIC errors). Although the disks
themselves are probably fine, I couldn't start the array once they
were marked as failed. raidhotadd and raidhotremove refused to run
because the array wasn't active, but I couldn't start the array
either, since too many devices were marked as failed. Something like
this could easily happen with a loose SCSI cable or a similar
hardware fault: even though the disks may be fine, once too many are
marked as failed you seem to be stuck.
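As far as I can tell, the only way out with raidtools is the one the
Software-RAID HOWTO describes: rewrite the superblocks by re-running
mkraid against the existing raidtab. That is dangerous if raidtab no
longer matches the array's real disk order and chunk size, which is
exactly why I'm unsure about it. Roughly (device name is just my
setup, and the exact force flag may vary with the raidtools version):

```
# Stop the array if it is partially running
raidstop /dev/md0

# Rewrite the superblocks from the existing /etc/raidtab.
# Data survives only if raidtab still matches the real layout;
# some raidtools versions insist on --really-force instead of --force.
mkraid --force /dev/md0
```

Is that really the intended recovery path, or is there a safer way to
clear the failed flags on disks that are actually fine?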
Andy
[EMAIL PROTECTED]