Hi Neil,

I am hoping you are going to tell me this is already solved,
but here goes...

scenario:
        hda4, hdb4, and hdc4 in a raid 5 with no hotspare.


With 2.4.3 XFS kernels, it seems that a raid 5 does not come
up correctly if the first disk is unavailable.  The error messages
from the md driver that appear in the syslog are:

md: could not lock hda4, zero size?  marking faulty
md: could not import hda4.
md: autostart hda4 failed!

The same does not happen if hdb4 or hdc4 is unavailable: the
raid 5 comes up as expected.  You can work around this by
marking hda4 as "failed-disk" in /etc/raidtab and running
mkraid --force on the appropriate raid device to rewrite the
superblock(s), but it would be nice if the raid 5 set at least
came up in degraded mode, as it does when hdb4 or hdc4 is
disabled at boot, so we could repair the condition from there.
We are NOT using kernel autodetection; we run raidstart on the
raid device in rc.sysinit.
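
For reference, the workaround we use looks roughly like this
(a sketch only; the md device name, chunk size, and disk ordering
below are assumptions from our setup and may differ on yours):

        # /etc/raidtab (relevant part) -- mark the missing disk failed
        raiddev /dev/md0
            raid-level              5
            nr-raid-disks           3
            nr-spare-disks          0
            persistent-superblock   1
            chunk-size              64
            device                  /dev/hda4
            failed-disk             0
            device                  /dev/hdb4
            raid-disk               1
            device                  /dev/hdc4
            raid-disk               2

        # rewrite the superblocks and bring the set up degraded:
        mkraid --force /dev/md0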

Before we attend to it, is this expected behavior or not?

thanks, Scott
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]