Chris R. Brown ([EMAIL PROTECTED]) wrote on 17 May 1999 21:08:
>We've experienced a few odd anomalies during testing our IDE RAID-5
>array (6 x 16 GB = 80 GB usable after parity).
>
>We started with a good running array and did an e2fsck to ensure its
>integrity...
>
>We simulated a drive failure by disconnecting a drive's power, and if
>the IDE channel contained a second drive in the RAID5 array, the array
>was permanently hosed and couldn't be used, even though the
>RAID5 driver would report that it was running in ok degraded mode (5
>of 6 drives) and ALL remaining drives were functional and could be
>accessed.
It's been said that with IDE, if the master drive fails, the slave on
the same channel also becomes unavailable, which means you lose TWO
drives at once (I haven't tested this myself). The array is therefore
hosed: raid5 protects against the failure of only one drive.
>We reasoned that this was because the second IDE drive (on the
>channel with the failure) temporarily was offline for a brief instant
>during the "failure". Can anyone confirm these findings, and if so,
>do they imply that elements of a RAID array must be on separate IDE
>channels?
With IDE it seems to be the case. Anyway, you'll want separate
channels for performance reasons too: two drives sharing one IDE
channel can't transfer data simultaneously.
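For illustration, a hypothetical /etc/raidtab for a six-drive array with
every member as the master on its own channel might look like the sketch
below (the device names and chunk size are assumptions; adjust them to
your controller layout):

```
# /etc/raidtab -- six-drive raid5, one drive per IDE channel (sketch)
raiddev /dev/md0
    raid-level              5
    nr-raid-disks           6
    nr-spare-disks          0
    persistent-superblock   1
    chunk-size              32

    device      /dev/hda1       # channel 0 master
    raid-disk   0
    device      /dev/hdc1       # channel 1 master
    raid-disk   1
    device      /dev/hde1       # channel 2 master
    raid-disk   2
    device      /dev/hdg1       # channel 3 master
    raid-disk   3
    device      /dev/hdi1       # channel 4 master
    raid-disk   4
    device      /dev/hdk1       # channel 5 master
    raid-disk   5
```

With this layout a single-channel failure costs the array at most one
member, which degraded raid5 can survive.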
>It is our impression that the RAID5 array will not gracefully shut
>down, and most likely be corrupted if two drives temporarily fail, or
>even go off line at once.
Certainly, with raid5: two drives going offline at once exceeds what
the single parity can reconstruct.
> Secondly, we had several instances where the RAID5 driver
>reported that it was running in degraded mode with four out of six
>drives functioning (Note: This array had no spares) - a seeming
>impossibility, but the array continued to operate. Is this a bug?
Oops... :-(
>In these cases e2fsck found excessive errors and no data could be
>used.
As expected...
>Third, we tried restarting the array, sometimes switching drives
>around on different channels and couldn't get all drives to be
>properly recognized by the RAID5 driver even though we correctly
>updated the /etc/raidtab file. Would turning off the
>persistent-superblock feature help out here?
With persistent superblocks, which are the default with raid5, the
kernel CAN recognize the drives in scrambled order. I tested it with
scsi, and it worked perfectly. Without persistent superblocks there's
no way for this to happen, so turning the feature off would make
things worse, not better.
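For reference, the usual raidtools sequence with persistent superblocks
enabled is roughly this (a sketch only; exact behavior depends on your
raidtools and kernel versions):

```
# Create the array once; this writes a persistent superblock
# onto each member device
mkraid /dev/md0

# Later, start the array -- the kernel reads each member's own
# superblock, so drives are recognized even if they were moved
# to different channels since the last run
raidstart /dev/md0

# Check the array state (degraded mode, resync progress, etc.)
cat /proc/mdstat

# Shut the array down cleanly
raidstop /dev/md0
```

The point is that the superblock on each disk, not the order of entries
in /etc/raidtab, is what identifies the members at start time.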