On 03/28/2018 12:32 PM, Fred Cisin via cctalk wrote:
> With very unreliable drives, that isn't acceptable. If each "drive" within the RAID were itself a RAID, . . . Getting to be a complicated controller, or cascading controllers, . . .

Many of the SCSI / SAS RAID controllers that I've worked with over the last 10+ years have this cascading controller functionality. Most of the RAID controllers that I've worked with would let you build a mirror or stripe across some sort of underlying RAID. Typical examples are striping (RAID 0) across mirrors (RAID 1, i.e. RAID 10) or across multiple RAID 5 arrays (RAID 50).
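For what it's worth, you can do the same nesting in software with Linux mdadm. A rough sketch (device names are hypothetical; modern mdadm can also do --level=10 directly):

```shell
# Build two RAID 1 mirrors from four (hypothetical) disks.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# Stripe (RAID 0) across the two mirrors -- RAID 10 built by hand,
# i.e. the "cascading controller" arrangement done in one box.
mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2
```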

> 'course not.  Besides MTBF for calculating the probability of a second drive failing within N hours, must also consider other factors, such as external influences causing more than one drive to go, and the essentially non-linear aspect of a failure rate curve.
You also need to take into account the additional I/O load imposed on the remaining drives during a rebuild.
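To put a rough number on the first part: if you assume independent drives with exponentially distributed failures (a big assumption, for exactly the reasons above -- shared power, shared heat, same manufacturing batch, rebuild I/O stress), the chance that at least one of the k surviving drives fails during an N-hour rebuild is 1 - exp(-k*N/MTBF). A quick back-of-the-envelope sketch:

```python
import math

def p_second_failure(mtbf_hours: float, rebuild_hours: float,
                     surviving_drives: int) -> float:
    """Probability that at least one surviving drive fails during the
    rebuild window, under the (optimistic) independent-exponential
    model; real correlated failures make the true risk higher."""
    per_drive = 1.0 - math.exp(-rebuild_hours / mtbf_hours)
    return 1.0 - (1.0 - per_drive) ** surviving_drives

# e.g. 7 surviving drives, a claimed 1,000,000-hour MTBF, 24-hour rebuild
print(p_second_failure(1_000_000, 24, 7))
```

Even with a generous MTBF the risk is nonzero, and it scales with both the rebuild time and the number of remaining drives -- which is the argument for mirrored sub-arrays in the first place.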

I used to routinely run into software (Solstice DiskSuite?) RAID 1 mirrors on Solaris boxen for the OS (/) where different parts of each drive had failed. So we'd end up with a situation where we had a decent RAID, but we couldn't safely replace either disk. Recovery usually involved taking a full backup of the machine, replacing both disks, and restoring the data.



--
Grant. . . .
unix || die
