On 4/4/07, Ric Wheeler <[EMAIL PROTECTED]> wrote:


Mirko Benz wrote:
> Neil,
>
> Exactly what I had in mind.
>
> Some vendors claim they do parity checking for reads. Technically it
> should be possible for Linux RAID as well but is not implemented – correct?
>
> Reliability data for unrecoverable read errors:
> - enterprise SAS drive (ST3300655SS): 1 in 10^16 bits transferred, ~ 1
> error in 1.1 PB
> - enterprise SATA drive (ST3500630NS): 1 in 10^14 bits transferred, ~ 1
> error in 11 TB
>
> For a single SATA drive @ 50 MB/s it takes on average 2.7 days to
> encounter an error.
> For a large RAID with several drives the expected time to the first
> error becomes much shorter, or am I viewing this wrong?
>
> Regards,
> Mirko
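The arithmetic above checks out; as a quick sketch (assuming binary TiB/PiB units and a sustained 50 MiB/s read stream), the quoted figures can be reproduced like this:

```python
# Sketch of the URE-rate arithmetic quoted above.
# Assumptions: binary TiB/PiB units, a sustained 50 MiB/s read stream.

def bytes_per_ure(exponent):
    """Expected bytes read per unrecoverable read error (URE)
    for a spec-sheet rate of 1 error in 10^exponent bits."""
    return 10 ** exponent / 8

print(bytes_per_ure(16) / 2**50)   # SAS  ST3300655SS: ~1.1 PiB per URE
print(bytes_per_ure(14) / 2**40)   # SATA ST3500630NS: ~11.4 TiB per URE

# Single SATA drive streaming at 50 MiB/s:
days = bytes_per_ure(14) / (50 * 2**20) / 86400
print(days)   # ~2.7 to 2.8 days to the expected first URE

# An array of n such drives streams n times the data in the same time,
# so the expected time to the first URE anywhere drops roughly as 1/n.
for n in (4, 8, 16):
    print(n, days / n)
```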

> One note: if the drive itself notices the unrecoverable read error, MD
> will see it as an IO error and rebuild the stripe.
>
> What you need the parity check on read for is to catch errors that do not
> originate at the disk sector level, but sneak in from DRAM, the HBA, or
> uncorrected wire-level errors.

For the raid5 case the corruption can be detected, but not corrected.
Is the expectation that the administrator will be notified to restore
from backup?
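To make the raid5 limitation concrete, here is a minimal sketch (plain Python, not MD's actual code path) of why an XOR parity check on read detects corruption but cannot locate it:

```python
# Toy raid5 stripe check: the parity chunk is the XOR of the data
# chunks. Recomputing it on read detects a mismatch, but the same
# mismatch results no matter which chunk was corrupted, so the bad
# chunk cannot be identified (and hence not corrected).

from functools import reduce

def xor_parity(chunks):
    """XOR all data chunks column-wise into one parity chunk."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]
parity = xor_parity(data)                   # stored parity chunk

assert xor_parity(data) == parity           # clean stripe: check passes

corrupted = [b"\x11\x22", b"\x33\x45", b"\x55\x66"]   # one flipped bit
assert xor_parity(corrupted) != parity      # detected...

# ...but corrupting any other chunk (or the parity chunk itself) gives
# the same kind of mismatch, so there is nothing to point at.
```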

Are there any raid6 solutions that take advantage of what hpa has proved?
http://marc.info/?l=linux-raid&m=117333726129771&w=2
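For reference, a toy sketch of the result hpa describes in that post: with raid6's two syndromes P (XOR) and Q (Reed-Solomon over GF(2^8)), a single silently corrupted data drive can be located *and* repaired. The field arithmetic below uses the 0x11d polynomial, as the kernel's raid6 code does; the tiny stripe is illustrative, and the sketch assumes the corruption hit a data drive rather than P or Q.

```python
# GF(2^8) arithmetic with the 0x11d reduction polynomial (same field
# as the kernel raid6 code), generator g = 2.

def gf_mul(a, b):
    """Multiply in GF(2^8) via shift-and-reduce."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def syndromes(data):
    """P = XOR of data bytes; Q = sum of d_i * g^i."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(d, gf_pow(2, i))
    return p, q

def locate_and_fix(data, p, q):
    """Assuming at most one corrupt *data* byte, find and repair it."""
    p2, q2 = syndromes(data)
    dp, dq = p ^ p2, q ^ q2
    if dp == 0 and dq == 0:
        return data                          # stripe is clean
    # A corrupt drive z satisfies dq = dp * g^z, so z is the discrete
    # log of dq/dp; brute-force it over the drive indices.
    z = next(i for i in range(len(data)) if gf_mul(dp, gf_pow(2, i)) == dq)
    fixed = list(data)
    fixed[z] ^= dp                           # restore the original byte
    return fixed

good = [0x11, 0x22, 0x33, 0x44]
p, q = syndromes(good)
bad = [0x11, 0x22, 0x99, 0x44]               # drive 2 silently corrupted
assert locate_and_fix(bad, p, q) == good
```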

I am prototyping a writeback caching policy for MD; it seems that
plugging in different read policies would be a straightforward extension.
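Purely as an illustration of that extension point (the names here are hypothetical, not MD's API), a pluggable read policy could be as small as:

```python
# Hypothetical read-policy hook: a plain pass-through policy next to a
# verify-on-read policy that recomputes stripe parity on every read.
# Names and layout are invented for illustration, not MD internals.

from dataclasses import dataclass
from functools import reduce

@dataclass
class Stripe:
    data: list      # per-drive data chunks (bytes)
    parity: bytes   # stored XOR parity chunk

    @staticmethod
    def compute_parity(chunks):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

class PlainRead:
    """Return the data as read, no extra work."""
    def read(self, stripe):
        return stripe.data

class VerifyOnRead:
    """Recompute parity on every read and flag silent corruption."""
    def read(self, stripe):
        if Stripe.compute_parity(stripe.data) != stripe.parity:
            raise IOError("parity mismatch on read")
        return stripe.data

chunks = [b"\x01\x0f", b"\x02\xf0"]
s = Stripe(data=chunks, parity=Stripe.compute_parity(chunks))
assert VerifyOnRead().read(s) == s.data
```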

> ric

Dan
