Has anyone here read the article "Why RAID 5 stops working in 2009" at
http://blogs.zdnet.com/storage/?p=162 ?

Does RAIDZ have the same chance of hitting an unrecoverable read error as 
Linux RAID 5 when the array has to be rebuilt after a disk failure?  I 
imagine so, since the same physical limits apply to the underlying hard 
drives either way.  Granted, the odds of failure in my case shouldn't be 
nearly as high, since I will most likely be using three or four 750gb 
drives, not something on the order of 10tb.
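
Out of curiosity, I ran the back-of-the-envelope numbers myself.  This is
just a sketch using the commonly quoted consumer-drive spec of one URE per
10^14 bits read (the figure the ZDNet article works from); the drive counts
and the 750gb size are the assumptions from my setup above:

    # Probability of hitting at least one unrecoverable read error (URE)
    # while rebuilding a single-parity array after one disk has failed.
    # Assumes 1 URE per 10^14 bits read (typical consumer-drive spec).
    import math

    URE_PER_BIT = 1e-14

    def p_rebuild_ure(surviving_disks, disk_bytes):
        bits_read = surviving_disks * disk_bytes * 8  # each survivor is read in full
        # P(>=1 URE) = 1 - (1 - p)^bits; expm1/log1p keep it numerically stable
        return -math.expm1(bits_read * math.log1p(-URE_PER_BIT))

    for total in (3, 4):
        p = p_rebuild_ure(total - 1, 750e9)
        print("%d x 750gb raidz, one disk lost: %.1f%% chance of a URE"
              % (total, 100 * p))

That prints roughly 11% for the 3-disk case and 16% for the 4-disk case; 
nowhere near the near-certainty the article projects for 10tb-class arrays, 
but not negligible either.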

With my OpenSolaris NAS, I will be scrubbing every week for consumer-grade 
drives (or every month for enterprise-grade), as recommended in the ZFS Best 
Practices Guide.  If "zpool status" shows that scrubs are fixing more and 
more errors over time, would that mean the disk is in fact headed toward 
failure, or could it simply be that the pool naturally holds more data than 
it used to?
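
In case it's useful, here's the rough script I'm planning to run from cron
to watch the per-device CKSUM counters between scrubs.  It just shells out
to "zpool status" and assumes the classic "NAME STATE READ WRITE CKSUM"
table layout ("tank" is a placeholder pool name), so treat it as a sketch
rather than anything robust:

    #!/usr/bin/env python
    # Flag devices whose CKSUM counter is nonzero after a scrub.
    # Assumes the classic "NAME STATE READ WRITE CKSUM" layout of
    # "zpool status"; large counts shown as e.g. "1.2K" are not parsed.
    import subprocess

    def checksum_errors(pool):
        out = subprocess.check_output(["zpool", "status", pool])
        errors = {}
        in_table = False
        for line in out.decode().splitlines():
            fields = line.split()
            if fields[:5] == ["NAME", "STATE", "READ", "WRITE", "CKSUM"]:
                in_table = True       # header row of the device table
                continue
            if in_table:
                if len(fields) < 5 or not fields[4].isdigit():
                    break             # past the end of the device table
                if int(fields[4]) > 0:
                    errors[fields[0]] = int(fields[4])
        return errors

    if __name__ == "__main__":
        for dev, count in sorted(checksum_errors("tank").items()):
            print("%s: %d checksum errors" % (dev, count))

Logging that output after each weekly scrub should make it obvious whether
the counts are climbing on one particular disk (which would point at the
disk itself) or staying flat.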
 
 