I would love to hear where you think the RAID5 algorithm is flawed and with 
which version of firmware.

I have seen drives ack writes without requesting any data, return the wrong 
relative offsets to the controller, etc.
I have seen FC interface chips do all sorts of things that lead to data 
corruption...

I would wager that 90% of the data corruptions that we looked at on the T3/6120 
platforms were not array firmware issues, but host, drive, or FC microcode 
issues....

Personally, I always found it fun to discover that the applications were doing 
things that were out of whack...like starting a read for a block range before 
the write completes (i.e., the host had not received any status back yet)...
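A minimal sketch of that misbehavior (hypothetical host code, not any real driver): the application issues a read for a block range before the preceding write has returned status, so the read may observe old data, new data, or a torn mix.

```python
# Sketch of a read-before-write-ack race. The disk and the slow write
# are stand-ins; the point is that the read below is issued while the
# write is still in flight, before any completion status has returned.
import threading
import time

disk = {0: b"old"}

def slow_write(addr, data):
    time.sleep(0.01)    # write still traveling down the I/O stack
    disk[addr] = data   # only after this would status be returned

t = threading.Thread(target=slow_write, args=(0, b"new"))
t.start()
result = disk[0]        # read issued without waiting for write status
t.join()
# 'result' may be b"old" or b"new": the ordering is simply undefined
```

The fix on the host side is trivial in principle: do not issue the dependent read until the write's completion status has arrived.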

If you do have a case in mind, please let me know...

That being said, I am a proponent of ZFS because of the issues it can solve 
that are not easy to solve in block-level RAID....
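One concrete example of what ZFS can do that block-level RAID cannot: it keeps the checksum of each block with the *pointer* to that block, not with the block itself, so a drive or HBA that silently corrupts data cannot also "fix" the checksum. A toy sketch of that idea (not ZFS code, just the concept), assuming a dict as the backing store:

```python
# End-to-end checksum sketch: the checksum travels with the block
# pointer (here, the return value of write_block), so corruption
# anywhere in the I/O path is detected on read.
import hashlib

def write_block(storage, addr, data):
    storage[addr] = data
    # The parent block pointer would carry this checksum.
    return hashlib.sha256(data).digest()

def read_block(storage, addr, expected_checksum):
    data = storage[addr]
    if hashlib.sha256(data).digest() != expected_checksum:
        raise IOError("checksum mismatch: corruption in the I/O path")
    return data
```

A block-level RAID controller, by contrast, only sees what the drive handed back and has no independent record of what the filesystem originally wrote.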

BTW, most RAID5 implementations do not verify parity every time they pull data 
from a drive...so in that respect they are all open to missing silent data 
corruption. It is funny to hear management go down the warpath about 
implementing data integrity features, as long as it does not affect 
performance...
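To make the trade-off concrete, here is a minimal sketch (not any vendor's firmware): RAID5 parity is the XOR of the data blocks in a stripe, so verifying it on read would catch a silently flipped bit, but only at the cost of reading the entire stripe instead of one block.

```python
# RAID5 parity check sketch: parity = XOR of all data blocks in a stripe.
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def read_with_parity_check(stripe):
    """stripe = {'data': [block, ...], 'parity': block}.
    Returns the data blocks, raising if the parity does not match."""
    if xor_blocks(stripe['data']) != stripe['parity']:
        raise IOError("parity mismatch: silent corruption detected")
    return stripe['data']
```

The typical fast path just returns the one requested block without touching parity, which is exactly why a bit flipped on a single drive sails through unnoticed until a rebuild or a scrub.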
 
 
This message posted from opensolaris.org
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss