User Name wrote:
> Hello relling,
>
> Thanks for your comments.  FWIW, I am building an actual hardware array, so 
> even though I _may_ put ZFS on top of the hardware array's 22TB "drive" that 
> the OS sees (I may not), I am focusing purely on the controller rebuild.
>
> So, setting aside ZFS for the moment, am I still correct in my intuition that 
> there is no way a _controller_ needs to touch a disk more times than there 
> are bits on the entire disk, and that this calculation people are doing is 
> faulty?
>   

I think the calculation is correct, at least for the general case.
At FAST this year there was an interesting paper which tried to
measure this exposure in a large field sample using checksum
verifications.  I like this paper, and it validates what we see in the
field -- the most common failure mode is the unrecoverable read.
http://www.usenix.org/event/fast08/tech/full_papers/bairavasundaram/bairavasundaram.pdf
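For anyone who wants to check "the calculation" themselves, here is a
minimal sketch of the usual back-of-the-envelope estimate: the chance of
hitting at least one unrecoverable read error (URE) when every bit of an
array is read once during a rebuild.  The 1e-14 per-bit error rate and
the 22 TB capacity are illustrative assumptions (typical consumer-drive
spec-sheet numbers and the array size mentioned above), not measured
values from the paper.

```python
import math

def p_ure_during_rebuild(capacity_bytes: float, ber: float = 1e-14) -> float:
    """Probability of >= 1 URE when reading `capacity_bytes` once,
    assuming independent per-bit failures at rate `ber` (an idealized
    model -- real errors cluster, as the FAST paper shows)."""
    bits = capacity_bytes * 8
    # 1 - (1 - ber)**bits, computed stably via log1p/expm1
    return -math.expm1(bits * math.log1p(-ber))

# Illustrative: a full read of a 22 TB array at a 1e-14 bit error rate
p = p_ure_during_rebuild(22e12)
print(f"P(at least one URE during rebuild) ~ {p:.2f}")
```

The point of the model is that the exposure grows with the number of
bits read, which is why large rebuilds make a single URE quite likely
even when the per-bit rate looks tiny.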

I should also point out that ZFS is already designed to offer some
diversity, which should help guard against spatially clustered
media failures.  hmmm... another blog topic in my queue...
 -- richard


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
