2012-10-25 15:30, Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Karl Wagner
I can only speak anecdotally, but I believe it does.
Watching zpool iostat, it does read all data on both disks in a mirrored pair.
Logically, it would not make sense not to verify all redundant data.
The point of a scrub is to ensure all data is correct.
Same for me.
Think about it: When you write some block, it computes parity bits, and writes
them to the redundant parity disks. When you later scrub the same data, it
wouldn't make sense to do anything other than repeating this process, to verify
all the disks including parity.
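As a sketch of that write path, here is a minimal single-parity model. The function name and byte-wise XOR are illustrative simplifications, not ZFS code; real raidz supports up to three parity columns and uses Galois-field math for the higher ones:

```python
from functools import reduce

def xor_parity(sectors):
    """Single parity sector: byte-wise XOR of all data sectors (raidz1-style)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), sectors)

# "Writing" a block: the data sectors go to the data disks,
# and the computed parity goes to the parity disk.
data_sectors = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data_sectors)  # b"@@@@" for this toy data
```

A scrub that wanted to verify the parity disk could simply recompute this value from the data sectors and compare.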
Logically, yes - I agree this is what we expect to be done.
However, at least in the normal ZFS read pipeline, reads
of redundant copies and parities only kick in if the first
read variant of the block had errors (HW IO errors, checksum
mismatches).
In the case of raidzN, normal reads should first try the
plain userdata sectors alone (with IO speeds like striping),
and only on error retry with permutations based on the parity
sectors and different combinations of userdata sectors, until
they produce a block whose checksum matches the expected
value, or fail by running out of combinations.
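That retry loop can be sketched as follows. This is a hypothetical toy model, not the actual ZFS pipeline: single parity only, fixed-size sectors, and sha256 standing in for the real block checksum:

```python
import hashlib
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def checksum(block):
    return hashlib.sha256(block).digest()

def read_block(sectors, parity, expected_checksum):
    """Hypothetical raidz1-style read path (single parity, for illustration).

    Fast path: join the plain data sectors and check the block checksum.
    On mismatch, retry by reconstructing each data sector in turn from
    parity plus the remaining sectors, until some combination matches
    the checksum or we run out of combinations.
    """
    candidate = b"".join(sectors)
    if checksum(candidate) == expected_checksum:
        return candidate  # fast path, no parity needed

    for bad in range(len(sectors)):
        # Rebuild sector `bad` as parity XOR all the other sectors.
        rebuilt = reduce(xor, [parity] + [s for i, s in enumerate(sectors) if i != bad])
        candidate = b"".join(rebuilt if i == bad else s for i, s in enumerate(sectors))
        if checksum(candidate) == expected_checksum:
            return candidate
    raise IOError("unrecoverable: no sector combination matched the checksum")

# Example: sector 1 is silently corrupted; parity lets the read recover it.
good = [b"AAAA", b"BBBB", b"CCCC"]
parity = reduce(xor, good)
expected = checksum(b"".join(good))
damaged = [b"AAAA", b"XXXX", b"CCCC"]
assert read_block(damaged, parity, expected) == b"AAAABBBBCCCC"
```

Note that the block-level checksum is what arbitrates between combinations; the parity alone cannot say which sector is bad.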
If scrubbing works the way we "logically" expect it to, it
should enforce validation of such combinations for each read
of each copy of a block, in order to ensure that parity sectors
are intact and can be used for data recovery if a plain sector
fails. If so, raidzN scrubs should show up as noticeably more
compute-intensive than comparable mirror scrubs.
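Under that expectation, scrubbing one raidz1-style block would look roughly like this (hypothetical names, single parity and sha256 as simplifications, matching the toy model above rather than real ZFS internals):

```python
import hashlib
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def scrub_block(sectors, stored_parity, expected_checksum):
    """Hypothetical scrub of one raidz1-style block: verify the data
    checksum AND that the parity disk holds the recomputed parity,
    so parity is known-good before it is ever needed for recovery."""
    data_ok = hashlib.sha256(b"".join(sectors)).digest() == expected_checksum
    parity_ok = reduce(xor, sectors) == stored_parity
    return data_ok and parity_ok

good = [b"AAAA", b"BBBB", b"CCCC"]
parity = reduce(xor, good)        # b"@@@@"
cksum = hashlib.sha256(b"".join(good)).digest()
assert scrub_block(good, parity, cksum)
# A flipped byte on the parity disk is caught even though the data reads fine:
assert not scrub_block(good, b"@@@!", cksum)
```

The extra parity recomputation is the compute cost the paragraph above predicts; a mirror scrub only needs to checksum each copy.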
I thought about it, wondered and posted the question, and went
on to my other work. I have not (yet) dug into the code to find
out first-hand, partly because the gurus here might know the answer and reply
faster than I dig into it ;)
zfs-discuss mailing list