On Apr 24, 2012, at 8:35 AM, Jim Klimov wrote:
> On 2012-04-24 19:14, Tim Cook wrote:
>> Personally unless the dataset is huge and you're using z3, I'd be
>> scrubbing once a week. Even if it's z3, just do a window on Sunday's or
>> something so that you at least make it through the whole dataset at
>> least once a month.
It depends. There are cascading failure modes in your system that are not
media related and can bring your system to its knees. Scrubs and resilvers
can trigger or exacerbate these.
> +1 I guess
> Among other considerations, if the scrub does find irreparable errors,
> you might have some recent-enough backups or other sources of the data,
> so the situation won't be as fatal as when you look for errors once a
> year ;)
There is considerable evidence that scrubs propagate errors on some systems
(no such evidence for ZFS systems). So a blanket high-frequency scrub policy
is not a good idea.
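For those who do follow the periodic-window advice above, a scrub can be kicked off from cron. A minimal sketch of the "Sunday window, at least once a month" idea as a root crontab entry; the pool name `tank` is an assumption, and the `%` is escaped because cron treats a literal `%` specially:

```shell
# Hypothetical crontab entry: at 02:00 every Sunday, scrub pool "tank"
# only if it is the first Sunday of the month (day-of-month <= 7).
# m h dom mon dow  command
0 2 * * 0 [ "$(date +\%d)" -le 7 ] && /sbin/zpool scrub tank
```

The date test is needed because cron ORs the day-of-month and day-of-week fields when both are restricted, so `0 2 1-7 * 0` would fire on every day 1-7 as well as every Sunday.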
>> There's no reason NOT to scrub that I can think of other than the
>> overhead - which shouldn't matter if you're doing it during off hours.
> "I heard a rumor" that HDDs can detect reading flaky sectors
> (i.e. detect a bit-rot error and recover thanks to ECC), and
> in this case they would automatically remap the recovered
> sector. So reading the disks in (logical) locations where
> your data is known to be may be a good thing to prolong its
> available life.
This is a SMART feature, and the disks do it automatically for you.
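As a sketch (assuming smartmontools is installed), the drive's remap activity shows up in the SMART attribute table: attribute 5 (Reallocated_Sector_Ct) counts sectors already remapped, and 197 (Current_Pending_Sector) counts flaky sectors awaiting remap. On a live system you would run `smartctl -A /dev/sda`; here a canned sample is fed in so the awk filter itself is illustrated:

```shell
# Canned sample of two rows from "smartctl -A" output (values invented
# for illustration); on real hardware, pipe smartctl output instead.
smart_sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       2'

# Print attribute name and raw value for IDs 5 and 197.
printf '%s\n' "$smart_sample" | awk '$1 == 5 || $1 == 197 { print $2, $NF }'
```

A nonzero and growing pending-sector count is the signal that reads (such as a scrub) are finding sectors the drive has not yet been able to remap.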
ZFS Performance and Training
zfs-discuss mailing list