ZFS already limits the amount of IO that a scrub can do. Putting
multiple pools on the same disk defeats ZFS's IO scheduler.* Scrubs are
just one example of the performance problems this will cause. I don't
think we should complicate the scrub script to accommodate this
scenario.
My suggestion is that you comment out the default scrub job in
/etc/cron.d/zfsutils-linux and replace it with something that meets your
needs. Don't change /usr/lib/zfs-linux/scrub, as that will get
overwritten on package upgrades.
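For reference, the stock entry runs the packaged scrub script on the
second Sunday of the month. It looks roughly like this (from memory;
check your own /etc/cron.d/zfsutils-linux, as the exact wording varies
between releases):

```shell
# Approximate stock /etc/cron.d/zfsutils-linux entry (illustrative):
# day-of-month 8-14 plus the Sunday test = second Sunday of the month.
24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub
```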
For example, you might scrub the pools on different weeks with something like
this:
24 0 1-7 * * root [ $(date +\%w) -eq 0 ] && zpool list -H -o health POOL1 2>/dev/null | grep -q ONLINE && zpool scrub POOL1
24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && zpool list -H -o health POOL2 2>/dev/null | grep -q ONLINE && zpool scrub POOL2
(Each entry must be on a single line. Note the day-of-month field
starts at 1, so the first Sunday falls in the range 1-7.)
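The trick in those entries is that cron's day-of-month range, combined
with the `date +%w` Sunday test, selects a particular Sunday of the
month (e.g. 8-14 is the second Sunday). A minimal sketch of that logic,
with a hypothetical helper and GNU date assumed:

```shell
#!/bin/sh
# would_scrub POOL START END DATE: emulate the cron entry's condition for
# a given date. The pool name is purely illustrative here.
would_scrub() {
    dom=$(date -d "$4" +%-d)   # day of month, no leading zero
    dow=$(date -d "$4" +%w)    # weekday, 0 = Sunday
    [ "$dow" -eq 0 ] && [ "$dom" -ge "$2" ] && [ "$dom" -le "$3" ]
}

# 2023-10-01 was the first Sunday of that month, 2023-10-08 the second.
would_scrub POOL1 1 7  2023-10-01 && echo "POOL1 scrubs"
would_scrub POOL2 8 14 2023-10-08 && echo "POOL2 scrubs"
```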
I'm going to boldly mark this Invalid. Others can override me,
obviously. Or, if you want to make more of a case, go for it.
* As a side note, in the general case such a configuration also implies
that one is using partitions, which means the Linux IO scheduler is
also in the mix, unless they're doing root-on-ZFS, in which case
zfs-initramfs sets the noop scheduler. I assume you're doing
root-on-ZFS, since you mentioned "One pool holds OS", so that's not an
issue for you personally.
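If you want to check which IO scheduler a disk is using, the active one
is the bracketed entry in /sys/block/<dev>/queue/scheduler, and writing
a name to that file switches it. A sketch of reading it, with the file
contents stubbed so the parsing is self-contained ("sda" and the sample
string are illustrative):

```shell
# Normally: contents=$(cat /sys/block/sda/queue/scheduler)
contents='mq-deadline kyber [bfq] none'
# The active scheduler is the name in square brackets.
active=$(printf '%s\n' "$contents" | sed -n 's/.*\[\([^]]*\)\].*/\1/p')
echo "active scheduler: $active"
# To disable it for a ZFS disk: echo none > /sys/block/sda/queue/scheduler
```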
** Changed in: zfs-linux (Ubuntu)
Status: Incomplete => Invalid
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1731735
Title:
zfs scrub starts on all pools simultaneously
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1731735/+subscriptions