On May 16, 2011, at 10:31 AM, Brandon High wrote:
> On Mon, May 16, 2011 at 8:33 AM, Richard Elling
> <richard.ell...@gmail.com> wrote:
>> As a rule of thumb, the resilvering disk is expected to max out at
>> around 80 IOPS for 7,200 rpm disks. If you see less than 80 IOPS,
>> then suspect the throttles or a broken data path.
> 
> My system was doing far less than 80 IOPS during resilver when I
> recently upgraded the drives. The old and new drives were both 5,400
> rpm-class drives (WD10EADS and Hitachi 5K3000 3TB), so I don't expect
> it to be super fast.
> 
> The worst resilver took 50 hours; the best, about 20. This is just my
> home server, which is lightly used. The clients (2-3 CIFS clients, 3
> mostly idle VBox instances using raw zvols, and 2-3 NFS clients) don't
> do a lot of writes.
> 
> Adjusting zfs_resilver_delay and zfs_resilver_min_time_ms sped things
> up a bit, which suggests that the default values may be too
> conservative for some environments.

I am more inclined to change the hires_tick value. The "delays" are in
units of clock ticks, and for Solaris the default clock tick is 10ms,
which I will argue is too large for modern disk systems. What this means
is that when the resilver, scrub, or memory throttle causes delays, each
delayed I/O waits some multiple of 10ms, so the effective IOPS is driven
to 10 or less. Unfortunately, these values are guesses and are probably
suboptimal for various use cases. OTOH, the prior behaviour of no
resilver or scrub throttle at all was also considered a bad thing.
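
For anyone who wants to experiment: the throttle variables Brandon
mentioned are live kernel tunables, so they can be inspected and changed
on the fly with mdb, or set persistently in /etc/system. A rough sketch
(the values below are illustrative starting points, not recommendations):

    # inspect the current settings (decimal)
    echo "zfs_resilver_delay/D" | mdb -k
    echo "zfs_resilver_min_time_ms/D" | mdb -k

    # loosen the resilver throttle on a running system
    # (0t marks a decimal value; takes effect immediately)
    echo "zfs_resilver_delay/W0t0" | mdb -kw
    echo "zfs_resilver_min_time_ms/W0t5000" | mdb -kw

To make the change survive a reboot, or to try the hires_tick approach
(1,000 clock ticks per second instead of 100, so each tick is 1ms), add
entries to /etc/system and reboot:

    * illustrative values, not recommendations
    set zfs:zfs_resilver_delay = 0
    set zfs:zfs_resilver_min_time_ms = 5000
    set hires_tick = 1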
 -- richard
