On Mon, Apr 25, 2011 at 4:45 PM, Richard Elling
<richard.ell...@gmail.com> wrote:
> If there is other work going on, then you might be hitting the resilver
> throttle. By default, it will delay 2 clock ticks, if needed. It can be turned

There is some other access to the pool from NFS and CIFS clients, but
not much, and it's mostly reads.

Setting zfs_resilver_delay seems to have helped some, based on the
iostat output. Are there other tunables?
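(For anyone following along: on OpenSolaris-era kernels the resilver
throttle can usually be adjusted live with mdb, or persistently in
/etc/system. A sketch of what I did — exact variable names and defaults
are version-dependent, so check your build before poking the kernel:)

```shell
# Disable the resilver throttle on a live kernel (requires root).
# zfs_resilver_delay is the number of clock ticks the resilver thread
# waits when other I/O is active; 0 disables the delay entirely.
echo zfs_resilver_delay/W0t0 | mdb -kw

# Verify the current value (printed in decimal).
echo zfs_resilver_delay/D | mdb -k

# To make it persistent across reboots, add to /etc/system instead:
#   set zfs:zfs_resilver_delay = 0
```

Note this is a live-kernel write, not something to script blindly; it
reverts on reboot unless the /etc/system entry is added.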

> Probably won't work because it does not make the resilvering drive
> any faster.

It doesn't seem like the devices are the bottleneck, even with the
delay turned off.

$ iostat -xn 60 3
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  369.2   11.5 5577.0   71.3  0.7  0.7    1.9    1.9  14  29 c2t0d0
  371.9   11.5 5570.3   71.3  0.7  0.7    1.7    1.8  13  29 c2t1d0
  369.9   11.5 5574.4   71.3  0.7  0.7    1.8    1.9  14  29 c2t2d0
  370.7   11.5 5573.9   71.3  0.7  0.7    1.8    1.9  14  29 c2t3d0
  368.0   11.5 5553.1   71.3  0.7  0.7    1.8    1.9  14  29 c2t4d0
  196.1  172.8 2825.5 2436.6  0.3  1.1    0.8    3.0   6  26 c2t5d0
  183.6  184.9 2717.6 2674.7  0.5  1.3    1.4    3.5  11  31 c2t6d0
  393.0   11.2 5540.7   71.3  0.5  0.6    1.3    1.5  12  26 c2t7d0
   95.8    1.2   95.6   16.2  0.0  0.0    0.2    0.2   0   1 c0t0d0
    0.9    1.2    3.6   16.2  0.0  0.0    7.5    1.9   0   0 c0t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  891.2   11.8 2386.9   64.4  0.0  1.2    0.0    1.3   1  36 c2t0d0
  919.9   12.1 2351.8   64.6  0.0  1.1    0.0    1.2   0  35 c2t1d0
  906.9   12.1 2346.1   64.6  0.0  1.2    0.0    1.3   0  36 c2t2d0
  877.9   11.6 2351.0   64.5  0.7  0.5    0.8    0.6  23  35 c2t3d0
  883.4   12.0 2322.0   64.4  0.2  1.0    0.2    1.1   7  35 c2t4d0
    0.8  758.0    0.8 1910.4  0.2  5.0    0.2    6.6   3  72 c2t5d0
  882.7   11.4 2355.1   64.4  0.8  0.4    0.9    0.4  27  34 c2t6d0
  907.8   11.4 2373.1   64.5  0.7  0.3    0.8    0.4  23  30 c2t7d0
 1607.8    9.4 1568.2   83.0  0.1  0.2    0.1    0.1   3  18 c0t0d0
    7.3    9.1   23.5   83.0  0.1  0.0    6.0    1.4   2   2 c0t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  960.3   12.7 2868.0   59.0  1.1  0.7    1.2    0.8  37  52 c2t0d0
  963.2   12.7 2877.5   59.1  1.1  0.8    1.1    0.8  36  51 c2t1d0
  960.3   12.6 2844.7   59.1  1.1  0.7    1.1    0.8  37  52 c2t2d0
 1000.1   12.8 2827.1   59.0  0.6  1.2    0.6    1.2  21  52 c2t3d0
  960.9   12.3 2811.1   59.0  1.3  0.6    1.3    0.6  42  51 c2t4d0
    0.5  962.2    0.4 2418.3  0.0  4.1    0.0    4.3   0  59 c2t5d0
 1014.2   12.3 2820.6   59.1  0.8  0.8    0.8    0.8  28  48 c2t6d0
 1031.2   12.5 2822.0   59.1  0.8  0.8    0.7    0.8  26  45 c2t7d0
 1836.4    0.0 1783.4    0.0  0.0  0.2    0.0    0.1   1  19 c0t0d0
    5.3    0.0    5.3    0.0  0.0  0.0    1.1    1.5   1   1 c0t1d0


-- 
Brandon High : bh...@freaks.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
