While this is not strictly a ZFS issue, I thought I'd post here since this and
the storage forums seem like my best bet for getting some help.

I have a machine that I recently set up with b130, b131 and b132. With each
build I have been playing around with ZFS raidz2 and mirroring to do a little
performance testing. The board is an Intel motherboard with a 6-port ICH10
SATA controller running in AHCI mode, and the OS is on a USB flash drive. One
particular drive out of the six shows a very high asvc_t practically all the
time. This is an excerpt from 'iostat -xnM c6t2d0 2':

    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
   70.5  502.0    0.0    4.1  0.0  1.3    0.0    2.2   0  54 c6t2d0
   50.5  137.5    0.0    3.0  0.0  0.7    0.0    3.9   0  47 c6t2d0
   71.0  163.5    0.0    4.8  0.0  0.8    0.0    3.4   0  61 c6t2d0
   13.5   29.5    0.0    1.0  0.0  2.6    0.0   61.4   0  88 c6t2d0
    1.0    0.5    0.0    0.0  0.0  3.6    0.0 2406.2   0 100 c6t2d0
    1.0    1.0    0.0    0.0  0.0  4.0    0.0 1993.4   0 100 c6t2d0
    1.0    1.5    0.0    0.0  0.0  4.0    0.0 1593.8   0 100 c6t2d0
    2.0    3.0    0.0    0.1  0.0  4.0    0.0  791.6   0 100 c6t2d0
    1.0    2.0    0.0    0.1  0.0  4.0    0.0 1320.3   0 100 c6t2d0
    1.0    5.0    0.0    0.3  0.0  3.6    0.0  595.1   0 100 c6t2d0

and here is the same drive shown alongside the others in the raidz2 pool:

                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    1.5    0.0    0.0  0.0  0.0    0.0    0.5   0   0 c6t0d0
    0.0    1.5    0.0    0.0  0.0  0.0    0.0    0.3   0   0 c6t1d0
    1.0    1.0    0.0    0.0  0.0  4.0    0.0 1994.8   0 100 c6t2d0
    1.0    1.5    0.0    0.0  0.0  0.0    0.0    5.2   0   1 c6t3d0
    1.0    1.5    0.0    0.0  0.0  0.0    0.0    6.9   0   1 c6t4d0
    1.0    1.5    0.0    0.0  0.0  0.0    0.0   10.1   0   2 c6t5d0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    1.0    5.5    0.0    0.2  0.0  0.0    0.0    1.6   0   1 c6t0d0
    1.0    5.5    0.0    0.2  0.0  0.0    0.0    1.5   0   1 c6t1d0
    2.0    3.5    0.0    0.1  0.0  4.0    0.0  721.8   0 100 c6t2d0
    1.0    5.5    0.0    0.2  0.0  0.0    0.0    1.9   0   1 c6t3d0
    1.0    5.5    0.0    0.2  0.0  0.0    0.0    1.6   0   1 c6t4d0
    2.0    5.5    0.0    0.2  0.0  0.0    0.0    3.1   0   2 c6t5d0
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    3.5    0.0    0.1  0.0  0.0    0.0    0.4   0   0 c6t0d0
    0.0    3.5    0.0    0.1  0.0  0.0    0.0    1.8   0   0 c6t1d0
    1.0    2.0    0.0    0.1  0.0  4.0    0.0 1327.1   0 100 c6t2d0
    1.0    3.5    0.0    0.1  0.0  0.0    0.0    4.9   0   1 c6t3d0
    1.0    3.5    0.0    0.1  0.0  0.0    0.0    3.9   0   1 c6t4d0
    1.0    3.5    0.0    0.1  0.0  0.0    0.0    2.0   0   1 c6t5d0

I have seen asvc_t go as high as 20,000 ms.

There do not appear to be any hardware errors, as 'iostat -e' shows:

          ---- errors ---
device  s/w h/w trn tot
sd0       0   0   0   0
sd2       0   0   0   0
sd3       0   0   0   0
sd4       0   0   0   0
sd5       0   0   0   0
sd6       0   0   0   0
sd7       0   0   0   0
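
If it helps, the FMA error log and the per-device error/identity counters can
also be queried as below (a sketch only; I have not pasted that output here):

  # per-device soft/hard/transport error counters plus vendor/model/serial
  iostat -En c6t2d0

  # any disk error telemetry the fault manager has logged
  fmdump -eV | more
  fmadm faulty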

'zpool iostat -v 2' also pauses for anywhere between 3 and 10 seconds before
it prints the stats for that particular drive in the pool:

                       capacity     operations    bandwidth
pool                alloc   free   read  write   read  write
------------------  -----  -----  -----  -----  -----  -----
data                 185G  5.26T      3    115  8.96K  2.48M
  raidz2             185G  5.26T      3    115  8.96K  2.48M
    c6t0d0              -      -      2     26  2.70K   643K
    c6t1d0              -      -      2     26  2.49K   643K

* ~10 SECOND PAUSE HERE *

    c6t2d0              -      -      2     24  2.81K   643K
    c6t3d0              -      -      2     26  2.75K   643K
    c6t4d0              -      -      2     26  2.45K   643K
    c6t5d0              -      -      2     26  2.71K   643K
------------------  -----  -----  -----  -----  -----  -----
rpool               3.50G  3.94G      0      0  9.99K   1010
  c5t0d0s0          3.50G  3.94G      0      0  9.99K   1010
------------------  -----  -----  -----  -----  -----  -----
swpool               102K  3.69G      0      0     19      0
  /dev/rdsk/c7t0d0   102K  3.69G      0      0     19      0
------------------  -----  -----  -----  -----  -----  -----

I have booted a Linux rescue CD with S.M.A.R.T. support (SystemRescueCd) and
run the 'long' self-test on each drive. All drives pass the test, and there
appear to be no errors reported against the drives under Linux either.
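
For reference, the self-tests were run roughly as below from the rescue CD
(the /dev/sdX names are illustrative; the drives map differently under Linux):

  # start the long (extended) offline self-test on one drive
  smartctl -t long /dev/sdc

  # after the estimated run time, check the result and the attributes
  smartctl -l selftest /dev/sdc
  smartctl -A /dev/sdc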

Can anyone shed any light on this issue, or suggest what I could try next? I
am inclined to discount hardware problems, given that I see no errors from the
live Linux CD. Maybe I should install Linux and see if the problem persists?
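
One thing I might try first is a raw sequential read from the suspect drive
outside of ZFS, compared against one of the well-behaved drives (a rough
sketch; the p0 whole-disk paths follow the controller numbering shown above):

  # raw read from the slow drive, bypassing ZFS
  ptime dd if=/dev/rdsk/c6t2d0p0 of=/dev/null bs=1024k count=2048

  # the same read from a known-good drive for comparison
  ptime dd if=/dev/rdsk/c6t3d0p0 of=/dev/null bs=1024k count=2048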

Cheers.