Hi,

snv_74, x4500, 48x 500GB, 16GB RAM, 2x dual core

# zpool create test c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 
c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c4t0d0 c4t1d0 c4t2d0 
c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t1d0 c5t2d0 c5t3d0 c5t5d0 c5t6d0 c5t7d0 
c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0 c6t6d0 c6t7d0 c7t0d0 c7t1d0 c7t2d0 
c7t3d0 c7t4d0 c7t5d0 c7t6d0 c7t7d0
[46x 500GB]

# ls -lh /test/q1
-rw-r--r--   1 root     root         82G Oct 18 09:43 /test/q1

# dd if=/test/q1 of=/dev/null bs=16384k &
# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
test         213G  20.6T    645    120  80.1M  14.7M
test         213G  20.6T  9.26K      0  1.16G      0
test         213G  20.6T  9.66K      0  1.21G      0
test         213G  20.6T  9.41K      0  1.18G      0
test         213G  20.6T  9.41K      0  1.18G      0
test         213G  20.6T  7.45K      0   953M      0
test         213G  20.6T  7.59K      0   971M      0
test         213G  20.6T  7.41K      0   948M      0
test         213G  20.6T  8.25K      0  1.03G      0
test         213G  20.6T  9.17K      0  1.15G      0
test         213G  20.6T  9.54K      0  1.19G      0
test         213G  20.6T  9.89K      0  1.24G      0
test         213G  20.6T  9.41K      0  1.18G      0
test         213G  20.6T  9.31K      0  1.16G      0
test         213G  20.6T  9.80K      0  1.22G      0
test         213G  20.6T  8.72K      0  1.09G      0
test         213G  20.6T  7.86K      0  1006M      0
test         213G  20.6T  7.21K      0   923M      0
test         213G  20.6T  7.62K      0   975M      0
test         213G  20.6T  8.68K      0  1.08G      0
test         213G  20.6T  9.81K      0  1.23G      0
test         213G  20.6T  9.57K      0  1.20G      0

So it's around 1GB/s.
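
Back of the envelope: ~1.2GB/s spread over 46 spindles is only about 26MB/s per disk, so the 
individual drives are nowhere near their sequential limit. Per-disk numbers can be 
double-checked with iostat while the dd runs, e.g. (the egrep pattern is just an example filter):

# iostat -xnzM 1 | egrep "device|c[0-7]t"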

# dd if=/dev/zero of=/test/q10 bs=128k &
# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
test         223G  20.6T    656    170  81.5M  20.8M
test         223G  20.6T      0  8.10K      0  1021M
test         223G  20.6T      0  7.94K      0  1001M
test         216G  20.6T      0  6.53K      0   812M
test         216G  20.6T      0  7.19K      0   906M
test         216G  20.6T      0  6.78K      0   854M
test         216G  20.6T      0  7.88K      0   993M
test         216G  20.6T      0  10.3K      0  1.27G
test         222G  20.6T      0  8.61K      0  1.04G
test         222G  20.6T      0  7.30K      0   919M
test         222G  20.6T      0  8.16K      0  1.00G
test         222G  20.6T      0  8.82K      0  1.09G
test         225G  20.6T      0  4.19K      0   511M
test         225G  20.6T      0  10.2K      0  1.26G
test         225G  20.6T      0  9.15K      0  1.13G
test         225G  20.6T      0  8.46K      0  1.04G
test         225G  20.6T      0  8.48K      0  1.04G
test         225G  20.6T      0  10.9K      0  1.33G
test         231G  20.6T      0      3      0  3.96K
test         231G  20.6T      0      0      0      0
test         231G  20.6T      0      0      0      0
test         231G  20.6T      0  9.02K      0  1.11G
test         231G  20.6T      0  12.2K      0  1.50G
test         231G  20.6T      0  9.14K      0  1.13G
test         231G  20.6T      0  10.3K      0  1.27G
test         231G  20.6T      0  9.08K      0  1.10G
test         237G  20.6T      0      0      0      0
test         237G  20.6T      0      0      0      0
test         237G  20.6T      0  6.03K      0   760M
test         237G  20.6T      0  9.18K      0  1.13G
test         237G  20.6T      0  8.40K      0  1.03G
test         237G  20.6T      0  8.45K      0  1.04G
test         237G  20.6T      0  11.1K      0  1.36G

Well, writing can be even faster than reading here... I guess the gaps are due to bug 
6415647.
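
If anyone wants to correlate those gaps with txg syncs, a rough DTrace sketch along these 
lines should show how long each spa_sync takes (untested here, and assuming fbt can see 
spa_sync on this build):

# dtrace -n 'fbt::spa_sync:entry { self->ts = timestamp; }
fbt::spa_sync:return /self->ts/ { @["spa_sync (ms)"] = quantize((timestamp - self->ts) / 1000000); self->ts = 0; }'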


# zpool destroy test

# metainit d100 1 46 c0t0d0s0 c0t1d0s0 c0t2d0s0 c0t3d0s0 c0t4d0s0 c0t5d0s0 
c0t6d0s0 c0t7d0s0 c1t0d0s0 c1t1d0s0 c1t2d0s0 c1t3d0s0 c1t4d0s0 c1t5d0s0 
c1t6d0s0 c1t7d0s0 c4t0d0s0 c4t1d0s0 c4t2d0s0 c4t3d0s0 c4t4d0s0 c4t5d0s0 
c4t6d0s0 c4t7d0s0 c5t1d0s0 c5t2d0s0 c5t3d0s0 c5t5d0s0 c5t6d0s0 c5t7d0s0 
c6t0d0s0 c6t1d0s0 c6t2d0s0 c6t3d0s0 c6t4d0s0 c6t5d0s0 c6t6d0s0 c6t7d0s0 
c7t0d0s0 c7t1d0s0 c7t2d0s0 c7t3d0s0 c7t4d0s0 c7t5d0s0 c7t6d0s0 c7t7d0s0 -i 128k
d100: Concat/Stripe is setup
[46x 500GB]

And the results are not so good - at most around 1GB/s of reads... hmmmm...
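
For anyone reproducing this, a raw sequential read off the metadevice can be driven like so 
(the exact dd parameters below are just an example):

# dd if=/dev/md/rdsk/d100 of=/dev/null bs=16384k &
# iostat -xnzCM 1 | egrep "device| c[0-7]$"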

maxphys is 56K - I thought it had been increased on x86 some time ago!
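
(For reference: the current value can be read with mdb and raised via /etc/system plus a 
reboot - the 1MB below is just an example value. If I remember right SVM also has its own 
md_maxphys cap, but I'm not certain that still applies.)

# echo "maxphys/D" | mdb -k

and in /etc/system:

set maxphys=1048576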

Even after raising it, still no performance increase.

# metainit d101 -r c0t0d0s0 c1t0d0s0 c4t0d0s0 c6t0d0s0 c7t0d0s0 -i 128k
# metainit d102 -r c0t1d0s0 c1t1d0s0 c5t1d0s0 c6t1d0s0 c7t1d0s0 -i 128k
# metainit d103 -r c0t2d0s0 c1t2d0s0 c5t2d0s0 c6t2d0s0 c7t2d0s0 -i 128k
# metainit d104 -r c0t4d0s0 c1t4d0s0 c4t4d0s0 c6t4d0s0 c7t4d0s0 -i 128k
# metainit d105 -r c0t3d0s0 c1t3d0s0 c4t3d0s0 c5t3d0s0 c6t3d0s0 c7t3d0s0 -i 128k
# metainit d106 -r c0t5d0s0 c1t5d0s0 c4t5d0s0 c5t5d0s0 c6t5d0s0 c7t5d0s0 -i 128k
# metainit d107 -r c0t6d0s0 c1t6d0s0 c4t6d0s0 c5t6d0s0 c6t6d0s0 c7t6d0s0 -i 128k
# metainit d108 -r c0t7d0s0 c1t7d0s0 c4t7d0s0 c5t7d0s0 c6t7d0s0 c7t7d0s0 -i 128k
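
To push writes to all eight RAID-5 metadevices at once, something along these lines does the 
job (the exact dd parameters are just an example):

# for md in d101 d102 d103 d104 d105 d106 d107 d108; do dd if=/dev/zero of=/dev/md/rdsk/$md bs=128k & done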

# iostat -xnzCM 1 | egrep "device| c[0-7]$"
[...]
                    extended device statistics              
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  362.0    0.0  362.0  0.0  7.0    0.0   19.3   0 698 c0
    0.0  377.0    0.0  377.0  0.0  7.0    0.0   18.5   0 698 c1
    0.0  320.0    0.0  320.0  0.0  6.0    0.0   18.7   0 598 c4
    0.0  268.0    0.0  268.0  0.0  5.0    0.0   18.6   0 499 c5
    0.0  372.0    0.0  372.0  0.0  7.0    0.0   18.8   0 698 c6
    0.0  374.0    0.0  374.0  0.0  7.0    0.0   18.7   0 698 c7

Sometimes I get even more - around 2.3GB/s.
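
That is consistent with the per-controller numbers above: summing the Mw/s column in that 
sample gives roughly 2.1GB/s across the six controllers, so peaks around 2.3GB/s look plausible.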

The question is - why can't I get that kind of performance with a single ZFS pool 
(striping across all the disks)? Is it a concurrency problem or something else?
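
One way to test the concurrency theory once the pool is recreated would be to run several 
readers in parallel and see whether the aggregate scales past ~1GB/s - something like this 
(the file names below are hypothetical, assuming a few large files exist in the pool):

# for f in /test/q1 /test/q2 /test/q3 /test/q4; do dd if=$f of=/dev/null bs=16384k & done
# zpool iostat 1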
 
 