Hello list,

I did some bonnie++ benchmarks for different zpool configurations
consisting of one or two 1 TB SATA disks (Hitachi HDS721010CLA332, 512
bytes/sector, 7.2k rpm) and got some strange results. Please see the
attachments for the exact numbers and pool configs:

          seq write  factor   seq read  factor
          MB/sec              MB/sec
single    123        1        135       1
raid0     114        1        249       2
mirror     57        0.5      129       1

Each of the disks is capable of about 135 MB/sec sequential reads and
about 120 MB/sec sequential writes, and iostat -En shows no defects.
The disks are 100% busy in all tests and show normal service times.
This is on OpenSolaris build 130; rebooting with an OpenIndiana 151a
live CD gives the same results, and dd tests agree as well. The storage
controller is an LSI 1068 using the mpt driver. The pools are newly
created and empty, and atime on/off makes no difference.

Is there an explanation why

1) in the raid0 case the write speed is more or less the same as for a
single disk, and

2) in the mirror case the write speed is cut in half while the read
speed is the same as for a single disk? I'd expect about twice the
single-disk performance for both reading and writing, maybe a bit
less, but definitely more than measured.
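To make the discrepancy explicit, the factor columns above are just the
measured/baseline ratios; a quick awk check with the SATA block-I/O numbers
copied from the table:

```shell
# Scaling factors for the SATA pools (MB/sec from the table above;
# single-disk baseline: 123 write, 135 read).
awk 'BEGIN {
    printf "raid0  write factor: %.2f (expected ~2)\n", 114/123
    printf "raid0  read  factor: %.2f (expected ~2)\n", 249/135
    printf "mirror write factor: %.2f (expected ~1)\n",  57/123
    printf "mirror read  factor: %.2f (expected ~2)\n", 129/135
}'
```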

For comparison I ran the same tests with two old 2.5" 36 GB 10k SAS
disks, which max out at about 50-60 MB/sec on the outer tracks:

          seq write  factor   seq read  factor
          MB/sec              MB/sec
single     38        1         50       1
raid0      89        2        111       2
mirror     36        1         92       2

Here we get the expected behaviour: raid0 gives about double the
single-disk performance for reading and writing, and the mirror gives
about the same write performance and double the read speed, compared
to a single disk. An old SCSI system with 4x2 mirror pairs also shows
this scaling: about 450-500 MB/sec sequential read and 250 MB/sec
write, with each disk capable of 80 MB/sec. I don't care about the
absolute numbers; I just don't get why the SATA system is so much
slower than expected, especially for a simple mirror. Any ideas?

Thanks,
Michael

--
Michael Hase
http://edition-software.de
  pool: ptest
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        ptest       ONLINE       0     0     0
          c13t4d0   ONLINE       0     0     0

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
zfssingle       32G    79  98 123866  51 63626  35   255  99 135359  25 530.6  13
Latency               333ms     111ms    5283ms   73791us     465ms    2535ms
Version  1.96       ------Sequential Create------ --------Random Create--------
zfssingle           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  4536  40 +++++ +++ 14140  50 10382  69 +++++ +++  6260  73
Latency             21655us     154us     206us   24539us      46us     405us
1.96,1.96,zfssingle,1,1342165334,32G,,79,98,123866,51,63626,35,255,99,135359,25,530.6,13,16,,,,,4536,40,+++++,+++,14140,50,10382,69,+++++,+++,6260,73,333ms,111ms,5283ms,73791us,465ms,2535ms,21655us,154us,206us,24539us,46us,405us

###############

  pool: ptest
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        ptest       ONLINE       0     0     0
          c13t4d0   ONLINE       0     0     0
          c13t5d0   ONLINE       0     0     0

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
zfsstripe       32G    78  98 114243  46 72938  37   192  77 249022  44 815.1  20
Latency               483ms     106ms    5179ms    3613ms     259ms    1567ms
Version  1.96       ------Sequential Create------ --------Random Create--------
zfsstripe           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  6474  53 +++++ +++ 15505  47  8562  81 +++++ +++ 10839  65
Latency             21894us     131us     208us   22203us      52us     230us
1.96,1.96,zfsstripe,1,1342172768,32G,,78,98,114243,46,72938,37,192,77,249022,44,815.1,20,16,,,,,6474,53,+++++,+++,15505,47,8562,81,+++++,+++,10839,65,483ms,106ms,5179ms,3613ms,259ms,1567ms,21894us,131us,208us,22203us,52us,230us

################

  pool: ptest
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        ptest        ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c13t4d0  ONLINE       0     0     0
            c13t5d0  ONLINE       0     0     0

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
zfsmirror       32G    77  98 57247  24 39607  22   227  98 129639  25 739.9  17
Latency               520ms   73719us    5408ms   94349us     451ms    1466ms
Version  1.96       ------Sequential Create------ --------Random Create--------
zfsmirror           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  5790  53 +++++ +++  9871  55  7183  65 +++++ +++  9993  44
Latency             29362us     262us     435us   22629us      25us     202us
1.96,1.96,zfsmirror,1,1342174995,32G,,77,98,57247,24,39607,22,227,98,129639,25,739.9,17,16,,,,,5790,53,+++++,+++,9871,55,7183,65,+++++,+++,9993,44,520ms,73719us,5408ms,94349us,451ms,1466ms,29362us,262us,435us,22629us,25us,202us

################

  pool: psas
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        psas        ONLINE       0     0     0
          c2t2d0    ONLINE       0     0     0
          c2t3d0    ONLINE       0     0     0

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
zfssasstripe    32G   122  99 89086  18 27264   7   325  99 111753  11 522.4  11
Latency             89941us   26949us    3192ms   53126us    2052ms    2528ms
Version  1.96       ------Sequential Create------ --------Random Create--------
zfssasstripe        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  5036  23 +++++ +++  7989  23  8044  33 +++++ +++  8853  25
Latency             26568us     133us     135us   15398us     113us     140us
1.96,1.96,zfssasstripe,1,1342171776,32G,,122,99,89086,18,27264,7,325,99,111753,11,522.4,11,16,,,,,5036,23,+++++,+++,7989,23,8044,33,+++++,+++,8853,25,89941us,26949us,3192ms,53126us,2052ms,2528ms,26568us,133us,135us,15398us,113us,140us

####################

  pool: psas
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        psas        ONLINE       0     0     0
          c2t2d0    ONLINE       0     0     0

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
zfssassingle    32G   121  98 38025   7 12144   3   318  97 50313   5 364.9   6
Latency               165ms    2803ms    4687ms     234ms    2898ms    2923ms
Version  1.96       ------Sequential Create------ --------Random Create--------
zfssassingle        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  3320  14 +++++ +++  8438  25  7683  31 +++++ +++  8113  30
Latency             20777us     130us     149us   15352us      54us     151us
1.96,1.96,zfssassingle,1,1342173496,32G,,121,98,38025,7,12144,3,318,97,50313,5,364.9,6,16,,,,,3320,14,+++++,+++,8438,25,7683,31,+++++,+++,8113,30,165ms,2803ms,4687ms,234ms,2898ms,2923ms,20777us,130us,149us,15352us,54us,151us

###################

  pool: psas
 state: ONLINE
  scan: resilvered 610K in 0h0m with 0 errors on Fri Jul 13 14:46:38 2012
config:

        NAME        STATE     READ WRITE CKSUM
        psas        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
zfssasmirror    32G   122  99 36393   7 14645   4   325  99 92238  10 547.0  11
Latency               110ms    3220ms    3883ms   57845us     821ms    1838ms
Version  1.96       ------Sequential Create------ --------Random Create--------
zfssasmirror        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  3526  15 +++++ +++  6846  22  6589  26 +++++ +++  8342  29
Latency             28666us     133us     180us   15383us      38us     133us
1.96,1.96,zfssasmirror,1,1342185390,32G,,122,99,36393,7,14645,4,325,99,92238,10,547.0,11,16,,,,,3526,15,+++++,+++,6846,22,6589,26,+++++,+++,8342,29,110ms,3220ms,3883ms,57845us,821ms,1838ms,28666us,133us,180us,15383us,38us,133us
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss