I have a system similar to yours:

2 x Quad Core AMD 2.0GHz
16GB x DDR2 667
14 x 3.5" Seagate 750GB
2 x Intel X25-E SLC SSD
2 x LSI 1068 PCI-E controllers

I use the X25-Es for the root pool as well as for the log devices of my data pool:

# zpool status
  pool: ak2-pool0
 state: ONLINE
scrub: scrub completed after 0h1m with 0 errors on Tue May 12 14:35:32 2009
config:

        NAME        STATE     READ WRITE CKSUM
        ak2-pool0   ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c2t4d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c2t5d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c2t6d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c2t7d0  ONLINE       0     0     0
        logs        ONLINE       0     0     0
          c1t0d0s7  ONLINE       0     0     0
          c2t0d0s7  ONLINE       0     0     0
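
In case it's useful: a rough sketch of how slices like those could be added as log devices, assuming the pool and slice names above rather than a transcript of exactly what I ran:

# zpool add ak2-pool0 log c1t0d0s7 c2t0d0s7

Listing two devices after 'log' (without the 'mirror' keyword) stripes the intent log across them, which is the layout shown in the status output.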

I've only used bonnie++ for some rough performance tests. Here is the 'quick test' (-f skips the per-character tests and -b forces an fsync after every write), run against a new, empty pool:

# bonnie++ -d /ak2-pool0/bonnie/test/ -u nobody -f -b
...
      ------Sequential Output------ --Sequential Input- --Random-
       -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size   K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
32G              339610 89 216544 61           715528 73  2733  12

      ------Sequential Create------ --------Random Create--------
      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
   16  1171   9 +++++ +++  1182   8  1174   9 +++++ +++  1185   8

ak2-zfs1,32G,,,339610,89,216544,61,,,715528,73,2732.9,12,16,1171,9,+++++,+++,1182,8,1174,9,+++++,+++,1185,8

Watching 'zpool iostat 1' averages during the tests:
Sequential write: ~4.5k IOPS, ~580MB/s
Sequential read: ~5.5k IOPS, ~700MB/s
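
If you want to reproduce that view, the numbers come from watching something like the following while bonnie++ runs (the -v flag for a per-vdev breakdown is optional and just how I tend to run it):

# zpool iostat -v ak2-pool0 1

The 'operations' columns are the read/write IOPS and the 'bandwidth' columns are the throughput figures quoted above.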


Hope that helps,
--
Dave


[email protected] wrote:
So I've set up my test system. It's a dual quad-core system with 2.33GHz processors and 8GB of RAM. I have 12 x 2.5" 146GB SAS drives connected via an LSI SAS expander and a 1068 HBA. For the boot drive I'm using a 2.5" SATA disk, and I also have an Intel SSD (32GB SLC) which I'm using for the ZIL. Oh - I'm running Express Community Edition snv_113.

Can anyone share some insight on how best to benchmark filesystem performance? I usually use Iometer, but I can't get Dynamo, its workload generator, to compile on this platform.

I see a lot of people using IOzone, but I don't really understand why some of the reported numbers exceed the theoretical maximum throughput of 12 SAS disks.

I've configured a single pool of mirrors (no hot spares) across all 12 disks:

# zpool status
  pool: datapool1
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        datapool1    ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t8d0   ONLINE       0     0     0
            c0t9d0   ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t10d0  ONLINE       0     0     0
            c0t11d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t12d0  ONLINE       0     0     0
            c0t13d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t14d0  ONLINE       0     0     0
            c0t15d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t16d0  ONLINE       0     0     0
            c0t17d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t18d0  ONLINE       0     0     0
            c0t19d0  ONLINE       0     0     0
        logs         ONLINE       0     0     0
          c2t1d0     ONLINE       0     0     0
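
For reference, that layout corresponds to a create command along these lines (a sketch built from the device names above, not necessarily the exact command I used):

# zpool create datapool1 \
    mirror c0t8d0 c0t9d0 \
    mirror c0t10d0 c0t11d0 \
    mirror c0t12d0 c0t13d0 \
    mirror c0t14d0 c0t15d0 \
    mirror c0t16d0 c0t17d0 \
    mirror c0t18d0 c0t19d0 \
    log c2t1d0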

Thanks in advance!




_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss
