Actually, the performance decrease when disabling the write cache on the SSD is roughly 3x, i.e. write throughput drops to about a third of its cached value (a 60-70% reduction).

Setup: node1 = Linux client with open-iscsi; server = COMSTAR (cache=write-through) + zvol (volblocksize=8k, compression=off).
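For reference, here is a sketch of how such a target can be set up and how the disk's write cache is toggled between runs. The pool/zvol/device names (tank/ssdvol, c1t1d0, /dev/sdX) and the volume size are placeholders, not the actual setup:

  # On the OpenSolaris target: 8k zvol, no compression, exported through
  # COMSTAR with the LU write cache disabled (i.e. write-through):
  server# zfs create -V 20g -o volblocksize=8k -o compression=off tank/ssdvol
  server# stmfadm create-lu -p wcd=true /dev/zvol/rdsk/tank/ssdvol

  # Toggling the SSD's own volatile write cache (expert mode of format):
  server# format -e          (then select the SSD, e.g. c1t1d0)
  format> cache
  cache> write_cache
  write_cache> display       (shows the current setting)
  write_cache> disable       (or "enable" to turn it back on)
  write_cache> quit
  cache> quit
  format> quit

On a Linux box with the disk attached locally, the equivalent toggle would be "hdparm -W0 /dev/sdX" (disable) and "hdparm -W1 /dev/sdX" (enable).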
--- with the SSD disk write cache disabled:

node1:/mnt/ssd# iozone -ec -r 8k -s 128m -l 2 -i 0 -i 2 -o -I

    Iozone: Performance Test of File I/O
            Version $Revision: 3.327 $
            Compiled for 32 bit mode.
            Build: linux

    Contributors: William Norcott, Don Capps, Isom Crawford, Kirby Collins,
                  Al Slater, Scott Rhine, Mike Wisner, Ken Goss,
                  Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                  Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                  Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
                  Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

    Run began: Sun Jan 10 20:14:46 2010

    Include fsync in write timing
    Include close in write timing
    Record Size 8 KB
    File size set to 131072 KB
    SYNC Mode.
    O_DIRECT feature enabled
    Command line used: iozone -ec -r 8k -s 128m -l 2 -i 0 -i 2 -o -I
    Output is in Kbytes/sec
    Time Resolution = 0.000002 seconds.
    Processor cache size set to 1024 Kbytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
    Min process = 2
    Max process = 2
    Throughput test with 2 processes
    Each process writes a 131072 Kbyte file in 8 Kbyte records

    Children see throughput for 2 initial writers  =   1324.45 KB/sec
    Parent sees throughput for 2 initial writers   =   1291.27 KB/sec
    Min throughput per process                     =    646.07 KB/sec
    Max throughput per process                     =    678.38 KB/sec
    Avg throughput per process                     =    662.23 KB/sec
    Min xfer                                       = 124832.00 KB

    Children see throughput for 2 rewriters        =   4360.29 KB/sec
    Parent sees throughput for 2 rewriters         =   4360.08 KB/sec
    Min throughput per process                     =   2158.82 KB/sec
    Max throughput per process                     =   2201.47 KB/sec
    Avg throughput per process                     =   2180.15 KB/sec
    Min xfer                                       = 128536.00 KB

    Children see throughput for 2 random readers   =  43930.41 KB/sec
    Parent sees throughput for 2 random readers    =  43914.01 KB/sec
    Min throughput per process                     =  21768.16 KB/sec
    Max throughput per process                     =  22162.25 KB/sec
    Avg throughput per process                     =  21965.21 KB/sec
    Min xfer                                       = 128760.00 KB

    Children see throughput for 2 random writers   =   5561.01 KB/sec
    Parent sees throughput for 2 random writers    =   5560.41 KB/sec
    Min throughput per process                     =   2780.37 KB/sec
    Max throughput per process                     =   2780.64 KB/sec
    Avg throughput per process                     =   2780.50 KB/sec
    Min xfer                                       = 131064.00 KB

--- and with the SSD write cache enabled:

node1:/mnt/ssd# iozone -ec -r 8k -s 128m -l 2 -i 0 -i 2 -o -I

    [iozone banner and test parameters identical to the run above]

    Run began: Sun Jan 10 20:22:14 2010
    Children see throughput for 2 initial writers  =   3387.15 KB/sec
    Parent sees throughput for 2 initial writers   =   3258.90 KB/sec
    Min throughput per process                     =   1621.62 KB/sec
    Max throughput per process                     =   1765.53 KB/sec
    Avg throughput per process                     =   1693.57 KB/sec
    Min xfer                                       = 120392.00 KB

    Children see throughput for 2 rewriters        =  11084.93 KB/sec
    Parent sees throughput for 2 rewriters         =  11083.10 KB/sec
    Min throughput per process                     =   5503.68 KB/sec
    Max throughput per process                     =   5581.25 KB/sec
    Avg throughput per process                     =   5542.46 KB/sec
    Min xfer                                       = 129256.00 KB

    Children see throughput for 2 random readers   =  46140.94 KB/sec
    Parent sees throughput for 2 random readers    =  46104.64 KB/sec
    Min throughput per process                     =  23002.35 KB/sec
    Max throughput per process                     =  23138.59 KB/sec
    Avg throughput per process                     =  23070.47 KB/sec
    Min xfer                                       = 130312.00 KB

    Children see throughput for 2 random writers   =  18500.58 KB/sec
    Parent sees throughput for 2 random writers    =  18492.31 KB/sec
    Min throughput per process                     =   9248.47 KB/sec
    Max throughput per process                     =   9252.11 KB/sec
    Avg throughput per process                     =   9250.29 KB/sec
    Min xfer                                       = 131032.00 KB

Difference for writes: disabling the cache costs about 61% on sequential writes (1324 vs. 3387 KB/sec initial write, 4360 vs. 11085 KB/sec rewrite) and about 70% on random writes (5561 vs. 18501 KB/sec), while random reads barely change (43930 vs. 46141 KB/sec). Still much better than spinning disks for writes.

One more question for understanding: assuming a reliable ZIL device (one whose cache flush works), the ZIL can guarantee data integrity for synchronous writes. But if the backend disks (i.e. the pool disks) do not properly implement cache flush, a reliable ZIL device does not work around the bad backend disks, right? As I understand it, the ZIL records are freed once a transaction group is assumed to be safely on the pool disks, so a reliable ZIL combined with, say, an MLC SSD pool disk whose write cache is enabled but whose cache flush is broken is still not reliable in the end.

Thanks
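P.S. A rough way to check whether a given disk really honors cache flushes: issue small synchronous writes against it and look at the achieved IOPS, e.g. with GNU dd. The device path is a placeholder, and this overwrites the start of the disk, so only run it against a scratch device:

  node1:~# dd if=/dev/zero of=/dev/sdX bs=4k count=1000 oflag=direct,dsync

A 7200 rpm disk that really flushes every write can only sustain on the order of 100-200 such writes per second; if the run finishes at thousands of writes per second, the drive is acknowledging writes from its volatile cache. For SSDs the honest rate is less predictable, but a synchronous write rate close to the cache-enabled rate is a red flag.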