> Very impressive IOPS numbers. Although I have some thoughts on the
> benchmarking method itself. IMHO the comparison shouldn't be raw IOPS
> numbers on the DDRdrive itself as tested with Iometer (it's only 4 GB),

The purpose of the benchmarks presented is to isolate the inherent
capability of just the SSD in a simple/synthetic/sustained Iometer 4KB
random write test. This test successfully illuminates a critical difference
between a Flash-only and a DRAM/SLC based SSD. Flash-only SSD vendors are
*less* than forthright in their marketing when specifying their 4KB random
write capability. I am surprised vendors are not called out for marketing
FOB (fresh out of the box) results that (even with TRIM support) are not
sustainable. Intel was a notable exception, until they too introduced SSDs
based on SandForce controllers.

In the section prior to the benchmarks, titled "ZIL Accelerator access
pattern random and/or sequential", I show an example workload and how it
translates into an actual log device's access pattern. It clearly shows a
wide (21-71%) spectrum of random write accesses. So even before presenting
any Iometer results, I don't believe I indicate, or even imply, that "real
world" workloads will somehow be 100% 4KB random writes. For the record, I
agree with you: they obviously are not!

> ...real world numbers on a real world pool consisting of spinning disks
> with the DDRdrive acting as ZIL accelerator.

Benchmarking is frustrating for us as well, because what is a "real world"
pool? And if we picked one to benchmark, how relevant would it be to anyone
else?

1) number of vdevs (we see anywhere from one to massive)
2) vdev configuration (from mirrored pairs to 12-disk raidz2)
3) HDD type (low rpm green HDDs to SSD only pools)
4) host memory size (we see "not enough" to 192GB+)
5) number of host CPUs (you get the picture)
6) network connection (1 GbE to multiple 10 GbE)
7) number of network ports
8) direct connect to the client or through a switch (or switches)

Is the ZFS pool accessed using NFS or iSCSI?
What is the client OS?
What is the client configuration?
What is the workload composition (read/async write/sync write)?
What is the workload access pattern (sequential/random)?

> This could be just good enough for small businesses and moderate sized
> pools.

No doubt. We are also very clear about whom we target (enterprise
customers).

The beauty of ZFS is the flexibility of its implementation. By supporting
multiple log device types and configurations, it ultimately enables a broad
range of performance capabilities!
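As a sketch of that flexibility (pool and device names are hypothetical; "c4t0d0" stands in for a DDRdrive X1 or any other dedicated log device), a separate log can be added to an existing pool in several configurations:

```shell
# Hypothetical names -- "tank" is an existing pool.

# A single dedicated log device:
zpool add tank log c4t0d0

# Mirrored log devices, for redundancy of the in-flight log:
zpool add tank log mirror c4t0d0 c5t0d0

# Two top-level (striped) log devices, for more log throughput:
zpool add tank log c4t0d0 c5t0d0
```

Each choice trades cost, redundancy, and throughput differently, which is why no single benchmark configuration can represent them all.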

Best regards,

Christopher George
cgeorge at
zfs-discuss mailing list
