> The cache on the vast majority of SSDs is volatile and not protected by a super-capacitor. You must assume that an SSD has a volatile cache unless you have paid the bucks (or euros) for one of the rare SSDs which assures that its cache is saved.

I think one example would be the Seagate Pulsar - it explicitly states the use of a super-capacitor - but you cannot easily buy those. On such a drive I'd expect to be able to set zfs_nocacheflush safely.
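For reference, on OpenSolaris the tunable can be set persistently in /etc/system or toggled live with mdb (use with care - it disables cache-flush requests for every device in the pool, not just the SSD):

```shell
# /etc/system (takes effect on the next boot)
set zfs:zfs_nocacheflush = 1

# or live via the kernel debugger (0t1 = decimal 1)
echo zfs_nocacheflush/W0t1 | mdb -kw
```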

So when thinking about this, how do SSDs ensure data integrity comparable with today's HDDs?

SSDs must also rely on cache-flush commands from the host, and they must honour those commands for their internal cache.

But how can they gain performance then? I've read that non-cached SSD writes have latency similar to HDD writes (or are dead slow) due to the erase / write cycle required.

I quickly tested my Intel X25-M Postville 160 GB SSD on OpenSolaris (iozone with -o for O_SYNC writes) with and without zfs_nocacheflush set. Results are comparable, no major change.

- zfs_nocacheflush=0
iozone -ec -r 64k -s 2048m -l 2 -i 0 -i 2 -o

       Children see throughput for  2 initial writers  =   61790.45 KB/sec
       Parent sees throughput for  2 initial writers   =   61035.32 KB/sec
       Min throughput per process                      =   30520.67 KB/sec
       Max throughput per process                      =   31269.78 KB/sec
       Avg throughput per process                      =   30895.23 KB/sec

- zfs_nocacheflush=1
iozone -ec -r 64k -s 2048m -l 2 -i 0 -i 2 -o

       Children see throughput for  2 initial writers  =   63221.24 KB/sec
       Parent sees throughput for  2 initial writers   =   62006.64 KB/sec
       Min throughput per process                      =   30836.24 KB/sec
       Max throughput per process                      =   32384.99 KB/sec
       Avg throughput per process                      =   31610.62 KB/sec
       Min xfer                                        = 1996864.00 KB

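As a sanity check on what iozone's -o flag exercises, the same synchronous-write pattern (O_SYNC on every write) can be reproduced with a few lines of Python - a minimal sketch assuming a POSIX system; the function name, block size, and count are arbitrary:

```python
import os
import time

def timed_sync_writes(path, block=64 * 1024, count=64):
    """Write `count` blocks with O_SYNC (as iozone -o does) and
    return the elapsed wall-clock time in seconds."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
    buf = b"\0" * block
    t0 = time.monotonic()
    try:
        for _ in range(count):
            os.write(fd, buf)  # each write must reach stable storage
    finally:
        os.close(fd)
    return time.monotonic() - t0
```

On a device that honours flushes, each O_SYNC write pays the full commit latency, which is exactly where HDDs collapse to a few MB/sec.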
For an HDD-based system the difference is huge:

- zfs_nocacheflush=0
iozone -ec -r 64k -s 2048m -l 2 -i 0 -i 2 -o

~4 MB/sec for a 3-disk SATA pool

- zfs_nocacheflush=1
iozone -ec -r 64k -s 2048m -l 2 -i 0 -i 2 -o

~80 MB/sec for a 3-disk SATA pool

So how can those SSDs be faster than disks in this workload if the basic write in the backend is just as slow as an HDD write (due to the erase / write cycle)?

a) The SSD ignores the cache flush command -> bad (data integrity issue)
b) The SSD honours the cache flush but is very clever and has somehow worked around this limitation of flash memory
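If (b) is the answer, my understanding is that the trick is write remapping in the flash translation layer: the controller always directs incoming writes to pages it erased ahead of time, so the slow erase cycle never sits on the write path. A toy sketch (purely illustrative Python, not any vendor's firmware; all names are made up):

```python
class TinyFTL:
    """Toy flash translation layer: logical writes always land on a
    pre-erased physical page, so no erase happens on the write path."""

    def __init__(self, pages=8):
        self.free = list(range(pages))  # pool of pre-erased physical pages
        self.lmap = {}                  # logical page -> physical page
        self.flash = {}                 # physical page -> stored data
        self.stale = []                 # superseded pages; erased later by GC

    def write(self, lpage, data):
        phys = self.free.pop(0)         # fast: page is already erased
        self.flash[phys] = data
        old = self.lmap.get(lpage)
        if old is not None:
            self.stale.append(old)      # old copy invalidated, erased lazily
        self.lmap[lpage] = phys

    def read(self, lpage):
        return self.flash[self.lmap[lpage]]
```

A flush then only has to commit a small mapping/RAM buffer to already-erased flash, which is far cheaper than an in-place erase/rewrite - background garbage collection recycles the stale pages off the critical path.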

Can I then safely (safely as in "with no data integrity issue") use non-super-capacitor SSDs with ZFS, with cache flushing enabled (zfs_nocacheflush=0, the default)?

Regards,
Robert




_______________________________________________
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss