On Aug 12, 2009, at 2:07 AM, Kaya Bekiroğlu <[email protected]> wrote:

On Mon, Aug 10, 2009 at 12:44 PM, Bob Friesenhahn <[email protected]> wrote:
Recent SSDs typically accumulate (coalesce) multiple writes in on-board DRAM and flush them to NAND later, so multiple writes are hardly an issue.

If the SSD buffers the writes in DRAM, then the postponed writes flushed via NFS COMMIT become a total non-issue.

I am hardly an authority on the subject, but in my characterization of ZFS/NFS performance with a battery-backed, RAM-based slog, I found that the delay from pushing data across the PCI-X bus alone during a COMMIT flush had a significant impact on NFS single-stream write performance. I imagine this delay exists to some degree on all interconnect types (SATA, PCIe, etc.), even if the slog itself were infinitely fast. In the extreme case (say, multi-megabyte COMMITs) one can waste a good chunk of both the bus (or slog) bandwidth and the network bandwidth: the bus/slog is idle while the network is active, and the network is idle while the bus/slog is active.
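
To make the waste concrete, here is a rough back-of-the-envelope sketch in Python. The COMMIT size and the two bandwidth figures are made-up round numbers for illustration, not measurements from my setup:

# Toy model of a single NFS COMMIT: the client sends the data over the
# network, then the server flushes it across the bus to the slog.
# All figures below are hypothetical round numbers.

commit_bytes = 1 * 1024 * 1024        # assume a 1 MiB COMMIT window
net_bw  = 100e6                       # ~100 MB/s network leg (e.g. GbE)
slog_bw = 200e6                       # ~200 MB/s bus/slog leg

t_net  = commit_bytes / net_bw        # time the network is busy
t_slog = commit_bytes / slog_bw       # time the bus/slog is busy

# Serialized: the network sits idle while the slog flushes, and vice versa.
serialized = commit_bytes / (t_net + t_slog)
# Pipelined: overlap the two legs, so the slower one sets the pace.
pipelined  = commit_bytes / max(t_net, t_slog)

print(f"serialized: {serialized / 1e6:.0f} MB/s, pipelined: {pipelined / 1e6:.0f} MB/s")
# -> roughly 67 MB/s serialized vs 100 MB/s pipelined with these numbers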

Whether or not it makes sense to take such aggressive measures to mask latency in specific cases is, of course, a separate question.

I have hit this latency barrier myself.

I have a zpool of 7 mirrors of 15K SAS drives behind a controller with 512 MB of NVRAM.

The best I can get is around 40-45 MB/s for 4 KB sequential synchronous IOs (with 4 IOs outstanding).
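
For reference, the kind of test I mean looks something like this rough Python sketch (not my actual harness; the path and counts are placeholders, and four writer threads stand in for four outstanding IOs):

import os, threading, time

PATH = "/pool/sync-test"       # hypothetical dataset path
IO_SIZE = 4096                 # 4 KB writes
WRITES_PER_THREAD = 25000      # ~100 MB per thread
THREADS = 4                    # stand-in for 4 outstanding IOs

def writer(tid):
    # O_DSYNC makes every write synchronous, so each one has to hit the ZIL/slog.
    fd = os.open(f"{PATH}.{tid}", os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o644)
    buf = b"\0" * IO_SIZE
    for _ in range(WRITES_PER_THREAD):
        os.write(fd, buf)      # sequential within the file, stable on return
    os.close(fd)

start = time.time()
threads = [threading.Thread(target=writer, args=(i,)) for i in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
total = THREADS * WRITES_PER_THREAD * IO_SIZE
print(f"{total / elapsed / 1e6:.1f} MB/s of 4k synchronous writes")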

Of course disabling the ZIL isn't a good option, since it makes everything async and subject to data loss, so I played around with SSD slogs of various types, but I haven't been able to break past that barrier.

I think the NVRAM-backed zpool is the best it's going to get, and what I'm hitting is the network + PCI bus latency limit. The added network round trip for the commit doubles the latency, which in turn halves the throughput.
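
Back of the envelope, that reasoning works out like this (the 0.35 ms is just what ~45 MB/s at 4 KB and queue depth 4 implies, and the assumption that the NFS round trip adds about the same again is mine):

io_size = 4096            # bytes per IO
qdepth = 4                # outstanding IOs
t_local = 0.35e-3         # ~per-IO latency implied by ~45 MB/s at QD 4
t_net = 0.35e-3           # assume the NFS round trip adds about as much again

local_mb_s = qdepth * io_size / t_local / 1e6
nfs_mb_s   = qdepth * io_size / (t_local + t_net) / 1e6

print(f"local sync writes: ~{local_mb_s:.0f} MB/s")   # ~47 MB/s
print(f"over NFS:          ~{nfs_mb_s:.0f} MB/s")     # ~23 MB/s, i.e. halved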

-Ross

