Again, thanks for the input and clarifications.
I'd like to clarify the numbers I mentioned regarding the ZIL
performance specs discussed on other forums. Right now
I'm getting streaming performance of sync writes at about 1 Gbit/s. My
target is closer to 10 Gbit/s. If I get to build this system, it will
house a decent-sized VMware NFS store with 200+ VMs, which will be dual
connected via 10GbE. This is all medical imaging research. We move data
around by the TB, and fast streaming is imperative.
The system I've been testing with is 10GbE-connected, and I have about 50
VMs running very happily; I haven't yet found my random I/O limit.
However, every time I storage vMotion a handful of additional VMs, the ZIL
seems to max out its write speed to the SSDs and random I/O also
suffers. Without the SSD ZIL, random I/O is very poor. I will be doing
some testing with sync=disabled tomorrow to see how things perform.
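For that A/B test, one hedged sketch is to toggle the dataset's `sync` property and compare streaming write throughput with and without ZIL involvement. The pool/dataset names and file sizes below are placeholders, not from this thread; note the property value is `disabled` (not `off`), and that disabling sync risks losing in-flight writes on a crash:

```shell
# Sketch only: tank/vmstore is a placeholder dataset name.
# Bypass the ZIL entirely for the test dataset (crash-consistency risk!)
zfs set sync=disabled tank/vmstore

# Stream a few GB of writes and time them; conv=fsync forces a final flush
dd if=/dev/zero of=/tank/vmstore/testfile bs=1M count=8192 conv=fsync

# Restore the default sync semantics afterwards
zfs set sync=standard tank/vmstore
```

Any throughput difference between `sync=disabled` and `sync=standard` isolates roughly how much the slog path is costing.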
If anyone can testify to a ZIL device (or devices) that can keep up with
10GbE or faster streaming synchronous writes, please let me know.
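As a rough sizing sketch, using the figures from Richard's example quoted below (a saturated 10GbE link at roughly 1250 MB/s and a single slog device at about 700 MB/s; both figures and variable names are illustrative assumptions):

```shell
# Back-of-the-envelope: how many ~700 MB/s slog devices are needed
# to absorb a saturated 10GbE link (~1250 MB/s)? Assumed figures only.
link_mbs=1250
dev_mbs=700
needed=$(( (link_mbs + dev_mbs - 1) / dev_mbs ))   # ceiling division
echo "$needed"
```

With these numbers, two striped log devices would be the minimum, e.g. `zpool add <pool> log <devA> <devB>` (hypothetical device names) adds two log vdevs that ZFS stripes writes across.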
On Thu, Oct 4, 2012 at 1:33 PM, Richard Elling <richard.ell...@gmail.com> wrote:
> This has been available for quite a while and I haven't heard of any bugs
> in this area.
> - Several threads seem to suggest a ZIL throughput limit of 1Gb/s with
> SSDs. I'm not sure if that is current, but I can't find any reports of
> better performance. I would suspect that a DDRdrive or ZeusRAM as ZIL
> would push past this.
> 1GB/s seems very high, but I don't have any numbers to share.
> It is not unusual for workloads to exceed the performance of a single
> slog device. For example, if you have a device that can achieve 700 MB/sec,
> but a workload generated by lots of clients accessing the server via 10GbE
> (1 GB/sec), then it should be immediately obvious that the slog needs to
> be striped. This is also easy to measure.
> -- richard
zfs-discuss mailing list