On 2/14/2011 3:52 PM, Gary Mills wrote:
On Mon, Feb 14, 2011 at 03:04:18PM -0500, Paul Kraus wrote:
On Mon, Feb 14, 2011 at 2:38 PM, Gary Mills <mi...@cc.umanitoba.ca> wrote:
Is there any reason not to use one LUN per RAID group?
[...]
     In other words, if you build one zpool with a single 10 GB vdev and
another zpool with two vdevs of 5 GB each (both coming from the same array
and RAID set), you get almost exactly twice the random read performance
from the 2x5 zpool vs. the 1x10 zpool.
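
For concreteness, a minimal sketch of the two layouts being compared (pool
and device names are placeholders; on a real array these would be two LUNs
carved from the same RAID group):

  # one zpool on a single 10 GB LUN (hypothetical device name)
  zpool create tank1 c2t0d0

  # one zpool striped across two 5 GB LUNs from the same RAID group
  zpool create tank2 c2t1d0 c2t2d0

  # watch each pool while the same random-read load runs against it
  zpool iostat -v tank1 5
  zpool iostat -v tank2 5
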
This finding is surprising to me.  How do you explain it?  Is it
simply that you get twice as many outstanding I/O requests with two
LUNs?  Is it limited by the default I/O queue depth in ZFS?  After
all, all of the I/O requests must be handled by the same RAID group
once they reach the storage device.
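
If it is the queue depth, the relevant knob on Solaris/OpenSolaris of this
era is zfs_vdev_max_pending, the per-vdev cap on outstanding I/Os; two LUNs
simply mean twice that many requests in flight against the same RAID group.
A sketch of how to inspect and change it, assuming a kernel that still
exposes this tunable:

  # read the current per-vdev queue depth
  echo zfs_vdev_max_pending/D | mdb -k

  # change it on the live kernel (decimal 20)
  echo zfs_vdev_max_pending/W0t20 | mdb -kw

  # or make it persistent with an /etc/system entry
  set zfs:zfs_vdev_max_pending = 20
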

     Also, using a 2540 disk array set up as a 10-disk RAID6 (with 2 hot
spares), you get substantially better random read performance using 10
LUNs vs. 1 LUN. While inconvenient, this just reflects the scaling of
ZFS with the number of vdevs, not the number of "spindles".
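
From the host side, that 10-LUN layout is just a ten-way stripe of top-level
vdevs (device names below are placeholders for the ten LUNs the 2540
exports); redundancy still comes from the array's RAID6, not from ZFS:

  zpool create tank c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 \
                    c3t5d0 c3t6d0 c3t7d0 c3t8d0 c3t9d0
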

I'm going to go out on a limb here and say that you get the extra performance under one condition: you don't overwhelm the NVRAM write cache on the SAN device head.

So long as the SAN's NVRAM cache can acknowledge each write immediately (i.e. it isn't full of pending commits to backing store), then, yes, having writes in flight from multiple ZFS vdevs will obviously give more performance than writes from a single ZFS vdev.

That said, given that SAN NVRAM caches are true write caches (and not a ZIL-like thing), it should be relatively simple to swamp one with write requests (most SANs have little more than 1GB of cache), at which point the SAN will block while it flushes its cache to disk.

So, if you can arrange your workload so that it stays below the maximum write throughput of the SAN's RAID array over a defined period, then, yes, go with the multiple LUN/array setup. In particular, I would think this would be excellent for small-write/latency-sensitive applications, where the total amount of data written (over several seconds) isn't large, but where latency is critical. For larger I/O requests (or for consistent, sustained I/O of more than small amounts), all bets are off as far as any possible advantage of multiple LUNs per array.
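
One way to tell which side of that line you're on is to watch per-LUN
service times: while the array's cache is absorbing writes, asvc_t stays
down in the low milliseconds, and it jumps sharply once the controller has
to destage to disk synchronously. A sketch, assuming a Solaris host (pool
name is a placeholder):

  # per-LUN latency and queue depth, 5-second samples, active devices only
  iostat -xnz 5

  # the same picture from the pool side, per vdev
  zpool iostat -v tank 5
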


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
