On 2/15/2011 1:37 PM, Torrey McMahon wrote:

On 2/14/2011 10:37 PM, Erik Trimble wrote:
That said, given that SAN NVRAM caches are true write caches (and not a ZIL-like thing), it should be relatively simple to swamp one with write requests (most SANs have little more than 1GB of cache), at which point, the SAN will be blocking on flushing its cache to disk.

Actually, most array controllers now have 10s if not 100s of GB of cache. The 6780 has 32GB, the DMX-4 has - if I remember correctly - 256GB. The latest HDS box is probably close, if not more.

Of course you still have to flush to disk and the cache flush algorithms of the boxes themselves come into play but 1GB was a long time ago.


The STK2540 and STK6140 have at most 1GB.
The STK6180 has 4GB.


The move to multi-GB caches is fairly recent - only large setups (i.e. big arrays with a dedicated SAN head) have had multi-GB NVRAM caches for any length of time.

In particular, pretty much all base arrays still have 4GB or less on the enclosure controller - only in the SAN heads do you find big multi-GB caches. And lots (I'm going to be brave and say the vast majority) of ZFS deployments use direct-attach arrays or internal storage rather than large SAN configs. Lots of places with older SAN heads are also going to have much smaller caches. Given the price tag of most large SANs, I'm thinking there are still huge numbers of 5+ year-old SANs out there, and practically all of them have only a dozen GB of cache or less.

So, yes, big modern SAN configurations have lots of cache. But they're also the ones most likely to be hammered with huge amounts of I/O from multiple machines, all of which makes it relatively easy to blow through the cache capacity and slow I/O back down to disk speed.
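
To put rough numbers on that (all of them assumed, purely for illustration), here's a quick back-of-the-envelope in Python:

    # Sketch only: how long a write cache survives when incoming writes
    # outrun the array's destage rate. Every number here is an assumption.
    cache_gb      = 32.0     # e.g. a 6780-class controller cache
    incoming_mb_s = 2000.0   # sustained writes from several hosts (assumed)
    flush_mb_s    = 800.0    # rate the array can destage to disk (assumed)

    net_fill_mb_s = incoming_mb_s - flush_mb_s
    if net_fill_mb_s <= 0:
        print("Cache never fills; writes stay at cache speed.")
    else:
        seconds_to_fill = (cache_gb * 1024) / net_fill_mb_s
        print("Cache full after ~%.0fs; after that, writes drop to the "
              "~%.0f MB/s disk rate." % (seconds_to_fill, flush_mb_s))

With those made-up rates, a 32GB cache is gone in under 30 seconds of sustained load.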

Once you get back down to raw disk speed, having multiple LUNs per RAID array is almost certainly going to perform worse than a single LUN, due to thrashing. That is, it would certainly be better (i.e. faster) for an array to have to commit one 128k slab than four 32k slabs.
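
Again with made-up numbers (my own, not measured anywhere), the seek math looks roughly like this:

    # Sketch: one 128k write vs. four 32k writes landing on different LUNs
    # carved from the same spindles, so each pays its own positioning cost.
    # All figures are assumptions for illustration.
    seek_ms      = 4.0      # assumed average seek + rotational latency
    transfer_mbs = 100.0    # assumed streaming rate of the RAID group

    def write_ms(kb):
        return seek_ms + (kb / 1024.0) / transfer_mbs * 1000.0

    one_big    = write_ms(128)       # single 128k slab, one positioning cost
    four_small = 4 * write_ms(32)    # four 32k slabs, four positioning costs

    print("1 x 128k: %.2f ms, 4 x 32k: %.2f ms" % (one_big, four_small))

Under those assumptions the four small writes take roughly three times as long as the single big one, which is the thrashing cost in a nutshell.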


So, the original recommendation is interesting, but needs the caveat that you'd really only use it if you can either limit the amount of sustained I/O, or are using a very-large-cache disk setup.

I would think the idea might also apply (i.e. be useful) to something like the F5100 or similar RAM/flash arrays.

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
