On Mon, Oct 29, 2012 at 09:30:47AM -0500, Brian Wilson wrote:
> First I'd like to note that, contrary to the nomenclature, there isn't
> any one "SAN" product that operates the same everywhere. There are a
> number of different vendor-provided solutions that use an FC SAN to
> deliver LUNs to hosts, and each has its own limitations. Forgive my
> pedantry, please.
> > On Sun, Oct 28, 2012 at 04:43:34PM +0700, Fajar A. Nugraha wrote:
> > On Sat, Oct 27, 2012 at 9:16 PM, Edward Ned Harvey
> > (opensolarisisdeadlongliveopensolaris)
> > <opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
> > >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > >> boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha
> > >>
> > >> So my
> > >> suggestion is actually just present one huge 25TB LUN to zfs and let
> > >> the SAN handle redundancy.
> > You are entering the uncharted waters of "multi-level disk
> > management" here. Both ZFS and the SAN use redundancy and error-
> > checking to ensure data integrity. Both of them also do automatic
> > replacement of failing disks. A good SAN will present LUNs that
> > behave as perfectly reliable virtual disks, guaranteed to be
> > error-free. Almost all of the time, ZFS will find no errors. If ZFS
> > does find an error, there's no nice way to recover. Most commonly,
> > this happens when the SAN is powered down or rebooted while the ZFS
> > host is still running.
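One way to give ZFS a recovery path in that situation is to let it keep its own redundancy on top of the SAN. A sketch (the device names below are placeholders, not real LUN paths, and the pool/dataset names are made up):

```shell
# Sketch: give ZFS its own redundancy on top of SAN LUNs, so a checksum
# error can be repaired by ZFS instead of merely reported.
# c0t...d0 device names are placeholders for your FC LUNs.

# Option 1: mirror two LUNs, ideally served by separate arrays or
# controllers, so ZFS can self-heal from the good side.
zpool create tank mirror c0t600A0B8000011111d0 c0t600A0B8000022222d0

# Option 2: on a single LUN, store two copies of each block. This
# protects against latent block errors, not loss of the whole LUN.
zfs set copies=2 tank/data
```

Either way you pay for the redundancy twice, once in ZFS and once in the array, which is the usual objection to multi-level disk management.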
> On your host side, there's also the consideration of ssd/SCSI
> queuing. If you're running on only one LUN, you're limiting your
> IOPS to only one IO queue over your FC paths, and if you have that
> throttled (per many storage vendors' recommendations about
> ssd:ssd_max_throttle and zfs:zfs_vdev_max_pending), then one LUN
> will throttle your IOPS back on your host. That might also motivate
> you to split into multiple LUNs so your OS doesn't end up
> bottlenecking your IO before it even gets to your SAN HBA.
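Little's law makes the effect easy to estimate. A back-of-envelope sketch (the queue depth and service time below are illustrative assumptions, not vendor figures or measurements):

```python
# Back-of-envelope: why a per-LUN queue-depth cap throttles a
# single-LUN pool. All numbers here are illustrative assumptions.

def max_iops(queue_depth, service_time_ms):
    """Little's law: concurrency = throughput * latency, so
    throughput <= queue_depth / service_time."""
    return queue_depth / (service_time_ms / 1000.0)

QUEUE_DEPTH = 32        # assumed ssd_max_throttle-style cap per LUN
SERVICE_TIME_MS = 1.0   # assumed per-I/O service time on the array

one_lun = max_iops(QUEUE_DEPTH, SERVICE_TIME_MS)
four_luns = 4 * one_lun  # four LUNs = four independent queues

print(one_lun)    # 32000.0 IOPS ceiling with a single queue
print(four_luns)  # 128000.0 with four LUNs striped in the pool
```

With those assumptions, one throttled LUN caps you at 32K IOPS no matter how fast the array is, while four LUNs quadruple the outstanding-I/O budget before the HBA ever sees a bottleneck.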
That's a performance issue rather than a reliability issue. The other
performance issue to consider is block size. At the last place I
worked, we used an iSCSI LUN from a NetApp filer. This LUN reported a
block size of 512 bytes, even though the NetApp itself used a 4K
block size. That meant the filer was doing the block-size
conversion, resulting in much more I/O than the ZFS layer intended.
The fact that NetApp does COW made this situation even worse.
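The cost of that conversion is easy to see with a little arithmetic. A sketch of the read-modify-write amplification (block sizes are the ones above; the offsets are illustrative):

```python
# Sketch: read-modify-write amplification when a LUN advertises
# 512-byte sectors but the backend stores data in 4K blocks.

BACKEND_BLOCK = 4096   # filer's native block size (bytes)
SECTOR = 512           # block size the LUN reports to the host

def backend_blocks_touched(offset, length, block=BACKEND_BLOCK):
    """Number of native backend blocks a host write overlaps."""
    first = offset // block
    last = (offset + length - 1) // block
    return last - first + 1

# A 4K write aligned to the backend rewrites exactly one block...
aligned = backend_blocks_touched(offset=0, length=4096)
# ...but the same write shifted by one 512-byte sector straddles two
# backend blocks, each needing a read-modify-write on the filer.
misaligned = backend_blocks_touched(offset=SECTOR, length=4096)

print(aligned)     # 1
print(misaligned)  # 2
```

Every partial-block write also forces the filer to read the old block before writing the new one, so the amplification is worse than the block count alone suggests, especially on a COW backend that allocates a fresh block for each write.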
My impression was that very few of their customers encountered this
performance problem because almost all of them used their NetApp only
for NFS or CIFS. Our NetApp was extremely reliable but did not have
the iSCSI LUN performance that we needed.
-Gary Mills- -refurb- -Winnipeg, Manitoba, Canada-
zfs-discuss mailing list