Marion Hakanson wrote:
[EMAIL PROTECTED] said:
It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP.
Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty
handily pull 120MB/sec from it, and write at over 100MB/sec. It falls apart
more on random I/O. The server/initiator side is a T2000 with Solaris 10u4.
It never sees over 25% CPU, ever. Oh yeah, and two 1Gbit network links to
the SAN
. . .
My opinion is that if, when the array got really loaded up, everything slowed
down evenly, users wouldn't mind or notice much. But when about one in every
20 reads/writes gets delayed by tens of seconds, the users start to line up at
my door.
Hmm, I have no experience with iSCSI yet. But the behavior of our T2000
file/NFS server, connected via a 2Gbit Fibre Channel SAN, is exactly as
you describe when our HDS SATA array gets behind. Access to other
ZFS pools remains unaffected, but any access to the busy pool just
hangs. Some Oracle apps on NFS clients die due to excessive delays.
In our case, this old HDS array's SATA shelves have a very limited queue
depth (four per RAID controller) in the "back end" loop, plus every write
is hit with the added overhead of an in-array read-back verification.
Maybe your iSCSI setup injects enough latency at higher loads to produce
the same kind of backlog our FC queue limitations do.
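That queue-depth angle is worth checking here too. An easy first look, while
the pool is acting up, would be plain iostat on the T2000, since the iSCSI
LUNs just show up as ordinary disk devices on the initiator (the "tank" pool
name below is only a stand-in for the real one):

  # Extended per-device stats, 5-second samples.
  iostat -xn 5
  # Columns to watch: actv (commands outstanding on the device), wait
  # (queued in the host), and wsvc_t/asvc_t (avg wait/active service
  # times in ms). If actv pins at a small constant while wsvc_t climbs,
  # the LUN's command queue is the choke point rather than the network.

  # Rough per-vdev view of the same load from the ZFS side:
  zpool iostat -v tank 5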
The iSCSI array has 2GB of RAM as a cache, and writes to cache complete very
fast. I'm not sure yet, but I'd love to get some metering going on this
guy to find out whether it's really the reads that cause the issue. It
seems like heavy random read loads are when things break down, but I'm
not totally sure.
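Something along these lines (adapted from the stock io-provider examples in
the DTrace docs; I haven't run it against this box yet, so treat it as a
sketch) should split the latency out into read and write histograms:

  # Per-I/O latency, read vs. write, in millisecond buckets.
  # Needs root; Ctrl-C prints the aggregations.
  dtrace -n '
    io:::start
    {
        start[args[0]->b_edev, args[0]->b_blkno] = timestamp;
    }

    io:::done
    /start[args[0]->b_edev, args[0]->b_blkno]/
    {
        @[args[0]->b_flags & B_READ ? "read (ms)" : "write (ms)"] =
            quantize((timestamp - start[args[0]->b_edev, args[0]->b_blkno]) / 1000000);
        start[args[0]->b_edev, args[0]->b_blkno] = 0;
    }'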
I'll pass on anything I find to the list, 'cause I'm sure there are a lot
of folks with ZFS on a SAN. The flexibility of having the SAN is still
seductive, even though the benefits to ZFS performance with direct-attached
storage are pulling us the other way.
Jon