Gary,

>> Sidenote: Today we made eight network/iSCSI related tweaks that, in
>> aggregate, have resulted in dramatic performance improvements (some I
>> just hadn't gotten around to yet, others suggested by Sun's Mertol
>> Ozyoney)...
>> - disabling the Nagle algorithm on the head node
>> - setting each iSCSI target block size to match the ZFS record size
>>   of 128K
>> - disabling "thin provisioning" on the iSCSI targets
>> - enabling jumbo frames everywhere (each switch and NIC)
>> - raising ddi_msix_alloc_limit to 8
>> - raising ip_soft_rings_cnt to 16
>> - raising tcp_deferred_acks_max to 16
>> - raising tcp_local_dacks_max to 16
>
> Can you tell us which of those changes made the most dramatic
> improvement?
>> - disabling the Nagle algorithm on the head node

This will have a dramatic effect on most I/Os, except for large
sequential writes.

>> - setting each iSCSI target block size to match the ZFS record size
>>   of 128K
>> - enabling jumbo frames everywhere (each switch and NIC)

These will have a positive effect on large writes, both sequential and
random.

>> - disabling "thin provisioning" on the iSCSI targets

This only has a benefit for file-based or dsk-based backing stores. If
one uses rdsk backing stores of any type, this is not an issue.

Jim

> I have a similar situation here, with a 2-TB ZFS pool on a T2000 using
> iSCSI to a NetApp file server. Is there any way to tell in advance if
> any of those changes will make a difference? Many of them seem to be
> server resources. How can I determine their current usage?
>
> --
> -Gary Mills-  -Unix Support-  -U of M Academic Computing and Networking-
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Jim Dunham
Storage Platform Software Group
Sun Microsystems, Inc.
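[Editorial note: to Gary's question about determining current usage, the
tunables discussed in this thread can generally be inspected before
changing anything. The sketch below is illustrative only: it assumes
Solaris 10 / OpenSolaris, the exact parameter names and the right
mechanism (ndd versus /etc/system versus dladm) vary by release, and the
values shown are the ones from the thread, not recommendations. Check
the Solaris Tunable Parameters Reference Manual for your release.]

```shell
# Inspect current TCP/IP tunables (run as root; ndd settings are
# runtime-only and do not survive a reboot):
ndd -get /dev/tcp tcp_naglim_def          # a value of 1 effectively disables Nagle
ndd -get /dev/tcp tcp_deferred_acks_max
ndd -get /dev/tcp tcp_local_dacks_max

# Raise the deferred-ACK tunables at runtime, as in the thread:
ndd -set /dev/tcp tcp_deferred_acks_max 16
ndd -set /dev/tcp tcp_local_dacks_max 16

# Kernel variables such as ddi_msix_alloc_limit can be read live with mdb:
echo 'ddi_msix_alloc_limit/D' | mdb -k

# Persistent settings go in /etc/system and take effect after reboot,
# e.g. (syntax per the Tunable Parameters manual; module prefix for
# ip_soft_rings_cnt is an assumption to verify for your release):
#   set ddi_msix_alloc_limit=8
#   set ip:ip_soft_rings_cnt=16

# Jumbo frames: on releases with dladm link properties, the current MTU
# can be checked per link (replace e1000g0 with your interface):
dladm show-linkprop -p mtu e1000g0
```

Since these commands require a Solaris kernel and root privilege, they
are presented as a reading aid rather than a tested recipe.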