I'm the original author.  I just answered most of your
questions on your help ticket, but it may be a while before
our support team passes my reply on.

On 04/21/11 17:55, Don wrote:
> I have several OpenIndiana b147 boxes serving as NAS heads with a dual-port
> Myricom 10G NIC in each head.
>
> The 10G network has been used for testing and we've been gathering performance
> numbers. We're ready to enable jumbo frames, but I don't see a
> /kernel/drv/myri10ge.conf file in which to make the changes. Anyone know why
> the file might be missing? I don't see it on any of the OI boxes I've checked.

The syntax is as you guessed:

myri10ge_mtu_override=9000;
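As a concrete sketch (the file path matches the standard Solaris driver-config location; the dladm check is just one way to verify, and the interface names will differ per box):

```shell
# Create the driver config by hand if the package didn't install one.
# The file is read at driver attach, so reboot (or unload/reload the
# driver) for the new MTU to take effect.
cat > /kernel/drv/myri10ge.conf <<'EOF'
myri10ge_mtu_override=9000;
EOF

# After reboot, confirm the myri10ge links came up with MTU 9000
dladm show-link
```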

> Can anyone recommend network tuning parameters that we should consider using
> for a NAS box serving as an iSCSI target with dual 10G interfaces for 25 ESX
> servers (1G interfaces)?

> Myricom has several recommendations:
> /etc/system:
> set ddi_msix_alloc_limit=8
> set pcplusmp:apic_multi_msi_max=8
> set pcplusmp:apic_msix_max=8
> set pcplusmp:apic_intr_policy=1

I believe those last 3 (pcplusmp:*) are no longer required in this
version of OpenSolaris, but check the web to confirm.  See
http://www.solarisinternals.com/wiki/index.php/Networks
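If you do keep the MSI-X tuning, here is a quick sanity check to run after reboot (the ::interrupts dcmd is standard mdb on x86 OpenSolaris, but its exact output format varies by build):

```shell
# Append the tunable to /etc/system (takes effect at next boot)
echo 'set ddi_msix_alloc_limit=8' >> /etc/system

# After reboot, verify the myri10ge instances were actually granted
# multiple MSI-X vectors
echo '::interrupts -d' | mdb -k | grep myri10ge
```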


> /kernel/drv/myri10ge.conf:
> myri10ge_bigbufs_initial=4096;
> myri10ge_bigbufs_max=32768;
>
> Might be helpful:
> myri10ge_lro=1;
> myri10ge_lro_max_aggr=2;

I think b147 has fixed the 2-mblk chain limit that forces packets
through a slow path in TCP, so you can probably increase
myri10ge_lro_max_aggr to 8.

Note that you can play with this at runtime via ndd.
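For example (the device node name /dev/myri10ge0 and the runtime writability of each parameter are assumptions; adjust the instance number for your port):

```shell
# Read the current LRO aggregation limit
ndd -get /dev/myri10ge0 myri10ge_lro_max_aggr

# Raise it without a reboot; write the old value back to revert.
# Mirror any keeper into myri10ge.conf so it survives reboot.
ndd -set /dev/myri10ge0 myri10ge_lro_max_aggr 8
```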

> For Low Latency:
> myri10ge_use_msix=0;
> myri10ge_intr_coal_delay=0;
>
> Any opinions on:
> myri10ge_max_slices=1;
>
> There are 20-odd ESX servers accessing this host at the same time - would
> additional slices be useful?


Yes.  The purpose of the above (ddi_msix_alloc_limit=8) is to allow the
driver to allocate up to 8 MSI-X vectors, for 8 slices (tx/rx queue
pairs).   Some of these proposed settings (myri10ge_use_msix=0;
myri10ge_max_slices=1;) disable multiple slices, and negate the
ddi_msix_alloc_limit tuning.  So do one or the other, but not both :)
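In other words, a bandwidth-oriented setup would look something like this (values taken from this thread; myri10ge_max_slices=8 is my assumption for "one slice per MSI-X vector" - check the driver's accepted values before relying on it):

```
/etc/system:
set ddi_msix_alloc_limit=8

/kernel/drv/myri10ge.conf:
myri10ge_use_msix=1;
myri10ge_max_slices=8;
```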

Most of the "low latency" tuning suggestions you found are for workloads
like HFT, where every microsecond matters.  For a fileserver, I'd
suggest optimizing for CPU utilization (e.g., bandwidth).

Drew

_______________________________________________
networking-discuss mailing list
networking-discuss@opensolaris.org
