LACP does work - I have used it on HP ProCurve switches, but the settings are
fussy and usually different from what EtherChannel uses.
(http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004048)
Did you try changing the virtual switch settings?
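For reference, the ProCurve side of an LACP trunk is usually a single line like the following (port names are hypothetical and from memory; check your switch's manual and the KB article above):

```
trunk A1-A2 trk1 lacp
```

By contrast, a Cisco EtherChannel feeding an ESXi standard vSwitch has to be static (`channel-group 1 mode on`), since the standard vSwitch does not speak LACP.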
On 2/03/2015 6:11 p.m.,
Hi,
I would try *one* TPG which includes both interface addresses
and I would double check for packet drops on the Catalyst.
The 3560 supports only receive flow control, which means that
a sending 10Gbit port can easily overload a 1Gbit port.
Do you have flow control enabled?
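On the Catalyst side, the flow-control state and error counters can be checked with something like this (IOS commands from memory; exact syntax and output vary by image):

```
show flowcontrol interface gigabitEthernet 0/1
show interfaces gigabitEthernet 0/1 counters errors
```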
- Joerg
Hello Garrett,
No, no 802.3ad going on in this config.
Here is a basic schematic:
https://drive.google.com/file/d/0BwyUMjibonYQVkVqcE5OQUJyUUU/view?usp=sharing
Here is the Nexenta MPIO iSCSI Setup Document that I used as a guide:
Seems like it is indeed a comstar problem. Lockstat analysis might reveal
contended locks or perhaps some kind of timeouts in the code.
Sent from my iPhone
On Mar 2, 2015, at 12:22 AM, W Verb wver...@gmail.com wrote:
On Mar 2, 2015, at 2:07 PM, W Verb via illumos-developer
develo...@lists.illumos.org wrote:
Count indv cuml rcnt     nsec Hottest Lock           Caller
-------------------------------------------------------------------------------
 9306  44%  44% 0.00     1557 htable_mutex+0x370
Here’s a theory. You are using (relatively) small MTUs (3000 is less than the
smallest ZFS block size). So, when you go multipathing this way, might a
single upper-layer transaction (a ZFS block transfer request, or for that matter
a COMSTAR block request) get routed over different paths? This
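Some back-of-envelope arithmetic supports the theory above. Assuming the default 128 KiB ZFS recordsize and roughly 40 bytes of TCP/IP header per frame (illustrative figures, not measured from this setup), one block fans out into many frames at MTU 3000:

```shell
# Rough arithmetic only: frames needed to carry one ZFS block.
# Assumptions: 128 KiB recordsize, ~40 bytes TCP/IP overhead per frame.
zfs_block=131072                      # default ZFS recordsize in bytes
for mtu in 3000 9000; do
  payload=$((mtu - 40))               # usable bytes per frame
  frames=$(( (zfs_block + payload - 1) / payload ))  # ceiling division
  echo "MTU $mtu -> $frames frames per 128K block"
done
```

At MTU 3000 each block spans dozens of frames, so interleaving them across two paths gives out-of-order delivery far more opportunities than jumbo frames would.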
Hello all,
I am not using layer 2 flow control. The switch carries line-rate 10G
traffic without error.
I think I have found the issue via lockstat. The first lockstat is taken
during a multipath read:
lockstat -kWP sleep 30
Adaptive mutex spin: 21331 events in 30.020 seconds (711 events/sec)
Hello,
vmstat seems pretty boring. Certainly nothing going to swap.
root@sanbox:/root# vmstat
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr po ro s0 s2   in   sy   cs us sy id
 0 0 0 34631632 30728068 175 215 0 0 0 0 963 275
I’m not sure I’ve followed properly. You have *two* interfaces. You are not
trying to provision these in an aggr, are you? As far as I’m aware, VMware does
not support 802.3ad link aggregations. (It’s possible that you can make it work
with ESXi if you give the entire NIC to the guest — but