Docs say the backplane is 320 gigabit per second. They're stacked with
2 x QSFP+ cables in the stacking modules.

We're about 2/3 full on the Ethernet ports, and have used only 4 of
the 8 10Gbit ports.

Kurt

On Wed, May 31, 2017 at 5:23 PM, Don Ely <[email protected]> wrote:
> What's the backplane speed of the Junipers?  All ports in use?
>
> On May 31, 2017 5:21 PM, "Kurt Buff" <[email protected]> wrote:
>>
>> We're on vSphere 6.
>>
>> But it seems unlikely that the vmxnet3 adapter is at the root of this,
>> as the hosts and VMs are well-established (almost 3 years), and the
>> upgrade from 5.5 took place over a year ago.
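>>
>> For reference, here's a quick PowerCLI way to confirm which adapter
>> type each VM is actually using (a sketch; assumes an existing vCenter
>> connection):
>>
>>      Get-VM | Get-NetworkAdapter | Select-Object Parent, Name, Type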
>>
>> The only major change adjacent to this problem involved moving to the
>> Nimble, and migrating all of the VMs away from the EMC VNX5400 and
>> VNXe3100. They (all SANs and all hosts) are connected to a stacked
>> pair of Juniper EX4300s, but we did add in 4-port 10G SFP+ modules
>> and cabled the Nimble to those.
>>
>> Even then, it wasn't until we were a couple of weeks into the
>> migration that we started seeing this problem.
>>
>> I'm willing to believe that it's the Junipers, but I want to get
>> VMware sussed out before I head there.
>>
>> I say that because I haven't yet deleted the VMNICs for the EMCs - we
>> kept the same VLAN, but migrated the address space in the VLAN (it's
>> isolated) from 10.10.0.0/24 to 10.211.10.0/24, as the 10.10.0.0/24
>> space took a chunk out of our lab's address space.
>>
>> Kurt
>>
>> On Wed, May 31, 2017 at 4:55 PM, Don Ely <[email protected]> wrote:
>> > What version of vSphere?  There are some known issues with the vmxnet3
>> > adapter
>> >
>> > On May 31, 2017 4:51 PM, "Kurt Buff" <[email protected]> wrote:
>> >>
>> >> Update - still not solved:
>> >>
>> >> Got on a call with a MSFT rep. He ran a quick shell script that did
>> >> the things I've already done:
>> >>
>> >>      netsh int tcp set global chimney=disabled
>> >>      netsh int tcp set global rss=disabled
>> >>      netsh int tcp set global autotuninglevel=disabled
>> >>      netsh int tcp set global congestionprovider=none
>> >>      netsh int tcp set global netdma=disabled
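>> >>
>> >> To confirm the resulting state, all of these settings can be dumped
>> >> with:
>> >>
>> >>      netsh int tcp show global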
>> >>
>> >> I've put him off for now, as I'm seeing what might be a related
>> >> problem crop up - the ancient CRM we're using has been spouting errors
>> >> all day about not being able to write to the database.
>> >>
>> >> I've looked at CPU ready on both machines, and the file server's is
>> >> pretty bad, but the other server's isn't. That's after migrating them
>> >> to a single host together, and migrating everything else off that host
>> >> - just those two VMs on this host. I've also looked at performance
>> >> charts in VMware for both machines regarding disk and network, and am
>> >> not seeing anything out of line.
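>> >>
>> >> In case it helps anyone following along, here's roughly how the CPU
>> >> ready numbers can be pulled with PowerCLI (a sketch; assumes a live
>> >> vCenter connection, and 'fileserver01' stands in for the real VM
>> >> name):
>> >>
>> >>      # Realtime samples cover 20-second intervals, and
>> >>      # cpu.ready.summation is milliseconds of ready time,
>> >>      # so percent = ms / 20000 * 100.
>> >>      Get-Stat -Entity (Get-VM 'fileserver01') `
>> >>          -Stat cpu.ready.summation -Realtime |
>> >>        Select-Object Timestamp, Instance,
>> >>          @{N='ReadyPct'; E={[math]::Round($_.Value / 20000 * 100, 2)}}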
>> >>
>> >> I'm trying to install the VMware Support Assistant appliance, but am
>> >> running into problems with SSO auth - the vSphere infrastructure was
>> >> upgraded from 5.5 to 6.0, and it looks like I have a project ahead of
>> >> me to fix the SSL certs, which this post seems to cover:
>> >>
>> >>
>> >> https://virtuallyunderstood.wordpress.com/2016/08/03/troubleshooting-expired-psc-certificates-with-vsphere-6/
>> >>
>> >> Further, I've checked with Nimble support, and they say that there is
>> >> some latency, but that their tools indicate that it is external to the
>> >> array - they're pointing at vSphere or the network, and suggesting I
>> >> should fail over the array to its other interface to see if that
>> >> clears the problem. I'm saving that for later.
>> >>
>> >> I'm also going to see about setting up a machine to monitor the
>> >> server/iSCSI switch to which the hosts and SANs are attached - what
>> >> I'm seeing in PRTG for that doesn't give me what I want.
>> >>
>> >> It just goes deeper and deeper...
>> >>
>> >> I've got a ticket open with VMware now.
>> >>
>> >> Kurt
>> >>
>> >> On Fri, May 26, 2017 at 12:20 PM, Kurt Buff <[email protected]>
>> >> wrote:
>> >> > All,
>> >> >
>> >> > I have a 2012R2 file server running as a VM on vSphere 6.0.
>> >> >
>> >> > Here's what I'm seeing:
>> >> >
>> >> > Copying a large file (a Win7 ISO) from the file server to a
>> >> > workstation, I get roughly 12-13 MBytes/second, wired or wireless.
>> >> >
>> >> > Copying that file from the workstation to the server over a wireless
>> >> > connection, I get the same speed - 12-13 MBytes/second.
>> >> >
>> >> > Copying that file from the workstation to the server over a wired
>> >> > connection, the speed degrades to 1 MByte/second or less.
>> >> >
>> >> > Copying that file to another 2012R2 VM on the same host and same SAN
>> >> > volume (our print server), speeds are 12-13 MBytes/second for both
>> >> > wired and wireless.
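>> >> >
>> >> > To separate raw TCP throughput from SMB, I may also run iperf3
>> >> > between the workstation and the server - a rough sketch, assuming
>> >> > iperf3 binaries on both ends ('fileserver01' is a placeholder):
>> >> >
>> >> >      # on the server
>> >> >      iperf3 -s
>> >> >      # on the workstation: run 10 seconds, report every second
>> >> >      iperf3 -c fileserver01 -t 10 -i 1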
>> >> >
>> >> > I've made sure that the following are disabled: RSS, atime, 8.3
>> >> > filename generation, TCP Chimney.
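>> >> >
>> >> > (For anyone checking the same settings, the standard query commands,
>> >> > as I understand them, are:
>> >> >
>> >> >      netsh int tcp show global
>> >> >      fsutil behavior query disablelastaccess
>> >> >      fsutil behavior query disable8dot3
>> >> >
>> >> > with disablelastaccess = 1 and disable8dot3 = 1 being the "off"
>> >> > states.)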
>> >> >
>> >> > RAM and CPU utilization on this machine are well within limits.
>> >> >
>> >> > I'm thoroughly stumped.
>> >> >
>> >> > Anyone have pointers for me? I'm about to raise a case with MSFT.
>> >> >
>> >> > Kurt
>> >>
>> >>
>> >
>>
>>
>

