Advertised media rate does not equal actual link rate. UCS vNICs, for
example, advertise 40 Gbit/s but can barely push 7 Gbit/s.
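One quick way to see that gap is to compare what the driver negotiates against what you can actually move. A minimal sketch, assuming the interface is named eth0 and a remote iperf server is already running (substitute your actual vNIC name and peer address):

```shell
# Hypothetical interface name; replace eth0 with your vNIC.
# What the NIC/driver claims (negotiated link rate):
ethtool eth0 | grep -i speed

# What you can actually push to a peer (start `iperf -s` there first):
iperf -c <remote-host> -t 30
```

The first number is the media rate; the second is the real-world ceiling after CPU, bus, and driver overhead.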
On 9 May 2016 at 16:50, Edward Bond wrote:
> The one you are referencing is to the loopback device on the VM, not VM to
> VM.
>
>
The one you are referencing is to the loopback device on the VM, not VM to
VM.
recap:
VM loopback (inside the VM) = 26 Gbit/s
VM to VM                    = limited to 10 Gbit/s (actual data ~5 Gbit/s per 10G link on host)
Host to Host                = 18 Gbit/s (actual data ~9 Gbit/s per 10G link on host)
Host bond0 link speed       = 20 Gbit/s
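The recap numbers above can be reproduced with iperf. A sketch of the three measurements, assuming an iperf server (`iperf -s`) is already running at each target and the addresses are placeholders; note that with most bonding hash policies a single TCP flow only lands on one slave, so parallel streams are needed to exceed a single 10G link:

```shell
# 1. Loopback inside the VM (no NIC involved; CPU/memory bound):
iperf -c 127.0.0.1

# 2. VM to VM (crosses the virtual switch, possibly the physical link):
iperf -c <other-vm-ip>

# 3. Host to host over bond0; -P 2 runs two parallel streams so both
#    bonded 10G links can be used at once:
iperf -c <other-host-ip> -P 2
```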
Please see my responses inline, prefixed by [SL].
> On May 8, 2016, at 4:35 PM, ed bond wrote:
>
> Scott,
>
> I agree. I am not expecting that.
>
>
>
> When I noticed Scenario 1, I looked at the openvswitch virtual ethernet
> device, it only has 10gbs set to the
Please see my response below.
> On May 7, 2016, at 4:47 AM, ed bond wrote:
>
> Hello all,
>
> I was hoping someone might be able to help me diagnose what might be going on.
>
> Right now I have a bond0 interface setup with jumbo packets. I can get
> 18gigabit/s
Inside vm:
ubuntu@3-885bd897-3655-42ab-b0b9-a97b31826f88:~$ iperf -c 127.0.0.1
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 2.50 MByte (default)
[ 3] local
What can you get over loopback in the VM? It's likely you're hitting a
CPU/bus-bound limit.
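One quick way to test the CPU-bound theory is to run several parallel streams and see whether the aggregate rises. A sketch using iperf's -P flag:

```shell
# Single stream: throughput is capped by one core pushing packets.
iperf -c 127.0.0.1 -t 20

# Four parallel streams: if the aggregate climbs well above the
# single-stream number, the bottleneck is per-stream CPU, not the link.
iperf -c 127.0.0.1 -t 20 -P 4
```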
On 7/05/2016 06:47, "ed bond" wrote:
> Hello all,
>
> I was hoping someone might be able to help me diagnose what might be going
> on.
>
> Right now I have a bond0 interface setup
Hello all,
I was hoping someone might be able to help me diagnose what might be
going on.
Right now I have a bond0 interface set up with jumbo packets. I can get
18 Gbit/s throughput to a single host. However, inside the VMs I am limited to
10 Gbit/s. The VMs have