Re: [docker-dev] comparison of docker overlay network performance

2016-09-20 Thread sgas
So, here is the updated performance comparison table; a sketch of the measurement commands follows it.


End points                         Mtu    NIC vxlan acceleration   Iperf rate (Gbps)   Notes
Host to host                       9000   N/A                      9.63
Host-vxlan to host-vxlan           8950   no                       7.01                Linux vxlan overhead (?)
Veth-ovs-vxlan to vxlan-ovs-veth   8950   no                       6.27                OVS overhead
Veth-lbr-vxlan to vxlan-lbr-veth   8950   no                       4.12                Linux bridge overhead (?)
Docker overlay                     8950   no                       3.96                Linux bridge overhead (?)
Docker overlay                     1450   no                       1.11                Small MTU penalty
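
For context, the rates above are single-stream iperf runs between the two
hosts at the listed MTU. A minimal sketch of the kind of commands involved is
below; the interface name, peer address and duration are placeholders, not
the exact invocations:

    # set the test MTU on the physical NIC (9000 host-to-host, 8950 on the vxlan paths)
    ip link set dev eth0 mtu 9000

    # host A: iperf server
    iperf -s

    # host B: single TCP stream towards host A
    iperf -c 192.168.1.1 -t 30

    # one way to check whether the NIC exposes vxlan (UDP tunnel) segmentation
    # offload; "no" in the acceleration column means it was not in use here
    ethtool -k eth0 | grep udp_tnl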

On Monday, September 19, 2016 at 4:34:10 PM UTC-7, sgas wrote:
>
> Jana, good point. The test setup info follows.
>
> 2 hosts, each with 1 CPU (Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz) with
> 32 cores, and 10 Gbps ixgbe NICs.
>
> I ran a comparison of Linux bridge and OVS in the following setup:
>
> iperf---veth1---veth2---[bridge]---vxlan---phy ...(LAN)... phy---vxlan---[bridge]---veth2---veth1---iperf
>
>
> The performance is as follows:
>
> OVS:          6.27 Gbits/sec
> Linux bridge: 4.12 Gbits/sec
>
> Linux bridge is only about two-thirds as performant as OVS!
>
>
>
> On Sunday, September 18, 2016 at 9:54:04 PM UTC-7, Jana Radhakrishnan 
> wrote:
>>
>> Good info. These performance numbers can be understood better if details 
>> about the test hardware and the test setup are provided. But yes, if you are 
>> using iperf-style tools, then the single-core performance of forwarding 
>> through a bridge maxes out at 700-750 Kpps (the size of the packet doesn't 
>> matter much for bridge forwarding), because it needs to perform forwarding 
>> lookups, which are always the long pole in any kind of packet performance 
>> test. Again, the bridge forwarding limit is determined by how fast the core 
>> is, and a single core will never be able to saturate a 10G link when 
>> forwarding lookups are involved, even with the fastest cores available today.
>>
>> But single-core performance is not the whole story in terms of how this 
>> data needs to be interpreted. If you run a multi-core saturation test with 
>> all cores being put to use to send Docker overlay traffic, you can in fact 
>> saturate your 10G link. So as long as there are enough cores available, 
>> there are enough containers that can make use of those cores, and we can 
>> saturate that 10G link, we are good. The only time this would be a problem 
>> is on truly single-core (or low-core-count) hardware, or when there is a 
>> single application that doesn't effectively utilize all the cores but 
>> requires all the bandwidth that the host network offers. 
>>
>> On Sun, Sep 18, 2016 at 5:59 PM sgas  wrote:
>>
>>> I have collected a summary of my findings so far on docker overlay 
>>> network performance. The table follows. As is obvious, there is a 
>>> significant performance penalty from Linux vxlan and bridge. 
>>>
>>>
>>> End points                 Mtu    NIC vxlan acceleration   Rate (Gbps)   Notes
>>> Host to host               9000   N/A                      9.63
>>> Host-vxlan to host-vxlan   8950   no                       7.01          Linux vxlan overhead (?)
>>> Docker overlay             8950   no                       3.96          Linux bridge overhead (?)
>>> Docker overlay             1450   no                       1.11          Small MTU penalty
>>>
>>>
>>>
>>> Improving performance in both Linux vxlan and bridge, in addition to 
>>> offloading vxlan to the NIC, should also improve results. 
>>>
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"docker-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to docker-dev+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [docker-dev] comparison of docker overlay network performance

2016-09-19 Thread sgas
Jana, good point. The test setup info follows.

2 hosts, each with 1 CPU (Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz) with
32 cores, and 10 Gbps ixgbe NICs.

I ran a comparison of Linux bridge and OVS in the following setup:

iperf---veth1---veth2---[bridge]---vxlan---phy ...(LAN)... phy---vxlan---[bridge]---veth2---veth1---iperf


The performance is as follows:

OVS:          6.27 Gbits/sec
Linux bridge: 4.12 Gbits/sec

Linux bridge is only about two-thirds as performant as OVS!
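
For anyone trying to reproduce this, a rough sketch of the Linux-bridge
variant on one host follows. Device names, addresses, the VNI and the vxlan
port are placeholders, and the exact original setup may differ slightly (here
a network namespace is used to force iperf traffic through the veth/bridge
path); the second host mirrors this, and the OVS run swaps the bridge for the
ovs-vsctl equivalents noted at the end.

    # veth pair: veth1 is the iperf endpoint (moved into a namespace), veth2 plugs into the bridge
    ip netns add iperf-ns
    ip link add veth1 type veth peer name veth2
    ip link set veth1 netns iperf-ns
    ip netns exec iperf-ns ip addr add 10.10.10.1/24 dev veth1
    ip netns exec iperf-ns ip link set veth1 up

    # vxlan device carried over the physical 10G NIC (VNI, remote and port are placeholders)
    ip link add vxlan0 type vxlan id 42 dev eth0 remote 192.168.1.2 dstport 4789

    # plain Linux bridge tying veth2 and vxlan0 together
    ip link add br0 type bridge
    ip link set veth2 master br0
    ip link set vxlan0 master br0
    ip link set veth2 up
    ip link set vxlan0 up
    ip link set br0 up

    # run iperf from inside the namespace so traffic takes veth -> bridge -> vxlan -> NIC
    ip netns exec iperf-ns iperf -c 10.10.10.2 -t 30

    # OVS variant: replace the Linux bridge commands above with
    #   ovs-vsctl add-br br0
    #   ovs-vsctl add-port br0 veth2
    #   ovs-vsctl add-port br0 vx0 -- set interface vx0 type=vxlan options:remote_ip=192.168.1.2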



On Sunday, September 18, 2016 at 9:54:04 PM UTC-7, Jana Radhakrishnan wrote:
>
> Good info. These performance numbers can be understood better if details 
> about the test hardware and the test setup are provided. But yes, if you are 
> using iperf-style tools, then the single-core performance of forwarding 
> through a bridge maxes out at 700-750 Kpps (the size of the packet doesn't 
> matter much for bridge forwarding), because it needs to perform forwarding 
> lookups, which are always the long pole in any kind of packet performance 
> test. Again, the bridge forwarding limit is determined by how fast the core 
> is, and a single core will never be able to saturate a 10G link when 
> forwarding lookups are involved, even with the fastest cores available today.
>
> But single-core performance is not the whole story in terms of how this 
> data needs to be interpreted. If you run a multi-core saturation test with 
> all cores being put to use to send Docker overlay traffic, you can in fact 
> saturate your 10G link. So as long as there are enough cores available, 
> there are enough containers that can make use of those cores, and we can 
> saturate that 10G link, we are good. The only time this would be a problem 
> is on truly single-core (or low-core-count) hardware, or when there is a 
> single application that doesn't effectively utilize all the cores but 
> requires all the bandwidth that the host network offers. 
>
> On Sun, Sep 18, 2016 at 5:59 PM sgas wrote:
>
>> I have collected a summary of my findings so far on docker overlay 
>> network performance. The table follows. As is obvious, there is a 
>> significant performance penalty from Linux vxlan and bridge. 
>>
>>
>> End points                 Mtu    NIC vxlan acceleration   Rate (Gbps)   Notes
>> Host to host               9000   N/A                      9.63
>> Host-vxlan to host-vxlan   8950   no                       7.01          Linux vxlan overhead (?)
>> Docker overlay             8950   no                       3.96          Linux bridge overhead (?)
>> Docker overlay             1450   no                       1.11          Small MTU penalty
>>
>>
>>
>> Improving performance in both Linux vxlan and bridge, in addition to 
>> offloading vxlan to the NIC, should also improve results. 
>>
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"docker-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to docker-dev+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [docker-dev] comparison of docker overlay network performance

2016-09-18 Thread 'Jana Radhakrishnan' via docker-dev
Good info. These performance numbers can be understood better if details
about the test hardware and the test setup are provided. But yes, if you are
using iperf-style tools, then the single-core performance of forwarding
through a bridge maxes out at 700-750 Kpps (the size of the packet doesn't
matter much for bridge forwarding), because it needs to perform forwarding
lookups, which are always the long pole in any kind of packet performance
test. Again, the bridge forwarding limit is determined by how fast the core
is, and a single core will never be able to saturate a 10G link when
forwarding lookups are involved, even with the fastest cores available today.

But single-core performance is not the whole story in terms of how this data
needs to be interpreted. If you run a multi-core saturation test with all
cores being put to use to send Docker overlay traffic, you can in fact
saturate your 10G link. So as long as there are enough cores available,
there are enough containers that can make use of those cores, and we can
saturate that 10G link, we are good. The only time this would be a problem
is on truly single-core (or low-core-count) hardware, or when there is a
single application that doesn't effectively utilize all the cores but
requires all the bandwidth that the host network offers.
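
To make the multi-core point concrete, a saturation run could look roughly
like the sketch below, assuming an overlay network named "ov-net" already
spans both hosts; the image name and the choice of eight container pairs are
placeholders, not a recommendation.

    # host A: eight iperf servers, one per container, attached to the overlay network
    for i in $(seq 1 8); do
      docker run -d --name iperf-srv-$i --net=ov-net my-iperf-image iperf -s
    done

    # host B: matching clients; the embedded DNS on the overlay network resolves the server names
    for i in $(seq 1 8); do
      docker run -d --name iperf-cli-$i --net=ov-net my-iperf-image iperf -c iperf-srv-$i -t 30
    done

The interesting number is the aggregate of the eight streams, which is what
can approach line rate on the 10G link.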

On Sun, Sep 18, 2016 at 5:59 PM sgas  wrote:

> I have collected a summary of my findings so far on docker overlay
> network performance. The table follows. As is obvious, there is a
> significant performance penalty from Linux vxlan and bridge.
>
>
> End points                 Mtu    NIC vxlan acceleration   Rate (Gbps)   Notes
> Host to host               9000   N/A                      9.63
> Host-vxlan to host-vxlan   8950   no                       7.01          Linux vxlan overhead (?)
> Docker overlay             8950   no                       3.96          Linux bridge overhead (?)
> Docker overlay             1450   no                       1.11          Small MTU penalty
>
>
>
> Improving performance in both Linux vxlan and bridge, in addition to
> offloading vxlan to the NIC, should also improve results.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"docker-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to docker-dev+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.