In a Docker classic environment, with 16 VM instances.
Sam wrote on Thu, Nov 15, 2018 at 10:33 AM:
> Maybe I am mistaken; my test result is:
> to reach line speed on a 10G port (Intel 10G NIC), I use 8 cores for the
> Linux bridge, but I have to use 10 cores for OVS.
> Same scenario.
>
> I don't know whether anyone has test results, or
Maybe I am mistaken; my test result is:
to reach line speed on a 10G port (Intel 10G NIC), I use 8 cores for the
Linux bridge, but I have to use 10 cores for OVS.
Same scenario.
Does anyone have test results, or an official test report?
Guru Shetty wrote on Thu, Nov 15, 2018 at 1:24 AM:
>
>
> On Tue, 13 Nov 2018 at
> On Nov 13, 2018, at 5:25 PM, Russell Bryant wrote:
>
> I think this is implied based on the description of how ovn-northd
> would work, but do you expect to make a completely seamless drop-in
> replacement (aside from build-time and run-time dependencies)? All
> parameters would be
Hello all,

Let's say I have a physical interface eth0 that I move under an
OVS bridge br0.

a) What would the performance / throughput impact be as a result of the
physical interface being part of the OVS bridge now? The reason I ask is
that there is probably an extra hop the packet will have to take now to
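For reference, the setup described in the question is typically done with
`ovs-vsctl` and `ip`. A minimal sketch, assuming the bridge name br0 and
interface eth0 from the question; the IP address shown is a placeholder:

```shell
# Create the OVS bridge and move the physical NIC under it.
# Note: eth0 stops being usable as an L3 interface once enslaved;
# its address has to move to br0.
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
ip addr flush dev eth0             # remove the IP from the physical NIC
ip addr add 192.0.2.10/24 dev br0  # placeholder address; use your own
ip link set br0 up
```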
On Tue, 13 Nov 2018 at 21:53, Sam wrote:
> And why does OVS incur such a high CPU cost?
>
My simplistic guess is that you have created a loop in your network with
OVS, or that your SDN flows are inefficient. For a simple setup, there
should really not be much difference in CPU usage between the Linux bridge
and OVS.
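Both suspicions can be checked from the CLI. A sketch of the usual
inspection commands, assuming the bridge is named br0:

```shell
# OpenFlow rules programmed on the bridge, with packet/byte counters:
ovs-ofctl dump-flows br0

# Flows currently cached in the datapath; a very large number of distinct
# entries here (cache misses forcing upcalls) often explains high CPU use:
ovs-appctl dpctl/dump-flows

# Per-port statistics; counters growing rapidly on every port at once can
# indicate a forwarding loop:
ovs-ofctl dump-ports br0
```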
Hi,
When using the OVS actions to push a label stack, the labels get added "in
the middle", between the customer's Ethernet header and the original
payload. This is fine for an L3 service, where the outermost Ethernet
header gets removed when pushing to the tunnel, but for L2 services, the
MPLS stack should