In the offloading-to-NIC discussion, none of the vswitch functionality is
offloaded to the NIC, only TCP checksum, LSO, LRO, inbound load spreading, etc.
(SR-IOV would be a different story, although that has very limited
deployment).
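
For anyone who wants to see which of these stateless offloads their NICs
actually advertise, here is a minimal sketch (illustrative only; it just
parses `ethtool -k` output, and the "eth0" interface name is a placeholder):

import subprocess

# Stateless offloads mentioned above, as ethtool names them.
OFFLOADS = (
    "tx-checksumming",           # TCP/IP checksum offload
    "tcp-segmentation-offload",  # LSO/TSO
    "large-receive-offload",     # LRO
    "receive-hashing",           # inbound load spreading (RSS hashing)
)

def offload_status(ifname="eth0"):
    """Parse `ethtool -k IFNAME` and return {feature: "on"/"off"}."""
    out = subprocess.check_output(["ethtool", "-k", ifname]).decode()
    status = {}
    for line in out.splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip() in OFFLOADS:
            status[key.strip()] = value.split()[0]
    return status

if __name__ == "__main__":
    for feature, state in sorted(offload_status().items()):
        print("%-28s %s" % (feature, state))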

From: Haoweiguo [mailto:[email protected]]
Sent: Thursday, August 30, 2012 3:02 AM
To: Somesh Gupta; [email protected]
Subject: Re: [nvo3] performance limitations with virtual switch as the nvo3 end point


Offloading to the NIC is a simple solution that doesn't need a signaling
protocol between the NVE and the TES, so it won't have a large impact on the
architecture of NVO3.

But offloading to the ToR has its own specific advantages: a ToR switch can
implement better ACL/QoS and similar features than the NIC.

So I think it's not yet clear whether we should offload to the NIC or to the ToR.

Regards

Weiguo

________________________________
From: Somesh Gupta [[email protected]]
Sent: August 30, 2012 8:19
To: [email protected]
Subject: Re: [nvo3] performance limitations with virtual switch as the nvo3 end point
I assume that the majority of the NIC vendors will support the stateless
offloads for VXLAN and NVGRE by sometime next year, so they should all be on
equal footing from that point of view.

The additional overhead of encap/decap, compared to the overhead of copying
data between the VM and the hypervisor, should be minimal.
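
A back-of-envelope illustration of why (my numbers, not a measurement from
this thread): VXLAN encapsulation writes about 50 bytes of outer headers per
frame (14 outer Ethernet + 20 IP + 8 UDP + 8 VXLAN), while one copy between
the VM and the hypervisor touches the whole frame:

# Rough comparison of bytes touched per frame: VXLAN outer headers
# vs. one copy of the frame between VM and hypervisor.
OUTER_ETH, OUTER_IP, OUTER_UDP, VXLAN = 14, 20, 8, 8
ENCAP = OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN  # 50 bytes of new headers

for frame in (64, 512, 1500, 9000):  # typical frame sizes, incl. jumbo
    print("frame %5d B: encap writes %2d B = %4.1f%% of one frame copy"
          % (frame, ENCAP, 100.0 * ENCAP / frame))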

From: [email protected] [mailto:[email protected]] On Behalf Of Smith,
Erik
Sent: Wednesday, August 29, 2012 3:48 PM
To: David LE GOFF; [email protected]
Subject: Re: [nvo3] performance limitations with virtual switch as the nvo3 end 
point

Hi David, a few months ago we did some basic performance testing with OVS and 
were pretty happy with the results.  For one reason or another we were under 
the impression that using OVS to encap/decap would limit our total throughput 
to 4-6 Gbps and this turned out to not be the case.  In our configuration, we 
were able to demonstrate 20 Gbps over a bonded pair of 10GbE NICs using STT for 
the overlay.  Our testing wasn’t exactly scientific, but I also found an 
interesting blog post by Martin Casado that our limited testing seems to 
corroborate.
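
For reference, the OVS side of such a setup is small. A hedged sketch,
assuming an OVS build with STT support; the bridge name, port name, and
remote IP below are placeholders, not our actual configuration:

import subprocess

def vsctl(*args):
    """Run an ovs-vsctl command, echoing it first."""
    cmd = ("ovs-vsctl",) + args
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

vsctl("add-br", "br-int")                      # integration bridge
vsctl("add-port", "br-int", "stt0",            # STT tunnel to the peer NVE
      "--", "set", "interface", "stt0",
      "type=stt", "options:remote_ip=192.0.2.10")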

I haven’t done any testing with VMware and VXLAN.  However, if you’re 
experiencing limited performance with OVS on <insert your favorite Linux distro 
here>, I would suggest playing around with Jumbo frames (starting from within 
the guest) and working your way out to the physical interfaces.
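
As an illustrative starting point (interface names are placeholders, and the
exact values depend on your NICs and overlay), raising the MTU with iproute2
from the guest outward looks something like this:

import subprocess

def set_mtu(ifname, mtu):
    """Set an interface MTU with iproute2."""
    subprocess.check_call(["ip", "link", "set", "dev", ifname, "mtu", str(mtu)])

# Inside the guest: leave headroom for the overlay's outer headers.
set_mtu("eth0", 8950)

# On the hypervisor (run there, not in the guest): the physical pair.
set_mtu("p1p1", 9000)
set_mtu("p1p2", 9000)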

For additional information, refer to the following:

1)      Martin Casado’s blog: 
http://networkheresy.com/2012/06/08/the-overhead-of-software-tunneling/

2)      I posted something a bit less detailed (but with diagrams) to my blog 
earlier this week: 
http://brasstacksblog.typepad.com/brass-tacks/2012/08/network-virtualization-networkings-21st-century-equivalent-to-the-space-race.html
See in particular the final three diagrams.

Erik

From: [email protected] [mailto:[email protected]] On Behalf Of David 
LE GOFF
Sent: Wednesday, August 29, 2012 9:16 AM
To: [email protected]
Subject: [nvo3] performance limitations with virtual switch as the nvo3 end 
point

Hi Folks,

Has anyone experienced performance limitations in the lab with the virtual 
switch function as the bottleneck when dealing with network overlays?
I mean that with the tunnel end point located on the hypervisor (virtual 
switch), setting up tagging, QoS, ACLs, encryption/decryption, etc. requires 
significant CPU resources.

I know there is no official nvo3 implementation yet, though vSphere 5 recently 
announced VXLAN support; but if any studies have been done, I would be glad to 
read them.
I know STT was built to overcome such challenges thanks to NIC offload 
capabilities…

Such studies may also drive the brainstorming about which protocol we should 
use or build.

Thank you!

david le goff.