And in fact the gateway on a virtual network being a virtual instance on an x86 machine is quite common (vShield Edge, vLB, vFW) -- especially where the IaaS tenant application architecture has been tiered and organized as a collection of VMs, vLB, vFW and the L2 segments gluing it all together (vApp).
-----Original Message-----
From: Martin Casado [[email protected]]
Sent: Wednesday, August 29, 2012 07:34 PM Central Standard Time
To: Somesh Gupta
Cc: [email protected]; Jon Hudson
Subject: Re: [nvo3] performance limitations with virtual switch as the nvo3 end point

Agree here too. STT is optimized for when both ends are x86 (although one of those ends could certainly be used as a gateway).

On 8/29/12 5:29 PM, Somesh Gupta wrote:

Agree, although some of the NICs can do some or most of the offloads even today. But in my opinion STT introduces reassembly state in the receiver, which would prevent gateway implementations in ASICs -- for a short-term gain.

From: Jon Hudson [mailto:[email protected]]
Sent: Wednesday, August 29, 2012 5:26 PM
To: Somesh Gupta
Cc: [email protected]
Subject: Re: [nvo3] performance limitations with virtual switch as the nvo3 end point

Totally agreed, but that will require all new hardware. One of the gains of STT was the use of current off-the-shelf performance capabilities.

On Wed, Aug 29, 2012 at 5:19 PM, Somesh Gupta <[email protected]> wrote:

I assume that the majority of the NIC vendors will support the stateless offloads for VXLAN and NVGRE by sometime next year -- so they should all be on an equal footing from that point of view. The additional overhead of encap/decap, compared to the overhead of copying data between the VM and the hypervisor, should be minimal.

From: [email protected] [mailto:[email protected]] On Behalf Of Smith, Erik
Sent: Wednesday, August 29, 2012 3:48 PM
To: David LE GOFF; [email protected]
Subject: Re: [nvo3] performance limitations with virtual switch as the nvo3 end point

Hi David, a few months ago we did some basic performance testing with OVS and were pretty happy with the results.
For one reason or another we were under the impression that using OVS to encap/decap would limit our total throughput to 4-6 Gbps, and this turned out not to be the case. In our configuration we were able to demonstrate 20 Gbps over a bonded pair of 10GbE NICs using STT for the overlay. Our testing wasn't exactly scientific, but I also found an interesting blog post by Martin Casado that our limited testing seems to corroborate. I haven't done any testing with VMware and VXLAN. However, if you're experiencing limited performance with OVS on <insert your favorite Linux distro here>, I would suggest playing around with jumbo frames, starting from within the guest and working your way out to the physical interfaces.

For additional information, refer to the following:
1) Martin Casado's blog: http://networkheresy.com/2012/06/08/the-overhead-of-software-tunneling/
2) I posted something a bit less detailed (but with diagrams) to my blog earlier this week: http://brasstacksblog.typepad.com/brass-tacks/2012/08/network-virtualization-networkings-21st-century-equivalent-to-the-space-race.html -- specifically, the final three diagrams.

Erik

From: [email protected] [mailto:[email protected]] On Behalf Of David LE GOFF
Sent: Wednesday, August 29, 2012 9:16 AM
To: [email protected]
Subject: [nvo3] performance limitations with virtual switch as the nvo3 end point

Hi Folks,

Has anyone experienced performance limitations in the lab with the virtual switch function as the bottleneck when dealing with network overlays? I mean with the tunnel end point located on the hypervisor (virtual switch): setting up tagging, QoS, ACLs, encryption/decryption, etc. requires significant CPU. I know there is no official nvo3 implementation there yet, though vSphere 5 recently announced one with VXLAN, but if any studies have been done, I would be glad to read those.
I know STT has been built to overcome such challenges thanks to the NIC offload capabilities… These studies may also drive the brainstorming about which protocol we may use/build?

Thank you!

david le goff.

_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3

--
"Do not lie. And do not do what you hate."

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martin Casado
Nicira Networks, Inc.
www.nicira.com
cell: 650-776-1457
~~~~~~~~~~~~~~~~~~~~~~~~~~~
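[Editor's sketch] The trade-off Somesh and Jon debate above -- STT reuses stock TCP segmentation offload (TSO) on today's NICs, at the price of per-frame reassembly state in the receiver -- can be illustrated with a toy model. This is not the STT wire format; the frame IDs, MSS value, and `Reassembler` class below are illustrative assumptions only.

```python
# Toy model (NOT the STT wire format) of why TSO-style tunneling pushes
# reassembly state to the receiving tunnel endpoint: a large
# encapsulated frame is cut into MSS-sized segments on transmit, and
# the receiver must hold partial frames until every segment arrives.
# MSS, frame IDs, and this class layout are assumptions for illustration.

MSS = 1460  # typical TCP maximum segment size on a 1500-byte MTU link

def segment(frame_id: int, payload: bytes, mss: int = MSS):
    """Split one large tunneled frame into MSS-sized pieces, tagging
    each with (frame_id, offset), roughly the way TCP sequence numbers
    identify segments on the wire."""
    return [(frame_id, off, payload[off:off + mss])
            for off in range(0, len(payload), mss)]

class Reassembler:
    """Per-frame state the receiving endpoint must keep -- the state
    that is awkward to implement in a gateway ASIC."""
    def __init__(self):
        self.partial = {}  # frame_id -> {offset: chunk}

    def receive(self, frame_id, offset, data, total_len):
        chunks = self.partial.setdefault(frame_id, {})
        chunks[offset] = data
        if sum(len(c) for c in chunks.values()) == total_len:
            del self.partial[frame_id]  # frame complete, free the state
            return b"".join(chunks[o] for o in sorted(chunks))
        return None  # still waiting for more segments

frame = bytes(9000)          # e.g. a jumbo guest frame after encapsulation
segs = segment(42, frame)    # 7 segments at MSS = 1460
rx = Reassembler()
out = None
for fid, off, data in segs:
    out = rx.receive(fid, off, data, len(frame))
assert out == frame and not rx.partial
```

Because `receive` sorts by offset, the model also tolerates segments arriving out of order, which is part of why an x86 software endpoint handles this comfortably while a stateless forwarding ASIC does not.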
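[Editor's sketch] Erik's jumbo-frame suggestion comes down to simple arithmetic: the tunnel's per-packet header cost is fixed, so it shrinks as a fraction of each packet when the MTU grows. The 50-byte figure below is VXLAN's outer Ethernet + IPv4 + UDP + VXLAN headers; it is an illustrative assumption, and inner-header and Ethernet framing overheads are ignored.

```python
# Back-of-envelope arithmetic behind the jumbo-frame suggestion: fixed
# encapsulation overhead matters less as the MTU grows. The 50-byte
# figure is an assumption (outer Eth 14 + IPv4 20 + UDP 8 + VXLAN 8).

ENCAP_OVERHEAD = 50

def payload_efficiency(mtu: int, overhead: int = ENCAP_OVERHEAD) -> float:
    """Fraction of each wire packet that is tenant payload once the
    tunnel headers are paid for."""
    return (mtu - overhead) / mtu

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} of each packet is payload")
# -> MTU 1500: 96.7% of each packet is payload
# -> MTU 9000: 99.4% of each packet is payload
```

The efficiency gap is modest, but larger packets also mean far fewer per-packet trips through the software datapath, which is where the CPU cost David asks about is actually paid. On Linux the MTU is raised per interface (guest first, then outward), e.g. with iproute2: `ip link set dev eth0 mtu 9000`.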
