" to be great you have to do things others would not"

=)

On Aug 30, 2012, at 10:33 AM, "Balus, Florin Stelian (Florin)" 
<[email protected]> wrote:

> Multiple types of service headers/encapsulations create issues for the usual 
> IETF (VPN) control plane; the VPN control plane was designed for a uniform 
> encapsulation at the VPN demux layer, although various tunneling options 
> could be used (MPLS/IP GRE)…
> We need some new thinking here in the IETF, but we are probably getting ahead 
> of ourselves. A good discussion to have after we sort out the data and 
> control plane requirement drafts…
>  
> From: [email protected] [mailto:[email protected]] On Behalf Of 
> AshwoodsmithPeter
> Sent: Thursday, August 30, 2012 10:21 AM
> To: Balus, Florin Stelian (Florin); Martin Casado; Ayandeh, Siamack
> Cc: David LE GOFF; [email protected]; smith, erik
> Subject: Re: [nvo3] performance limitations with virtual switch as the nvo3 
> end point
>  
> Agreed re Florin’s MPLS comments, and that suggests ‘Yes’ is a reasonable 
> answer to this question from Somesh:  “BTW, should the standard support 
> multiple encapsulation types? And should/can a single L2-CUG support multiple 
> encapsulation types?”
>  
> Peter
>  
> From: [email protected] [mailto:[email protected]] On Behalf Of 
> Balus, Florin Stelian (Florin)
> Sent: Thursday, August 30, 2012 1:09 PM
> To: Martin Casado; Ayandeh, Siamack
> Cc: smith, erik; [email protected]; David LE GOFF
> Subject: Re: [nvo3] performance limitations with virtual switch as the nvo3 
> end point
>  
> In-line…
>  
> From: [email protected] [mailto:[email protected]] On Behalf Of 
> Martin Casado
> Sent: Thursday, August 30, 2012 8:48 AM
> To: Ayandeh, Siamack
> Cc: David LE GOFF; [email protected]; smith, erik
> Subject: Re: [nvo3] performance limitations with virtual switch as the nvo3 
> end point
>  
> For the most part, the performance impact of drops is similar to TSO today 
> (partial coalescing can be done on the receive side).  Here is a relevant 
> snippet from 
> a discussion on this at 
> (http://networkheresy.com/2012/03/04/network-virtualization-encapsulation-and-stateless-tcp-transport-stt/)
> 
> " the semantics are very similar (to TSO) in that received packets can be 
> batched in consecutive sequences and passed to the guest as legitimate TCP 
> frames (just like TSO today).
> 
> However, with STT, the outer frame is what is segmented, whereas with other 
> tunneling protocols presumably it would be the inner TCP frame. There are 
> clear trade-offs between the two approaches. With STT, if the first packet 
> drops, then we’re hosed. On the other hand, segmenting the inner header (with 
> L2) would likely require duplicating the TCP header in each packet which 
> would be less efficient byte-for-byte."
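A rough Python sketch of the receive-side coalescing described in that snippet: consecutive byte ranges are batched into one large frame before being handed up, and losing the first segment leaves nothing deliverable. This is illustrative only; the names and structure are assumptions, not the OVS/STT implementation.

```python
def coalesce(segments):
    """Coalesce received segments of one logical STT-style frame.

    segments: list of (byte_offset, payload) tuples, possibly out of order.
    Returns the longest contiguous payload starting at offset 0; if the
    first segment is missing ("if the first packet drops, we're hosed"),
    nothing can be delivered and b'' is returned.
    """
    by_offset = dict(segments)
    out = bytearray()
    offset = 0
    # Walk forward through contiguous byte ranges only.
    while offset in by_offset:
        chunk = by_offset[offset]
        out += chunk
        offset += len(chunk)
    return bytes(out)
```

For example, `coalesce([(0, b"abc"), (3, b"def")])` yields the full `b"abcdef"`, while a list missing the offset-0 segment yields `b""`.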
> 
> Again, it's worth pointing out that STT was designed for use when x86 is on 
> both ends.  The header is 32 bit aligned, the spec isn't parsimonious with 
> bit sizes in header fields, and the fields are opaque based on the assumption 
> that they'll be interpreted by software on either side that is evolving 
> relatively quickly.  This makes sense in some environments, and not in 
> others.   
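The 32-bit alignment is what keeps such a header cheap to pack and parse in software. A minimal sketch, with field names and widths loosely modeled on the STT draft but assumed for illustration rather than copied verbatim from the spec:

```python
import struct

# Illustrative software-friendly tunnel header: every field sits on a
# natural boundary and the total length is a multiple of 4 bytes.
# Field layout is an assumption for this sketch, not the STT spec.
STT_FMT = "!BBBBHHQ"  # version, flags, l4_offset, reserved, mss, vlan, context_id

def pack_header(version, flags, l4_offset, mss, vlan, context_id):
    hdr = struct.pack(STT_FMT, version, flags, l4_offset, 0, mss, vlan, context_id)
    assert len(hdr) % 4 == 0  # 32-bit alignment keeps software parsing cheap
    return hdr

def unpack_header(hdr):
    version, flags, l4_offset, _reserved, mss, vlan, ctx = struct.unpack(STT_FMT, hdr)
    return {"version": version, "flags": flags, "l4_offset": l4_offset,
            "mss": mss, "vlan": vlan, "context_id": ctx}
```

Note how nothing here is bit-packed across byte boundaries; that is the "not parsimonious with bit sizes" trade-off when x86 software sits on both ends.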
> 
> From our (Nicira's) standpoint, using a more flexible encap makes sense when 
> we own both sides of the communication, since we are often evolving our 
> control plane (header bits are useful for all sorts of stuff: datapath state 
> versioning, multi-hop logical topologies, carrying additional information 
> like the logical input port or logical output port, etc.).  Also, it is 
> generally only deployed in the datacenter fabric, so abusing TCP isn't a huge 
> issue since no middleboxes should be en route.  For deployment environments 
> with middleboxes, GRE is clearly more suitable (and we support that too).
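One way those "header bits useful for all sorts of stuff" can work is to pack evolving soft state into an opaque context field that only the software on each end interprets. The layout below is invented purely for illustration and is not Nicira's actual encoding:

```python
# Hypothetical packing of controller-owned soft state into a 64-bit opaque
# context field: an 8-bit datapath version plus 24-bit logical input and
# output port IDs. This layout is an assumption made up for this sketch.
def pack_context(version, logical_inport, logical_outport):
    assert version < (1 << 8)
    assert logical_inport < (1 << 24) and logical_outport < (1 << 24)
    return (version << 48) | (logical_inport << 24) | logical_outport

def unpack_context(ctx):
    # Reverse the shifts/masks to recover the three fields.
    return ((ctx >> 48) & 0xFF, (ctx >> 24) & 0xFFFFFF, ctx & 0xFFFFFF)
```

Because the field is opaque to the wire format, the software at both ends can reassign these bits whenever the control plane evolves, without touching the encap spec.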
> 
> Of course, whenever an end point is an ASIC or a third party device we don't 
> control, clearly something like VXLAN or NVGRE is preferable.
> 
> [FB>] If we are to talk about what will work in the interim on a 
> gateway-optimized solution: you forgot the MPLS VPN encap, still one of the 
> encaps under consideration in NVO3.  It is ideal for an existing VPN/VLAN 
> gateway; VPN providers will definitely love it; it’s already standardized, 
> available in merchant silicon, and widely deployed on the WAN side in all the 
> Telco networks. And to strike a friendly note, OpenFlow/SDN can program it… :-)
> 
> In general, I think it is a good idea to decouple the control plane and the 
> encap so there is more flexibility to map the right technology to the right 
> deployment environment.
> 
> .m
> 
> On 8/30/12 7:25 AM, Ayandeh, Siamack wrote:
> Hi Erik,
>  
> Thanks for the post. Do you by any chance have any data on the impact of 
> packet loss on STT performance if the application is running TCP? Would the 
> application resend the entire segment?
>  
> Thanks,
>  
> Siamack
> From: [email protected] [mailto:[email protected]] On Behalf Of 
> smith, erik
> Sent: Wednesday, August 29, 2012 6:48 PM
> To: David LE GOFF; [email protected]
> Subject: Re: [nvo3] performance limitations with virtual switch as the nvo3 
> end point
>  
> Hi David, a few months ago we did some basic performance testing with OVS and 
> were pretty happy with the results.  For one reason or another we were under 
> the impression that using OVS to encap/decap would limit our total throughput 
> to 4-6 Gbps and this turned out to not be the case.  In our configuration, we 
> were able to demonstrate 20 Gbps over a bonded pair of 10GbE NICs using STT 
> for the overlay.  Our testing wasn’t exactly scientific but I also found an 
> interesting blog post by Martin Casado that our limited testing seems to 
> corroborate.
>  
> I haven’t done any testing with VMware and VXLAN.  However, if you’re 
> experiencing limited performance with OVS on <insert your favorite Linux 
> distro here>, I would suggest playing around with Jumbo frames (starting from 
> within the guest) and working your way out to the physical interfaces.   
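When sizing jumbo frames from the guest outward, the guest MTU has to leave headroom for the encapsulation added at the virtual switch. A back-of-the-envelope sketch, using common textbook overhead figures that may differ in a given deployment:

```python
# Illustrative encap overhead in bytes added to each guest frame
# (values are typical textbook figures, not measured from any product):
ENCAP_OVERHEAD = {
    "vxlan": 14 + 20 + 8 + 8,  # inner Ethernet + outer IPv4 + UDP + VXLAN
    "gre":   14 + 20 + 4,      # inner Ethernet + outer IPv4 + base GRE
}

def max_guest_mtu(physical_mtu, encap):
    """Largest guest MTU that still fits in one physical frame after encap."""
    return physical_mtu - ENCAP_OVERHEAD[encap]
```

So with 9000-byte jumbo frames on the physical NICs, a VXLAN overlay leaves roughly `max_guest_mtu(9000, "vxlan")` = 8950 bytes for the guest, which is why tuning has to start inside the guest and work outward.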
>  
> For additional information, refer to the following:
> 1)      Martin Casado’s blog: ( 
> http://networkheresy.com/2012/06/08/the-overhead-of-software-tunneling/ ) 
> 2)      I posted something to my blog a bit less detailed (but with diagrams) 
> earlier this week ( 
> http://brasstacksblog.typepad.com/brass-tacks/2012/08/network-virtualization-networkings-21st-century-equivalent-to-the-space-race.html
>  )  Specifically, the final three diagrams..
>  
> Erik
>  
> From: [email protected] [mailto:[email protected]] On Behalf Of David 
> LE GOFF
> Sent: Wednesday, August 29, 2012 9:16 AM
> To: [email protected]
> Subject: [nvo3] performance limitations with virtual switch as the nvo3 end 
> point
>  
> Hi Folks,
>  
> Has anyone experienced performance limitations in the lab, with the virtual 
> switch function as the bottleneck, when dealing with network overlays?
> I mean that with the tunnel end point located on the hypervisor (virtual 
> switch), setting up tagging, QoS, ACLs, encryption/decryption, etc. requires 
> significant CPU resources.
> 
> I know there is no official nvo3 implementation yet, though vSphere 5 
> recently announced VXLAN support, but if any studies have been done, I would 
> be glad to read them.
> I know STT has been built to overcome such challenges thanks to the NIC 
> offload capabilities…
>  
> Such studies may also drive the brainstorming about which protocol we should 
> use or build.
>  
> Thank you!
>  
> david le goff.
>  
> 
> _______________________________________________
> nvo3 mailing list
> [email protected]
> https://www.ietf.org/mailman/listinfo/nvo3
>  
> 
> -- 
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Martin Casado
> Nicira Networks, Inc.
> www.nicira.com
> cell: 650-776-1457
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
