Hi Martin,

I agree that an x86-based gateway is required if we are to use a NIC-based
encapsulation mechanism such as STT.

As a follow-up to Linda's question and what you mentioned earlier, do you see
two kinds of encapsulation running in the DC at the same time to avoid a gateway?

1. One for server-to-server communication, which can take advantage of NIC offloads.
2. Another for server-to-host, which can work across the WAN.

Thanks,
Vishwas

On Tue, Sep 4, 2012 at 9:39 AM, Martin Casado <[email protected]> wrote:

> "Communication" was the wrong word, sorry.  What I meant was that we often
> own both edges with respect to encap/decap.  Those edges are typically
> vswitches within the hypervisor or gateways.
>
> .m
>
> From: Linda Dunbar <[email protected]>
> Date: Tuesday, September 4, 2012 9:20 AM
> To: Martin Casado <[email protected]>, "Ayandeh, Siamack" <
> [email protected]>
> Cc: David LE GOFF <[email protected]>, "[email protected]" <[email protected]>,
> "smith, erik (EMC)" <[email protected]>
> Subject: RE: [nvo3] performance limitations with virtual switch as the
> nvo3 end point
>
> Martin,
>
> When you say that “we own both sides of the communication”, how do you
> classify the communication between VMs hosted by your servers and peers
> that are hosted elsewhere, or Internet users?
>
> Linda Dunbar
>
> *From:* [email protected] 
> [mailto:[email protected]<[email protected]>]
> *On Behalf Of *Martin Casado
> *Sent:* Thursday, August 30, 2012 10:48 AM
> *To:* Ayandeh, Siamack
> *Cc:* David LE GOFF; [email protected]; smith, erik
> *Subject:* Re: [nvo3] performance limitations with virtual switch as the
> nvo3 end point****
>
> From our (Nicira's) standpoint, using a more flexible encap makes sense
> when we own both sides of the communication, since we are often
> evolving our control plane (header bits are useful for all sorts of things:
> datapath state versioning, multi-hop logical topologies, carrying
> additional information like the logical input port or logical output port,
> etc.).  Also, it is generally only deployed in the datacenter fabric, so
> abusing TCP isn't a huge issue since no middleboxes should be in the path.
> For deployment environments with middleboxes, GRE is clearly more suitable
> (and we support that too).
>
> Of course, whenever an endpoint is an ASIC or a third-party device we
> don't control, something like VXLAN or NVGRE is clearly preferable.
>
> In general, I think it is a good idea to decouple the control plane and
> the encap so there is more flexibility to map the right technology to the
> right deployment environment.
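>
> For illustration only, a toy sketch (in Python) of the kind of per-path
> mapping I mean; the property names and rules below are made-up
> simplifications, not how any actual control plane decides:
>
>     # toy encap selector: map deployment properties to a tunnel type
>     def pick_encap(software_vswitch_both_ends: bool,
>                    middlebox_on_path: bool,
>                    hw_or_third_party_endpoint: bool) -> str:
>         if hw_or_third_party_endpoint:
>             return "vxlan"  # or NVGRE: standard framing third parties can parse
>         if middlebox_on_path:
>             return "gre"    # doesn't abuse TCP, so middleboxes leave it alone
>         if software_vswitch_both_ends:
>             return "stt"    # both edges are ours; NIC TSO offloads apply
>         return "gre"
>
>     # intra-fabric, vswitch-to-vswitch path:
>     print(pick_encap(True, False, False))  # -> "stt"
>     # path that crosses a firewall on the way to a gateway:
>     print(pick_encap(True, True, False))   # -> "gre"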
>
> .m
>
> On 8/30/12 7:25 AM, Ayandeh, Siamack wrote:
>
> Hi Erik,
>
> Thanks for the post. Do you by any chance have any data on the impact of
> packet loss on STT performance if the application is running TCP? Would the
> application resend the entire segment?
>
> Thanks,
>
> Siamack
>
> *From:*[email protected] 
> [mailto:[email protected]<[email protected]>]
> *On Behalf Of *smith, erik
> *Sent:* Wednesday, August 29, 2012 6:48 PM
> *To:* David LE GOFF; [email protected]
> *Subject:* Re: [nvo3] performance limitations with virtual switch as the
> nvo3 end point****
>
> Hi David, a few months ago we did some basic performance testing with OVS
> and were pretty happy with the results.  For one reason or another we were
> under the impression that using OVS to encap/decap would limit our total
> throughput to 4-6 Gbps, and this turned out not to be the case.  In our
> configuration, we were able to demonstrate 20 Gbps over a bonded pair of
> 10GbE NICs using STT for the overlay.  Our testing wasn’t exactly
> scientific, but I also found an interesting blog post by Martin Casado that
> our limited testing seems to corroborate.
>
> I haven’t done any testing with VMware and VXLAN.  However, if you’re
> experiencing limited performance with OVS on <insert your favorite Linux
> distro here>, I would suggest playing around with jumbo frames (starting
> from within the guest) and working your way out to the physical interfaces.
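>
> As a rough way to think about the sizing (the 50-byte figure below is the
> usual VXLAN-over-IPv4 overhead; other encaps differ, so treat the numbers
> as examples only):
>
>     # the guest MTU has to leave room for the outer encapsulation headers
>     def guest_mtu(physical_mtu: int, encap_overhead: int) -> int:
>         return physical_mtu - encap_overhead
>
>     print(guest_mtu(9000, 50))  # jumbo-frame fabric   -> 8950
>     print(guest_mtu(1500, 50))  # standard 1500 fabric -> 1450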
>
> For additional information, refer to the following:
>
> 1) Martin Casado’s blog:
> http://networkheresy.com/2012/06/08/the-overhead-of-software-tunneling/
>
> 2) I posted something a bit less detailed (but with diagrams) to my blog
> earlier this week:
> http://brasstacksblog.typepad.com/brass-tacks/2012/08/network-virtualization-networkings-21st-century-equivalent-to-the-space-race.html
> Specifically, the final three diagrams.
>
> Erik
>
> *From:*[email protected] 
> [mailto:[email protected]<[email protected]>]
> *On Behalf Of *David LE GOFF
> *Sent:* Wednesday, August 29, 2012 9:16 AM
> *To:* [email protected]
> *Subject:* [nvo3] performance limitations with virtual switch as the nvo3
> end point****
>
> Hi Folks,
>
> Has anyone experienced performance limitations in the lab with the
> virtual switch function as the bottleneck when dealing with network
> overlays?
>
> I mean that with the tunnel end point located on the hypervisor (virtual
> switch), setting up tagging, QoS, ACLs, encryption/decryption, etc. requires
> significant CPU.
>
> I know there is no official nvo3 implementation yet, though vSphere 5
> recently announced support for VXLAN; if by any chance some studies have
> been done, I would be glad to read them.
>
> I know STT has been built to overcome such challenges thanks to NIC
> offload capabilities…
>
> These studies may also drive the brainstorming about which protocol we
> should use or build.
>
> Thank you!
>
> david le goff.
>
> --
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Martin Casado
> Nicira Networks, Inc.
> www.nicira.com
> cell: 650-776-1457
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3
