I only have one source for that. The TCP Cloud guys claim the NIC drivers aren't 
efficient, and a ten-gig link will top out around 4.9 Gb/s with VXLAN, whereas 
native Contrail encapsulation will reach 9.6 or so.
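For what it's worth, the VXLAN header tax alone can't explain a gap that size; a 4.9 vs 9.6 difference would have to come from the software/driver path (e.g. NIC offloads not applying to encapsulated frames), which fits the "drivers aren't efficient" claim. Back-of-envelope arithmetic (mine, not TCP Cloud's):

```python
# Rough check of pure VXLAN protocol overhead at standard MTU.
# VXLAN adds: outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN (8) bytes.
inner_frame = 1500 + 14            # inner payload MTU + inner Ethernet header
vxlan_overhead = 14 + 20 + 8 + 8   # outer Eth + outer IP + UDP + VXLAN = 50
efficiency = inner_frame / (inner_frame + vxlan_overhead)
print(f"{efficiency:.1%}")         # ~96.8% -- header overhead alone is ~3%
```

So if you really measure ~50% of line rate, the bottleneck is per-packet CPU cost, not the encapsulation bytes.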

They seemed experienced and convinced enough that it was worth listening to, 
all other things being equal. I'd prefer not to have to benchmark real-world 
scenarios with different overlays at the scale I'm working at, but if VXLAN is 
required for bare metal, I may be forced to.

The annoying part is that we'll be running QFX5100s, which supposedly support 
MPLS in the image.


On Wed, Sep 21, 2016 at 11:09 PM -0700, "Van Leeuwen, Robert" 
<rovanleeu...@ebay.com<mailto:rovanleeu...@ebay.com>> wrote:

> The documentation seems to suggest that the TOR Services Node requires VXLAN 
> encapsulation in order to support baremetal hosts. Am I reading that right? 
> We're looking at Contrail now, and the best information seems to suggest that 
> VXLAN encapsulation halves the effective line rate of your hosts, so I'd 
> prefer to go with MPLS/GRE.

I have no actual experience with integrating baremetal except for a POC setup 
some time ago, however:
* You configure in Contrail the order in which encapsulations should be used. 
It will then fall back to the lowest common denominator: any traffic going to 
bare metal will indeed be VXLAN-based, but traffic between vRouters will still 
be MPLS over GRE/UDP. Depending on how many bare-metal hosts you expect to 
integrate, it might not affect much of your traffic.
* I do not think the VXLAN protocol impact is as significant as you mentioned. 
BUM traffic will be handled less elegantly, but I am not aware of it impacting 
raw traffic speeds to such a high degree. If that is the case, I would 
certainly like to know more about why that would be.
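To make the first point concrete, here is a toy sketch (my own illustration, not Contrail's actual code) of the lowest-common-denominator selection: each side supports a set of encapsulations, the vRouter has a configured priority order, and the first mutually supported one wins.

```python
# Illustrative model of per-peer encapsulation selection -- not Contrail source.
def pick_encap(local_priority, peer_supported):
    """Return the first encap in our priority list that the peer also supports."""
    for encap in local_priority:
        if encap in peer_supported:
            return encap
    return None  # no common encapsulation

vrouter_priority = ["MPLSoUDP", "MPLSoGRE", "VXLAN"]  # example priority order

# vRouter-to-vRouter traffic keeps the preferred MPLS-based encap:
print(pick_encap(vrouter_priority, {"MPLSoUDP", "MPLSoGRE", "VXLAN"}))  # MPLSoUDP
# Traffic toward a VXLAN-only TOR falls back to VXLAN:
print(pick_encap(vrouter_priority, {"VXLAN"}))                          # VXLAN
```

So only the flows touching the bare-metal side pay the VXLAN cost; everything else stays on the preferred encapsulation.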

Robert van Leeuwen