Pedro,

An excellent email.  Thanks for pointing out some of the real issues.

John

Sent from my iPhone


>-----Original Message-----
>From: [email protected] [mailto:[email protected]] On Behalf Of
>Pedro Roque Marques
>Sent: Friday, June 29, 2012 4:29 PM
>To: Thomas Narten
>Cc: [email protected]
>Subject: Re: [nvo3] inter-CUG traffic [was Re: call for adoption: draft-
>narten-nvo3-overlay-problem-statement-02]
>
>
>On Jun 29, 2012, at 2:55 PM, Thomas Narten wrote:
>
>> Pedro Roque Marques <[email protected]> writes:
>>
>>> I object to the document on the following points:
>>>
>>> 3) Does not discuss the requirements for inter-CUG traffic.
>>
>> Given that the problem statement is not supposed to be the
>> requirements document, what exactly should the problem statement say
>> about this topic?
>
>The problem statement does not acknowledge that in a very relevant
>segment of DC designs most traffic crosses subnets/CUGs. Since in this
>type of data-center the elements of a subnet can be anywhere, the
>assumption that there is a router that interconnects the subnets is not
>reasonable.
>Thus the problem statement misses a fundamental part of the problem.
>
>>
>> <[email protected]> writes:
>>
>>> Inter-VN traffic (what you refer to as inter-CUG traffic) is handled
>>> by a straightforward application of IP routing to the inner IP
>>> headers; this is similar to the well-understood application of IP
>>> routing to forward traffic across VLANs.  We should talk about VRFs
>>> as something other than a limitation of current approaches - for
>>> VLANs, VRFs (separate instances of routing) are definitely a feature,
>>> and I expect this to carry forward to nvo3 VNs.  In addition, we need
>>> to make changes to address Dimitri's comments about problems with the
>>> current VRF text.
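>
>To make that mechanism concrete: each NVE would hold one routing table
>(VRF) per VN and apply a longest-prefix match to the inner IP
>destination to pick the egress NVE. A minimal sketch of that lookup
>(Python; the VN IDs, prefixes, and NVE names are purely illustrative):
>
>    import ipaddress
>
>    # One VRF (routing instance) per virtual network, keyed by VN ID.
>    # Each VRF maps an inner IP prefix to the egress NVE serving it;
>    # inter-VN reachability would show up here as prefixes leaked
>    # from other VNs' VRFs.
>    vrfs = {
>        10: {
>            ipaddress.ip_network("192.0.2.0/24"): "nve-a",
>            ipaddress.ip_network("198.51.100.0/24"): "nve-b",
>        },
>    }
>
>    def route_inner(vn_id, inner_dst):
>        """Longest-prefix match on the inner IP header within one VRF."""
>        dst = ipaddress.ip_address(inner_dst)
>        vrf = vrfs[vn_id]
>        matches = [p for p in vrf if dst in p]
>        if not matches:
>            raise LookupError("no route for %s in VN %s" % (dst, vn_id))
>        return vrf[max(matches, key=lambda p: p.prefixlen)]
>
>    # route_inner(10, "198.51.100.7") -> "nve-b"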
>>
>> Pedro Roque Marques <[email protected]> writes:
>>
>>> That is where the differences between different types of
>>> data-centers again come into play. If, for instance, 90% of a VM's
>>> traffic happens to be between the Host OS and a network-attached
>>> storage file system run as-a-Service (with the appropriate
>>> multi-tenant support), then the question of where the routers are
>>> becomes very important. In a large-scale data-center where the Host
>>> VM and the CPU that hosts the filesystem block can be spread
>>> randomly, where is the router?
>>
>> Where is what router? Are you assuming the Host OS and NAS are in
>> different VNs? And hence, traffic has to (at least conceptually) exit
>> one VN and reenter another whenever there is Host OS - NAS traffic?
>>
>>> Is every switch a router? Does it have all the CUGs present?
>>
>> The underlay can be a mixture of switches and routers... that is not
>> our concern. So long as the underlay delivers traffic sourced by an
>> ingress NVE to the appropriate egress NVE, we are good.
>>
>> If there are issues with the actual path taken being suboptimal in
>> some sense, that is an underlay problem to solve, not for the overlay.
>
>That statement misses the "problem". Making the actual path optimal is
>a first-order problem to solve, and it is more important than other
>aspects currently discussed in the document.
>This problem cannot be delegated to the "underlay".
>
>>
>>> In some DC designs the problem to solve is inter-CUG traffic, with
>>> L2 headers being totally irrelevant.
>>
>> There is an underlying assumption in NVO3 that isolating tenants from
>> each other is a key reason to use overlays. If 90% of the traffic is
>> actually between different tenants, it is not immediately clear to me
>> why one would set up a system with a lot of "inter-tenant" traffic.
>
>As I pointed out, it is very common to run database or file-system
>services for multiple tenants, and as you can imagine these tend to
>dominate the traffic.
>A "subnet"/CUG is not necessarily a tenant. It can be an application
>tier. That is standard practice in several DC deployments and is
>supported, for instance, by Amazon's VPC service.
>
>> Is this a case we need to focus on optimizing?
>
>It is more important than intra-subnet/CUG traffic, and the problem
>statement does not reflect this.
>
>>
>> But in any case, if one does have inter-VN traffic, it will have to
>> be funneled through a "gateway" between VNs, at least conceptually. I
>> would assume that an implementation of overlays would provide at
>> least one, and likely more, such gateways on each VN. How many and
>> where to place them will presumably depend on many factors, but would
>> be decided based on traffic patterns and network layout. I would not
>> expect every NVE to have to provide such functionality.
>>
>> What do you propose needs saying in the problem statement about that?
>
>I propose that a problem statement that focuses on large-scale DCs
>(which should probably be a separate document from the one for small
>scale) should clearly state up front that most of the traffic crosses
>CUGs, and that a solution should be able to provide optimal inter-CUG
>forwarding without packets crossing the "DC underlay" multiple times.
>
>Conceptually, that means that the functionality of the "gateway" should
>be implemented at the overlay ingress and egress points, rather than
>requiring a mid-box. Whenever a mid-box is present, it becomes a
>scaling and provisioning nightmare. This is relevant in DCs where the
>compute load can be spread beyond a single rack (i.e., beyond the old
>aggregation-switch boundary).
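>
>A toy way to see the cost difference: with a mid-box "gateway" the
>same packet transits the DC underlay twice (ingress NVE to gateway,
>then gateway to egress NVE), while with the routing function at the
>ingress NVE it transits once. A sketch (Python; the NVE and gateway
>names are made up):
>
>    def underlay_transits(src_nve, dst_nve, midbox=None):
>        """Tunnel legs one inter-CUG packet takes across the underlay."""
>        if midbox is None:
>            # gateway function at the ingress NVE: route, then one tunnel
>            legs = [(src_nve, dst_nve)]
>        else:
>            # mid-box gateway: tunnel to the gateway, route, tunnel again
>            legs = [(src_nve, midbox), (midbox, dst_nve)]
>        return legs
>
>    # len(underlay_transits("nve-a", "nve-b", "gw-1")) -> 2
>    # len(underlay_transits("nve-a", "nve-b"))         -> 1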
>
>> Thomas
>>
>
_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3
