Pedro,

> Can you please describe an example of how you could set up such
> straightforward routing, assuming two Hosts belong to different "CUGs" such
> that these can be randomly spread across the DC? My question is where is the
> "gateway", how is it provisioned and how can traffic paths be guaranteed to
> be optimal.

Ok, I see your point - the routing functionality is straightforward to move
over, but ensuring optimal pathing is significantly more work, as noted in
another one of your messages:

> Conceptually, that means that the functionality of the "gateway" should be
> implemented at the overlay ingress and egress points, rather than requiring
> a mid-box.
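
To make that concrete, here is a minimal sketch (the class, names, and the
mapping/policy shapes are all invented for illustration, not any proposed nvo3
mechanism) of an ingress NVE performing the inter-VN "gateway" function
itself: it resolves the inner destination IP to a (VN, egress NVE) pair and
applies inter-VN policy locally, so permitted traffic takes the direct
one-hop overlay path to the egress NVE with no mid-box:

```python
# Hypothetical sketch: inter-VN routing decided at the overlay ingress.
class NVE:
    def __init__(self, name, mapping, policy):
        self.name = name
        self.mapping = mapping   # inner dst IP -> (VN, egress NVE)
        self.policy = policy     # allowed (src VN, dst VN) pairs

    def forward(self, dst_ip, src_vn):
        """Return the egress NVE for dst_ip, or None if policy denies it."""
        dst_vn, egress = self.mapping[dst_ip]
        # The inter-VN "gateway" check happens here, at the ingress,
        # rather than at a separate router mid-box on the path.
        if dst_vn != src_vn and (src_vn, dst_vn) not in self.policy:
            return None
        return egress

mapping = {"10.0.1.5": ("VN-A", "nve2"), "10.0.2.9": ("VN-B", "nve3")}
policy = {("VN-A", "VN-B")}          # VN-A may reach VN-B, not vice versa
nve1 = NVE("nve1", mapping, policy)

print(nve1.forward("10.0.2.9", "VN-A"))  # nve3: direct overlay hop
print(nve1.forward("10.0.1.5", "VN-B"))  # None: no reverse policy
```

Paths stay optimal because the forwarding decision is made where the traffic
enters the overlay; the hard part, as you note, is provisioning that mapping
and policy consistently across all NVEs.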

Thanks,
--David


> -----Original Message-----
> From: Pedro Roque Marques [mailto:[email protected]]
> Sent: Friday, June 29, 2012 7:38 PM
> To: Black, David
> Cc: [email protected]; [email protected]
> Subject: Re: [nvo3] inter-CUG traffic [was Re: call for adoption: draft-
> narten-nvo3-overlay-problem-statement-02]
> 
> 
> On Jun 29, 2012, at 4:02 PM, <[email protected]> wrote:
> 
> >> There is an underlying assumption in NVO3 that isolating tenants from
> >> each other is a key reason to use overlays. If 90% of the traffic is
> >> actually between different tenants, it is not immediately clear to me
> >> why one has set up a system with a lot of "inter tenant" traffic. Is
> >> this a case we need to focus on optimizing?
> >
> > A single tenant may have multiple virtual networks with routing used to
> > provide/control access among them.  The crucial thing is to avoid assuming
> > that a tenant or other administrative entity has a single virtual network
> > (or CUG in Pedro's email).  For example, consider moving a portion of
> > a single data center that uses multiple VLANs and routers to selectively
> > connect them into an nvo3 environment - each VLAN gets turned into a virtual
> > network, and the routers now route among virtual networks instead of VLANs.
> >
> > One of the things that's been pointed out to me in private is that the level
> > of importance that one places on routing across virtual networks may depend
> > on one's background.  If one is familiar with VLANs and views nvo3 overlays
> > as providing VLAN-like functionality, IP routing among virtual networks is a
> > straightforward application of IP routing among VLANs (e.g., the previous
> > mention of L2/L3 IRB functionality that is common in data center network
> > switches).
> 
> Can you please describe an example of how you could set up such
> straightforward routing, assuming two Hosts belong to different "CUGs" such
> that these can be randomly spread across the DC? My question is where is the
> "gateway", how is it provisioned and how can traffic paths be guaranteed to
> be optimal.
> 
> >  OTOH, if one is familiar with VPNs where access among
> > otherwise-closed groups has to be explicitly configured, particularly
> > L3 VPNs where one cannot look to L2 to help with grouping the end systems,
> > this sort of cross-group access can be a significant area of functionality.
> 
> Considering that in a VPN one can achieve inter-CUG traffic exchange without
> a gateway in the middle via policy, it is unclear why you suggest that "look
> to L2" would help.
> 
> >
> > Thanks,
> > --David
> >
> >> -----Original Message-----
> >> From: [email protected] [mailto:[email protected]] On Behalf Of
> Thomas
> >> Narten
> >> Sent: Friday, June 29, 2012 5:56 PM
> >> To: Pedro Roque Marques
> >> Cc: [email protected]
> >> Subject: [nvo3] inter-CUG traffic [was Re: call for adoption: draft-narten-
> >> nvo3-overlay-problem-statement-02]
> >>
> >> Pedro Roque Marques <[email protected]> writes:
> >>
> >>> I object to the document on the following points:
> >>>
> >>> 3) Does not discuss the requirements for inter-CUG traffic.
> >>
> >> Given that the problem statement is not supposed to be the
> >> requirements document, what exactly should the problem statement say
> >> about this topic?
> >>
> >> <[email protected]> writes:
> >>
> >>> Inter-VN traffic (what you refer to as inter-CUG traffic) is handled
> >>> by a straightforward application of IP routing to the inner IP
> >>> headers; this is similar to the well-understood application of IP
> >>> routing to forward traffic across VLANs.  We should talk about VRFs
> >>> as something other than a limitation of current approaches - for
> >>> VLANs, VRFs (separate instances of routing) are definitely a
> >>> feature, and I expect this to carry forward to nvo3 VNs.  In
> >>> addition, we need to make changes to address Dimitri's comments
> >>> about problems with the current VRF text.
> >>
> >> Pedro Roque Marques <[email protected]> writes:
> >>
> >>> That is where again the differences between different types of
> >>> data-centers do play in. If for instance 90% of a VM's traffic
> >>> happens to be between the Host OS and a network attached storage
> >>> file system run as-a-Service (with the appropriate multi-tenant
> >>> support) then the question of where the routers are becomes a very
> >>> important issue. In a large scale data-center where the Host VM and
> >>> the CPU that hosts the filesystem block can be randomly spread,
> >>> where is the router?
> >>
> >> Where is what router? Are you assuming the Host OS and NAS are in
> >> different VNs? And hence, traffic has to (at least conceptually) exit
> >> one VN and reenter another whenever there is Host OS - NAS traffic?
> >>
> >>> Is every switch a router? Does it have all the CUGs present?
> >>
> >> The underlay can be a mixture of switches and routers... that is not
> >> our concern. So long as the underlay delivers traffic sourced by an
> >> ingress NVE to the appropriate egress NVE, we are good.
> >>
> >> If there are issues with the actual path taken being suboptimal in
> >> some sense, that is an underlay problem to solve, not for the overlay.
> >>
> >>> In some DC designs the problem to solve is the inter-CUG
> >>> traffic. With L2 headers being totally irrelevant.
> >>
> >> There is an underlying assumption in NVO3 that isolating tenants from
> >> each other is a key reason to use overlays. If 90% of the traffic is
> >> actually between different tenants, it is not immediately clear to me
> >> why one has set up a system with a lot of "inter tenant" traffic. Is
> >> this a case we need to focus on optimizing?
> >>
> >> But in any case, if one does have inter-VN traffic, that will have to
> >> get funneled through a "gateway" between VNs, at least conceptually. I
> >> would assume that an implementation of overlays would provide at least
> >> one, and likely more, such gateways on each VN. How many and where to
> >> place them will presumably depend on many factors but would be done
> >> based on traffic patterns and network layout. I would not think every
> >> NVE has to provide such functionality.
> >>
> >> What do you propose needs saying in the problem statement about that?
> >>
> >> Thomas
> >>
> >> _______________________________________________
> >> nvo3 mailing list
> >> [email protected]
> >> https://www.ietf.org/mailman/listinfo/nvo3
> >
> 
