> There is an underlying assumption in NVO3 that isolating tenants from
> each other is a key reason to use overlays. If 90% of the traffic is
> actually between different tenants, it is not immediately clear to me
> why one has set up a system with a lot of "inter tenant" traffic. Is
> this a case we need to focus on optimizing?

A single tenant may have multiple virtual networks with routing used to
provide/control access among them.  The crucial thing is to avoid assuming
that a tenant or other administrative entity has a single virtual network
(or CUG in Pedro's email).  For example, consider moving into an nvo3
environment a portion of a data center that uses multiple VLANs with routers
selectively connecting them - each VLAN gets turned into a virtual network,
and the routers now route among virtual networks instead of VLANs.
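As a rough illustration only (the names and numbers below are made up, not
taken from the draft or any product), the mapping might be sketched as:

    # Hypothetical sketch: former VLANs become virtual networks (VNs), and
    # the routing function that used to route among VLANs now holds an
    # interface in each VN it connects.
    vlan_to_vn = {
        100: "vn-app",       # was VLAN 100
        200: "vn-db",        # was VLAN 200
        300: "vn-storage",   # was VLAN 300
    }

    # One routing instance with an interface (and subnet) per VN.
    router_interfaces = {
        "vn-app":     "10.1.0.1/24",
        "vn-db":      "10.2.0.1/24",
        "vn-storage": "10.3.0.1/24",
    }

    def egress_vn(dst_ip):
        """Pick the egress VN for a destination (crude second-octet match,
        standing in for a real longest-prefix lookup)."""
        for vn, addr in router_interfaces.items():
            if dst_ip.split(".")[1] == addr.split(".")[1]:
                return vn
        return None  # no route: cross-VN traffic stays blocked by default

    # egress_vn("10.2.0.42") -> "vn-db"

The point is only that the routing step moves along with the VLANs; nothing
about the overlay itself has to change.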

One of the things that's been pointed out to me in private is that the level
of importance that one places on routing across virtual networks may depend
on one's background.  If one is familiar with VLANs and views nvo3 overlays
as providing VLAN-like functionality, IP routing among virtual networks is a
straightforward application of IP routing among VLANs (e.g., the previous
mention of L2/L3 IRB functionality that is common in data center network
switches).  OTOH, if one is familiar with VPNs where access among
otherwise-closed groups has to be explicitly configured, particularly
L3 VPNs where one cannot look to L2 to help with grouping the end systems,
this sort of cross-group access can be a significant area of functionality.
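Again purely as a sketch (hypothetical names, not proposed text), the
VPN-style model looks less like IRB and more like explicit per-VN policy:

    # Hypothetical sketch: access between otherwise-closed groups exists
    # only where it has been explicitly configured, e.g. an import policy
    # per VN.
    import_policy = {
        "vn-app":     {"vn-db"},   # app tier may reach the database VN
        "vn-db":      {"vn-app"},
        "vn-storage": set(),       # closed group: no cross-VN access at all
    }

    def cross_vn_allowed(src_vn, dst_vn):
        """Cross-group forwarding happens only where configured."""
        return src_vn == dst_vn or dst_vn in import_policy.get(src_vn, set())

    # cross_vn_allowed("vn-app", "vn-db")      -> True
    # cross_vn_allowed("vn-app", "vn-storage") -> False

In the VLAN/IRB view the equivalent reachability tends to fall out of the
routing configuration; in the VPN view it is itself the configuration, which
is why it looks like a significant area of functionality from that angle.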

Thanks,
--David

> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of Thomas
> Narten
> Sent: Friday, June 29, 2012 5:56 PM
> To: Pedro Roque Marques
> Cc: [email protected]
> Subject: [nvo3] inter-CUG traffic [was Re: call for adoption: draft-narten-
> nvo3-overlay-problem-statement-02]
> 
> Pedro Roque Marques <[email protected]> writes:
> 
> > I object to the document on the following points:
> >
> > 3) Does not discuss the requirements for inter-CUG traffic.
> 
> Given that the problem statement is not supposed to be the
> requirements document, what exactly should the problem statement say
> about this topic?
> 
> <[email protected]> writes:
> 
> > Inter-VN traffic (what you refer to as inter-CUG traffic) is handled
> > by a straightforward application of IP routing to the inner IP
> > headers; this is similar to the well-understood application of IP
> > routing to forward traffic across VLANs.  We should talk about VRFs
> > as something other than a limitation of current approaches - for
> > VLANs, VRFs (separate instances of routing) are definitely a
> > feature, and I expect this to carry forward to nvo3 VNs.  In
> > addition, we need to make changes to address Dimitri's comments
> > about problems with the current VRF text.
> 
> Pedro Roque Marques <[email protected]> writes:
> 
> > That is where again the differences between different types of
> >  data-centers do play in. If for instance 90% of a VM's traffic
> >  happens to be between the Host OS and a network attached storage
> >  file system run as-a-Service (with the appropriate multi-tenant
> >  support) then the question of where are the routers becomes a very
> >  important issue. In a large scale data-center where the Host VM and
> >  the CPU that hosts the filesystem block can be randomly spread,
> >  where is the router?
> 
> Where is what router? Are you assuming the Host OS and NAS are in
> different VNs? And hence, traffic has to (at least conceptually) exit
> one VN and reenter another whenever there is HostOS - NAS traffic?
> 
> > Is every switch a router? Does it have all the CUGs present?
> 
> The underlay can be a mixture of switches and routers... that is not
> our concern. So long as the underlay delivers traffic sourced by an
> ingress NVE to the appropriate egress NVE, we are good.
> 
> If there are issues with the actual path taken being suboptimal in
> some sense, that is an underlay problem to solve, not for the overlay.
> 
> > In some DC designs the problem to solve is the inter-CUG
> > traffic, with L2 headers being totally irrelevant.
> 
> There is an underlying assumption in NVO3 that isolating tenants from
> each other is a key reason to use overlays. If 90% of the traffic is
> actually between different tenants, it is not immediately clear to me
> why one has set up a system with a lot of "inter tenant" traffic. Is
> this a case we need to focus on optimizing?
> 
> But in any case, if one does have inter-VN traffic, that will have to
> get funneled through a "gateway" between VNs, at least conceptually. I
> would assume that an implementation of overlays would provide at least
> one, and likely more such gateways on each VN. How many and where to
> place them will presumably depend on many factors but would be done
> based on traffic patterns and network layout. I would not think every
> NVE has to provide such functionality.
> 
> What do you propose needs saying in the problem statement about that?
> 
> Thomas
> 
> _______________________________________________
> nvo3 mailing list
> [email protected]
> https://www.ietf.org/mailman/listinfo/nvo3
