>
> Many data centers' web servers are in a different subnet than other
> applications (e.g. DB servers). The web server applications frequently
> communicate with peers in the same DC, which are usually in different
> subnets.
>
> What kinds of DCs have only intra-subnet communication?

A large Hadoop/MapReduce cluster will not bother about subnet boundaries,
and its traffic requirements can be huge. Thousands of enterprise
applications behave the same way, since they assume they are running in a
protected environment.
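
To give a sense of the scale involved, here is a back-of-the-envelope
sketch in Python (all numbers are hypothetical, chosen only for
illustration):

    # Rough MapReduce shuffle volume: every mapper sends a partition to
    # every reducer, regardless of which subnet either task landed on,
    # since task placement is effectively random.
    mappers, reducers = 2000, 500
    partition_mb = 64              # avg map output per (mapper, reducer) pair
    shuffle_tb = mappers * reducers * partition_mb / 1e6
    print(f"one job's shuffle: ~{shuffle_tb:.0f} TB")  # ~64 TB

    # With tasks spread uniformly over, say, 40 subnets, only ~1/40 of
    # those transfers stay inside a single subnet.
    subnets = 40
    print(f"intra-subnet share: ~{1 / subnets:.1%}")   # ~2.5%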

I agree with Joel's earlier statement:

>  I have seen descriptions of data centers for which it is almost "all",
>  and other data centers for which it is negligible.

Dimitri


>
> Linda
> > -----Original Message-----
> > From: Joel M. Halpern [mailto:[email protected]]
> > Sent: Monday, July 09, 2012 3:00 PM
> > To: Linda Dunbar
> > Cc: Pedro Roque Marques; [email protected]; [email protected];
> > [email protected]
> > Subject: Re: [nvo3] inter-CUG traffic [was Re: call for adoption:
> > draft-narten-nvo3-overlay-problem-statement-02]
> >
> > I have not seen data to suggest "most".  Do you have a source for that?
> >   I have seen descriptions of data centers for which it is almost "all",
> > and other data centers for which it is negligible.
> > Yours,
> > Joel
> >
> > On 7/9/2012 3:51 PM, Linda Dunbar wrote:
> > > Joel,
> > >
> > > I agree with you that VPN-related WGs (e.g. L2VPN, L3VPN) may not
> > > need to address cross-VN communication in much depth.
> > >
> > > But most traffic in data centers crosses subnets. Given that NVO3
> > > is for identifying issues associated with data centers, I think
> > > that cross-subnet traffic should not be ignored.
> > >
> > > Linda
> > >
> > >> -----Original Message-----
> > >> From: [email protected] [mailto:[email protected]]
> > >> On Behalf Of Joel M. Halpern
> > >> Sent: Friday, June 29, 2012 9:32 PM
> > >> To: Pedro Roque Marques
> > >> Cc: [email protected]; [email protected]; [email protected]
> > >> Subject: Re: [nvo3] inter-CUG traffic [was Re: call for adoption:
> > >> draft-narten-nvo3-overlay-problem-statement-02]
> > >>
> > >> I would not be so bold as to insist that all deployments can safely
> > >> ignore inter-VPN intra-data-center traffic.  But there are MANY
> > >> cases where that is not an important part of the traffic mix.
> > >> So I was urging that we not mandate optimal inter-subnet routing as
> > >> part of the NVO3 requirements.
> > >> I would not want to prohibit it either, as there are definitely
> > >> cases where it matters, some along the lines you allude to.
> > >>
> > >> Yours,
> > >> Joel
> > >>
> > >> On 6/29/2012 9:40 PM, Pedro Roque Marques wrote:
> > >>> Joel,
> > >>> A very common model currently is to have a 3-tier app where each
> > >>> tier is in its own VLAN. You will find that web servers, for
> > >>> instance, don't actually talk much to each other... although they
> > >>> are on the same VLAN, 100% of their traffic goes outside the VLAN.
> > >>> A very similar story applies to the app-logic tier. The database
> > >>> tier may have some replication traffic within its VLAN, but
> > >>> hopefully that is less than the requests that it serves.
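
[To make the estimate above concrete: a minimal sketch of how one could
measure the inter-subnet share from flow records. The subnet plan and
byte counts are invented; real input would come from sFlow/NetFlow plus
the DC's address plan.]

    from ipaddress import ip_address, ip_network

    SUBNETS = [ip_network(p) for p in ("10.1.0.0/24",   # web tier VLAN
                                       "10.2.0.0/24",   # app tier VLAN
                                       "10.3.0.0/24")]  # db tier VLAN

    def subnet_of(ip):
        # Return the subnet an address falls in, or None if off-plan.
        addr = ip_address(ip)
        return next((net for net in SUBNETS if addr in net), None)

    flows = [                                  # (src, dst, bytes) samples
        ("10.1.0.5", "10.2.0.7", 4_000_000),   # web -> app
        ("10.2.0.7", "10.3.0.9", 2_500_000),   # app -> db
        ("10.3.0.9", "10.3.0.10", 300_000),    # db replication, same subnet
    ]

    inter = sum(b for s, d, b in flows if subnet_of(s) != subnet_of(d))
    total = sum(b for *_, b in flows)
    print(f"inter-subnet share: {inter / total:.0%}")  # 96% in this toy mix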
> > >>>
> > >>> There isn't a whole lot of intra-CUG/subnet traffic under that
> > >>> deployment model. A problem statement that assumes (implicitly)
> > >>> that most or a significant part of the traffic stays local to a
> > >>> VLAN/subnet/CUG is not a good match for the common 3-tier
> > >>> application model. Even if you assume that web and app tiers use a
> > >>> VLAN/subnet/CUG per tenant (which really is an application in the
> > >>> enterprise), the database is typically common to a large number of
> > >>> apps/tenants.
> > >>>
> > >>>     Pedro.
> > >>>
> > >>> On Jun 29, 2012, at 5:26 PM, Joel M. Halpern wrote:
> > >>>
> > >>>> Depending upon what portion of the traffic needs inter-region
> > >>>> handling (inter-VPN, inter-VLAN, ...), it is not obvious that
> > >>>> "optimal" is an important goal.  As a general rule, perfect is
> > >>>> the enemy of good.
> > >>>>
> > >>>> Yours,
> > >>>> Joel
> > >>>>
> > >>>> On 6/29/2012 7:54 PM, [email protected] wrote:
> > >>>>> Pedro,
> > >>>>>
> > >>>>>> Can you please describe an example of how you could set up such
> > >>>>>> straightforward routing, assuming two Hosts belong to different
> > >>>>>> "CUGs" such that these can be randomly spread across the DC?
> > >>>>>> My question is where is the "gateway", how is it provisioned,
> > >>>>>> and how can traffic paths be guaranteed to be optimal.
> > >>>>>
> > >>>>> Ok, I see your point - the routing functionality is
> > >>>>> straightforward to move over, but ensuring optimal pathing is
> > >>>>> significantly more work, as noted in another one of your
> > >>>>> messages:
> > >>>>>
> > >>>>>> Conceptually, that means that the functionality of the "gateway"
> > >>>>>> should be implemented at the overlay ingress and egress points,
> > >>>>>> rather than requiring a mid-box.
> > >>>>>
> > >>>>> Thanks,
> > >>>>> --David
> > >>>>>
> > >>>>>
> > >>>>>> -----Original Message-----
> > >>>>>> From: Pedro Roque Marques [mailto:[email protected]]
> > >>>>>> Sent: Friday, June 29, 2012 7:38 PM
> > >>>>>> To: Black, David
> > >>>>>> Cc: [email protected]; [email protected]
> > >>>>>> Subject: Re: [nvo3] inter-CUG traffic [was Re: call for adoption:
> > >>>>>> draft-narten-nvo3-overlay-problem-statement-02]
> > >>>>>>
> > >>>>>>
> > >>>>>> On Jun 29, 2012, at 4:02 PM, <[email protected]> wrote:
> > >>>>>>
> > >>>>>>>> There is an underlying assumption in NVO3 that isolating
> > >>>>>>>> tenants from each other is a key reason to use overlays. If
> > >>>>>>>> 90% of the traffic is actually between different tenants, it
> > >>>>>>>> is not immediately clear to me why one has set up a system
> > >>>>>>>> with a lot of "inter tenant" traffic. Is this a case we need
> > >>>>>>>> to focus on optimizing?
> > >>>>>>>
> > >>>>>>> A single tenant may have multiple virtual networks with
> > >>>>>>> routing used to provide/control access among them.  The
> > >>>>>>> crucial thing is to avoid assuming that a tenant or other
> > >>>>>>> administrative entity has a single virtual network (or CUG in
> > >>>>>>> Pedro's email).  For example, consider moving a portion of a
> > >>>>>>> single data center that uses multiple VLANs and routers to
> > >>>>>>> selectively connect them into an nvo3 environment - each VLAN
> > >>>>>>> gets turned into a virtual network, and the routers now route
> > >>>>>>> among virtual networks instead of VLANs.
> > >>>>>>>
> > >>>>>>> One of the things that's been pointed out to me in private is
> > >>>>>>> that the level of importance that one places on routing across
> > >>>>>>> virtual networks may depend on one's background.  If one is
> > >>>>>>> familiar with VLANs and views nvo3 overlays as providing
> > >>>>>>> VLAN-like functionality, IP routing among virtual networks is
> > >>>>>>> a straightforward application of IP routing among VLANs (e.g.,
> > >>>>>>> the previous mention of L2/L3 IRB functionality that is common
> > >>>>>>> in data center network switches).
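
[For readers less familiar with IRB, a toy sketch of the per-tenant
routing model described above: one routing instance (VRF) per tenant,
with an interface into each of that tenant's virtual networks. The VN
names and prefixes are invented; they are disjoint, so a linear scan
stands in for a real longest-prefix-match lookup.]

    from ipaddress import ip_address, ip_network

    VRF_TENANT_A = {                       # prefix -> VN where it lives
        ip_network("10.1.0.0/24"): "vn-web",
        ip_network("10.2.0.0/24"): "vn-app",
        ip_network("10.3.0.0/24"): "vn-db",
    }

    def forward(src_vn, dst_ip, vrf):
        dst = ip_address(dst_ip)
        for prefix, vn in vrf.items():
            if dst in prefix:
                if vn == src_vn:
                    return f"bridge within {vn}"    # same VN: plain L2
                return f"route {src_vn} -> {vn}"    # different VN: IRB hop
        return "drop: no route in tenant VRF"

    print(forward("vn-web", "10.2.0.7", VRF_TENANT_A))  # route vn-web -> vn-app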
> > >>>>>>
> > >>>>>> Can you please describe an example of how you could set up such
> > >>>>>> straightforward routing, assuming two Hosts belong to different
> > >>>>>> "CUGs" such that these can be randomly spread across the DC?
> > >>>>>> My question is where is the "gateway", how is it provisioned,
> > >>>>>> and how can traffic paths be guaranteed to be optimal.
> > >>>>>>
> > >>>>>>>    OTOH, if one is familiar with VPNs where access among
> > >>>>>>> otherwise-closed groups has to be explicitly configured,
> > >>>>>>> particularly L3 VPNs where one cannot look to L2 to help with
> > >>>>>>> grouping the end systems, this sort of cross-group access can
> > >>>>>>> be a significant area of functionality.
> > >>>>>>
> > >>>>>> Considering that in a VPN one can achieve inter-CUG traffic
> > >>>>>> exchange without a gateway in the middle via policy, it is
> > >>>>>> unclear why you suggest that "look to L2" would help.
> > >>>>>>
> > >>>>>>>
> > >>>>>>> Thanks,
> > >>>>>>> --David
> > >>>>>>>
> > >>>>>>>> -----Original Message-----
> > >>>>>>>> From: [email protected] [mailto:[email protected]]
> > >>>>>>>> On Behalf Of Thomas Narten
> > >>>>>>>> Sent: Friday, June 29, 2012 5:56 PM
> > >>>>>>>> To: Pedro Roque Marques
> > >>>>>>>> Cc: [email protected]
> > >>>>>>>> Subject: [nvo3] inter-CUG traffic [was Re: call for adoption:
> > >>>>>>>> draft-narten-nvo3-overlay-problem-statement-02]
> > >>>>>>>>
> > >>>>>>>> Pedro Roque Marques <[email protected]> writes:
> > >>>>>>>>
> > >>>>>>>>> I object to the document on the following points:
> > >>>>>>>>>
> > >>>>>>>>> 3) Does not discuss the requirements for inter-CUG traffic.
> > >>>>>>>>
> > >>>>>>>> Given that the problem statement is not supposed to be the
> > >>>>>>>> requirements document, what exactly should the problem
> > >>>>>>>> statement say about this topic?
> > >>>>>>>>
> > >>>>>>>> <[email protected]> writes:
> > >>>>>>>>
> > >>>>>>>>> Inter-VN traffic (what you refer to as inter-CUG traffic) is
> > >>>>>>>>> handled by a straightforward application of IP routing to
> > >>>>>>>>> the inner IP headers; this is similar to the well-understood
> > >>>>>>>>> application of IP routing to forward traffic across VLANs.
> > >>>>>>>>> We should talk about VRFs as something other than a
> > >>>>>>>>> limitation of current approaches - for VLANs, VRFs (separate
> > >>>>>>>>> instances of routing) are definitely a feature, and I expect
> > >>>>>>>>> this to carry forward to nvo3 VNs.  In addition, we need to
> > >>>>>>>>> make changes to address Dimitri's comments about problems
> > >>>>>>>>> with the current VRF text.
> > >>>>>>>>
> > >>>>>>>> Pedro Roque Marques <[email protected]> writes:
> > >>>>>>>>
> > >>>>>>>>> That is where again the differences between different types
> > >>>>>>>>> of data centers come into play. If, for instance, 90% of a
> > >>>>>>>>> VM's traffic happens to be between the Host OS and a
> > >>>>>>>>> network-attached storage file system run as-a-Service (with
> > >>>>>>>>> the appropriate multi-tenant support), then the question of
> > >>>>>>>>> where the routers are becomes a very important issue. In a
> > >>>>>>>>> large-scale data center where the Host VM and the CPU that
> > >>>>>>>>> hosts the filesystem block can be randomly spread, where is
> > >>>>>>>>> the router?
> > >>>>>>>>
> > >>>>>>>> Where is what router? Are you assuming the Host OS and NAS
> > >>>>>>>> are in different VNs? And hence, traffic has to (at least
> > >>>>>>>> conceptually) exit one VN and reenter another whenever there
> > >>>>>>>> is Host OS - NAS traffic?
> > >>>>>>>>
> > >>>>>>>>> Is every switch a router? Does it have all the CUGs present?
> > >>>>>>>>
> > >>>>>>>> The underlay can be a mixture of switches and routers... that
> > >>>>>>>> is not our concern. So long as the underlay delivers traffic
> > >>>>>>>> sourced by an ingress NVE to the appropriate egress NVE, we
> > >>>>>>>> are good.
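
[A schematic of that division of labour: the ingress NVE maps the
tenant destination to an egress NVE plus a VN context, and the underlay
forwards purely on the outer header. The table, addresses, and field
names are invented, not any specific NVO3 encapsulation:]

    NVE_TABLE = {  # (vn, tenant MAC) -> egress NVE's underlay IP
        ("vn-app", "52:54:00:aa:bb:01"): "192.0.2.11",
    }

    def encapsulate(vn, dst_mac, inner_frame):
        egress = NVE_TABLE[(vn, dst_mac)]   # the overlay's mapping decision
        return {
            "outer_dst": egress,            # all the underlay ever looks at
            "vn_context": vn,               # lets the egress NVE restore the VN
            "payload": inner_frame,
        }

    pkt = encapsulate("vn-app", "52:54:00:aa:bb:01", b"...tenant frame...")
    print(pkt["outer_dst"])  # which path it takes is the underlay's concern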
> > >>>>>>>>
> > >>>>>>>> If there are issues with the actual path taken being
> > >>>>>>>> suboptimal in some sense, that is an underlay problem to
> > >>>>>>>> solve, not for the overlay.
> > >>>>>>>>
> > >>>>>>>>> In some DC designs the problem to solve is the inter-CUG
> > >>>>>>>>> traffic, with L2 headers being totally irrelevant.
> > >>>>>>>>
> > >>>>>>>> There is an underlying assumption in NVO3 that isolating
> > >>>>>>>> tenants from each other is a key reason to use overlays. If
> > >>>>>>>> 90% of the traffic is actually between different tenants, it
> > >>>>>>>> is not immediately clear to me why one has set up a system
> > >>>>>>>> with a lot of "inter tenant" traffic. Is this a case we need
> > >>>>>>>> to focus on optimizing?
> > >>>>>>>>
> > >>>>>>>> But in any case, if one does have inter-VN traffic, that will
> > >>>>>>>> have to get funneled through a "gateway" between VNs, at
> > >>>>>>>> least conceptually. I would assume that an implementation of
> > >>>>>>>> overlays would provide at least one, and likely more, such
> > >>>>>>>> gateways on each VN. How many and where to place them will
> > >>>>>>>> presumably depend on many factors but would be done based on
> > >>>>>>>> traffic patterns and network layout. I would not think every
> > >>>>>>>> NVE has to provide such functionality.
> > >>>>>>>>
> > >>>>>>>> What do you propose needs saying in the problem statement
> > >>>>>>>> about that?
> > >>>>>>>>
> > >>>>>>>> Thomas
> > >>>>>>>>
> > >>>>>>>
> > >>>>>>
> > >>>>>
> > >>>>>
> > >>>>
> > >>>
> > >>>
> > >>
> > >
>
_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3
