> 
> Taking the 3-tier app as an example again: if each app server could be
> configured with two interfaces, one in the VLAN/subnet of the web
> servers and the other in the VLAN/subnet of the DB servers, the
> network would only need to care about optimizing intra-subnet traffic
> (e.g., web<->app and app<->DB intra-subnet traffic).

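To make the dual-homing idea above concrete, here is a minimal sketch using Python's stdlib `ipaddress` module. All addresses and subnet values are assumed purely for illustration; the point is that when the app server holds a leg in each adjacent tier's subnet, every web<->app and app<->DB flow is intra-subnet and never needs a router.

```python
import ipaddress

# Hypothetical addressing for the 3-tier example (all values assumed):
WEB_SUBNET = ipaddress.ip_network("10.0.1.0/24")  # web tier VLAN/subnet
DB_SUBNET = ipaddress.ip_network("10.0.2.0/24")   # DB tier VLAN/subnet

# A dual-homed app server holds one interface in each adjacent subnet.
app_server_ifaces = [
    ipaddress.ip_interface("10.0.1.100/24"),  # leg in the web subnet
    ipaddress.ip_interface("10.0.2.100/24"),  # leg in the DB subnet
]

def is_intra_subnet(src: str, dst_iface: ipaddress.IPv4Interface) -> bool:
    """True if src and the chosen app-server leg share a subnet,
    i.e. the flow never needs inter-subnet (routed) handling."""
    return ipaddress.ip_address(src) in dst_iface.network

# web -> app uses the web-side leg, app -> DB uses the DB-side leg:
assert is_intra_subnet("10.0.1.10", app_server_ifaces[0])  # web<->app stays L2
assert is_intra_subnet("10.0.2.20", app_server_ifaces[1])  # app<->DB stays L2
```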

This would violate several security best practices. It would make the VM
an easy entry point into the sensitive business-logic and/or DB networks.
A vulnerability in the VM's OS and/or applications would allow an attacker
to gain access to the VM and, from there, to the internal network. That's
why people use DMZs.

A router/firewall has a much smaller code footprint and is most likely
more resilient to such vulnerabilities. So not only does one need L3
isolation, but most often also a real firewall between the web servers
and the rest of the application.
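The default-deny posture described above can be sketched as a tiny stateless policy check. The subnets, ports, and rule table here are all assumed for illustration, not taken from any real deployment; the point is that a firewall between tiers permits only the explicitly listed flows and drops everything else.

```python
import ipaddress

# Hypothetical tier subnets and a default-deny rule table (all values assumed).
WEB = ipaddress.ip_network("10.0.1.0/24")
APP = ipaddress.ip_network("10.0.2.0/24")
DB = ipaddress.ip_network("10.0.3.0/24")

# (src_subnet, dst_subnet, dst_port) tuples the firewall permits;
# everything else is dropped, which is the DMZ-style posture described above.
ALLOW = [
    (WEB, APP, 8080),  # web tier may reach the app logic tier only
    (APP, DB, 5432),   # app tier may reach the database tier only
]

def permitted(src: str, dst: str, port: int) -> bool:
    """Stateless check: does any allow rule match this flow?"""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return any(s in sn and d in dn and port == p for sn, dn, p in ALLOW)

assert permitted("10.0.1.5", "10.0.2.5", 8080)      # web -> app: allowed
assert not permitted("10.0.1.5", "10.0.3.5", 5432)  # web -> DB: blocked
```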

> 
> Of course, if the above demand can't be met for some reason, the network
> should consider optimizing inter-subnet traffic; as has been pointed out
> before, one practical solution is to deploy the default gateway function
> as close as possible to the servers, e.g., inside the NVEs.
> 
> Best regards,
> Xiaohu
> 
> > -----Original Message-----
> > From: [email protected] [mailto:[email protected]] On Behalf Of
> > Pedro Roque Marques
> > Sent: June 30, 2012, 9:41
> > To: Joel M. Halpern
> > Cc: [email protected]; [email protected]; [email protected]
> > Subject: Re: [nvo3] inter-CUG traffic [was Re: call for adoption:
> > draft-narten-nvo3-overlay-problem-statement-02]
> >
> > Joel,
> > A very common model currently is to have a 3-tier app where each tier is
> > in its VLAN. You will find that web-servers for instance don't actually
> > talk much to each other… although they are on the same VLAN, 100% of
> > their traffic goes outside the VLAN. A very similar story applies to the
> > app logic tier. The database tier may have some replication traffic
> > within its VLAN, but hopefully that is less than the requests that it
> > serves.
> >
> > There isn't a whole lot of intra-CUG/subnet traffic under that deployment
> > model. A problem statement that assumes (implicitly) that most or a
> > significant part of the traffic stays local to a VLAN/subnet/CUG is not a
> > good match for the common 3-tier application model. Even if you assume
> > that web and app tiers use a VLAN/subnet/CUG per tenant (which really is
> > an application in enterprise) the database is typically common for a
> > large number of apps/tenants.
> >
> >   Pedro.
> >
> > On Jun 29, 2012, at 5:26 PM, Joel M. Halpern wrote:
> >
> > > Depending upon what portion of the traffic needs inter-region handling
> > > (inter vpn, inter-vlan, ...) it is not obvious that "optimal" is an
> > > important goal. As a general rule, perfect is the enemy of good.
> > >
> > > Yours,
> > > Joel
> > >
> > > On 6/29/2012 7:54 PM, [email protected] wrote:
> > >> Pedro,
> > >>
> > >>> Can you please describe an example of how you could set up such
> > >>> straightforward routing, assuming two Hosts belong to different
> > >>> "CUGs" such that these can be randomly spread across the DC? My
> > >>> question is where is the "gateway", how is it provisioned, and how
> > >>> can traffic paths be guaranteed to be optimal.
> > >>
> > >> Ok, I see your point - the routing functionality is straightforward
> > >> to move over, but ensuring optimal pathing is significantly more work,
> > >> as noted in another one of your messages:
> > >>
> > >>> Conceptually, that means that the functionality of the "gateway"
> > >>> should be implemented at the overlay ingress and egress points,
> > >>> rather than requiring a mid-box.
> > >>
> > >> Thanks,
> > >> --David
> > >>
> > >>
> > >>> -----Original Message-----
> > >>> From: Pedro Roque Marques [mailto:[email protected]]
> > >>> Sent: Friday, June 29, 2012 7:38 PM
> > >>> To: Black, David
> > >>> Cc: [email protected]; [email protected]
> > >>> Subject: Re: [nvo3] inter-CUG traffic [was Re: call for adoption:
> > >>> draft-narten-nvo3-overlay-problem-statement-02]
> > >>>
> > >>>
> > >>> On Jun 29, 2012, at 4:02 PM, <[email protected]> wrote:
> > >>>
> > >>>>> There is an underlying assumption in NVO3 that isolating tenants from
> > >>>>> each other is a key reason to use overlays. If 90% of the traffic is
> > >>>>> actually between different tenants, it is not immediately clear to me
> > >>>>> why one has set up a system with a lot of "inter tenant" traffic. Is
> > >>>>> this a case we need to focus on optimizing?
> > >>>>
> > >>>> A single tenant may have multiple virtual networks with routing used
> > >>>> to provide/control access among them.  The crucial thing is to avoid
> > >>>> assuming that a tenant or other administrative entity has a single
> > >>>> virtual network (or CUG in Pedro's email).  For example, consider
> > >>>> moving a portion of a single data center that uses multiple VLANs
> > >>>> and routers to selectively connect them into an nvo3 environment -
> > >>>> each VLAN gets turned into a virtual network, and the routers now
> > >>>> route among virtual networks instead of VLANs.
> > >>>>
> > >>>> One of the things that's been pointed out to me in private is that
> > >>>> the level of importance that one places on routing across virtual
> > >>>> networks may depend on one's background.  If one is familiar with
> > >>>> VLANs and views nvo3 overlays as providing VLAN-like functionality,
> > >>>> IP routing among virtual networks is a straightforward application
> > >>>> of IP routing among VLANs (e.g., the previous mention of L2/L3 IRB
> > >>>> functionality that is common in data center network switches).
> > >>>
> > >>> Can you please describe an example of how you could set up such
> > >>> straightforward routing, assuming two Hosts belong to different
> > >>> "CUGs" such that these can be randomly spread across the DC? My
> > >>> question is where is the "gateway", how is it provisioned, and how
> > >>> can traffic paths be guaranteed to be optimal.
> > >>>
> > >>>>  OTOH, if one is familiar with VPNs where access among
> > >>>> otherwise-closed groups has to be explicitly configured, particularly
> > >>>> L3 VPNs where one cannot look to L2 to help with grouping the end
> > >>>> systems, this sort of cross-group access can be a significant area
> > >>>> of functionality.
> > >>>
> > >>> Considering that in a VPN one can achieve inter-CUG traffic exchange
> > >>> without a gateway in the middle via policy, it is unclear why you
> > >>> suggest that "look to L2" would help.
> > >>>
> > >>>>
> > >>>> Thanks,
> > >>>> --David
> > >>>>
> > >>>>> -----Original Message-----
> > >>>>> From: [email protected] [mailto:[email protected]] On Behalf Of
> > >>>>> Thomas Narten
> > >>>>> Sent: Friday, June 29, 2012 5:56 PM
> > >>>>> To: Pedro Roque Marques
> > >>>>> Cc: [email protected]
> > >>>>> Subject: [nvo3] inter-CUG traffic [was Re: call for adoption:
> > >>>>> draft-narten-nvo3-overlay-problem-statement-02]
> > >>>>>
> > >>>>> Pedro Roque Marques <[email protected]> writes:
> > >>>>>
> > >>>>>> I object to the document on the following points:
> > >>>>>>
> > >>>>>> 3) Does not discuss the requirements for inter-CUG traffic.
> > >>>>>
> > >>>>> Given that the problem statement is not supposed to be the
> > >>>>> requirements document, what exactly should the problem statement
> > >>>>> say about this topic?
> > >>>>>
> > >>>>> <[email protected]> writes:
> > >>>>>
> > >>>>>> Inter-VN traffic (what you refer to as inter-CUG traffic) is handled
> > >>>>>> by a straightforward application of IP routing to the inner IP
> > >>>>>> headers; this is similar to the well-understood application of IP
> > >>>>>> routing to forward traffic across VLANs.  We should talk about VRFs
> > >>>>>> as something other than a limitation of current approaches - for
> > >>>>>> VLANs, VRFs (separate instances of routing) are definitely a
> > >>>>>> feature, and I expect this to carry forward to nvo3 VNs.  In
> > >>>>>> addition, we need to make changes to address Dimitri's comments
> > >>>>>> about problems with the current VRF text.
> > >>>>>
> > >>>>> Pedro Roque Marques <[email protected]> writes:
> > >>>>>
> > >>>>>> That is where again the differences between different types of
> > >>>>>> data-centers do play in. If for instance 90% of a VM's traffic
> > >>>>>> happens to be between the Host OS and a network-attached storage
> > >>>>>> file system run as-a-Service (with the appropriate multi-tenant
> > >>>>>> support), then the question of where the routers are becomes a
> > >>>>>> very important issue. In a large-scale data-center where the Host
> > >>>>>> VM and the CPU that hosts the filesystem block can be randomly
> > >>>>>> spread, where is the router?
> > >>>>>
> > >>>>> Where is what router? Are you assuming the Host OS and NAS are in
> > >>>>> different VNs? And hence, traffic has to (at least conceptually)
> > >>>>> exit one VN and reenter another whenever there is HostOS - NAS
> > >>>>> traffic?
> > >>>>>
> > >>>>>> Is every switch a router ? Does it have all the CUGs present ?
> > >>>>>
> > >>>>> The underlay can be a mixture of switches and routers... that is not
> > >>>>> our concern. So long as the underlay delivers traffic sourced by an
> > >>>>> ingress NVE to the appropriate egress NVE, we are good.
> > >>>>>
> > >>>>> If there are issues with the actual path taken being suboptimal in
> > >>>>> some sense, that is an underlay problem to solve, not for the overlay.
> > >>>>>
> > >>>>>> In some DC designs the problem to solve is the inter-CUG
> > >>>>>> traffic, with L2 headers being totally irrelevant.
> > >>>>>
> > >>>>> There is an underlying assumption in NVO3 that isolating tenants from
> > >>>>> each other is a key reason to use overlays. If 90% of the traffic is
> > >>>>> actually between different tenants, it is not immediately clear to me
> > >>>>> why one has set up a system with a lot of "inter tenant" traffic. Is
> > >>>>> this a case we need to focus on optimizing?
> > >>>>>
> > >>>>> But in any case, if one does have inter-VN traffic, that will have
> > >>>>> to get funneled through a "gateway" between VNs, at least
> > >>>>> conceptually. I would assume that an implementation of overlays
> > >>>>> would provide at least one, and likely more, such gateways on each
> > >>>>> VN. How many and where to place them will presumably depend on many
> > >>>>> factors but would be done based on traffic patterns and network
> > >>>>> layout. I would not think every NVE has to provide such
> > >>>>> functionality.
> > >>>>>
> > >>>>> What do you propose needs saying in the problem statement about
> > >>>>> that?
> > >>>>>
> > >>>>> Thomas
> > >>>>>
> > >>>>> _______________________________________________
> > >>>>> nvo3 mailing list
> > >>>>> [email protected]
> > >>>>> https://www.ietf.org/mailman/listinfo/nvo3
> > >>>>
> > >>>
> > >>
> > >>
> > >
> >