How are these very large DCs with one large flat L2 subnet dealing with the
issue of broadcast/ARP today?
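For concreteness, a back-of-envelope sketch of the kind of load I mean. All the numbers here are illustrative assumptions (host count, peers per host, ARP cache timeout), not measurements from any real DC:

```python
# Back-of-envelope estimate of ARP broadcast load in one flat L2 subnet.
# Every ARP request is broadcast, so every NIC on the segment receives it.

ARP_FRAME_BITS = 64 * 8  # minimum Ethernet frame (64 bytes) carries an ARP request


def arp_broadcast_load_bps(hosts, peers_per_host, cache_timeout_s):
    """Aggregate ARP-request bit rate seen by EVERY host on the segment.

    Assumes each host re-ARPs for each of its active peers once per
    cache-timeout interval, and that requests dominate (replies are unicast).
    """
    requests_per_s = hosts * peers_per_host / cache_timeout_s
    return requests_per_s * ARP_FRAME_BITS


# Illustrative: 100k VMs, each talking to 10 peers, 60 s ARP cache timeout
# -> roughly 16.7k broadcasts/s, i.e. ~8.5 Mbit/s of ARP hitting every host.
load = arp_broadcast_load_bps(100_000, 10, 60)
```

Even under these rough assumptions the per-host interrupt load from broadcasts, not the bandwidth, is usually the first thing to hurt, which is why I'm curious how the flat-L2 deployments cope.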


On Jul 10, 2012, at 2:39 PM, Stiliadis, Dimitrios (Dimitri) wrote:

>> I thought that we were discussing whether it is necessary for NVo3 to
>> consider cross-subnet communications. For the example you give, where all
>> applications in the DC are in one subnet and those applications don't
>> communicate with peers in other subnets, I would think those are pretty
>> small DCs, which is not really what NVo3 is targeted for, is it?
>> 
> 
> On the contrary. Lots of very large DCs are there for such applications.
> 
> (Most Big Data-related data centers are like that.)
> 
> Dimitri
> 
> 
>> Linda
>> 
>>> -----Original Message-----
>>> From: Stiliadis, Dimitrios (Dimitri) [mailto:dimitri.stiliadis@alcatel-
>>> lucent.com]
>>> Sent: Monday, July 09, 2012 5:41 PM
>>> To: Linda Dunbar; Joel M. Halpern
>>> Cc: Pedro Roque Marques; [email protected]; [email protected];
>>> [email protected]
>>> Subject: RE: [nvo3] inter-CUG traffic [was Re: call for adoption:
>>> draft-narten-nvo3-overlay-problem-statement-02]
>>> 
>>> There is no DMZ in such applications; they don't reside in the DMZ.
>>> 
>>> Applications maintain terabytes of data in a Hadoop cluster and run
>>> MapReduce jobs on them. There is a single application and a single
>>> tenant on this whole cluster. MapReduce nodes will send lots of data
>>> between the map and reduce phases, and there is no need for any L3
>>> separation in these cases.
>>> 
>>> This is just one example. We can't possibly make the assumption that
>>> every app in the DC is related to a web service in the DMZ.
>>> 
>>> 
>>> Dimitri
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>>> -----Original Message-----
>>>> From: Linda Dunbar [mailto:[email protected]]
>>>> Sent: Monday, July 09, 2012 2:59 PM
>>>> To: Stiliadis, Dimitrios (Dimitri); Joel M. Halpern
>>>> Cc: Pedro Roque Marques; [email protected]; [email protected];
>>> [email protected]
>>>> Subject: RE: [nvo3] inter-CUG traffic [was Re: call for adoption:
>>> draft-
>>>> narten-nvo3-overlay-problem-statement-02]
>>>> 
>>>> Dimitrios,
>>>> 
>>>> Do you mean that all applications in "a large hadoop/mapreduce
>>>> cluster" are in one subnet?
>>>> And the applications in "one cluster" never communicate with another
>>>> "cluster" in the same data center?  Really?
>>>> I would think applications in the DMZ would at least be put in a
>>>> different subnet.
>>>> 
>>>> Linda
>>>> 
>>>>> -----Original Message-----
>>>>> From: Stiliadis, Dimitrios (Dimitri)
>>> [mailto:dimitri.stiliadis@alcatel-
>>>>> lucent.com]
>>>>>> What kind of DCs have only intra-subnet communication?
>>>>> 
>>>>> A large hadoop/mapreduce cluster will not bother with different
>>>>> subnets, and the traffic requirements can be huge. Thousands of
>>>>> enterprise applications will also do the same, since they assume
>>>>> they are in a protected environment.
>>>>> 
>>>>> I agree with Joel's earlier statement:
>>>>> 
>>>>>> I have seen descriptions of data centers for which it is almost
>>>>>> "all", and other data centers for which it is negligible.
>>>>> 
>>>>> Dimitri
>>>>> 
>>>>> 
>>>>>> 
>>>>>> Linda
>>>>>>> -----Original Message-----
>>>>>>> From: Joel M. Halpern [mailto:[email protected]]
>>>>>>> Sent: Monday, July 09, 2012 3:00 PM
>>>>>>> To: Linda Dunbar
>>>>>>> Cc: Pedro Roque Marques; [email protected]; [email protected];
>>>>>>> [email protected]
>>>>>>> Subject: Re: [nvo3] inter-CUG traffic [was Re: call for
>>> adoption:
>>>>>>> draft-narten-nvo3-overlay-problem-statement-02]
>>>>>>> 
>>>>>>> I have not seen data to suggest "most".  Do you have a source for
>>>>>>> that?  I have seen descriptions of data centers for which it is
>>>>>>> almost "all", and other data centers for which it is negligible.
>>>>>>> Yours,
>>>>>>> Joel
>>>>>>> 
>>>>>>> On 7/9/2012 3:51 PM, Linda Dunbar wrote:
>>>>>>>> Joel,
>>>>>>>> 
>>>>>>>> I agree with you that VPN-related WGs (e.g. L2VPN, L3VPN) may not
>>>>>>>> need to say much about cross-VN communications.
>>>>>>>> 
>>>>>>>> But most traffic in data centers is across subnets. Given that
>>>>>>>> NVo3 is for identifying issues associated with data centers, I
>>>>>>>> think cross-subnet traffic should not be ignored.
>>>>>>>> 
>>>>>>>> Linda
>>>>>>>> 
>>>>>>>>> -----Original Message-----
>>>>>>>>> From: [email protected] [mailto:[email protected]]
>>> On
>>>>> Behalf
>>>>>>> Of
>>>>>>>>> Joel M. Halpern
>>>>>>>>> Sent: Friday, June 29, 2012 9:32 PM
>>>>>>>>> To: Pedro Roque Marques
>>>>>>>>> Cc: [email protected]; [email protected]; [email protected]
>>>>>>>>> Subject: Re: [nvo3] inter-CUG traffic [was Re: call for
>>> adoption:
>>>>>>>>> draft-narten-nvo3-overlay-problem-statement-02]
>>>>>>>>> 
>>>>>>>>> I would not be so bold as to insist that all deployments can
>>>>>>>>> safely ignore inter-VPN intra-data-center traffic.  But there
>>>>>>>>> are MANY cases where that is not an important part of the
>>>>>>>>> traffic mix.
>>>>>>>>> So I was urging that we not mandate optimal inter-subnet routing
>>>>>>>>> as part of the NVO3 requirements.
>>>>>>>>> I would not want to prohibit it either, as there are definitely
>>>>>>>>> cases where it matters, some along the lines you allude to.
>>>>>>>>> 
>>>>>>>>> Yours,
>>>>>>>>> Joel
>>>>>>>>> 
>>>>>>>>> On 6/29/2012 9:40 PM, Pedro Roque Marques wrote:
>>>>>>>>>> Joel,
>>>>>>>>>> A very common model currently is a 3-tier app where each tier
>>>>>>>>>> is in its own VLAN. You will find that web servers, for
>>>>>>>>>> instance, don't actually talk much to each other... although
>>>>>>>>>> they are on the same VLAN, 100% of their traffic goes outside
>>>>>>>>>> the VLAN. A very similar story applies to the app-logic tier.
>>>>>>>>>> The database tier may have some replication traffic within its
>>>>>>>>>> VLAN, but hopefully that is less than the requests that it
>>>>>>>>>> serves.
>>>>>>>>>> 
>>>>>>>>>> There isn't a whole lot of intra-CUG/subnet traffic under that
>>>>>>>>>> deployment model. A problem statement that assumes (implicitly)
>>>>>>>>>> that most or a significant part of the traffic stays local to a
>>>>>>>>>> VLAN/subnet/CUG is not a good match for the common 3-tier
>>>>>>>>>> application model. Even if you assume that the web and app
>>>>>>>>>> tiers use a VLAN/subnet/CUG per tenant (which really is an
>>>>>>>>>> application in the enterprise), the database is typically
>>>>>>>>>> common to a large number of apps/tenants.
>>>>>>>>>> 
>>>>>>>>>>    Pedro.
>>>>>>>>>> 
>>>>>>>>>> On Jun 29, 2012, at 5:26 PM, Joel M. Halpern wrote:
>>>>>>>>>> 
>>>>>>>>>>> Depending upon what portion of the traffic needs inter-region
>>>>>>>>>>> handling (inter-VPN, inter-VLAN, ...), it is not obvious that
>>>>>>>>>>> "optimal" is an important goal.  As a general rule, perfect is
>>>>>>>>>>> the enemy of good.
>>>>>>>>>>> 
>>>>>>>>>>> Yours,
>>>>>>>>>>> Joel
>>>>>>>>>>> 
>>>>>>>>>>> On 6/29/2012 7:54 PM, [email protected] wrote:
>>>>>>>>>>>> Pedro,
>>>>>>>>>>>> 
>>>>>>>>>>>>> Can you please describe an example of how you could set up
>>>>>>>>>>>>> such straightforward routing, assuming two Hosts belong to
>>>>>>>>>>>>> different "CUGs" such that these can be randomly spread
>>>>>>>>>>>>> across the DC? My question is: where is the "gateway", how
>>>>>>>>>>>>> is it provisioned, and how can traffic paths be guaranteed
>>>>>>>>>>>>> to be optimal?
>>>>>>>>>>>> 
>>>>>>>>>>>> OK, I see your point - the routing functionality is
>>>>>>>>>>>> straightforward to move over, but ensuring optimal pathing is
>>>>>>>>>>>> significantly more work, as noted in another one of your
>>>>>>>>>>>> messages:
>>>>>>>>>>>> 
>>>>>>>>>>>>> Conceptually, that means that the functionality of the
>>>>>>>>>>>>> "gateway" should be implemented at the overlay ingress and
>>>>>>>>>>>>> egress points, rather than requiring a mid-box.
>>>>>>>>>>>> 
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> --David
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>>>> From: Pedro Roque Marques
>>> [mailto:[email protected]]
>>>>>>>>>>>>> Sent: Friday, June 29, 2012 7:38 PM
>>>>>>>>>>>>> To: Black, David
>>>>>>>>>>>>> Cc: [email protected]; [email protected]
>>>>>>>>>>>>> Subject: Re: [nvo3] inter-CUG traffic [was Re: call for
>>>>> adoption:
>>>>>>>>> draft-
>>>>>>>>>>>>> narten-nvo3-overlay-problem-statement-02]
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Jun 29, 2012, at 4:02 PM, <[email protected]> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> There is an underlying assumption in NVO3 that isolating
>>>>>>>>>>>>>>> tenants from each other is a key reason to use overlays.
>>>>>>>>>>>>>>> If 90% of the traffic is actually between different
>>>>>>>>>>>>>>> tenants, it is not immediately clear to me why one has set
>>>>>>>>>>>>>>> up a system with a lot of "inter-tenant" traffic. Is this
>>>>>>>>>>>>>>> a case we need to focus on optimizing?
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> A single tenant may have multiple virtual networks with
>>>>> routing
>>>>>>>>> used to
>>>>>>>>>>>>>> provide/control access among them.  The crucial thing
>>> is to
>>>>>>> avoid
>>>>>>>>> assuming
>>>>>>>>>>>>>> that a tenant or other administrative entity has a
>>> single
>>>>>>> virtual
>>>>>>>>> network
>>>>>>>>>>>>>> (or CUG in Pedro's email).  For example, consider
>>> moving a
>>>>>>>>> portion of
>>>>>>>>>>>>>> a single data center that uses multiple VLANs and
>>> routers
>>>>> to
>>>>>>>>> selectively
>>>>>>>>>>>>>> connect them into an nvo3 environment - each VLAN gets
>>>>> turned
>>>>>>>>> into a virtual
>>>>>>>>>>>>>> network, and the routers now route among virtual
>>> networks
>>>>>>> instead
>>>>>>>>> of VLANs.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> One of the things that's been pointed out to me in private
>>>>>>>>>>>>>> is that the level of importance that one places on routing
>>>>>>>>>>>>>> across virtual networks may depend on one's background.  If
>>>>>>>>>>>>>> one is familiar with VLANs and views nvo3 overlays as
>>>>>>>>>>>>>> providing VLAN-like functionality, IP routing among virtual
>>>>>>>>>>>>>> networks is a straightforward application of IP routing
>>>>>>>>>>>>>> among VLANs (e.g., the previous mention of L2/L3 IRB
>>>>>>>>>>>>>> functionality that is common in data center network
>>>>>>>>>>>>>> switches).
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Can you please describe an example of how you could set up
>>>>>>>>>>>>> such straightforward routing, assuming two Hosts belong to
>>>>>>>>>>>>> different "CUGs" such that these can be randomly spread
>>>>>>>>>>>>> across the DC? My question is: where is the "gateway", how
>>>>>>>>>>>>> is it provisioned, and how can traffic paths be guaranteed
>>>>>>>>>>>>> to be optimal?
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> OTOH, if one is familiar with VPNs, where access among
>>>>>>>>>>>>>> otherwise-closed groups has to be explicitly configured,
>>>>>>>>>>>>>> particularly L3 VPNs, where one cannot look to L2 to help
>>>>>>>>>>>>>> with grouping the end systems, this sort of cross-group
>>>>>>>>>>>>>> access can be a significant area of functionality.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Considering that in a VPN one can achieve inter-CUG traffic
>>>>>>>>>>>>> exchange without a gateway in the middle via policy, it is
>>>>>>>>>>>>> unclear why you suggest that "look to L2" would help.
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> --David
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>>>>>> From: [email protected] [mailto:nvo3-
>>> [email protected]]
>>>>> On
>>>>>>>>> Behalf Of
>>>>>>>>>>>>> Thomas
>>>>>>>>>>>>>>> Narten
>>>>>>>>>>>>>>> Sent: Friday, June 29, 2012 5:56 PM
>>>>>>>>>>>>>>> To: Pedro Roque Marques
>>>>>>>>>>>>>>> Cc: [email protected]
>>>>>>>>>>>>>>> Subject: [nvo3] inter-CUG traffic [was Re: call for
>>>>> adoption:
>>>>>>>>> draft-narten-
>>>>>>>>>>>>>>> nvo3-overlay-problem-statement-02]
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Pedro Roque Marques <[email protected]> writes:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> I object to the document on the following points:
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> 3) Does not discuss the requirements for inter-CUG
>>>>>>>>>>>>>>>> traffic.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Given that the problem statement is not supposed to be the
>>>>>>>>>>>>>>> requirements document, what exactly should the problem
>>>>>>>>>>>>>>> statement say about this topic?
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> <[email protected]> writes:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Inter-VN traffic (what you refer to as inter-CUG traffic)
>>>>>>>>>>>>>>>> is handled by a straightforward application of IP routing
>>>>>>>>>>>>>>>> to the inner IP headers; this is similar to the
>>>>>>>>>>>>>>>> well-understood application of IP routing to forward
>>>>>>>>>>>>>>>> traffic across VLANs.  We should talk about VRFs as
>>>>>>>>>>>>>>>> something other than a limitation of current approaches -
>>>>>>>>>>>>>>>> for VLANs, VRFs (separate instances of routing) are
>>>>>>>>>>>>>>>> definitely a feature, and I expect this to carry forward
>>>>>>>>>>>>>>>> to nvo3 VNs.  In addition, we need to make changes to
>>>>>>>>>>>>>>>> address Dimitri's comments about problems with the
>>>>>>>>>>>>>>>> current VRF text.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Pedro Roque Marques <[email protected]> writes:
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> That is where, again, the differences between different
>>>>>>>>>>>>>>>> types of data-centers do play in. If, for instance, 90%
>>>>>>>>>>>>>>>> of a VM's traffic happens to be between the Host OS and a
>>>>>>>>>>>>>>>> network-attached storage file system run as-a-Service
>>>>>>>>>>>>>>>> (with the appropriate multi-tenant support), then the
>>>>>>>>>>>>>>>> question of where the routers are becomes a very
>>>>>>>>>>>>>>>> important issue. In a large-scale data-center, where the
>>>>>>>>>>>>>>>> Host VM and the CPU that hosts the filesystem block can
>>>>>>>>>>>>>>>> be randomly spread, where is the router?
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Where is what router? Are you assuming the Host OS and NAS
>>>>>>>>>>>>>>> are in different VNs? And hence, traffic has to (at least
>>>>>>>>>>>>>>> conceptually) exit one VN and reenter another whenever
>>>>>>>>>>>>>>> there is Host OS - NAS traffic?
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Is every switch a router? Does it have all the CUGs
>>>>>>>>>>>>>>>> present?
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> The underlay can be a mixture of switches and routers...
>>>>>>>>>>>>>>> that is not our concern. So long as the underlay delivers
>>>>>>>>>>>>>>> traffic sourced by an ingress NVE to the appropriate
>>>>>>>>>>>>>>> egress NVE, we are good.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> If there are issues with the actual path taken being
>>>>>>>>>>>>>>> suboptimal in some sense, that is a problem for the
>>>>>>>>>>>>>>> underlay to solve, not the overlay.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> In some DC designs the problem to solve is the inter-CUG
>>>>>>>>>>>>>>>> traffic, with L2 headers being totally irrelevant.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> There is an underlying assumption in NVO3 that isolating
>>>>>>>>>>>>>>> tenants from each other is a key reason to use overlays.
>>>>>>>>>>>>>>> If 90% of the traffic is actually between different
>>>>>>>>>>>>>>> tenants, it is not immediately clear to me why one has set
>>>>>>>>>>>>>>> up a system with a lot of "inter-tenant" traffic. Is this
>>>>>>>>>>>>>>> a case we need to focus on optimizing?
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> But in any case, if one does have inter-VN traffic, it
>>>>>>>>>>>>>>> will have to get funneled through a "gateway" between VNs,
>>>>>>>>>>>>>>> at least conceptually. I would assume that an
>>>>>>>>>>>>>>> implementation of overlays would provide at least one, and
>>>>>>>>>>>>>>> likely more, such gateways on each VN. How many and where
>>>>>>>>>>>>>>> to place them will presumably depend on many factors, but
>>>>>>>>>>>>>>> would be done based on traffic patterns and network
>>>>>>>>>>>>>>> layout. I would not think every NVE has to provide such
>>>>>>>>>>>>>>> functionality.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> What do you propose needs saying in the problem statement
>>>>>>>>>>>>>>> about that?
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Thomas
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>>> nvo3 mailing list
>>>>>>>>>>>>>>> [email protected]
>>>>>>>>>>>>>>> https://www.ietf.org/mailman/listinfo/nvo3
>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> 
>>>>>> 

