Benson,

NVO3's charter states:
NVO3 will document the problem statement, the applicability,
and an architectural framework for DCVPNs within a data center
environment

One of the key differentiators that sets NVO3's overlay apart from other IETF-
defined overlays (such as L2VPN, TRILL, LISP, etc.) is that NVO3 targets the
data center environment.

In a data center, in addition to intra-virtual-network communication, another 
big part is inter-virtual-network communication. The end stations (virtual or 
physical) may even have a specific default gateway configured.

Even NVEs capable of L3 routing can't simply route traffic among multiple 
virtual networks if they are not the end stations' specified "default gateway", 
or if they lack a required capability (such as the firewall function needed for 
the VN).

However, the current Problem Statement has only one paragraph describing 
communication between virtual networks.

I think that it is necessary to add at least one more paragraph to describe 
inter-virtual-network communication:

Some end stations might have a specific default gateway configured. In those 
environments, the communication between two end stations in different virtual 
networks under one NVE might need to be hairpinned through their corresponding 
gateway nodes, which is not bandwidth efficient. This is especially true when 
the NVE to which the end stations are attached is different from the "default 
gateway" specified by the end stations.
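
To make the bandwidth cost concrete, here is a minimal sketch (node names and
topology are hypothetical, not taken from any draft) of the extra overlay legs
a hairpinned flow takes:

```python
# Sketch: overlay legs for a flow between two end stations on the SAME NVE
# but in different virtual networks. Node names are hypothetical.

def hairpin_path(src_nve, dst_nve, gateway):
    """Path when inter-VN traffic must be routed at a remote default gateway."""
    return [src_nve, gateway, dst_nve]

def local_path(src_nve, dst_nve):
    """Path if the NVE itself were allowed to route between the two VNs."""
    return [src_nve] if src_nve == dst_nve else [src_nve, dst_nve]

via_gw = hairpin_path("NVE-1", "NVE-1", "DC-GW")
direct = local_path("NVE-1", "NVE-1")

# The hairpinned flow crosses the intra-DC network twice; a locally
# routed flow would never leave the NVE.
print("legs via gateway:", len(via_gw) - 1)       # 2
print("legs if routed locally:", len(direct) - 1)  # 0
```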

Linda



From: Benson Schliesser [mailto:[email protected]]
Sent: Saturday, March 02, 2013 12:49 PM
To: Linda Dunbar
Cc: [email protected]
Subject: Re: [nvo3] WG Last Call for 
draft-ietf-nvo3-overlay-problem-statement-02

Linda, I understand the issues you're describing. But my point remains: they're 
effects of how any given solution works. There may be trade-offs between a 
solution's architectural choices (e.g. aggregation, delegation, and load 
balancing of various control and/or data plane functions) that change these 
properties.

Therefore, I don't think they're in scope of the Problem Statement. The PS is 
intended to capture our motivation, etc., and not architecture or requirements.

Cheers,
-Benson


On 3/1/13 7:10 PM, Linda Dunbar wrote:
Benson,


The generic architecture described in Figure 1 of the draft-ietf-nvo3-framework 
document shows that all end devices are under a group of DC-GWs:

A generic architecture for Data Centers is depicted in Figure 1:

                                    ,---------.
                                  ,'           `.
                                 (  IP/MPLS WAN )
                                  `.           ,'
                                    `-+------+'
                                 +--+--+   +-+---+
                                 |DC GW|+-+|DC GW|
                                 +-+---+   +-----+
                                    |       /
                                    .--. .--.
                                  (    '    '.--.
                                .-.' Intra-DC     '
                               (     network      )
                                (             .'-'
                                 '--'._.'.    )\ \
                                 / /     '--'  \ \
                                / /      | |    \ \
                          +---+--+   +-`.+--+  +--+----+
                          | ToR  |   | ToR  |  |  ToR  |
                          +-+--`.+   +-+-`.-+  +-+--+--+
                           /     \    /    \   /       \
                        __/_      \  /      \ /_       _\__
                 '--------'   '--------'   '--------'   '--------'
                 :  End   :   :  End   :   :  End   :   :  End   :
                 : Device :   : Device :   : Device :   : Device :
                 '--------'   '--------'   '--------'   '--------'

                 Figure 1 : A Generic Architecture for Data Centers


Can each of the GWs reach all end devices? If yes, that means every GW needs to 
be aware of the corresponding egress NVE for each end device, which is the GW 
bottleneck issue described in my email.

If each GW reaches only a subset of VNIs, the GW routers and their uplink (WAN) 
ports are not fully utilized, and WAN ports are expensive.

The NVO3 problem statement has a detailed description of control plane options. 
IMHO, it is more important for the problem statement to address the pros and 
cons of centralized vs. distributed gateways.

In addition, IP encapsulation pushes all L2 multicast down to L3. IGMP/MLD 
snooping is very efficient at pruning multicast distribution in Layer 2, and it 
is supported by very cheap switches. But L3 multicast is a different story. 
Those are all valid issues in having overlay networks in a data center. I don't 
see why/how the problem statement can avoid them.
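
As a rough illustration of the pruning point (port names and group membership
below are made up), a snooping switch forwards a multicast frame only to ports
with known members, while a non-snooping switch floods it:

```python
# Sketch: IGMP/MLD snooping pruning L2 multicast delivery.
# Port names and group membership are hypothetical.

all_ports = {"p1", "p2", "p3", "p4", "p5", "p6"}
# Ports learned (via snooped IGMP membership reports) to have receivers:
snooped_members = {"239.1.1.1": {"p2", "p5"}}

def delivery_ports(group, ingress, snooping=True):
    """Ports the switch forwards a multicast frame to."""
    if snooping and group in snooped_members:
        return snooped_members[group] - {ingress}
    return all_ports - {ingress}   # no snooping: flood to all other ports

print(sorted(delivery_ports("239.1.1.1", "p1")))                  # ['p2', 'p5']
print(sorted(delivery_ports("239.1.1.1", "p1", snooping=False)))  # 5 ports
```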

Thanks, Linda





From: Benson Schliesser [mailto:[email protected]]
Sent: Friday, March 01, 2013 7:03 PM
To: Linda Dunbar
Cc: [email protected]
Subject: Re: [nvo3] WG Last Call for 
draft-ietf-nvo3-overlay-problem-statement-02

Hi, Linda.

To clarify: I described them as solution based because they might not be 
present in all different solutions.

For example, a number of your issues stem from the gateway's location. But some 
approaches might have distributed gateways. An L3 service in the NVE might not 
experience any of these gateway issues, for example, depending on the network 
design.

Nevertheless, these are definitely things to consider. Thus my suggestion - I 
think it would be helpful if these issues were rephrased as requirements.

Cheers,
-Benson



On Mar 1, 2013, at 18:36, Linda Dunbar <[email protected]> wrote:
Benson,

I am only pointing out some data center issues not solved by overlays and 
issues introduced by IP overlays. Since NVO3 is an IETF working group for data 
centers with large numbers of mobile virtual machines, I think the problem 
statement should have a section describing those points.

I really don't see how those issues are solution-based. Can you elaborate on 
why you see them as solution-based?

Best regards,
Linda

From: Benson Schliesser [mailto:[email protected]]
Sent: Friday, March 01, 2013 5:30 PM
To: Linda Dunbar; [email protected]
Subject: Re: [nvo3] WG Last Call for 
draft-ietf-nvo3-overlay-problem-statement-02

Hi, Linda.

I think the NVO3 Problem Statement is intended to describe the problems that 
motivate our work, rather than repercussions of a particular approach. Speaking 
for myself, I observe that these issues are very much solution-dependent, and I 
think they are more appropriate inputs for the requirements drafts rather than 
the PS. As chair, I'd be interested in hearing what the authors and other 
contributors think about this.

Cheers,
-Benson


On Mar 1, 2013, at 18:10, "Linda Dunbar" <[email protected]> wrote:
Thomas, et al,

This draft has done a very good job of describing the motivation for overlays 
and their potential control/data planes.

However, the draft hasn't described the issues introduced by overlays or the 
large-DC issues that overlays do not solve. For example:


-        Multicast issues introduced by overlay:

o   IP encapsulation by NVEs (or virtual switches on servers) means that any 
Layer 2 multicast from or to VMs requires Layer 3 multicast/broadcast support 
in the core.

o   The majority of virtual switches in servers won't have attached VMs 
participating in any multicast groups. Therefore, it may not be cost-effective 
for them to support a Layer 3 multicast protocol such as PIM.

o   For virtual switches that do have attached VMs participating in multicast, 
the membership state is much more dynamic due to VM moves.

o   Head-end replication will require hypervisor virtual switches to keep 
up-to-date multicast membership state, which can change frequently. Therefore, 
those NVEs, if required to support multicast, have to do more than the MPLS 
(MVPN) multicast supported by PE routers.

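
The head-end replication state problem can be sketched as follows (group and
NVE names are hypothetical, and the membership update is deliberately
simplified):

```python
# Sketch: head-end replication at an NVE. Per multicast group, the NVE must
# track the set of remote NVEs with receivers and update it on every VM move.
# All names are hypothetical.

members = {"group-A": {"NVE-2", "NVE-3"}}  # remote NVEs with receivers

def replicate(group):
    """One L2 multicast frame becomes N unicast-encapsulated copies."""
    return [("unicast-to", nve) for nve in sorted(members.get(group, ()))]

def vm_moved(group, old_nve, new_nve):
    """A receiving VM migrates; state must be refreshed promptly.
    (Simplification: assumes the moving VM was the last member at old_nve.)"""
    members[group].discard(old_nve)
    members[group].add(new_nve)

print(replicate("group-A"))   # two copies, to NVE-2 and NVE-3
vm_moved("group-A", "NVE-3", "NVE-7")
print(replicate("group-A"))   # still two copies, but to different targets
```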

-        Bottleneck at gateway nodes:

o   The draft has greatly emphasized the importance of tenant separation. That 
means that if two VMs under the same NVE belong to two different virtual 
network instances, the communication between them has to go all the way to 
their default gateway.

o   Some end stations even have a specific default gateway configured. That 
requires all inter-VNI communication to go through their designated gateway 
nodes.

o   Many data centers have their FW/LB co-located with GW nodes. That means all 
inter-VNI communications have to be hairpinned through those GW nodes.


-        Issues that are not solved by overlay:

o   The draft is for environments where VMs can move anywhere, which requires 
the gateway nodes to be aware of individual VM addresses and their 
corresponding egress NVEs.

o   Many large data centers have a small number of gateway nodes interfacing 
with the external network, e.g. 2, 4, 8, or more. Very often, those 
external-facing gateway nodes are the entrance and exit points for all virtual 
network instances.

o   Very often each of them can receive traffic for all virtual network 
instances hosted in the data center.

o   That means they all need to be aware of individual VMs and their 
corresponding egress NVEs in the data center; i.e., they may all need host 
routes. It is true that many routers can handle millions of routes, but the 
question is whether it is necessary for data center gateway nodes to be such 
high-capacity routers.
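
To put rough numbers on the host-route concern, a back-of-the-envelope sketch
(all figures are illustrative, not taken from the draft):

```python
# Sketch: gateway FIB size with per-VM host routes vs. aggregated per-VN
# prefixes. The figures below are hypothetical, for illustration only.

servers = 10_000
vms_per_server = 40
virtual_networks = 4_000

# With arbitrary VM mobility, each GW may need one host route per VM:
host_routes = servers * vms_per_server

# If VMs could be kept behind stable per-VN prefixes (no arbitrary moves),
# one aggregated route per virtual network would suffice:
aggregated_routes = virtual_networks

print("host routes per GW:      ", host_routes)        # 400000
print("aggregated routes per GW:", aggregated_routes)  # 4000
```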

Therefore, I think that the NVO3 problem statement draft should add a new 
section that describes the issues introduced by overlays and the issues not 
solved by overlays.

Linda Dunbar


_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3
