Zu Qiang,

Answers are inserted below:

From: Zu Qiang [mailto:[email protected]]
Sent: Thursday, October 23, 2014 7:32 AM
To: Linda Dunbar; [email protected]
Subject: RE: [nvo3] L3 Optimal Routing update in 
draft-merged-nvo3-ts-address-migration

Hello, Linda
                Thanks for presenting the draft. In the worst case described in 
your email below, an NVE may need to perform the inner-outer address mapping 
table lookup, and this table may contain entries for all VNs in the DC (up to 
millions of entries).

[Linda] The calculation is to emphasize that ToR/Server based NVEs don't need 
an inner-outer mapping table for all VNs. They only need the table for the 
combined TSs of the VNs in which they participate.

 I just have some clarification questions:

-          First, is this a capacity issue or a technical issue? An 
implementation could optimize the lookup by using two indexes: the TS ID and 
the destination address. If that is still not optimized enough, a dedicated GW 
function can be used. Of course, this dedicated GW can be collocated with any 
NVE. I don't think this should have any impact on the NVO3 architecture, right?

[Linda] Are you saying a distributed GW, i.e., each GW collocated with an NVE? 
Then each distributed GW has to announce its attached TSs to the WAN.
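
The two-index lookup suggested above could be sketched as follows (a minimal 
illustration only; it reads the first index as the VN/tenant identifier, and 
the table contents and helper name are hypothetical, not from the draft):

```python
# Sketch of an inner-outer mapping table indexed first by VN ID, then by
# inner destination address. Splitting the lookup this way keeps each
# per-VN table small even when the DC hosts a very large number of VNs.
mapping_table = {
    # VN ID -> {inner destination address -> outer NVE (underlay) address}
    5001: {"10.1.0.2": "192.0.2.10", "10.1.0.3": "192.0.2.11"},
    5002: {"10.2.0.2": "192.0.2.12"},
}

def lookup_outer(vn_id, inner_dst):
    """Return the outer address for a tenant destination, or None if the
    mapping is unknown and the NVA would have to be queried."""
    return mapping_table.get(vn_id, {}).get(inner_dst)
```
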



-          Second, is this a VM mobility issue or a generic overlay issue? A 
VM may be placed on any host in a DC based on policies. So even if VM mobility 
is not enabled, the same "issue" is still there. This "issue" may belong in 
the scope of how to develop the dGW function in NVO3, right?

[Linda] It is both. Without mobility, subnets should be properly lumped 
together (most DC designs are this way), so that only aggregated IP prefixes 
are used by nodes in the core.


Have a nice day
Zu Qiang

From: nvo3 [mailto:[email protected]] On Behalf Of Linda Dunbar
Sent: Tuesday, October 21, 2014 8:05 PM
To: [email protected]<mailto:[email protected]>
Subject: [nvo3] L3 Optimal Routing update in 
draft-merged-nvo3-ts-address-migration

Thanks to everyone at today's NVO3 Interim Meeting for providing comments on 
draft-merged-nvo3-ts-address-migration.

Based on the comments, we changed the text for L3 Optimal Routing to the 
following. Please let us know if the changes are OK.


In theory, host routing by every NVE (including the DCBR) can achieve optimal 
inbound forwarding in a very fragmented network. When the TSs' IP addresses of 
a VN under all the NVEs can't be aggregated at all, the NVEs need to support 
the combined number of TSs of all the VNs. Here is the math showing that host 
routing on a server-based NVE or a ToR-based NVE can be supported relatively 
easily even in the worst-case scenario:

*       Suppose an NVE has TSs belonging to X VNs, and suppose each VN has 200 
hosts (spread among many NVEs); then the worst-case scenario (i.e., the 
maximum number of routes that NVE needs) is 200*X.

*       For a server-based NVE, the number of VNs enabled on the NVE has to be 
less than the number of VMs instantiated on the server. State-of-the-art 
virtualization technology allows a maximum of 100 VMs on one server, so the 
worst-case scenario (the maximum number of routes the NVE needs) is 100*200 = 
20,000.

*       For a ToR-based NVE, the number of TSs can be the number of TSs per 
server * the number of servers attached to the ToR (a typical ToR has 48 
downstream ports to servers). So the worst-case scenario is 40*100 * 200 = 
800,000 routes (assuming 40 of the 48 ports are in use).

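
The worst-case arithmetic above can be written out as follows (a small sketch 
using the figures from the text; the variable names are illustrative):

```python
# Worst-case host-route counts from the text: 200 TSs per VN, with the
# number of VNs on an NVE bounded by the number of VMs it can reach.
TS_PER_VN = 200

server_vms = 100                      # state-of-the-art VMs per server
server_nve_routes = server_vms * TS_PER_VN   # server-based NVE worst case

servers_per_tor = 40                  # assumes 40 of the ToR's 48 ports in use
tor_vms = servers_per_tor * server_vms
tor_nve_routes = tor_vms * TS_PER_VN         # ToR-based NVE worst case

print(server_nve_routes)  # 20000
print(tor_nve_routes)     # 800000
```
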
But host routing can be challenging on NVEs with a Data Center Gateway 
attached. Those NVEs usually need to support all the VNs enabled in the data 
center. There could be hundreds of thousands of hosts/VMs, sometimes millions, 
due to business demand and highly advanced server virtualization technologies.

For those NVEs (i.e., the NVEs attached to data center gateways), the 
following approaches can be considered.

One approach is to designate one or two NVEs as the designated forwarder for a 
specific subnet (VN) when the subnet (VN) is spread across many NVEs. For 
example, suppose a high percentage of the TSs of one subnet is attached to NVE 
"X", while the remaining small percentage is spread across many NVEs. 
Designating NVE "X" as the designated forwarder for the subnet can greatly 
reduce the "triangular routing" for traffic destined to TSs in this subnet 
(VN).
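
The selection rule described above could be sketched as follows (a hedged 
illustration only; the NVE names and TS counts are hypothetical, and the draft 
does not prescribe a specific election algorithm):

```python
# Sketch: choose the designated forwarder for a subnet (VN) as the NVE
# hosting the largest share of that subnet's TSs. With most TSs local to
# the chosen NVE, most inbound traffic avoids a second (triangular) hop.
ts_count_per_nve = {"NVE-X": 180, "NVE-A": 8, "NVE-B": 7, "NVE-C": 5}

def pick_designated_forwarder(counts):
    """Return the NVE with the most locally attached TSs of the subnet."""
    return max(counts, key=counts.get)
```
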

ECMP can be used by the DCBR, or by any NVEs that don't support host routing 
or can't access the NVA, to distribute traffic equally among the NVEs that 
support the subnet (VN). If an NVE doesn't have the destination of a data 
packet directly attached, it can query the NVA for the target NVE to which the 
destination is attached, and encapsulate the packet with the target NVE as the 
outer destination before sending it out.
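
The query-and-encapsulate behavior in the last sentence can be sketched as 
follows (a minimal model, with the NVA lookup stubbed by a dict; the addresses 
and helper names are hypothetical):

```python
# Sketch of the NVE forwarding decision: deliver locally if the destination
# is directly attached, otherwise query the NVA for the target NVE and
# encapsulate the packet with that NVE as the outer destination.
local_tss = {"10.1.0.2"}
nva_directory = {"10.1.0.9": "192.0.2.20"}  # inner dst -> target NVE address

def forward(inner_dst, payload):
    if inner_dst in local_tss:
        return ("deliver-local", payload)
    target_nve = nva_directory.get(inner_dst)  # stand-in for the NVA query
    if target_nve is None:
        return ("unknown-destination", payload)
    # Encapsulate: the target NVE's underlay address becomes the outer dst.
    return ("encap", target_nve, payload)
```
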

Linda

_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3
