Thanks to everyone who responded to and commented on draft-nachum-sarp-03. 
I sent an email back in early January clarifying some issues with the draft. 
We have now updated the draft to incorporate those clarifications. 

In a nutshell:


Some people are under the impression that the goal of the draft is to scale the 
flooding of ND messages across all links. MLD was brought up to point out that 
ND messages are already suppressed from flooding onto links that don't have the 
target hosts. But that is not the main intent of this draft. 


The real impact of ARP/ND in a DC with a massive number of hosts (or VMs) falls 
on the L2/L3 boundary router (or default gateway). When hosts in Subnet A need 
to send data frames to Subnet B, the router has to 1) respond to the ARP/ND 
requests from hosts in Subnet A, and 2) resolve the target MAC addresses for 
hosts in Subnet B. 

The second step is not only CPU-intensive but also buffer-intensive. There are 
some practices to alleviate the pain of Step 1) for IPv4, but not for IPv6 
(https://datatracker.ietf.org/doc/draft-dunbar-armd-arp-nd-scaling-practices/).

To protect the router CPU from being overburdened by target resolution 
requests, some routers have to rate-limit the target MAC resolution requests 
sent to the CPU (the "Glean Throttling" rate in this manual: 
http://www.cisco.com/en/US/docs/switches/datacenter/sw/5_x/nx-os/unicast/configuration/guide/l3_ip.pdf
 , search for "Glean Throttling"). 

When the Glean Throttling rate is exceeded, the incoming data frames are 
dropped. 
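
The drop behavior can be sketched as a simple rate limiter. This is a 
hypothetical model of the throttling, not the vendor's actual algorithm:

```python
# Hypothetical model of glean throttling: targets awaiting MAC
# resolution by the CPU are capped; once the cap is reached, frames
# toward new unresolved targets are dropped instead of queued.

class GleanThrottle:
    def __init__(self, max_pending):
        self.max_pending = max_pending   # assumed throttle limit
        self.pending = set()             # targets awaiting resolution

    def on_unresolved_frame(self, target_ip):
        if target_ip in self.pending:
            return "buffered"            # resolution already in flight
        if len(self.pending) >= self.max_pending:
            return "dropped"             # throttle exceeded: frame lost
        self.pending.add(target_ip)
        return "resolving"

throttle = GleanThrottle(max_pending=2)
print(throttle.on_unresolved_frame("10.0.2.1"))  # resolving
print(throttle.on_unresolved_frame("10.0.2.2"))  # resolving
print(throttle.on_unresolved_frame("10.0.2.3"))  # dropped
```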

In a traditional data center, this is less of an issue because the number of 
hosts attached to one L2/L3 boundary router is limited by the physical ports of 
the switches/routers. When servers are virtualized to support 30-plus VMs each, 
the number of hosts under one router can grow 30-plus times. 
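
For a rough sense of scale (the port count below is an assumed example; only 
the 30x factor comes from the text above):

```python
# Back-of-the-envelope host count under one L2/L3 boundary router.
physical_ports = 1000   # assumed: physical servers reachable under one router
vms_per_server = 30     # from the text: "30 plus VMs" per virtualized server

hosts_before = physical_ports                  # one host per physical server
hosts_after = physical_ports * vms_per_server  # after virtualization

print(hosts_before, "->", hosts_after)  # 1000 -> 30000
```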

The solution proposed in this draft can eliminate (or reduce the likelihood of) 
inter-subnet data frames being dropped.  

In addition, a traditional DC has each subnet neatly confined to a limited 
number of server racks, i.e., the switches under a router only need to deal 
with the MAC addresses of those few subnets. With subnets spread across many 
server racks, the switches are exposed to the VLANs/MACs of many subnets, 
greatly increasing the FDB size. 

This draft also addresses the FDB entry explosion issue.


We look forward to your comments and feedback on this update. 

Thank you very much. 

Linda 

-----Original Message-----
From: [email protected] [mailto:[email protected]] 
Sent: Sunday, February 24, 2013 9:01 AM
To: [email protected]
Cc: Linda Dunbar; [email protected]; [email protected]
Subject: New Version Notification for draft-nachum-sarp-04.txt


A new version of I-D, draft-nachum-sarp-04.txt
has been successfully submitted by Tal Mizrahi and posted to the
IETF repository.

Filename:        draft-nachum-sarp
Revision:        04
Title:           Scaling the Address Resolution Protocol for Large Data Centers 
(SARP)
Creation date:   2013-02-24
Group:           Individual Submission
Number of pages: 19
URL:             http://www.ietf.org/internet-drafts/draft-nachum-sarp-04.txt
Status:          http://datatracker.ietf.org/doc/draft-nachum-sarp
Htmlized:        http://tools.ietf.org/html/draft-nachum-sarp-04
Diff:            http://www.ietf.org/rfcdiff?url2=draft-nachum-sarp-04

Abstract:
   This document provides a recommended architecture and network
   operation named SARP. SARP is based on fast proxies that
   significantly reduce broadcast domains and ARP/ND broadcast
   transmissions. SARP supports smooth and fast virtual machine (VM)
   mobility without any modification to the VM, while keeping the
   connection up and running efficiently. SARP is targeted for massive
   scaling data centers with a significant number of VMs using ARP and
   ND protocols.



The IETF Secretariat

_______________________________________________
Int-area mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/int-area
