On May 28, 2010, at 9:31 PM, <[email protected]> 
<[email protected]> wrote:

> stripped out by the ML or a spam filter.

list strips all.

> We have already discovered that one VPLS provider may limit the amount of
> multicast/broadcast traffic delivered to sites to ~2mbps. Will that greatly
> impact us during IGP or LDP convergence?

It may, it may not. Hard to know without prefix count/scale figures, endpoint 
router control-plane specifics, any tunings/adjustments made, and knowing 
specifically how the provider edge is configured. Other than obvious MTU issues 
(check this before proceeding further, really), I suggest considering the 
following for ospf, overall igp use, and ldp on this network:

--investigate the practical and scale-related aspects of running ospf in a 
non-broadcast point-to-multipoint configuration. more clues at:

http://www.cisco.com/en/US/docs/ios/12_0/np1/configuration/guide/1cospf.html#wp5354
http://cisco.com/en/US/docs/ios/iproute_ospf/configuration/guide/iro_cfg_ps6017_TSD_Products_Configuration_Guide_Chapter.html#wp1054321
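A rough sketch of what that non-broadcast point-to-multipoint setup might look like on an IOS CE (interface names and addresses here are invented for illustration; check your platform's syntax):

```
! hypothetical CE config -- names/addresses made up
interface Vlan100
 description VPLS-facing interface
 ip address 10.0.0.1 255.255.255.0
 ip ospf network point-to-multipoint non-broadcast
!
router ospf 1
 network 10.0.0.0 0.0.0.255 area 0
 ! statically defined neighbors: unicast hellos, no DR election
 neighbor 10.0.0.2
 neighbor 10.0.0.3
```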

basically, if you 'define' ospf neighbors like you would for e/ibgp, you can 
avoid the multicast hellos and multicast LSAs that the DR would normally send. 
however, there is one concerning aspect of using dr/bdr on a vpls domain, 
especially as you scale the number of prefixes in it. consider the following:

(http://cisco.iphelp.ru/faq/5/ch08lev1sec1.html)

"Link-state acknowledgment packets are sent as multicasts. If the state of the 
router is DR or BDR, the acknowledgment is sent to the OSPF router multicast 
address of 224.0.0.5. If the state of the router is not DR or BDR, the 
acknowledgment is sent to the all DR router multicast address of 224.0.0.6." 

This could 'get close' to the 2 megabit limit, and perhaps exceed it on small 
timescales, assuming you had thousands of prefixes in OSPF (and fast flooders 
with lots of cpu). I guess you should ask your provider to configure a few 
dozen kbytes worth of 'burst' if they're using an ingress policer (likely) vs. 
some form of ingress shaping (unlikely) to handle bcast/mcast packets. 
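To put rough numbers on that "exceed it on small timescales" point, a back-of-envelope calculation (all figures below are hypothetical, just to show the shape of the math):

```python
# Back-of-envelope: can an OSPF flooding burst exceed a ~2 Mb/s
# broadcast/multicast policer?  All input figures are hypothetical.
def burst_rate_bps(lsa_count, lsa_bytes, lsas_per_pkt, overhead_bytes, burst_s):
    """Instantaneous bitrate of an LSA flood packed into LS Update packets."""
    pkts = -(-lsa_count // lsas_per_pkt)                 # ceiling division
    total_bytes = lsa_count * lsa_bytes + pkts * overhead_bytes
    return total_bytes * 8 / burst_s

# e.g. 5,000 external LSAs (~36 bytes each), ~40 packed per LS Update,
# ~100 bytes of OSPF/IP/Ethernet overhead per packet, flooded in 100 ms:
rate = burst_rate_bps(5000, 36, 40, 100, 0.1)
print(f"{rate/1e6:.1f} Mb/s")   # prints: 15.4 Mb/s
```

So even a modest flood, if it lands inside a short policer measurement window, can blow well past 2 Mb/s instantaneously, which is why a 'burst' allowance on the policer matters.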

I suppose you could also tune (downwards) the LS ACK interval, or adjust the 
LSA flooding intervals, and simply compensate for the potential loss of mcast 
packets (LSAs and LS ACKs alike) over the transport network. If the CE/CPE 
routers you're using can support shaping & matching of OSPF packets, then 
perhaps you could avoid hitting their bcast/mcast limits by shaping your egress 
traffic before it hits their network. 
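If you go the tuning/shaping route, a rough IOS sketch (timer values and names are examples only; command availability varies by platform/release, and note that shaping locally generated control-plane traffic does not work on every box):

```
router ospf 1
 ! slow flood pacing and LSA (re)generation -- example values
 timers pacing flood 50
 timers throttle lsa all 100 1000 5000
!
ip access-list extended OSPF-PKTS
 permit ospf any any
!
class-map match-all OSPF
 match access-group name OSPF-PKTS
!
policy-map SHAPE-TO-VPLS
 class OSPF
  shape average 1000000   ! stay under the provider's ~2mbps bcast/mcast cap
!
interface Vlan100
 service-policy output SHAPE-TO-VPLS
```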

If it were up to me, I'd keep as few prefixes in ospf as possible (links and 
loops, as they say), doing the rest of the work via ibgp. This implies that any 
potential flooding/update events are kept to a few tens/dozens of packets, and 
thus low bitrate in the worst case. The question is how much pain is there 
doing explicit OSPF neighbors vs. tweaks to lsa/ls acks.
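The "links and loops in the IGP, everything else in iBGP" split might look roughly like this (ASN, prefixes, and interface names all invented):

```
router ospf 1
 ! carry only the VPLS-facing link and the loopbacks
 network 10.0.0.0 0.0.0.255 area 0
 network 192.168.255.0 0.0.0.255 area 0
!
router bgp 65000
 neighbor 192.168.255.2 remote-as 65000
 neighbor 192.168.255.2 update-source Loopback0
 ! customer/internal prefixes ride iBGP, not OSPF
 network 172.16.0.0 mask 255.255.0.0
```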

--why do you need ospf, again? iBGP should be sufficient (if it contains /32s 
for all loops, contains all links, etc.) to make the FIB happy, and permit LDP 
to resolve FECs for any possible endpoint you'd be speaking to inside. If all 
the LDP router IDs are forced to loopbacks, and if all the loopbacks can 
'reach' each other, I'm not seeing a need for OSPF. 
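Pinning the LDP router ID to a /32 loopback that everything can reach would look something like this in IOS (addresses invented):

```
interface Loopback0
 ip address 192.168.255.1 255.255.255.255
!
! pin the LDP router ID to the loopback so FECs resolve via a stable /32
mpls ldp router-id Loopback0 force
```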

--ldp discovery will result in a full mesh of tldp sessions being built later. 
The LDP discovery mechanism works much like OSPF's, but then moves to a 
different set of timers (often less frequent) once sessions are up. I recommend 
reading about this at the following urls:

http://www.cisco.com/en/US/docs/ios/12_2t/12_2t2/feature/guide/ldp_221t.html#wp1354836

http://www.networkers-online.com/blog/2009/01/ldp-neighbor-discovery-session-establishment-and-maintenance/

This may be another case where at small scale it's no big deal, but as this 
network grows it may require attention, or adjustment/migration to different 
(relaxed/slower) timers, explicit neighbors (aka extended discovery), or yet 
more transmit shaping to avoid discards within the provider's network or at 
their edge interfaces to your gear.
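Relaxed discovery timers and explicit (targeted/extended) neighbors might be sketched as follows (values and peer address invented; exact syntax varies by release):

```
! slow the periodic link hellos (IOS defaults: 5s interval / 15s holdtime)
mpls ldp discovery hello interval 15
mpls ldp discovery hello holdtime 45
!
! extended discovery: unicast targeted hellos to a specific peer
mpls ldp neighbor 192.168.255.2 targeted ldp
```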

I'll end it here, as this email is reading dangerously like consulting or 
something!

Best,

-Tk

_______________________________________________
cisco-nsp mailing list  [email protected]
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/