Re: [c-nsp] Recovery time under interface failure - VPLS - MPLS L3 VPN- Plenty L3

2008-06-30 Thread alaerte.vidali
Tks Oliver,

assuming there is no STP delay (portfast/etc.) this should be rather
quick

That is how I expected it to work for VPLS. But strangely, it is taking 19
to 20 seconds, even though portfast is enabled.
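For reference, the edge port toward the laptop looks roughly like this (a
minimal sketch; interface and VLAN numbers are placeholders):

interface FastEthernet0/1
 ! attachment circuit toward the laptop
 switchport mode access
 switchport access vlan 100
 spanning-tree portfast
! verify with: show spanning-tree interface FastEthernet0/1 portfast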

Any clue?


-Original Message-
From: ext Oliver Boehmer (oboehmer) [mailto:[EMAIL PROTECTED] 
Sent: Monday, June 30, 2008 10:23 AM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] Recovery time under interface failure - VPLS - MPLS
L3 VPN- Plenty L3

[EMAIL PROTECTED] wrote on Monday, June 30, 2008 2:55 PM:

 Hi Oliver,
 
 The question is specific to a failure of the Ethernet connection between R1
 and the laptop, comparing recovery time under this failure for VPLS, MPLS L3
 VPN and pure L3 routing.
 That is, how VPLS will influence the recovery of the MAC address on R1
 (the delay introduced by VPLS).

Ah, so you mean how long it takes when the ethernet comes back up and
the client can resume connecting to the server? Well, assuming there is
no STP delay (portfast/etc.) this should be rather quick (similar to a
switched environment, with a bit higher propagation delay if the VPLS
spans a large geography). It's the same flooding/learning method.

When it comes to routing (L3VPN or regular), I guess it depends on
whether the route to the destination is already known or not. If not,
getting routing information over to the other side can take ~5-8 seconds
for a regular (untuned) IGP, or even more for L3VPN (depends on BGP MRAI
timers and RD-setup/import-delay).
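By way of illustration, the knobs involved look roughly like this (a sketch
only; process IDs, the neighbor address and the values are placeholders, not
recommendations):

router ospf 1
 ! more aggressive SPF/LSA throttling than the seconds-scale defaults
 timers throttle spf 10 100 5000
 timers throttle lsa all 10 100 5000
!
router bgp 65000
 address-family vpnv4
  ! push VPNv4 updates out without waiting for the MRAI timer
  neighbor 10.0.0.3 advertisement-interval 0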

oli



 
 -Original Message-
 From: ext Oliver Boehmer (oboehmer) [mailto:[EMAIL PROTECTED]
 Sent: Sunday, June 29, 2008 4:48 AM
 To: Vidali Alaerte (NSN - BR/Rio de Janeiro); 
 cisco-nsp@puck.nether.net
 Subject: RE: [c-nsp] Recovery time under interface failure - VPLS - 
 MPLS
 L3 VPN- Plenty L3
 
 [EMAIL PROTECTED]  wrote on Saturday, June 28, 2008 5:21 PM:
 
  Hi,
 
Considering the following simple topology:
 
 Laptop-(e1)R1-R2R3Server
 
 ...and that OSPF timers are the same and BFD is not used (no failure 
 recovery optimization used) on all scenarios:
 
What would be the recovery time when interface Ethernet 1 (from laptop
to R1) fails in these cases:
 
 -Just IP routing between R1 and R3
 -VPLS between R1 and R3
 -MPLS VPN between R1 and R3
 
If I am not wrong, in the VPLS case R1 will remove the MAC address and
communicate that to R3, but I am not sure whether it will impact the final
connectivity recovery time between laptop and server.
 (sorry, no lab to test right now)
 
Hmm, what should the above topology recover to? There is no alternate
path between client and server here. In general, convergence times
depend on several variables.. sub-10-sec is a ballpark figure you can
use, but it can also take longer (when BGP is involved) or quicker..
 
   oli
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Recovery time under interface failure - VPLS - MPLS L3 VPN - Plenty L3

2008-06-28 Thread alaerte.vidali

 Hi,

Considering the following simple topology:

Laptop-(e1)R1-R2R3Server

...and that OSPF timers are the same and BFD is not used (no failure
recovery optimization used) on all scenarios:

What would be the recovery time when interface Ethernet 1 (from laptop to
R1) fails in these cases:

-Just IP routing between R1 and R3
-VPLS between R1 and R3
-MPLS VPN between R1 and R3

If I am not wrong, in the VPLS case R1 will remove the MAC address and
communicate that to R3, but I am not sure whether it will impact the final
connectivity recovery time between laptop and server.
(sorry, no lab to test right now)

Tks.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Source failure in PIM SSM

2008-06-04 Thread alaerte.vidali
Hi,

Any recommendation for docs handling source failure when PIM SSM is
used?

Example:

Source 1.1.1.1, group 239.1.1.1 -R1R2--PC_joined 239.1.1.1
using IGMPv2

R2 has an SSM mapping of group 239.1.1.1 to source 1.1.1.1

I have seen 2 options: Anycast and Prioritycast. I would like to hear
other opinions.
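For context, the static SSM mapping on R2 is along these lines (a minimal
sketch; the ACL numbers are placeholders):

ip pim ssm range 10
access-list 10 permit 239.1.1.1
!
ip igmp ssm-map enable
no ip igmp ssm-map query dns
! map IGMPv2 joins for 239.1.1.1 to source 1.1.1.1
ip igmp ssm-map static 11 1.1.1.1
access-list 11 permit 239.1.1.1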

Tks,
Alaerte 
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Cisco Processing Regarding ICMP

2008-05-11 Thread alaerte.vidali
Totally agree. You know those times when you receive a request that you
would just like to forget?  :)

-Original Message-
From: ext Gert Doering [mailto:[EMAIL PROTECTED] 
Sent: Sunday, May 11, 2008 6:58 PM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro)
Cc: [EMAIL PROTECTED]; cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] Cisco Processing Regarding ICMP

Hi,

On Sat, May 10, 2008 at 03:39:23PM -0500, [EMAIL PROTECTED] wrote:
 Because of internal network design requirements, it is necessary to decrease
 the internal MTU to slightly lower than 1500 bytes,

Ugh.

This is *really* unusual.  Many networks increase their MTU to well
above 1500, so that even tunneled connections still are able to carry
full-MTU packets - but running a network below 1500 sounds like a Really
Bad Plan to me.  

Expect fun with all the sites out there that have Issues with PMTUD.
Lots.

gert
--
USENET is *not* the non-clickable part of WWW!
 
//www.muc.de/~gert/
Gert Doering - Munich, Germany
[EMAIL PROTECTED]
fax: +49-89-35655025
[EMAIL PROTECTED]
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Cisco Processing Regarding ICMP

2008-05-11 Thread alaerte.vidali
Are you sure that by default no rate is configured?
It seems it defaults to two per second.

-Original Message-
From: ext Alexandre Snarskii [mailto:[EMAIL PROTECTED] 
Sent: Sunday, May 11, 2008 3:32 PM
To: Paul Cosgrove
Cc: Vidali Alaerte (NSN - BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] Cisco Processing Regarding ICMP

On Sun, May 11, 2008 at 01:14:14PM +0100, Paul Cosgrove wrote:
 Hi Alaerte,
 
 Well, the packets with DF set will be dropped, but I don't know what
 rate restrictions (if any) exist on the generation of ICMP
 notifications when this occurs.  Perhaps someone else can provide that
information.

You can rate-limit ICMP generation due to MTU failures:

Router(config)#mls rate-limit all mtu-failure ?
  10-100  packets per second

but, by default, it is not configured to any rate:

Router#show mls rate-limit
 Sharing Codes: S - static, D - dynamic
 Codes dynamic sharing: H - owner (head) of the group, g - guest of the
group 

   Rate Limiter Type   Status Packets/s   Burst  Sharing
 -   --   -   -  ---
[...]
   MTU FAILURE   Off  -   - -

so, it's possible that a high rate of MTU failures will overload your
65xx/76xx..

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Cisco Processing Regarding ICMP

2008-05-11 Thread alaerte.vidali
Hi Phil,

I have seen a description saying that the initial SYN is punted to the RP, so
there is an impact under a SYN attack, for example.
Also, the RP needs to calculate a new checksum.
I agree it seems the better solution; I am only worried about the CPU impact
on the 7609.
Also, it only helps UDP.

Tks,
Alaerte 

-Original Message-
From: ext Phil Bedard [mailto:[EMAIL PROTECTED] 
Sent: Sunday, May 11, 2008 7:41 PM
To: Gert Doering
Cc: Vidali Alaerte (NSN - BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] Cisco Processing Regarding ICMP

Yeah, a better solution to me is to use the tcp-adjust-mss value,
assuming this is TCP traffic and not something else.  I don't know the
CPU limitations of that on the 7600 but it will probably end up being
less processing power than generating an ICMP message that may never get
to its destination.

Phil

On May 11, 2008, at 11:58 AM, Gert Doering wrote:

 Hi,

 On Sat, May 10, 2008 at 03:39:23PM -0500, [EMAIL PROTECTED]
 wrote:
 Because of internal network design requirements, it is necessary to
 decrease the internal MTU to slightly lower than 1500 bytes,

 Ugh.

 This is *really* unusual.  Many networks increase their MTU to well 
 above 1500, so that even tunneled connections still are able to carry 
 full-MTU packets - but running a network below 1500 sounds like a 
 Really Bad Plan to me.

 Expect fun with all the sites out there that have Issues with PMTUD.  
 Lots.

 gert
 --
 USENET is *not* the non-clickable part of WWW!

//www.muc.de/~gert/
 Gert Doering - Munich, Germany
[EMAIL PROTECTED]
 fax: +49-89-35655025
[EMAIL PROTECTED]
 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net 
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Cisco Processing Regarding ICMP

2008-05-11 Thread alaerte.vidali
I mean, only helps TCP :) 

-Original Message-
From: Vidali Alaerte (NSN - BR/Rio de Janeiro) 
Sent: Sunday, May 11, 2008 9:02 PM
To: 'ext Phil Bedard'; Gert Doering
Cc: cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] Cisco Processing Regarding ICMP

Hi Phil,

I have seen a description saying that the initial SYN is punted to the RP, so
there is an impact under a SYN attack, for example.
Also, the RP needs to calculate a new checksum.
I agree it seems the better solution; I am only worried about the CPU impact
on the 7609.
Also, it only helps UDP.

Tks,
Alaerte 

-Original Message-
From: ext Phil Bedard [mailto:[EMAIL PROTECTED]
Sent: Sunday, May 11, 2008 7:41 PM
To: Gert Doering
Cc: Vidali Alaerte (NSN - BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] Cisco Processing Regarding ICMP

Yeah, a better solution to me is to use the tcp-adjust-mss value,
assuming this is TCP traffic and not something else.  I don't know the
CPU limitations of that on the 7600 but it will probably end up being
less processing power than generating an ICMP message that may never get
to its destination.

Phil

On May 11, 2008, at 11:58 AM, Gert Doering wrote:

 Hi,

 On Sat, May 10, 2008 at 03:39:23PM -0500, [EMAIL PROTECTED]
 wrote:
 Because of internal network design requirements, it is necessary to
 decrease the internal MTU to slightly lower than 1500 bytes,

 Ugh.

 This is *really* unusual.  Many networks increase their MTU to well 
 above 1500, so that even tunneled connections still are able to carry 
 full-MTU packets - but running a network below 1500 sounds like a 
 Really Bad Plan to me.

 Expect fun with all the sites out there that have Issues with PMTUD.  
 Lots.

 gert
 --
 USENET is *not* the non-clickable part of WWW!

//www.muc.de/~gert/
 Gert Doering - Munich, Germany
[EMAIL PROTECTED]
 fax: +49-89-35655025
[EMAIL PROTECTED]
 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net 
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Cisco Processing Regarding ICMP

2008-05-11 Thread alaerte.vidali
I am almost there concerning tolerance :)
I hope this one is just provisional, until the IP backbone devices are changed
to support the necessary jumbo frames for this customer.
Anyway, I documented all the risks involved: PMTUD black holes, Cisco CPU
increase and bla-bla-bla.

Tks,
Alaerte

-Original Message-
From: ext Gert Doering [mailto:[EMAIL PROTECTED] 
Sent: Sunday, May 11, 2008 9:06 PM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro)
Cc: [EMAIL PROTECTED]; cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] Cisco Processing Regarding ICMP

Hi,

On Sun, May 11, 2008 at 12:57:28PM -0500, [EMAIL PROTECTED] wrote:
 Totally agree. You know those times when you receive a request that you
 would just like to forget?  :)

Well... sometimes I can refuse to do things, and sometimes workarounds
can be found.

And given the number of "it must do everything, sing and dance, and at
the same time must not cost anything" things I've had to build *and
later on support* in the past, my tolerance for crappy designs is not
overly good these days...

gert

--
USENET is *not* the non-clickable part of WWW!
 
//www.muc.de/~gert/
Gert Doering - Munich, Germany
[EMAIL PROTECTED]
fax: +49-89-35655025
[EMAIL PROTECTED]
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Cisco Processing Regarding ICMP

2008-05-10 Thread alaerte.vidali

 Hi,

Is there any document about how a packet received on interface A toward
interface B is processed, where interface B has a lower MTU than the
received packet and the DF bit is set?

(i.e. a description of the process)

(considering the CPU impact, and whether the default limitation on ICMP
generation is enough when the number of packets is very high)

Thanks,
Alaerte
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Cisco Processing Regarding ICMP

2008-05-10 Thread alaerte.vidali
Thanks Paul,

I would like to find information about the processing on the 7609 in this
situation, for traffic coming from the Internet, normally users downloading
files or watching videos.
Because of internal network design requirements, it is necessary to decrease
the internal MTU to slightly lower than 1500 bytes, so I would like to know
how the 7609 will handle a high number (in the worst case, or under attack)
of large packets with the DF bit set.

Br,
Alaerte 

-Original Message-
From: ext Paul Cosgrove [mailto:[EMAIL PROTECTED] 
Sent: Saturday, May 10, 2008 9:53 PM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro)
Cc: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] Cisco Processing Regarding ICMP

Hi Alaerte,

This will be dependent on the hardware, traffic types, throughput and 
software version/configuration.   You may need to explain a little more 
in order to get an adequate answer to your question. 

Large numbers of packets from a handful of hosts running PMTUD may
require a smaller number of ICMP notifications than would be necessary
for a larger number of hosts sending less traffic.  The difference in
the MTUs, and the sizes of the incoming packets will also affect the
proportion of traffic which triggers notifications.  Similarly protocols
running on the router itself may require their packets to be fragmented.

Paul.

[EMAIL PROTECTED] wrote:
  Hi,

 Is there any document about how a packet received on interface A toward
 interface B is processed, where interface B has a lower MTU than the
 received packet and the DF bit is set?

 (i.e. a description of the process)

 (considering the CPU impact, and whether the default limitation on ICMP
 generation is enough when the number of packets is very high)

 Thanks,
 Alaerte
 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net 
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/

   

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] ICMP Packet too big attack

2008-05-09 Thread alaerte.vidali
 
Hi,

Have you heard about attacks trying to exploit the generation of "packet too
big" ICMP messages?

Tks,
Alaerte
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Jumbo Value for receiving frame

2008-04-28 Thread alaerte.vidali

 Hi,

When sending a packet of 1548 bytes, configuring jumbo support for 1548
bytes is OK to avoid fragmentation (the router will add the Ethernet framing
and that is fine).
But on the device receiving this frame, should jumbo be configured for 1548
plus the Ethernet framing?

(sorry for the question, no lab at hand to test it right now, so I am trying
to find documentation about this for a small platform like the 3550)
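For what it's worth, on these small switches the frame size is set globally
and only takes effect after a reload; a sketch (1546 is the maximum I recall
for the 3550, so treat the exact value as an assumption and check your
platform):

system mtu 1546
! takes effect only after the next reload
! verify with: show system mtu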

Tks,
Alaerte
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] MPLS MTU and Jumbo frames

2008-04-27 Thread alaerte.vidali
Hi Zaid,
 
There is a mix - 7609/12410 on the P/PE side, and on access (before MPLS),
3550 and 2950.
 
I saw that the 2950 has a limitation on the maximum frame size:
 
2950G(config)#system mtu ? 
  1500-1530  MTU size in bytes 

 
tks,
Alaerte
 



From: ext Ibrahim Abo Zaid [mailto:[EMAIL PROTECTED] 
Sent: Sunday, April 27, 2008 3:51 AM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro)
Cc: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] MPLS MTU and Jumbo frames


Hi Alaerte
 
the answer depends on your hardware platform and the IOS in use, so send us
the "show version" from your Cisco gear.
 
 
best regards
--Abo Zaid

 
On 4/26/08, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: 

Hi,

Is there any restriction regarding enabling MPLS MTU when using Ethernet
frames carrying up to 1548 bytes (data, without considering the MPLS labels
and Ethernet headers)?
(besides keeping the MPLS MTU less than or equal to the interface MTU)
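By way of illustration (a sketch; the interface name is a placeholder and the
sizes assume a two-label stack on a 1548-byte payload):

interface GigabitEthernet1/1
 mtu 1600
 ! 1548-byte payload + 2 x 4-byte labels = 1556; keep mpls mtu <= interface mtu
 mpls mtu 1556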

Tks,
Alaerte

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/



___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Adjust TCP MSS in Cisco

2008-04-23 Thread alaerte.vidali

 Hi,

How efficient (regarding CPU resource consumption in the router) is adjusting
the MSS on Cisco to avoid fragmentation?
(at a high traffic rate)

ip tcp adjust-mss 1400
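For reference, the command is applied per interface on the customer-facing
side; a minimal sketch (interface name and address are placeholders):

interface GigabitEthernet0/1
 ip address 192.0.2.1 255.255.255.0
 ! rewrite the MSS option in TCP SYNs passing through this interface
 ip tcp adjust-mss 1400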

Thanks,
Alaerte
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] MPLS L3 VPN over TE - Load Balancing per Customer

2008-04-16 Thread alaerte.vidali
Tks Oli,

I believe it is a trend due to FastReroute recovery for VPN customers.
Maybe it will change soon with IP FastReroute. Maybe not :)

I will test it again with your suggestion.

Tks again,
Alaerte

-Original Message-
From: ext Oliver Boehmer (oboehmer) [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 16, 2008 5:43 AM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] MPLS L3 VPN over TE - Load Balancing per Customer

[EMAIL PROTECTED]  wrote on Tuesday, April 15, 2008 10:18 PM:

  Hi,
 
 Considering the topology where MPLS VPN over TE is used:
 (2 links between PE1--PE2)
 
 CustA--PE1PE2CustA
 |  |
 CustB___|  |_CustB
 
 
 What are the possibilities for load-balancing traffic such that CustA
 traffic goes through link 1 and CustB traffic goes through link 2?
 (considering the BGP next hop is the same for CustA and CustB)

Why are you using TE in this setup?

The TE tunnel will only use one of the two links. If you really want to
achieve what you describe, you need two tunnels and use bgp next-hop
manipulation to steer the traffic over the respective tunnel. But you
could also use two tunnels and use CEF load-sharing for a single bgp
next-hop. TE also allows to do unequal-cost loadsharing, but you are
probably aware of this already.

oli

P.S: This is the third time L3VPN over TE has come up in the past few
weeks. Is this a trend? ;-)
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Feature Navigator for XR

2008-04-16 Thread alaerte.vidali

 Hi,

Do you know if there is a feature navigator for XR?

I am particularly trying to confirm that BFD-triggered Fast Reroute (FRR) is
there in 3.3.0.

Tks,
Alaerte

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Feature Navigator for XR

2008-04-16 Thread alaerte.vidali
Hi Oli,

Do you know where the configuration of BFD for TE on XR is documented?

IOS has page for it:
http://www.cisco.com/en/US/docs/ios/mpls/configuration/guide/mp_te_bfd_f
rr.html#wp1064977

But the XR page I found only mentions support, not the commands.

Tks,
Alaerte
 

-Original Message-
From: ext Oliver Boehmer (oboehmer) [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 16, 2008 12:06 PM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] Feature Navigator for XR

[EMAIL PROTECTED]  wrote on Wednesday, April 16, 2008 4:57 PM:

  Hi,
 
 Do you know if there is a feature navigator for XR?
 
 Particularly trying to confirm that BFD-triggered Fast Reroute (FRR) 
 is there on 3.3.0

yes, see
http://www.cisco.com/en/US/docs/ios_xr_sw/iosxr_r3.3/interfaces/configur
ation/guide/hc33bfd.html (support for GSR in 3.3.1)..

oli
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] MPLS L3 VPN over TE - Load Balancing per Customer

2008-04-15 Thread alaerte.vidali

 Hi,

Considering the topology where MPLS VPN over TE is used:
(2 links between PE1--PE2)

CustA--PE1PE2CustA
|  |
CustB___|  |_CustB


What are the possibilities for load-balancing traffic such that CustA
traffic goes through link 1 and CustB traffic goes through link 2?
(considering the BGP next hop is the same for CustA and CustB)

R1#sh ip route vrf CustA

B52.52.52.0 [200/0] via 3.3.3.3, 00:15:40

R1#sh ip route vrf CustB

B51.51.51.0 [200/0] via 3.3.3.3, 00:15:40

R1#sh ip route 3.3.3.3
  * 3.3.3.3, from 3.3.3.3, 02:01:42 ago, via Tunnel1000
  Route metric is 3, traffic share count is 1


Tks,
Alaerte
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Multicast Subsecond Convergence

2008-03-26 Thread alaerte.vidali

 Hi,

I am investigating the scalability of this feature (and potential issues).
Any real field examples?

http://www.cisco.com/en/US/docs/ios/12_2s/feature/guide/fs_subcv.html

Tks,
Alaerte
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] BFD for HSRP

2008-03-20 Thread alaerte.vidali
Thanks Oli,

Could you send me any reference/description of the solution on IOS-XR?

Tks,
Alaerte 

-Original Message-
From: ext Oliver Boehmer (oboehmer) [mailto:[EMAIL PROTECTED] 
Sent: Thursday, March 20, 2008 1:09 PM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] BFD for HSRP

[EMAIL PROTECTED]  wrote on Thursday, March 20, 2008 4:02 PM:

  Hi,
 
 Do you know if BFD for HSRP can be configured on an EtherChannel?
 If not, what will occur if HSRP BFD support is configured at the VLAN
 interface level, BFD is configured on 2 interfaces participating in the
 EtherChannel, and there is a failure on 1 interface of the bundle?

This is exactly the problem one faces when running BFD (using IP encaps)
over a channel. We have a solution in IOS-XR, but I am not aware of
support for this in IOS.
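For a plain routed interface (not a channel), the IOS combination looks
roughly like this (a sketch; addresses, group number and timers are
placeholders):

interface GigabitEthernet1/1
 ip address 10.0.0.2 255.255.255.0
 ! BFD session parameters on the interface
 bfd interval 50 min_rx 50 multiplier 3
 standby 1 ip 10.0.0.1
 standby 1 priority 110
 standby 1 preempt
 ! let HSRP register with BFD for fast peer-failure detection
 standby bfd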

oli
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] HSRP Packet Forwarding

2008-02-25 Thread alaerte.vidali
Hi Oliver,

Why are you asking?

It is related to an issue where switch-1 is involved in a layer 2 loop and
sends the HSRP packets back to 7609-2.

Thanks,
Alaerte 

-Original Message-
From: ext Oliver Boehmer (oboehmer) [mailto:[EMAIL PROTECTED] 
Sent: Monday, February 25, 2008 3:24 AM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] HSRP Packet Forwarding

[EMAIL PROTECTED]  wrote on Wednesday, February 20, 2008 1:44 PM:

  Hi,
 
 On the following topology:
 
 7609-1-7609-2---switch-1
 
 7609-1 and 7609-2 are configured for HSRP. All connections are trunks,
 transporting all VLANs. 7609-1 is the default gateway of HSRP group 1.
 When 7609-1 sends the HSRP multicast to 7609-2, does 7609-2 forward this
 multicast to switch-1?
 (that is, besides taking the HSRP multicast and processing it, does 7609-2
 forward it to all interfaces on the same VLAN?)

yes, it would forward it, as there could be other HSRP speakers on the
same LAN. Why are you asking? If you are concerned about nodes connected
to the switch messing around with your HSRP, use authentication on your HSRP
messages.
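A sketch of what that looks like (VLAN, group, addresses and key are
placeholders):

interface Vlan10
 ip address 10.0.0.2 255.255.255.0
 standby 1 ip 10.0.0.1
 ! drop HSRP packets that do not carry the expected key
 standby 1 authentication md5 key-string MyHsrpKey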

oli
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Accelerate Failure Detection of EoMPLS

2008-01-28 Thread alaerte.vidali
 Do you know if there are ways to accelerate detection of a failure between
PEs and shut down the extended VLAN (through EoMPLS or VPLS)?

PC1--PE1-PE2--PC2
 |___|

When simulating a failure on the link PE1---PE2, it is taking too long for
traffic to switch over.
(I already tested EoMPLS over TE with FRR, but there seems to be some issue
with the extended VLAN)

By the way, there is layer 3 configured on the VLAN with the xconnect command.
I am wondering if this is making IOS go crazy.

Tks,
Alaerte

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] VPLS and AToM Failure Recovery Time

2008-01-26 Thread alaerte.vidali
Facing the following issue:

A VPLS (also tested with EoMPLS) pseudowire indicates up state but does
not send/receive frames during link failure simulation for up to 30
seconds.

Several setups were tested: plain VPLS with the IGP, EoMPLS over Traffic
Engineering, and EoMPLS over TE protected by FRR.

Recovery of VPLS and AToM by itself is very fast, in all conditions. But
effectively, there are no frames going through the pseudowire.

I am wondering if it is a SUP/module hardware issue - or have you faced this
on other platforms?
Software tested is 12.2.33.SRB2.

Here is some details of config and monitoring during failure simulation:

PC1-7604(sup720)7640(sup32)-PC2
  |__|


First, the pseudowire takes interface gi 4/0/1. When a failure on this
link is forced, the VC immediately takes interface gi 4/0/0. The VC status
is UP, but there are no frames crossing the pseudowire from PC1 to PC2.
The amount of time it takes for traffic to go through the pseudowire again is
very long, up to 30 seconds, which reminds me of a Spanning Tree issue.

sh mpls l2transport vc 100 det
Local interface: VFI vlan100 VFI up
  MPLS VC type is VFI, interworking type is Ethernet
  Destination address: 200.222.117.41, VC ID: 100, VC status: up
Output interface: Gi4/0/1, imposed label stack {16}
Preferred path: not configured
Default path: active
Next hop: 200.164.97.33
  Create time: 16:47:30, last status change time: 00:58:41
  Signaling protocol: LDP, peer 200.222.117.41:0 up
Targeted Hello: 200.222.117.42(LDP Id) - 200.222.117.41
MPLS VC labels: local 16, remote 16
Group ID: local 0, remote 0
MTU: local 1500, remote 1500
Remote interface description:
  Sequencing: receive disabled, send disabled
  VC statistics:
packet totals: receive 8869, send 422530
byte totals:   receive 839752, send 29011888
packet drops:  receive 0, send 0

int gigabitEthernet 4/0/1
flamengo(config-if)#shut

sh mpls l2 vc 100 det
Local interface: VFI vlan100 VFI up
  MPLS VC type is VFI, interworking type is Ethernet
  Destination address: 200.222.117.41, VC ID: 100, VC status: up
Output interface: Gi4/0/0, imposed label stack {16}
Preferred path: not configured
Default path: active
Next hop: 200.164.178.233
  Create time: 16:50:09, last status change time: 01:01:20
  Signaling protocol: LDP, peer 200.222.117.41:0 up
Targeted Hello: 200.222.117.42(LDP Id) - 200.222.117.41
MPLS VC labels: local 16, remote 16
Group ID: local 0, remote 0
MTU: local 1500, remote 1500
Remote interface description:
  Sequencing: receive disabled, send disabled
  VC statistics:
packet totals: receive 8902, send 423880
byte totals:   receive 842842, send 29104224
packet drops:  receive 0, send 0



Following is the basic config when it was tested for VPLS:

l2 vfi vlan100 manual
 vpn id 100
 neighbor 200.222.117.41 encapsulation mpls
!
interface Vlan100
ip address 100.100.100.1 255.255.255.0
 xconnect vfi vlan100


And here is the basic config when it was tested for AToM with MPLS TE
and FRR. The result was the same, up to 30 seconds of no traffic between
PC1 and PC2, even though Tunnel1 came up in 600ms due to Gi4/0/1 being
protected by Tunnel2.

interface Vlan600
 ip address 160.4.4.2 255.255.255.0
 xconnect 200.222.117.42 600 encapsulation mpls pw-class usetunnel1

interface Vlan601
 ip address 161.4.4.2 255.255.255.0
 xconnect 200.222.117.42 601 encapsulation mpls pw-class usetunnel2

pseudowire-class usetunnel1
 encapsulation mpls
 preferred-path interface Tunnel1 disable-fallback

pseudowire-class usetunnel2
 encapsulation mpls
 preferred-path interface Tunnel2 disable-fallback


sh ip route 20.20.20.0
Routing entry for 20.20.20.0/24
  Known via ospf 2, distance 110, metric 2, type intra area
  Last update from 160.4.4.2 on Vlan600, 00:07:24 ago
  Routing Descriptor Blocks:
  * 161.4.4.2, from 200.164.178.233, 00:07:24 ago, via Vlan601
  Route metric is 2, traffic share count is 1
160.4.4.2, from 200.164.178.233, 00:07:24 ago, via Vlan600
  Route metric is 2, traffic share count is 1

From the OSPF point of view, there is no issue. It keeps pointing traffic at
extended VLAN 601, as the VLAN 601 VC status is UP. But effectively, traffic
seems to go into a black hole.
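If it helps, a few things worth checking on both PEs during the 30-second
window (a sketch; VLAN/VC numbers as in the configs above):

show mpls l2transport vc 100 detail
show mac-address-table dynamic vlan 100
! as a test only: force relearning of the remote MACs over the new path
clear mac-address-table dynamic vlan 100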

Tks,
Alaerte



___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] PIM Split Rules and Multicast over L3 MPLS VPN

2008-01-23 Thread alaerte.vidali
Thanks Oli.

I will test today on PFC3xx with SRB2 and post the result.

Br,
Alaerte 

-Original Message-
From: ext Oliver Boehmer (oboehmer) [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 22, 2008 8:01 PM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] PIM Split Rules and Multicast over L3 MPLS VPN

[EMAIL PROTECTED]  wrote on Tuesday, January 22, 2008 6:09 PM:

 Hi,
 
 PIM considers the multicast source to perform load splitting when the
 command "ip multicast multipath" is entered. When using multicast over an
 MPLS L3 VPN, the source IP is the IP of PEx for any customer group
 connected to PEx.
 Is there any way to overcome this limitation and achieve load splitting of
 multicast over an MPLS L3 VPN?
 
 For example, consider this scenario:
 
  Sender for group G1 and
 G2---CE1-PE1--P1-PE2CE2receiver of G1 and G2
|   |
|___P2__|
 
 The goal is having one G1 taking path PE1--P1--PE2 and G2 taking path 
 PE1--P2--PE2.
 (but without using GRE encapsulation to have multicast encapsulated 
 into unicast)

12.2SRB for the 7600 introduced ip multicast multipath s-g-hash basic
which allows you to do the hash on source+group.. Platform support for
this is still limited, not sure about your environment.
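For reference, the knob Oli mentions is a global command, along these lines
(a sketch; check your release/PFC for support):

! hash on (S,G) instead of source only when choosing among equal-cost paths
ip multicast multipath s-g-hash basic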

oli
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] VPLS Error Message: Output interface: if-?(0), imposed label stack {}

2008-01-23 Thread alaerte.vidali
In a very simple lab setup, VPLS is not working. I am wondering if it is a
platform/hardware issue (for example a WS-X6548-GE-TX issue). Any ideas?

Topology:

CE1a---PE1-PE2---CE2a

Here is result of related command:

sh mpls l2transport vc 60 det
Local interface: VFI vlan60 VFI up
  MPLS VC type is VFI, interworking type is Ethernet
  Destination address: 200.222.117.41, VC ID: 60, VC status: down
Output interface: if-?(0), imposed label stack {}
Preferred path: not configured  
Default path: no route
No adjacency
  Create time: 00:19:18, last status change time: 00:06:28
  Signaling protocol: LDP, peer 200.222.117.41:0 up
Targeted Hello: 200.222.117.42(LDP Id) - 200.222.117.41
MPLS VC labels: local 21, remote 16 
Group ID: local 0, remote 0
MTU: local 1500, remote 1500
Remote interface description: 
  Sequencing: receive disabled, send disabled
  VC statistics:
packet totals: receive 0, send 0
byte totals:   receive 0, send 0
packet drops:  receive 0, send 0


Configuration:


l2 vfi vlan60 manual
 vpn id 60
 neighbor 200.222.117.41 encapsulation mpls
!
interface Vlan60
 xconnect vfi vlan60
!
mpls label protocol ldp
mpls ldp discovery targeted-hello accept
mpls ldp router-id Loopback0 force
!
interface Loopback0
 ip address 10.10.10.101 255.255.255.255
!
ip cef
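
Given the "no route / No adjacency" lines in the VC output above, a few checks
worth running on this PE (a sketch; the peer address is the one from the VFI
config):

show ip route 200.222.117.41
show mpls ldp neighbor
show mpls forwarding-table 200.222.117.41
! the VC only comes up once a labelled path to the peer loopback /32 exists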

sh ver
Cisco IOS Software, c7600s72033_rp Software
(c7600s72033_rp-ADVIPSERVICESK9-M), 
Version 12.2(33)SRB2, RELEASE SOFTWARE (fc1)


show module

Mod Ports Card Type                              Model           Serial No.
--- ----- -------------------------------------- --------------- -----------
  1     2 Supervisor Engine 720 (Active)         WS-SUP720-3B    SAD092604Y5
  2     8 8 port 1000mb GBIC Enhanced QoS        WS-X6408A-GBIC  SAL10489531
  3    48 SFM-capable 48 port 10/100/1000mb RJ45 WS-X6548-GE-TX  SAL10425G69

Mod  Sub-Module              Model         Serial       Hw   Status
---- ----------------------- ------------- ------------ ---- ------
  1  Policy Feature Card 3   WS-F6K-PFC3B  SAD09240BDE  2.1  Ok
  1  MSFC3 Daughterboard     WS-SUP720     SAD0925023U  2.3  Ok


Tks,
Alaerte








___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] PIM Split Rules and Multicast over L3 MPLS VPN

2008-01-22 Thread alaerte.vidali
Hi,

PIM considers the multicast source to perform load splitting when the
command "ip multicast multipath" is entered. When using multicast over an
MPLS L3 VPN, the source IP is the IP of PEx for any customer group
connected to PEx.
Is there any way to overcome this limitation and achieve load splitting of
multicast over an MPLS L3 VPN?

For example, consider this scenario:

 Sender for group G1 and
G2---CE1-PE1--P1-PE2CE2receiver of G1 and G2
   |   |
   |___P2__| 

The goal is having one G1 taking path PE1--P1--PE2 and G2 taking path
PE1--P2--PE2.
(but without using GRE encapsulation to have multicast encapsulated into
unicast)

Thanks,
Alaerte
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] EoMPLS and VPLS Load Balancing

2008-01-19 Thread alaerte.vidali
Many tks Oli,

In the Cisco pages there is a note saying that the PFCxx does not support
load balancing at the tunnel ingress, so only one IGP path is used. This is
the page:

http://www.cisco.com/en/US/docs/routers/7600/ios/12.2SXF/configuration/g
uide/pfc3mpls.html

So I am wondering if, in the end, it is impossible to have PW load
balancing on the PE.
(or if this does not apply to other trains, like SR)

Br,
Alaerte
 

-Original Message-
From: ext Oliver Boehmer (oboehmer) [mailto:[EMAIL PROTECTED] 
Sent: Saturday, January 19, 2008 11:56 AM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro); Vidali Alaerte (NSN -
BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] EoMPLS and VPLS Load Balancing

Alaerte Vidali [EMAIL PROTECTED] wrote on Saturday,
January 19, 2008 11:58 AM:

 Tks Oli,
 
 Is it the same if instead of AtoM it is used VPLS?
 That is, the same CEF hash mechanism is used to choose the path?
 (without TE)

Yes, as far as I know. To the forwarding plane, it's just another PW..
You can also use tunnel selection along with VPLS (as shown in
http://www.cisco.com/en/US/docs/ios/12_0s/feature/guide/vpls_qos.html,
section Pseudowire Tunnel Selection)

oli

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Multicast over VPLS

2007-11-30 Thread alaerte.vidali
"physical ethernet link circuit to bridge traffic between the R2-R3 vc-lsp and
the R3-R4 vc-lsp."

I think I did not get it. If I understood your suggestion, in the topology R3
needs to bridge traffic received on the R2--R3 vc-lsp onto the R3--R4 vc-lsp.
Is that correct? If yes, are you thinking about a bridge-group between the 2 vc-lsps?


 user user
  ||
Multicast_Server--R1-R2(gi1)-(gi1)R3(gi2)-(gi1)R4(gi2)(gi1)R5(gi2)
   |
|
   
||


Tks,
Alaerte

 

-Original Message-
From: ext Zitouni Rachid [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 30, 2007 11:06 AM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] Multicast over VPLS

The easiest is to consider a physical ethernet link circuit to bridge traffic 
between the R2-R3 vc-lsp and the R3-R4 vc-lsp.

HiH,
Rachid 

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Friday, November 30, 2007 13:29
To: Zitouni Rachid; cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] Multicast over VPLS

Hi and thanks,

 Other way is to make circuit loop on R3 and establish vc-lsps between R2 and 
 R3 then R3 and R4

If I establish a VC-LSP from R2---R3, and a VC-LSP from R3---R4, how would R3
switch what it received from R2 to R4?
Through layer 3?

Br,
Alaerte

-Original Message-
From: ext Zitouni Rachid [mailto:[EMAIL PROTECTED]
Sent: Friday, November 30, 2007 7:39 AM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] Multicast over VPLS


There is no solution within a single VPLS domain, which is by definition a
broadcast domain.
If you have a vc-lsp between R2 and R3 and a vc-lsp between R2 and R4, you can
optimize using IGMP snooping on the vc-lsp, avoiding unnecessary multicast
replication.
Another way is to make a circuit loop on R3 and establish vc-lsps between R2
and R3, then R3 and R4.

HiH
Rachid 

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED]
Sent: Thursday, November 29, 2007 19:30
To: cisco-nsp@puck.nether.net
Subject: [c-nsp] Multicast over VPLS

Hi,

Any information about what draft Cisco is considering/will adopt to solve 
bandwidth waste issue with Multicast over VPLS?

Besides standard, do you see any solution currently available to avoid PE to 
send several flows of the same multicast over a single link on ring topology?


Topology:

VPLS between R2/R3/R4/R5
   user user
 |   |
Multicast_Server--R1-R2-R3-R4R5

By default, R2 will send 3 times the same flow on link R2---R3.

Tks a lot,
Alaerte
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net 
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Multicast over VPLS

2007-11-30 Thread alaerte.vidali
Hi,

I have seen people pointing out some disadvantages of H-VPLS. Could you
share your view of it?

Tks,
Alaerte 

-Original Message-
From: ext Jeff Tantsura [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 30, 2007 7:52 AM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] Multicast over VPLS

Hi,

Daisy-chained H-VPLS.

Regards,
Jeff

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:cisco-nsp- 
 [EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED]
 Sent: donderdag 29 november 2007 19:30
 To: cisco-nsp@puck.nether.net
 Subject: [c-nsp] Multicast over VPLS
 
 Hi,
 
 Any information about what draft Cisco is considering/will adopt to 
 solve bandwidth waste issue with Multicast over VPLS?
 
 Besides standard, do you see any solution currently available to avoid

 PE to send several flows of the same multicast over a single link on 
 ring topology?
 
 
 Topology:
 
 VPLS between R2/R3/R4/R5
user user
  |   |
 Multicast_Server--R1-R2-R3-R4R5
 
 By default, R2 will send 3 times the same flow on link R2---R3.
 
 Tks a lot,
 Alaerte
 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net 
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Multicast over VPLS

2007-11-30 Thread alaerte.vidali
Hi and thanks,

 Other way is to make circuit loop on R3 and establish vc-lsps between R2 and 
 R3 then R3 and R4

If I establish a VC-LSP from R2---R3, and a VC-LSP from R3---R4, how would R3
switch what it received from R2 to R4?
Through layer 3?

Br,
Alaerte

-Original Message-
From: ext Zitouni Rachid [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 30, 2007 7:39 AM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] Multicast over VPLS


There is no solution within a single VPLS domain, which is by definition a
broadcast domain.
If you have a vc-lsp between R2 and R3 and a vc-lsp between R2 and R4, you can
optimize using IGMP snooping on the vc-lsp, avoiding unnecessary multicast
replication.
Another way is to make a circuit loop on R3 and establish vc-lsps between R2
and R3, then R3 and R4.

HiH
Rachid 

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED]
Sent: Thursday, November 29, 2007 19:30
To: cisco-nsp@puck.nether.net
Subject: [c-nsp] Multicast over VPLS

Hi,

Any information about what draft Cisco is considering/will adopt to solve 
bandwidth waste issue with Multicast over VPLS?

Besides standard, do you see any solution currently available to avoid PE to 
send several flows of the same multicast over a single link on ring topology?


Topology:

VPLS between R2/R3/R4/R5
   user user
 |   |
Multicast_Server--R1-R2-R3-R4R5

By default, R2 will send 3 times the same flow on link R2---R3.

Tks a lot,
Alaerte
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net 
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Multicast over VPLS

2007-11-29 Thread alaerte.vidali
Hi,

Is there any information about which draft Cisco is considering/will adopt to
solve the bandwidth waste issue with multicast over VPLS?

Besides the standard, do you see any solution currently available to avoid the
PE sending several flows of the same multicast over a single link in a
ring topology?


Topology:

VPLS between R2/R3/R4/R5
   user user
 |   |
Multicast_Server--R1-R2-R3-R4R5

By default, R2 will send the same flow 3 times on the link R2---R3.

Tks a lot,
Alaerte
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] cisco-nsp Digest, Vol 58, Issue 4

2007-09-03 Thread alaerte.vidali
Hi,

Have you had problems with the SUP720 recently (hardware failures)?

In the last semester there were 3 problems. I am wondering if any particular
series is having problems.

Tks,
Alaerte
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] TE over Etherchannel

2007-08-20 Thread alaerte.vidali
 Have you heard such a statement before?

"TE FRR is not supported over EtherChannel"

Under the SX releases, the only feature documented as not supported over
EtherChannel is DS-TE.
I have used a backup tunnel taking an EtherChannel and it worked for years.

Now this statement would mean that a layer 3 EtherChannel cannot be used
between two P routers.

Tks,
Alaerte
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] cisco-nsp Digest, Vol 57, Issue 59

2007-08-17 Thread alaerte.vidali
 Hi

Do you know if there is any restriction for standard Traffic Engineering
on a layer 3 EtherChannel on the 7609?
I searched Cisco's documentation and only found a restriction for DS-TE.

I have used the command "mpls traffic-eng tunnels" under a layer 3
port-channel without problems.

The way I see it, on the path from head-end to tail-end some links could be
POS, others GigabitEthernet, others ATM... The only requirement on the path
is enabling traffic engineering on the interfaces. And standard TE is
supported on a layer 3 EtherChannel on the 7609.
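For reference, this is what I mean by enabling it on the bundle (a minimal
sketch; address and bandwidth are placeholders):

interface Port-channel1
 ip address 10.1.1.1 255.255.255.252
 ! advertise the bundle as a TE link and reserve RSVP bandwidth on it
 mpls traffic-eng tunnels
 ip rsvp bandwidth 1000000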

There is a discussion about TE not being supported on GSR bundles. In the GSR
case, it seems it is not supported at all. I am not sure whether this
restriction still applies today.
http://puck.nether.net/pipermail/cisco-nsp/2005-February/016887.html
http://puck.nether.net/pipermail/cisco-nsp/2005-February/016887.html


Tks
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] TDM over IP

2007-08-15 Thread alaerte.vidali
 Hi,

Have you used it?

I followed the draft and the Cisco implementation. Now I am looking for field
problems related to clocking.

Tks.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] VPLS over Tunnels

2007-08-08 Thread alaerte.vidali
Hi Oli,

I am looking for exactly that: whether it is possible to send a specific PW
over a specific path. That is, docs discussing the relation between PWs and TE
in VPLS environments.

For example, suppose a customer has 4 sites (CE-a1, CE-a2, CE-a3, CE-a4)
using a VPLS backbone with 4 PEs (PE1, PE2, PE3 and PE4). I am looking at
whether it is possible to influence the PW paths between CE-a1 and CE-a2,
CE-a1 and CE-a3, and so on.

As I understand it, TE could provide transparent services to VPLS, as it
does for MPLS L3 VPN. That is, if the following configuration is used on
PE1:

l2 vfi PE1_to_other_PEs manual
 vpn id 200
 neighbor 2.2.2.2 encapsulation mpls
 neighbor 3.3.3.3 encapsulation mpls
!
interface Loopback0
 ip address 1.1.1.1 255.255.255.255

The path used to send the PW to neighbor 2.2.2.2 could then be defined by the
TE setup.
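
For example, something along these lines should pin the PW toward 2.2.2.2 to a
given TE tunnel (a sketch; the pw-class name and tunnel number are
placeholders, and the exact VFI neighbor syntax varies by release):

pseudowire-class via-tunnel1
 encapsulation mpls
 preferred-path interface Tunnel1 disable-fallback
!
l2 vfi PE1_to_other_PEs manual
 vpn id 200
 ! tie this pseudowire to the TE tunnel via the pw-class
 neighbor 2.2.2.2 pw-class via-tunnel1
 neighbor 3.3.3.3 encapsulation mpls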

I am after potential drawbacks of using VPLS and TE at the same time
(bugs, platform restrictions, limitations...)

Tks,
Alaerte
  

-Original Message-
From: ext Oliver Boehmer (oboehmer) [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 08, 2007 7:37 AM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro); [EMAIL PROTECTED];
cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] VPLS over Tunnels

Well, I'm not sure what you're after. There could be the concern to send
specific PW over specific paths (i.e. using Tunnel selection within a
VFI), this is available in some recent releases (think 12.2SRA,
configured just like AToM tunnel selection). Other than this, I can't
really think of anything.. do you have something in mind?

oli

[EMAIL PROTECTED]  wrote on Tuesday, August 07, 2007 10:19 PM:

 Hi,
 
 Thanks for the feedback.
 
 I am aware that VPLS will take advantage of any feature implemented in

 MPLS Backbone, like TE and Fast Reroute.
 
 I would like to see doc discussing advanced topics like interaction 
 with TE, impact of Fast Reroute in end nodes when using VPLS, 
 bandwidth available between end nodes...
 
 
 Br,
 Alaerte
 
 -Original Message-
 From: ext Masood Ahmad Shah [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, August 07, 2007 4:54 PM
 To: Vidali Alaerte (NSN - BR/Rio de Janeiro); 
 cisco-nsp@puck.nether.net Subject: RE: [c-nsp] VPLS over Tunnels
 
 VPLS uses edge routers that can learn, bridge and replicate on a VPN 
 basis.
 These routers are connected by a full mesh of tunnels, enabling 
 any-to-any connectivity.
 
 Here's the URL...
 

http://www.cisco.com/en/US/products/ps6648/products_ios_protocol_option_
 home
 .html
 
 
 Regards,
 Masood Ahmad Shah
 BLOG: http://www.weblogs.com.pk/jahil/
 
 
 
 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of 
 [EMAIL PROTECTED]
 Sent: Wednesday, August 08, 2007 12:34 AM
 To: cisco-nsp@puck.nether.net
 Subject: [c-nsp] VPLS over Tunnels
 
  Hello,
 
 Trying to find some doc about implementing VPLS over TE Tunnels.
 
 Something similar to Implementing MPLS VPN over TE Tunnels

http://www.cisco.com/en/US/tech/tk436/tk428/technologies_tech_note09186a
 0080125b01.shtml
 
 Tks
 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net 
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/
 
 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net 
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] VPLS over Tunnels

2007-08-07 Thread alaerte.vidali
 Hello,

Trying to find some doc about implementing VPLS over TE Tunnels. 

Something similar to Implementing MPLS VPN over TE Tunnels
http://www.cisco.com/en/US/tech/tk436/tk428/technologies_tech_note09186a
0080125b01.shtml

Tks
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Dual-Homed VPLS

2007-07-27 Thread alaerte.vidali
The idea sounds nice, but did you see the drawbacks?

I think I will end up using 2 extended Vlans per customer, so we have the 
benefits of layer 3 redundancy without the concerns of Spanning Tree and will 
get some load balancing. 

Something like this:

Site 1  Site 2

CE1-A (Vlan 10)--PE1=PE2-CE2-A (Vlan 10)
  |   \   /|
layer 3 connection \ /   layer 3 connection
  | \   /  |
CE1-A'   \ / CE2-A'
(Vlan 20) \   /(Vlan 20)
 | \ /| 
  -  PE3 -

If CE1-A' wants to send a packet to CE2-A, it either sends the traffic to CE1-A
through the local layer 3 connection (and it then goes from CE1-A to CE2-A
through VPLS), or it sends it to CE2-A' through VPLS and then locally to CE2-A.
If there is a failure of PE1 or of the link CE1-A to PE1, all traffic goes
through PE3 via CE1-A', as per the layer 3 decision.

The disadvantage of this approach is that it doubles the number of required
extended VLANs over VPLS.
But in some cases (as in the current project I am dealing with) the number of
extended VLANs is not a concern.

Best Regards,
Alaerte


-Original Message-
From: ext Tim Durack [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 27, 2007 10:08 AM
To: cisco-nsp@puck.nether.net
Cc: Vidali Alaerte (NSN - BR/Rio de Janeiro); [EMAIL PROTECTED]; [EMAIL 
PROTECTED]
Subject: Re: [c-nsp] Dual-Homed VPLS

Disclaimer: I've only read about this.

If you can do H-VPLS with u-PE/n-PE Cisco talks about EE H-VPLS Pseudo-n-PE 
Redundancy being a way to avoid some of the loop-avaoidance issues.

See this link:

http://cisco.com/en/US/products/hw/routers/ps368/products_white_paper09186a00801f6084.shtml

Looks like an interesting idea to me. Anyone actually doing this?

Tim:

On 7/27/07, Peter Krupl [EMAIL PROTECTED] wrote:
 Hi,

 I have looked at this issue too, but your solution has one major flaw...

 Q:
 What would happen if the VPLS circuits go down in the core network, 
 And then came back up ?
 A: You have a loop, until spanning tree notices

 Flexlink seems more usable


 Med venlig hilsen/Kind regards
 Peter Åris Krüpl
 Network specialist

 -----Original Message-----
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of
 [EMAIL PROTECTED]
 Sent: 25 July 2007 17:33
 To: [EMAIL PROTECTED]
 Cc: cisco-nsp@puck.nether.net
 Subject: Re: [c-nsp] Dual-Homed VPLS

 Hi Eric,

 Exactly.

 As PE would forward Spanning tree BPDUs transparently, I am 
 considering STP is also an option to block a link.
 For example:


 CE1-A_(fa-1/1)---  PE1===PE2-CE2-A (STP ROOT)
|   \ /
(fa-1/2) \   /
|_\ /
   PE3


 Considering CE2-A is STP root, CE1-A would receive BPDU from both 
 interfaces, would choose fa1/1 as RP and would block fa1/2.
 What do you think?


 I hope we receive more feedback.


 Best Regards,
 Alaerte



 -Original Message-
 From: ext Eric Helm [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, July 25, 2007 12:15 PM
 To: Vidali Alaerte (NSN - BR/Rio de Janeiro)
 Subject: Re: [c-nsp] Dual-Homed VPLS

 Alaerte...
 Are you talking about dual-homed VPLS endpoints? If so, I'd be curious 
 to hear what suggestions you receive for this topic. When I looked 
 into doing this, it seemed that using Flex-Links was the only viable 
 solution.

 Regards,

 /Eric

 [EMAIL PROTECTED] wrote:
   Hi,
 
  Do you indicate any reference for this topic?
 
  I tried some books like MPLS Configuration on Cisco IOS Software
  (pretty good book) by Lancy and Umesh, but it only touch the subject.
 
  Tks,
  Alaerte
  ___
  cisco-nsp mailing list  cisco-nsp@puck.nether.net 
  https://puck.nether.net/mailman/listinfo/cisco-nsp
  archive at http://puck.nether.net/pipermail/cisco-nsp/
 
 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net 
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/
 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net 
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Dual-Homed VPLS

2007-07-25 Thread alaerte.vidali
 Hi,

Can you point me to any reference on this topic?

I tried some books like MPLS Configuration on Cisco IOS Software
(a pretty good book) by Lancy and Umesh, but it only touches on the subject.

Tks,
Alaerte
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Dual-Homed VPLS

2007-07-25 Thread alaerte.vidali
Hi Eric,

Exactly.

As the PEs would forward Spanning Tree BPDUs transparently, I am considering
that STP is also an option to block a link.
For example:


CE1-A_(fa-1/1)---  PE1===PE2-CE2-A (STP ROOT)   
|   \ /
(fa-1/2) \   / 
|_\ /
   PE3


Considering CE2-A is the STP root, CE1-A would receive BPDUs on both
interfaces, would choose fa1/1 as its root port and would block fa1/2.
What do you think?


I hope we receive more feedback.


Best Regards,
Alaerte

 

-Original Message-
From: ext Eric Helm [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, July 25, 2007 12:15 PM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro)
Subject: Re: [c-nsp] Dual-Homed VPLS

Alaerte...
Are you talking about dual-homed VPLS endpoints? If so, I'd be curious
to hear what suggestions you receive for this topic. When I looked into
doing this, it seemed that using Flex-Links was the only viable
solution.

Regards,

/Eric

[EMAIL PROTECTED] wrote:
  Hi,
 
 Do you indicate any reference for this topic?
 
 I tried some books like MPLS Configuration on Cisco IOS Software
 (pretty good book) by Lancy and Umesh, but it only touch the subject.
 
 Tks,
 Alaerte
 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net 
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/
 
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] HSRP Flapping Due to CPU Spikes

2007-07-24 Thread alaerte.vidali
Thanks again Gianluca,

By traffic "locally switched" do you mean traffic that does not cross the bus
(inbound and outbound interface on the same module)?

Br,
Alaerte

-Original Message-
From: ext hjan [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 24, 2007 10:47 AM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro)
Cc: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] HSRP Flapping Due to CPU Spikes



[EMAIL PROTECTED] ha scritto:
 Hi Gianluca,
 Thanks for your information. (sorry the delay between one message and 
 other, out of office)

Sorry for the delay but I'm out of the office due to a honeymoon vacation :)

 I think this may be a central point of the instabilities we are 
 observing on 7600. I am trying to understand why mls rate-limit 
 unicast ip icmp unreachable... help in your case.

This could be long... however, I'll try to summarize below.
The traffic punted to CPU has been identified as user traffic. This
traffic is normally forwarded through the router in hardware and not
transferred to the RP. During transient conditions it is possible to see
traffic locally switched on the same port. Local switching on the same
port causes the router to send these packets to the RP for processing
and to originate ICMP redirect messages (if necessary).
The command "mls rate-limit unicast ip icmp redirect" allows rate-limiting
the packets sent to the RP.

 
 Please if you have time let me know:
 
 -Did you use commands to verify mistral asic drops?

Yes.
With "show ibc | include LBIC", at some point I can see:

  sho ibc | in LBIC
    LBIC RXQ Drop pkt count = 65535    LBIC drop pkt count = 0
    LBIC Drop pkt stick = 0

so drops do occur.


 -Is the traffic punted to RP data traffic? 

Yes.

 -Are you using logging on Access Lists?

No.

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Bug with 12.2(18)SXF8 on 7600/SUP2

2007-07-10 Thread alaerte.vidali
Hi,

What was the previous IOS version?
This problem was not observed under SXF5 or SRA1 on the networks I have
worked with.

Regards,
Alaerte






Message: 3
Date: Mon, 9 Jul 2007 13:17:42 -0400
From: Phil Bedard [EMAIL PROTECTED]
Subject: [c-nsp] Bug with 12.2(18)SXF8 on 7600/SUP2?
To: Cisco-Nsp Nether Net cisco-nsp@puck.nether.net

We recently upgraded some of our 7600s/SUP2s to SXF8 and now are seeing
the following error messages at random times. When these events occur,
the router has CPU issues where it will drop BGP/OSPF sessions due to
hold timers expiring and drop interfaces due to missed keepalives. The
documentation on the error message points to a possible hardware
problem, but we have seen this on 6 routers thus far and it's highly
unlikely it's a problem with bad hardware.
Generally we'll get a few of these in a row and then things stabilize.
I've been investigating a link between these events and heavy BGP
update times, but haven't found any concrete data as of yet. We have
opened a TAC case and have thus far gotten nowhere.

Jul  7 15:24:24.551 UTC: %PM_SCP-SP-2-LCP_FW_ERR_INFORM: Module 1 is experiencing the following error: Bus Asic #0 out of sync error
Jul  7 15:24:31.545 UTC: %PM_SCP-SP-2-LCP_FW_ERR_INFORM: Module 1 is experiencing the following error: Bus Asic #0 in sync

Phil
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Payload Type in AS5400

2007-07-05 Thread alaerte.vidali
Hi,

Do you know if it is possible to change the RTP payload type on version
12.4(11)T?

The Cisco AS5400 uses PT=98 for G.726 (all flavours).

Thanks.




___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Defining the IP of client in LNS without Radius

2007-06-29 Thread alaerte.vidali
 Hi,

Do you think it is possible to pre-define the IP address of an L2TP
tunnel without using an AAA server?
(For example, using DHCP for the IP pool and somehow configuring the
DHCP server to map an IP to certain parameters received from the LNS.)
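
For reference, a plain local pool on the LNS virtual-template would look
roughly like this (names and addresses are made up), but that alone does
not give a fixed, pre-defined address per client:

  ip local pool L2TP-POOL 10.1.1.10 10.1.1.100
  !
  interface Virtual-Template1
   ip unnumbered Loopback0
   peer default ip address pool L2TP-POOL
   ppp authentication chap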

Tks,
Alaerte
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] HSRP Flapping Due to CPU Spikes

2007-06-27 Thread alaerte.vidali
Hi Gianluca,

Thanks for your information. (Sorry for the delay between one message
and the next, I was out of the office.)
I think this may be a central point of the instabilities we are
observing on the 7600. I am trying to understand why mls rate-limit
unicast ip icmp unreachable... helped in your case.

Please if you have time let me know:

-Did you use commands to verify mistral asic drops?
-Is the traffic punted to RP data traffic? 
-Are you using logging on Access Lists?



Tks,
Alaerte  

-Original Message-
From: ext hjan [mailto:[EMAIL PROTECTED] 
Sent: Thursday, June 14, 2007 7:04 AM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] HSRP Flapping Due to CPU Spikes



[EMAIL PROTECTED] ha scritto:
 Hi Gianluca,
 
 Did you have a clue why does rate-limit solve the problem?

In my case it is a combination of multiple factors, network topology and
routing/QoS design. However, with mls rate-limit unicast ip icmp
unreachable acl-drop 0 you force the SIP-601 LC not to punt unreachable
packets to the SUP720 for processing, so you can't overwhelm the CPU.
In my case, for a short time after the up/down on the TenGE interface
the CPU was overwhelmed and drops on the Mistral ASIC occurred; what
gets dropped is random, and in my case it involved EIGRP.

A sniff of the packets punted to the RP helped me a lot :) :

monitor session 1 source interface gx/y
monitor session 1 destination interface gx/y
remote command switch test monitor add 1 rp-inband tx
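
Spelled out a bit (interface numbers are placeholders, and the last one
is a test command run on the SP, so syntax may vary by release):

  ! plain SPAN session towards a port with a sniffer attached
  monitor session 1 source interface Gi1/1
  monitor session 1 destination interface Gi1/2
  ! run on the SP: adds the RP inband channel to the same session,
  ! so you capture exactly what is being punted to the RP
  remote command switch test monitor add 1 rp-inband tx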


Regards,
Gianluca
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Transmission Failure Detection

2007-06-25 Thread alaerte.vidali
 
Hi,

Cisco is recommending not to use RSVP hellos lower than 200 ms for link
failure detection. I am wondering if you have used them without false
positives in commercial networks.

Tks
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] HSRP Flapping Due to CPU Spikes

2007-06-25 Thread alaerte.vidali
Hi Rodney,

We are looking forward to this feature. The last news we received is that
there is no release date for the 7609. Do you have different information?

tks 

-Original Message-
From: ext Rodney Dunn [mailto:[EMAIL PROTECTED] 
Sent: Monday, June 25, 2007 4:24 PM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro)
Cc: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] HSRP Flapping Due to CPU Spikes

You may want to look into code that has, if it's shipping yet,
BFD-triggered HSRP.
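
If/when it ships, the configuration is expected to be little more than
BFD on the interface plus tying HSRP to it, something like this sketch
(interface, group, address and timers are placeholders):

  interface GigabitEthernet1/1
   bfd interval 100 min_rx 100 multiplier 3
   standby 1 ip 10.1.1.1
   standby 1 preempt
   standby bfd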

Rodney

On Mon, Jun 25, 2007 at 01:00:01PM -0500, [EMAIL PROTECTED] wrote:
 Sorry for the delay... out of the office.
 No, no tracking. It seems like a process problem: HSRP competes with
 other processes during a WAN failure (OSPF convergence, MPLS FRR
 actions, ...) for CPU time, and some HSRP packets seem to be lost,
 causing the HSRP state change.
 
 Rgds,
 Alaerte
 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net 
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] cisco-nsp Digest, Vol 53, Issue 81

2007-04-27 Thread alaerte.vidali
Hmm... no support for BFD-triggered FRR under this version. No release
date yet.

Are you using SIP + SPA for 10GE?
If yes, what is the SIP?
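
For the archive, the plain IGP-side BFD that is supported today is just
something along these lines (interface and timers are examples only):

  interface TenGigabitEthernet1/1
   bfd interval 100 min_rx 100 multiplier 3
  !
  router ospf 1
   bfd all-interfaces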

Rgds,
Alaerte 

-Original Message-
From: ext Raman Sud [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 27, 2007 8:02 PM
To: Vidali Alaerte (NSN - BR/Rio de Janeiro); Vidali Alaerte (NSN -
BR/Rio de Janeiro)
Subject: RE: cisco-nsp Digest, Vol 53, Issue 81

Cisco 6509 with Sup720-3BXLs running
s72033-advipservicesk9_wan-mz.122-18.SXF6.bin

-Original Message-
From: Alaerte Vidali [mailto:[EMAIL PROTECTED]
Sent: Friday, April 27, 2007 3:54 PM
To: [EMAIL PROTECTED]; Raman Sud; cisco-nsp@puck.nether.net
Subject: RE: cisco-nsp Digest, Vol 53, Issue 81

There is normal BFD and, more recently, BFD integrated with FRR.
You can check if there is support for the latter on your devices.
What is the platform/IOS?

br,
Alaerte

 -Original Message-
 From: ext Raman Sud
 Received: Sat Apr 28 01:32:55 EEST 2007
 To: [EMAIL PROTECTED], cisco-nsp@puck.nether.net
 Subject: RE: cisco-nsp Digest, Vol 53, Issue 81
 
 It will be on 10GE ports on my backbone routers.
 
 I have BFD already configured on my backbone routers and I am using
 OSPF as an IGP.
 
 I have MPLS on the entire backbone and would like to give route
 redundancy to my customers in various cities.
 
 Raman
 
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: Friday, April 27, 2007 3:12 PM
 To: cisco-nsp@puck.nether.net
 Cc: Raman Sud
 Subject: RE: cisco-nsp Digest, Vol 53, Issue 81
 
 Hi Raman,
 
 Yes, it works fine if that is your question.
 But recently Cisco added a warning not to use values lower than 200 ms
 for RSVP hellos.
 I could not get the exact reason it was done. My bet is that some
 customer complained about CPU usage and low priority when there are
 instabilities on the network burning CPU, which may cause false
 positives when using values as low as 10 ms for RSVP hellos.
 
 Cisco released BFD for FRR. If you have it available on your
 platform/IOS, it is better to go that way.
 If not, just be aware of potential problems with low RSVP timers.
 
 Configuration is very straightforward: just a global command and an
 interface command defining the RSVP hello interval and the number of
 misses. I suggest stressing your network to verify that you do not get
 false positives.
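 
 For completeness, the commands I mean are along these lines (the
 interface name, interval and miss count are examples only):
 
   ip rsvp signalling hello
   !
   interface POS1/0
    ip rsvp signalling hello
    ip rsvp signalling hello refresh interval 200
    ip rsvp signalling hello refresh misses 4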
 
 What is the interface you want to use it?
 
 Good Luck,
 
 Alaerte
 
 
 --
 
 Message: 7
 Date: Fri, 27 Apr 2007 14:03:02 -0700
 From: Raman Sud [EMAIL PROTECTED]
 Subject: [c-nsp] MPLS Fast Reroute
 To: cisco-nsp@puck.nether.net
 
 Has anyone set up MPLS fast reroute using RSVP? Is there a config that
 someone can share?
  
 Thanks
  
 
 Raman Sud
 
 
 
 --
 
 Message: 8
 Date: Fri, 27 Apr 2007 14:27:27 -0700 (MST)
 From: Bill Nash [EMAIL PROTECTED]
 Subject: Re: [c-nsp] MPLS Fast Reroute
 To: Raman Sud [EMAIL PROTECTED]
 Cc: cisco-nsp@puck.nether.net
 
 

 http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/120newft/120limit/120st/120st16/frr.htm
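 
 In a nutshell, what that document walks you through is roughly the
 following (addresses, tunnel numbers and the explicit-path name are
 made up; the explicit-path definition itself is omitted):
 
   ! headend: primary TE tunnel with FRR requested
   interface Tunnel1
    ip unnumbered Loopback0
    tunnel mode mpls traffic-eng
    tunnel destination 10.0.0.9
    tunnel mpls traffic-eng autoroute announce
    tunnel mpls traffic-eng path-option 1 dynamic
    tunnel mpls traffic-eng fast-reroute
 
   ! point of local repair: backup tunnel around the protected link
   interface Tunnel100
    ip unnumbered Loopback0
    tunnel mode mpls traffic-eng
    tunnel destination 10.0.0.5
    tunnel mpls traffic-eng path-option 1 explicit name AROUND-POS1-0
 
   interface POS1/0
    mpls traffic-eng backup-path Tunnel100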
 
 - billn
 
 On Fri, 27 Apr 2007, Raman Sud wrote:
 
 
 
 


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Fast Reroute and Link Flapping

2007-04-19 Thread alaerte.vidali
Hi Oli,

Could you comment on the 10-sec link-up debounce on POS?

That is not the behavior on the links I handled last time.

Tks,
Alaerte 

-Original Message-
From: ext Oliver Boehmer (oboehmer) [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 19, 2007 12:23 AM
To: Vidali Alaerte (NSN BR/Rio de Janeiro); cisco-nsp@puck.nether.net
Subject: RE: [c-nsp] Fast Reroute and Link Flapping

[EMAIL PROTECTED]  wrote on Thursday, April 19, 2007 1:30 AM:

  Hi,
 
 Do you have reference that discuss fast reroute and link flapping?

not aware of one, but how fast a link flap would you be interested in,
and what is your TE config? Once the protected link has failed and the
LSPs are rerouted via the backup tunnel, the headend(s) will try to
re-optimize the tunnel around the link. If the failed link comes back,
by default the tunnel headends will not trigger a reoptimization (can be
enabled, however), so existing tunnels will not immediately cross the
link in question.
Any LSA/LSP throttling will also apply, so a rapidly flapping link will
trigger the backoff and the IGP throttles down.
I'd always enable IP dampening to prevent rapidly flapping links from
causing any churn on your network. POS already has a 10-sec link-up
debounce timer, but enabling dampening on it doesn't hurt.
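
For reference, IP event dampening is just one line per interface; the
values below are roughly the defaults (half-life, reuse, suppress,
max-suppress-time), shown only as an example:

  interface POS1/0
   dampening
   ! or with explicit values:
   ! dampening 5 1000 2000 20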

oli

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Fast Reroute and Link Flapping

2007-04-18 Thread alaerte.vidali
 Hi,

Do you have reference that discuss fast reroute and link flapping?

Tks,
Alaerte

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Show system jumbomtu

2007-03-30 Thread Alaerte.Vidali
Do you know if there is any hidden issue with this command?

I tried it in two IOS versions where it was supposed to work, but it is
not supported.

Version 12.2(33)SRA1

OSR-1#show system ?
% Unrecognized command
OSR-1#show system

Version 12.2(17d)SXB7
OSR-2#sh system?
% Unrecognized command
OSR-2#sh system
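
For what it's worth, the pieces I would expect to work even where the
show form is missing are the global knob plus the per-interface MTU
(9216 is just the usual upper bound, adjust to the hardware):

  ! global
  system jumbomtu 9216
  ! per interface
  interface GigabitEthernet1/1
   mtu 9216
  ! fallback verification
  show interfaces GigabitEthernet1/1 | include MTU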



Tks,
Alaerte

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] RSVP Hellos

2007-03-29 Thread Alaerte.Vidali
 
Are you aware of any restriction concerning use of aggressive values on
RSVP Hellos to detect neighbor failure?




___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] mpls traffic-eng reoptimize timers frequency

2007-03-28 Thread Alaerte.Vidali
Do you have comments regarding this command?
(advantages, disadvantages, CPU impact, traffic impact, bugs)
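
For context, the knob itself is a single global command; the value here
is only an example (the IOS default is, as far as I recall, 3600
seconds), and there is also a manual trigger from exec mode:

  mpls traffic-eng reoptimize timers frequency 300
  ! manual, one-shot reoptimization from exec mode:
  ! mpls traffic-eng reoptimize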

Tks,
Alaerte

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/