[c-nsp] backup cpe

2009-07-12 Thread Mohammad Khalil

hi all,
I have a router with two Ethernet interfaces: one is connected to a microwave 
device (leased line) and the other to a WiMAX CPE.
Now if the leased line goes down, how am I going to activate the CPE 
automatically?
There is no dialling on the CPE; it obtains a DHCP IP address from the base 
station (BS) once line of sight (LOS) is there.

Thanks 


Re: [c-nsp] backup cpe

2009-07-12 Thread Arie Vayner (avayner)
Mohammad,

Take a look here:

Enhanced Object Tracking
http://www.cisco.com/en/US/docs/ios/12_2t/12_2t15/feature/guide/fthsrptk.html

Reliable Static Routing Backup Using Object Tracking
http://www.cisco.com/en/US/docs/ios/12_3/12_3x/12_3xe/feature/guide/dbackupx.html

Embedded Event Manager (EEM)
http://www.cisco.com/en/US/products/ps6815/products_ios_protocol_group_home.html


I think this should give you some ideas...
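As a very rough sketch of the reliable static routing / object tracking idea
(interface names, addresses and the probe target below are invented for the
example, and the exact IP SLA / track syntax varies by IOS release):

ip sla 1
 icmp-echo 192.0.2.1 source-interface FastEthernet0/0
 frequency 10
ip sla schedule 1 life forever start-time now
!
track 1 ip sla 1 reachability
!
! primary default route via the leased line, withdrawn when the probe fails
ip route 0.0.0.0 0.0.0.0 FastEthernet0/0 203.0.113.1 track 1
! floating static via the DHCP-learned gateway on the WiMAX CPE interface
ip route 0.0.0.0 0.0.0.0 dhcp 250

The track object pulls the primary route out of the routing table when the
probe across the leased line fails, and the higher-distance DHCP route takes
over; whether the dhcp keyword is available depends on the IOS feature set.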

Arie



Re: [c-nsp] Mac address flapping..

2009-07-12 Thread A . L . M . Buxey
Hi,
 Actually,
  My 2 4506s are plugged into the customer's flat, default-configured Cisco 
 3548-XL-EN switch.

are they in the same VTP domain, or do they have trunks fed to them?
those switches are very, very old and weak in terms of the number
of VLANs they can handle - especially in PVST mode etc.

do you handle the VLANs on the 6509 devices (i.e. they are the routers)?
if so, have you checked the settings for VLAN 42 - esp. with regard to
HSRP?
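
for what it's worth, the sort of commands that might help eyeball this -
illustrative only, using the VLAN from the thread:

show standby vlan 42 brief
show spanning-tree vlan 42
show mac-address-table address 00d0.009e.2400

the last one with whatever MAC address is being reported as flapping.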


alan


Re: [c-nsp] backup cpe

2009-07-12 Thread Ivan Pepelnjak
More specifically ... SOHO multihoming solutions (includes object tracking
and reliable static routing)

http://wiki.nil.com/Small_site_multihoming

More reliable static routing tricks:

http://blog.ioshints.info/search?q=reliable+static

More DHCP-related tricks:

http://blog.ioshints.info/search/label/DHCP

EEM applet that enables/disables an interface (just tie it to a track
object, not a timer):

http://wiki.nil.com/Time-based_wireless_interface_activity

More sample EEM applets:

http://wiki.nil.com/Category:EEM_applet

More EEM usage guidelines and tips:

http://blog.ioshints.info/search/label/EEM
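
As a flavour of what such an applet can look like (a sketch only - it assumes
track 1 already follows the primary path and that the WiMAX CPE sits on
FastEthernet0/1; adjust names to the real setup):

event manager applet BACKUP-UP
 event track 1 state down
 action 1.0 cli command "enable"
 action 2.0 cli command "configure terminal"
 action 3.0 cli command "interface FastEthernet0/1"
 action 4.0 cli command "no shutdown"
 action 5.0 syslog msg "Primary path down - enabling backup CPE interface"
!
event manager applet BACKUP-DOWN
 event track 1 state up
 action 1.0 cli command "enable"
 action 2.0 cli command "configure terminal"
 action 3.0 cli command "interface FastEthernet0/1"
 action 4.0 cli command "shutdown"
 action 5.0 syslog msg "Primary path restored - shutting backup CPE interface"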

Ufff ... I'm obviously writing too much :)
Ivan
 
http://www.ioshints.info/about
http://blog.ioshints.info/




Re: [c-nsp] IGMP snooping ME6500

2009-07-12 Thread Phil Mayers

On Sat, Jul 11, 2009 at 07:28:36PM +0100, Adrian Minta wrote:

Hi,
I have a problem with Layer 2 multicast traffic on an ME6500. The switch 
floods all redundant links with multicast traffic, much like a dumb 
switch. On all the other small platforms IGMP snooping works very well 
out of the box: Cat2950, Cat3550, ME3400.


A friend of mine has the same symptom on a 7600. My software version is 
12.2(33)SXH3a; I don't know his version.


Has anyone encountered the same problem? Does anybody know a valid solution?


Config? Software version?

If you don't already have it, try creating an un-numbered SVI e.g.:

vlan 200
 name multicast
int Vlan200
 no ip address

I seem to recall references to this being required for some multicast 
functionality on some versions of 6500/7600 IOS




Re: [c-nsp] Mac address flapping..

2009-07-12 Thread Phil Mayers

On Sun, Jul 12, 2009 at 06:09:05AM +0100, James Ashton wrote:

I have looked at all the port configs in question. No forgotten stuff that I 
can see.

I am willing to go with the loop idea, but I don't get any loop errors.  I 
don't get any MAC move errors other than for this HSRP MAC address, and over 120 
other VLANs on these same ports aren't having this issue.


But if it were a loop, how would I find it and fix it?  I have gone through 
every method I know of and all the Cisco troubleshooting docs.  I can feel 
that I am missing something here, but I just can't think of what.



Next step is to SPAN the ports concerned and confirm for real what 
packets are causing the mac move notify, and see what else is there that 
shouldn't be
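
A local SPAN session for that would be roughly (port numbers are placeholders;
use the ports the log complains about and a spare port for the sniffer):

monitor session 1 source interface Gi1/7 both
monitor session 1 destination interface Gi2/48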


It's possible the loop isn't a full one; e.g. if they've looped subnet 
200 to 201, via a firewall that's dropping non-IP packets, then STP 
wouldn't complain, and you wouldn't get a broadcast storm, but you would 
get this kind of problem.



Re: [c-nsp] Mac address flapping..

2009-07-12 Thread Thomas Habets

On Sun, 12 Jul 2009, James Ashton wrote:

over 120 other vlans on  these same ports arent having this
issue.


Have you checked that you aren't running into spanning tree limits?

6500/7600 have two limits, virtual ports and active logical ports.

The short story is:
1) check if show spanning-tree summary total is more than 1.
2) check if show vlan virtual-port is more than 1800 per slot.

http://blog.habets.pp.se/2009/06/Spanning-tree-limits
http://www.cisco.com/en/US/solutions/ns340/ns394/ns50/net_design_guidance0900aecd806fe4bb.pdf

-
typedef struct me_s {
  char name[]  = { Thomas Habets };
  char email[] = { tho...@habets.pp.se };
  char kernel[]= { Linux };
  char *pgpKey[]   = { http://www.habets.pp.se/pubkey.txt; };
  char pgp[] = { A8A3 D1DD 4AE0 8467 7FDE  0945 286A E90A AD48 E854 };
  char coolcmd[]   = { echo '. ./_. ./_'_;. ./_ };
} me_t;


Re: [c-nsp] Mac address flapping..

2009-07-12 Thread James Ashton
Alan,
 The 3548 is not part of the VTP domain, and I am not passing it a trunk - just 
the one VLAN.


And the 6509s do handle the VLANs, but there are no tweaks to HSRP - just the 
default settings like all the others.




Re: [c-nsp] Mac address flapping..

2009-07-12 Thread James Ashton
Thomas
Here is the output.   Doesn't look like I have hit any limits.



From 6509-a

=
core-tpa001#sh spanning-tree summary totals 
Switch is in pvst mode
Root bridge for: VLAN0002-VLAN0065, VLAN0074, VLAN0084, VLAN0088, VLAN0093
  VLAN0098-VLAN0100, VLAN0996-VLAN0998
EtherChannel misconfig guard is enabled
Extended system ID   is enabled
Portfast Default is disabled
PortFast BPDU Guard Default  is disabled
Portfast BPDU Filter Default is disabled
Loopguard Defaultis enabled
UplinkFast   is disabled
BackboneFast is disabled
Pathcost method used is short
Name   Blocking Listening Learning Forwarding STP Active
--  -  -- --
120 vlans                    2         0        0        592        594




From 4506-a

core-tpa001#show vlan virtual-port
Slot 1
---
Total slot virtual ports 710
Slot 3
---
Total slot virtual ports 357
Slot 5
---
Total slot virtual ports 1
Total chassis virtual ports 1068 


James




Re: [c-nsp] IGMP snooping ME6500

2009-07-12 Thread Adrian Minta

Phil Mayers wrote:


Config? Software version?

If you don't already have it, try creating an un-numbered SVI e.g.:

vlan 200
 name multicast
int Vlan200
 no ip address

I seem to recall references to this being required for some multicast 
functionality on some versions of 6500/7600 IOS




Seems weird, but I will give it a try !

--
Best regards,
Adrian Minta




Re: [c-nsp] IGMP snooping ME6500

2009-07-12 Thread Tim Stevenson
That's not really the critical thing, so much as - you need an IGMP 
querier active in the VLAN in order for snooping to work 
correctly/reliably. Some applications may behave fine without; others 
won't. The key is that periodic joins from the hosts are required to 
maintain membership state for snooping; the querier ensures that happens.


So what you really need is an SVI *with* an IP address for that vlan, 
and then enable igmp snooping querier for that vlan. The configured 
IP is used to source queries. The SVI in this case can actually be 
shut down; it doesn't really matter.


The config is like:
int vlan 200
 ip add 10.1.1.1/24
 ip igmp snooping querier
 shut

The other option is to just enable PIM on the (admin up) SVI in the 
vlan, but you may not want to do that, depends on the network design.


int vlan 200
 ip add 10.1.1.1/24
 ip pim sparse
 no shut
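
Once it's in, something like this should confirm the querier is active in the
vlan (exact keywords may vary a little by release):

show ip igmp snooping querier vlan 200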

HTH,
Tim








Tim Stevenson, tstev...@cisco.com
Routing & Switching CCIE #5561
Technical Marketing Engineer, Cisco Nexus 7000
Cisco - http://www.cisco.com
IP Phone: 408-526-6759

The contents of this message may be *Cisco Confidential*
and are intended for the specified recipients only.


Re: [c-nsp] IGMP snooping ME6500

2009-07-12 Thread Adrian Minta

Tim Stevenson wrote:
That's not really the critical thing, so much as - you need an IGMP 
querier active in the VLAN in order for snooping to work 
correctly/reliably. Some applications may behave fine without; others 
won't. The key is periodic joins from the hosts are required to 
maintain membership state for snooping. The querier ensures that happens.


So what you really need is an SVI *with* an IP address for that vlan, 
and then enable igmp snooping querier for that vlan. The configured IP 
is used to source queries. The SVI in this case can actually be 
shutdown, it doesn't really matter.


The config is like:
int vlan 200
 ip add 10.1.1.1/24
 ip igmp snooping querier
 shut

The other option is to just enable PIM on the (admin up) SVI in the 
vlan, but you may not want to do that, depends on the network design.


int vlan 200
 ip add 10.1.1.1/24
 ip pim sparse
 no shut

HTH,
Tim

Creating an unnumbered interface didn't seem to work. Now I am trying 
your solution, the one with ip igmp snooping querier. I don't want to 
involve the switches in any multicast routing.


--
Best regards,
Adrian Minta





Re: [c-nsp] IGMP snooping ME6500

2009-07-12 Thread ML

Adrian Minta wrote:

Creating an unnumbered interface didn't seem to work. Now I am trying 
your solution, the one with ip igmp snooping querier. I don't want to 
involve the switches in any multicast routing.




Normally I just enable PIM on the SVI and IGMP snooping for the VLAN.
No traffic gets flooded unnecessarily.




Re: [c-nsp] Mac address flapping..

2009-07-12 Thread Mateusz Blaszczyk
James,
did you try to clear the ARP table to force some broadcast traffic?
or ping the broadcast IP for the VLAN?
and see if it triggers more MAC flapping?
not that it would help at all...

it is baffling.
Another thing... try to reconfigure the SVIs... or even use another VLAN.

I think we've run out of guns here...

Best Regards,

-mat


[c-nsp] Maximum spanning tree instances

2009-07-12 Thread Shine Joseph
Hi,

I searched the archives to see if I could find the answer to this query; 
the result was negative.

How many spanning-tree instances are possible in Rapid PVST+ and MST 
modes on Cisco 6500 series switches with Sup720?

The only documentation I could find talks about the total number of 
virtual ports per line card and the total number of active logical ports. 
There is no reference to the number of instances.

The following NetPro link mentions 4096 instances, but this point 
is not validated.
http://forums.cisco.com/eforum/servlet/NetProf?page=netprof&forum=Network%20Infrastructure&topic=LAN%2C%20Switching%20and%20Routing&topicID=.ee71a04&CommCmd=MB%3Fcmd%3Dpass_through%26location%3Doutline%40%5E1%40%40.2cc1484e/2#selected_message

Any links or pointers would be much appreciated.

Thanks in advance,
Shine


[c-nsp] EIGRP SoO question

2009-07-12 Thread Derick Winkworth
I'm trying to wrap my head around how this works.

There is BGP SoO.  This is where routes are tagged as they are redistributed 
into BGP so that other PEs attached to the same customer site do not push the 
routes back into the site.  This accounts for the PE -> CE direction.

In the opposite direction, it seems there are actually two different mechanisms.

There is

a) EIGRP SoO.  This is an EIGRP extension/tag that the PE uses so it does not 
re-introduce a route back into the PE iBGP cloud.  Routes are tagged going into 
a site, and if the site is dual-homed and the route comes back to another PE 
that is appropriately configured, this other PE will see the tag and not 
re-advertise that route back into BGP.

b) BGP cost community.  This attribute carries the EIGRP metric of the route 
that is being redistributed into BGP.  At another PE (presumably a PE attached 
to a multihomed site), this attribute tells BGP to compare the EIGRP cost 
embedded in the attribute directly to an EIGRP route learned from the CE.  This 
attribute is compared before any other BGP attribute.


So I guess why do we need both (a) and (b)?

The documentation for this is shoddy.

Derick Winkworth
CCIE #15672


Re: [c-nsp] IGMP snooping ME6500

2009-07-12 Thread Tim Stevenson
Note that you can have a pim-enabled interface with ip 
multicast-routing disabled and that should work too - though then the 
RP CPU will be setting up state (at L3) for no particularly good 
reason. The querier function is to avoid all that. Let us know if it 
improves things.


Tim

At 12:40 PM 7/12/2009, Adrian Minta remarked:

Creating an unnumbered interface didn't seems to work. Now I am trying
your solution, the one with ip igmp snooping querier. I don't want to
involve the switches in any multicast routing.





Tim Stevenson, tstev...@cisco.com
Routing & Switching CCIE #5561
Technical Marketing Engineer, Cisco Nexus 7000
Cisco - http://www.cisco.com
IP Phone: 408-526-6759

The contents of this message may be *Cisco Confidential*
and are intended for the specified recipients only.



Re: [c-nsp] Maximum spanning tree instances

2009-07-12 Thread Clinton Work


The short answer is that the 6500 platform spanning-tree scalability is 
limited by virtual ports and the complexity of your spanning-tree 
topology.  If you only have a couple of trunks carrying all 4096 VLANs 
then you'll be fine.  If you have a lot of FastE ports trunking hundreds of 
VLANs then you will quickly run into the virtual port limits. 

The 12.2SXF release notes indicate the virtual port limits, which can be 
checked against the show vlan virtual-port command output.  The 
documentation isn't clear, but the 6500 spanning-tree limits are based 
upon virtual ports rather than logical ports (e.g. CatOS, 4500, ...).  If 
you check the 12.2SXI release notes you'll see that Cisco included 
enhancements that increase the virtual port scalability numbers.   Note that 
rapid spanning tree is listed as RPVST+ in the table. 


http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SXF/native/release/notes/OL_4164.html#wp26366




--
==
Clinton Work
Airdrie, AB




Re: [c-nsp] Mac address flapping..

2009-07-12 Thread Lincoln Dale
It's either a loop, or the server in question is dual-homed with the same 
MAC address on two physical switches.
Since your network hasn't yet melted down because of a loop, and 
loopguard (which you have enabled, right?) hasn't seen a BPDU on a port 
which shouldn't ever receive them, my money is on a host that is 
misconfigured.


e.g. think of the host using the equivalent of a port-channel mode 'on' 
and balancing traffic in both directions.

Your switching infrastructure will see this as a MAC move.

This is not a valid scenario for a host.  The host needs to be 
connected to either:
A. a single physical switch, with all physical interfaces configured 
into a port channel so that the switch sees them as a single logical 
link (see the sketch below), or
B. multiple physical switches (for redundancy), with the switches 
supporting Multichassis EtherChannel (MCEC).


For (B), the only valid scenarios at this point in time are:
   Catalyst 6500 VSS
   Nexus 7000 virtual Port Channel (vPC)
   Catalyst 3750 switch stack
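
For scenario (A), the switch side is roughly this (a sketch; interface, group
and VLAN numbers are placeholders - use 'mode on' only if the host really does
static bonding, LACP otherwise):

interface range GigabitEthernet1/1 - 2
 switchport
 switchport mode access
 switchport access vlan 42
 channel-group 10 mode active
!
interface Port-channel10
 switchport
 switchport mode access
 switchport access vlan 42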


cheers,

lincoln.



James Ashton wrote:

I have looked at all the port configs in question. No forgotten stuff that I 
can see.

I am willing to go with the loop idea, but I don't get any loop errors.  I 
don't get any MAC move errors other than for this HSRP MAC address, and over 120 
other VLANs on these same ports aren't having this issue.


But if it were a loop, how would I find it and fix it?  I have gone through 
every method I know of and all the Cisco troubleshooting docs.  I can feel 
that I am missing something here, but I just can't think of what.

James


From: Mateusz Blaszczyk [blah...@gmail.com]
Sent: Friday, July 10, 2009 3:19 PM
To: James Ashton
Cc: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] Mac address flapping..

James,

. (I have a pair doing redundant gateways for a DataCenter network)
  

  %MAC_MOVE-SP-4-NOTIF: Host 00d0.009e.2400 in vlan 42 is flapping between 
port Po1 and port Gi1/7

I see about 20 of these for this one vlan each minute.



the mac is 6509-b and pps==20/minute is probably HSRP hello packet
from Vlan42 on 6509-b.
if there are no topo changes in stp there must be a unnoticed L2 loop,
either forgotten portfast or bpdu filtering between 6509-a,-b and
4506-a.

perhaps try to disconnect the customer completely during a maintenance
window and double check all your connections.

Best Regards,

-mat


[c-nsp] Help with output drops

2009-07-12 Thread Randy McAnally
Hi all,

I just finished installing and configuring a new 6509 with dual Sup720-3BXL
supervisors (12.2(18)SXF15a) and a 6724 line card.  It serves a simple purpose:
maintaining a single BGP session and handling layer 3 (VLAN interfaces) for various
access switches.  No end devices are connected.

The problem is that we are getting constant output drops when our gig-E uplink
goes above ~400 Mbps - nowhere near the interface speed!  See below, and take note
of the massive 'Total output drops' with no other errors (on either end):

rtr1.ash#sh int g1/1
GigabitEthernet1/1 is up, line protocol is up (connected)
  Hardware is C6k 1000Mb 802.3, address is 00d0.01ff.5800 (bia 00d0.01ff.5800)
  Description: PTP-UPLINK
  Internet address is 209.9.224.68/29
  MTU 1500 bytes, BW 100 Kbit, DLY 10 usec,
 reliability 255/255, txload 118/255, rxload 12/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type is T
  input flow-control is off, output flow-control is off
  Clock mode is auto
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:01, output hang never
  Last clearing of show interface counters 05:01:25
  Input queue: 0/1000/0/0 (size/max/drops/flushes); Total output drops: 718023
  Queueing strategy: fifo
  Output queue: 0/100 (size/max)
  30 second input rate 47789000 bits/sec, 30797 packets/sec
  30 second output rate 465362000 bits/sec, 48729 packets/sec
  L2 Switched: ucast: 27775 pkt, 2136621 bytes - mcast: 24590 pkt, 1574763 bytes
  L3 in Switched: ucast: 592150327 pkt, 95608889548 bytes - mcast: 0 pkt, 0
bytes mcast
  L3 out Switched: ucast: 991372425 pkt, 1214882993007 bytes mcast: 0 pkt, 0 
bytes
 592554441 packets input, 95674494492 bytes, 0 no buffer
 Received 33643 broadcasts (17872 IP multicasts)
 0 runts, 0 giants, 0 throttles
 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
 0 watchdog, 0 multicast, 0 pause input
 0 input packets with dribble condition detected
 991006394 packets output, 1214377864373 bytes, 0 underruns
 0 output errors, 0 collisions, 0 interface resets
 0 babbles, 0 late collision, 0 deferred
 0 lost carrier, 0 no carrier, 0 PAUSE output
 0 output buffer failures, 0 output buffers swapped out

The CPU usage is nil:

rtr1.ash#sh proc cpu sort

CPU utilization for five seconds: 1%/0%; one minute: 0%; five minutes: 0%
 PID Runtime(ms)   Invoked  uSecs   5Sec   1Min   5Min TTY Process
   6 3036624252272  12037  0.47%  0.19%  0.18%   0 Check heaps
 316  195004 99543   1958  0.15%  0.01%  0.00%   0 BGP Scanner
 119  267568   2962884 90  0.15%  0.03%  0.02%   0 IP Input
 172  413528   2134933193  0.07%  0.03%  0.02%   0 CEF process
   4  16 48214  0  0.00%  0.00%  0.00%   0 cpf_process_ipcQ
   3   0 2  0  0.00%  0.00%  0.00%   0 cpf_process_msg_
   5   0 1  0  0.00%  0.00%  0.00%   0 PF Redun ICC Req
   2 772298376  2  0.00%  0.00%  0.00%   0 Load Meter
   9   23964157684151  0.00%  0.01%  0.00%   0 ARP Input
   7   0 1  0  0.00%  0.00%  0.00%   0 Pool Manager
   8   0 2  0  0.00%  0.00%  0.00%   0 Timers
snip

I THINK I have determined the drops are caused by buffer congestion on the port:

rtr1.ash#sh queueing interface gigabitEthernet 1/1 

rtr1.ash#sh queueing interface gigabitEthernet 1/1
Interface GigabitEthernet1/1 queueing strategy:  Weighted Round-Robin
  Port QoS is enabled
  Port is untrusted
  Extend trust state: not trusted [COS = 0]
  Default COS is 0
Queueing Mode In Tx direction: mode-cos
Transmit queues [type = 1p3q8t]:
Queue IdScheduling  Num of thresholds
-
   01 WRR 08
   02 WRR 08
   03 WRR 08
   04 Priority01

WRR bandwidth ratios:  100[queue 1] 150[queue 2] 200[queue 3]
queue-limit ratios: 50[queue 1]  20[queue 2]  15[queue 3]  15[Pri Queue]

snip

  Packets dropped on Transmit:

queue dropped  [cos-map]
-
1   719527  [0 1 ]
20  [2 3 4 ]
30  [6 7 ]
40  [5 ]

So it would appear all of my traffic goes into queue 1.  It would also seem
that 50% buffers for queue 1 isn't enough?  These are the default settings by
the way.

I'm pretty sure that wrr-queue queue-limit and wrr-queue bandwidth should help
us mitigate this frustrating packet loss, but I've no experience with them and would
like some insight and suggestions before I start making changes.  I am totally
unfamiliar with these features (I come from a Foundry/Brocade background) and
would like any suggestions or advice you might have before I try anything that
could risk downtime or further 
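
For reference, those knobs are applied per interface and would look something
like the following - the numbers are purely illustrative, and the allowed ratios
depend on the line card and release, so check them against the 6500 QoS guide
(and ideally test in a maintenance window) before applying anything:

interface GigabitEthernet1/1
 ! give the queue carrying CoS 0/1 a larger share of the transmit buffer
 ! (the defaults shown above were 50/20/15 plus 15 for the priority queue)
 wrr-queue queue-limit 70 10 5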


Re: [c-nsp] EIGRP SoO question

2009-07-12 Thread Ivan Pepelnjak
You'll probably find enough details here:

http://wiki.nil.com/Multihomed_MPLS_VPN_sites_running_EIGRP

If that's not the case, let me know and I'll fix the article.

Ivan
 
http://www.ioshints.info/about
http://blog.ioshints.info/ 
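
For reference, the EIGRP SoO part is normally attached to the PE-CE interface
via a route map, roughly like this (a sketch - the VRF name, interface and the
65000:1 value are placeholders):

route-map SOO-SITE1 permit 10
 set extcommunity soo 65000:1
!
interface Serial0/0
 ip vrf forwarding CUSTOMER-A
 ip vrf sitemap SOO-SITE1
 ip address 10.1.1.1 255.255.255.252

As far as I recall, the cost community is added automatically when the EIGRP
routes are redistributed into MP-BGP on the PE, so only the SoO tagging needs
explicit configuration.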
