Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Gergely Antal
And also keep in mind that WS-X6708-10GE is a true line-rate card...

Eric Gauthier wrote:
 Mark,
 
 I'm not sure if you have other 10G links, but keep in mind that Cisco has 
 (at least) two 10G optics form factors - XENPAK and X2 - that are not 
 compatible with each other.  I believe that the 6704-10GE uses XENPAK and 
 the 6708-10GE, which is essentially the same card but with 8 ports, uses X2.  
 If everything else you do is X2, it might make sense to jump up to the 
 8-port card to save yourself from having to buy/spare two types of optics.
 
 Eric :)
 
 
 On Mon, Mar 30, 2009 at 02:41:12AM -0700, Mark Tech wrote:
 Hi
 I have a prospective 10G customer and upstream ISP connections. I would 
 need to connect these into our 7609s running RSP720-3CXLs; so far I have 
 found that the WS-X6704-10GE card may be suitable.

 My technical requirements are:
 10Gbps line rate
 IPv4
 Able to handle full Internet routing table
 Potentially IPv6 and MPLS in the future

 With the WS-X6704-10GE, there seem to be several options available with it, 
 i.e.

 Memory Option: 
 MEM-XCEF720-256M 
 Catalyst 6500 256MB DDR, xCEF720 (67xx interface, DFC3A) 
 MEM-XCEF720-512M 
 Cat 6500 512MB DDR, xCEF720 (67xx interface, DFC3A/DFC3B)  
 MEM-XCEF720-1GB 
 Catalyst 6500 1GB DDR, xCEF720 (67xx interface, DFC3BXL) 

 
 Distributed Forwarding Card Option

 WS-F6700-CFC 
 Catalyst 6500 Central Fwd Card for WS-X67xx modules 
 WS-F6700-DFC3B 
 Catalyst 6500 Dist Fwd Card, 256K Routes for WS-X67xx  
 WS-F6700-DFC3A 
 Catalyst 6500 Dist Fwd Card for WS-X67xx modules 
 WS-F6700-DFC3BXL 
 Catalyst 6500 Dist Fwd Card- 3BXL, for WS-X67xx 
 WS-F6700-DFC3C 
 Catalyst 6500 Dist Fwd Card for WS-X67xx modules 
 WS-F6700-DFC3CXL 
 Catalyst 6500 Dist Fwd Card- 3CXL, for WS-X67xx

 I assume that I would need MEM-XCEF720-1GB and WS-F6700-DFC3CXL?
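Mark's sizing question comes down to FIB capacity versus the route count you expect to carry. A minimal sketch, assuming the commonly quoted figures (256K IPv4 routes for the non-XL DFC3 variants, roughly 1M for the XL variants); treat the numbers as assumptions to verify against the data sheet:

```python
# Pick DFCs whose FIB capacity covers the projected route count.
# Capacities below are commonly quoted figures, not authoritative; check the
# data sheet for your exact hardware revision.
DFC_FIB_CAPACITY = {
    "WS-F6700-DFC3B": 256_000,
    "WS-F6700-DFC3C": 256_000,
    "WS-F6700-DFC3BXL": 1_000_000,
    "WS-F6700-DFC3CXL": 1_000_000,
}

def suitable_dfcs(projected_routes: int) -> list:
    """Return DFCs able to hold the projected FIB, smallest capacity first."""
    fits = [(cap, name) for name, cap in DFC_FIB_CAPACITY.items()
            if cap >= projected_routes]
    return [name for cap, name in sorted(fits)]

# A full IPv4 table in early 2009 was around 280K routes and growing, so the
# 256K variants are already too small for a full table plus headroom:
print(suitable_dfcs(280_000))
```

With a full table only the XL variants fit, which matches the conclusion reached in the thread.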

 Regards

 Mark


   

 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/



signature.asc
Description: OpenPGP digital signature
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Paul Stewart
I don't have the spec sheets handy, but I believe the 6708 is 2:1
oversubscribed, correct?  The 6704 is 1:1, if that's important to the
application

Paul


-Original Message-
From: cisco-nsp-boun...@puck.nether.net
[mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Gergely Antal
Sent: Tuesday, March 31, 2009 2:55 AM
To: Eric Gauthier
Cc: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] 10GE card for 7609

And also keep in mind that WS-X6708-10GE is a true line-rate card...



___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Gergely Antal
I meant that you cannot push 40G out of a 6704, even with a DFC
attached to it. But you can do it with a 6708 at 1:1 subscription.

Gert Doering wrote:
 Hi,
 
 On Tue, Mar 31, 2009 at 08:54:47AM +0200, Gergely Antal wrote:
 And also keep in mind that WS-X6708-10GE is a true line-rate card...
 
 Hardly.
 
 gert



signature.asc
Description: OpenPGP digital signature
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

[c-nsp] IP Address management software

2009-03-31 Thread Gary Roberton
Hello all

What IP address management software do you use to control the allocation of
subnets to your customers/department?

Thanks

Gary
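For a small shop, the core of what an IPAM tool does can be prototyped with the standard library. A hedged sketch (class and method names are illustrative, not from any particular product) that carves fixed-size customer subnets out of a supernet with Python's ipaddress module:

```python
import ipaddress

class SubnetPool:
    """Toy IPAM: hand out fixed-size subnets from a supernet, tracking owners."""
    def __init__(self, supernet: str, prefixlen: int):
        # Pre-split the supernet into equal-sized candidate subnets.
        self.free = list(
            ipaddress.ip_network(supernet).subnets(new_prefix=prefixlen))
        self.allocations = {}  # subnet -> customer name

    def allocate(self, customer: str) -> ipaddress.IPv4Network:
        if not self.free:
            raise RuntimeError("pool exhausted")
        subnet = self.free.pop(0)
        self.allocations[subnet] = customer
        return subnet

pool = SubnetPool("192.0.2.0/24", 26)   # four /26s to hand out
print(pool.allocate("customer-a"))      # 192.0.2.0/26
print(pool.allocate("customer-b"))      # 192.0.2.64/26
```

A real tool adds persistence, overlap checks, and (as noted later in this digest) IPv6 support, but the allocation logic is this simple underneath.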
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Tassos Chatzithomaoglou

Why is that? Small (16MB) buffers?
We are pushing about 32G without any problem so far.

--
Tassos

Gergely Antal wrote on 31/03/2009 11:01:

I meant that you can not push 40G out of a 6704
even with a dfc attached to it.But you can do it with a 6708
with 1:1 subscription.

Gert Doering wrote:

Hi,

On Tue, Mar 31, 2009 at 08:54:47AM +0200, Gergely Antal wrote:

And also keep in mind that WS-X6708-10GE is a true line-rate card...

Hardly.

gert






___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IP Address management software

2009-03-31 Thread luismi
We use IPPlan here.

On Tue, 2009-03-31 at 09:17 +0100, Gary Roberton wrote:
 Hello all
 
 What IP address management software do you use to control the allocation of
 subnets to your customers/department?
 
 Thanks
 
 Gary

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

[c-nsp] Crash 7206VXR after changing IP address on interface

2009-03-31 Thread Ольга Ружанская

Hello, List!
 
Has anyone had a problem changing the IP address on a GigabitEthernet 
interface on a 7206VXR (NPE-G1)?
 
We tried to change the address on the interface twice, with the same effect 
each time. It's an Internet-facing address; after the change, the router 
reboots and saves only the crash info.
From the crash info:
Cause 0008 (Code 0x2): TLB (load or instruction fetch) exception.

Software version 12.2(31)SB11.
 
Best regards, 
Olga

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Tamas Sziraki
The 6704 is only said to be 1:1 with a DFC. What we experienced is that if  
you need the full 40G, use the 6708, regardless of it being 2:1  
oversubscribed.



Tamas

On Mar 31, 2009, at 9:02 AM, Paul Stewart wrote:

I don't have the spec sheets handy, but I believe the 6708 is 2:1
oversubscribed, correct?  The 6704 is 1:1, if that's important to
the application

Paul



___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Frederic LOUI


Hi Mark,
Mark Tech wrote:

Hi
  

...

I have a prospect for a 10G upstream customer and Upstream ISP connections. I 
would need to connect these into our 7609s running RSP 720-3CXL's, at the 
moment I have found that the WS-X6704-10GE card may be suitable.
  

We've re-used WS-X6704-10GE cards (from our 6500s, without the DFC3CXL daughter card)

My technical requirements are:
10Gbps line rate
  

10G = yes

IPv4
Able to handle full Internet routing table
Potentially IPv6 and MPLS in the future
  
We're implementing all of these features without any issue so far, except 
VPLS, which needs ES20/ES40 (or the combo ES) cards.
We're also using IPv4 multicast (sparse mode with anycast RP) and IPv6 
multicast with embedded RP, pseudowires, and L3VPN with MPLS.
6VPE is also supported. (We also tested traffic engineering in the lab, 
with a CRS-1 in the tunnel path, and it worked at 10G rate.)
For whatever reason Cisco recommended a 4GB RSP memory upgrade for the 
7600, but I'm not convinced about this recommendation.
(Note that if you do upgrade, you need to purchase 2x4GB in the case of 
redundant RSPs.)


I assume that I would need MEM-XCEF720-1GB and WS-F6700-DFC3CXL?

  
Yes. The WS-F6700-DFC3CXL comes in handy if you're planning to use NetFlow, 
as NDE (which we use extensively at full flow) is handled by the forwarding 
card.
(So the RSP720-3CXL won't carry the NDE burden.) Again, more RAM would give 
more room for the NetFlow cache, though the RAM may also be useful for 
another feature that I'm not aware of.


NetFlow is useful, and the only way we found to monitor IPv6 traffic on 
this platform, as IPv6 MIBs are not available on this architecture.


Cheers/Fred
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Saku Ytti
On (2009-03-31 03:02 -0400), Paul Stewart wrote:

 I don't have the spec sheets handy, but I believe the 6708 is 2:1
 oversubscribed, correct?  The 6704 is 1:1, if that's important to
 the application

The 6704 is not exactly 1:1, while the 6708 is exactly 2:1. So if you stick
play-doh in 4 ports of a 6708 you have a wire-speed card (FSVO wire-speed;
I really hate the term, though - surely wire-speed has to mean an overspeed
of (maxports - 1) * maxportspeed?).
Also, as an interesting curiosity, LAN cards prior to the 6708 are slow
to detect line-down, so if you're venturing into the sub-second
convergence world, that is relevant.
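The ratios debated in this thread are simple arithmetic over the per-slot fabric bandwidth. A sketch, assuming the figures used above (each card sits on a 40G fabric connection per slot):

```python
def oversubscription(ports: int, port_gbps: float, fabric_gbps: float) -> float:
    """Ratio of front-panel capacity to fabric capacity; 1.0 means line rate."""
    return (ports * port_gbps) / fabric_gbps

# 6704: 4 x 10G on a 40G slot connection -> nominally line rate
print(oversubscription(4, 10, 40))  # 1.0
# 6708: 8 x 10G on a 40G slot connection -> 2:1 oversubscribed
print(oversubscription(8, 10, 40))  # 2.0
```

Which is why filling only 4 of the 6708's 8 ports gives the "wire-speed card" described above.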

 Paul
 
 

-- 
  ++ytti
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Cisco 7970 Screen Saver wake on ring

2009-03-31 Thread Skeeve Stevens
Hey all,

This is a little question which is driving me nuts and my google-fu isn't 
working.

My 7970 (hanging off an Asterisk box) has a screen saver... which is nice 
but when the phone rings, the screen stays blank.

I don't know if I should be answering it if it is blank... currently I have to 
reach over and push the lit screen button to bring the screen back.

Is there any 'wake screen on ring' option on the handset that anyone knows?

...Skeeve

--
Skeeve Stevens, CEO/Technical Director
eintellego Pty Ltd - The Networking Specialists
ske...@eintellego.net / www.eintellego.net
Phone: 1300 753 383, Fax: (+612) 8572 9954
Cell +61 (0)414 753 383 / skype://skeeve
--
NOC, NOC, who's there?

Disclaimer: Limits of Liability and Disclaimer: This message is for the named 
person's use only. It may contain sensitive and private proprietary or legally 
privileged information. You must not, directly or indirectly, use, disclose, 
distribute, print, or copy any part of this message if you are not the intended 
recipient. eintellego Pty Ltd and each legal entity in the Tefilah Pty Ltd 
group of companies reserve the right to monitor all e-mail communications 
through its networks.  Any views expressed in this message are those of the 
individual sender, except where the message states otherwise and the sender is 
authorised to state them to be the views of any such entity. Any reference to 
costs, fee quotations, contractual transactions and variations to contract 
terms is subject to separate confirmation in writing signed by an authorised 
representative of eintellego. Whilst all efforts are made to safeguard inbound 
and outbound e-mails, we cannot guarantee that attachments are virus-free or 
compatible with your systems and do not accept any liability in respect of 
viruses or computer problems experienced.



___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] ATM on 7206 - PVC 255 BICI ?

2009-03-31 Thread Lamar Owen
On Tuesday 31 March 2009 00:41:18 Jay Hennigan wrote:
 The provider referring to a BICI interface connection type.  Is there
 support for this on the 7206 platform or do we need to have them use VP
 of 255 or less?

The Broadband InterCarrier Interface, AFAIK, is only supported by Cisco on 
their WAN ATM switches/concentrators (such as MGX 8900 and BPX 8600); but I 
reserve the right to be wrong.  The LAN version, PNNI, is typically supported 
switch-to-switch.  AFAIK, the PA's for 7200 only support UNI.  Catalyst 
8500MSR and LightStream 1010 support PNNI, but not B-ICI (except for ATM to 
Frame Relay Interworking, AFAIK).

 I found some reference to BICI on CCO but nothing specific.

Try with a dash: B-ICI.
-- 
Lamar Owen
Chief Information Officer
Pisgah Astronomical Research Institute
1 PARI Drive
Rosman, NC  28772
http://www.pari.edu
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Cisco 3750 high CPU utilization HL3U bkgrd]

2009-03-31 Thread Ioan Branet
Hello,

We have many Cisco 3750 switches in our network with high CPU utilization.
It seems the process causing the high load is the HL3U bkgrd process.

The problem is solved by a reload but appears again after 3-4 months.

We also changed the IOS, but with no results.

It seems to be a bug, but I am not sure.



sh processes cpu sorted | ex 0.00
CPU utilization for five seconds: 99%/28%; one minute: 85%; five minutes: 81%
 PID Runtime(ms)   Invoked  uSecs   5Sec   1Min   5Min TTY Process
 108   389775804   4389443  88799 57.57% 40.01% 39.39%   0 HL3U bkgrd proce
  58    11854779  72185839    164  3.50%  2.77%  2.31%   0 HLFM address lea
 292         689       192   3588  1.91%  0.33%  0.07%   1 Virtual Exec
  47    12845296   2142151   5996  1.11%  1.00%  1.04%   0 FE free chunk
 245    17376827    532655  32623  0.63%  0.51%  0.52%   0 MFI LFD Stats Pr
 107     5476276  58476944     93  0.63%  0.62%  0.58%   0 Hulc LED Process
  74      768210  21312879     36  0.31%  0.09%  0.08%   0 hpm main process
 135     6540410  20282165    322  0.15%  0.18%  0.22%   0 IP Input
 143     3566619  27781902    128  0.15%  0.24%  0.20%   0 Spanning Tree
  45     1004640 128285520      7  0.15%  0.15%  0.13%   0 Fifo Error Detec
 138     1152329   2735155    421  0.15%  0.13%  0.12%   0 PI MATM Aging Pr
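When chasing a process like HL3U bkgrd across many switches, it helps to pull the hot processes out of this output programmatically. A sketch; the regex is tuned to the `show processes cpu` column layout shown above, and field spacing can vary by IOS release:

```python
import re

# One row of IOS "show processes cpu sorted": PID, runtimes, three CPU
# percentages, TTY, then the process name. Spacing varies, so anchor on the
# percent columns rather than fixed widths.
ROW = re.compile(
    r"^\s*(?P<pid>\d+)\s+.*?"
    r"(?P<five_sec>\d+\.\d+)%\s+(?P<one_min>\d+\.\d+)%\s+(?P<five_min>\d+\.\d+)%"
    r"\s+\d+\s+(?P<name>.+?)\s*$"
)

def hot_processes(output: str, threshold: float = 10.0):
    """Return (name, five_sec_pct) for processes above the 5-second threshold."""
    hits = []
    for line in output.splitlines():
        m = ROW.match(line)
        if m and float(m.group("five_sec")) >= threshold:
            hits.append((m.group("name"), float(m.group("five_sec"))))
    return hits

sample = "108   389775804   4389443  88799 57.57% 40.01% 39.39%   0 HL3U bkgrd"
print(hot_processes(sample))  # [('HL3U bkgrd', 57.57)]
```

Fed the full output above, this would flag only the HL3U bkgrd row at the default 10% threshold.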


sh version
Cisco IOS Software, C3750ME Software (C3750ME-I5-M), Version 12.2(37)SE1,
RELEASE SOFTWARE (fc1)
Copyright (c) 1986-2007 by Cisco Systems, Inc.
Compiled Thu 05-Jul-07 20:06 by antonino
Image text-base: 0x3000, data-base: 0x0163F400

ROM: Bootstrap program is C3750 boot loader
BOOTLDR: C3750ME Boot Loader (C3750ME-HBOOT-M) Version 12.1(14r)AX, RELEASE
SOFTWARE (fc1)

vic102 uptime is 4 weeks, 4 days, 10 hours, 9 minutes
System returned to ROM by power-on
System restarted at 03:01:54 GMT Fri Feb 27 2009
System image file is flash:c3750me-i5-mz.122-37.SE1.bin

cisco ME-C3750-24TE (PowerPC405) processor (revision F0) with 118784K/12280K
bytes of memory.
Processor board ID CAT1043NM05
Last reset from power-on
8 Virtual Ethernet interfaces
24 FastEthernet interfaces
4 Gigabit Ethernet interfaces
The password-recovery mechanism is enabled.

1024K bytes of flash-simulated non-volatile configuration memory.
Base ethernet MAC Address   : 00:19:E8:87:23:00
Motherboard assembly number : 73-9938-04
Motherboard serial number   : CAT104356B7
Model revision number   : F0
Motherboard revision number : A0
Model number: ME-C3750-24TE-M
Daughterboard assembly number   : 73-9939-02
Daughterboard serial number : CAT104355CQ
System serial number: CAT1043NM05
Top Assembly Part Number: 800-25952-04
Top Assembly Revision Number: C0
Version ID  : V05
CLEI Code Number: COM1510ARA
Daughterboard revision number   : A0
Hardware Board Revision Number  : 0x09


Switch  Ports  Model          SW Version   SW Image
------  -----  -------------  -----------  ------------
*    1  28     ME-C3750-24TE  12.2(37)SE1  C3750ME-I5-M

Configuration register is 0xF

#sh memory | i HL
030C8118 005000 030C804C 030C94CC 001    005CF8E4  HLFM
MAC
030C94CC 000808 030C8118 030C9820 001    005CF93C  HLFM
IP
0320A434 000808 0320A008 0320A788 001    00CB2F74
HL3U_IPV4_TABLE_CHUNK
0320A788 02 0320A434 0320F5D4 001    00CB2F9C
HL3U_FIB_TYPE_CHUNK
0320F5D4 032768 0320A788 03217600 001    00CB2FC4
HL3U_MPATH_ADJ_TYPE_CHUNK
03217600 000808 0320F5D4 03217954 001    00CB2FEC
HL3U_FIB_WITH_ADJ_OR_TCAM_FAIL_CHUNK
03217954 002000 03217600 03218150 001    00CB3014
HL3U_COVERING_FIB_CHUNK
03218150 000808 03217954 032184A4 001    00CB303C
HL3U_ARP_HRPC_THROTTLE_CHUNKS
032184A4 000432 03218150 03218680 001    00CB3064
HL3U_HSRP_RETRY_CHUNKS
03218680 000808 032184A4 032189D4 001    00CB308C
HL3U_PROXY_ARP_CHUNKS
032189D4 000432 03218680 03218BB0 001    00CB30B4
HL3U_QUERIER_INFO_CHUNKS
03218BB0 003620 032189D4 03219A00 001    00CB30DC
HL3U_ICMP_REDIRECT_Q_CHUNK
03219A00 000296 03218BB0 03219B54 001    00CB3104
HL3U_OUT_ACL_FULL_CHUNKS
032F2BCC 000960 032F252C 032F2FB8 001    00CBE090
HL3U_FIB_WITH_
0330BD38 000176 0330B698 0330BE14 001    00B18A5C
HL2MCM
0330C174 000160 0330C0C0 0330C240 001    01622A8C
HL2MCM
036F9C6C 000972 036F9A8C 036FA064 001    00CB7108
HL3U_FIB_WITH_
036FA064 000872 036F9C6C 036FA3F8 001    00CBE090
HL3U_FIB_WITH_
0390629C 24 03906258 039062E0 001    00B1AFAC
HL2MCM
039906D8 001292 0399008C 03990C10 001    00CBE090
HL3U_FIB_WITH_

[c-nsp] C2800 IP Base and IP SLA / RTR

2009-03-31 Thread Peter Rathlev
Hello,

We're about to buy and set up a new batch of IP SLA/RTR units and are looking
at the C2800 for the purpose. I can see from FN that IP Base apparently
doesn't do IP SLA/RTR, and that we have to get Enterprise Base for that.
Can this be true?

I only have C2800 Enterprise Base in production right now, but we have a
lot of C2600 IP Feature Set (12.3(26)) routers doing RTR now. Do we have
to shell out the extra ££ for Enterprise Base, or does anyone have any
other ideas for rack-mountable RTR units?

Thank you.

Regards,
Peter
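What RTR/IP SLA does on-box can be approximated off-box while evaluating licenses. A rough sketch of one probe type, timing a TCP handshake; this only approximates the on-router feature (no per-hop QoS marking, no hardware timestamps), and the demo listener is a stand-in for a real target:

```python
import socket
import time

def tcp_connect_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Poor man's IP SLA probe: time a TCP three-way handshake to a target."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; closing it is enough for the measurement
    return (time.monotonic() - start) * 1000.0

# Demo against a throwaway local listener so the sketch is self-contained:
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
rtt = tcp_connect_rtt_ms("127.0.0.1", listener.getsockname()[1])
listener.close()
print(f"{rtt:.2f} ms")
```

Run from any box near the measurement point, it gives a baseline to compare against whatever the eventual RTR unit reports.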


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

Re: [c-nsp] C2800 IP Base and IP SLA / RTR

2009-03-31 Thread Ziv Leyes
We have a 7200VXR with c7200-is-mz.124-13b.bin, which does support IP SLA, 
but I don't know whether the same IOS version on a different platform might 
lack it. I think IP Advanced Services also supports IP SLA; if it's cheaper 
than Enterprise then you could go for it.
Hope this helps
Ziv




-Original Message-
From: cisco-nsp-boun...@puck.nether.net 
[mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Peter Rathlev
Sent: Tuesday, March 31, 2009 3:51 PM
To: cisco-nsp
Subject: [c-nsp] C2800 IP Base and IP SLA / RTR



___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

[c-nsp] 7600 SNMP stats and processes issue

2009-03-31 Thread Mark Tech

Hi
I have migrated an Ethernet customer from our 7206 (PE) to our new 7609 (PE), 
and I am now seeing some very strange SNMP interface results from the 7600. 
The graph itself looks very spiky; when, for example, a BGP change takes 
place, there is a trough of about 20Mbps, then a peak of about 20Mbps, then 
a trough of about 20Mbps, then back to normal.
 
The customer is only using about 60Mbps, so it's very obvious.
 
It's almost as if SNMP is given low priority while other processes, i.e. BGP, 
do their work. This is affecting us, as the customer, who monitors their own 
network, is asking why our graphs look so different, whereas before, when 
they were connected to the VXR, they matched almost perfectly.
 
My concern is that SNMP is not working as well as it should, which in turn 
will alter our results for 95th-percentile customers etc.
 
Has anyone else experienced this?
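The billing concern can be reasoned about concretely: a delayed counter read produces a matched trough/spike pair, which distorts the graph but largely cancels out of a 95th-percentile calculation. A sketch with invented sample numbers (the exact percentile index convention varies between billing systems):

```python
def percentile_95(rates_mbps):
    """Industry-style 95th percentile: sort the samples, discard the top 5%."""
    ordered = sorted(rates_mbps)
    index = int(len(ordered) * 0.95) - 1   # one common convention; systems vary
    return ordered[max(index, 0)]

# A steady 60 Mbps customer polled every 5 minutes (100 samples), with one
# trough/spike pair caused by a delayed counter read, as described above:
steady = [60.0] * 100
glitched = [60.0] * 98 + [40.0, 80.0]

print(percentile_95(steady))    # 60.0
print(percentile_95(glitched))  # 60.0 -- the lone spike falls in the top 5%
```

So an occasional polling glitch mostly washes out of the 95th percentile; sustained mis-timed polling is the case that would actually skew billing.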
 
Regards
 
Mark


  

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] High IPC LC Message Header utilization

2009-03-31 Thread David Coulson
I've noticed weird CPU utilization on a 7513 recently upgraded to 
12.4(23) (AdvIP w/ SSH). The two top processes, by CPU usage, are these:


 103     5834624   9290243    628  2.78%  2.58%  2.50%   0 IPC LC Message H
   3     3708356  10387227    357  0.98%  0.87%  0.82%   0 IPC CBus process


Not exactly anything to get excited about, but that's ~4% CPU which 
wasn't in use last week. I compared it to another 7500 running the same 
code, which has very different CPU utilization for these processes:


 103        9492   2990653      3  0.00%  0.00%  0.00%   0 IPC LC Message H
  48     1319160   3539843    372  0.08%  0.15%  0.15%   0 IPC CBus process


The runtime on the IPC LC Message Header process is obviously way out of 
whack on the first one - is there a way to track down what is causing 
this process to consume CPU? I had a quick look through the IPC sessions 
and queues, with nothing that would really account for such a 
difference in runtime.


Any idea where to look?
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IP Address management software

2009-03-31 Thread Steve Bertrand
luismi wrote:
 We use IPPlan here.

Us too. The only drawback is that it doesn't handle IPv6.

Steve
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Redundant switch fabric

2009-03-31 Thread Mike Louis
I have a solution design that requires redundant switch fabrics. I am 
interpreting this as going beyond just having redundant supervisors, to mean 
redundant backplanes on the switch cards. Do the 6500 and 4500 support 
redundant fabrics? Will a 6748 function with one trace failed?

Note: This message and any attachments is intended solely for the use of the 
individual or entity to which it is addressed and may contain information that 
is non-public, proprietary, legally privileged, confidential, and/or exempt 
from disclosure. If you are not the intended recipient, you are hereby notified 
that any use, dissemination, distribution, or copying of this communication is 
strictly prohibited. If you have received this communication in error, please 
notify the original sender immediately by telephone or return email and destroy 
or delete this message along with any attachments immediately.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] IP Address management software

2009-03-31 Thread luismi
Well, I think the best option then is to donate code or money to the
project, to develop the features we need.

In my case we don't need IPv6 right now, but I don't consider that an
excuse not to give some funds or other resources to the developers.

On Tue, 2009-03-31 at 10:06 -0400, Steve Bertrand wrote:
 luismi wrote:
  We use IPPlan here.
 
 Us too. The only drawback is that it doesn't handle IPv6.
 
 Steve

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

Re: [c-nsp] Redundant switch fabric

2009-03-31 Thread Brad Hedlund
Mike,
The 6500 and 4500 have the switch fabric on the supervisor engines, so by
having dual supervisors, you in effect have a redundant fabric.

The 6748 actually has 4 traces, each 20G.  2 traces connect to the active
supervisor containing the active switch fabric.  The remaining 2 traces are
standby connections to the standby supervisor/fabric.  So, when a supervisor
engine and its fabric fails, the 2 standby traces are enabled and the full
40G of bandwidth remains.  You never, under normal circumstances, have only
a single trace active on a 6748.  Newer versions of IOS provide a hot
standby fabric feature which allows this fabric trace switchover to happen
faster, roughly 50ms.
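
To sanity-check this on a live box, the 6500/7600 CLI can show the fabric
state per module; command names along these lines exist on the SX-train IOS
releases (output format varies by version, so treat this as a sketch):

 6500# show fabric status all
 6500# show fabric switching-mode
 6500# show fabric utilization all

The first shows each module's fabric channel status (including standby
channels), the second whether a module is running in bus, crossbar, or dCEF
mode.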

For the best in redundant designs, consider the Nexus 7000, where the switch
fabric is decoupled from the supervisor engines into a series of redundant
fabric modules installed in the back of the switch.  Should a supervisor
engine fail in a Nexus 7000 there is ZERO impact to the switch fabric, because
the supervisor engine does not forward data plane traffic.

Cheers,

Brad Hedlund
bhedl...@cisco.com
http://www.internetworkexpert.org


On 3/31/09 9:05 AM, Mike Louis mlo...@nwnit.com wrote:

 I have a solution design that requires redundant switch fabrics. I am
 interpreting this beyond just have redundant supervisors meaning redundant
 backplanes on the switch cards. Do the 6500 and 4500 support redundant
 fabrics? Will a 6748 function with one trace failed?
 
 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/




___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] C2800 IP Base and IP SLA / RTR

2009-03-31 Thread Church, Charles
Definitely need to check Feature Navigator.  We found this same thing out.  IP 
Base on the 2600/2800 does not equal IP Base on small switches or 7200s.  
IP SLA is the feature to look for.

Chuck

-Original Message-
From: cisco-nsp-boun...@puck.nether.net 
[mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Ziv Leyes
Sent: Tuesday, March 31, 2009 9:04 AM
To: cisco-nsp
Subject: Re: [c-nsp] C2800 IP Base and IP SLA / RTR

We have a 7200VXR with c7200-is-mz.124-13b.bin, which does support IP SLA, but 
I don't know whether the same IOS version on a different platform might lack it.
I think IP Advanced Services also supports IP SLA; if it's cheaper than 
Enterprise, you could go for that.
Hope this helps,
Ziv
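
For reference, the probe itself is tiny once you're on a feature set that has
it; in 12.4T-style syntax (older images use ip sla monitor or rtr instead of
ip sla), a basic ICMP echo probe looks like this, with the target address and
source interface as placeholders:

ip sla 10
 icmp-echo 192.0.2.1 source-interface FastEthernet0/0
 frequency 60
ip sla schedule 10 life forever start-time now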




-Original Message-
From: cisco-nsp-boun...@puck.nether.net 
[mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Peter Rathlev
Sent: Tuesday, March 31, 2009 3:51 PM
To: cisco-nsp
Subject: [c-nsp] C2800 IP Base and IP SLA / RTR

Hello,

We're about to buy and set up a new batch of IP SLA/RTR units and are looking 
at the C2800 for the purpose. I can see from Feature Navigator that IP Base 
apparently doesn't do IP SLA/RTR, and that we have to get Enterprise Base for 
that. Can this be true?

I only have C2800 Enterprise Base in production right now, but we have a lot of 
C2600 IP Feature Set (12.3(26)) routers doing RTR now. Do we have to shell out 
the extra ££ for Enterprise Base, or does anyone have other ideas for 
rack-mountable RTR units?

Thank you.

Regards,
Peter


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net 
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Geoffrey Pendery
The stuff we've been reading (look at Supervisor Engines Supported
on the data sheets for the Cisco Catalyst 6500 Series 10 Gigabit Ethernet
Interface Modules, browse the line cards for the 7600, or go into the
Configurator tool) claims that the RSP 720 won't support the X6704 or
X6708 10 Gig LAN cards, only the SIP/SPA/ES WAN type cards.

I don't mean to kick off a big 6500 vs 7600 storm again, but does
anyone know if this is incorrect?
Can we buy a new 7609-S chassis, put a new RSP 720 in it, put 7600 IOS
on that Sup, then plug in a WS-X6708-10G-3C and have it work?


-Geoff


On Mon, Mar 30, 2009 at 4:41 AM, Mark Tech techcon...@yahoo.com wrote:

 Hi
 I have a prospect for a 10G upstream customer and Upstream ISP connections. I 
 would need to connect these into our 7609s running RSP 720-3CXL's, at the 
 moment I have found that the WS-X6704-10GE card may be suitable.

 My technical requirements are:
 10Gbps line rate
 IPv4
 Able to handle full Internet routing table
 Potentially IPv6 and MPLS in the future

 With the WS-X6704-10GE, there seems to be several options that are available 
 with it i.e.

 Memory Option:
 MEM-XCEF720-256M
 Catalyst 6500 256MB DDR, xCEF720 (67xx interface, DFC3A)
 MEM-XCEF720-512M
 Cat 6500 512MB DDR, xCEF720 (67xx interface, DFC3A/DFC3B)
 MEM-XCEF720-1GB
 Catalyst 6500 1GB DDR, xCEF720 (67xx interface, DFC3BXL)

 
 Distributed Forwarding Card Option

 WS-F6700-CFC
 Catalyst 6500 Central Fwd Card for WS-X67xx modules
 WS-F6700-DFC3B
 Catalyst 6500 Dist Fwd Card, 256K Routes for WS-X67xx
 WS-F6700-DFC3A
 Catalyst 6500 Dist Fwd Card for WS-X67xx modules
 WS-F6700-DFC3BXL
 Catalyst 6500 Dist Fwd Card- 3BXL, for WS-X67xx
 WS-F6700-DFC3C
 Catalyst 6500 Dist Fwd Card for WS-X67xx modules
 WS-F6700-DFC3CXL
 Catalyst 6500 Dist Fwd Card- 3CXL, for WS-X67xx

 I assume that I would need MEM-XCEF720-1GB and WS-F6700-DFC3CXL?

 Regards

 Mark




 ___
 cisco-nsp mailing list  cisco-...@puck.nether.net
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Redundant switch fabric

2009-03-31 Thread Justin C. Darby

Mike,

Just to chime in here a bit with some experience - we've had Nexus 7K 
switch backplane modules fail - unless you are pushing near 100% 
backplane utilization you don't even notice until it emails you or your 
config monitoring program notices the failed module. In recent NX-OS 
releases, In Service Software Upgrades are working properly 100% of the 
time for us, and outside of the fact it can take 3-4 hours to upgrade a 
fully loaded switch, there's no real downtime if you've got working port 
redundancy across modules, and modules only go down one at a time like 
they're supposed to.


Considering how distributed and redundant the components of the switch are, 
it's pretty unlikely you'd run into huge redundancy problems with any 
single component. I don't have enough N7Ks to play with Virtual Port 
Channels (vPCs), but it'd be interesting to see if they have any issues 
when upgrading switches. vPCs can add extreme (and usable) redundancy to 
a multi-chassis design, if you want to go a step further.


Justin

P.S. Comments made here are my own and should not in any way be 
considered an endorsement by the U.S. Federal Government.


Brad Hedlund wrote:

Mike,
The 6500 and 4500 have the switch fabric on the supervisor engines, so by
having dual supervisors, you in effect have a redundant fabric.

The 6748 actually has 4 traces, each 20G.  2 traces connect to the active
supervisor containing the active switch fabric.  The remaining 2 traces are
standby connections to the standby supervisor/fabric.  So, when a supervisor
engine and its fabric fails, the 2 standby traces are enabled and the full
40G of bandwidth remains.  You never, under normal circumstances, have only
a single trace active on 6748.  Newer versions of IOS provide a hot
standby fabric feature which allows this fabric trace switch over to happen
faster - roughly 50ms.

For the best in redundant designs, consider the Nexus 7000, where the switch
fabric is decoupled from the supervisor engines into a series redundant
fabric modules installed into the back of the switch.  Should a supervisor
engine fail in Nexus 7000 there is ZERO impact to the switch fabric, because
the supervisor engine does not forward data plane traffic.

Cheers,

Brad Hedlund
bhedl...@cisco.com
http://www.internetworkexpert.org


On 3/31/09 9:05 AM, Mike Louis mlo...@nwnit.com wrote:

  

I have a solution design that requires redundant switch fabrics. I am
interpreting this beyond just have redundant supervisors meaning redundant
backplanes on the switch cards. Do the 6500 and 4500 support redundant
fabrics? Will a 6748 function with one trace failed?

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/








___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Redundant switch fabric

2009-03-31 Thread Buhrmaster, Gary

 I have a solution design that requires redundant switch 
 fabrics. I am interpreting this beyond just have redundant 
 supervisors meaning redundant backplanes on the switch cards. 
 Do the 6500 and 4500 support redundant fabrics? Will a 6748 
 function with one trace failed?

If you really need redundancy at those levels, you
really want two chassis.  While rare, even the
chassis can fail (as documented, about once a year,
by someone on this list); for interesting
problems TAC may want you to OIR cards, which can cause
interesting results on the entire chassis;
and no matter how good the hardware, software bugs
can always take out the entire box.

In-box reliability is valuable and desirable, but
when you need really high availability, you have
to at least consider an out-of-the-box solution.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Redundant switch fabric

2009-03-31 Thread Brad Hedlund
Justin / Mike,

When performing ISSU on Nexus 7000 the interface modules only need to be
reset if the code you are upgrading to also requires an upgrade of the
module's EPLD (erasable programmable logic device).  This is not always
required.  So in many cases you can perform an ISSU upgrade with ZERO impact
to the interface modules.

The CLI will inform you if a software upgrade will be disruptive to the
interface modules before you proceed with the upgrade.

Additionally, you can check the Nexus 7000 EPLD Release Notes prior to an
upgrade to see if your new code will require any change to the EPLD.

http://www.cisco.com/en/US/docs/switches/datacenter/sw/4_1/epld/epld_rn.html
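
Before pulling the trigger on an ISSU, you can also ask the switch itself
whether the upgrade will be disruptive; on NX-OS 4.x the pre-check looks
roughly like this (the image filenames are placeholders for your target
release):

 n7k# show install all impact kickstart bootflash:n7000-s1-kickstart.4.1.4.bin system bootflash:n7000-s1-dk9.4.1.4.bin

The output lists, per module, whether the install is hitless or requires a
reset.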

Cheers,

Brad Hedlund
bhedl...@cisco.com
http://www.internetworkexpert.org


On 3/31/09 11:51 AM, Justin C. Darby jcda...@usgs.gov wrote:

 Mike,
 
 Just to chime in here a bit with some experience - we've had Nexus 7K
 switch backplane modules fail - unless you are pushing near 100%
 backplane utilization you don't even notice until it emails you or your
 config monitoring program notices the failed module. In recent NX-OS
 releases, In Service Software Upgrades are working properly 100% of the
 time for us, and outside of the fact it can take 3-4 hours to upgrade a
 fully loaded switch, there's no real downtime if you've got working port
 redundancy across modules, and modules only go down one at a time like
 they're supposed to.
 
 Considering how distributed and redundant components of the switch are -
 it's pretty unlikely you'd run into huge redundancy problems with any
 single component. I don't have enough N7K's to play with Virtual Port
 Channels (vPCs), but it'd be interesting to see if they have any issues
 when upgrading switches. vPCs can add extreme (and usable) redundancy to
 multi-chassis design, if you want to go a step farther.
 
 Justin
 
 P.S. Comments made here are my own and should not in any way be
 considered an endorsement by the U.S. Federal Government.
 
 Brad Hedlund wrote:
 Mike,
 The 6500 and 4500 have the switch fabric on the supervisor engines, so by
 having dual supervisors, you in effect have a redundant fabric.
 
 The 6748 actually has 4 traces, each 20G.  2 traces connect to the active
 supervisor containing the active switch fabric.  The remaining 2 traces are
 standby connections to the standby supervisor/fabric.  So, when a supervisor
 engine and its fabric fails, the 2 standby traces are enabled and the full
 40G of bandwidth remains.  You never, under normal circumstances, have only
 a single trace active on 6748.  Newer versions of IOS provide a hot
 standby fabric feature which allows this fabric trace switch over to happen
 faster - roughly 50ms.
 
 For the best in redundant designs, consider the Nexus 7000, where the switch
 fabric is decoupled from the supervisor engines into a series redundant
 fabric modules installed into the back of the switch.  Should a supervisor
 engine fail in Nexus 7000 there is ZERO impact to the switch fabric, because
 the supervisor engine does not forward data plane traffic.
 
 Cheers,
 
 Brad Hedlund
 bhedl...@cisco.com
 http://www.internetworkexpert.org
 
 
 On 3/31/09 9:05 AM, Mike Louis mlo...@nwnit.com wrote:
 
   
 I have a solution design that requires redundant switch fabrics. I am
 interpreting this beyond just have redundant supervisors meaning redundant
 backplanes on the switch cards. Do the 6500 and 4500 support redundant
 fabrics? Will a 6748 function with one trace failed?
 
 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/
 
 
 
 
 




___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Redundant switch fabric

2009-03-31 Thread Tony Varriale
I've had a colleague run into an issue going to 4.1(3) (long story, but it's 
intrusive either way you slice it, and that's how all boxes are).  What did 
you upgrade from and to?


tv
- Original Message - 
From: Justin C. Darby jcda...@usgs.gov

To: Brad Hedlund brhed...@cisco.com
Cc: cisco-nsp@puck.nether.net
Sent: Tuesday, March 31, 2009 11:51 AM
Subject: Re: [c-nsp] Redundant switch fabric



Mike,

Just to chime in here a bit with some experience - we've had Nexus 7K 
switch backplane modules fail - unless you are pushing near 100% backplane 
utilization you don't even notice until it emails you or your config 
monitoring program notices the failed module. In recent NX-OS releases, In 
Service Software Upgrades are working properly 100% of the time for us, 
and outside of the fact it can take 3-4 hours to upgrade a fully loaded 
switch, there's no real downtime if you've got working port redundancy 
across modules, and modules only go down one at a time like they're 
supposed to.


Considering how distributed and redundant components of the switch are - 
it's pretty unlikely you'd run into huge redundancy problems with any 
single component. I don't have enough N7K's to play with Virtual Port 
Channels (vPCs), but it'd be interesting to see if they have any issues 
when upgrading switches. vPCs can add extreme (and usable) redundancy to 
multi-chassis design, if you want to go a step farther.


Justin

P.S. Comments made here are my own and should not in any way be considered 
an endorsement by the U.S. Federal Government.


Brad Hedlund wrote:

Mike,
The 6500 and 4500 have the switch fabric on the supervisor engines, so 
by

having dual supervisors, you in effect have a redundant fabric.

The 6748 actually has 4 traces, each 20G.  2 traces connect to the active
supervisor containing the active switch fabric.  The remaining 2 traces 
are
standby connections to the standby supervisor/fabric.  So, when a 
supervisor
engine and its fabric fails, the 2 standby traces are enabled and the 
full
40G of bandwidth remains.  You never, under normal circumstances, have 
only

a single trace active on 6748.  Newer versions of IOS provide a hot
standby fabric feature which allows this fabric trace switch over to 
happen

faster - roughly 50ms.

For the best in redundant designs, consider the Nexus 7000, where the 
switch

fabric is decoupled from the supervisor engines into a series redundant
fabric modules installed into the back of the switch.  Should a 
supervisor
engine fail in Nexus 7000 there is ZERO impact to the switch fabric, 
because

the supervisor engine does not forward data plane traffic.

Cheers,

Brad Hedlund
bhedl...@cisco.com
http://www.internetworkexpert.org


On 3/31/09 9:05 AM, Mike Louis mlo...@nwnit.com wrote:



I have a solution design that requires redundant switch fabrics. I am
interpreting this beyond just have redundant supervisors meaning 
redundant

backplanes on the switch cards. Do the 6500 and 4500 support redundant
fabrics? Will a 6748 function with one trace failed?

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/








___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Redundant switch fabric

2009-03-31 Thread Tony Varriale

In Ciscoland, that is redundant sups.

I would recommend asking for clarification, and educating them on how 
redundant sups and redundant boxes provide different resiliency options.


tv
- Original Message - 
From: Mike Louis mlo...@nwnit.com

To: cisco-nsp@puck.nether.net
Sent: Tuesday, March 31, 2009 9:05 AM
Subject: [c-nsp] Redundant switch fabric


I have a solution design that requires redundant switch fabrics. I am 
interpreting this beyond just have redundant supervisors meaning redundant 
backplanes on the switch cards. Do the 6500 and 4500 support redundant 
fabrics? Will a 6748 function with one trace failed?



___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/ 


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Redundant switch fabric

2009-03-31 Thread Justin C. Darby
We had issues with 4.0(?) releases, mostly related to strange behavior 
of a few features (dhcp relay, DAI, port security, etc) that required a 
full reload after a software upgrade to clear up completely. 4.1(?) has 
been fine so far, and the last upgrade we did was 4.1(2) to 4.1(4) and 
it went through without any downtime. We skipped over 4.1(3) since we 
never got around to scheduling it.


Justin

Tony Varriale wrote:
I've had a colleague run into an issue going to 4.1.3 (long story but 
it's intrusive either way you slice it and is how all boxes are).  
What was your upgrade from and to?


tv
- Original Message - From: Justin C. Darby jcda...@usgs.gov
To: Brad Hedlund brhed...@cisco.com
Cc: cisco-nsp@puck.nether.net
Sent: Tuesday, March 31, 2009 11:51 AM
Subject: Re: [c-nsp] Redundant switch fabric



Mike,

Just to chime in here a bit with some experience - we've had Nexus 7K 
switch backplane modules fail - unless you are pushing near 100% 
backplane utilization you don't even notice until it emails you or 
your config monitoring program notices the failed module. In recent 
NX-OS releases, In Service Software Upgrades are working properly 
100% of the time for us, and outside of the fact it can take 3-4 
hours to upgrade a fully loaded switch, there's no real downtime if 
you've got working port redundancy across modules, and modules only 
go down one at a time like they're supposed to.


Considering how distributed and redundant components of the switch 
are - it's pretty unlikely you'd run into huge redundancy problems 
with any single component. I don't have enough N7K's to play with 
Virtual Port Channels (vPCs), but it'd be interesting to see if they 
have any issues when upgrading switches. vPCs can add extreme (and 
usable) redundancy to multi-chassis design, if you want to go a step 
farther.


Justin

P.S. Comments made here are my own and should not in any way be 
considered an endorsement by the U.S. Federal Government.


Brad Hedlund wrote:

Mike,
The 6500 and 4500 have the switch fabric on the supervisor 
engines, so by

having dual supervisors, you in effect have a redundant fabric.

The 6748 actually has 4 traces, each 20G.  2 traces connect to the 
active
supervisor containing the active switch fabric.  The remaining 2 
traces are
standby connections to the standby supervisor/fabric.  So, when a 
supervisor
engine and its fabric fails, the 2 standby traces are enabled and 
the full
40G of bandwidth remains.  You never, under normal circumstances, 
have only

a single trace active on 6748.  Newer versions of IOS provide a hot
standby fabric feature which allows this fabric trace switch over 
to happen

faster - roughly 50ms.

For the best in redundant designs, consider the Nexus 7000, where 
the switch

fabric is decoupled from the supervisor engines into a series redundant
fabric modules installed into the back of the switch.  Should a 
supervisor
engine fail in Nexus 7000 there is ZERO impact to the switch fabric, 
because

the supervisor engine does not forward data plane traffic.

Cheers,

Brad Hedlund
bhedl...@cisco.com
http://www.internetworkexpert.org


On 3/31/09 9:05 AM, Mike Louis mlo...@nwnit.com wrote:



I have a solution design that requires redundant switch fabrics. I am
interpreting this beyond just have redundant supervisors meaning 
redundant

backplanes on the switch cards. Do the 6500 and 4500 support redundant
fabrics? Will a 6748 function with one trace failed?

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/








___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/



[c-nsp] IPSec tunnel between Cisco router and PFSense firewall.

2009-03-31 Thread luismi
Is there anyone with a template to connect a Cisco router to a PFSense
firewall using IPSec?

Well, I think any other template would be a good start point too.

Thanks in advance.
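
For the archives, a minimal IOS crypto-map config of the kind that usually 
interoperates with pfSense looks like the below; it is a sketch, not a 
pfSense-tested template. All addresses, keys, and list numbers are 
placeholders, and the ISAKMP/IPsec proposals must match whatever phase 1 and 
phase 2 settings are configured on the pfSense side:

crypto isakmp policy 10
 encr aes 256
 authentication pre-share
 group 2
crypto isakmp key MySharedSecret address 203.0.113.2
!
crypto ipsec transform-set PFSENSE esp-aes 256 esp-sha-hmac
!
access-list 101 permit ip 192.168.1.0 0.0.0.255 192.168.2.0 0.0.0.255
!
crypto map VPN 10 ipsec-isakmp
 set peer 203.0.113.2
 set transform-set PFSENSE
 match address 101
!
interface FastEthernet0/0
 crypto map VPN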

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Multilink PPPoA

2009-03-31 Thread Steven Saner


On Mar 30, 2009, at 4:47 PM, Jason Lixfeld wrote:

I actually just set this up on the weekend.  You can use virtual-template 1 
ip unnumbered with an IP pool on the headend, and dialer0 ip address 
negotiated on the client too, if you don't want statically routed clients.



Thanks to a couple offline replies, I got this working. I'm replying  
with my working config for the benefit of anyone searching the  
archives and in case anyone has any comments on something that could  
be done different/better. Thanks for the help.


On the 7206 all I needed to do was add a Virtual-Template interface  
for the multilink and point the relevant pvcs to it:


interface Virtual-Template3
 ip unnumbered Loopback0
 ppp authentication pap dsl
 ppp authorization dsl
 ppp multilink

The only line different from my normal template is the "ppp multilink" line. 
The reference to "dsl" is the name of the RADIUS profile that I'm using.


interface ATM2/0.1000 multipoint
 pvc 3/70
  encapsulation aal5mux ppp Virtual-Template3

 pvc 3/177
  encapsulation aal5mux ppp Virtual-Template3


On the client side (1721) I have the following:

interface Virtual-Template1
 ip address negotiated
 ppp pap sent-username xxx password 
 ppp multilink

The two ATM interfaces look like the following

interface ATM0
 no ip address
 no atm ilmi-keepalive
 dsl operating-mode auto
 dsl enable-training-log
 pvc 0/35
  encapsulation aal5mux ppp Virtual-Template1


That's it. I needed no Dialer interface. The two links come up, get bound, 
and stay up. Is there any particular reason why a Dialer interface would be 
better or advisable?
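
For comparison, the Dialer-based variant of the client side would look 
roughly like this (a sketch, not tested against the setup above; the 
username/password are placeholders). Functionally it mainly buys you a 
stable interface to anchor routes, NAT, and idle/persistence knobs on, 
rather than being a necessity:

interface ATM0
 no ip address
 dsl operating-mode auto
 pvc 0/35
  encapsulation aal5mux ppp dialer
  dialer pool-member 1
!
interface Dialer0
 ip address negotiated
 encapsulation ppp
 dialer pool 1
 ppp pap sent-username xxx password yyy
 ppp multilink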


Also the 1721 is running 12.3 mainline with ADSL WIC support and the  
7206 is running 12.4 mainline.


Steve

--
---
Steven Saner
st...@saner.net





___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Richard A Steenbergen
On Tue, Mar 31, 2009 at 10:01:24AM +0200, Gergely Antal wrote:
 I meant that you can not push 40G out of a 6704
 even with a dfc attached to it.But you can do it with a 6708
 with 1:1 subscription.

Worse, some days you can't even get 7G in from a single port on a 6704
with the other 3 ports unused. We routinely have problems with ingress
interface overruns or egress interface output queue overflows on the 6704 
in that traffic range, and a DFC doesn't make any difference.

It seems like it is head of line blocking, and TAC's only answer is
those things have no buffers, buy a 6708. The problem can usually be
worked around by changing the way traffic is being mapped between the
ports. For example, the one that seems to be the absolute worst case is
in one port and out the other on the same fabric channel, i.e. in port
1 and out port 2 or the same thing on 3/4.
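
If you want to check whether you're hitting the same thing, the per-channel
load and the drops are visible from the CLI; roughly (SX-train IOS commands,
exact fields vary by release, interface name is an example):

 6500# show fabric utilization detail
 6500# show fabric errors
 6500# show interfaces TenGigabitEthernet1/1 counters errors

which lets you correlate overruns on a given port with the utilization of
the fabric channel it shares with its neighbor.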

-- 
Richard A Steenbergen r...@e-gerbil.net   http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] cisco-nsp Digest, Vol 76, Issue 117

2009-03-31 Thread gokhan senol
Hi,
I need help designing the topology below.

The customer has an HQ and now wants to design a disaster-recovery site. As 
you can guess, both the HQ and the DR site have server farms (Active 
Directory, Exchange, and some application servers). There are also about 50 
remote sites; it's an MPLS topology.

The case: if the HQ connection to the ISP goes down, the server farm in HQ 
becomes unreachable and the remote sites will access the DR site. That's OK. 
But if only one server goes down in HQ, what kind of config should I do on 
the HQ router so that connections to that server are rerouted to the backup 
server at the DR site?

I also thought about giving HQ and the DR site the same IP subnet, and about 
using overlapping NAT so each side can reach the other. Is that a correct 
design?

Thanks, everyone.


  
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Jason Lixfeld

Looks like the 6708 isn't as bad as we think.

On 2009-03-31, at 4:12 PM, Richard A Steenbergen r...@e-gerbil.net  
wrote:



On Tue, Mar 31, 2009 at 10:01:24AM +0200, Gergely Antal wrote:

I meant that you cannot push 40G out of a 6704
even with a DFC attached to it. But you can do it with a 6708
with 1:1 subscription.


Worse, some days you can't even get 7G in from a single port on a 6704
with the other 3 ports unused. We routinely have problems with ingress
interface overruns or egress interface output queue overflows on 6704
in that traffic range, and DFC doesn't make any difference.

It seems like it is head of line blocking, and TAC's only answer is
those things have no buffers, buy a 6708. The problem can usually be
worked around by changing the way traffic is being mapped between the
ports. For example, the one that seems to be the absolute worst case is
in one port and out the other on the same fabric channel, i.e. in port
1 and out port 2 or the same thing on 3/4.

--
Richard A Steenbergen r...@e-gerbil.net   http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1  
2CBC)



Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Dodd, Steven
Keith Tokash has a pretty good writeup on the 6704 vs 6708 buffer issue:

http://www.cciecandidate.com/?p=505

-Steve

-Original Message-
From: cisco-nsp-boun...@puck.nether.net
[mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Jason Lixfeld
Sent: Tuesday, March 31, 2009 1:38 PM
To: Richard A Steenbergen
Cc: Gert Doering; cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] 10GE card for 7609

Looks like the 6708 isn't as bad as we think.




Re: [c-nsp] 3750/3750E stack upgrade downtime?

2009-03-31 Thread Dirk-Jan van Helmond

I'm also interested in this question.

We're thinking about getting some Cisco CBS 3110 blade switches to
aggregate the interfaces from the blade servers. The CBS 3110 can stack,
is essentially just a 3750 in a blade enclosure, and follows the same
roadmap as the 3750. I would very much like to have ISSU on these
switches; otherwise an IOS upgrade means downtime for an entire blade
chassis, which is unacceptable.

Unfortunately ISSU is not supported and not on the roadmap :(

I've asked my account manager @Cisco, so please ask yours. Maybe if
we ask kindly enough, they will think about it ;)



regards,
Dirk-Jan





On Mar 30, 2009, at 22:45 , Peter Rathlev wrote:


On Mon, 2009-03-30 at 16:20 -0400, Jeff Kell wrote:

Is there any way to roll an upgrade out to a 3750 stack without
abruptly rebooting the entire stack?


I would very much like to know if there is. AFAIK you can't complete the
upgrade without downtime. Two switches with even just rebuild version
differences can't live together in the same stack, so there's no chance
of upgrading without significant downtime.

We're starting to use 3750E with 10G trunks in between them instead of
Stackwise. This makes it possible to upgrade practically without
downtime.

Regards,
Peter




Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Brian Mengel
This got me curious, so I did a bit of digging.

http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/product_data_sheet09186a00801dce34.html

WS-X6704-10GE - 16 MB port buffers
WS-X6708-10G-3CXL - 200 MB port buffers

http://www.cisco.com/en/US/prod/collateral/routers/ps368/data_sheet_c78-49152.html

7600-ES+4TG3CXL - 512 MB port buffers
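A back-of-the-envelope check on what those data-sheet figures mean at 10G
line rate (a rough sketch only; real per-port and per-queue allocation
depends on the card's ASIC layout and QoS configuration):

```python
def drain_time_ms(buffer_bytes, line_rate_bps):
    """Time to drain a completely full buffer at line rate, in ms."""
    return buffer_bytes * 8 / line_rate_bps * 1000

# Data-sheet buffer sizes from the links above, drained at 10 Gbps
cards = {
    "WS-X6704-10GE": 16e6,
    "WS-X6708-10G-3CXL": 200e6,
    "7600-ES+4TG3CXL": 512e6,
}
for card, buf in cards.items():
    print(f"{card}: {drain_time_ms(buf, 10e9):.1f} ms")
```

So the 6704's 16 MB absorbs only about 13 ms of a 10G burst, which is
consistent with the overrun reports earlier in this thread.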



On Tue, Mar 31, 2009 at 4:37 PM, Jason Lixfeld ja...@lixfeld.ca wrote:
 Looks like the 6708 isn't as bad as we think.




[c-nsp] 7609 SP fails over on KPA failure

2009-03-31 Thread Trevor Korthuis
We have two 7609s that are failing over to the redundant SP approximately every
5 days. These boxes are in a network with PIM-SM multicast and IPv4 routed
traffic, running Sup720s with SRA2. The failovers started after two 6516 GE
cards were removed and replaced with two 6704 10GE cards. When the 6516 cards
were pulled, the boxes changed from ingress multicast replication mode to
egress mode. We have now forced the boxes back into ingress mode and have
pulled the new 6704 cards.

We have a number of boxes with this configuration and these are the only 2 
boxes with these symptoms. Only difference with these boxes is that they have 2 
6704 cards running 4.7 firmware whereas the majority of 6704 cards in our 
network have 4.6 firmware.  In the lab, we have the same configuration and have 
multiple 6704's with the 4.7 firmware and in the same serial number range and 
cannot reproduce the issue. Cisco is currently trying to reproduce the issue in 
their labs. 

Cisco has determined that the ICC channel looks to be congested during one of 
these failovers, but cannot determine why.  Any chance that there is someone 
currently running 7609's in a multicast network having a similar sup720 
failover issue?

Trevor


[c-nsp] OT: Cisco Anyconnect Client with IOS SSL

2009-03-31 Thread Felix Nkansah
Hi Team,
I am trying to set up Cisco IOS SSL VPN to support the AnyConnect client.

Although I have entered all the required commands, the configuration doesn't
work. My IOS is (C2800NM-ADVIPSERVICESK9-M), Version 12.4(22)T.

I would appreciate it if anyone on this list with experience setting up
AnyConnect with IOS could draw my attention to any caveats.

I have included the relevant portion of my router config below for your
review.

Many thanks.

aaa new-model
!
aaa authentication login VPN local
aaa authorization network VPN local

crypto pki trustpoint TP-self-signed-2613188008
 enrollment selfsigned
 subject-name cn=IOS-Self-Signed-Certificate-2613188008
 revocation-check none
 rsakeypair TP-self-signed-2613188008

username remote secret 5 $1$86qN$CJ2uc1l7PYy7a5sNMrPK2/

ip local pool WEBVPN 192.168.250.11 192.168.250.111

webvpn gateway SSL
 hostname CIS-EDGE1
 ip address 80.87.77.18 port 443
 http-redirect port 80
 ssl encryption 3des-sha1 aes-sha1
 ssl trustpoint TP-self-signed-2613188008
 inservice
 !
webvpn install svc flash:/webvpn/svc_1.pkg sequence 1
 !
webvpn install svc flash:/webvpn/svc_2.pkg sequence 2
 !
webvpn install svc flash:/webvpn/svc_3.pkg sequence 3
 !
webvpn context SSL
 ssl authenticate verify all
 !
 !
 policy group SSL
   functions svc-enabled
   svc address-pool WEBVPN
   svc default-domain cisghana.com
   svc keep-client-installed
   svc dpd-interval gateway 30
   svc keepalive 300
   svc split dns cisghana.com
   svc split include 192.168.1.0 255.255.255.0
   svc split include 192.168.3.0 255.255.255.0
   svc split include 192.168.4.0 255.255.255.0
   svc split include 192.168.21.0 255.255.255.0
   svc dns-server primary 192.168.21.17
   svc dns-server secondary 192.168.21.18
 default-group-policy SSL
 aaa authentication list VPN
 aaa authorization list VPN
 gateway SSL domain cisghana.com
 logging enable
 inservice

interface Loopback1
 description For SSL VPN Use
 ip address 192.168.250.250 255.255.255.0

interface GigabitEthernet0/0.80
 encapsulation dot1Q 80
 ip address 80.87.77.18 255.255.255.248
 ip access-group OUTSIDE in   ! this ACL permits ports 80 and 443 to the interface
 no ip unreachables
 ip nat outside
 ip inspect CBAC out
 ip virtual-reassembly


Re: [c-nsp] 3750/3750E stack upgrade downtime?

2009-03-31 Thread Peter Rathlev
On Tue, 2009-03-31 at 22:44 +0200, Dirk-Jan van Helmond wrote:
 I've asked my accountmanager @Cisco, so you please ask yours. Maybe if  
 we ask kind enough, they will think about it ;)

Yeah, that usually works like a charm. Remember BFD for SVIs? ;-)

Regards,
Peter




Re: [c-nsp] 3750/3750E stack upgrade downtime?

2009-03-31 Thread Tony Varriale

What was asked of the account manager?

tv
- Original Message - 
From: Peter Rathlev pe...@rathlev.dk

To: Dirk-Jan van Helmond c-...@djvh.nl
Cc: cisco-nsp cisco-nsp@puck.nether.net
Sent: Tuesday, March 31, 2009 4:50 PM
Subject: Re: [c-nsp] 3750/3750E stack upgrade downtime?



On Tue, 2009-03-31 at 22:44 +0200, Dirk-Jan van Helmond wrote:
I've asked my accountmanager @Cisco, so you please ask yours. Maybe if  
we ask kind enough, they will think about it ;)


Yeah, that usually works like a charm. Remember BFD for SVIs? ;-)

Regards,
Peter




Re: [c-nsp] L2TPv3 password keeps changing

2009-03-31 Thread Jared Gillis
I'm seeing this behavior as well on a 7204VXR, and Google only turns up two
threads on c-nsp, neither with replies.
Is this expected? Is there a workaround?

Lars Lystrup Christensen wrote:
  
 
 Hi all,
 
  
 
 When configuring L2TPv3 on one of our routers, I've noticed that the
 password keeps changing all the time, even though the configuration has
 not been altered.
 
  
 
 The router is a 1811 running 12.4(6)T11 Advanced IP Services.
 
 __
 
 Med venlig hilsen / Kind regards
 
 Lars Lystrup Christensen 
 Director of Engineering, CCIE(tm) #20292
 
 


-- 
Jared Gillis - ja...@corp.sonic.net   Sonic.net, Inc.
Network Operations2260 Apollo Way
707.522.1000 (Voice)  Santa Rosa, CA 95407
707.547.3400 (Support)http://www.sonic.net/


Re: [c-nsp] 3750/3750E stack upgrade downtime?

2009-03-31 Thread Peter Rathlev
  On Tue, 2009-03-31 at 22:44 +0200, Dirk-Jan van Helmond wrote:
  I've asked my accountmanager @Cisco, so you please ask yours. Maybe if  
  we ask kind enough, they will think about it ;)

 Tuesday, March 31, 2009 4:50 PM, Peter Rathlev wrote:
  Yeah, that usually works like a charm. Remember BFD for SVIs? ;-)

On Tue, 2009-03-31 at 17:48 -0500, Tony Varriale wrote:
 What was asked of the account manager?

We asked if we could have BFD for SVIs in SXH (and later). The AM said he
would get back to us on it, but we haven't seen BFD for SVIs yet.

(I assume the question was for me.)

Regards,
Peter




Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Ben Steele
Yes, you can use the WS-670x in the 7600 with an RSP; I have a couple of
chassis running this at the moment. Granted, mine are 6704 10GEs (one with
a DFC), but I can't see a 6708 not working either...
7600#sh mod
Mod Ports Card Type  Model  Serial
No.
--- - -- --
---
  1    4  CEF720 4 port 10-Gigabit Ethernet      WS-X6704-10GE  SERIAL
  2    4  CEF720 4 port 10-Gigabit Ethernet      WS-X6704-10GE  SERIAL
  3   48  CEF720 48 port 10/100/1000mb Ethernet  WS-X6748-GE-TX SERIAL
  5    2  Route Switch Processor 720 (Active)    RSP720-3CXL-GE SERIAL

On Wed, Apr 1, 2009 at 2:54 AM, Geoffrey Pendery ge...@pendery.net wrote:

 The stuff we've been reading (look at Supervisor Engines Supported
 on the data sheets for Cisco Catalyst 6500 Series 10 Gigabit Ethernet
 Interface Modules, or browse the line cards for the 7600, or go into
 Configurator tool) claims that the RSP 720 won't support the X6704 or
 X6708 10 Gig LAN cards, only the SIP/SPA/ES WAN type cards.

 I don't mean to kick off a big 6500 vs 7600 storm again, but does
 anyone know if this is incorrect?
 Can we buy a new 7609-S chassis, put a new RSP 720 in it, put 7600 IOS
 on that Sup, then plug in a WS-X6708-10G-3C and have it work?


 -Geoff




Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Jon Wolberg
How much RAM do you have in your 6704? I have some too, running in an RSP
chassis without issues, and just got a new one. It refused to push the FIB
to the DFC and blew up due to low memory. Our vendor put only a 256MB stick
of RAM in this card, where they usually fit 1GB.

Other than that, I haven't had any issues.


Jon Wolberg
Operations Manager
PowerVPS / Defender Hosting
Defender Technologies Group, LLC.


- Original Message -
From: Ben Steele illcrit...@gmail.com
To: Geoffrey Pendery ge...@pendery.net
Cc: cisco-nsp@puck.nether.net
Sent: Tuesday, March 31, 2009 8:53:14 PM GMT -05:00 US/Canada Eastern
Subject: Re: [c-nsp] 10GE card for 7609

Yes you can use the WS-670x in the 7600 with an RSP, I have a couple of
chassis with this at the moment, given they are the 6704(one with DFC)
10GE's but I can't see a 6708 not working either...


Re: [c-nsp] 10GE card for 7609

2009-03-31 Thread Ben Steele
1GB on the DFC, 256MB definitely wouldn't cut it for us.

On Wed, Apr 1, 2009 at 11:30 AM, Jon Wolberg j...@defenderhosting.comwrote:

 How much RAM do you have in your 6704?  I have some too running in a RSP
 without issues and just got a new one.  It refused to push the FIB to the
 DFC and blew up due to low memory.  Our vendor only put a 256MB stick of RAM
 in this card when they usually have 1GB.

 Other than that, I haven't had any issues.




[c-nsp] BGP convergence

2009-03-31 Thread arnoldus Subiyanto

Hello,

I'm a new member here. My name is Aditya, from Bali, Indonesia.

I want to conduct research examining the speed of BGP convergence. Does
anyone know of software that can be used to view the BGP routing table
update process? The things I want to measure are:
 1. Table size;
 2. Memory used;
 3. Peering speed;
 4. AS-path length
Or does a Cisco router have commands to show these?

One more thing: is there a standard convergence speed expected of a BGP
router?
Thanks for your help.
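One rough way to quantify convergence from logged data (a sketch, not a
standard method: it simply measures the window between the first and last
UPDATE timestamps you collected, e.g. from debug output or a route
collector; the log lines below are hypothetical):

```python
from datetime import datetime

def convergence_window(timestamps):
    """Given BGP UPDATE arrival times, return seconds from first to last."""
    ts = sorted(timestamps)
    return (ts[-1] - ts[0]).total_seconds()

# Hypothetical timestamps extracted from a router's BGP debug log
log = [
    "2009-03-31 10:00:00",
    "2009-03-31 10:00:07",
    "2009-03-31 10:02:35",
]
times = [datetime.strptime(s, "%Y-%m-%d %H:%M:%S") for s in log]
print(convergence_window(times))  # 155.0 seconds
```

Table size, AS-path lengths, and memory use can be read directly on a Cisco
router with commands like `show ip bgp summary` and
`show ip bgp neighbors <peer> received-routes`.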
