Re: [c-nsp] clue-bat requested for v6 loopback into IGP

2009-12-18 Thread Steve Bertrand
Steve Bertrand wrote:
 After a long day, I'm certain that I'm missing something simple. I'm
 trying to get a loopback address advertised into OSPF, after a direct
 ptp setup has already been established ( I can ping6 from ptp interface
 to ptp interface ).

...

 ps. usually things just 'click' after I send out a public message, so
 here's to trying ;)

Thanks to all who replied. I did get it, and what I missed was a simple:

(config-subif)#ipv6 ospf 1 area 0.0.0.0

...on the point-to-point interface of the router that needs to advertise the
loopback address.
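
For the archives, a minimal sketch of the IOS side (interface names are hypothetical; the process, area and loopback address match this thread):

ipv6 unicast-routing
!
interface Loopback0
 ipv6 address 2607:f118:1::ff1/128
 ipv6 ospf 1 area 0.0.0.0
!
interface FastEthernet0/0.97
 ! ptp addressing already in place; this was the missing line:
 ipv6 ospf 1 area 0.0.0.0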

On the former-lacking Quagga box:

O* 2607:f118:1::ff1/128 [110/1] via fe80::215:faff:fe1d:dd40, em2.97,
00:00:15

Cheers!

Steve


Re: [c-nsp] Port channel bug in SXI3

2009-12-18 Thread David Hughes

This now has a bug ID associated with it.  We've got the same problem on SXI2 
and SXI3.  For anyone interested, the Bug ID is CSCtd93384.
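
For anyone wanting to check their own boxes, the config/state mismatch (described in the quoted message below) shows up when you compare a member and the Po directly; the interface numbers here are just examples:

show running-config interface GigabitEthernet1/1   ! a member of the Po
show running-config interface Port-channel1
show interfaces Port-channel1
show etherchannel summary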


David
...


On 15/12/2009, at 11:59 AM, David Hughes wrote:

 Hi
 
 Since moving to SXI3 we've seen issues with port channels. Problems such as
 the physical interfaces and port channel config getting out of sync. A sh
 run int on a member of the Po will say it's shutdown, but a sh run int on
 the Po itself shows it's up (and a sh int does too). It's not impacting
 the operation of the box, but it's confusing the hell out of some of the
 engineers having to work on them.



Re: [c-nsp] Egress QoS on FE links with less than 100Mbps speeds

2009-12-18 Thread Marian Ďurkovič
On Fri, 18 Dec 2009 10:16:08 +1000, Brad Henshaw wrote
 Flow control comes with its own set of challenges, however, such as
 varied support across vendors and models, and the fact that it's almost
 never QoS-aware in the kind of edge switches you're using.

Flow control doesn't need to be QoS-aware in this scenario.

On the switch side, it's enough if it supports plain RX flow control,
i.e. flowcontrol receive [desired|on]. The wireless link can then send
pause frames to automatically slow the switch port down to the real
available bandwidth, and the output buffering / QoS configuration on the
switch port is applied as expected.
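
A minimal sketch of the switch-port side, assuming the edge switch supports
receive flow control on the port in question (the interface name and the QoS
trust statement are assumptions; only the flowcontrol line comes from the
above):

mls qos
!
interface FastEthernet0/24
 description to wireless ptp radio (hypothetical)
 flowcontrol receive desired
 mls qos trust dscp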


 With kind regards,

 M.


Re: [c-nsp] Data Center switch replacement

2009-12-18 Thread Gert Doering
Hi,

On Thu, Dec 17, 2009 at 08:42:49PM -0500, ch...@lavin-llc.com wrote:
 adherence to our standard has led to two problems. First, several servers
 didn't get cabled with two connections. Second, the folks who manage the
 servers have challenges with the NIC configurations. So while we expect
 many of the servers can sustain the loss of one NIC, we have several that
 we know of, and many that we may not know of, that will lose network
 connectivity as we flip the connection to the new switch.

Now that's a good opportunity to clean up broken server configurations and
connections.

If it's meant to be redundant, and it isn't, then it's not the network's
fault if it breaks.  Go and fix it!

gert
-- 
USENET is *not* the non-clickable part of WWW!
   //www.muc.de/~gert/
Gert Doering - Munich, Germany g...@greenie.muc.de
fax: +49-89-35655025                        g...@net.informatik.tu-muenchen.de



Re: [c-nsp] Data Center switch replacement

2009-12-18 Thread Pavel Skovajsa
I second that.
As a rule of thumb, in all migrations in production environments it is always
much better to go with the step-by-step approach (if possible) and not do
the tempting big-bang implementation.

Yes, it is true this will be more costly - more cabling, more rack space,
more management around it, more man-hours of work...more spreadsheets - but
it is quite easy to build a business case around it, as some servers just
NEED to be up and you cannot risk too much.

Also, if you have a tight change process, it provides an easy way to
explain to management that the backout procedure is straightforward - replug
the server NIC into the previous port.

While doing migrations of servers it is always better to have
server/application personnel checking each server, as some
applications/OSes/drivers might not like the replugging (especially when in
the middle of something) and might decide to crash/kill/destroy... For
example, we had experience with teaming NIC drivers that decided to shut the
whole team down as soon as something happened to one of the NICs - and we
found this out only during the replugging.
Also - nobody is perfect, especially in the inter-tower area, where the
server people think that the network guys are responsible for their NIC
settings, so we usually find misconfigured NICs - no teaming setup,
incorrect teaming modes, etc. - so going step-by-step is always better.

Hope it helps,
-pavel skovajsa

On Fri, Dec 18, 2009 at 3:42 AM, Randy McAnally r...@fast-serv.com wrote:


  How about you individually move each connection over to the secondary
  switch one at a time.  This should only be a 30 second downtime
  window per port, I'd think?  Once you've migrated everybody off of
  the primary switch, pull it, upgrade it and then move everybody back
  one-by-one?  This would minimize everybody's downtime and I think
  would go over better with your clients.  Plus, you can drag out the
  upgrade over time rather than an all or none scenario.

 Agreed.  What if something goes wrong or takes longer than expected --
 wouldn't you like to know by the time you've moved the first cable and not
 after the original switch is completely offline and de-racked?



Re: [c-nsp] 12.2SB or 12.2SRC/SRD on 7200?

2009-12-18 Thread Paolo Lucente
Hi,

On Fri, Dec 18, 2009 at 01:25:10PM +0800, Mark Tinka wrote:

 The EoS/EoL announcement for SRC just went out yesterday. 
 Recommended migration plan is now SRD and SRE (when it does 
 come out).

Well, SRE has already been out for a few weeks now. But whether it
can be considered for deployment is a different story, i.e. on a 7600
after 6 days of no-frills MPLS/IS-IS/BGP (just a handful of peers):

[ ... ]
Dec 8 21:02:37 -xx-xxx- 636: Dec 8 20:02:31.868 UTC: 
%BGP-4-BGP_OUT_OF_MEMORY: BGP resetting because of memory exhaustion.
Dec 8 21:02:42 -xx-xxx- 637: Dec 8 20:02:39.212 UTC: %BGP-5-ADJCHANGE: 
neighbor xxx.xxx.xx.xxx Down No memory 
[ ... ]

We'll see what comes out of TAC; for now I need to cry a bit more to
get 4-byte ASN support on the 7600; on the 7200, people in the SP arena
can usually fall back to a recent 12.0S.
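
For reference, the kind of standard show commands one might use to watch the
memory drain away while such a case is open (nothing SRE-specific here):

show memory statistics
show processes memory sorted | include BGP
show ip bgp summary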

Cheers,
Paolo



Re: [c-nsp] Serial link CTS=down link UP

2009-12-18 Thread Marcelo Zilio
Hi,

The debug keeps showing the following messages. I don't think it's much help.

Router#debug serial interface
Router#
83: Dec 18 08:53:17.521 BST: Serial0/1/0(out): StEnq, myseq 61, yourseen 60, DTE up
84: Dec 18 08:53:17.533 BST: Serial0/1/0(in): Status, myseq 61, pak size 19
Router#
85: Dec 18 08:53:27.521 BST: Serial0/1/0(out): StEnq, myseq 62, yourseen 61, DTE up
86: Dec 18 08:53:27.537 BST: Serial0/1/0(in): Status, myseq 62, pak size 14
Router#
87: Dec 18 08:53:37.521 BST: Serial0/1/0(out): StEnq, myseq 63, yourseen 62, DTE up
88: Dec 18 08:53:37.537 BST: Serial0/1/0(in): Status, myseq 63, pak size 14

As far as I could see, CTS is always down; it is not flapping.

I'm talking to the Service Provider guys and will let you know the results.
Thanks for all the responses!
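
For the record, a couple of standard physical-layer checks that might help
narrow this down while the provider investigates (interface name as above):

show controllers serial 0/1/0   ! cable type and clock source
show frame-relay lmi            ! LMI status enquiries, errors and timeouts
show frame-relay pvc            ! per-DLCI drops and FECN/BECN counters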
On Thu, Dec 17, 2009 at 4:18 PM, Michael K. Smith - Adhost 
mksm...@adhost.com wrote:

   -Original Message-
  From: cisco-nsp-boun...@puck.nether.net [mailto:cisco-nsp-
  boun...@puck.nether.net] On Behalf Of Marcelo Zilio
  Sent: Thursday, December 17, 2009 10:04 AM
  To: cisco-nsp@puck.nether.net
  Subject: [c-nsp] Serial link CTS=down link UP
 
  Hi,
 
  Has anyone seen this on serial interfaces before?
  The link is UP and traffic is going through; however, the router shows
  CTS=down, along with a lot of CRCs/input errors.
  It doesn't make sense to me that the parameter which should indicate the
  link is ready to go is DOWN while there is traffic on it.
  Users are complaining that some applications are slow.
 
  The router is a Cisco 2811 IOS 12.4(15)T10.
 
  Router#sh int s0/1/0
  Serial0/1/0 is up, line protocol is up
    Hardware is GT96K Serial
    MTU 1500 bytes, BW 256 Kbit/sec, DLY 2 usec,
       reliability 255/255, txload 40/255, rxload 42/255
    Encapsulation FRAME-RELAY IETF, loopback not set
    Keepalive set (10 sec)
    CRC checking enabled
    LMI enq sent  48, LMI stat recvd 48, LMI upd recvd 0, DTE LMI up
    LMI enq recvd 0, LMI stat sent  0, LMI upd sent  0
    LMI DLCI 0  LMI type is ANSI Annex D  frame relay DTE  segmentation inactive
    FR SVC disabled, LAPF state down
    Broadcast queue 0/64, broadcasts sent/dropped 7/0, interface broadcasts 0
    Last input 00:00:00, output 00:00:00, output hang never
    Last clearing of show interface counters 00:07:55
    Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
    Queueing strategy: dual fifo
    Output queue: high size/max/dropped 0/256/0
    Output queue: 0/128 (size/max)
    30 second input rate 43000 bits/sec, 68 packets/sec
    30 second output rate 41000 bits/sec, 78 packets/sec
       34746 packets input, 2956769 bytes, 0 no buffer
       Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
       602 input errors, 602 CRC, 433 frame, 107 overrun, 0 ignored, 323 abort
       43237 packets output, 3308125 bytes, 0 underruns
       0 output errors, 0 collisions, 0 interface resets
       0 unknown protocol drops
       0 output buffer failures, 0 output buffers swapped out
       0 carrier transitions
       DCD=up  DSR=up  DTR=up  RTS=up  *CTS=down*
 
 With all those errors I would say you have a physical layer problem or a
 clocking issue.  Perhaps the CTS is flapping between up and down and
 you're catching it on the down.  What happens if you debug the
 interface?

 Regards,

 Mike



Re: [c-nsp] 12.2SB or 12.2SRC/SRD on 7200?

2009-12-18 Thread Gert Doering
Hi,

On Fri, Dec 18, 2009 at 11:01:43AM +, Paolo Lucente wrote:
 We'll see what comes out of TAC; for now I need to cry a bit more to
 get 4-byte ASN support on the 7600; on the 7200, people in the SP arena
 can usually fall back to a recent 12.0S.

Haha.  No IPv6 in 12.0S for 7200.

gert
-- 
USENET is *not* the non-clickable part of WWW!
   //www.muc.de/~gert/
Gert Doering - Munich, Germany g...@greenie.muc.de
fax: +49-89-35655025                        g...@net.informatik.tu-muenchen.de



Re: [c-nsp] Data Center switch replacement

2009-12-18 Thread chris
 Hi,

 On Thu, Dec 17, 2009 at 08:42:49PM -0500, ch...@lavin-llc.com wrote:
 adherence to our standard has led to two problems. First, several servers
 didn't get cabled with two connections. Second, the folks who manage the
 servers have challenges with the NIC configurations. So while we expect
 many of the servers can sustain the loss of one NIC, we have several that
 we know of, and many that we may not know of, that will lose network
 connectivity as we flip the connection to the new switch.

 Now that's a good opportunity to clean up broken server configurations and
 connections.

 If it's meant to be redundant, and it isn't, then it's not the network's
 fault if it breaks.  Go and fix it!

 gert
 --

Thanks to everyone who responded. I appreciate learning how much several
of us have in common. I especially appreciated those who shared stories
about similar challenges with server and NIC settings for what should be a
redundant design with Primary/Secondary configurations.

I'll update my recommended options to include a third scenario:
1. Complete blackout: power down each switch and replace it with the new
one.
2. Eat the cabling/rack/etc. cost, stand up the new switches, and migrate
the connections in one night (performing some due diligence ahead of
time), hoping all servers are properly configured for a Primary/Secondary
network connection.
3. Eat the cabling/rack/etc. cost, stand up the new switches, and migrate
slowly over a period of several maintenance windows, hoping we don't have
any more line card failures during the extended migration period.

Much appreciated,
-chris




Re: [c-nsp] Data Center switch replacement

2009-12-18 Thread Brian Spade
Hi,

On Thu, Dec 17, 2009 at 6:42 PM, Randy McAnally r...@fast-serv.com wrote:


  How about you individually move each connection over to the secondary
  switch one at a time.  This should only be a 30 second downtime
  window per port, I'd think?  Once you've migrated everybody off of
  the primary switch, pull it, upgrade it and then move everybody back
  one-by-one?  This would minimize everybody's downtime and I think
  would go over better with your clients.  Plus, you can drag out the
  upgrade over time rather than an all or none scenario.

 Agreed.  What if something goes wrong or takes longer than expected --
 wouldn't you like to know by the time you've moved the first cable and not
 after the original switch is completely offline and de-racked?


+2

For example: install the new switch, make its connections to the existing AGG
layer but also interconnect it to your existing ACC layer, pre-configure
port assignments from current access switch 2 on new switch 2, then move
connections one at a time off of your current access switch 2 to new switch
2. Remove old access switch 2, install new access switch 1,
rinse-and-repeat.
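
A rough sketch of what pre-configuring a port assignment on the new switch
might look like (interface names, VLAN and description are made up; copy
whatever the old port actually carried):

interface GigabitEthernet0/10
 description server42-nic2 (was old-acc2 Fa0/10)
 switchport mode access
 switchport access vlan 100
 spanning-tree portfast
 no shutdown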

Single-homed devices will see downtime of however long it takes you to
physically move the cable plus your CAM timeout.  If you can do this fast
enough you might not even reset some TCP connections.

However, if there is a lack of infrastructure for this... fix the servers or
determine how long the hosts will be down :-)

/bs